We Forget What We Don’t Use April 17, 2018

Posted by Peter Varhol in Software platforms, Strategy.

Years ago, I was a pilot.  SEL, as we said, single-engine land.  Once during my instruction, we spent about an hour on what my instructor called recovery from unusual attitudes.  I went “under the hood,” putting on a plastic device that blocked my vision while he placed the plane in various situations.  Then he would lift the hood just enough that I could see only the instruments.

I became quite good at this by focusing on two instruments, the turn-and-bank indicator and the airspeed indicator.  Based on those two, I was able to recover to straight and level flight within seconds.

My instructor realized what I was doing, and he was a lot smarter than I was.  The next time, my shortcut didn’t work; it actually made things worse.  I panicked, and in a real-life scenario I may well have crashed.

Today, I have a presentation I generically call “What Aircrews Can Teach IT” (the title changes based on the audience makeup).  It is focused on Crew Resource Management, a structured way of working and communicating so that responsibilities are understood and concerns are voiced.

But there is more that aircrews can teach us.  We panic when we have not seen a situation before.  Aircrews do too.  That’s why they practice in a simulator, with a check pilot, hundreds of hours a year.  That’s why we have few commercial airline accidents today.  When one does happen, it is almost always because of crew error, because the crew was unfamiliar with the situation.

It’s the same in IT.  If we are faced with a situation we haven’t encountered before, chances are we will react emotionally and incorrectly to it.  The consequences may not be a fatal accident, but we can still do better.

I preach situational awareness in all aspects of life.  We need to understand our surroundings, pay attention to people and events that may affect us, and in general be prepared to react based on our reading of a situation.

In many professional jobs, we’ve forgotten about the value of training.  I don’t mean going to a class; I mean practicing scenarios, again and again, until they become second nature.  That’s what aircrews do.  And that’s what soldiers do.  And when we have something on the line, that is more valuable than anything else we could be doing.  And eventually it will pay off.

Revisiting Net Neutrality December 14, 2017

Posted by Peter Varhol in Software platforms, Technology and Culture.

I wrote on this about three years ago.  As it seems that so-called net neutrality may be reaching the end of the road, at least for the near term, it is worthwhile cutting through the crap to examine what is really going on.

You know, I think that net neutrality has merits.  It certainly has marketing on its side; according to CNN, it means “to keep the internet open and fair.”

Ah, it doesn’t, and that is the problem.  It means that streaming services like Netflix and Amazon can hog bandwidth with impunity, and without paying a premium.  I am certain that CNN has a business reason to maintain net neutrality, and it is unfortunate that they are letting that business reason leak into their reporting.

The Internet is a finite resource.  There are some companies that use a great deal of it.  Should they pay more for doing so?  Perhaps, but the “net neutrality” supporters don’t want to have that conversation.  I say let’s talk about it, but the news establishment doesn’t want to do so.  They give it a high-sounding label, and proclaim it good.  The ones who oppose it are bad.  Case closed.

Net neutrality does (maybe) mean that the Internet is basically a utility, like electricity or water.  That isn’t necessarily a bad thing, but I am not sure it reflects reality.  Those companies, mostly the telecom folks, have invested billions of dollars, and are not at all guaranteed a profit.  It is a risk, and when individuals or companies take risks, they succeed or fail according to the market.  Yet the likes of CNN are treating them like your electric utility, guaranteed by the state Public Utilities Commission to earn a set amount of money.  That doesn’t reflect their reality at all.

I think that net neutrality is ultimately the way to go.  But it supports some businesses at the expense of others.  Just like the alternative.

But I have to ask, CNN, why are you afraid to even have the conversation?  You have declared net neutrality to be The Way, and you will brook no further discussion.

Update:  And now the CNN headline reads:  End of the Internet as we know it.  Can we get any more biased, CNN?

SpamCast on Machine Learning September 20, 2017

Posted by Peter Varhol in Software platforms.

Not really spam, of course, but Software Process and Measurement, the weekly podcast from Tom Cagley, who I met at the QUEST conference this past spring.  This turned out surprisingly well, and Tom posted it this past weekend.  If you have a few minutes, listen in.  It’s a good introduction to machine learning and the issues of testing machine learning systems, as well as skills needed to understand and work with these systems.  http://spamcast.libsyn.com/spamcast-460-peter-varhol-machine-learning-ai-testing-careers

What Brought About our AI Revolution? July 22, 2017

Posted by Peter Varhol in Algorithms, Software development, Software platforms.

Circa 1990, I was a computer science graduate student, writing forward-chaining rules in Lisp for AI applications.  We had Symbolics Lisp workstations, but I did most of my coding on my Mac, using ExperLisp or the wonderful XLisp written by friend and colleague David Betz.

Lisp was convoluted to work with, and in general rules-based systems required an expert to be available to develop the rules.  It turns out that it’s very difficult for any human expert to describe in rules how they arrived at a particular answer.  And those rules generally couldn’t take into account any data that might help the system learn and refine itself over time.

As a result, most rules-based systems fell by the wayside.  While they could work for discrete problems where the steps to a conclusion were clearly defined, they weren’t very useful when the problem domain was ambiguous or there was no clear yes or no answer.
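For the curious, forward chaining itself is simple to sketch, even if the systems of that era were far richer.  A rule fires when all of its premises are present in working memory, asserting its conclusion as a new fact, and the engine loops until nothing more can be derived.  The diagnostic rules below are invented purely for illustration:

```python
# A minimal forward-chaining sketch.  Each rule is (premises, conclusion):
# when every premise is already in working memory, the conclusion is added.

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # rule fires, conclusion asserted
                changed = True
    return facts

# Hypothetical diagnostic rules, purely for illustration.
rules = [
    ({"engine_cranks", "no_start"}, "check_fuel"),
    ({"check_fuel", "fuel_ok"}, "check_spark"),
]

derived = forward_chain({"engine_cranks", "no_start", "fuel_ok"}, rules)
```

Note the weakness described above: every one of those premises and conclusions had to come out of a human expert’s head, and no amount of data changes the rules afterward.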

A couple of years later I moved on to working with neural networks.  Neural networks require data for training purposes.  These systems are made up of layered networks of equations (I used mostly fairly simple polynomial expressions, but sometimes the algorithms can get pretty sophisticated) that adapt based on known inputs and outputs.

Neural networks have the advantage of obtaining their expertise through the application of actual data.  However, due to the multiple layers of algorithms, it is usually impossible to determine how the system arrives at the answers it does.
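The layered networks of equations described above can be sketched in a few dozen lines.  This is a toy, not anything I actually used back then: a two-input, two-hidden-unit, one-output sigmoid network trained by backpropagation on XOR, the classic problem a single layer cannot solve.  The shape, learning rate, and epoch count are all illustrative.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 2 hidden units -> 1 output; each row's last weight is a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def predict(x):
    """Forward pass: layered equations applied to the inputs."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    return sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2]), h

def train_step(x, target, lr=0.5):
    """One backpropagation update; returns the squared error before it."""
    y, h = predict(x)
    d_o = (y - target) * y * (1 - y)          # output-layer delta
    for j in range(2):                        # hidden-layer deltas
        d_h = d_o * w_o[j] * h[j] * (1 - h[j])
        w_h[j][0] -= lr * d_h * x[0]
        w_h[j][1] -= lr * d_h * x[1]
        w_h[j][2] -= lr * d_h
    w_o[0] -= lr * d_o * h[0]
    w_o[1] -= lr * d_o * h[1]
    w_o[2] -= lr * d_o
    return (y - target) ** 2

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR

# lr=0 measures the loss without changing any weights.
loss_before = sum(train_step(x, t, lr=0.0) for x, t in data)
for _ in range(3000):
    for x, t in data:
        train_step(x, t)
loss_after = sum((predict(x)[0] - t) ** 2 for x, t in data)
```

The point about opacity holds even at this scale: after training, the individual weights say nothing readable about *why* the network answers as it does.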

Recently I presented on machine learning at the QUEST Conference in Chicago and at Expo:QA in Spain.  In interacting with the attendees, I realized something.  While some data scientists tend to use more complex algorithms today, the techniques involved in neural networks for machine learning are pretty much the same as they were when I was doing it, now 25 years ago.

So why are we having the explosion in machine learning, AI, and intelligent systems today?  When I was asked that question recently, I realized that there was only one possible answer.

Computing processing speeds continue to follow Moore’s Law (more or less), especially when we’re talking floating point SIMD/parallel processing operations.  Moore’s Law doesn’t directly relate to speed or performance, but there is a strong correlation.  And processors today are now fast enough to execute complex algorithms with data applied in parallel.  Some, like Nvidia, have wonderful GPUs that turn out to work very well with this type of problem.  Others, like Intel, have released an entire processor line dedicated to AI algorithms.

In other words, what has happened is that the hardware caught up to the software.  The software (and mathematical) techniques are fundamentally the same, but now the machine learning systems can run fast enough to actually be useful.

Why I Have to Keep Task Manager Running in Windows July 2, 2017

Posted by Peter Varhol in Software platforms.

Over the last year or so, my daily personal laptop has been running slower and slower.  For a variety of reasons, Windows performance and reliability tend to degrade over time.  Memory especially, but disk and CPU as well, have been pegging at 100 percent all too frequently.  I suppose I could wipe the system and start again from scratch, but that’s also a good indication that it’s time to get a new laptop.

I’ve upgraded to a new laptop, a midrange Core i5 quad-core system with 8 GB of RAM, running Windows 10.  That will fix my memory, CPU and disk problems, I thought.

Wrong.  My system still hung regularly.  So I started investigating in more detail.

Chrome, for one very big reason.  I will typically keep four or five tabs open, and it doesn’t take long for one or two of them to take up well over 2GB of memory.  And I’m talking about very commonly used sites, like weather.com, fitbit.com, or cnn.com.

While I’ve read several reasons (some of which are contradictory or don’t reflect my situation) why Chrome consumes memory like a drunken sailor, there doesn’t seem to be a whole lot to do about it.  Some talk about disabling add-ins; the only add-in I have running is Flash, and that is still required by many commercial websites (and crashes just as frequently).

Chrome has also been known to consume huge amounts of CPU and disk bandwidth.  I haven’t really read anything actionable about what to do here.

So I keep Task Manager open.  When a Chrome tab starts to misbehave, there is no alternative but to kill the process.
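The by-hand Task Manager routine amounts to a simple rule: flag, then kill, any process whose memory use crosses a threshold.  Here is a sketch of just the flagging logic; the process names and numbers are made up, and a real watchdog would read live process data (for example, via a third-party library such as psutil) and then terminate the offender.

```python
# Flag processes over a memory threshold, the way I do by eye in Task
# Manager.  The snapshot below is hypothetical, for illustration only.

MEMORY_LIMIT_MB = 2048  # the ~2 GB point at which a Chrome process gets killed

def flag_hogs(processes, limit_mb=MEMORY_LIMIT_MB):
    """Return (name, pid) for every process over the memory limit."""
    return [(p["name"], p["pid"]) for p in processes if p["mem_mb"] > limit_mb]

snapshot = [
    {"name": "chrome.exe", "pid": 4120, "mem_mb": 2310},
    {"name": "chrome.exe", "pid": 4188, "mem_mb": 180},
    {"name": "explorer.exe", "pid": 1044, "mem_mb": 95},
]

hogs = flag_hogs(snapshot)  # only the 2.3 GB Chrome process is flagged
```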

But wait!  It’s not just Chrome!  In Windows 10, there’s also a process called Microsoft Telemetry Service (and yes, it is a Windows service).  I found this service using 99 percent of my CPU on more than one occasion.  What does it do?  It sends usage information from your computer to Microsoft.  Not just error information; usage information.

It is enabled by default.  If you disable it, some of the Windows updates will re-enable it without telling you.

My very strong recommendation is to disable it and the horse it rode in on.  I guess this is what we deserve in the Facebook era, where we have no privacy.
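For the determined, the telemetry service appears in Windows 10 under the internal name DiagTrack (displayed in the Services console as Connected User Experiences and Telemetry).  Disabling it from an elevated command prompt looks something like this, with the caveat noted above that an update may quietly turn it back on:

```shell
rem Run from an elevated (administrator) command prompt.
rem Stop the telemetry service and keep it from starting at boot.
sc stop DiagTrack
sc config DiagTrack start= disabled

rem Verify: STATE should read STOPPED, START_TYPE should read DISABLED.
sc query DiagTrack
sc qc DiagTrack
```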

Gita, Carry My Groceries January 31, 2017

Posted by Peter Varhol in Software platforms, Technology and Culture.

Piaggio, the longtime manufacturer of Vespa scooters, has apparently announced Piaggio Fast Forward, a division located here in Boston and formed to design and produce a robot called Gita that will carry your groceries.

Ah, no. I realize that there is a certain segment of the population that is aged or infirm, and might need assistance with their groceries.  I feel for them, but they are few, and many might not be able to afford such a helper.

But for the vast majority of able-bodied adults among us, this smacks of sloth (as in the famed Seven Deadly Sins).  We don’t get enough exercise as it is, and this gives us yet another excuse to pass up an opportunity (only occasionally, granted) to lift and carry.

Automation generally has good effects, and advances technology and life in general. This is automation without purpose.

I hope that this “innovation” fails miserably.

Alexa, Delete My Data December 25, 2016

Posted by Peter Varhol in Software platforms, Technology and Culture.

As we become inundated this holiday season by Amazon ads for its Echo Dot voice system and Alexa intelligent assistant, I confess I remain conflicted about the potential and reality of AI technology in our lives.

To be sure, the Alexa commercials are wonderful. For those of us who grew up under the influence of George Jetson (were they really only on TV for one season?), Alexa represents the realization of something that we could only dream about for the last 50+ years.  Few of us can afford a human assistant, but the intelligent virtual assistant is a reality.  The future is now!

It’s only when you think it through that it becomes more problematic. A necessary corollary to an intelligent virtual assistant is that the assistant has enough data about you to recognize what are at times ambiguous instructions.  And because it holds that data, and current information about us, we can imagine issues with instructions like these:

“Alexa, I’m just going out for a few minutes; don’t bother setting the burglar alarm.”

“Alexa, turn the temperature down to 55 until January 15; I won’t be home.”

I’m sure that Google already has a lot of information on me. I rarely log into my Google account, but it identifies me anyway, so it knows what I search for.  And Google knows my travel photos, through Picasa.  Amazon also identifies me without logging in, but I don’t buy a lot through Amazon, so its data is less complete.  Your own mileage with these and other data aggregators may vary.

To be fair, the US government currently and in the past has been in possession of an incredible amount of information on most adults. I have held jobs and am a taxpayer; I have a driver’s license (and pilot’s license, for that matter); I am a military veteran; and I’ve held government security clearances.

I’d always believed that my best privacy protection was the fact that government databases didn’t talk to one another. The IRS didn’t know, and didn’t care, whether or not my military discharge was honorable (it was).  Yeah.  That may have been true at one time, but it is changing.  Data exchange between government agencies won’t be seamless in my lifetime, but it is heading, slowly but inexorably, in that direction.

And the commercial firms are far more efficient. Google and Facebook today know more about us than anyone might imagine.  Third party data brokers can make our data show up in the strangest places.

And lest you mistake me, I’m not saying that this is necessarily a bad thing. There are tradeoffs in every action we take.  Rather, it’s something that we let happen without thinking about it.  We can come up with all sorts of rationalizations on why we love the convenience and efficiency, but rarely ponder the other side of the coin.

I personally try to think about the implications every time I release data to a computer, and sometimes decline to do so (take that, Facebook). And in some cases, such as my writings and conference talks, I’ve made career decisions that I am well aware make more data available on me.  I haven’t yet decided on Alexa, but I am certainly not going to be an early adopter.

Update: Oh my. http://www.cnn.com/2016/12/28/tech/amazon-echo-alexa-bentonville-arkansas-murder-case-trnd/index.html

Uber Bullshit Disapproved October 28, 2016

Posted by Peter Varhol in Software platforms, Technology and Culture.

I think it’s safe to say that Uber is full of BS. In this report, it heralds plans to provide personal aircraft commuting to consumers by 2020.  The article actually treats it as serious news.  I want to giggle.

I don’t even know where to begin. The article cites regulatory issues, but it is far more than simply that.  The regulations exist largely for safety and identity purposes, and any talk of regulation has to delve into the many very good reasons behind them.  These are regulations that Uber can flout, as it has so many others.

It actually says that the costs are feasible, as long as the aircraft are self-piloting. Um, no, they aren’t.  Here’s why.

Pilots. That is the one cost that is actually manageable.  There is a plethora of 23-year-old pilots with their newly minted commercial tickets who would rather be doing this than picking up an occasional buck giving flying lessons.  They are not the expensive part of flying; they will do this for $20 an hour.  The actual manufacturing cost of the planes isn’t the gating factor either, even though the most basic new private plane goes for about a quarter of a million USD.

Where is the cost? Liability insurance.  Liability insurance makes up over 30 percent of the cost of a new private plane, which is why private planes are no longer made in the US.  All of our private aircraft come from companies in Europe, where liability laws are different, and presumably much less expensive.

I know something about flying and aviation. I also know something about the history of personal flight, thanks to my father’s 1960s-era subscriptions to Popular Science magazine.

In a larger sense, we regulate aviation because unregulated flying is, well, dangerous. Flying is a serious endeavor that does not easily lend itself to simply getting a ride.  If your plane runs out of gas or has a mechanical issue, you can’t simply pull to the side of the road.  When I got my driver’s license, the instructor said, “Now you have the right to get yourself into an accident.”  When I got my pilot’s license, it was considerably more involved and serious.  No one wanted me to get killed; it would involve too much paperwork.

And weather. Enough said.  The ability to fly under instrument conditions is an entirely different kettle of fish, both for the plane and the pilot.  It takes years for a pilot to fully comprehend flying in inclement weather.

Leave it to automation, you say? Um, no.  Ultimately, there has to be a human in the loop, and that won’t change for at least half a century, if ever.  And remember that this problem is at least a factor of ten (probably more like a factor of 100) more difficult than self-driving cars, which have the luxury of operating in only two dimensions.

I could go on further, but this is already fantasy.

I’m not sure why Uber felt the need to commission and publish such a study, but it is nonsensical.