I Don’t Need a Hero October 23, 2018

Posted by Peter Varhol in Software development, Software platforms, Strategy.

Apologies to Bonnie Tyler, but we don’t need heroes, as we have defined them in our culture.  “He’s got to be strong, he’s got to be fast, and he’s got to be fresh from the fight.”  Um, no.

Atul Gawande, author of The Checklist Manifesto, makes it clear that the heroes, those in any profession who create a successful outcome primarily on the strength of superhuman effort, don't deserve to be recognized as true heroes. In fact, we should try to avoid circumstances that appear to require a superhuman effort.

So what are heroes? We would like to believe that they exist. Myself, I am enamored with the astronauts of a bygone era, who faced significant uncertainties in pushing the envelope of technology, and accepted that their lives were perpetually on the line. But, of course, they were also the ones who believed they were better than the colleagues who sacrificed their lives, simply because they survived.

Today, according to Gawande, the heroes are those who can follow checklists in order to make sure that they don’t forget any step in a complex process.  The checklists themselves can be simple, in that they exist to prompt professionals to remember and execute seemingly simple steps that are often forgotten in the heat of crisis.

In short, Gawande believes in commercial airline pilots, such as Chesley (Sully) Sullenberger, who with his copilot Jeffrey Skiles glided their wounded plane to a ditching in the Hudson River off Midtown Manhattan.  Despite the fact that we all know Sully’s name in the Miracle on the Hudson, it was a team effort by the entire flight crew.  And they were always calm, and in control.

Today, software teams are made up of individuals, not close team members. Because they rarely work as a team, it's easy for one or more individuals to step up and fix a problem, without the help of the team.

There are several problems with that approach, however. First, if an extra effort by one person is successful, the team may not try as hard in the future, knowing that they will be bailed out of difficult situations. Second, heroics are not replicable; you can't count on them again and again in those situations. Third, the hero can't solve every problem; other members of the team will eventually be needed.

It feels good to be the hero, the one who by virtue of extreme effort fixes a bad situation.  The world loves you.  You feel like you’ve accomplished something significant.  But you’re not at all a hero if your team wasn’t there for you.

Too Many Cameras June 15, 2018

Posted by Peter Varhol in Software platforms, Strategy, Technology and Culture.

The title above is a play on the "Too Many Secrets" revelation in the 1992 movie Sneakers, in which Robert Redford's character, who has a secret or two himself, finds himself in possession of the ultimate decryption device, and everyone wants it.

Today we have too many cameras around us.  This was brought home to me rather starkly when I received an email that said:

I’ve been recording you with your computer camera and caught you <censored>.  Shame on you.  If you don’t want me to send that video to your family and employer, pay me $1000.

I paused. Did I really do <censored> in front of my computer camera? I didn't think so, but I do spend a lot of time in front of the screen. In any case, <censored> didn't quite rise to the level of blackmail concern, in my opinion, so I ignored it.

But is this scenario so completely far-fetched?  This article lists all of the cameras that Amazon can conceivably put in your home today, and in the near future, that list will certainly grow.  Other services, such as your PC vendor and security system provider, will add even more movie-ready devices.

In some ways, the explosion of cameras looking at our actions is good.  Cameras can nudge us to drive more safely, and to identify and find thieves and other bad guys.  They can help find lost or kidnapped children.

But even outside our home, they are a little creepy.  You don’t want to stop in the middle of the sidewalk and think, I’m being watched right now.  The vast majority of people simply don’t have any reason to be observed, and thinking about it can be disconcerting.

Inside our homes, I simply don't think we want them, phone and PC cameras included. I do believe that people realize it is happening, but in the short term they think the coolness of the Amazon products and the frictionless ordering from Amazon supersede any thoughts about privacy. They would rather have computers at their beck and call than think about the implications.

We need to do better than that if we want to live in an automated world.

Alexa, Phone Joe May 28, 2018

Posted by Peter Varhol in Algorithms, Software platforms, Technology and Culture.

By now, the story of how Amazon Alexa recorded a private conversation and sent the recording off to a colleague is well known. Amazon has said that the event resulted from a highly unlikely series of circumstances and should happen only very rarely. Further, it promised to try to adjust the algorithms so that it doesn't happen again, but with no guarantees, of course.

Forgive me if that doesn't make me feel better. Now, I'm not blaming Amazon, or Alexa, or the couple involved in the conversation. What this scenario should do is radically readjust our expectations of what a private conversation is. About three decades ago, there was a short-lived (I believe) reality TV show called "Children Say the Funniest Things." It turned out that most of the funniest things concerned what the children repeated from their parents.

Well, it’s not only our children that are in the room.  It’s also Internet-connected “smart” devices that can reliably digitally record our conversations and share them around the world.  Are we surprised?  We shouldn’t be.  Did we really think that putting a device that we could talk to in the room wouldn’t drastically change what privacy meant?

Well, here we are.  Alexa is not only a frictionless method of ordering products.  It is an unimpeachable witness listening to “some” conversations in the room.  Which ones?  Well, that’s not quite clear.  There are keywords, but depending on location, volume, and accent, Alexa may hear keywords where none are intended.

And it will decide who to share those conversations with, perhaps based on pre-programmed keywords.  Or perhaps based on an AI-type natural language interpretation of a statement.  Or, most concerning, based on a hack of the system.

One has to ask whether, in the very near future, Alexa recordings might be subject to a warrant in a criminal case. Guess what: it has already happened. And unintended consequences will continue to occur, and many of those consequences will become more and more public.

We may well accept that tradeoff – more and different unintended consequences in return for greater convenience in ordering things.  I’m aware that Alexa can do more than that, and that its range of capability will only continue to expand.  But so will the range of unintended consequences.

The Golden Age of Databases May 10, 2018

Posted by Peter Varhol in Architectures, Software platforms, Software tools.

Let’s face it, to most developers, databases are boring and opaque.  As long as I can create a data object to call the database and bring data into my application, I really don’t care about the underlying structures.  And many of us have an inherent bias against DBAs, for multiple reasons.  Years ago, one of my computer science graduate students made the proclamation, “I’m an engineer; I write technical applications.  I have no need for databases at all.”

I don't think this is true anymore, if it ever was. The problem was the predominance of SQL relational databases. The mathematical and logical foundation of relational databases is actually quite interesting, but from a practical standpoint, actually setting up a database, whether through E-R diagrams or another approach, is pretty cut and dried. And maintaining and performance-tuning databases can often seem like an exercise in futility.

Certainly there were other types of databases and derivative products 20 or 30 years ago.  My old company, Progress Software, still makes a mint off its OpenEdge database and 4GL environment.  Sybase PowerBuilder was popular for at least two decades, and Borland Delphi still has a healthy following.  OLAP engines were available in the 1990s, working with SQL relational databases to quickly extract and report on relational data.

But traditional relational databases have disadvantages for today's uses. They are meant to be highly reliable storage and retrieval systems. They tend to have the reliable part down pat, and there are almost universal means of reading, writing, modifying, and monitoring data in relational tables.

The world of data has changed. While the reliability and programming access of relational databases remain important in traditional enterprise applications, software has become essential in a wide variety of other areas. These include self-driving cars, financial trading, manufacturing, retail, and commercial applications in general.

Relational databases have been used in these areas, but they have limitations that are becoming increasingly apparent as we stress them in ways they weren't designed for. So instead we are seeing alternatives that specialize in a specific area of storage and retrieval. For example, the NoSQL database MongoDB, and MapReduce-style processing in general, make it possible to store large amounts of unstructured data and to quickly search and retrieve data from that storage. The open source InfluxDB provides a ready store for event-driven data, enabling applications to store and query data as a time series. Databases such as FaunaDB can be used to implement blockchain.
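
To make that concrete, here is a minimal sketch, my own illustration rather than anything from the original post, of storing and querying schemaless documents with MongoDB's Python driver. The connection string, database and collection names, and document fields are all hypothetical.

```python
# Minimal sketch of schemaless document storage with MongoDB (pymongo).
# The connection string, names, and fields below are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
events = client["demo_db"]["sensor_events"]

# Documents need not share a schema; each carries whatever fields it has.
events.insert_one({"device": "thermostat-7", "temp_c": 21.5, "tags": ["hvac", "floor2"]})
events.insert_one({"device": "door-3", "opened": True})

# Query on whatever structure happens to be present.
for doc in events.find({"device": "thermostat-7"}):
    print(doc)
```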

All of these databases can run in the cloud, or on premises.  They tend to be easy to set up and use, and you can almost certainly find one to meet your specific needs.

So as you develop your next ground-breaking application, don’t find yourself limited by a relational database.  You’re not stuck in the same rut that you were ten years ago.  Take a look at what has to be called the Golden Age of databases.

We Forget What We Don’t Use April 17, 2018

Posted by Peter Varhol in Software platforms, Strategy.

Years ago, I was a pilot. SEL, as we said: single-engine land. Once during my instruction, we spent about an hour going over what my instructor called recovery from unusual attitudes. I went "under the hood," putting on a plastic device that blocked my vision while he placed the plane in various attitudes. Then he would lift the hood just enough that I could see only the instruments.

I became quite good at this, focusing on two instruments – turn and bank, and airspeed.  Based on these instruments, I was able to recover to straight and level flight within seconds.

My instructor pilot realized what I was doing, and he was a lot smarter than I was. The next time, my shortcut didn't work; it actually made things worse. I panicked, and in a real-life scenario I might well have crashed.

Today, I have a presentation I generically call “What Aircrews Can Teach IT” (the title changes based on the audience makeup).  It is focused on Crew Resource Management, a structured way of working and communicating so that responsibilities are understood and concerns are voiced.

But there is more that aircrews can teach us. We panic when we have not seen a situation before. Aircrews do too. That's why they practice, in a simulator, with a check pilot, hundreds of hours a year. That's why we have so few commercial airline accidents today. When we do have them, it is almost always because of crew error, usually because the crew is unfamiliar with the situation.

It’s the same in IT.  If we are faced with a situation we haven’t encountered before, chances are we will react emotionally and incorrectly to it.  The consequences may not be a fatal accident, but we can still do better.

I preach situational awareness in all aspects of life.  We need to understand our surroundings, pay attention to people and events that may affect us, and in general be prepared to react based on our reading of a situation.

In many professional jobs, we've forgotten about the value of training. I don't mean going to a class; I mean practicing scenarios, again and again, until they become second nature. That's what aircrews do. And that's what soldiers do. When something important is on the line, that practice is more valuable than anything else we could be doing. And eventually it will pay off.

Revisiting Net Neutrality December 14, 2017

Posted by Peter Varhol in Software platforms, Technology and Culture.

I wrote on this about three years ago.  As it seems that so-called net neutrality may be reaching the end of the road, at least for the near term, it is worthwhile cutting through the crap to examine what is really going on.

You know, I think that net neutrality has merits.  It certainly has marketing on its side; according to CNN, it means “to keep the internet open and fair.”

Ah, it doesn't, and that is the problem. It means that streaming services such as Netflix and Amazon can hog bandwidth with impunity, and without paying a premium. I am certain that CNN has a business reason to maintain net neutrality, and it is unfortunate that they are letting that business reason leak into their reporting.

The Internet is a finite resource.  There are some companies that use a great deal of it.  Should they pay more for doing so?  Perhaps, but the “net neutrality” supporters don’t want to have that conversation.  I say let’s talk about it, but the news establishment doesn’t want to do so.  They give it a high-sounding label, and proclaim it good.  The ones who oppose it are bad.  Case closed.

Net neutrality does (maybe) mean that the Internet is basically a utility, like electricity or water.  That isn’t necessarily a bad thing, but I am not sure it reflects reality.  Those companies, mostly the telecom folks, have invested billions of dollars, and are not at all guaranteed a profit.  It is a risk, and when individuals or companies take risks, they succeed or fail according to the market.  Yet the likes of CNN are treating them as your electric utility, guaranteed to make a set amount of money from the state Public Utilities Commission.  That doesn’t reflect their reality at all.

I think that net neutrality is ultimately the way to go. But it supports some businesses at the expense of others. Just like the alternative.

But I have to ask, CNN, why are you afraid to even have the conversation?  You have declared net neutrality to be The Way, and you will brook no further discussion.

Update: And now the CNN headline reads: "End of the Internet as we know it." Can we get any more biased, CNN?

SpamCast on Machine Learning September 20, 2017

Posted by Peter Varhol in Software platforms.

Not really spam, of course, but Software Process and Measurement, the weekly podcast from Tom Cagley, whom I met at the QUEST conference this past spring. This turned out surprisingly well, and Tom posted it this past weekend. If you have a few minutes, listen in. It's a good introduction to machine learning and the issues of testing machine learning systems, as well as the skills needed to understand and work with these systems. http://spamcast.libsyn.com/spamcast-460-peter-varhol-machine-learning-ai-testing-careers

What Brought About our AI Revolution? July 22, 2017

Posted by Peter Varhol in Algorithms, Software development, Software platforms.

Circa 1990, I was a computer science graduate student, writing forward-chaining rules in Lisp for AI applications. We had Symbolics Lisp workstations, but I did most of my coding on my Mac, using ExperLisp or the wonderful XLisp written by friend and colleague David Betz.

Lisp was convoluted to work with, and in general rules-based systems required that an expert be available to develop the rules. It turns out that it's very difficult for any human expert to describe in rules how they arrived at a particular answer. And those rules generally couldn't take into account any data that might help the system learn and refine over time.

As a result, most rules-based systems fell by the wayside.  While they could work for discrete problems where the steps to a conclusion were clearly defined, they weren’t very useful when the problem domain was ambiguous or there was no clear yes or no answer.
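
For readers who never saw one of these systems, here is a minimal sketch of what forward chaining amounts to: keep firing rules whose conditions match the current set of facts until nothing new can be derived. It is written in Python rather than Lisp, and the facts and rules are made up purely for illustration.

```python
# Minimal forward-chaining sketch (illustrative only; facts and rules are made up).
# Each rule maps a set of required facts to a new fact it asserts.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "recent_travel"}, "recommend_test"),
]

def forward_chain(facts, rules):
    """Fire rules whose conditions are satisfied until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "recent_travel"}, rules))
# The derived facts include 'possible_flu' and 'recommend_test'.
```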

A couple of years later I moved on to working with neural networks.  Neural networks require data for training purposes.  These systems are made up of layered networks of equations (I used mostly fairly simple polynomial expressions, but sometimes the algorithms can get pretty sophisticated) that adapt based on known inputs and outputs.
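
As a rough illustration of that idea, here is a minimal sketch of a two-layer network trained by gradient descent. This is my own toy example; the data, layer sizes, and activation functions are arbitrary, not the polynomial expressions mentioned above.

```python
# Tiny two-layer neural network trained by gradient descent (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                                   # 100 samples, 3 made-up features
y = (X[:, 0] + X[:, 1] ** 2 > 0).astype(float).reshape(-1, 1)   # arbitrary binary target

W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                    # forward pass, hidden layer
    p = sigmoid(h @ W2 + b2)                    # forward pass, predicted probability
    grad_out = (p - y) / len(X)                 # gradient of cross-entropy w.r.t. output pre-activation
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())
```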

Neural networks have the advantage of obtaining their expertise through the application of actual data.  However, due to the multiple layers of algorithms, it is usually impossible to determine how the system arrives at the answers it does.

Recently I presented on machine learning at the QUEST Conference in Chicago and at Expo:QA in Spain.  In interacting with the attendees, I realized something.  While some data scientists tend to use more complex algorithms today, the techniques involved in neural networks for machine learning are pretty much the same as they were when I was doing it, now 25 years ago.

So why are we having the explosion in machine learning, AI, and intelligent systems today?  When I was asked that question recently, I realized that there was only one possible answer.

Computer processing speeds continue to follow Moore's Law (more or less), especially when we're talking about floating-point SIMD/parallel processing operations. Moore's Law doesn't directly relate to speed or performance, but there is a strong correlation. And processors today are now fast enough to execute complex algorithms with data applied in parallel. Some, like Nvidia, have wonderful GPUs that turn out to work very well with this type of problem. Others, like Intel, have released an entire processor line dedicated to AI algorithms.

In other words, what has happened is that the hardware caught up to the software.  The software (and mathematical) techniques are fundamentally the same, but now the machine learning systems can run fast enough to actually be useful.