
Automation Can Be Dangerous December 6, 2018

Posted by Peter Varhol in Software development, Software tools, Strategy, Uncategorized.

Boeing has a great way to prevent aerodynamic stalls in its 737 MAX aircraft.  A set of sensors uses airspeed and angle of attack to determine that the aircraft is about to stall (that is, lose lift on its wings), and the system automatically pitches the nose down to recover.

On Lion Air Flight 610, apparently malfunctioning sensors caused the aircraft’s nose to pitch sharply down absent any indication of a stall.  Preliminary analysis indicates that the pilots were unable to overcome the nose-down attitude, and the aircraft dove into the sea.  Boeing’s solution to this automation fault was explicit, even if its documentation wasn’t.  Turn off the system.

And this is what the software developers, testers, and their bosses don’t get.  Everyone thinks that automation is the silver bullet.  Automation is inherently superior to manual testing.  Automation will speed up testing, reduce costs, and increase quality.  We must have more automation engineers, and everyone not an automation engineer should just go away now.

There are many lessons here for software teams.  Automation is great when consistency in operation is required.  Automation will execute exactly the same steps until the cows come home.  That’s a great feature to have.

But many testing activities are not at all about consistency in operation.  In fact, relatively few are.  It would be good for smoke tests and regression tests to be consistent.  Synthetic testing in production also benefits from automation and consistency.
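To make that concrete, here is a minimal sketch of the kind of check that rewards automation: a synthetic smoke test that does exactly the same thing on every run and fails loudly when the application misbehaves.  The endpoint URL, and the choice of Python with the requests library, are illustrative assumptions rather than anything prescribed here.

```python
# A minimal synthetic smoke test (illustrative; the URL is hypothetical).
# Automation shines here because the check repeats the same steps on every
# run, whether triggered by a deploy or by a schedule.
import sys

import requests  # assumes the requests library is installed

HEALTH_URL = "https://example.com/health"  # hypothetical endpoint


def smoke_test() -> bool:
    """Return True if the application answers its health check."""
    try:
        response = requests.get(HEALTH_URL, timeout=5)
    except requests.RequestException as exc:
        print(f"Smoke test failed: {exc}")
        return False
    if response.status_code != 200:
        print(f"Smoke test failed: HTTP {response.status_code}")
        return False
    print("Smoke test passed")
    return True


if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```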

Other types of testing?  Not so much.  The purpose of regression testing, smoke testing, and testing in production is to validate the integrity of the application, and to make sure nothing bad is currently happening.  Those are valid goals, but they are only the start of testing.

Instead, testing is really about individual users and how they interact with an application.  Every person does things on a computer just a little differently, so it behooves testers to do the same.  This isn’t harkening back to the days of weeks or months of testing, but rather acknowledging that the purpose of testing is to ensure an application is fit for use.  Human use.

And sometimes, whether through fault or misuse, automation breaks down, as in the case of the Lion Air 737.  And teams need to know what to do when that happens.

Now, when you are deploying software perhaps multiple times a day, it seems like it can take forever to sit down and actually use the product.  But remember the thousands more who are depending on the software, and on the effort that goes into it.

In addition to knowing when and how to use automation in software testing, we also need to know when to shut it off, and use our own analytical skills to solve a problem.  Instead, all too often we shut down our own analytical skills in favor of automation.
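One way to design that in from the start is a simple off switch: every automated step sits behind a flag a human can flip, so that “turn off the system” is a built-in option rather than a scramble.  The sketch below is my own illustration; the flag, the task type, and the fallback are all hypothetical.

```python
# A minimal sketch of an automation kill switch (hypothetical names throughout).
from dataclasses import dataclass


@dataclass
class Task:
    name: str


# In practice this flag would come from a config store or feature-flag
# service, so it can be flipped without a redeploy.
automation_enabled = True


def handle(task: Task) -> str:
    if automation_enabled:
        # The scripted, repeatable path: fine while its assumptions hold.
        return f"automated: {task.name}"
    # The fallback: a person applies judgment the script does not have.
    return f"queued for human review: {task.name}"


print(handle(Task("nightly regression run")))  # automated
automation_enabled = False                     # automation is misbehaving; shut it off
print(handle(Task("nightly regression run")))  # queued for human review
```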


I Don’t Need a Hero October 23, 2018

Posted by Peter Varhol in Software development, Software platforms, Strategy.

Apologies to Bonnie Tyler, but we don’t need heroes, as we have defined them in our culture.  “He’s got to be strong, he’s got to be fast, and he’s got to be fresh from the fight.”  Um, no.

Atul Gawande, author of The Checklist Manifesto, makes it clear that the heroes, those in any profession who create a successful outcome primarily on the strength of their superhuman effort, don’t deserve to be recognized as true heroes.  In fact, we should try to avoid circumstances that appear to require a superhuman effort.

So what are heroes?  We would like to believe that they exist.  Myself, I am enamored with the astronauts of a bygone era, who faced significant uncertainties in pushing the envelope of technology, and accepted that their lives were perpetually in hock.  But, of course, they were the same ones who thought that they were better than those who sacrificed their lives, because they survived.

Today, according to Gawande, the heroes are those who can follow checklists in order to make sure that they don’t forget any step in a complex process.  The checklists themselves can be simple, in that they exist to prompt professionals to remember and execute seemingly simple steps that are often forgotten in the heat of crisis.

In short, Gawande believes in commercial airline pilots such as Chesley (Sully) Sullenberger, who with his copilot Jeffrey Skiles glided their wounded plane to a ditching in the Hudson River off Midtown Manhattan.  Despite the fact that we all know Sully’s name from the Miracle on the Hudson, it was a team effort by the entire flight crew.  And they were always calm, and in control.

Today, software teams are made up of individuals, not close team members.  Because they rarely work as a team, it’s easy for one or more individuals to step up and fix a problem without the help of the team.

There are several problems with that approach, however.  First, if an extra effort by one person is successful, the team may not try as hard in the future, knowing that they will be bailed out of difficult situations.  Second, the hero is not replicable; you can’t count on it again and again in those situations.  Third, the hero can’t solve every problem; other members of the team will eventually be needed.

It feels good to be the hero, the one who by virtue of extreme effort fixes a bad situation.  The world loves you.  You feel like you’ve accomplished something significant.  But you’re not at all a hero if your team wasn’t there for you.

Too Many Cameras June 15, 2018

Posted by Peter Varhol in Software platforms, Strategy, Technology and Culture.

The title above is a play on the “Too Many Secrets” revelation in the 1992 movie Sneakers, in which Robert Redford’s character, who has a secret or two himself, finds himself in possession of the ultimate decryption device, and everyone wants it.

Today we have too many cameras around us.  This was brought home to me rather starkly when I received an email that said:

I’ve been recording you with your computer camera and caught you <censored>.  Shame on you.  If you don’t want me to send that video to your family and employer, pay me $1000.

I paused.  Did I really do <censored> in front of my computer camera?  I didn’t think so, but I do spend a lot of time in front of the screen.  In any case, <censored> didn’t quite rise to the level of blackmail concern, in my opinion, so I ignored it.

But is this scenario so completely far-fetched?  This article lists all of the cameras that Amazon can conceivably put in your home today, and in the near future, that list will certainly grow.  Other services, such as your PC vendor and security system provider, will add even more movie-ready devices.

In some ways, the explosion of cameras looking at our actions is good.  Cameras can nudge us to drive more safely, and can help identify and find thieves and other bad guys.  They can help find lost or kidnapped children.

But even outside our home, they are a little creepy.  You don’t want to stop in the middle of the sidewalk and think, I’m being watched right now.  The vast majority of people simply don’t have any reason to be observed, and thinking about it can be disconcerting.

Inside our homes, I simply don’t think we want them, phone and PC cameras included.  I do believe that people realize it is happening, but in the short term they think that the coolness of the Amazon products and the lack of friction in ordering from Amazon supersede any thoughts about privacy.  They would rather have computers at their beck and call than think about the implications.

We need to do better than that if we want to live in an automated world.

More on AI and the Turing Test May 20, 2018

Posted by Peter Varhol in Architectures, Machine Learning, Strategy, Uncategorized.

It turns out that most people who care to comment are, to use the common phrase, creeped out at the thought of not knowing whether they are talking to an AI or a human being.  I get that, although I don’t think I’m bothered by such a notion myself.  After all, what do we know about people during a casual phone conversation?  Many of them probably sound like robots to us anyway.

And this article in the New York Times notes that Google was only able to accomplish this feat by severely limiting the domain in which the AI could interact – in this case, making dinner reservations or a hair appointment.  The demonstration was still significant, but it isn’t a truly practical application, even within a limited domain space.

Well, that’s true.  The era of an AI program interacting like a human across multiple domains is far away, even with the advances we’ve seen over the last few years.  And this is why I even doubt the viability of self-driving cars anytime soon.  The problem domains encountered by cars are enormously complex, far more so than any current tests have attempted.  From road surface to traffic situation to weather to individual preferences, today’s self-driving cars can’t deal with being in the wild.

You may retort that all of these conditions are objective and highly quantifiable, making them possible to anticipate and program for.  But we come across driving situations almost daily that have new elements that must be instinctively integrated into our body of knowledge and acted upon.  Computers certainly have the speed to do so, but they lack a good learning framework to identify critical data and integrate that data into their neural networks to respond in real time.

Author Gary Marcus notes that this means the deep learning approach to AI has failed.  I laughed when I came to the solution proposed by Dr. Marcus – that we return to the backward-chaining, rules-based approach of two decades ago.  This was what I learned during much of my graduate studies, and it was largely given up on in the 1990s as unworkable.  Building layer upon layer of interacting rules was tedious and error-prone, and it required an exacting understanding of just how backward chaining worked.
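For readers who never met that style of system, here is a toy sketch of backward chaining (my own illustration, not anything from Dr. Marcus): to prove a goal, the engine finds a rule whose conclusion matches the goal and recursively tries to prove that rule’s premises, bottoming out at known facts.

```python
# A toy backward-chaining rule engine (illustrative facts and rules).
FACTS = {"has_feathers", "lays_eggs"}

# Each conclusion maps to one or more sets of premises that would establish it.
RULES = {
    "is_bird": [{"has_feathers", "lays_eggs"}],
    "can_fly": [{"is_bird", "not_flightless"}],
}


def prove(goal: str) -> bool:
    """Work backward from the goal toward known facts."""
    if goal in FACTS:
        return True
    for premises in RULES.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False


print(prove("is_bird"))  # True: both premises are known facts
print(prove("can_fly"))  # False: "not_flightless" can't be established
```

Even at toy scale the tedium shows: every conclusion needs hand-written premises, and one missing fact quietly blocks an entire chain, which is exactly why the approach fell out of favor.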

Ultimately, I think that the next generation of AI will incorporate both types of approaches:  a neural network to process data and come to a decision, and a rules-based system to provide the learning foundation and structure.

Don’t Break Things April 20, 2018

Posted by Peter Varhol in Strategy, Technology and Culture.

The mantra in tech over the last several years has been “Move fast and break things.”  That culture has been manifested by headliners Uber and Facebook, as well as by countless Silicon Valley startups eager to deliver on what they know for sure is a winning strategy.

It’s long past time that we pushed back on that misguided attitude.  First, no, you don’t have to move fast.  While we cling to the notion of a first-mover advantage, if you look at history it is very much a myth.  Tech history is rife with lessons of established companies moving into a new area, “validating” that space, and pushing out the pioneering startups (Oracle in SQL databases, Facebook against MySpace, Microsoft in just about every market until about 2005, to cite three well-known examples).

Second, you don’t have to break things.  This wrongheaded attitude represents only a misleading part of a larger truism, that if you are headed in the wrong strategic or product direction, it’s better to know it earlier rather than later.  The implication with “breaking things” is that you don’t know if you are headed in the wrong direction unless you break something in the process.  Um, no.  You know it because you have business acumen, and are paying attention, not because you have broken anything.

It gets worse.  Companies such as Uber, Airbnb, and Zenefits have redefined breaking things to include laws and regulations that are inconvenient to their business models.  I simply cannot conceive of how this comes about.  The arrogance and hubris of such firms must be enormous.

Certainly there are countless laws and regulations that need to be rethought and rewritten as advances change how business might be practiced.  I have always said that the (only) positive thing about Uber was that it drastically reshaped the taxi industry, largely for the good, I think.

But ignoring laws and regulations that you don’t like is simply wrong, in any sense you might think of it.  Rather, you work with government entities to educate them on what is possible in order to advance a particular product or service, and you openly advocate for legal change.

Oh, but that takes far too long for tech companies convinced that they have to move fast.  And they simply can’t be bothered anyway.  See my first point – moving fast is rarely a competitive advantage in tech.

It’s clear that Silicon Valley startups won’t buy into what I say here.  It’s up to us, the customer and the public, to object to such an absurd business mantra.  To date, we the public have either stayed on the sidelines, or even actively supported such criminal practices as Uber’s because of the convenience afforded us by the end result.  This has got to change.

Update:  Case in point: https://qz.com/1257229/electric-scooter-startup-bird-wants-to-make-it-legal-to-ride-scooters-on-the-sidewalk/.  It’s illegal, but that’s not stopping the companies.

We Forget What We Don’t Use April 17, 2018

Posted by Peter Varhol in Software platforms, Strategy.

Years ago, I was a pilot.  SEL, as we said, single-engine land.  Once during my instruction, we spent about an hour going over what my instructor called recovery from unusual attitudes.  I went “under the hood”, putting on a plastic device that blocked my vision while he placed the plane in various situations.  Then he would lift the hood to where I could see only the instruments.

I became quite good at this, focusing on two instruments – turn and bank, and airspeed.  Based on these instruments, I was able to recover to straight and level flight within seconds.

My instructor pilot realized what I was doing, and he was a lot smarter than I was.  The next time, my approach didn’t work; it actually made things worse.  I panicked, and in a real-life scenario I may well have crashed.

Today, I have a presentation I generically call “What Aircrews Can Teach IT” (the title changes based on the audience makeup).  It is focused on Crew Resource Management, a structured way of working and communicating so that responsibilities are understood and concerns are voiced.

But there is more that aircrews can teach us.  We panic when we have not seen a situation before.  Aircrews do too.  That’s why they practice, in a simulator, with a check pilot, hundreds of hours a year.  That’s why we have few commercial airline accidents today.  When we do have one, it is almost always because of crew error, because the crew is unfamiliar with the situation.

It’s the same in IT.  If we are faced with a situation we haven’t encountered before, chances are we will react emotionally and incorrectly to it.  The consequences may not be a fatal accident, but we can still do better.

I preach situational awareness in all aspects of life.  We need to understand our surroundings, pay attention to people and events that may affect us, and in general be prepared to react based on our reading of a situation.

In many professional jobs, we’ve forgotten about the value of training.  I don’t mean going to a class; I mean practicing scenarios, again and again, until they become second nature.  That’s what aircrews do.  And that’s what soldiers do.  And when we have something on the line, that is more valuable than anything else we could be doing.  And eventually it will pay off.

About Computer Science and Responsibility March 31, 2018

Posted by Peter Varhol in Strategy, Technology and Culture.

Are we prepared to take on the responsibility of the consequences of our code?  That is clearly a loaded question.  Both individual programmers and their employers use all manner of code to gain a personal, financial, business, or wartime advantage.  I once had a programmer explain to me, “They tell me to build this menu, I build the menu.  They tell me to create these options, I create these options.  There is no thought involved.”

In one sense, yes.  By the time the project reaches the coder, there is usually little in doubt.  But while we are not the masterminds, we are the enablers.

I am not sure that all software programmers have viewed their work that abstractly, without acknowledging potential consequences.  Back in the 1980s, I knew many programmers who declined to work for the burgeoning defense industry in Massachusetts of the day, convinced that their code might be responsible for war and violent death (despite the state’s cultural, well, ambivalence toward its defense industry to begin with).

Others are troubled by providing inaccurate information that is then used to make decisions, or by manipulating people’s emotions so that they feel a particular way, or buy a particular product or service.  But that seems much less damaging or harmful than enabling the launch of a nuclear-tipped ballistic missile.

Or is it?  I am pretty sure that most who work for Facebook do successfully abstract their code from its results.  How else can you explain the company’s disregard of people’s reactions to its extreme intrusion into the lives of its users?  I think that might have relatively little to do with their value systems, and more to do with the culture in which they work.

To be fair, this is not about Facebook, although I could not resist the dig.  Rather, this is to point out that the implementers, yes, the enablers, tend to be divorced from the decisions and the consequences.  To be specific:  Us.

Is this a problem?  After all, those who are making the decisions are better qualified to do so, and are paid to do so, usually better than the programmers.  Shouldn’t they be the ones taking the responsibility?

Ah, but they can use the same argument in response.  They are not the ones actually creating these systems; they are not implementing the actual weapons of harm.

Here is the point.  With military systems, we are well aware that we are enabling war to be fought, the killing of people and the destruction of property.  We can rationalize by saying that we are creating defensive systems, but we have still made a conscious choice here.

With social systems, we seem to care much less that we are potentially causing harm than we do with military systems.  In fact, the likes of Mark Zuckerberg continue to insist that their creations are used only for good.  That is, of course, less and less believable as time marches on.

And to be clear, I am not a pacifist.  I served in the military in my youth.  I believe that the course of human history has largely been defined by war.  And that war is the inevitable result of human needs: for security, for sustenance, or for something else entirely.  It is likely that humanity in general will never grow out of the need to physically dominate others (case in point, Harvey Weinstein).

But as we continue to create software systems that manipulate people, and make them do what they would not otherwise do, is this really ethically different from creating a military system?  We may be able to rationalize it on some level, but in fact we also have to acknowledge that we are doing harm to people.

So if you are a programmer, can you with this understanding and in good conscience say that you are a force for good in the world?

Solving a Management Problem with Automation is Just Plain Wrong January 18, 2018

Posted by Peter Varhol in Strategy, Technology and Culture.

This article is so fascinatingly wrong on so many levels that it is worth your time to read.  On the surface, it may appear to offer some impartial logic:  that we should automate because humans don’t perform consistently.

“At some point, every human being becomes unreliable.”  Well, yes.  Humans aren’t machines.  They have good days and bad days.  They have exceptional performances and poor performances.

Machines, on the other hand, are stunningly consistent, at least under most circumstances.  Certainly software bugs, power outages, and hardware breakdowns happen, and machines will fail to perform under many of those circumstances, but those failures are relatively rare.

But there is a problem here.  Actually, several problems.  The first is that machines will do exactly the same thing, every time, until the cows come home.  That’s what they are programmed to do, and they do it reasonably well.

Humans, on the other hand, experiment.  And through experimentation and inspiration come innovation, a better way of doing things.  Sometimes that better way is evolutionary, and sometimes it is revolutionary.  But that’s how society evolves and becomes better.  The machine will always do exactly the same thing, so there will never be better and innovative solutions.  We become static, and as a society we grow old and tired.

Second, humans connect with other humans in a way machines cannot (the movie Robot and Frank notwithstanding).  The article starts with the story of a restaurant whose workers showed up when they felt like it.  Rather than addressing that problem directly, the owner implemented a largely automated (and hands-off) food assembly line.

What has happened here is that the restaurant owner has taken a management problem and attempted to solve it with the application of technology.  And by not acknowledging his own management failings, he will almost certainly fail in his technology solution too.

Except perhaps for fast food restaurants, people eat out in part for the experience.  We do not eat out only, and probably not even primarily, for sustenance, but rather to connect with our family and friends, and with random people we encounter.

If we cannot do that, we might as well just have glucose and nutrients pumped directly into our veins.