
Solving a Management Problem with Automation is Just Plain Wrong January 18, 2018

Posted by Peter Varhol in Strategy, Technology and Culture.

This article is so fascinatingly wrong on so many levels that it is worth your time to read.  On the surface, it may appear to offer some impartial logic: we should automate because humans don’t perform consistently.

“At some point, every human being becomes unreliable.”  Well, yes.  Humans aren’t machines.  They have good days and bad days.  They have exceptional performances and poor performances.

Machines, on the other hand, are stunningly consistent, at least under most circumstances.  Certainly software bugs, power outages, and hardware breakdowns happen, and machines will fail to perform under those circumstances, but such failures are relatively rare.

But there is a problem here.  Actually, several problems.  The first is that machines will do exactly the same thing, every time, until the cows come home.  That’s what they are programmed to do, and they do it reasonably well.

Humans, on the other hand, experiment.  And through experimentation and inspiration comes innovation, a better way of doing things.  Sometimes that better way is evolutionary, and sometimes it is revolutionary.  But that’s how society evolves and becomes better.  The machine will always do exactly the same thing, so there will never be better or more innovative solutions.  We become static and, as a society, old and tired.

Second, humans connect with other humans in a way machines cannot (the movie Robot & Frank notwithstanding).  This article starts with the story of a restaurant whose workers showed up when they felt like it.  Rather than addressing that problem directly, the owner implemented a largely automated (and hands-off) assembly line for food.

What has happened here is that the restaurant owner has taken a management problem and attempted to solve it with the application of technology.  And by not acknowledging his own management failings, he will almost certainly fail in his technology solution too.

Except perhaps at fast food restaurants, people eat out in part for the experience.  We do not eat out only, and probably not even primarily, for sustenance, but rather to connect with our family and friends, and with random people we encounter.

If we cannot do that, we might as well just have glucose and nutrients pumped directly into our veins.


Let’s Have a Frank Discussion About Complexity December 7, 2017

Posted by Peter Varhol in Algorithms, Machine Learning, Strategy, Uncategorized.

And let’s start with human memory.  “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” is one of the most highly cited papers in psychology.  The title is rhetorical, of course; there is nothing magical about the number seven.  But the paper and the associated psychological studies explicitly define the limits of the human mind’s ability to process increasingly complex information.

The short answer is that the human mind is a wonderful mechanism for some types of processing.  We can very rapidly process a large number of sensory inputs and draw some very quick but not terribly accurate conclusions (Kahneman’s System 1 thinking), but we can’t handle an overwhelming amount of quantitative data and expect to make any sense out of it.

In discussing machine learning systems, I often say that we as humans have too much data to reliably process ourselves.  So we set (mostly artificial) boundaries that let us ignore a large amount of data, so that we can pay attention when the data clearly signify a change in the status quo.

The point is that I don’t think there is a way for humans to deal directly with a lot of complexity.  And if we employ systems to evaluate that complexity and present it in human-understandable concepts, we are necessarily losing information in the process.

This, I think, is a corollary of Joel Spolsky’s Law of Leaky Abstractions, which says that anytime you abstract away from what is really happening with hardware and software, you lose information.  In many cases, that information is fairly trivial, but in some cases, it is critically valuable.  If we miss it, it can cause a serious problem.

While Joel was describing abstraction in a technical sense, I think that his law applies beyond that.  Any time that you add layers in order to better understand a scenario, you out of necessity lose information.  We look at the Dow Jones Industrial Average as a measure of the stock market, for example, rather than minutely examine every stock traded on the New York Stock Exchange.
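To make that concrete, here is a minimal sketch in Python, with made-up numbers: two very different trading days in a hypothetical five-stock market produce exactly the same index-style average, so anyone watching only the average never sees the turmoil underneath.

```python
# A minimal sketch of information lost to abstraction; all numbers invented.
# Two hypothetical trading days: the per-stock moves are very different,
# but the index-style average reports them identically.

calm_day = [0.1, -0.2, 0.3, 0.0, -0.2]   # every stock moves a little
wild_day = [9.0, -8.8, 7.5, -7.9, 0.2]   # huge moves that happen to cancel out

def index_average(changes):
    """The abstraction: collapse many per-stock moves into one number."""
    return sum(changes) / len(changes)

print(index_average(calm_day))   # 0.0
print(index_average(wild_day))   # 0.0 -- the volatility has leaked away
```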

That’s not a bad thing.  Abstraction makes it possible for us to better comprehend the world around us.

But it also means that we are losing information.  Most of the time, that’s not a disaster.  But sometimes it can lead us to disastrously bad decisions.

So what is the answer?  Well, abstract, but doubt.  And verify.

The Human In the Loop September 19, 2017

Posted by Peter Varhol in Software development, Strategy, Technology and Culture.

A couple of years ago, I did a presentation entitled “Famous Software Failures”.  It described six events in history where poor quality or untested software caused significant damage, monetary loss, or death.

It was really more about system failures in general, or the interaction between hardware and software.  And ultimately it was about learning from these failures to help prevent future ones.

I mention this because the protagonist in one of these failures passed away earlier this year: Stanislav Petrov, the Soviet military officer who in 1983 declined to report a launch of five ICBMs from the United States, as reported by Soviet defense systems.  Believing that a real American offensive would involve many more missiles, Lieutenant Colonel Petrov refused to acknowledge the threat as legitimate and contended to his superiors that it was a false alarm (he was reprimanded for his actions, incidentally, and permitted to retire at his then-current rank).  The false alarm had been created by a rare alignment of sunlight on high-altitude clouds above North Dakota.

There is also a novel by Daniel Suarez, entitled Kill Decision, that postulates the rise of autonomous military drones that are empowered to make a decision on an attack without human input and intervention.  Suarez, an outstanding thriller writer, writes graphically and in detail of weapons and battles that we are convinced must be right around the next technology bend, or even here today.

As we move into a world where critical decisions have to be made instantaneously, we must not underestimate the value of the human in the loop.  Whether the decision is made with a focus on logic (“They wouldn’t launch just five missiles”) or emotion (“I will not be remembered for starting a war”), the human places any decision in a larger and far more real context than a collection of anonymous algorithms.

The human can certainly be wrong, of course.  And no one person should be responsible for a decision that can cause the death of millions of people.  And we may find ourselves outmaneuvered by an adversary who relies successfully on instantaneous, autonomous decisions (as almost happened in Kill Decision).

As algorithms and intelligent systems become faster and better, human decisions aren’t necessarily needed or even desirable in a growing number of split-second situations.  But while they may be pushed to the edges, human decisions should not be pushed entirely off the page.

 

Analytics Don’t Apply in the Clutch June 21, 2017

Posted by Peter Varhol in Architectures, Strategy, Technology and Culture.

I was 13 years old, at Forbes Field, and rose with the crowd as Roberto Clemente hit a walk-off home run in the ninth inning to win an important game in the 1971 World Series hunt.  Clemente was a very good hitter for average, but had relatively few home runs.  He delivered in the clutch, as we say.

Moneyball ultimately works in baseball because of the importance of individual achievement in the outcome of games, and because of the length of the season.  A 162-game season enables carefully thought-out probabilities to win out over the long haul.

But teams practicing Moneyball learned that analytics weren’t enough once you got into the postseason.  Here’s the problem.  Probabilities are just that; they indicate a tendency or a trend over time, but don’t effectively predict the result of an individual event in that time series.  Teams such as the Boston Red Sox were World Series winners because they both practiced Moneyball and had high-priced stars proven to deliver results when the game was on the line.
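A toy simulation, with invented batting averages and hypothetical players, makes the point concrete: over a 600 at-bat season the better hitter almost always comes out ahead, yet the probabilities say very little about any single at-bat.

```python
# A toy sketch of probabilities winning out over time but not in a single event.
# The batting averages and players here are invented for illustration.
import random

random.seed(42)

def season_hits(batting_avg, at_bats=600):
    """Count hits over a season, treating each at-bat as an independent trial."""
    return sum(random.random() < batting_avg for _ in range(at_bats))

star, journeyman = 0.300, 0.250
trials = 10_000
star_ahead = sum(season_hits(star) > season_hits(journeyman) for _ in range(trials))

# The star finishes with more hits in nearly every simulated season...
print(f"Star out-hits the journeyman over a season: {star_ahead / trials:.0%}")
# ...but in any one at-bat, the star still fails most of the time.
print(f"Star gets a hit in a single at-bat: {star:.0%}")
```

That gap between the long run and the single event is exactly where the clutch performer earns his keep.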

Machine learning and advanced analytics have characteristics in common with Moneyball.  They provide you with the best answer, based on the application of the algorithms and the data used to train them.  Most of the time, that answer is correct within acceptable limits.  Occasionally, it is not.  That failure may simply be an annoyance, or it may have catastrophic consequences.

I have disparaged Nicholas Carr in these pages in the past.  My opinion of him changed radically as I watched his keynote address at the Conference of the Association for Software Testing in 2016 (this talk is similar).  In a nutshell, Carr says that we can’t automate, and trust that automation, without first having experience with the activity itself.  Simply, we can’t automate something that we can’t do ourselves.

All events are not created equal.  Many are routine, but a few might have significant consequences.  But analytics and AI treat all events within their problem domain as the same.  The human knows the difference, and can rise to the occasion with a higher probability than any learning system.

Learning systems are great.  On average, they will produce better results than a human over time.  However, the human is more likely to deliver when it counts.

Has Moneyball Killed Baseball? June 20, 2017

Posted by Peter Varhol in Education, Publishing, Strategy.

Moneyball was a revelation to me.  It taught me that the experts could not effectively evaluate talent, and opened my own mind to the biases found in software development, testing, and team building.  Some of my best conference presentations and articles have been in this area.

But while Moneyball helped the Oakland Athletics, and eventually some other teams, it seems to be well on its way to killing the sport.  I’ve never been a big sports fan, but there were few other activities that could command the attention of a 12-year-old in the late 1960s.

I grew up in the Pittsburgh area, and while I was too young to see Bill Mazeroski’s dramatic home run in the 1960 World Series, I did see the heroics of Roberto Clemente and Willie Stargell in the 1971 World Series, when Steve Blass won Game 7 in Baltimore with Dave Giusti anchoring the bullpen (my sister was an administrative assistant at the church in Wilmington, NC where Stargell had his funeral).  I later lived in Baltimore, where the Pirates won another Game 7 in dramatic fashion in 1979.

But baseball has changed, and not in a good way.  Today, Moneyball has produced teams that focus on all-or-nothing encounters: strikeouts, walks, and home runs.  I doubt this was what Billy Beane wanted to happen.  That makes baseball boring.  The game now lacks much of the strategy it was once best at.

As we move toward a world where we are increasingly using analytics to evaluate data and make decisions, we may be leaving the interesting parts of our problem domain behind.  I would like to think that machine learning and analytics are generally good for us, but perhaps they provide a crutch that ultimately makes our world less than it could be.  I hope we find a way to have the best of both.

Decisions, Decisions – There’s an Algorithm for That March 20, 2017

Posted by Peter Varhol in Software development, Strategy, Technology and Culture.

I remember shoveling sh** against the tide. Yes, I taught statistics and decision analysis to university business majors for about 15 years.  It wasn’t so much that they didn’t care as that they didn’t want to know.

I had more than one student tell me that it was the job of a manager to make decisions, and that numbers didn’t make any difference. Others said, “I make decisions the way they are supposed to be made, by my experience and intuition.  That’s what I’m paid for.”

Well, maybe not for too much longer. After a couple of decades of robots performing “pick-and-place” and other manufacturing processes, machine learning is now in the early stages of transforming management.  It will help select job candidates, determine which employees are performing at a high level, and allocate resources between projects, among many other things.

So what’s a manager to do? Well, first, embrace the technology.  Simply, you are not going to win if you fight it.  It is inevitable.

Second, make a real effort to understand it. While computers and calculators were available, I always made my students “do it by hand” the first time around, so they could follow what the calculations were telling them.  You need to know what you are turning your decisions over to.
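In that spirit, here is the kind of by-hand calculation I mean, sketched in Python with invented probabilities and payoffs: a simple expected-value comparison of two options, where you can follow every step of the arithmetic before handing it over to a tool.

```python
# A hypothetical decision-analysis example, worked the "by hand" way:
# the expected monetary value (EMV) of two options under uncertain demand.
# All probabilities and payoffs are invented for illustration.

outcomes = {"high demand": 0.3, "medium demand": 0.5, "low demand": 0.2}

payoffs = {
    "expand plant":    {"high demand": 500, "medium demand": 100, "low demand": -300},
    "stay the course": {"high demand": 150, "medium demand": 80,  "low demand": 20},
}

for option, payoff in payoffs.items():
    # EMV = sum over outcomes of P(outcome) * payoff(outcome)
    emv = sum(outcomes[o] * payoff[o] for o in outcomes)
    print(f"{option}: EMV = {emv:.0f}")

# By hand: expand plant    = 0.3*500 + 0.5*100 + 0.2*(-300) = 150 + 50 - 60 = 140
#          stay the course = 0.3*150 + 0.5*80  + 0.2*20     = 45 + 40 + 4   = 89
```

Once you have done that arithmetic yourself, you know exactly what a tool that recommends “expand plant” is actually claiming.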

Third, integrate it into your work processes by using machine learning to complement your own abilities.  Don’t ignore it, but don’t treat it as gospel either.

There are many philosophical questions at work here. Which is better, your experience or the numbers?  Kahneman says they are about the same, which does not bode well for human decision-making.  And the analysis of the numbers will only get better; can we say the same thing about human decision-making?

Of course, this has implications for the future of management. I’ll explore my thoughts on that in a future post.

Your CEO Needs to be Tech-Savvy August 24, 2016

Posted by Peter Varhol in Strategy.

There was a time in the not-too-distant past when the usual business journals decried the insular nature of IT, and insisted that the CIO needed more business and less tech acumen. It was always accepted that the top tech person would never become CEO, because they lacked the business background and ability.

I would argue that today the shoe is on the other foot. IT, technology in general, and technology intelligently applied to business goals are prerequisites for top company leadership.

Any CEO who doesn’t understand the technology that is essential to keeping the enterprise in business, prospering, and beating the competition is woefully unqualified for the top post. That’s all there is to it.

This was succinctly demonstrated by the recent implosion of Delta’s reservation and operations systems.  A CEO who understood the vital importance of these systems would never have placed his or her enterprise in such a position.

Business academics, and business journals, you have it all wrong today. There is more than ample evidence that the CEO needs to understand what makes the business work, and that is technology.

And CEOs, take notice. You need to get up to speed on your vital technology fast.  Not doing so is a prescription for disaster.  And boards of directors, it is likely that the best candidate for your next CEO is your top technology person.

We Don’t Understand Our Numbers March 27, 2016

Posted by Peter Varhol in Strategy, Technology and Culture.

I recently bought The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives, by Stephen T. Ziliak and Deirdre N. McCloskey.

Here’s the gist. Statistics is a great tool for demonstrating that a difference found between two sampling results is “real”.  What do I mean by real?  It means that if I measured the entire population, rather than just taking samples, I would find that the results differ.  Because I sample, I have uncertainty, and statistics provides a way to quantify that uncertainty.

How different? Well, that’s the rub.  We make certain assumptions about what we are measuring (a normal distribution, say, or a binomial distribution), and we attempt to measure how much the data in each group differ from one another, based on the size of our sample.  If the two sets of results are “different enough”, based on a combination of mean, variation, and distribution, we can claim that there is a statistically significant difference.  In other words, is there a real difference in this measure between the two groups?

But is the difference important? That’s the question we continually fail to ask.  The book Reclaiming Conversation talks about measurements not as a result, but as the beginning of a narrative.  The numbers are meaningless outside of their context.

Often a statistically significant difference is unimportant in a practical sense. In drug studies, for example, the study may be large enough, and the variability low enough, to confirm an improvement with an experimental drug regimen, yet the improvement isn’t large enough to be worth the investment to develop.
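A quick simulation, with numbers invented for illustration, shows how that happens: make the sample large enough and even a trivially small difference produces an emphatic significance test.

```python
# A sketch of a statistically significant but practically trivial difference.
# All numbers are simulated and invented for illustration.
import math
import random
import statistics

random.seed(1)

n = 100_000  # a very large sample makes tiny differences "significant"
control = [random.gauss(50.0, 10.0) for _ in range(n)]
treated = [random.gauss(50.2, 10.0) for _ in range(n)]  # true effect: +0.2

mean_c, mean_t = statistics.fmean(control), statistics.fmean(treated)
se = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
z = (mean_t - mean_c) / se  # two-sample z statistic

print(f"difference: {mean_t - mean_c:.2f}, z = {z:.1f}")
# z lands far beyond the usual 1.96 cutoff: emphatically "significant",
# yet the effect is 0.2 on a scale of 50, one-fiftieth of a standard
# deviation -- statistically real, practically trivial.
```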

My sister Karen, a data analyst for a medical center, has pointed out to me that the mismatch can also run in the other direction. She collects data on patient satisfaction, and points out that even minor dissatisfaction can have a large practical effect across both the patient population and the hospital.

That’s just one reason why the measurement is the beginning of the conversation, rather than the conclusion. The number is not the fait accompli; rather, it is the point at which we know enough about the subject to begin talking intelligently.