
Analytics Don’t Apply in the Clutch June 21, 2017

Posted by Peter Varhol in Architectures, Strategy, Technology and Culture.

I was 13 years old, at Forbes Field, and rose with the crowd as Roberto Clemente hit a walk-off home run in the ninth inning to win an important game in the 1971 World Series hunt.  Clemente was a very good hitter for average, but had relatively few home runs.  He delivered in the clutch, as we say.

Moneyball ultimately works in baseball because of the importance of individual achievement in the outcome of games, and the length of the season.  A 162-game season gives carefully thought-out probabilities time to win out over the long haul.

But teams practicing Moneyball learned that analytics weren’t enough once you got into the postseason.  Here’s the problem.  Probabilities are just that; they indicate a tendency or a trend over time, but don’t effectively predict the result of an individual event in that time series.  Teams such as the Boston Red Sox were World Series winners because they both practiced Moneyball and had high-priced stars proven to deliver results when the game was on the line.

Machine learning and advanced analytics have characteristics in common with Moneyball.  They provide you with the best answer, based on the application of the algorithms and the data used to train them.  Most of the time, that answer is correct within acceptable limits.  Occasionally, it is not.  That failure may simply be an annoyance, or it may have catastrophic consequences.

I have disparaged Nicholas Carr in these pages in the past.  My opinion of him changed radically as I watched his keynote address at the conference of the Association for Software Testing in 2016 (this talk is similar).  In a nutshell, Carr says that we can’t automate, and trust that automation, without first having experience with the activity itself.  Simply, we can’t automate something that we can’t do ourselves.

Not all events are created equal.  Many are routine, but a few might have significant consequences.  Yet analytics and AI treat all events within their problem domain as the same.  The human knows the difference, and can rise to the occasion with a higher probability than any learning system.
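The point can be sketched with a toy example (the numbers here are hypothetical): a system that optimizes for average accuracy can look excellent overall while missing every rare, high-consequence event.

```python
# 1,000 events; 1% are rare, high-consequence "clutch" situations.
events = [0] * 990 + [1] * 10        # 1 = high-consequence event
# A model tuned for average accuracy can settle on "always routine".
predictions = [0] * 1000

accuracy = sum(p == e for p, e in zip(predictions, events)) / len(events)
caught = sum(1 for p, e in zip(predictions, events) if e == 1 and p == 1)

print(accuracy)  # 0.99 -- looks great on average
print(caught)    # 0 -- nothing delivered in the clutch
```

On average the system beats most humans; in the clutch it delivers nothing.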

Learning systems are great.  On average, they will produce better results than a human over time.  However, the human is more likely to deliver when it counts.

Has Moneyball Killed Baseball? June 20, 2017

Posted by Peter Varhol in Education, Publishing, Strategy.

Moneyball was a revelation to me.  It taught me that the experts could not effectively evaluate talent, and opened my own mind to the biases found in software development, testing, and team building.  Some of my best conference presentations and articles have been in this area.

But while Moneyball helped the Oakland Athletics, and eventually some other teams, it seems to be well on its way to killing the sport.  I’ve never been a big sports fan, but there were few other activities that could command the attention of a 12-year-old in the late 1960s.

I grew up in the Pittsburgh area, and while I was too young to see Bill Mazeroski’s dramatic home run in the 1960 World Series, I did see the heroics of Roberto Clemente and Willie Stargell in the 1971 World Series (my sister was the administrative assistant at the church in Wilmington, NC where Stargell had his funeral).  I was living in Baltimore when the Pirates won a Game 7 there in dramatic fashion in 1979.

But baseball has changed, and not in a good way.  Today, Moneyball has produced teams that focus on a narrow set of outcomes: strikeouts, walks, and home runs.  I doubt this is what Billy Beane wanted to happen.  It makes baseball boring.  The game now lacks much of the strategy it was once best at.

As we move toward a world where we are increasingly using analytics to evaluate data and make decisions, we may be leaving the interesting parts of our problem domain behind.  I would like to think that machine learning and analytics are generally good for us, but perhaps they provide a crutch that ultimately makes our world less than it could be.  I hope we find a way to have the best of both.

Decisions, Decisions – There’s an Algorithm for That March 20, 2017

Posted by Peter Varhol in Software development, Strategy, Technology and Culture.

I remember shoveling sh** against the tide.  Yes, I taught statistics and decision analysis to university business majors for about 15 years.  It wasn’t so much that they didn’t care as that they didn’t want to know.

I had more than one student tell me that it was the job of a manager to make decisions, and numbers didn’t make any difference. Others said, “I make decisions the way they are supposed to be made, by my experience and intuition.  That’s what I’m paid for.”

Well, maybe not too much longer. After a couple of decades of robots performing “pick-and-place” and other manufacturing processes, now machine learning is in the early stages of transforming management.  It will help select job candidates, determine which employees are performing at a high level, and allocate resources between projects, among many other things.

So what’s a manager to do? Well, first, embrace the technology.  Simply, you are not going to win if you fight it.  It is inevitable.

Second, make a real effort to understand it. While computers and calculators were available, I always made my students “do it by hand” the first time around, so they could follow what the calculations were telling them.  You need to know what you are turning your decisions over to.

Third, integrate it into your work processes, using machine learning to complement your own abilities.  Don’t ignore it, but don’t treat it as gospel either.

There are many philosophical questions at work here. Which is better, your experience or the numbers?  Kahneman says they are about the same, which does not bode well for human decision-making.  And the analysis of the numbers will only get better; can we say the same thing about human decision-making?

Of course, this has implications for the future of management.  I’ll explore my thoughts there in a future post.

Your CEO Needs to be Tech-Savvy August 24, 2016

Posted by Peter Varhol in Strategy.

There was a time in the not-too-distant past when the usual business journals decried the insular nature of IT, and insisted that the CIO needed more business and less tech acumen.  It was always accepted that the top tech person would never become CEO, because they lacked the business background and ability.

I would argue that today the shoe is on the other foot. IT, technology in general, and technology intelligently applied to business goals are prerequisites for top company leadership.

Any CEO who doesn’t understand the technology that is essential for the enterprise to remain in business, prosper, and beat the competition is woefully unqualified for the top post.  That’s all there is to it.

This was succinctly demonstrated in the recent implosion of Delta’s reservation and operations systems.  A CEO who understood the vital importance of these systems would never have placed his or her enterprise in such a position.

Business academics, and business journals, you have it all wrong today. There is more than ample evidence that the CEO needs to understand what makes the business work, and that is technology.

And CEOs, take notice. You need to get up to speed on your vital technology fast.  Not doing so is a prescription for disaster.  And boards of directors, it is likely that the best candidate for your next CEO is your top technology person.

We Don’t Understand Our Numbers March 27, 2016

Posted by Peter Varhol in Strategy, Technology and Culture.

I recently bought The Cult of Statistical Significance: How the Standard Error Cost Us Jobs, Justice, and Lives, by Stephen T. Ziliak and Deirdre N. McCloskey.

Here’s the gist.  Statistics is a great tool for demonstrating that a difference found between two sampling results is “real”.  What do I mean by real?  It means that if I measured the entire population, rather than just taking samples, the results would indeed be different.  Because I sample, I have uncertainty, and statistics provides a way to quantify that uncertainty.

How different?  Well, that’s the rub.  We make certain assumptions about what we are measuring (a normal distribution, a binomial distribution), and we attempt to measure how much the data in the two groups differ from one another, given the size of our sample.  If the two sets of results are “different enough”, based on a combination of mean, variation, and distribution, we can claim that there is a statistically significant difference.  In other words, is there a real difference in this measure between the two groups?
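A minimal sketch of such a test in Python (the data is made up, and a real study would use a statistics package and report a p-value):

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    std_err = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / std_err

group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0]
group_b = [5.6, 5.4, 5.8, 5.5, 5.7, 5.3, 5.6, 5.5]

t = welch_t(group_a, group_b)
# |t| is far above ~2.1 (the two-sided 5% critical value for samples
# this size), so we would claim a statistically significant difference.
print(round(abs(t), 2))  # 6.24
```

The statistic combines exactly the three ingredients above: the difference in means, the variation within each group, and the sample sizes.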

But is the difference important? That’s the question we continually fail to ask.  The book Reclaiming Conversation talks about measurements not as a result, but as the beginning of a narrative.  The numbers are meaningless outside of their context.

Often a statistically significant difference is unimportant in a practical sense.  In drug studies, for example, the study may be large enough, and the variability low enough, to confirm an improvement with an experimental drug regimen, yet the improvement may be too small to justify the cost of developing the drug.
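A back-of-the-envelope illustration (all numbers invented): hold a trivially small effect fixed and grow the sample, and the test statistic crosses the conventional 1.96 threshold anyway.

```python
def z_stat(diff, sd, n):
    # z-statistic for a difference in means between two groups of size n
    return diff / (sd * (2 / n) ** 0.5)

effect, sd = 0.001, 0.05   # a practically meaningless 0.1% difference
for n in (100, 10_000, 50_000):
    z = z_stat(effect, sd, n)
    print(n, round(z, 2), "significant" if z > 1.96 else "not significant")
```

At n = 50,000 the difference is “statistically significant” even though nothing about the effect got more important; only the sample got bigger.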

My sister Karen, a data analyst for a medical center, has pointed out to me that significance can also be in the other direction. She collects data on patient satisfaction, and points out that even minor dissatisfaction can have a large effect across both the patient population and the hospital.

That’s just one reason why the measurement is the beginning of the conversation, rather than the conclusion. The number is not the fait accompli; rather, it is the point at which we know enough about the subject to begin talking intelligently.

The Myths Behind Technology and Productivity February 26, 2016

Posted by Peter Varhol in Strategy, Technology and Culture, Uncategorized.

There was a period of about 15 years from 1980 to 1995 when productivity grew at about half of the growth rate of the US economy.  To many of us, this was the Golden Era of computing technology.  It was the time when computing emerged from the back office and became a force in everyone’s lives.

When I entered the workforce, circa 1980, we typed correspondence (yes, on an IBM Selectric) and sent it through the postal system. For immediate correspondence, we sat for hours in front of the fax machine, dialing away.  Business necessarily moved at a slower pace.

So as we moved to immediate edits, email, and spreadsheets, why didn’t our measures of productivity correspondingly increase?  Well, we really don’t know.  I will offer two hypotheses.  First, our national measures of productivity are lousy.  Our government measures productivity as hours in, product out.  We don’t produce as much defined product today as we did then (more of our effort is in services, which national productivity statistics measure even more poorly), and we certainly don’t measure the quality of the product.  Computing technology has likely contributed to improving both.

Second, it is possible that improvements in productivity tend to lag leaps of technology. That is also a reasonable explanation.  It takes time for people to adapt to new technology, and it takes time for business processes to change or streamline in response.

Today, this article in Harvard Business Review discounts both of these hypotheses, focusing instead on what it calls the dark side of Metcalfe’s Law: we are communicating with more people, to little purpose.  Metcalfe’s Law (named after Ethernet inventor and all-around smart guy Bob Metcalfe) states that the value of a telecommunications network is proportional to the square of the number of connected users of the system.
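The law itself is simple arithmetic: the number of possible pairwise connections among n users is n(n-1)/2, which grows as the square of n.

```python
def pairwise_connections(n):
    # Possible user-to-user links in a network of n connected users
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_connections(n))  # 45, 4950, 499500
```

Ten times the users means roughly a hundred times the connections, and, per the “dark side”, roughly a hundred times the opportunities for low-value chatter.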

The dark side is that we talk to more people, with little productivity. I will acknowledge that technology has contributed to a certain amount of waste.  But it has also added an unmeasurable amount of quality to the finished product or service.  It has enabled talented people to work where they live, and not have to live where they work.  It has let us do things faster and more comprehensively than we were ever able to do in the past.

To say that this is not productive is simply stupid, and does not take into account anything in recent history.

Warning: I am not an economist, by any stretch of the imagination.  I am merely a reasonably intelligent technical person with innate curiosity about how things work.  However, from reading things like this, it’s not clear that many economists are reasonably intelligent people to begin with.

Microsoft Has Lost Its Marketing Mojo August 1, 2015

Posted by Peter Varhol in Software platforms, Strategy.

I am old enough to remember people standing in line outside of Best Buy at midnight before Windows 95 went on sale. We knew the RTM (release to manufacturing, another anachronism) date by heart, and our favorite PC manufacturers would give us hourly updates on GA (yes, general availability) for their products.

Today, we don’t even know that Windows 10 has been released (Microsoft has said that it may take several weeks to deliver on upgrades and new systems), yet we know the exact date that a new version of iOS hits our devices. I’m searching for a new laptop, and can’t even tell what edition of Windows 10 I might be able to obtain.

This is purely Microsoft’s fault, and it’s sad.  It’s sad because the company actually has some very nice products, better than ever I think, yet it is at a loss as to how to communicate that to its markets.  Windows 10 has gotten some great reviews, and I am loving my Microsoft Band and the Microsoft Health app more each day.  But millions of people who have bought the Apple Watch don’t even know that the Band exists.

This failure falls squarely on Microsoft. I’m not entirely sure why Microsoft has failed so miserably, but unless it recognizes this failure and corrects it, there is no long term hope. I can only think that Microsoft believes it is so firmly entrenched in the enterprise that it doesn’t have to worry about any other markets.

I will date myself again, remembering all of the Unix variations and how their vendors believed they were the only solution for enterprise computing.  Today, no one is making money off of Unix (although Linux is alive and well, albeit nowhere near as profitable).  Unix fundamentally died because of the sheer arrogance of DEC, HP, Sun, and other vendors who believed that the technology was unassailable.  It was not, and if you believe otherwise, you don’t know the history of your markets, which is yet another failure.

And it also means Microsoft has totally given up on the consumer. I fully expect that there will be no enhancements to the Band, and that it will end-of-life sometime in the foreseeable future. And that too is sad, because consumer tech is driving the industry today. Microsoft was always a participant there, but has given it up as a lost cause.

It’s not a failure of technology. Microsoft never had great technology (although I do believe today it is better than ever). It’s a failure of marketing, something that Microsoft has forgotten how to do.

How Mature Are Your Testing Practices? April 22, 2015

Posted by Peter Varhol in Software development, Strategy.

I once worked for a commercial software development company whose executive management decided to pursue the Software Engineering Institute’s (SEI) CMMI (Capability Maturity Model Integration) certification for its software development practices.  It hired and trained a number of project managers across multiple locations, documented everything it could, and slowed down its schedules so that teams could learn and practice the new documented processes and collect the data needed to improve them.

There were good reasons for this software provider to try to improve its practices at the time. It had a quality problem, with thousands of known defects not getting addressed and going into production, and its customers not happy with the situation.

However, this new initiative didn’t turn out so well, as you might imagine. After spending millions of dollars over several years, the organization eventually achieved CMMI Level 2 (the activities were repeatable). It wasn’t clear that quality improved, although it likely would have become incrementally better over a longer period of time. But time moved on, and CMMI certification ceased to have the cachet that it once did. Today, in a stunning reversal of strategy, this provider now claims to be fully committed to Agile development practices.

This is a cautionary tale for any software project that looks for a specific process as a solution to their quality or delivery issues. A particular process or discipline won’t automatically improve your software. In the case of my previous employer, CMMI added burdensome overhead to a software supplier that was also forced to respond more quickly to changing technologies.

There are a number of different maturity models that claim to enable organizations to develop and extend processes that can make a difference in software quality and delivery.  The SEI’s CMMI is probably the best known and most widely used.  There is also a Testing Maturity Model, which applies similar principles to the testing realm.  And software tools vendor Coverity has recently released its Development Testing Maturity Model, which outlines a phased-in approach to development testing adoption, and claims to better support a DevOps strategy.

All of these maturity models, in moderation, can be useful for software projects and organizations seeking to standardize and advance the maturity of their project processes. But they don’t automatically improve quality or on-schedule delivery of a product or application.

Instead, teams and organizations should build a process that best reflects their business needs and culture, and then continue to refine that process as needs change to ensure that it continues to improve their ability to deliver quality software. It’s not as important to develop a maturity model as it is to identify your process, customize your ALM tools for that process, and make sure your team is appropriately trained in using it.