We Don’t Understand Our Numbers
March 27, 2016. Posted by Peter Varhol in Strategy, Technology and Culture.
I recently bought The Cult of Statistical Significance: How the Standard Error Cost Us Jobs, Justice, and Lives, by Stephen T. Ziliak and Deirdre N. McCloskey.
Here’s the gist. Statistics is a great tool for demonstrating that a difference found between two sampling results is “real”. What do I mean by real? It means that if I measured the entire population, rather than just took samples, I would know that the results would be different. Because I sample, I have uncertainty, and statistics provide a way to quantify the level of uncertainty.
How different? Well, that’s the rub. We make certain assumptions about what we are measuring (normal distribution, binomial distribution), and we attempt to measure how much the data in each group differ from one another, based on the size of our sample. If the two sets of results are “different enough”, based on a combination of mean, variation, and distribution, we can claim that there is a statistically significant difference. In other words, is there a real difference in this measure between the two groups?
But is the difference important? That’s the question we continually fail to ask. The book Reclaiming Conversation talks about measurements not as a result, but as the beginning of a narrative. The numbers are meaningless outside of their context.
Often a statistically significant difference is unimportant in a practical sense. In drug studies, for example, the study may be large enough, and the variability low enough, to confirm an improvement with an experimental drug regimen, yet the improvement may be too small to justify the cost of developing the drug.
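The distinction is easy to see with a quick sketch (my own hypothetical numbers, using a simple two-sample z-test rather than anything from the book): with a large enough sample, even a trivially small improvement produces an impressively small p-value, while the effect size stays negligible.

```python
import math

def two_sample_z(mean_a, mean_b, sd, n):
    """Two-sample z-test, assuming equal known SDs and equal sample sizes."""
    se = sd * math.sqrt(2.0 / n)           # standard error of the difference
    z = (mean_b - mean_a) / se
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value from the normal distribution
    return z, p

# A tiny improvement (0.5 points on a scale with SD 10), measured on a huge sample:
z, p = two_sample_z(mean_a=100.0, mean_b=100.5, sd=10.0, n=500_000)
print(f"z = {z:.1f}, p = {p:.2e}")                     # highly "significant"...
print(f"effect size (Cohen's d) = {(100.5 - 100.0) / 10.0}")  # ...but trivially small
```

The p-value answers only “is the difference real?”; whether a Cohen’s d of 0.05 matters is the conversation the numbers are supposed to start.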
My sister Karen, a data analyst for a medical center, has pointed out to me that significance can also be in the other direction. She collects data on patient satisfaction, and points out that even minor dissatisfaction can have a large effect across both the patient population and the hospital.
That’s just one reason why the measurement is the beginning of the conversation, rather than the conclusion. The number is not the fait accompli; rather, it is the point at which we know enough about the subject to begin talking intelligently.
The Myths Behind Technology and Productivity
February 26, 2016. Posted by Peter Varhol in Strategy, Technology and Culture, Uncategorized.
Tags: productivity, technology
There was a period of about 15 years from 1980 to 1995 when productivity grew at about half of the growth rate of the US economy. To many of us, this was the Golden Era of computing technology. It was the time when computing emerged from the back office and became a force in everyone’s lives.
When I entered the workforce, circa 1980, we typed correspondence (yes, on an IBM Selectric) and sent it through the postal system. For immediate correspondence, we sat for hours in front of the fax machine, dialing away. Business necessarily moved at a slower pace.
So as we moved to immediate edits, email, and spreadsheets, why didn’t our measures of productivity correspondingly increase? Well, we really don’t know. I will offer two hypotheses. First, our national measures of productivity are lousy. Our government measures productivity as hours in, product out. We don’t produce defined product as much today as we did then (more of our effort goes into services, which national productivity statistics measure even more poorly), and we certainly don’t measure the quality of the product. Computing technology has likely contributed to improving both of these.
Second, it is possible that improvements in productivity tend to lag leaps of technology. That is also a reasonable explanation. It takes time for people to adapt to new technology, and it takes time for business processes to change or streamline in response.
Today, this article in Harvard Business Review discounts both of these hypotheses, focusing instead on what it calls the dark side of Metcalfe’s Law: we are communicating with more people, to little purpose. Metcalfe’s Law (named after Ethernet inventor and all-around smart guy Bob Metcalfe) states that the value of a telecommunications network is proportional to the square of the number of connected users of the system.
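The law itself is just arithmetic. A toy sketch (my own illustration, with an arbitrary proportionality constant) shows how value, counted as unique pairwise connections, grows roughly with the square of the user count:

```python
def metcalfe_value(n: int, k: float = 1.0) -> float:
    """Network value under Metcalfe's Law: proportional to n^2
    (counted here as the n*(n-1)/2 unique pairwise connections)."""
    return k * n * (n - 1) / 2

# Doubling the number of users roughly quadruples the value:
for users in (10, 20, 40):
    print(users, metcalfe_value(users))   # 45.0, 190.0, 780.0
```

The HBR argument, in these terms, is that the connections multiply quadratically while the value of each additional connection does not.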
The dark side is that we talk to more people, with little productivity. I will acknowledge that technology has contributed to a certain amount of waste. But it has also added an unmeasurable amount of quality to the finished product or service. It has enabled talented people to work where they live, and not have to live where they work. It has let us do things faster and more comprehensively than we were ever able to do in the past.
To say that this is not productive is simply stupid, and does not take into account anything in recent history.
Warning: I am not an economist, by any stretch of the imagination. I am merely a reasonably intelligent technical person with innate curiosity about how things work. However, from reading things like this, it’s not clear that many economists are reasonably intelligent people to begin with.
Microsoft Has Lost Its Marketing Mojo
August 1, 2015. Posted by Peter Varhol in Software platforms, Strategy.
Tags: Microsoft Band, Windows 10
I am old enough to remember people standing in line outside of Best Buy at midnight before Windows 95 went on sale. We knew the RTM (release to manufacturing, another anachronism) date by heart, and our favorite PC manufacturers would give us hourly updates on GA (yes, general availability) for their products.
Today, we don’t even know that Windows 10 has been released (Microsoft has said that it may take several weeks to deliver on upgrades and new systems), yet we know the exact date that a new version of iOS hits our devices. I’m searching for a new laptop, and can’t even tell what edition of Windows 10 I might be able to obtain.
This is purely Microsoft’s fault, and it’s sad. It’s sad because the company actually has some very nice products, better than ever, I think, yet is at a loss as to how to communicate that to its markets. Windows 10 has gotten some great reviews, and I am loving my Microsoft Band and the Microsoft Health app more each day. But millions of people who have bought the Apple Watch don’t even know that the Band exists.
This failure falls squarely on Microsoft. I’m not entirely sure why Microsoft has failed so miserably, but unless it recognizes this failure and corrects it, there is no long term hope. I can only think that Microsoft believes it is so firmly entrenched in the enterprise that it doesn’t have to worry about any other markets.
I will date myself again, remembering all of the Unix variations and how they believed they were the only solution for enterprise computing. Today, no one is making money off of Unix (although Linux is alive and well, albeit nowhere near as profitable). Unix fundamentally died because of the sheer arrogance of DEC, HP, Sun, and other vendors who believed that the technology was unassailable. It was not, and if you believe otherwise you don’t know the history of your markets, which is yet another failure.
And it also means Microsoft has totally given up on the consumer. I fully expect that there will be no enhancements to the Band, and that it will end-of-life sometime in the foreseeable future. And that too is sad, because consumer tech is driving the industry today. Microsoft was always a participant there, but has given it up as a lost cause.
It’s not a failure of technology. Microsoft never had great technology (although I do believe today it is better than ever). It’s a failure of marketing, something that Microsoft has forgotten how to do.
How Mature Are Your Testing Practices?
April 22, 2015. Posted by Peter Varhol in Software development, Strategy.
I once worked for a commercial software development company whose executive management decided to pursue the Software Engineering Institute’s (SEI) CMMI (Capability Maturity Model-Integration) certification for its software development practices. It hired and trained a number of project managers across multiple locations, documented everything it could, and slowed down its schedules so that teams could learn and practice the new documented processes, and collect the data needed to improve those processes.
There were good reasons for this software provider to try to improve its practices at the time. It had a quality problem, with thousands of known defects not getting addressed and going into production, and its customers not happy with the situation.
However, this new initiative didn’t turn out so well, as you might imagine. After spending millions of dollars over several years, the organization eventually achieved CMMI Level 2 (the activities were repeatable). It wasn’t clear that quality improved, although it likely would have become incrementally better over a longer period of time. But time moved on, and CMMI certification ceased to have the cachet that it once did. Today, in a stunning reversal of strategy, this provider now claims to be fully committed to Agile development practices.
This is a cautionary tale for any software project that looks for a specific process as a solution to their quality or delivery issues. A particular process or discipline won’t automatically improve your software. In the case of my previous employer, CMMI added burdensome overhead to a software supplier that was also forced to respond more quickly to changing technologies.
There are a number of different maturity models that claim to enable organizations to develop and extend processes that can make a difference in software quality and delivery. The SEI’s CMMI is probably the best known and most widely used. There is also a testing maturity model, which applies principles similar to CMMI’s to the testing realm. And software tools vendor Coverity has recently released its Development Testing Maturity Model, which outlines a phased-in approach to development testing adoption, and claims to better support a DevOps strategy.
All of these maturity models, in moderation, can be useful for software projects and organizations seeking to standardize and advance the maturity of their project processes. But they don’t automatically improve quality or on-schedule delivery of a product or application.
Instead, teams and organizations should build a process that best reflects their business needs and culture, and then continue to refine that process as needs change to ensure that it continues to improve their ability to deliver quality software. It’s not as important to develop a maturity model as it is to identify your process, customize your ALM tools for that process, and make sure your team is appropriately trained in using it.
Net Neutrality and The Oatmeal
November 21, 2014. Posted by Peter Varhol in Strategy, Technology and Culture.
Tags: net neutrality, The Oatmeal
I can understand why Matthew Inman doesn’t accept email on The Oatmeal, but it does make it difficult to raise an important issue. In this case, I would like to explore his take on net neutrality. Yes, I agree that Senator Cruz is probably an idiot, or at least pandering. But beyond that, I’m wondering just who the bad guy is here. Is it Comcast Xfinity, who would like to charge premium prices to companies with real time content delivery needs? Perhaps. Or is it Netflix, who is abusing an infrastructure not designed or operated for streaming high-quality video? Hmm.
Today we tend to think of the Internet as more or less a public utility, akin to our electrical service. That’s not quite correct. Actually, not at all correct. There was a time, when I was in college, when the Internet was a private, elitist academic network, yet funded entirely by the government. If you as an individual wanted access, you had to be an academic, a government-funded researcher, or at worst a paying student at a really good university. And there was some decent content on the Internet, albeit all text-based. That was the world at the time.
In the early 1990s, the powers-that-be (I really don’t know or care to assign it to one political party or the other, and neither should you (and no, Al Gore did not create the Internet, despite his resume)) decided to commercialize it.
That, I think most people would agree, was a Good Thing. We got ISPs (okay, we had AOL and Compuserve before that, but they specifically weren’t on the Internet until later), we got decent graphics tools, and we got modems to use with our phones. It provided for a burst of innovation, an explosion of content, and a democratization of access.
The phone companies made a half-assed attempt to offer higher access speeds, but DSL was expensive, difficult to buy and configure, and slow. The cable companies realized that they already had fat pipes into homes, and rushed to compete, spending hundreds of billions of dollars (granted, our subscription dollars, but a significant investment nonetheless) on network upgrades.
So here the Internet ceased being a public utility, if it ever was one, and became a commercial venture. I agree that the exclusive contracts still oddly granted by municipalities to cable companies make it seem that way, but there is little reason for these to still exist. And in any case, they should be up for re-bid every few years, once again making it less of a utility.
So a business like Netflix comes along, and reduces its operating costs by pushing very high-volume delivery over what is, at worst, a low-cost, fixed-price medium. Is Comcast wrong to want more money for this type of use? To support the Netflix business model of making money from us?
I don’t know. Apparently Matthew Inman does. Good for him.
In theory, I believe net neutrality is the way to go. But it supports some businesses at the expense of others. Just like the alternative. So I simply don’t see a compelling reason to discard either concept.
I Need a Hero, Or Do I?
July 13, 2014. Posted by Peter Varhol in Strategy, Technology and Culture.
Tags: Silicon Valley, Zuckerberg
I am hardly a paragon of virtue. Still, this article nicely summarizes the various misdeeds and lapses of the multimillionaire entrepreneurial set that comprises the Silicon Valley elite, as they attempt to deal with other people and with society at large.
I think there are a couple of issues here. First, startups such as Uber and AirBNB are attempting to build businesses that fly in the face of established regulations governing the type of business activity they are attempting to change. That may in fact be a good thing in general. The regulations exist for a certain public good, but they also exist to protect the status quo from change (Disclosure: I use neither service and remain wary of their value). I would venture to say that any effort to change regulatory processes should have been made before, rather than after, launching the business. At best, the founders of these types of companies are tone-deaf to the world in which they live, though long-term success may turn them into something more than they are today.
But there are also larger issues. To publicly practice sexism, ageism, and intolerance, and to actively encourage frat-boy behavior, is juvenile, quite possibly illegal, and at the very least stupid and insensitive. These are not stupid people, so I must believe that there is an overarching reason for acting in this manner. I can’t possibly help but think of Shanley Kane; despite her extreme and uncompromising stands, she directly lives much of the darker side that we too often gloss over, or in some cases even embrace.
The question is both broad and deep. Should we have any expectation that the leaders of technology should be leaders of character, or even be good people in general? Are we demanding they take on a role that they never asked for, and are not fit for?
On the other hand, does wealth and success convey the right to pursue ideas and policies that may fly in the face of reason? “Privacy is no longer a social norm,” says Zuckerberg, almost certainly because that position benefits him financially. I don’t recall him being appointed to define our privacy norms, but this represents more than an opinion, informed or not; it is also an expression of Facebook’s business strategy.
I think what annoys me most is his power to make that simple statement a reality, without a public debate on the matter. It’s not your call, Zuckerberg, and I can’t help but think that you believe it is. I would have thought that the likes of Eric Schmidt served as adult supervision to the baser instincts of Silicon Valley, until he meekly went along with the crowd.
To be fair, I don’t think these questions are new ones. In the 1800s, media moguls single-mindedly pursued extreme political agendas under the guise of freedom of the press. “Robber barons” single-mindedly pursued profits at the expense of respect and even lives.
Still, the Rockefellers and Carnegies attempted, with perhaps limited success, to atone for their baser instincts during their lives. I grew up in a town with a Carnegie Free Library, after all. Perhaps this is like what a more mature Bill Gates is doing today. We can hope that Zuckerberg matures, although I am not holding my breath. I think that boat has long since sailed on Schmidt.
But it’s difficult to say that this era is all that much different from the late 1800s. We as a society like to think we’ve grown, but that’s probably not true. But this cycle will end, and at its close, we will be able to see more clearly just what our tech leaders are made of.
Mindsets and Software Testing
May 18, 2014. Posted by Peter Varhol in Software development, Strategy.
Tags: mindsets, testing
All of us approach life with certain attitudes and expectations. These attitudes and expectations can affect how we perform and what we learn, both now and in the future.
According to researcher Carol Dweck, there are two fundamental mindsets – fixed and growth. A person with a fixed mindset feels the need to justify their abilities at every possible opportunity. If they succeed, it reaffirms their status as an intelligent and capable person. If they fail, it is a reflection upon their abilities, and there is nothing to be learned from it.
A person with a growth mindset recognizes that chance, opportunity, and skill all play a role in any activity in which they engage. Success can be due to a number of factors, of which our intelligence and abilities play only a part. More important, we can improve our abilities through failure, by not taking it personally and by delving into the lessons learned.
It’s important to understand that mindset is a continuum, so few if any of us are entirely one or the other. And we may lean more one way in some circumstances than in others. I can personally attest to this through my own experiences.
This has a couple of implications for software development and testing. First, it means that we will almost certainly make mistakes. But how we respond to those mistakes is key to our futures. We can be defensive and protective of our self-perception, or we can learn and move on.
Second, and perhaps more important, is that failing at a project, creating buggy code, or failing to find bugs isn’t a reflection on our intelligence or abilities. At the very least, it’s not something that can’t be corrected. If we are willing to grow from it, we might recognize that our work is a marathon, rather than a sprint.
It also has implications for technical careers in general. I’ve failed more times than I would like to count. I’ve also succeeded many times too. With a fixed mindset, I’m not sure where that leads me. Charitably, I’m almost certainly a screw-up. With a growth mindset, I’m right where I want to be. I’ll leave the answer to that question as an exercise for the reader.
Why Do Biases Exist in the First Place?
April 17, 2014. Posted by Peter Varhol in Software development, Strategy.
If we are biased in our analysis of situations and our decisions in those situations, something must have precipitated that bias. As I mentioned in my last post, it is often because we use Kahneman’s System 1, or “fast” thinking, when we should really use the more deliberate System 2 thinking.
But, of course, System 2 thinking requires conscious engagement, which we are reluctant to do for situations that we think we’ve seen before. It simply requires too much effort, and we think we can comprehend and decide based on other experiences. It should come as no surprise that our cognitive processes favor the simple over the complex. And even when we consciously engage System 2 thinking, we may “overthink” a situation and still make a poor decision.
Bias often occurs when we let our preconceptions influence our decisions, which is the realm of System 1 thought. That’s not to say that System 2 thinking can’t also be biased, but the more we think about things, the better the chance that we make a rational decision. It’s easier to mischaracterize a situation if we don’t think deeply about it first.
As for System 2 thinking, we simply can’t make all, or even very many, of our decisions by engaging our intellect. There isn’t enough time, and it takes too much energy. And even if we could, we may overanalyze situations and make errors in that way.
There is also another, more insidious reason why we exhibit biases in analyzing situations and making decisions: we have yet another bias – we believe that we make better decisions than those around us. In other words, we are biased to believe we have fewer biases than the next person!
Are we stuck with our biases, forever consigned to not make the best decisions in our lives? Well, we won’t eliminate bias from our lives, but by understanding how and why it happens, we can reduce biased decisions, and make better decisions in general. We have to understand how we make decisions (gut choices are usually biased), and recognize situations where we have made poor decisions in the past.
It’s asking a lot of a person to acknowledge poor decisions, and to remember those circumstances in the future. But the starting point to doing so is to understand the origin of biases.