
The Role of Heuristics in Bias April 24, 2014

Posted by Peter Varhol in Software development.

A heuristic is what we often refer to as a “rule of thumb”. We’ve experienced a particular situation on several occasions, and have come up with a step-by-step process for dealing with it. It’s purely System 1 thinking in action, as we assess the situation and blindly follow rules that have worked for us in the past.

And heuristics are great. They help us make fast decisions in situations we've experienced before. But when a situation only appears similar and is actually different, applying our heuristic can have very bad consequences.

Here’s a real life example. Years ago, I took flying lessons and obtained my pilot’s license. One of the lessons involved going “under the hood”. The hood is a plastic device that goes over your head (see image). When the hood is down, you can’t see anything. When the hood is raised, you can see the instrument panel, but not outside of the plane.

[Image: the instrument training hood]

While the hood was down, the instructor pilot in the right seat put the plane into an unusual attitude: a bank, a stall, or something else that was unsustainable. When he raised the hood, I was required to use the instrument panel to diagnose the situation and recover from it.

After several of these situations, I had developed a heuristic. I looked first at the turn and bank indicator; if we were turning or banking, I would get us back on course in straight flight. Then I would look at the airspeed indicator. If we were going too slow, I could lower the nose or advance power to get us back to a cruise speed.

This heuristic worked great, and four or five times I was able to recover the aircraft exceptionally quickly. I was quite proud of myself.

My instructor figured out what I was doing, though, and the next time I applied my heuristic it seemed to work. But I was fighting the controls! It wasn't straight and level flight. I started scanning other instruments and discovered that we were losing over a thousand feet a minute.

At that point, my heuristic had failed. But I wasn't able to go back and analyze the situation. My mind froze, and if it weren't for the instructor pilot, we might well have crashed.

The lesson is that when your heuristic fails, you may be worse off than if you had started the analysis from the beginning. You may find that you simply can't go back and start over.

Applying Cognitive Bias to Software Development and Testing April 21, 2014

Posted by Peter Varhol in Software development.

Through their psychology research, Daniel Kahneman and his collaborator Amos Tversky demonstrated that we are not the rational actors classical economics assumes. We make irrational decisions all the time, decisions that most definitely don't maximize our expected utility. Kahneman made the case well enough that he was awarded a Nobel Prize in Economics.

Beyond economics, we exhibit the same behavior in other aspects of our lives, including our professional lives. Let’s take software testing as an example. We may have preconceived notions of how buggy a particular application is, and that will likely affect how we test it. We may have gotten that notion from previous experience with the development team, or from an initial use of a previous version of the software.

As a result of those preconceived notions, or biases, we are likely to plan and execute our work, and evaluate the results, differently than if they didn’t exist. If our prior experiences with the team or the software were negative, we may be overly harsh in our assessment of the software and its perceived flaws. If our experiences are positive, we may be willing to give questionable characteristics a free pass.

Lest it sound like this is a conscious decision on our part, let me say right now that it almost never is. It doesn't occur to us to think that we are biased. If we think of it at all, we believe that the bias is a good thing, because it puts us on alert for possible problems, or gives us a warm, fuzzy feeling about the quality or fitness of the application.

Bias can be a good shortcut to the correct or optimal decision. More often, it is a way of analyzing a situation poorly and making an incorrect or less-than-ideal decision. Even if it might result in a good outcome, it's incumbent on each of us to recognize when we are being influenced by our own beliefs, and to question those beliefs.

We tend to think of software development and testing as highly analytical and scientific endeavors, but the fact is that they are both highly subjective and social. We work in close-knit teams, and the decisions are highly situational based on the exact circumstances of the problem. We tend to overestimate our individual and group abilities, and underestimate the complexity of the problems to be solved.

Further, we tend not to learn relevant lessons from past experiences, instead remaining overly optimistic, often in the face of a great deal of evidence to the contrary.

In subsequent postings, let’s take a look at some of the specific biases, how they affect our work, and how we can recognize and compensate for them.

How Do We Fix Testing? April 17, 2014

Posted by Peter Varhol in Software development, Software tools.

Here is a presentation abstract I hope to get accepted at a conference in the near future:

Perhaps in no other professional field is the gap between theory and practice as stark as in software testing. Researchers and thought leaders claim that testing requires a high level of cognitive and interpersonal skill in order to make judgments about the ability of software to fulfill its operational goals. In their minds, testing is about assessing and communicating the risks involved in deploying software in a specific state.

However, in many organizations, testing remains a necessary evil, and a cost to drive down as much as possible. Testing is merely a measure of conformance to requirements, without regard to the quality of requirements or how conformance is measured. This is certainly an important measure, but tells an incomplete story about the value of software in support of our business goals.

We as testers often help to perpetuate the status quo. Although in many cases we realize we can add far more value than we do, we continue to perform testing in a manner that reduces our value in the software development process.

This presentation looks at the state of the art as well as the state of common practice, and attempts to provide a rationale and roadmap whereby the practice of testing can be made more exciting and stimulating to the testing professional, as well as more valuable to the product and the organization.

Why Do Biases Exist in the First Place? April 17, 2014

Posted by Peter Varhol in Software development, Strategy.

If we are biased in our analysis of situations and our decisions in those situations, something must have precipitated that bias. As I mentioned in my last post, it is often because we use Kahneman’s System 1, or “fast” thinking, when we should really use the more deliberate System 2 thinking.

But, of course, System 2 thinking requires conscious engagement, which we are reluctant to do for situations that we think we’ve seen before. It simply requires too much effort, and we think we can comprehend and decide based on other experiences. It should come as no surprise that our cognitive processes favor the simple over the complex. And even when we consciously engage System 2 thinking, we may “overthink” a situation and still make a poor decision.

Bias often occurs when we let our preconceptions influence our decisions, which is the realm of System 1 thought. That’s not to say that System 2 thinking can’t also be biased, but the more we think about things, the better the chance that we make a rational decision. It’s easier to mischaracterize a situation if we don’t think deeply about it first.

As for System 2 thinking, we simply can’t make all, or even very many, of our decisions by engaging our intellect. There isn’t enough time, and it takes too much energy. And even if we could, we may overanalyze situations and make errors in that way.

There is also another, more insidious reason why we exhibit biases in analyzing situations and making decisions: we hold yet another bias, the belief that we make better decisions than those around us. In other words, we are biased toward believing we have fewer biases than the next person!

Are we stuck with our biases, forever consigned to not make the best decisions in our lives? Well, we won’t eliminate bias from our lives, but by understanding how and why it happens, we can reduce biased decisions, and make better decisions in general. We have to understand how we make decisions (gut choices are usually biased), and recognize situations where we have made poor decisions in the past.

It’s asking a lot of a person to acknowledge poor decisions, and to remember those circumstances in the future. But the starting point to doing so is to understand the origin of biases.

Why We Are Biased in Our Development and Test Practices April 10, 2014

Posted by Peter Varhol in Software development, Strategy.

We all have biases. In general, despite the negative connotation of the word, that's not a bad thing. Our past experiences cause us to view the world in specific ways, and incline us to certain decisions and actions over others. They are part of what makes us individuals, and they assist us in our daily lives by letting us make many decisions with little or no conscious thought.

However, biases may also cause us to make many of those decisions poorly, which can hurt us in our lives, our jobs, and our relationships with others. In this series, I’m specifically looking at how biases influence the software development lifecycle – development, testing, agile processes, and other technical team-oriented processes.

Psychology researchers have attempted to categorize biases, and have come up with at least several dozen varieties. Some overlap, and some nominally separate biases describe similar cognitive processes. But it is clear that how we think through a situation and make a decision is often influenced by our own experiences and biases.

Much of the foundational research comes from the work of psychologists Daniel Kahneman and Amos Tversky, and is summarized in Kahneman's Thinking, Fast and Slow. In this book, Kahneman describes a model of thinking composed of two different systems. System 1 thinking is instinctive, automatic, and fast. It is also gullible, and prone to systematic error when stretched beyond familiar situations. Many biases come from our desire to apply this kind of thinking to more complex and varied situations.

System 2 thinking is slower, more deliberate, and more accurate. If we are being creative or solving a complex problem for the first time, we consciously engage our intellect to arrive at a decision. Using System 2 thinking can often reduce the possibility of bias. But this is hard work, and we generally don’t care to do it that often.

It’s interesting to note that Kahneman won a Nobel Prize in Economics for his research. He demonstrated that individuals and markets often don’t act rationally to maximize their utility. Often our biases come into play to cause us to make decisions that aren’t in our best interest.

In the case of software development and testing, our biases lead us into making flawed decisions on planning, design, implementation, and evaluation of our data. These factors likely play a significant role in our inability to successfully complete many software projects.

Cognitive Bias and Software Testing: An ongoing exploration April 7, 2014

Posted by Peter Varhol in Software development.

Let me explain how this story started for me. I've always enjoyed the books by Michael Lewis, and I picked up Moneyball around the middle of 2011. The revelation for me was that the baseball experts were all wrong: they were biased in how they evaluated talent. That bias intrigued me. Later, I read Michael Lewis's profile of Daniel Kahneman, "The King of Human Error".

Then I read Kahneman’s career capstone book, Thinking, Fast and Slow. It provided the theoretical foundation for what became my most significant series of presentations over the last couple of years.

Moneyball gave me the idea of building this presentation around bias in assembling and running testing and agile teams, and in being a team member. I started giving versions of this presentation in mid-2012, and continued for well over a year. I would like to think that it's been well received. My title was Moneyball and the Science of Building Great Teams.

My collaborator Gerie Owen saw this presentation a couple of times, and realized that bias also applies to the practice of testing. We are likely biased in how we devise testing strategies, build and execute test cases, collect data, and evaluate results. She rolled many of these same concepts into a presentation called How Did I Miss That Bug? The Role of Cognitive Bias in Testing.

I think this is an important topic for the testing and ADLM community in general, because it has the potential to change how we practice and manage. So in a multi-part set of posts over the coming months, I would like to explore cognitive bias and its role in software testing, agile development, team building, and other aspects of the application development lifecycle.

I Am 95 Percent Confident June 9, 2013

Posted by Peter Varhol in Education, Technology and Culture.

I spent the first six years of my higher education studying psychology, along with a smattering of biology and chemistry. While most people don't think of psychology as a disciplined science, I found an affinity for the scientific method, and for the analysis and interpretation of research data. I was good enough at it that I went on to earn a master's degree in applied math.

I didn’t practice statistics much after that, but I’ve always maintained an excellent understanding of just how to interpret statistical techniques and their results.  And we get it wrong all the time.  For example:

  • Correlation does not mean causation, even when the variables are intuitively related. There may be cause and effect, or it may run in reverse (the dependent variable actually causes the corresponding value of the independent variable, rather than vice versa). Or both variables may be driven by another, unknown and untested variable. Or the result may simply have occurred through random chance. In any case, a correlation by itself doesn't tell me whether two (or more) variables are related in a real-world sense.
  • Related to that, the coefficient of determination (R-squared) does not "explain" anything in a human sense. Most statistics books say that the square of the correlation coefficient explains that proportion of the variation in the relationship between the variables, and we interpret "explains" in a causative sense. Wrong. It simply means that that proportion of the variation in one variable is associated, mathematically, with variation in the other. When I describe this, I prefer the term "accounts for".
  • Last, if I'm 95 percent confident there is a statistically significant difference between two results (a common cutoff for concluding that the difference is a "real" one), our minds tend to interpret that conclusion as "I'm really pretty sure about this." Wrong again. Roughly speaking, it means that if there were actually no real difference and I ran the study 100 times, about five of those times I would still declare a "significant" difference purely by chance (see the simulation sketch after this list).
  • Okay, one more, related to that last one. Statistically significant does not mean significant in a practical sense. I may conduct a drug study that indicates that a particular drug under development significantly improves our ability to recover from a certain type of cancer. Sounds impressive, doesn't it? But the sample size and definition of recovery could be such that the drug may only really save a couple of lives a year. Does it make sense to spend billions to continue development of the drug, especially if it might have undesirable side effects? Maybe not.
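
To make the "95 percent confident" point concrete, here is a minimal simulation sketch (mine, not from the post). It repeatedly runs a two-sample t-test on pairs of samples drawn from the same distribution, so there is genuinely no difference to find, and counts how often the test still comes back "significant" at the conventional 0.05 cutoff. The sample sizes and number of simulated studies are arbitrary choices for illustration.

    # Hypothetical illustration (not from the post): how often does a test on
    # pure noise look "statistically significant" at the 0.05 level?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_studies = 10_000   # number of simulated "studies"
    sample_size = 30     # observations per group in each study
    false_positives = 0

    for _ in range(n_studies):
        # Both groups come from the same distribution: any apparent
        # difference between them is purely random chance.
        group_a = rng.normal(loc=0.0, scale=1.0, size=sample_size)
        group_b = rng.normal(loc=0.0, scale=1.0, size=sample_size)
        _, p_value = stats.ttest_ind(group_a, group_b)
        if p_value < 0.05:
            false_positives += 1

    # Expect roughly 0.05: about five of every hundred studies declare a
    # "real" difference even though none exists.
    print(f"False positive rate: {false_positives / n_studies:.3f}")

Running it prints a rate close to 0.05, which is exactly the flip side of being 95 percent confident: the procedure is designed to be fooled by noise about one time in twenty.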

I could go on. Scientific experiments in the natural and social sciences are valuable, and they often incrementally advance the fields in which they are conducted, even when they are set up, conducted, or interpreted imperfectly. That's a good thing.

But even when scientists get the explanation of the results right, it is often presented to us incorrectly, or our minds draw an incorrect conclusion.  A part of that is that a looser interpretation is often more newsworthy.  Another part is that our minds often want to relate new information to our own circumstances.  And we often don’t understand statistics well enough to draw informed conclusions.

Let us remember that Mark Twain described three types of mendacity – lies, damned lies, and statistics.  Make no mistake, that last one is the most insidious.  And we fall for it all the time.

Of Software, Marketing, and Diversity June 7, 2013

Posted by Peter Varhol in Technology and Culture.

Oh, Shanley.  It pained me to read your latest missive on the "marketing chick" and the culture of misogyny.  It pained me because you are sometimes right, but perhaps more often not (or, to be fair, vice versa).  Yes, I've seen what you describe, although I suspect not with the raw intensity you have.

Part of that raw intensity, I suspect, is driven by the Silicon Valley culture.  Whatever exists in America is magnified by the hype that the Valley types like to bring to anything that exists within its confines.

Many of us are too full of ourselves to recognize the value of others in a common endeavor.  Because we are not confident of our own position, we naturally if unreasonably order ourselves at the top of an uncertain food chain.  That means we tend to denigrate those without our particular skill set.

But that particular culture is nowhere near universal.  Many (I have no idea what percentage, but I suspect most) grow out of it.  Those who don’t are sentenced to a life of bad pizza, online games, and no social life.  They pay for their inability to adapt.

There is no single techie who can build, market, sell, and service a software product, and that hasn't been possible for at least 30 years, if ever.  We all know that the most elegant and advanced technical solution is not likely to win in the market.  Those who build those technical solutions are at a loss to understand why they aren't accepted, and are more likely to blame others than themselves.

So we create the "marketing chick" stereotype and denigrate her, even though marketing is a necessary skill for success.

It is a human failing, with the intensity increased by the win at all costs mentality in Silicon Valley.  Perhaps you see so much of it because of where you are.  That’s not to say it is right.  But it is to say that elsewhere it may be different.
