
Why We Are Biased in Our Development and Test Practices April 10, 2014

Posted by Peter Varhol in Software development, Strategy.

We all have biases. In general, despite the negative connotation of the word, that’s not a bad thing. Our past experiences cause us to view the world in specific ways, and incline us to certain decisions and actions over others. They are part of what makes us individuals, and they assist us in our daily lives by letting us make many decisions with little or no conscious thought.

However, biases may also cause us to make many of those decisions poorly, which can hurt us in our lives, our jobs, and our relationships with others. In this series, I’m specifically looking at how biases influence the software development lifecycle – development, testing, agile processes, and other technical team-oriented processes.

Psychology researchers have attempted to categorize biases, and have come up with at least several dozen varieties. Some overlap, and some separately named biases describe similar cognitive processes. But it is clear that how we think through a situation and make a decision is often influenced by our own experiences and biases.

Much of the foundational research comes from the work of psychologists Daniel Kahneman and Amos Tversky, and is summarized in Kahneman’s Thinking, Fast and Slow. In this book, Kahneman describes a model of thinking consisting of two different systems. System 1 thinking is instinctive, automatic, and fast. It is also usually wrong, or at least gullible. Many biases come from our desire to apply this kind of thinking to more complex and varied situations.

System 2 thinking is slower, more deliberate, and more accurate. If we are being creative or solving a complex problem for the first time, we consciously engage our intellect to arrive at a decision. Using System 2 thinking can often reduce the possibility of bias. But this is hard work, and we generally don’t care to do it that often.

It’s interesting to note that Kahneman won a Nobel Prize in Economics for his research. He demonstrated that individuals and markets often don’t act rationally to maximize their utility. Often our biases come into play to cause us to make decisions that aren’t in our best interest.

In the case of software development and testing, our biases lead us into making flawed decisions on planning, design, implementation, and evaluation of our data. These factors likely play a significant role in our inability to successfully complete many software projects.

Cognitive Bias and Software Testing: An ongoing exploration April 7, 2014

Posted by Peter Varhol in Software development.

Let me explain how this story started for me. I’ve always enjoyed the books by Michael Lewis, and I picked up Moneyball around the middle of 2011. The revelation for me was that “all baseball experts were wrong; they were biased in how they evaluated talent”. That bias intrigued me. Later, I read a Michael Lewis profile of Daniel Kahneman called “The King of Human Error”.

Then I read Kahneman’s career capstone book, Thinking, Fast and Slow. It provided the theoretical foundation for what became my most significant series of presentations over the last couple of years.

Moneyball gave me the idea of building this presentation around bias in forming and running testing and agile teams, and in being a team member. I started giving versions of this presentation in mid-2012, and continued for well over a year. I would like to think that it’s been well received. My title was Moneyball and the Science of Building Great Teams.

My collaborator Gerie Owen saw this presentation a couple of times, and realized that bias also applies to the practice of testing. We are likely biased in how we devise testing strategies, build and execute test cases, collect data, and evaluate results. She rolled many of these same concepts into a presentation called How Did I Miss That Bug? The Role of Cognitive Bias in Testing.

I think this is an important topic for the testing and ADLM community in general, because it has the potential to change how we practice and manage. So in a multi-part set of posts over the coming months, I would like to explore cognitive bias and its role in software testing, agile development, team building, and other aspects of the application development lifecycle.

I Am 95 Percent Confident June 9, 2013

Posted by Peter Varhol in Education, Technology and Culture.

I spent the first six years of my higher education studying psychology, along with a smattering of biology and chemistry.  While most people don’t think of psychology as a disciplined science, I found an affinity with the scientific method, and with the analysis and interpretation of research data.  I was good enough at it that I went on from there to earn a master’s degree in applied math.

I didn’t practice statistics much after that, but I’ve always maintained an excellent understanding of just how to interpret statistical techniques and their results.  And we get it wrong all the time.  For example:

  • Correlation does not mean causation, even when variables are intuitively related.  There may be cause and effect, or it could be in reverse (the dependent variable actually causes the corresponding value of the independent variable, rather than vice versa).  Or both variables may be caused by another, unknown and untested variable.  Or the result may simply have occurred through random chance.  In any case, a correlation doesn’t tell me anything about whether or not two (or more) variables are related in a real-world sense.
  • Related to that, the coefficient of determination (R-squared) does not “explain” anything in a human sense.  There is no explanation of the kind our thought patterns expect.  Most statistics books will say that the square of the correlation coefficient explains that amount of variation in the relationship between the variables.  We interpret “explains” in a causative sense.  Wrong.  It simply means that the movement between the two variables is a mathematical relationship with that amount of shared variation.  When I describe this, I prefer the term “accounts for”.
  • Last, if I’m 95 percent confident there is a statistically significant difference between two results (a common cutoff for concluding that the difference is a “real” one), our minds tend to interpret that conclusion as “I’m really pretty sure about this.”  Wrong again.  It means, roughly, that if I conducted the same study 100 times when there was no real difference, I would still declare a “significant” one about five times.  Five times out of a hundred, I draw the wrong conclusion (see the simulation sketch just after this list).
  • Okay, one more, related to that last one.  Statistically significant does not mean significant in a practical sense.  I may conduct a drug study that indicates that a particular drug under development significantly improves our ability to recover from a certain type of cancer.  Sounds impressive, doesn’t it?  But the sample size and definition of recovery could be such that the drug may only really save a couple of lives a year.  Does it make sense to spend billions to continue development of the drug, especially if it might have undesirable side effects?  Maybe not.
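
To make that 95 percent point concrete, here is a minimal simulation sketch in Python (my own toy example, with made-up sample sizes, not drawn from any real study).  It repeatedly compares two samples from the same population, so every “significant” difference it finds is a false alarm:

import random
import statistics

def t_statistic(a, b):
    # Welch's t statistic for two independent samples.
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(2013)
trials = 1000
false_positives = 0
for _ in range(trials):
    # Both samples come from the same normal population, so there is
    # never a real difference to find.
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    # |t| > 1.98 approximates p < 0.05 at roughly 98 degrees of freedom.
    if abs(t_statistic(a, b)) > 1.98:
        false_positives += 1

print(false_positives, "of", trials, "comparisons came out 'significant'")
# Expect somewhere around 50 -- the five percent error we agreed to accept.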

I could go on.  Scientific experiments in the natural and social sciences are valuable, and they often incrementally advance the field in which they are conducted, even if they are set up, conducted, or interpreted incorrectly.  That’s a good thing.

But even when scientists get the explanation of the results right, it is often presented to us incorrectly, or our minds draw an incorrect conclusion.  A part of that is that a looser interpretation is often more newsworthy.  Another part is that our minds often want to relate new information to our own circumstances.  And we often don’t understand statistics well enough to draw informed conclusions.

Let us remember the three types of mendacity that Mark Twain made famous (crediting the line to Disraeli) – lies, damned lies, and statistics.  Make no mistake, that last one is the most insidious.  And we fall for it all the time.

Of Software, Marketing, and Diversity June 7, 2013

Posted by Peter Varhol in Technology and Culture.

Oh, Shanley.  It pained me to read your latest missive on the marketing chick and the culture of misogyny.  It pained me because you are sometimes right, but perhaps more often not (or, to be fair, vice versa).  Yes, I’ve seen what you describe, although I would suspect not with the raw intensity you have.

Part of that raw intensity, I suspect, is driven by the Silicon Valley culture.  Whatever exists in America at large is magnified by the hype that Valley types like to bring to anything within the Valley’s confines.

Many of us are too full of ourselves to recognize the value of others in a common endeavor.  Because we are not confident of our own position, we naturally if unreasonably order ourselves at the top of an uncertain food chain.  That means we tend to denigrate those without our particular skill set.

But that particular culture is nowhere near universal.  Many (I have no idea what percentage, but I suspect most) grow out of it.  Those who don’t are sentenced to a life of bad pizza, online games, and no social life.  They pay for their inability to adapt.

There is no single techie who can build, market, sell, and service a software product, and that hasn’t been possible for at least 30 years, if ever.  We all know that the most elegant and advanced technical solution is not likely to win in the market.  Those who build those technical solutions are at a loss to understand why they aren’t accepted, and are more likely to blame others than themselves.

So we create the marketing chick and denigrate her, even though marketing is a necessary skill for success.

It is a human failing, with the intensity increased by the win at all costs mentality in Silicon Valley.  Perhaps you see so much of it because of where you are.  That’s not to say it is right.  But it is to say that elsewhere it may be different.

Really Big Data and the Pursuit of Privacy June 7, 2013

Posted by Peter Varhol in Technology and Culture.

There’s been so much excitement these days about the commercial potential of Big Data that we’ve forgotten that the Federal government is in the best position to obtain and analyze many terabytes of data.  We were reminded of that in a big way following revelations that the National Security Agency (NSA) was obtaining, under secret court order, information about all phone calls made by Verizon customers.  I am not a Verizon customer, but I have no doubt that similar court orders exist for other carriers.

(Interesting side note:  Many years ago, after I earned my MS in Math, I had a job offer to join the NSA as a civilian cryptologist.  Perhaps now I wish I had taken it.)

With virtually unlimited fast computing power, the NSA can identify patterns that provide a basis for follow-up law enforcement activities.

Here’s a simple example of how it works.  A computer program identifies twenty or so different phone numbers in the New York City area that have called the same number in, oh, the Kingdom of Jordan about two hundred times in the last two months.  The number in Jordan is a suspected front (through other sources) for some sort of terrorist activity.  This connection might provide law enforcement reason to look more closely into the activities of those making these calls.  That’s not inherently a bad thing.
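
The query itself is nothing exotic.  A toy sketch in Python (the record format and thresholds here are invented for illustration) might look like this:

from collections import defaultdict
from datetime import datetime

# Each record: (caller_number, callee_number, time_of_call)
calls = [
    ("212-555-0101", "+962-6-555-0199", datetime(2013, 5, 1, 9, 30)),
    ("212-555-0134", "+962-6-555-0199", datetime(2013, 5, 2, 14, 5)),
    # ... millions more records in a real data set
]

window_start = datetime(2013, 4, 7)   # only look at the last two months
callers_by_callee = defaultdict(set)
call_count = defaultdict(int)

for caller, callee, when in calls:
    if when >= window_start:
        callers_by_callee[callee].add(caller)
        call_count[callee] += 1

# Flag numbers reached by many distinct callers, many times over.
for callee, callers in callers_by_callee.items():
    if len(callers) >= 20 and call_count[callee] >= 200:
        print(callee, "--", len(callers), "callers,", call_count[callee], "calls")

The power comes from the volume of records, not from any cleverness in the query.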

Of course, there are ways that terrorists and criminals can combat this, such as the use of prepaid and disposable cell phones bought with cash, calling cards, and even random pay phones.  At best, analyzing call records represents one tool among many in the pursuit of wrongdoing, and not really a “Big Brother is Watching” scenario.

From a privacy standpoint, I’m mostly sanguine about the NSA collecting and analyzing calling data.  I’m not engaged in terrorist or criminal activities, and my phone calls are just a few data points among the billions out there.  I’m not directly threatened, or even inconvenienced.

But . . . there may be a slippery slope here.  The definition of suspicious calling activity may gradually expand to include things that aren’t illegal, but perhaps just unethical or embarrassing.  Once you have the data and the computing power, you can start looking for other things.  Call it scope creep, an all-too-common affliction of many projects.

And in a larger sense, many of our freedoms are actually constructed on the premise that the Federal government cannot connect the dots between the myriad of records held by the many Federal agencies on each of us.  Call it privacy by disorganization, but it has worked at least throughout my lifetime to protect my liberties.  But thanks to the advancements made in Big Data over the last several years, we may be seeing the end of that type of protection.

Security and privacy are a direct tradeoff.  Unlike many Americans, I would prefer to be a little less secure and a little more private.  But the majority does rule, and I do believe that the majority has little issue with the current state of affairs.

You Don’t Have to Retire to a University Town April 28, 2013

Posted by Peter Varhol in Education, Technology and Culture.

Not that I’m looking at retirement anytime soon; I love what I do for a living, and can give it a lot of energy.  But there has been a push over the last decade or so for people to retire to university towns where they can experience the educational opportunities inherent in the academic environment.

I call BS on that life strategy.

I’m finishing up a MOOC through Coursera, and I have to say that the experience has rekindled an enthusiasm for higher education that I may have lost since I (voluntarily) left my tenure-track position in computer science and math, now almost seventeen years ago.

I have to give credit to Clay Shirky, whose tweet led me in the direction of the topic and course.  The course is A Beginner’s Guide to Irrational Behavior, taught by Dan Ariely at Duke University.  The topic fits well into my present interest in understanding and compensating for bias in software testing.

I really lacked the time to do it.  But the course organization is a wonderful combination of the freedom to work on your own schedule (I’ve been on business travel three times in the last three weeks) and the structure needed to see it through.  You can fully participate in online hang-outs, wikis, readings, and lectures; do what is necessary to satisfactorily complete the course (this one requires an average score of 85 across all exercises and quizzes); or just pick and choose, depending on your interests and time.

Competitive person that I am, I chose to work toward course completion, while doing little of the extracurricular activities that can add spice to a learning experience.  I still work for a living, after all.

The fact that you can live just about anywhere in the world with broadband Internet access and still experience outstanding educational opportunities makes the idea of living in a university town less vital to intellectual stimulation.  If you’re looking to a university town in retirement to keep your intellectual edge, you may be shortchanging yourself.

Testing and Tester Bias March 30, 2013

Posted by Peter Varhol in Software development.

Software testers are increasingly looking at how to approach the problems inherent in testing, and at how the ways we think, and the biases we bring to our work, affect the conclusions we draw.  Because we can’t test every aspect of an application exhaustively, much of the testing process rests on our past practices, our judgment, and decisions made from incomplete and often inconclusive evidence.

Much of the foundation behind examining how we approach and solve problems comes from Daniel Kahneman’s landmark book Thinking, Fast and Slow.  In the book, Kahneman, a psychologist and Nobel Prize-winning economist, defines two types of thinking.  The first, System 1 thinking, is a fast, involuntary, and largely instinctive method of thought that enables us to function on a daily basis.  If we sense things in our surroundings, such as movement or sounds, our System 1 thinking interprets them instantly and responds if necessary.

System 2 thinking, in contrast, is slow, deliberate, and more considered.  If we’re faced with a situation we haven’t encountered before, or a difficult problem, we engage System 2 thinking and make a more focused decision.  We take the time to think through a problem and come up with what we believe is the best answer or response.

Each has its respective advantages and disadvantages.  System 1 thinking is good enough for most low-risk or immediate decisions, but is too simplistic for more difficult situations.  If we try to make complex decisions without engaging System 2 thinking, we risk making less than optimal decisions due to our own biases or a lack of information at the time a decision is made.

While System 2 thinking is more accurate in complex situations, it takes time to engage and think through a problem.  It’s a conscious process to decide to think more deeply about a situation, and to begin determining how to approach it.  For most simple decisions in our lives, it’s overkill and not timely enough to be useful.  System 2 thinking is also hard work, and can cause decision fatigue if done too often.

As a result, each type of thinking introduces biases into our daily work that affect how we test and what conclusions we draw from our data.  We may depend too much on System 1, drawing fast conclusions in cases where further thought is needed, or on System 2 so much that we become fatigued and begin to make mistakes because we are mentally tired.

In practice, it’s better to alternate the two types of thinking so as not to overuse either.  If we have complex data to collect and evaluate, it helps to break up that process with occasional rote activities such as automated regression testing.  For example, exploratory testing is a gradual learning process that requires extensive System 2 thinking, while executing prepared test scripts is largely a rote exercise.  Being able to alternate every day or two between the two approaches can keep testers sharp.

In my upcoming posts, I’ll take a closer look at biases that come about as a result of how we think as we approach a problem.  Then I’ll look at how these biased decisions can affect how we test and what we find when we do.

Can Our Shopping Cards Save Our Lives? March 17, 2013

Posted by Peter Varhol in Software platforms, Technology and Culture.

I’m a bit of a throwback when it comes to certain applications of technology.  In addition to not using Facebook, I don’t have supermarket rewards cards, or even use a credit or debit card at the supermarket.  My reasoning for the latter is simple – I would prefer not to have the supermarket chain know what I’m eating.  I realize that I may be giving up coupons or other special deals by not identifying myself, but I’m willing to accept that tradeoff.  It’s not a big deal either way, but it’s how I prefer to make that particular life decision.

But now there seem to be better reasons to use your supermarket rewards card – according to this NBCNews.com article, it may save your life.  Really.

The story goes something like this.  When there is a known food contamination, health officials can see who bought that particular food, and approach those people individually, rather than send out vague alerts that not everyone sees or hears.

Count me as dubious.  This is really a sort of pie-in-the-sky application of Big Data that people can dream up when they picture the potential of the data itself.  It would take weeks to reach all of the buyers of a particular contaminated product, even if you could match all of the different systems and databases together somehow.  By then, the scare would have run its course.

The reality is that such data is stored in hundreds or thousands of different systems, with no means of pulling it together, let alone querying it for a specific product across millions of purchases.

And then, of course, there are people like me, who still insist on dealing in cash and remaining somewhat anonymous.  Granted, they could take my photo in the supermarket and rather quickly match it to my other identified photos on the Internet, where I am well known as a speaker and writer.

The idea is intriguing, but it falls into the same tradeoff as many other applications of technology in society today.  We can do things to make ourselves safer, but at the cost of providing more information.  Some don’t seem to have a problem with the latter, but I, in my doddering middle age, do.
