Decisions, Decisions – There’s an Algorithm for That March 20, 2017

Posted by Peter Varhol in Software development, Strategy, Technology and Culture.
I remember shoveling sh** against the tide. Yes, I taught statistics and decision analysis to university business majors for about 15 years.  It wasn’t so much that they didn’t care as they didn’t want to know.

I had more than one student tell me that it was the job of a manager to make decisions, and numbers didn’t make any difference. Others said, “I make decisions the way they are supposed to be made, by my experience and intuition.  That’s what I’m paid for.”

Well, maybe not too much longer. After a couple of decades of robots performing “pick-and-place” and other manufacturing processes, now machine learning is in the early stages of transforming management.  It will help select job candidates, determine which employees are performing at a high level, and allocate resources between projects, among many other things.

So what’s a manager to do? Well, first, embrace the technology.  Put simply, you are not going to win if you fight it.  It is inevitable.

Second, make a real effort to understand it. While computers and calculators were available, I always made my students “do it by hand” the first time around, so they could follow what the calculations were telling them.  You need to know what you are turning your decisions over to.
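As a toy illustration of the kind of by-hand calculation taught in decision analysis courses, here is the classic expected monetary value (EMV) comparison in Python. All of the choices, payoffs, and probabilities below are invented for illustration; the point is only that once you have worked the arithmetic yourself, the code holds no mystery.

```python
# Expected monetary value (EMV): for each choice, sum payoff * probability
# over its possible outcomes, then pick the choice with the highest EMV.
# Every number here is hypothetical.

def expected_value(outcomes):
    """Sum of payoff * probability over all outcomes of one choice."""
    return sum(payoff * prob for payoff, prob in outcomes)

# Each choice maps to (payoff, probability) pairs; probabilities sum to 1.0.
choices = {
    "launch_now":  [(100_000, 0.25), (20_000, 0.5), (-40_000, 0.25)],
    "wait_a_year": [(60_000, 0.5), (10_000, 0.5)],
}

for name, outcomes in choices.items():
    print(f"{name}: EMV = {expected_value(outcomes):,.0f}")

best = max(choices, key=lambda name: expected_value(choices[name]))
print("best choice by EMV:", best)
```

Doing this once with pencil and paper (25,000 + 10,000 − 10,000 for the first choice, and so on) is exactly the exercise that makes the machine's output legible later.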

Third, integrate it into your work processes by using machine learning to complement your own abilities.  Don’t ignore it, but don’t treat it as gospel either.

There are many philosophical questions at work here. Which is better, your experience or the numbers?  Kahneman says they are about the same, which does not bode well for human decision-making.  And the analysis of the numbers will only get better; can we say the same thing about human decision-making?

Of course, this has implications for the future of management. I’ll explore my thoughts there in a future post.

Testing and Tester Bias March 30, 2013

Posted by Peter Varhol in Software development.

Software testers are increasingly looking at how to approach the problems inherent in testing, and at how the ways we think and the biases we bring to our work affect the conclusions we draw.  Because we can’t test every aspect of an application exhaustively, much of the testing process is based on our past practices, judgment, and decisions that we make based on incomplete and often inconclusive evidence.

Much of the foundation behind examining how we approach and solve problems comes from Daniel Kahneman’s landmark book Thinking, Fast and Slow.  In the book, Kahneman, who is a psychologist and Nobel Prize-winning economist, defines two types of thinking.  The first, System 1, is a fast, involuntary, and largely instinctive mode of thought that enables us to function on a daily basis.  If we sense things in our surroundings such as movement or sounds, our System 1 thinking interprets them instantly and responds if necessary.

System 2 thinking, in contrast, is slow, deliberate, and more considered.  If we’re faced with a situation we haven’t encountered before, or a difficult problem, we engage System 2 thinking and make a more focused decision.  We take the time to think through a problem and come up with what we believe is the best answer or response.

Each has its respective advantages and disadvantages.  System 1 thinking is good enough for most low-risk or immediate decisions, but is too simplistic for more difficult situations.  If we try to make complex decisions without engaging System 2 thinking, we risk making less than optimal decisions due to our own biases or a lack of information at the time a decision is made.

While System 2 thinking is more accurate in complex situations, it takes time to engage and think through a problem.  It’s a conscious process to decide to think more deeply about a situation, and to begin determining how to approach it.  For most simple decisions in our lives, it’s overkill and not timely enough to be useful.  System 2 thinking is also hard work, and can cause decision fatigue if done too often.

As a result, each type of thinking introduces biases into our daily work, which affects how we test and what conclusions we draw from our data.  We may depend too much on System 1 to draw fast conclusions in cases where further thought is needed, or on System 2 so much that we become fatigued and begin to make mistakes because we are mentally tired.

In practice, it’s better to alternate the two types of thinking so as to not overuse either.  If we have complex data to collect and evaluate, it helps if we break up that process with occasional rote activities such as automated regression testing. For example, exploratory testing is a gradual learning process that requires extensive System 2 thinking. In contrast, executing prepared test scripts is largely a rote exercise. Being able to alternate every day or two between the two approaches can keep testers sharp.

In my upcoming posts, I’ll take a closer look at biases that come about as a result of how we think as we approach a problem.  Then I’ll look at how these biased decisions can affect how we test and what we find when we do.