## I Am 95 Percent Confident

*June 9, 2013*

*Posted by Peter Varhol in Education, Technology and Culture.*

Tags: big data, statistics


I spent the first six years of my higher education studying psychology, along with a smattering of biology and chemistry. While most people don’t think of psychology as a disciplined science, I found an affinity with the scientific method, and with the analysis and interpretation of research data. I was good enough at it that I went from there to get a master’s degree in applied math.

I haven’t practiced statistics much since then, but I’ve always maintained a solid understanding of how to interpret statistical techniques and their results. And we get them wrong all the time. For example:

- Correlation does not mean causation, even when the variables are intuitively related. There may be cause and effect, or it could run in reverse (the dependent variable actually causes the corresponding value of the independent variable, rather than vice versa). Or both variables may be caused by another, unknown and untested variable. Or the result may simply have occurred through random chance. In any case, a correlation by itself tells me nothing about whether two (or more) variables are related in a real-world sense.
- Related to that, the coefficient of determination (R-squared) does not “explain” anything in a human sense. Most statistics books say that the square of the correlation coefficient explains that proportion of the variation in the relationship between the variables. We read “explains” in a causative sense. Wrong. It simply means that the joint movement of the two variables is a mathematical relationship accounting for that proportion of the variation. When I describe this, I prefer the term “accounts for.”
- Last, if I’m 95 percent confident there is a statistically significant difference between two results (a common cutoff for concluding that the difference is a “real” one), our minds tend to interpret that conclusion as “I’m really pretty sure about this.” Wrong again. It means that if there were in fact no real difference and I conducted the study 100 times, about five of those studies would still show a “significant” difference by chance alone. In other words, roughly one time in twenty I will draw the wrong conclusion.
- Okay, one more, related to that last one. Statistically significant does not mean significant in a practical sense. I may conduct a drug study that indicates that a particular drug under development significantly improves our ability to recover from a certain type of cancer. Sounds impressive, doesn’t it? But the sample size and definition of recovery could be such that the drug may only really save a couple of lives a year. Does it make sense to spend billions to continue development of the drug, especially if it might have undesirable side effects? Maybe not.
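That last point about 95 percent confidence is easy to see in a simulation. Here is a minimal sketch (my own illustration, not from the original post) that draws two groups from the *same* distribution over and over, so any “significant” difference is pure chance, and counts how often a simple test crosses the conventional 95 percent cutoff. I use a two-sample z-statistic with the usual 1.96 two-sided threshold as a simplifying assumption; a t-test would be the more careful choice for small samples.

```python
import math
import random
import statistics

random.seed(42)

def two_sample_z(a, b):
    """Approximate two-sample test statistic (normal approximation,
    reasonable for large, equal-size samples)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

trials = 2000
n = 100
false_positives = 0
for _ in range(trials):
    # Both groups come from the SAME distribution, so there is no
    # real difference to find.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(two_sample_z(a, b)) > 1.96:  # 95 percent two-sided cutoff
        false_positives += 1

print(f"False positive rate: {false_positives / trials:.3f}")  # typically near 0.05
```

By construction, about 5 percent of these comparisons come out “statistically significant” even though nothing real is going on, which is exactly what the 95 percent confidence level means.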

I could go on. Scientific experiments in the natural and social sciences are valuable, and they often incrementally advance the field in which they are conducted, even though some are set up, conducted, or interpreted incorrectly. That’s a good thing.

But even when scientists get the explanation of the results right, it is often presented to us incorrectly, or our minds draw an incorrect conclusion. A part of that is that a looser interpretation is often more newsworthy. Another part is that our minds often want to relate new information to our own circumstances. And we often don’t understand statistics well enough to draw informed conclusions.

Let us remember the saying Mark Twain popularized: there are three kinds of lies – lies, damned lies, and statistics. Make no mistake, that last one is the most insidious. And we fall for it all the time.

## Comments»

No comments yet — be the first.