
Testing and the Black Swan January 6, 2013

Posted by Peter Varhol in Software platforms.

I noted in a post a few days ago that complex systems are subject to what are known as Black Swan events. We think of Black Swan events as very rare, requiring a complex set of circumstances to occur in order for a disaster to happen.

That’s fallacious reasoning. A Black Swan event happens not because of an unusual sequence of events, but because the corresponding system is complex and has multiple unknown points of failure. It’s not out of our control, just out of our conception. And the events follow a “fat tail” distribution: not a traditional Gaussian normal curve, but a flattened one, with substantial probability mass in the tails.

I was speaking to my friend Jim Farley on this topic last night. He asked (no, demanded) that I distinguish between a complex system, where you can with sufficient foresight conceive of and control outcomes, and a chaotic one, which is highly dependent upon initial conditions and largely unpredictable if those conditions aren’t known.

His intent was to separate natural disasters from the definition. I’m not sure that the distinction is a worthwhile one to make, as the results seem more a matter of degree than a hard difference.

Can complex interrelated systems be tested? Not completely, of course; we don’t even completely test software that is very well defined. But what we need to do is to get away from the idea that catastrophic failures occur due to a complex sequence of highly unlikely events, and instead acknowledge that a complex system simply has a lot of points of failure.

This type of testing is similar to testing safety-critical software, where your goal is to map out the failure points and determine how best to make the system fail. That’s a very different way of working than how testers usually approach software, which tends to be quite methodical and planned. The problem is that most failures are catastrophic and unplanned (how can you plan a failure?).
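One way to put this into practice is fault injection: instead of waiting for a rare combination of events, deliberately fail each component in turn and observe what the rest of the system does. The sketch below is a minimal illustration, assuming a made-up three-step order pipeline (the step names and the `process_order` function are invented for this example, not from any particular system):

```python
class FaultInjector:
    """Forces a named step to fail so we can observe how the system reacts."""
    def __init__(self, failing_step=None):
        self.failing_step = failing_step

    def call(self, name, fn, *args):
        if name == self.failing_step:
            raise RuntimeError(f"injected fault in {name}")
        return fn(*args)

def process_order(injector, order):
    # A hypothetical three-step pipeline; any step may be forced to fail.
    total = injector.call("price", lambda o: o["qty"] * o["unit_price"], order)
    injector.call("charge", lambda amt: None, total)   # stand-in for payment
    injector.call("ship", lambda o: None, order)       # stand-in for fulfilment
    return total

def survey_failure_points(order, steps=("price", "charge", "ship")):
    """Map out each failure point by injecting a fault there and
    recording what the caller actually observes."""
    results = {}
    for step in steps:
        try:
            process_order(FaultInjector(step), order)
            results[step] = "no visible failure"
        except RuntimeError as e:
            results[step] = str(e)
    return results

print(survey_failure_points({"qty": 2, "unit_price": 10.0}))
```

The point of the survey is less the individual faults than the map it produces: a list of places the system can break, gathered systematically rather than discovered after a catastrophe.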

James Bach talks about the Buccaneer Tester in his blog (actually, buccaneer scholar, but I’m being selective). While his point is much broader, I’d like to focus on the part of the buccaneer that takes measured risks for a high reward. We would like someone who thoroughly abuses our software, and risks ridicule and even censure as a result. But that person is more likely to understand the boundaries under which our software operates.

And in general it helps to think outside the box. You want someone to do whatever any user might try, without fear that it isn’t covered in the spec or even conceived of as an error. Most testers are very much in the box. When looking at what can go wrong with a complex system, it’s important both to understand all of the individual components of that system and to consider what might happen outside the system but within its ecosystem.



