Is Black Box Testing Dead? November 11, 2012

Posted by Peter Varhol in Software development, Software tools.

Testing has traditionally focused on making sure that requirements have been implemented appropriately. In general, we refer to that as “black box testing”: we don’t give much thought to how something works. What we really care about is whether or not its behavior accurately reflects the stated requirements.

But as we become more agile, and testing becomes more integrated with the development process, black box testing may be going the way of the dodo bird.

I was at the TesTrek conference last week, where keynote speaker Bruce Pinn, the CIO at International Financial Data Services Ltd., noted that it behooved testers to take a programming course, so that they could better interact with developers.

I have my doubts as to whether a single programming course could educate testers on anything useful, but his larger point was that testers needed to become more technically involved in product development. He wanted testers more engaged in the development process, to be able to make recommendations as to the quality or testability of specific implementations.

In particular, he noted that his emerging metric of choice was not test coverage, but rather code coverage. Test coverage asks whether there are functional tests for all stated requirements, while code coverage measures the execution pathways through the code. The idea is that all relevant lines of code get executed at least once during testing. In practice, it’s not feasible to get 100 percent code coverage, because some code exists for very obscure error-handling situations. In other cases, as we used to say when I studied formal mathematical modeling using Petri Nets, some states simply aren’t reachable.
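The gap between the two metrics can be sketched in a few lines. This is a hypothetical illustration, not anything from the keynote: a `divide()` function with hand-rolled hit counters standing in for a real coverage tool such as coverage.py.

```python
# Hypothetical illustration: manual hit counters stand in for a real
# coverage tool (coverage.py, gcov, etc.).
hits = {"check": 0, "error_path": 0, "happy_path": 0}

def divide(a, b):
    hits["check"] += 1
    if b == 0:
        # Obscure error-handling branch: a suite that only exercises the
        # stated requirement ("divide two numbers") may never reach it.
        hits["error_path"] += 1
        raise ValueError("division by zero")
    hits["happy_path"] += 1
    return a / b

# One requirement-driven ("black box") test: full test coverage of the
# stated requirement, but incomplete code coverage.
assert divide(10, 2) == 5

missed = [name for name, count in hits.items() if count == 0]
print("branches never executed:", missed)  # -> ['error_path']
```

The single requirement-driven test passes, yet the error branch is never executed; a coverage report is what surfaces that blind spot.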

I asked the obvious question of whether this meant he was advocating gray box or white box testing, and he responded in the affirmative. Test coverage is a metric of black box testing, because it doesn’t provide insight into anything underneath the surface of the application. Code coverage is a measure of how effective testing is at reaching as much of the code as possible.

In fact, both types of coverage are needed. Testers will always have to know if the product’s functionality meets the business needs. But they also have to understand how effective that testing is in actually exercising the underlying code. Ideally, testers should be running code coverage at the same time they’re executing their tests, so that they understand how comprehensive their test cases really are.

Comments»

1. Rikard Edgren - November 19, 2012

What about coverage of reliability, usability, security, performance, scalability, compatibility?
What about coverage of the code that isn’t there?

2. Scott Barber - December 1, 2012

I find our usage of “Black-Box” in relationship to testing to be completely bizarre. All testing (at least all testing done above the individual electron level) is “Black-Box” to some degree.

Rather than thinking of “Black-Box” as “End-User, Completely Outside the System Perspective,” I like to think of it as “no knowledge of what is beneath the target of test” — Thus, if you are testing by calling a procedure with a variety of data & “checking” the output, but you have no access to, or knowledge of, the actual procedure code, you are testing a “Black-Box”.

In that regard, “Black-Box” testing will never *really* be dead, but that’s not even my main point.

My main point here is that the pattern that’s being described here is not really a shift to anything new, it’s a (positive) shift backwards to when teams were collaborative, unified, responsible and accountable.

In other fields, it’s called Research and Development and it’s a good thing. But what it does mean is that while a team surely needs to include significant testing skill every bit as much as it needs significant programming skill, it’s becoming less and less important what an individual person’s title is, what matters is that the team as a whole has the right mix of skills and applies them collaboratively.

If this means that today’s testers need to learn some programming or other technical skills, so be it. But it’s not about testers needing to become better programmers… or developers needing to become better testers — it’s about all of us developing as many skills as we can related to our project on top of our particular specialty (a.k.a. our “super power”) and using all of our skills to help our team succeed.

Simple, huh?

Peter Varhol - December 1, 2012

Thanks for your comment, Scott. I agree with your definition of black box testing. And more integrated teams are certain to bring more value than siloed ones. I was once a member of a product team where any communication between engineers, test, product management, etc. had to go up the chain of command to the VP level (at another location), where the VPs would discuss it. It was incredibly inefficient and unhelpful, and we bypassed it whenever we could.

Because we don’t know what skills we may need for a given project (programming, automation, communication, human interaction . . .), it behooves us to learn broad rather than deep. I don’t discount the value of expertise in a particular problem domain, but I’d rather know a little bit about a lot of things.

Gerie Owen - December 1, 2012

I agree…It’s probably more important for a tester to know a little bit about a lot of things. A little bit of knowledge enables one to ask relevant questions and asking the right questions leads to doing the right kind of testing.

3. Mark Waite - December 1, 2012

Isn’t the proposed measurement (“code coverage”, which I assume to mean either statement or branch coverage) at least somewhat independent of whether or not the tester has knowledge of the code being tested? For example, I can perform interactive, exploratory tests of instrumented code (cobertura for Java, gcov for gcc, coverage.py for python, etc.) and I or someone else could then use the resulting coverage report as an attempt to assess how well my exploratory testing covered the existing code.

I agree with the quoted presenter that code coverage reports are usually much more interesting than assessments of the mapping between requirements and documents which partially describe some of the things which a human might do to assess the quality of software. However, I think that is mostly due to the relatively low value I place on “test cases”, rather than on some extremely high value of code coverage reporting.

Code coverage reporting is a good technique to rapidly detect untested areas in implemented code. It does not detect issues in code that is not implemented, nor does it detect issues in many of the other attributes as mentioned by Rikard Edgren.

4. Steve Burnley (@steveburnley) - December 1, 2012

Did Bruce Pinn state whether he was a developer in a previous life? For me, that’s just another cop-out for developers who won’t or don’t test their code and do a runner once it’s handed to testers. I see more grey box testing creeping in, along with more iterative build-and-test, pretty much going back to the old RAD days. But that should never mean a developer does not test their own code, and have it tested by another developer, before it’s released to traditional independent test teams.

5. mag4automation - December 1, 2012

You know, CIOs, directors, etc. actually want the tester to be everything. I met one CIO who wanted a tester to have the business knowledge of the financial markets and at the same time possess the skill of a developer to understand the code, and, oh by the way, make time to do test planning, test development, and test execution to satisfy both the internal QA auditors and the external governing bodies of the SEC. I told him to give me a minute to go and upload all the information into my brain, then I will be ready :).

As a customer, what do I care about code coverage if the intended functionality isn’t useful? I agree that a collaborative approach is desirable. You test to make sure the requirement/user story is satisfied, but at the same time you can see how much code you are covering. However, what happened to white box testing? My question to that CIO would be: why are you not requiring the developers to do unit tests of their code, designed for code coverage? Surely testers can review the unit tests, which is what some agile teams are doing. But it is far more effective for the developer to write good and effective unit tests that should pass before moving on to black box testing techniques.
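A developer-written unit test aimed squarely at that error path might look like the following sketch (the `divide()` function is hypothetical; only the stdlib `unittest` module is used):

```python
import unittest

def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_happy_path(self):
        # The requirement-driven case a black box tester would also cover.
        self.assertEqual(divide(10, 2), 5)

    def test_error_path(self):
        # White box case: targets the branch that requirement-level
        # functional tests tend to miss.
        with self.assertRaises(ValueError):
            divide(1, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DivideTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())  # -> True
```

Tests like these are cheap for the developer to write while the code is fresh, and they give the coverage numbers meaning before the build ever reaches the test team.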

I really enjoyed reading your article, and I like some of the points you raised, but it does get under my skin how people want testers to be everything without realizing the complexities of the test effort itself.

6. Michael Bolton - December 1, 2012

Perhaps Mr. Pinn should take a good testing course so that he can better interact with serious testers. There are certain things that he appears to be missing that a brief conversation with a skilled tester would reveal. One is that test coverage is about far more than “functional tests for all stated requirements”; what about the unstated, implicit, tacit requirements? Another is that code coverage tells us nothing about the value of the code that has been developed, nor does it tell us about threats to the value of that code and to the service that it helps to deliver.

As a parallel, I could take a copy of the text of Mr. Pinn’s presentation and run it through a spelling and grammar checker, whereupon one could accurately claim that I had achieved 100% code coverage. Yet, if that were all I were to do, the presentation could still have errors, inconsistencies, omissions, mischaracterizations, misunderstandings, or lies.

I could go on, but to save everyone time: 1) Read Rikard’s reply above. 2) Read Joel Montvelisky’s recent interview with Jerry Weinberg.

—Michael B.

Peter Varhol - December 1, 2012

Ah, I am probably oversimplifying the conversation, as I often do. But your points are valid, and were not addressed in his keynote.

7. jose - January 7, 2013

White box testing is done by developers, because they wrote the code and because they focus on programming. Testers do black box testing because they are experts in the customer’s point of view. It’s not so difficult to understand. The tendency to avoid black box testing is a typical consequence of promoting programmers into project managers. They end up trying to avoid testing. “A good program doesn’t need a (black box) tester”: how harmful this myth has been to the world. And how difficult to surpass.

