
On Certifications, Knowledge, and Competence April 29, 2015

Posted by Peter Varhol in Software development.

There is an ongoing, and recently growing, controversy over how testers and other software professionals demonstrate their competence. In particular, the controversy centers on the certifications offered by the world’s largest provider of tester certifications, the ISTQB, and its US arm, the ASTQB.

The primary criticisms of the ISTQB are that using a multiple-choice test for certification trivializes testing knowledge and judgment, and that the tutorial offerings of its supporters amount to an unnecessary and unwanted commercial advantage in training for this specific certification. As a result, there are now online petitions demanding that the ISTQB release its own data assessing the validity of the certification.

While both of those criticisms have some truth, Cem Kaner notes in a comprehensive post that the ISTQB seems intent on measuring and improving its certifications, and he supports its right to examine its data and institute those improvements in private. As long as it is not making demonstrably false statements regarding the knowledge gained as a result of certification, the organization has the legal and ethical right to improve its exams. Kaner does not claim to have reviewed all of the organization’s marketing materials, but does assert that he has seen no apparently false claims.

Kaner, one of the true deans of software testing, is right. I have referred to him as our “adult supervision” on Twitter. But the argument goes beyond his level-headed analysis of what the ISTQB offers and does not offer to testers.

We are fortunate to work in a field where there are a lot of ways of adding value. Some write code to build software, and write unit tests to verify how they think discrete parts of the code should work. Others work closely with developers to understand the underlying structure and develop tests that reflect the requirements of the application. Still more are domain experts, fighting for the user community and possessing a keen understanding of what is needed beyond the stated requirements to deliver a quality product.

Measuring all of those skill sets in a single exam, whether multiple choice, single correct answer, or even in essay format, seems incongruous and indeed impossible. Yet a broad range of people and skills can and do help in determining whether an application meets the needs of the business and has the necessary quality to deploy.

Part of the larger problem is that some employers expect ISTQB certification as a prerequisite for a testing job, which distorts its value in the marketplace. Another part is that ISTQB marketing, while not demonstrably false, might be interpreted as misleading.

Kaner says that while the ISTQB certification lacks many essentials, we have not yet been able to devise anything better. He’s not happy with ISTQB certification, but for technical rather than business reasons. And he’s smart enough to know that there isn’t anything better right now, although he is hoping for alternatives to develop over time. He would prefer to expend energy in improving possible certifications, rather than fighting over the relative value of this particular one. That’s a position that’s hard to argue with.

How Mature Are Your Testing Practices? April 22, 2015

Posted by Peter Varhol in Software development, Strategy.

I once worked for a commercial software development company whose executive management decided to pursue the Software Engineering Institute’s (SEI) CMMI (Capability Maturity Model Integration) certification for its software development practices. It hired and trained a number of project managers across multiple locations, documented everything it could, and slowed down its schedules so that teams could learn and practice the new documented processes, and collect the data needed to improve those processes.

There were good reasons for this software provider to try to improve its practices at the time. It had a quality problem, with thousands of known defects going unaddressed into production, and customers unhappy with the situation.

However, this new initiative didn’t turn out so well, as you might imagine. After spending millions of dollars over several years, the organization eventually achieved CMMI Level 2 (the activities were repeatable). It wasn’t clear that quality improved, although it likely would have become incrementally better over a longer period of time. But time moved on, and CMMI certification ceased to have the cachet that it once did. Today, in a stunning reversal of strategy, this provider now claims to be fully committed to Agile development practices.

This is a cautionary tale for any software project that looks to a specific process as a solution to its quality or delivery issues. A particular process or discipline won’t automatically improve your software. In the case of my previous employer, CMMI added burdensome overhead to a software supplier that was also being forced to respond more quickly to changing technologies.

There are a number of different maturity models that claim to enable organizations to develop and extend processes that can make a difference in software quality and delivery. The SEI’s CMMI is probably the best known and most widely utilized. There is also a testing maturity model, which applies principles similar to CMMI’s to the testing realm. And software tools vendor Coverity has recently released its Development Testing Maturity Model, which outlines a phased-in approach to development testing adoption and claims to better support a DevOps strategy.

All of these maturity models, in moderation, can be useful for software projects and organizations seeking to standardize and advance the maturity of their project processes. But they don’t automatically improve quality or on-schedule delivery of a product or application.

Instead, teams and organizations should build a process that best reflects their business needs and culture, and then continue to refine that process as needs change to ensure that it continues to improve their ability to deliver quality software. It’s not as important to develop a maturity model as it is to identify your process, customize your ALM tools for that process, and make sure your team is appropriately trained in using it.

The Challenges of Concurrency in Software March 12, 2015

Posted by Peter Varhol in Software development, Software tools.

I learned how to program on an Apple Macintosh, circa late 1980s, using Pascal and C. As I pursued graduate work in computer science, I worked with Lisp and Smalltalk, running on the Motorola 680X0 and eventually the Intel architecture.

These were all single-threaded programs, meaning that they executed sequentially, one step at a time. As a CS grad student, and later as a university professor, I learned and taught about multi-threading and concurrent code execution.

But it was almost entirely theoretical. Until the turn of the century, almost no code was executed in parallel, in part because none but the most sophisticated computers could execute in parallel. Even as Intel and other processor makers moved decisively into multi-core architectures, operating systems and programmers weren’t ready to take advantage of this hardware innovation.

But only by taking advantage of multi-core and hyper-threaded processors could developers improve the performance of increasingly complex applications. So, aided by modern programming languages such as Java and C#, developers have been cautiously working on applications that can take better advantage of these processors.

To do so, they’re dusting off their old textbooks and looking at concepts like “critical section”, “fork”, and “join”. They are deeply examining their code to determine which operations can occur simultaneously without producing errors.

To be fair, several tools came out in the mid-2000s claiming the ability to automatically parallelize existing code, mostly by analyzing the code and trying to parcel out threads based on an expectation that certain code segments can be parallelized. In practice, there was not a lot of code that could safely be parallelized in this way.

But most new applications are multithreaded, and the operating system can dispatch threads to different cores and CPUs for concurrent execution. Using today’s processors, this is the only way to get the best performance out of modern software.

The problem is that developers are still fumbling their way through the process of writing code that can execute in parallel. There are two types of problems. One is deadlock, where code can’t continue because another thread won’t give up a resource, such as a piece of data. This will stop execution altogether.
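Deadlock is easy to set up with two locks acquired in opposite orders, and the classic remedy is just as simple: always acquire locks in one global order. A sketch in Python (the names are illustrative):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Deadlock-prone pattern: thread 1 takes A then B, thread 2 takes B then A.
# If each grabs its first lock before the other releases, neither can proceed.

def transfer(first, second):
    # The standard remedy: always acquire locks in one global order
    # (here, by object id), so no thread ever holds B while waiting for A.
    ordered = sorted([first, second], key=id)
    for lock in ordered:
        lock.acquire()
    try:
        pass  # ... work with both resources ...
    finally:
        for lock in ordered:
            lock.release()

t1 = threading.Thread(target=transfer, args=(lock_a, lock_b))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a))
t1.start()
t2.start()
t1.join()  # completes; with unordered acquisition this could hang forever
t2.join()
```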

Another is the insidious race condition, where the result is dependent upon which thread completes first. This is insidious because an incorrect result can be random, and is often difficult to detect because it may not result in an obvious error.
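Here is an illustrative Python sketch of a race condition: two threads read a shared counter, yield, and write back, so updates can be silently lost. How many are lost depends entirely on scheduling, which is exactly why the bug is so hard to reproduce.

```python
import threading
import time

ITERS = 200
racy_total = 0

def racy_worker():
    global racy_total
    for _ in range(ITERS):
        tmp = racy_total      # read the shared value
        time.sleep(0)         # yield: another thread may read the same value
        racy_total = tmp + 1  # write back, possibly clobbering its update

threads = [threading.Thread(target=racy_worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Updates can be lost but never duplicated, so the total is at most 400 --
# and on most runs it is less. Which value you get varies from run to run.
print(racy_total)
```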

Fortunately, tools are emerging that help in the identification and analysis of concurrent software issues. One is Race Catcher, from Thinking Software. It can be used in two ways during the application lifecycle. During development and test, it can dynamically analyze Java code to look ahead for deadlocks and race conditions. You can’t predict the occurrence of a race condition, of course, but you can tell where the same data is being processed in different ways, at the same time.

In a headless version, it can run as an agent on production servers, doing the same thing. That’s a version of DevOps. We catch things in production before they become problems, and refer them back to development to be fixed.

In an era where software development is changing more quickly and dramatically than any time since the PC era, we need more tools like this.

A Road Less Taken January 26, 2015

Posted by Peter Varhol in Software development, Software platforms.

Among the best jobs I’ve had in my long and interesting career is the role of technical evangelist. I’ve had that formal title three times, and have had related roles on other occasions.

As you might expect, a technical evangelist evangelizes, or enthusiastically promotes, a particular technology, product, or solution to a wide range of people: C-level executives, industry thought leaders, managers, tech leads, or individual contributors.

This is done in a number of ways – through articles in industry publications (these days mostly websites), white papers, blog posts, webcasts, podcasts, and screencasts, as well as speaking at user groups and industry conferences.

What good is evangelism? It can play a number of different but interrelated roles. In the places where I was an evangelist, it was mostly about brand awareness, with some customer contact for demonstrations and consultation. In other roles, it can serve as a specialized technical resource to help customers solve difficult problems. In still others, it is primarily a technical sales function. The important thing is to understand what is expected of the role, and to deliver on those expectations. It sounds simple, but in reality it rarely is.

A little bit about me. I don’t consider myself to be especially outgoing, but I was a university professor for a number of years, and through interest and repetition became a very good public speaker. I learned the art of storytelling through trial and error, and found it to be an effective teaching technique.

I was fortunate enough to receive training in academic instruction through the Air Force, and found that I had a knack for relating to my audiences. I found that I loved the travel and interaction with experts, peers, and students of the field. I would like to think that I built brand awareness very well.

The downside of being an evangelist is that many technology companies are ambivalent about the role. I worked for a company that had twenty evangelists, then systematically dismantled the team in order to look attractive for an exit strategy. In doing so, they may have sacrificed long term growth for a short term balance sheet.

Yet it is an exciting and fulfilling role. I hope to be able to get back into it at some point.

The Evolution of Software Delivery January 25, 2015

Posted by Peter Varhol in Software development.

This could alternatively be titled “Is Desktop Software Dead?” I’ll start at the beginning. Circa 2000, we delivered software in 12-18 month increments. It was desktop software, intended to debug and test Windows (and Java) applications, and manufactured and delivered on CDs and DVDs (don’t laugh; back then, even Microsoft delivered its MSDN entirely on DVD).

Our customers thought we delivered new versions too quickly. It took too long to install software on individual machines, and they didn’t want to do it that often. For the most part, they wanted to upgrade when Windows or Visual Studio changed, not when we wanted to push out new software.

Circa 2010, I attended a talk given by Kent Beck, in which he described how software delivery was speeding up over time. He examined how software testing and delivery would change as we accelerated to quarterly, monthly, weekly, and even daily deliveries. And he delivered this entire talk without once using the word agile.

There is a key factor missing in this story: the type of software we produced for desktop computers ten years ago is no longer relevant. Sure, I still run MS Office, but that’s more of a reflex action. I could be using Google Docs, or Open Office, or something like that. The relevant software today is web, or mobile, or some hybrid of the two that may also put something on my laptop.

In other words, the nature of software has changed. When we had to physically install it on individual computers, delivery was an annoyance for users, to be avoided (tell me about it). They ultimately didn’t want our upgrades unless they absolutely needed them.

Today, for users, whether they are businesses or consumers, delivery has to be invisible. And, as Kent Beck described, done much more rapidly, perhaps almost instantaneously.

There are some tradeoffs to this model. There may be new features that could be difficult or unintuitive to use, and require instruction and training.

And it poses challenges for software teams. Agile and DevOps address some of these challenges, but delivery is an entirely different ballgame. Teams have to be able to quickly assess the quality of updates and roll back if need be. There has to be communication between IT operations, dev, and test in order to make this happen.

Of Apps and Men December 18, 2014

Posted by Peter Varhol in Software development, Software platforms, Software tools.

Fair warning – I am eventually going to say more about Uber. The apps business is an interesting one, and some historical context is necessary to understand just why. In the PC era, we typically paid hundreds of dollars for individual applications. As a result, we would buy only a few of them. And we would use those applications only when we were seated in front of our computers. The software business was the application, and selling it made the business.

In the smartphone/tablet era, however, apps are essentially free, or at worst cost only a few bucks. People are using more apps, and using them for longer periods of time than we ever did on the PC.

But that still doesn’t quite make the bottom line sing. I mention Uber above because of its recent valuation of $41 billion, at a time when the entire annual taxi revenue of the US is $11 billion. The standard line from the VCs is that Uber will transform all of surface transportation as more and more people use it, even in preference to their own cars.

I don’t buy that argument, but that is a tale for another day. But the message, I think, is fundamentally correct. The message is that you don’t build a business on an app. You will never make money, at least not sustainable money, from the app. Rather, the app is the connection to your business. You use the app simply as another connection to your products or services, or as a connection to an entirely new type of business.

But today, you are not going to use an app to build the kind of business that was the standard fare of the software industry only a few years ago.

The corollary, of course, is that almost every business will need its own app, sooner or later. That represents a boon for developers.

Testing in the M2M World October 29, 2014

Posted by Peter Varhol in Software development.

We typically don’t think about all of the computers surrounding us. Our automobiles have at least a dozen processors, controlling acceleration, braking, engine operation and sensors, and telematics. Hospitals use computers in a wide variety of instruments, from EKG machines to X-ray devices to patient monitoring systems. Modern aircraft are overwhelmingly “fly by wire”, in that computers translate pilot instructions into digital decisions concerning setting engine speed and performance, and managing control surfaces. Mobile phones and tablets have some of the characteristics of traditional computers, but have different user interface interactions and often-unreliable connectivity.

We have had mobile and embedded computing devices for several decades. Two things make today’s generation unique: they often have user interfaces that enable users and operators to observe data and input instructions, and they are almost always interconnected. Further, the user interfaces are getting more complicated as these devices become still more powerful and more interconnected. These devices are often built from non-standard hardware, using ASICs, FPGAs, or other custom designs.

Many traditional testers struggle with the different paradigms needed when testing applications that run on other than old-style computers. This is especially the case if the device is safety-critical; that is, if a failure can cause harm. We are trained to test to requirements, yet requirements are often incomplete, ambiguous, or make unstated assumptions concerning appropriate operation. Even if requirements are clear and complete, meeting those requirements doesn’t necessarily guarantee a safe and high-quality product.

Further, virtually all of these devices are used outside of the normal and sanitized office environment. Whether outdoors, on roads, in hospital emergency rooms, or in the air at nine hundred kilometers an hour, embedded devices and software have to work in less-than-optimal conditions. We simply can’t test in all possible environments with all possible users, but with some effort we can intimately understand where our application might succeed or fail.


Start Small

In embedded and mobile projects, it is especially important to verify each software component as it is built, using a unit testing approach that ensures that the output corresponds to the input. However, unit testing by itself is insufficient. It typically does not look at the many ways that software running on a device can be used and abused. Testers are in a better position than developers to understand the risks inherent in the software, and to devise small testing tactics that exercise those risks.

That is why testers have to be intimately involved in early component testing. Developers attempt to confirm proper operation; testers are better positioned to understand incorrect or malformed inputs, as well as user errors, and how to interpret incorrect outputs. Building unit tests that reflect the inputs that will actually occur can make a significant difference in the results. By catching poor error handling at this stage of the process, testers make further testing, including GUI testing, easier and more reliable.
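As an illustration, here is a hypothetical Python unit, a parser for a sensor reading, with the kinds of malformed-input tests a tester would add alongside the developer’s happy-path check. The function and its inputs are invented for the example:

```python
def parse_temperature(raw):
    """Parse a reading like '23.5C' or '74.3F' into degrees Celsius."""
    if not raw or raw[-1] not in ("C", "F"):
        raise ValueError(f"unrecognized reading: {raw!r}")
    value = float(raw[:-1])  # raises ValueError on garbage like 'xx.yC'
    return value if raw[-1] == "C" else (value - 32) * 5.0 / 9.0

# Happy path -- what developer unit tests usually check:
assert parse_temperature("23.5C") == 23.5

# Malformed input -- what a tester adds: empty strings, missing digits,
# unknown units, and even the wrong type must all engage error handling.
for bad in ("", "C", "23.5", "23.5K", None):
    try:
        parse_temperature(bad)
    except (ValueError, TypeError):
        pass  # error handling engaged, as it should
    else:
        raise AssertionError(f"accepted bad input: {bad!r}")
```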


Build Up Gradually

Unit testing is a very useful first step in ensuring quality. However, when units are combined into higher level components, which comprise the application, those units may interact in unexpected ways. Testing is necessary at every level of integration in order to make sure that the application continues to work as expected.

Once a unit has passed testing, most developers are unconcerned about integration consequences. That’s where testers must take over. At each integration point, unit tests should be combined if necessary and executed again. Further, these tests should not just verify outputs for a given input, but also look at behaviors that may result in incorrect or inappropriate operation as the application continues to come together.
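A contrived Python sketch of the point: two units that each pass their own tests, but fail when integrated because one reports meters and the other assumes feet. The names and numbers are hypothetical:

```python
def read_altitude_sensor():
    """Unit A: returns the current altitude in meters."""
    return 1000.0

def altitude_alarm(altitude_feet, floor_feet=5000.0):
    """Unit B: returns True (sound the alarm) below a floor given in feet."""
    return altitude_feet < floor_feet

# Each unit test passes in isolation:
assert read_altitude_sensor() == 1000.0
assert altitude_alarm(4000.0) is True

# An integration test exposes the mismatch: 1000 m is about 3281 ft,
# which should trigger the alarm -- but only if the pipeline converts
# units before handing one unit's output to the other.
METERS_TO_FEET = 3.28084
reading = read_altitude_sensor()
assert altitude_alarm(reading * METERS_TO_FEET) is True  # correct integration
```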


Attack the GUI

Testers have learned to test by verifying through GUI testing that requirements are met. However, testing, and most especially mobile and embedded testing, has to go well beyond that in order to better understand the application and its potential weaknesses.

Jon Hagar (Software Test Attacks to Break Mobile and Embedded Devices, 2013) and James Whittaker (How to Break Software, 2003) talk about attacking embedded software, and they are on the right track. You look at the risks that such software poses, and attack it based on those risks. If you are able to break it, you have exposed those risks as potential realities that need to be addressed before the software is released.

This means that test cases have to expand beyond requirements. That makes both testing and traceability more difficult than we are used to, but it is a necessary part of creating an embedded application.


Always, Always Collect and Analyze Code Coverage Data

Most GUI testing on traditional desktop applications exercises about a third of the actual application code. In most mobile and embedded applications, it will likely be less, because more functionality is not accessible directly through the GUI. That is not sufficient: without testing the remaining code, it’s not clear what that code is doing, or why it is there.

Much of that code is error-handling code, and embedded testers have to be able to generate the errors that this code is meant to handle. That means looking at malformed input, user error, unexpected events, and poor environmental conditions. These may not be part of the stated requirements, but are necessary in order to ensure the quality and proper operation of the software.

While one hundred percent code coverage is generally not achievable, project teams should decide on a code coverage goal and work to achieve that goal. Most such goals for complex systems are defined at around 70 or 80 percent of the code, depending on which type of code coverage is measured.

And code coverage makes a big difference early in the process. Whether in unit tests, or as software units are being integrated into larger components, code coverage gives testers a reality check on how much of the application is being tested. If it’s not enough, then attack test strategies must be formulated and executed to produce a better result.

GUI testing complements that by testing software features. Together, the two approaches provide a means of assessing both adherence to requirements and a deeper understanding of underlying testing effectiveness and software quality.

This approach will not result in perfect software and applications for mobile and embedded devices. However, it will provide testers with information on meeting requirements, extent of testing, and potential weaknesses.

The important thing is that testers recognize that mobile and embedded testing requires more than testing to requirements, and more than GUI testing. It means understanding the risks that the devices and their software pose to safety, mission, and overall successful operation of the application and device in general, and devising testing strategies that successfully address those risks.

What is the Deal with Self-Driving Cars? June 23, 2014

Posted by Peter Varhol in Software development, Technology and Culture.

Google, the media, and other interested parties are portraying self-driving cars as a panacea for drivers, traffic congestion, accidents, and other undesirable driving outcomes. I simply don’t get it, on multiple levels. I like the concept, but can’t connect it to any reasonable reality anytime in the future.

I’ve suspected that there would be issues with self-driving cars since they became a popular meme over the past year. At one level, there is the question of how you would test the technology. In normal system testing, you attempt to run tests that simulate actual use. But there are far too many possible scenarios for self-driving cars to reasonably test. Under other circumstances, it may be possible to test the most likely cases, but on a safety-critical system like a car, that’s simply not possible.

I’m reminded of my skepticism by this article on the utility of aircraft autopilot systems and their role in the operation and in some cases mis-operation of planes. One conclusion seems to be that autopilots actually make flying more complex, rather than simpler. That counterintuitive conclusion is based on the idea that the assumptions made by the autopilot are unexpected by the operators.

As a software guy, I’m okay with the idea that assumptions made by software can take people by surprise on occasion. It’s a difficult problem even for safety-critical systems, where people can die if the software makes an incorrect assumption. You can argue, probably successfully, that pilots shouldn’t be surprised by whatever a plane under their command does.

Drivers, not so much. As we look at aircraft autopilots, it is reasonable to draw a parallel between commercial aircraft and automobiles. Now, granted, aircraft operate in three dimensions. But automobiles have a greater range of operating options, in terms of speed, traffic, road types, road conditions, and so on. Commercial aircraft are already under positive control from the ground.

It’s not clear who will control driverless automobiles. It’s certainly unlikely that drivers are as attentive as pilots, yet will become at least as confused at times as they change where they want to go, and how they want to get there. And they won’t be observing the driving process any near as attentively as (I hope) pilots do.

Sigh. I’m not a Luddite. I’m excited about technology in general, and am an early adopter of many technologies (and, to be honest, a not-so-early adopter of others). But I simply don’t see self-driving automobiles taking off (pun intended) anytime in my lifetime.

