
How Do We Fix Testing? April 17, 2014

Posted by Peter Varhol in Software development, Software tools.

Here is a presentation abstract I hope to get accepted at a conference in the near future:

Perhaps in no other professional field is the dichotomy between theory and practice more stark than in software testing. Researchers and thought leaders claim that testing requires a high level of cognitive and interpersonal skills, in order to make judgments about the ability of software to fulfill its operational goals. In their minds, testing is about assessing and communicating the risks involved in deploying software in a specific state.

However, in many organizations, testing remains a necessary evil, and a cost to drive down as much as possible. Testing is merely a measure of conformance to requirements, without regard to the quality of requirements or how conformance is measured. This is certainly an important measure, but tells an incomplete story about the value of software in support of our business goals.

We as testers often help to perpetuate the status quo. Although in many cases we realize we can add far more value than we do, we continue to perform testing in a manner that reduces our value in the software development process.

This presentation looks at the state of the art as well as the state of common practice, and attempts to provide a rationale and roadmap whereby the practice of testing can be made more exciting and stimulating for the testing professional, as well as more valuable to the product and the organization.

Is Black Box Testing Dead? November 11, 2012

Posted by Peter Varhol in Software development, Software tools.

Testing has traditionally focused on making sure that requirements have been implemented appropriately. In general, we refer to that as “black box testing”; we really don’t give a lot of thought to how something works. What we really care about is whether or not its behavior accurately reflects the stated requirements.

But as we become more agile, and testing becomes more integrated with the development process, black box testing may be going the way of the dodo bird.

I was at the TesTrek conference last week, where keynote speaker Bruce Pinn, the CIO at International Financial Data Services Ltd., noted that it behooved testers to take a programming course, so that they could better interact with developers.

I have my doubts as to whether a single programming course could educate testers on anything useful, but his larger point was that testers needed to become more technically involved in product development. He wanted testers more engaged in the development process, to be able to make recommendations as to the quality or testability of specific implementations.

In particular, he noted that his emerging metric of choice was not test coverage, but rather code coverage. Test coverage asks whether there are functional tests for all stated requirements, while code coverage measures the execution pathways through the code. The idea is that all relevant lines of code get executed at least once during testing. In practice, it’s not feasible to get 100 percent code coverage, because some code exists for very obscure error-handling situations. In other cases, as we used to say when I studied formal mathematical modeling using Petri Nets, some states simply aren’t reachable.
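
As a rough sketch of why that last point matters (the function and file names here are hypothetical, and gcov is just one coverage tool among many), consider a small routine whose error-handling branch fires only when an allocation fails. A functional test can satisfy the stated requirement completely while that branch never executes; a coverage report makes the gap visible.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns a newly allocated copy of src truncated to max_len characters. */
char *bounded_copy(const char *src, size_t max_len)
{
    size_t len = strlen(src);
    if (len > max_len)
        len = max_len;

    char *dst = malloc(len + 1);
    if (dst == NULL)        /* obscure error path: requires malloc to fail */
        return NULL;        /* rarely, if ever, reached by ordinary tests  */

    memcpy(dst, src, len);
    dst[len] = '\0';
    return dst;
}

int main(void)
{
    /* A requirements-level ("black box") test: known input, expected output. */
    char *copy = bounded_copy("hello, world", 5);
    printf("%s\n", copy);   /* prints "hello" */
    free(copy);
    return 0;
}

/* Compile with "gcc --coverage example.c", run the test, then run
   "gcov example.c": the NULL-return line shows up as never executed. */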

I asked the obvious question of whether this meant he was advocating gray box or white box testing, and he responded in the affirmative. Test coverage is a metric of black box testing, because it doesn’t provide insight into anything beneath the surface of the application. Code coverage is a measure of how effective testing is at reaching as much of the code as possible.

In fact, both types of coverage are needed. Testers will always have to know if the product’s functionality meets the business needs. But they also have to understand how effective that testing is in actually exercising the underlying code. Ideally, testers should be running code coverage at the same time they’re executing their tests, so that they understand how comprehensive their test cases really are.

Phil Zimmermann Is At It Again October 19, 2012

Posted by Peter Varhol in Software tools, Technology and Culture.

I am old enough to remember when Phil Zimmermann released Pretty Good Privacy, or PGP, as open source, circa 1991. I followed his strange but true legal travails with the US government for several years, in which he was under investigation for illegal munitions exports (PGP encryption), yet never arrested. It was only after three stressful years that the US government concluded, well, something, and told him he was free to go about his business.

Now there is a mobile app called Silent Circle that employs the same encryption on a phone, for voice, email, and text. PGP remains, well, pretty good, with an awful lot of computing horsepower and time required to break it.
PGP employs public/private key encryption: a message is encrypted with the recipient’s public key, which anyone may hold, and only the holder of the matching private key can decrypt it. The symmetric session keys PGP wraps this way are typically 128 bits or more; the public key pairs themselves run to 1024 bits and beyond.
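
As a minimal sketch of the public-key half of that idea (using OpenSSL’s low-level RSA calls, a 1.1-era API, with error checking omitted; PGP itself layers key management, signatures, and symmetric session keys on top of this), anyone holding the public key can encrypt, but only the private-key holder can decrypt:

#include <stdio.h>
#include <openssl/bn.h>
#include <openssl/rsa.h>

int main(void)
{
    /* Generate a 2048-bit RSA key pair; the private half never leaves me. */
    BIGNUM *e = BN_new();
    BN_set_word(e, RSA_F4);                 /* standard public exponent 65537 */
    RSA *key = RSA_new();
    RSA_generate_key_ex(key, 2048, e, NULL);

    const unsigned char msg[] = "meet me at noon";
    unsigned char enc[256];                 /* 256 bytes = 2048-bit modulus */
    unsigned char dec[256];

    /* Anyone with the public key can produce the ciphertext...             */
    int enc_len = RSA_public_encrypt(sizeof msg, msg, enc, key,
                                     RSA_PKCS1_OAEP_PADDING);

    /* ...but only the holder of the private key can recover the plaintext. */
    int dec_len = RSA_private_decrypt(enc_len, enc, dec, key,
                                      RSA_PKCS1_OAEP_PADDING);

    printf("%.*s\n", dec_len, dec);
    RSA_free(key);
    BN_free(e);
    return 0;
}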

I’m also old enough to remember the Clipper chip, a microprocessor that had embedded strong encryption for communications purposes. The catch was that the chip was designed by the NSA, and while the encryption was valid, it also included a “back door” that enabled the US Government to tap into it (purportedly with a court order, though I have no doubt that it could be done otherwise, for purposes other than criminal prosecution). The effort failed miserably, as computer and phone makers declined to use it, and other parties railed against it.

The Clipper chip was obviously ill-conceived (though, oddly enough, apparently not to the government). But I am in favor of law enforcement, though without the spectre of big brother government. These trends will always conflict, and it is right that they do so. Still, it is also right that freedom of, well, speech win out in this argument, even in the face of criminal activity. Let us find a different way to catch our criminals.

Is Impatience a Virtue? September 7, 2012

Posted by Peter Varhol in Software tools, Technology and Culture.

I grew up in a household of very limited means.  If I wanted something beyond the basics, I had to save my paper route (do those still exist?) money, do odd jobs for pocket change, and in general deprive myself for weeks or months until I had the funds necessary.  I waited, somewhat patiently, until a desired goal was within my grasp.  Such an upbringing probably contributed to my not getting caught up in the credit economy, and coming out of the economic shocks of the last decade or so relatively unscathed.

But there are technology trends favoring impatience.  Thanks to the speed and ubiquity of Google, we have access to information that in the past may have been completely unavailable, or at least would have required hours or days of research in the local library.

Now Evan Selinger makes the claim that tools such as the iPhone’s Siri are turning impatience into a virtue.  When we want to know an answer, we ask Siri.  We may not even trust our own senses, instead preferring to ask the one who has all of the information (“Siri, is it raining outside?”).

He quotes MIT Research Fellow Michael Schrage (who was the only columnist worth reading in Computerworld circa early 1990s) as saying “How would you be different if you regularly had seven or eight conversations a day with your smartphone?”

I’m not calling any of this a bad thing.  We have the tools to be more knowledgeable and informed individuals, which may make us better consumers, better citizens, and more tolerant of other points of view.  Technology that aims to please ultimately makes information more accessible to more people.  These are generally good outcomes.

But it is different from the way we functioned in the past, and may have implications for our daily lives, from how we process information to how we make decisions.  I’ve always believed that no decision should be made before it has to be made, so that we can watch how information plays out over time.  Having information so seamlessly available may mean that we’ll think we know more than we do, and make decisions more quickly.  That may not be the best outcome.

How Do You Marry Java and .NET? April 13, 2012

Posted by Peter Varhol in Software development, Software platforms, Software tools.

Years ago, I did some work for Mainsoft, which had a technically cool way of running .NET code on Java application servers.  This involved dynamically cross-compiling .NET into Java.  The idea was that you could create your application (typically a web application) using Visual Studio, then with a minimum of effort, configure it to run on a JVM and application server.

I would provide a link for Mainsoft, except that the company has changed names and markets (www.harmon.ie).  Apparently it wasn’t a good enough idea for a company to make money from.  Part of the problem was that there were two paths to getting .NET to run on Java – you either did byte code translation, or you implemented some or all of the .NET Common Language Runtime in Java.  Mainsoft did mostly the first, but also found that it was easier to use Mono classes for a lot of the CLR.

But now there seems to be a way to do it in the opposite direction; that is, running a Java application in .NET.

Mainsoft’s strategy was easily comprehensible but rather niche – developers were experienced in .NET and wanted to use the best .NET development tools possible, but the enterprise wanted flexibility in deployment.

IKVM, an open source project, uses some of the Mono classes to enable Java code to run on .NET.  It also includes a .NET implementation of some Java classes.  I’m not sure why Mono is needed in this case (and in fact, it’s likely that the Mono project has largely run its course).  IKVM lists three components to the project:

• A Java Virtual Machine implemented in .NET

• A .NET implementation of the Java class libraries

• Tools that enable Java and .NET interoperability

Microsoft was promoting IKVM during its language conference, obviously as an existence proof that some developers are interested in porting from Java to .NET in this manner.

Still, as a practical matter, it doesn’t seem worthwhile doing.  There are plenty of JVMs for Windows (I realize that .NET is not the same thing as Windows, but as a practical matter it is pretty well tied to it).  The distinction of running on .NET is one that most won’t bother to make.  Perhaps someone out there can offer an explanation.

In Memory of Dennis Ritchie October 14, 2011

Posted by Peter Varhol in Software development, Software platforms, Software tools.

I woke up this morning to the news of the passing of Dennis Ritchie, computer science researcher at AT&T Bell Labs and inventor of the C programming language (he actually passed away last weekend).  The C Programming Language, a language manual by Brian Kernighan and Dennis Ritchie, was the bible for several generations of professional programmers.

The early 1970s saw the rise of timesharing, where multiple users shared the same computer.  The research state of the art at that time was MULTICS, an MIT-led research project in which Bell Labs had participated.  Bell Labs took those concepts and Ken Thompson developed the Unix operating system (with Ritchie’s assistance).  Ritchie then designed the C language as a high-level language married closely to Unix and its underlying API and commands.

However, C was also created as a platform-independent language, a nod to the fact that Unix would eventually be ported to dozens (probably more like hundreds) of different processors and hardware architectures.  For example, it lacks a built-in string type, because strings are represented differently on different systems.  So C programmers got used to using an array of char to represent a string (third parties eventually came out with custom string libraries for different computers).
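
That convention is easy to show in a minimal sketch: a “string” is nothing more than an array of char ending in a zero byte, and the programmer is responsible for sizing and terminating it.

#include <stdio.h>

int main(void)
{
    /* A "string" in C is just an array of char terminated by a zero byte. */
    char greeting[6] = { 'h', 'e', 'l', 'l', 'o', '\0' };

    greeting[0] = 'H';           /* the array is directly addressable */
    printf("%s\n", greeting);    /* prints "Hello" */
    return 0;
}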

C has elements of both a high-level language and a systems programming language.  It has high-level constructs, but can also directly access memory locations through pointers.  It does no automatic allocation or deallocation of memory; malloc and free are among the first constructs learned by aspiring C programmers.  Further, C does very little type checking; programmers can copy data from one type to another, irrespective of the type size, at their own risk.
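
A minimal sketch of both points: the programmer asks for raw bytes with malloc, hands them back with free, and can reinterpret the same memory as another type with a cast the compiler will not question.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Manual memory management: the programmer allocates and frees. */
    int *values = malloc(4 * sizeof(int));
    if (values == NULL)
        return 1;
    for (int i = 0; i < 4; i++)
        values[i] = i * 10;

    /* Minimal type checking: a cast reinterprets the same bytes as
       unsigned char, irrespective of size or meaning, at our own risk. */
    unsigned char *raw = (unsigned char *)values;
    printf("first byte of values[1]: %u\n", (unsigned)raw[sizeof(int)]);

    free(values);                /* forget this, and the memory leaks */
    return 0;
}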

Functions can also be called indirectly: the address of a function is stored in a function pointer, and the function is invoked by dereferencing that pointer.  This makes possible some extremely convoluted programming constructs.
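
For instance (the names here are purely illustrative), the same pointer can be aimed at different functions and called through, which is exactly the kind of indirection that can make C code hard to follow:

#include <stdio.h>

int add(int a, int b) { return a + b; }
int mul(int a, int b) { return a * b; }

int main(void)
{
    /* op holds the address of a function taking (int, int) and returning int. */
    int (*op)(int, int) = add;
    printf("%d\n", op(2, 3));    /* calls add through the pointer: prints 5 */

    op = mul;                    /* re-aim the same pointer...              */
    printf("%d\n", op(2, 3));    /* ...and now it calls mul: prints 6       */
    return 0;
}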

These characteristics and others made C extremely flexible, but also extremely prone to programming errors.  When I was the BoundsChecker product manager at Compuware NuMega Labs, we determined that a large majority of C (and its object-oriented extension C++) programming errors were memory errors.  It is simply too complex for most C/C++ developers to fully understand and control how they are using memory.

C programs eventually became so unmanageable that many development teams now use managed languages such as Java or C#.  Both languages (as well as niche languages like Lisp and Smalltalk) automatically allocate memory when you define and use a variable, and reclaim that memory, through a technique called garbage collection, once there are no longer any references to it.  But many commercial applications still use C/C++, either for legacy or performance reasons.

I was a C programmer for a brief period of my career, and occasionally taught C++ as an academic.  During my time as an academic, I wrote a discrete event simulation application in Pascal (invented by Swiss computer scientist Niklaus Wirth), a similar language that provided much stricter type checking.  Despite the popularity (and to a large extent necessity) of managed languages today, I still firmly believe that you can’t truly understand how to program a computer unless you have a clear picture of how your code is using memory.  And we owe that view of memory to Dennis Ritchie and C.

Wintel for the Smartphone Crowd November 14, 2010

Posted by Peter Varhol in Software tools.

Wintel is the mashup term for the duopoly of Windows and Intel, dominant for so many years of desktop computing.  While it happened largely by accident, Intel processors and Microsoft Windows operating systems employed a loose partnership that powered a very high percentage of computers using the PC-standard architecture.

This article postulates a similar duopoly of Qualcomm and Android.  Qualcomm is a principal maker of CDMA chipsets for phones (the alternative is GSM, used by much of the rest of the world), while Google’s Android is an open source operating system for phones and perhaps tablets and other small form factor devices.

At first the comparison seems lame.  Phones use a variety of different processors, none of which are made by Qualcomm.  The communication chipsets may or may not have the same impact as the processor.  They don’t drive application compatibility, so I would argue that they aren’t as important as the CPU.

Further, Europe and Asia are not going to convert to CDMA, so this partnership will not become a global standard.  It really only applies to the US, and more specifically to the Verizon network (my own carrier, US Cellular, is also CDMA, and makes use of much of the Verizon network, so my phone is also CDMA).

But to someone who was there in the early days of the Wintel story, the parallels seem more apparent.  Up until the mid-1990s, it was by no means assured that Wintel would be as dominant as it was.  Unix (not Linux until later) was the only high-end desktop operating system, and Alpha, MIPS, and POWER processors were for those who needed the horsepower that Intel couldn’t provide.  It’s worth noting that when Microsoft introduced Windows NT in 1993, it shipped versions for Alpha and MIPS as well as Intel x86 (NT had originally been developed on the Intel i860).

Because of the different communication standards used, and the massive amounts of infrastructure needed to support those standards, it seems unlikely that Qualcomm and Android can achieve anywhere near the dominance of Wintel.  But the fact that the question is being asked says a great deal about how the phone continues to become the next dominant platform.

Are Domain-Specific Languages the Next Software Engineering Breakthrough? November 11, 2010

Posted by Peter Varhol in Software development, Software platforms, Software tools.

Almost since people have been writing software, we’ve looked for better, more efficient, and more intuitive ways of doing it.  First-generation languages (machine code) gave way to second-generation languages (assembly), which were largely abandoned in favor of the third-generation languages that are in mainstream use today.  Starting with the likes of Fortran and Cobol in the late 1950s, we now use C# and Java, plus a number of other less mainstream but still-important third-generation languages.

That’s not to say that we haven’t tried growing beyond the third generation.  For a while in the 1990s, fourth-generation languages like PowerBuilder and Progress attempted to make data access more intuitive.  Earlier, in the 1980s, Japanese industry and academia embarked upon a far-reaching but poorly-understood Fifth-Generation Computing project that didn’t have any wide-ranging impact on software development.

And C# and Java represent a significant advance over the likes of C++ and Ada in that they are managed languages.  Rather than requiring the programmer to manually write code to allocate and deallocate computer memory, the underlying language platform does it automatically.

But at a conceptual level there’s little that is fundamentally different between Fortran and C#.  To be clear, there is much that is different, but the language instructions themselves haven’t changed a whole lot.  While we have libraries and frameworks that enable us to abstract a bit more today (and doing so may create more problems), we are writing code at the same conceptual level that we did fifty years ago.

The buzz in the industry over the last several years has surrounded so-called domain-specific languages, or DSLs.  I’m reminded of this by this article on former Microsoft executive Charles Simonyi, who has since founded a company to create a foundation for implementing DSLs.  I’ve also participated in several conferences over the last couple of years where speakers have promoted DSL concepts.

DSLs are an attractive concept for a number of reasons.  Because they focus on a specific problem domain, they tend to be fairly simple.  Domain experts, rather than programmers, may be willing to adopt them because they abstract a programming problem into terms that they understand, and can build solutions for.

I really like the idea, but I’m doubtful in practice.  Fifteen-plus years ago as an academic, I actually wrote a DSL, a visual language for discrete event simulation.  I loved it, but even those interested in discrete event modeling were flummoxed by some of the things I did.  And these were people who were used to thinking in those terms.  Languages meant for specific types of problems have to be designed very carefully (I probably didn’t do that) just to get on the radar.

And I think we’ve debunked the concept of the citizen programmer.  I’ve seen a lot of products intended to bring programming to the user come and go over the last twenty years.  Microsoft’s original Visual Basic was intended to do just that, but it was successful only because it was useful to professional programmers.  While a few domain experts adopted 4GLs and became programmers (I saw that while working at Progress Software), that’s very much the exception.

We prize programming languages in large part for their versatility, not their simplicity or their utility for a specific purpose.  DSLs aren’t flexible.  Even in a problem domain, you want the ability to draw in instructions and tools from other domains, and from a large toolkit in general.  Unless we rethink the fundamental concepts behind DSLs, this will be the next breakthrough that never was.
