No, AI Cannot Solve the World’s Biggest Problems February 23, 2016

Posted by Peter Varhol in Software development, Software platforms.

‘AI can solve world’s biggest problems’ – Google Brain engineer.

The headline screamed for a response. The type of work being done with Google Brain is impressive.  It relies on neural networks.  Neural networks are cool things.  I was recently reminded by a new LinkedIn connection of the work and writing that I did on neural networks in the 1990s.

Neural networks are exceptionally good algorithms. They use multiple layers of nonlinear equations, evaluated in parallel and in series, to approximate a response to a set of data inputs, then adjust their weights so that each successive pass yields a slightly better approximation (or it may not; that’s just the way nonlinear algorithms sometimes work).
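As a toy sketch of what I mean (Python, and a single artificial neuron rather than a full network; the data and parameters here are invented purely for illustration):

```python
import math
import random

def train_neuron(samples, epochs=2000, lr=0.5, seed=42):
    # A single sigmoid "neuron": y = sigmoid(w*x + b). Each pass through
    # the data nudges w and b against the error, so the next pass usually
    # (not always) produces a slightly better approximation.
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, y in samples:
            s = 1.0 / (1.0 + math.exp(-(w * x + b)))  # forward pass
            grad = (s - y) * s * (1.0 - s)            # error signal
            w -= lr * grad * x                        # adjust the variables...
            b -= lr * grad                            # ...for the next pass
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# A toy, carefully bounded problem: output 1 when the input is positive.
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w, b = train_neuron(data)
```

Note what made that work: plenty of labeled data for a tightly bounded problem. That is exactly the situation the world’s biggest problems don’t give you.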

That may sound like just so much gobbledygook, but it’s not. In the 1990s, I used neural networks to design a set of algorithms to power an electronic wind sensor.  In my aborted doctoral research, I was using neural networks to help dynamically determine the most efficient routings through large scale networks (such as the nascent Internet at the time).

Let’s be clear. What I did had nothing to do with the world’s biggest problems.  The world’s biggest problems don’t already have mountains of data that point to the correct answer.  In fact, they rarely, if ever, have correct answers.  What they have is a combination of analysis, guesswork, and the many compromises made by dozens of competing interests.  And they will never result in even close to a best answer.  But sometimes they produce a workable one.

From my own experiences, I came to believe that having an instinctive feel for the nature and behavior of your data gave you a leg up in defining your neural network. And they tended to work best when you could carefully bound the problem domain.  That is hardly the stuff of something that can solve the world’s most difficult problems.

Neural networks can seemingly work magic, but only when you have plenty of data, and already know the right answers to different combinations of that data. And even then, you are unlikely to get the best possible solution.


What Are We Doing With AI and Machine Learning? February 12, 2016

Posted by Peter Varhol in Software development, Uncategorized.

When I was in graduate school, I studied artificial intelligence (AI), as a means for enabling computers to make decisions and to identify images using symbolic computers and functional languages. It turned out that there were a number of things wrong with this approach, especially twenty-five years ago.  Computers weren’t fast enough, and we were attacking the wrong problems.

But necessity is the mother of invention. Today, AI and machine learning are being used in what is being called predictive analytics.  In a nutshell, it’s not enough to react to an application failure.  Applications are complex to diagnose and repair, and any downtime on a critical application costs money and could harm people.  Simply, we are no longer in a position to allow applications to fail.

Today we have the data and analysis available to measure baseline characteristics of an application, and look for trends in a continual, real-time analysis of that data.  We want to be able to predict if an application is beginning to fail.  And we can use the data to diagnose just what is failing, so that the team can work on fixing it before something goes wrong.
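A hypothetical sketch of the idea (Python; the metric values and thresholds are made up for illustration). The point is to compare live samples against a healthy baseline and flag a sustained trend, not a one-off spike:

```python
import statistics

def detect_drift(baseline, live, sigmas=3.0, run_length=3):
    # Flag the point where a metric stays more than `sigmas` standard
    # deviations away from its healthy baseline for `run_length` samples
    # in a row: a trend that predicts trouble, not a single blip.
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    streak = 0
    for i, value in enumerate(live):
        if abs(value - mean) > sigmas * sd:
            streak += 1
            if streak >= run_length:
                return i - run_length + 1  # index where the bad run began
        else:
            streak = 0
    return None  # still within baseline behavior

baseline = [22, 25, 24, 23, 26, 24, 25, 23]  # healthy CPU utilization, %
healthy = [24, 26, 31, 23, 25, 24]           # one spike: not a trend
failing = [25, 24, 60, 72, 85, 91]           # sustained climb: flag it
```

Real predictive analytics products are far more sophisticated than this, of course, but the baseline-and-trend idea is the core of it.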

What kind of data am I talking about?  Have you ever looked at Perfmon on your computer?  In a console window, simply type Perfmon at the command prompt.  You will find a tool that lets you collect and plot an amazing number of different system and application characteristics.  Common ones are CPU utilization, network traffic, disk transfers, and page faults, but there are literally hundreds more.

This is a Big Data sort of thing; a server farm can generate terabytes of log and other health data every day.  It is also a DevOps initiative.  We need tools to be able to aggregate and analyze the data, and present it in a format understandable by humans (at the top level, usually a dashboard of some sort).

How does testing fit in?  Well, we’ve typically been very siloed – dev, test, ops, network, security, etc.  A key facet of DevOps is to get these silos working together as one team.  And that may mean that testing has responsibilities after deployment as well as before.  They may establish the health baseline during the testing process, and also be the ones to monitor that health during production.

Microsoft, Phone Home July 12, 2015

Posted by Peter Varhol in Software development, Software platforms.

I found this article on Computerworld about Microsoft’s retrenchment from Nokia’s phone business a worthwhile read, because of the company’s clear acknowledgment that it should never have spent billions of dollars to be in the handset business. Or, at best, perhaps it should have been, but it lacked the will to turn a declining acquired business into a successful one.

I was nonplussed at the vitriol spewed in the comments section, personally attacking Steven J. Vaughn-Nichols in more ways than I cared to see. I was expecting to find an intelligent debate on the merits of his article; instead, it was pretty much a trashing of his take on the topic.

To be fair, I think I may have hired SJVN for his first freelance assignment, circa 1990. And while the effort over the years was entirely his own, I am proud of the way that he has developed himself into a good and technical journalist in an era where neither quality is respected. Yes, he is a lifelong Linux guy, but that doesn’t make him a bad guy.

And I have to pretty much agree with him. That’s not an anti-Microsoft bias. That’s simply an acknowledgement that, despite spending tens of billions of dollars over a period of well over ten years, Microsoft still seems to lack the ability to commercialize its mobile software technology.

I confess that at one level I don’t understand its problem. Microsoft is technically very good, and in particular I love Cortana. Microsoft does really cool stuff, but outside of the enterprise realm, it seems to have lost the will to market effectively.

I will say that a part of the company’s problem is its obsession with the same look and feel across all devices. Windows on the desktop should never have translated to Windows on the phone. I had the opportunity to use a Microsoft Surface for several months, and while I admire the hardware design, the look and feel is far too crowded for the screen real estate. You may argue that it was intended for use with an external monitor, but it’s a mobile device, and the UI just doesn’t work for its screen form factor.

But there’s more than that, and I think it is a corporate culture thing, rather than a technology thing. Microsoft as a 100,000+ employee company simply has desktop too embedded into its DNA to pivot to anything else. Perhaps Nadella can change that, over time.

Some of the commenters lump Embedded and CE and Pocket PC and Surface in the same category in an attempt to say that Microsoft has been enormously successful in mobile devices, but that’s at best disingenuous, and at worst delusional.

I’m getting ready to buy about my fifteenth Windows PC (mostly laptops) in the last 20 years. I have mixed opinions of Microsoft’s technology during that time, but it is a necessity for many users (mostly including me). I fully accept Microsoft and its role in computing, but I’m neither a cheerleader nor a detractor.

So, while SJVN’s biases are evident, it is still a worthwhile article. And while the commenters’ biases are even more evident, they do not in any way add to the debate.

Should Coders Learn to Test? July 8, 2015

Posted by Peter Varhol in Software development.

This topic occurred to me in response to another posting that asked if testers should learn to code, and is a follow-on to my previous post on the unique skill set that software testers possess. If we seriously question whether testers should learn to code (and most opinions fall on the ‘yes’ side of this question), then it is relevant to ask the corresponding question.

Wait a minute, I hear you thinking. That’s a stupid question. Coders already know how to test; they are coders, after all. Testing is simply a subset of coding, the act of making sure that the code that they have written works properly.

It’s nice to believe that coders know how to test because they know how to code, but that’s fallacious reasoning. First, it’s a different skill set entirely. Coding is highly detail-oriented and focused on making sure the code is grammatically and logically correct. Testing asks whether that code covers all requirements, is usable, and is fit for its intended purpose.

Second, while it’s a cliché that coders can’t test their own code, that doesn’t make it any less true. I’ve tested my own code, and I shy away from edge cases or unusual inputs. Testers bravely go where coders don’t.

Third, testers do much more than checking code for logic bugs. Because of their broad mandate encompassing correctness as well as domain expertise and information from end users, testers must be both detail-oriented and focused on the end result, yet able to be flexible in terms of their goals.

So should coders learn the skills of testers? The answer is, of course, it depends. We should all be learning additional skills, but only have a finite amount of time, so there is a cost/benefit tradeoff for any professional in learning (and presumably practicing) a collateral skill. By learning how to test, coders by necessity have to reduce time learning a new language, or learning continuous integration, for example.

But there are technical advantages for coders to learn testing. At the very least, it will make them more thoughtful coders. At best, it can help them write better code, as they become experienced with knowing what testers look for.

If coders do decide to learn to test, they have to give up certain biases to effectively learn the skill set. Here are just a few of those biases.

  1. Users would never do that! That’s simply not true. Users will do everything you can imagine and many things you can’t.
  2. It’s not a bug, it’s a feature. Stop arguing, and start working together to determine if it’s a feature needed by the users.
  3. Testers just slow us down. That may be true in some cases, but most of the time testers speed up the application delivery process. If they seem to slow you down, perhaps it’s because developers didn’t do their jobs right to begin with.
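Bias number one is the easiest to attack with code. As a toy sketch (Python; the parse_age function and its inputs are entirely hypothetical), write the defensive version first, then throw the inputs “users would never send” at it:

```python
import random

def parse_age(text):
    # Parse a user-supplied age field defensively: return an int in
    # 1..149, or None for anything else. Assume nothing about what
    # users will actually type.
    try:
        value = int(str(text).strip())
    except ValueError:
        return None
    return value if 0 < value < 150 else None

def fuzz(fn, trials=200, seed=7):
    # The inputs "users would never send." They will.
    rng = random.Random(seed)
    weird = ["", "  ", "-1", "0", "999", "3.5", "NaN", "42abc",
             None, "\x00", " 42 ", "+7", "1e3"]
    for case in weird:
        fn(case)  # must not raise, whatever comes back
    for _ in range(trials):
        junk = "".join(chr(rng.randrange(32, 1000)) for _ in range(8))
        fn(junk)  # random junk must not raise either
    return True
```

Pairing with a tester who writes input lists like that one is a quick way for a coder to internalize the habit.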

Probably the best way for coders to learn how to test is to do pair-testing with an experienced tester. Participating in testing with someone who already has the skill set will help a developer learn how to spot weaknesses in their code, how end users approach their tasks with software, and how to assess risks and determine testing strategies.

The goal isn’t to turn coders into testers, but to make them better coders. For many coders, that’s a skill worth having.

Software Testing is a State of Mind July 2, 2015

Posted by Peter Varhol in Software development.

I was prompted to write this by yet another post on whether or not testers should learn to code. While it gave cogent arguments on both sides, it (prematurely, I believe) concluded that coding is a fundamental skill for testers, and discussed how testers could develop their coding skills.

The reality is much more nuanced. There are different types of testers. An automation engineer is likely to be coding, or scripting, on a daily basis. A functional tester using an automated testing tool (commonly Selenium, or perhaps a commercial one) will write or modify scripts generated by the tool.

And in general, we try to automate repeatable processes. Often this can be done with customizable workflows in an ALM product, but there might be some amount of scripting required.

But while coding knowledge can improve a tester’s skill set, it’s not required for all roles. And sometimes it can detract from other, more important skills. That got me to thinking about the unique skill sets of testers. There are unique mental and experiential skills that testers bring to their job. The best testers intuitively recognize the skills needed, and work hard to develop them.

  • Curiosity. Good testers do more than execute test cases. They look for inconsistencies or incongruities, question why, and don’t stop looking for improvements until the software is deployed.
  • Logic and Discipline. Testers do very detailed work, and approach that work in a step-by-step logical fashion. Their thought processes have to be clear and sharp, and they have to move methodically from one step to the next.
  • Imagination. Testers understand the user personas and put themselves in their role. The best can work through the application as both a new and experienced user, and find things no one ever considered.
  • Confidence. Testers often have to present unpopular points of view. If they can do so while believing in their own skills and conclusions, while also taking into account differing points of view, they can be a successful voice for both the user and application quality.
  • Dealing with Ambiguity. It’s rarely clear what a requirement says, whether the test case really addresses it, whether an issue is really an issue, and what priority it is. Testers have to be ready to create a hypothesis, and provide evidence to support or reject that hypothesis.

These tend to be different skill sets than those possessed by coders. In particular, many coders tend to focus very narrowly on their particular assignments, because of the level of detail required to understand and successfully implement their part of the application. Coders also dislike ambiguity; code either works or it doesn’t, it either satisfies a requirement or it doesn’t. Computers aren’t ambiguous, so coders must write code that clearly produces a specific end result.

Coders may argue that they produce a tangible result: source code that makes an application concept a reality. The work product of testers, by contrast, is usually a bit more amorphous. Ideally, a tester would like to say that software meets requirements and is of high quality, in which case few defects will be found. If testers log many defects, it’s interpreted as negative news rather than a desired result.

But organizations can’t look at testers as second class participants because of that. Testers have a unique skill set that remains essential in getting good and useful software into the hands of users. I don’t think that skill set has been very well documented to date, however. And it may not be appreciated because of that.

On Certifications, Knowledge, and Competence April 29, 2015

Posted by Peter Varhol in Software development.

There is an ongoing and more recently growing controversy surrounding the ways that testers and other software professionals demonstrate their competence. In particular, the controversy centers on the testing services provided by the world’s largest provider of certification services, the ISTQB, and its US arm, the ASTQB.

The primary criticisms of the ISTQB seem to be that the use of a multiple choice test for certification trivializes testing knowledge and judgment. In addition, some see the tutorial offerings of its supporters as an unnecessary and unwanted commercial advantage in training for this specific certification. So there are currently online petitions that demand the ISTQB release its own data that assesses the validity of the certification.

While both of those criticisms have some truth, Cem Kaner notes in a comprehensive post that the ISTQB seems intent on measuring and improving its certifications, and supports its right to examine its own data and institute those improvements in private. As long as it is not making demonstrably false statements regarding the knowledge gained as a result of certification, the organization has the legal and ethical right to improve its exams. Kaner does not claim to have reviewed all of the organization’s marketing materials, but does assert that he sees no apparently false claims.

Kaner, one of the true deans of software testing, is right. I have referred to him as our “adult supervision” on Twitter. But the argument goes beyond his level-headed analysis of what the ISTQB offers and does not offer to testers.

We are fortunate to work in a field where there are a lot of ways of adding value. Some write code to build software, and write unit tests to verify how they think discrete parts of the code should work. Others work closely with developers to understand the underlying structure and develop tests that reflect the application’s requirements and underlying design. Still more are domain experts, fighting for the user community and possessing a keen understanding of what is needed beyond the stated requirements to deliver a quality product.

Measuring all of those skill sets in a single exam, whether multiple choice, single correct answer, or even in essay format, seems incongruous and indeed impossible. Yet a broad range of people and skills can and do help in determining whether an application meets the needs of the business and has the necessary quality to deploy.

Part of the larger problem is that some employers expect ISTQB certification as a prerequisite for a testing job, which distorts its value in the marketplace. Another part is that ISTQB marketing, while not demonstrably false, might be interpreted as misleading.

Kaner says that while the ISTQB certification lacks many essentials, we have not yet been able to devise anything better. He’s not happy with ISTQB certification, but for technical rather than business reasons. And he’s smart enough to know that there isn’t anything better right now, although he is hoping for alternatives to develop over time. He would prefer to expend energy in improving possible certifications, rather than fighting over the relative value of this particular one. That’s a position that’s hard to argue with.

How Mature Are Your Testing Practices? April 22, 2015

Posted by Peter Varhol in Software development, Strategy.

I once worked for a commercial software development company whose executive management decided to pursue the Software Engineering Institute’s (SEI) CMMI (Capability Maturity Model-Integration) certification for its software development practices. It hired and trained a number of project managers across multiple locations, documented everything it could, and slowed down its schedules so that teams could learn and practice the new documented processes, and collect the data needed to improve those processes.

There were good reasons for this software provider to try to improve its practices at the time. It had a quality problem, with thousands of known defects not getting addressed and going into production, and its customers not happy with the situation.

However, this new initiative didn’t turn out so well, as you might imagine. After spending millions of dollars over several years, the organization eventually achieved CMMI Level 2 (the activities were repeatable). It wasn’t clear that quality improved, although it likely would have become incrementally better over a longer period of time. But time moved on, and CMMI certification ceased to have the cachet that it once did. Today, in a stunning reversal of strategy, this provider now claims to be fully committed to Agile development practices.

This is a cautionary tale for any software project that looks for a specific process as a solution to their quality or delivery issues. A particular process or discipline won’t automatically improve your software. In the case of my previous employer, CMMI added burdensome overhead to a software supplier that was also forced to respond more quickly to changing technologies.

There are a number of different maturity models that claim to enable organizations to develop and extend processes that can make a difference in software quality and delivery. The SEI’s CMMI is probably the best known and most widely utilized. There is also a testing maturity model, which applies principles similar to CMMI’s in the testing realm. And software tools vendor Coverity has recently released its Development Testing Maturity Model, which outlines a phased-in approach to development testing adoption, and claims to better support a DevOps strategy.

All of these maturity models, in moderation, can be useful for software projects and organizations seeking to standardize and advance the maturity of their project processes. But they don’t automatically improve quality or on-schedule delivery of a product or application.

Instead, teams and organizations should build a process that best reflects their business needs and culture, and then continue to refine that process as needs change to ensure that it continues to improve their ability to deliver quality software. It’s not as important to develop a maturity model as it is to identify your process, customize your ALM tools for that process, and make sure your team is appropriately trained in using it.

The Challenges of Concurrency in Software March 12, 2015

Posted by Peter Varhol in Software development, Software tools.

I learned how to program on an Apple Macintosh, circa late 1980s, using Pascal and C. As I pursued graduate work in computer science, I worked with Lisp and Smalltalk, running on the Motorola 680X0 and eventually the Intel architecture.

These were all single-threaded programs, meaning that they executed sequentially, one step at a time. As a CS grad student, and later as a university professor, I learned and taught about multi-threading and concurrent code execution.

But it was almost entirely theoretical. Until the turn of the century, almost no code was executed in parallel. Part of the reason was that none but the most sophisticated computers executed in parallel. Even as Intel and other processor makers moved decisively into multi-core architectures, operating systems and programmers weren’t ready to take advantage of this hardware innovation.

But only by taking advantage of multi-core and hyper-threaded processors could developers improve the performance of increasingly complex applications. So, aided by modern programming languages such as Java and C#, developers have been cautiously working on applications that can take better advantage of these processors.

To do so, they’re dusting off their old textbooks and looking at concepts like “critical section”, “fork”, and “join”. They are deeply examining their code to determine which operations can occur simultaneously without producing errors.
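To make those textbook concepts concrete, here is a toy fork/join sketch (Python, using its standard thread pool; the function and numbers are invented for illustration): split the work into independent chunks, fork them out to workers, then join the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(numbers, workers=4):
    # Fork: split the input into independent chunks, one per worker.
    chunks = [numbers[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker computes a partial sum. The chunks share no
        # mutable state, so there is no critical section to protect.
        partials = pool.map(lambda chunk: sum(x * x for x in chunk), chunks)
    # Join: combine the partial results.
    return sum(partials)
```

The hard part of the deep code examination is exactly this question: which operations, like these chunk sums, are truly independent, and which only look independent.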

To be fair, several tools came out in the mid-2000s claiming the ability to automatically parallelize existing code, mostly by analyzing the code and trying to parcel out threads based on an expectation that certain code segments can be parallelized. In practice, there was not a lot of code that could safely be parallelized in this way.

But most new applications are multithreaded, and the operating system can dispatch threads to different cores and CPUs for concurrent execution. Using today’s processors, this is the only way to get the best performance out of modern software.

The problem is that developers are still fumbling their way through the process of writing code that can execute in parallel. There are two types of problems. One is deadlock, where code can’t continue because another thread won’t give up a resource, such as a piece of data. This will stop execution altogether.
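Deadlock is easiest to see with two locks. This toy Python sketch (the accounts and amounts are invented) avoids it by always acquiring the locks in a single global order; take them in opposite orders in two threads, and each can end up holding the lock the other is waiting for.

```python
import threading

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # Acquire both locks in one global order (here, by object id) so two
    # opposite transfers can never each hold one lock while waiting for
    # the other. Reverse the order in one thread and execution can stop
    # altogether: the deadlock described above.
    first, second = sorted((src, dst), key=id)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount

a, b = Account(100), Account(100)
t1 = threading.Thread(target=lambda: [transfer(a, b, 1) for _ in range(500)])
t2 = threading.Thread(target=lambda: [transfer(b, a, 1) for _ in range(500)])
t1.start(); t2.start()
t1.join(); t2.join()
```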

Another is the insidious race condition, where the result is dependent upon which thread completes first. This is insidious because an incorrect result can be random, and is often difficult to detect because it may not result in an obvious error.
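A toy illustration of the race (Python; the counts are arbitrary): incrementing a shared counter is a read-modify-write operation, and the lock below is what makes the total come out right regardless of which thread runs when.

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment(n):
    # counter += 1 is really load, add, store. Without the lock, two
    # threads can load the same old value and one update is silently
    # lost; the total is wrong with no error raised anywhere.
    global counter
    for _ in range(n):
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Remove the lock and the program still runs to completion, which is exactly what makes the bug so hard to detect: the only symptom is an occasionally wrong answer.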

Fortunately, tools are emerging that help in the identification and analysis of concurrent software issues. One is Race Catcher, from Thinking Software. It can be used in two ways during the application lifecycle. During development and test, it can dynamically analyze Java code to look ahead for deadlocks and race conditions. You can’t predict the occurrence of a race condition, of course, but you can tell where the same data is being processed in different ways, at the same time.

In a headless version, it can run as an agent on production servers, doing the same thing. That’s a version of DevOps. We catch things in production before they become problems, and refer them back to development to be fixed.

In an era where software development is changing more quickly and dramatically than any time since the PC era, we need more tools like this.