
Automation Can Be Dangerous December 6, 2018

Posted by Peter Varhol in Software development, Software tools, Strategy, Uncategorized.

Boeing has a great way to prevent aerodynamic stalls in its 737 MAX aircraft.  A set of sensors uses airspeed and angle of attack to determine that the aircraft is about to stall (that is, lose lift on its wings), and the system automatically pitches the nose down to recover.

Apparently malfunctioning sensors on Lion Air Flight 610 caused the aircraft nose to sharply pitch down absent any indication of a stall.  Preliminary analysis indicates that the pilots were unable to overcome the nose-down attitude, and the aircraft dove into the sea.  Boeing’s solution to this automation fault was explicit, even if its documentation wasn’t.  Turn off the system.
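
To make the lesson concrete, here is a minimal sketch of what automated stall protection with a human cutoff might look like.  This is purely illustrative Java of my own; the class, the threshold, and the method names are all invented and bear no relation to Boeing’s actual implementation.

```java
// A hypothetical sketch of automated stall protection with an explicit
// cutoff switch.  Every name and number here is invented for illustration.
public class StallProtection {

    private static final double STALL_ANGLE_DEGREES = 15.0;  // invented threshold

    private boolean enabled = true;  // the crucial part: a human can turn it off

    // Called periodically with the latest angle-of-attack reading; returns
    // a pitch correction in degrees (negative means nose down).
    public double pitchCorrection(double angleOfAttack) {
        if (!enabled) {
            return 0.0;  // automation off: the human is flying the aircraft
        }
        // A real system would cross-check several independent sensors here,
        // so that one faulty reading could not command a dive on its own.
        if (angleOfAttack > STALL_ANGLE_DEGREES) {
            return -2.5;  // pitch the nose down to restore lift
        }
        return 0.0;
    }

    // The lesson of Lion Air 610: when the sensors lie, the automation
    // must be easy to shut off.
    public void disable() {
        enabled = false;
    }
}
```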

And this is what the software developers, testers, and their bosses don’t get.  Everyone thinks that automation is the silver bullet.  Automation is inherently superior to manual testing.  Automation will speed up testing, reduce costs, and increase quality.  We must have more automation engineers, and everyone not an automation engineer should just go away now.

There are many lessons here for software teams.  Automation is great when consistency in operation is required.  Automation will execute exactly the same steps until the cows come home.  That’s a great feature to have.

But many testing activities are not at all about consistency in operation.  In fact, relatively few are.  It would be good for smoke tests and regression tests to be consistent.  Synthetic testing in production also benefits from automation and consistency.
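
As a sketch of the kind of check that belongs in that category, here is a minimal automated smoke test that runs identically on every build.  It assumes JUnit 5 and Java’s built-in HttpClient; the endpoint URL is a placeholder.

```java
// A minimal smoke test sketch: the same check, every build, until the
// cows come home.  Assumes JUnit 5; the URL below is a placeholder.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SmokeTest {

    @Test
    void healthEndpointResponds() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/health"))  // placeholder
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The application is up and answering; deeper testing comes later.
        assertEquals(200, response.statusCode());
    }
}
```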

Other types of testing?  Not so much.  The purpose of regression testing, smoke testing, and testing in production is to validate the integrity of the application, and to make sure nothing bad is currently happening.  Those are valid goals, but they are only the start of testing.

Instead, testing is really about individual users and how they interact with an application.  Every person does things on a computer just a little differently, so it behooves testers to do the same.  This isn’t harking back to the days of weeks or months of testing, but rather acknowledging that the purpose of testing is to ensure an application is fit for use.  Human use.

And sometimes, whether through fault or misuse, automation breaks down, as in the case of the Lion Air 737.  And teams need to know what to do when that happens.

Now, when you are deploying software perhaps multiple times a day, it seems like it can take forever to sit down and actually use the product.  But remember the thousands more who depend on the software and the effort that goes on behind it.

In addition to knowing when and how to use automation in software testing, we also need to know when to shut it off, and use our own analytical skills to solve a problem.  Instead, all too often we shut down our own analytical skills in favor of automation.

Here’s Looking At You June 18, 2018

Posted by Peter Varhol in Algorithms, Machine Learning, Software tools, Technology and Culture.

I studied a rudimentary form of image recognition when I was a grad student.  While I could (sometimes) identify simple images based on obvious distinguishing characteristics, given the limitations of rule-based systems and the computing power of Lisp Machines and early Macs, facial recognition was well beyond the capabilities of the day.

Today, facial recognition has benefitted greatly from better algorithms and faster processing, and is available commercially from several different companies.  There is some question as to its reliability, but at this point it’s probably better than any manual approach to comparing photos.  And that seems to be a problem for some.

Recently the ACLU and nearly 70 other groups sent a letter to Amazon CEO Jeff Bezos, alongside one from 20 shareholder groups, arguing that Amazon should not provide surveillance systems such as facial recognition technology to the government.  Amazon’s facial recognition system is called Rekognition (why would you use a spelling that is more reminiscent of evil times in our history?).

Once again, despite the Hitleresque product name, I don’t get the outrage.  We give the likes of Facebook our life history in detail, in pictures and video, and let them sell it on the open market, but the police can’t automate the search of photos?  That makes no sense.  Facebook continues to get our explicit approval for the crass but grossly profitable commercialization of our most intimate details, while our government cannot use commercial and legal software tools?

Make no mistake; I am troubled by our surveillance state, probably more than most people, but we cannot deny tools to our government that the Bad Guys can buy and use legally.  We may not like the result, but we seem happy to go along like sheep when it’s Facebook as the shepherd.

For the life of me, I have tried to curse our government for its intrusion into our lives, but we don’t seem to mind it when it’s Facebook, so I just can’t get excited about the whole thing.  I cannot imagine Zuckerberg running for President.  Why should he give up the most powerful position in the world to face the checks and balances of our government?

I am far more concerned about individuals using commercial facial recognition technology to identify and harass total strangers.  Imagine an attractive young lady (I am a heterosexual male, but it’s also applicable to other combinations) walking down the street.  I take her photo with my phone, and within seconds have her name, address, and life history (quite possibly from her Facebook account).  Were I that type of person (I hope I’m not), I could use that information to make her life difficult.  While I don’t think I would, there are people who would think nothing of doing so.

So my take is that if you don’t want the government to use commercial facial recognition software, demonstrate your honesty and integrity by getting the heck off of Facebook first.

Update:  Apple will automatically share your location when you call 911.  I think I’m okay with this, too.  When you call 911 for an emergency, presumably you want to be found.

The Golden Age of Databases May 10, 2018

Posted by Peter Varhol in Architectures, Software platforms, Software tools.

Let’s face it, to most developers, databases are boring and opaque.  As long as I can create a data object to call the database and bring data into my application, I really don’t care about the underlying structures.  And many of us have an inherent bias against DBAs, for multiple reasons.  Years ago, one of my computer science graduate students made the proclamation, “I’m an engineer; I write technical applications.  I have no need for databases at all.”

I don’t think this is true anymore, if it ever was.  The problem was in the predominance of SQL relational databases.  The mathematical and logical foundation of relational databases is actually quite interesting, but from a practical standpoint actually setting up a database, whether through E-R diagrams or another approach, is pretty cut and dried.  And maintaining and performance tuning databases can often seem like an exercise in futility.

Certainly there were other types of databases and derivative products 20 or 30 years ago.  My old company, Progress Software, still makes a mint off its OpenEdge database and 4GL environment.  Sybase PowerBuilder was popular for at least two decades, and Borland Delphi still has a healthy following.  OLAP engines were available in the 1990s, working with SQL relational databases to quickly extract and report on relational data.

But traditional relational databases have disadvantages for today’s uses.  They are meant to be highly reliable storage and retrieval systems.  They tend to have the reliability part down pat, and there are almost universal means of reading, writing, modifying, and monitoring data in relational tables.

The world of data has changed.  While the reliability and programming access of relational databases remain important in traditional enterprise applications, software has become essential in a wide variety of other areas.  This includes self-driving cars, financial trading, manufacturing, retail, and commercial applications in general.

Relational databases have been used in these areas, but have limitations that are becoming increasingly apparent as we stress them in ways they weren’t designed for.  So instead we are seeing alternatives that specialize in a specific area of storage and retrieval.  For example, the NoSQL MongoDB, and MapReduce-style systems in general, are making it possible to store large amounts of unstructured data, and to quickly search and retrieve data from that storage.  The open source InfluxDB provides a ready store for event-driven data, enabling applications to stream data based on a time series.  Databases such as FaunaDB can be used to implement blockchain.
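
As a brief sketch of what the document-store model looks like in practice, here is roughly how an application might store and retrieve schemaless data with MongoDB’s official Java driver (mongodb-driver-sync).  The connection string, database, collection, and field names are all placeholders.

```java
// A sketch of schemaless storage and retrieval with the MongoDB Java
// driver.  The connection string and names below are placeholders.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public class DocumentStoreExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> events = client
                    .getDatabase("demo")
                    .getCollection("events");

            // No table schema to design up front: just store the document.
            events.insertOne(new Document("type", "sensor-reading")
                    .append("value", 42.7)
                    .append("unit", "celsius"));

            // And retrieve by any field.
            for (Document d : events.find(Filters.eq("type", "sensor-reading"))) {
                System.out.println(d.toJson());
            }
        }
    }
}
```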

All of these databases can run in the cloud, or on premises.  They tend to be easy to set up and use, and you can almost certainly find one to meet your specific needs.

So as you develop your next ground-breaking application, don’t find yourself limited by a relational database.  You’re not stuck in the same rut that you were ten years ago.  Take a look at what has to be called the Golden Age of databases.

The Tyranny of Open Source July 28, 2016

Posted by Peter Varhol in Software development, Software platforms, Software tools.

If that title sounds strident, it quite possibly is. But hear me out.  I’ve been around the block once or twice.  I was a functioning adult when Richard Stallman wrote The GNU Manifesto, and have followed the Free Software Foundation, open source software licenses, and open source communities for perhaps longer than you have been alive (yes, I’m an older guy).

I like open source. I think it has radically changed the software industry, mostly for the better.

But. Yes, there is always a “but”.  I subscribe to many (too many) community forums, and almost daily I see someone with a query that begins “What is the best open source tool that will let me do <insert just about any technical task here>.”

When I see someone who asks such a question on a forum, I see someone who is flailing about, with no knowledge of the tools of their field, or even how to do a particular activity. That’s okay; we’ve all been in that position.  They are trying to get better.

We all have a job to do, and we want to do it as efficiently as possible. For any class of activity in the software development life cycle, there is a plethora of tools that make that task easier/manageable/possible.

If you tell me that it has to be an open source tool, you are telling me one of two things. First, your employer, who is presumably paying you a competitive (in other words, fairly substantial) salary, is unwilling to support you in getting your job done.  Second, you are afraid to ask if there is the prospect of paying for a commercial product.

And you need to know the reason before you ask the question in a forum.

There is a lot of great open source software out there that can help you do your job more efficiently. There is also a lot of really good commercial software out there that can help you do your job more efficiently.  If you are not casting a broad net across both, you are cheating both yourself and your employer.  If you cannot cast that broad net, then your employer is cheating you.

So for those of you who get onto community forums to ask about the best open source tool for a particular activity, I have a question in return. Are you afraid to ask for a budget, or have you been told in no uncertain terms that there is none?  You know, you might discover that you need help using your open source software, and have to buy support.  If you need help and can’t pay for it, then you have made an extremely poor decision.

So what am I trying to say? You should be looking for the best tool for your purpose.  If it is open source, you may have to be prepared to subscribe to support.  If it is commercial, you likely have to pay a fee up front.  If your sole purpose in asking for an open source product is to avoid payment, you need to run away from your work situation as quickly as possible.

The Challenges of Concurrency in Software March 12, 2015

Posted by Peter Varhol in Software development, Software tools.

I learned how to program on an Apple Macintosh, circa late 1980s, using Pascal and C. As I pursued graduate work in computer science, I worked with Lisp and Smalltalk, running on the Motorola 680X0 and eventually the Intel architecture.

These were all single-threaded programs, meaning that they executed sequentially, one step at a time. As a CS grad student, and later as a university professor, I learned and taught about multi-threading and concurrent code execution.

But it was almost entirely theoretical. Until the turn of the century, almost no code was executed in parallel, in part because only the most sophisticated computers could execute code in parallel. Even as Intel and other processor vendors moved decisively into multi-core architectures, operating systems and programmers weren’t ready to take advantage of this hardware innovation.

But only by taking advantage of multi-core and hyper-threaded processors could developers improve the performance of increasingly complex applications. So, aided by modern programming languages such as Java and C#, developers have been cautiously working on applications that can take better advantage of these processors.

To do so, they’re dusting off their old textbooks and looking at concepts like “critical section”, “fork”, and “join”. They are deeply examining their code to determine which operations can occur simultaneously without producing errors.
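
As a refresher on those textbook terms, here is a small, purely illustrative Java sketch: two threads fork off, increment shared state inside a critical section, and join back.

```java
// Two threads updating a shared counter inside a critical section.
public class CriticalSectionExample {

    private static final Object lock = new Object();
    private static long counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                synchronized (lock) {   // the critical section
                    counter++;
                }
            }
        };

        Thread t1 = new Thread(work);   // fork
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();

        t1.join();                      // join: wait for both threads
        t2.join();

        System.out.println(counter);    // always 2000000 with the lock in place
    }
}
```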

To be fair, several tools came out in the mid-2000s claiming the ability to automatically parallelize existing code, mostly by analyzing the code and trying to parcel out threads based on an expectation that certain code segments can be parallelized. In practice, there was not a lot of code that could safely be parallelized in this way.

But most new applications are multithreaded, and the operating system can dispatch threads to different cores and CPUs for concurrent execution. Using today’s processors, this is the only way to get the best performance out of modern software.

The problem is that developers are still fumbling their way through the process of writing code that can execute in parallel. There are two types of problems. One is deadlock, where code can’t continue because another thread won’t give up a resource, such as a piece of data. This will stop execution altogether.

Another is the insidious race condition, where the result is dependent upon which thread completes first. This is insidious because an incorrect result can be random, and is often difficult to detect because it may not result in an obvious error.
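
To see just how insidious, take the sketch above and remove the critical section. This is my own minimal example, not output from any tool: the printed total is usually less than two million, and it varies from run to run.

```java
// The same counter without the critical section: a classic race condition.
// Two threads read, increment, and write the shared field concurrently,
// so updates are silently and nondeterministically lost.
public class RaceConditionExample {

    private static long counter = 0;   // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++;             // not atomic: read, increment, write
            }
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counter);   // usually less than 2000000
    }
}
```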

Fortunately, tools are emerging that help in the identification and analysis of concurrent software issues. One is Race Catcher, from Thinking Software. It can be used in two ways during the application lifecycle. During development and test, it can dynamically analyze Java code to look ahead for deadlocks and race conditions. You can’t predict the occurrence of a race condition, of course, but you can tell where the same data is being processed in different ways, at the same time.

In a headless version, it can run as an agent on production servers, doing the same thing. That’s a version of DevOps. We catch things in production before they become problems, and refer them back to development to be fixed.

In an era where software development is changing more quickly and dramatically than at any time since the PC era, we need more tools like this.

Of Apps and Men December 18, 2014

Posted by Peter Varhol in Software development, Software platforms, Software tools.

Fair warning – I am eventually going to say more about Uber. The apps business is an interesting one, and some historical context is necessary to understand just why. In the PC era, we typically paid hundreds of dollars for individual applications. As a result, we would buy only a few of them. And we would use those applications only when we were seated in front of our computers. The software business was the application, and selling it made the business.

In the smartphone/tablet era, however, apps are essentially free, or at worst cost only a few bucks. People are using more apps, and using them for longer periods of time than we ever did on the PC.

But that still doesn’t quite make the bottom line sing. I mention Uber above because of its recent valuation of $41 billion, at a time when the entire annual taxi revenue of the US is $11 billion. The standard line from the VCs is that it will transform all of surface transportation as more and more people use Uber, even in preference to their own cars.

I don’t buy that argument, but that is a tale for another day. But the message, I think, is fundamentally correct. The message is that you don’t build a business on an app. You will never make money, at least not sustainable money, from the app. Rather, the app is the connection to your business. You use the app simply as another connection to your products or services, or as a connection to an entirely new type of business.

But today, you are not going to use an app to build the kind of business that was the standard fare of the software industry only a few years ago.

The corollary, of course, is that almost every business will need its own app, sooner or later.  That represents a boon for developers.

How Do We Fix Testing? April 17, 2014

Posted by Peter Varhol in Software development, Software tools.

Here is a presentation abstract I hope to get accepted at a conference in the near future:

Perhaps in no other professional field is the dichotomy between theory and practice starker than in the realm of software testing. Researchers and thought leaders claim that testing requires a high level of cognitive and interpersonal skills, in order to make judgments about the ability of software to fulfill its operational goals. In their minds, testing is about assessing and communicating the risks involved in deploying software in a specific state.

However, in many organizations, testing remains a necessary evil, and a cost to drive down as much as possible. Testing is merely a measure of conformance to requirements, without regard to the quality of requirements or how conformance is measured. This is certainly an important measure, but tells an incomplete story about the value of software in support of our business goals.

We as testers often help to perpetuate the status quo. Although in many cases we realize we can add far more value than we do, we continue to perform testing in a manner that reduces our value in the software development process.

This presentation looks at the state of the art as well as the state of common practice, and attempts to provide a rationale and roadmap whereby the practice of testing can be made more exciting and stimulating for the testing professional, as well as more valuable to the product and the organization.

Is Black Box Testing Dead? November 11, 2012

Posted by Peter Varhol in Software development, Software tools.

Testing has traditionally focused on making sure that requirements have been implemented appropriately. In general, we refer to that as “black box testing”; we really don’t give a lot of thought to how something works. What we really care about is whether or not its behavior accurately reflects the stated requirements.

But as we become more agile, and testing becomes more integrated with the development process, black box testing may be going the way of the dodo bird.

I was at the TesTrek conference last week, where keynote speaker Bruce Pinn, the CIO at International Financial Data Services Ltd., noted that it behooved testers to take a programming course, so that they could better interact with developers.

I have my doubts as to whether a single programming course could educate testers on anything useful, but his larger point was that testers needed to become more technically involved in product development. He wanted testers more engaged in the development process, to be able to make recommendations as to the quality or testability of specific implementations.

In particular, he noted that his emerging metric of choice was not test coverage, but rather code coverage. Test coverage is the question of whether there are functional tests for all stated requirements, while code coverage measures the execution pathways through the code. The idea is that all relevant lines of code get executed at least once during testing. In practice, it’s not feasible to get 100 percent code coverage, because some code exists for very obscure error-handling situations. In other cases, as we used to say when I studied formal mathematical modeling using Petri Nets, some states simply aren’t reachable.
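
As a small invented illustration of that point (mine, not the speaker’s): a requirements-driven test suite can exercise every stated behavior of a method like the one below without ever reaching its defensive branches, which is exactly the gap code coverage exposes.

```java
// An illustration of why 100 percent code coverage is rarely feasible:
// functional tests of parsePort() can pass every stated requirement
// without ever supplying the malformed input that reaches the error
// handling below.  The names and the default port are invented.
public class Config {

    public static int parsePort(String text) {
        int port;
        try {
            port = Integer.parseInt(text);
        } catch (NumberFormatException e) {
            // Obscure error-handling path: executes only on input that a
            // requirements-driven (black box) suite may never supply.
            return 8080;   // fall back to a default
        }
        if (port < 1 || port > 65535) {
            // Another defensive branch a happy-path suite is unlikely to hit.
            return 8080;
        }
        return port;
    }
}
```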

I asked the obvious question of whether this meant he was advocating gray box or white box testing, and he responded in the affirmative. Test coverage is a metric of black box testing, because it doesn’t provide insight into anything underneath the surface of the application. Code coverage is a measure of how effective testing is at reaching as much of the code as possible.

In fact, both types of coverage are needed. Testers will always have to know if the product’s functionality meets the business needs. But they also have to understand how effective that testing is in actually exercising the underlying code. Ideally, testers should be running code coverage at the same time they’re executing their tests, so that they understand how comprehensive their test cases really are.