The Tyranny of Open Source
July 28, 2016. Posted by Peter Varhol in Software development, Software platforms, Software tools.
Tags: GNU, open source
If that title sounds strident, it quite possibly is. But hear me out. I’ve been around the block once or twice. I was a functioning adult when Richard Stallman wrote The GNU Manifesto, and have followed the Free Software Foundation, open source software licenses, and open source communities for perhaps longer than you have been alive (yes, I’m an older guy).
I like open source. I think it has radically changed the software industry, mostly for the better.
But. Yes, there is always a “but”. I subscribe to many (too many) community forums, and almost daily I see someone with a query that begins “What is the best open source tool that will let me do <insert just about any technical task here>.”
When I see someone who asks such a question on a forum, I see someone who is flailing about, with no knowledge of the tools of their field, or even how to do a particular activity. That’s okay; we’ve all been in that position. They are trying to get better.
We all have a job to do, and we want to do it as efficiently as possible. For any class of activity in the software development life cycle, there is a plethora of tools that make that task easier, more manageable, or simply possible.
If you tell me that it has to be an open source tool, you are telling me one of two things. First, your employer, who is presumably paying you a competitive (in other words, fairly substantial) salary, is unwilling to support you in getting your job done. Second, you are afraid to ask if there is the prospect of paying for a commercial product.
And you need to know the reason before you ask the question in a forum.
There is a lot of great open source software out there that can help you do your job more efficiently. There is also a lot of really good commercial software out there that can help you do your job more efficiently. If you are not casting a broad net across both, you are cheating both yourself and your employer. If you cannot cast that broad net, then your employer is cheating you.
So for those of you who get onto community forums to ask about the best open source tool for a particular activity, I have a question in return. Are you afraid to ask for a budget, or have you been told in no uncertain terms that there is none? You know, you might discover that you need help using your open source software, and have to buy support. If you need help and can’t pay for it, then you have made an extremely poor decision.
So what am I trying to say? You should be looking for the best tool for your purpose. If it is open source, you may have to be prepared to subscribe to support. If it is commercial, you likely have to pay a fee up front. If your sole purpose in asking for an open source product is to avoid payment, you need to run away from your work situation as quickly as possible.
The Challenges of Concurrency in Software
March 12, 2015. Posted by Peter Varhol in Software development, Software tools.
Tags: deadlock, Race conditions, Thinking Software
I learned how to program on an Apple Macintosh, circa late 1980s, using Pascal and C. As I pursued graduate work in computer science, I worked with Lisp and Smalltalk, running on the Motorola 680X0 and eventually the Intel architecture.
These were all single-threaded programs, meaning that they executed sequentially, one step at a time. As a CS grad student, and later as a university professor, I learned and taught about multi-threading and concurrent code execution.
But it was almost entirely theoretical. Until the turn of the century, almost no code was executed in parallel. Part of the reason was that none but the most sophisticated computers executed in parallel. Even as Intel and other processors moved decisively into multi-core architectures, operating systems and programmers weren’t ready to take advantage of this hardware innovation.
But only by taking advantage of multi-core and hyper-threaded processors could developers improve the performance of increasingly complex applications. So, aided by modern programming languages such as Java and C#, developers have been cautiously working on applications that can take better advantage of these processors.
To do so, they’re dusting off their old textbooks and looking at concepts like “critical section”, “fork”, and “join”. They are deeply examining their code to determine which operations can occur simultaneously without producing errors.
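Those dusted-off concepts map directly onto modern standard libraries. As a rough sketch in Python (the same ideas live in Java's java.util.concurrent), a thread pool "forks" work out to worker threads and "joins" the results back together:

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# Fork: the pool dispatches each input to one of four worker threads.
# Join: map() waits for every worker and collects results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```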
To be fair, several tools came out in the mid-2000s claiming the ability to automatically parallelize existing code, mostly by analyzing the code and trying to parcel out threads based on an expectation that certain code segments can be parallelized. In practice, there was not a lot of code that could safely be parallelized in this way.
But most new applications are multithreaded, and the operating system can dispatch threads to different cores and CPUs for concurrent execution. Using today’s processors, this is the only way to get the best performance out of modern software.
The problem is that developers are still fumbling their way through the process of writing code that can execute in parallel. There are two types of problems. One is deadlock, where code can’t continue because another thread won’t give up a resource, such as a piece of data. This will stop execution altogether.
Another is the insidious race condition, where the result is dependent upon which thread completes first. This is insidious because an incorrect result can be random, and is often difficult to detect because it may not result in an obvious error.
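To make these failure modes concrete, here is a minimal sketch in Python. Two threads increment a shared counter; because `counter += 1` is a read-modify-write sequence rather than an atomic step, unsynchronized updates can interleave and be lost, so the final total can vary from run to run. Guarding the critical section with a lock makes the result deterministic:

```python
import threading

COUNT = 100_000
counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    for _ in range(COUNT):
        counter += 1  # read-modify-write: not atomic, so updates can race

def safe_increment():
    global counter
    for _ in range(COUNT):
        with lock:    # critical section: only one thread at a time
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# The locked version always totals correctly; the unlocked one may lose
# updates, depending on how the threads happen to interleave.
print(run(safe_increment))
print(run(unsafe_increment))
```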
Fortunately, tools are emerging that help in the identification and analysis of concurrent software issues. One is Race Catcher, from Thinking Software. It can be used in two ways during the application lifecycle. During development and test, it can dynamically analyze Java code to look ahead for deadlocks and race conditions. You can’t predict the occurrence of a race condition, of course, but you can tell where the same data is being processed in different ways, at the same time.
In a headless version, it can run as an agent on production servers, doing the same thing. That’s a version of DevOps. We catch things in production before they become problems, and refer them back to development to be fixed.
In an era where software development is changing more quickly and dramatically than any time since the PC era, we need more tools like this.
Of Apps and Men
December 18, 2014. Posted by Peter Varhol in Software development, Software platforms, Software tools.
Fair warning – I am eventually going to say more about Uber. The apps business is an interesting one, and some historical context is necessary to understand just why. In the PC era, we typically paid hundreds of dollars for individual applications. As a result, we would buy only a few of them. And we would use those applications only when we were seated in front of our computers. The software business was the application, and selling it made the business.
In the smartphone/tablet era, however, apps are essentially free, or at worst cost only a few bucks. People are using more apps, and using them for longer periods of time than we ever did on the PC.
But that still doesn’t quite make the bottom line sing. I mention Uber above because of its recent valuation of $41 billion, at a time when the entire annual taxi revenue of the US is $11 billion. The standard line from the VCs is that Uber will transform all of surface transportation as more and more people use it, even in preference to their own cars.
I don’t buy that argument, but that is a tale for another day. But the message, I think, is fundamentally correct. The message is that you don’t build a business on an app. You will never make money, at least not sustainable money, from the app. Rather, the app is the connection to your business. You use the app simply as another connection to your products or services, or as a connection to an entirely new type of business.
But today, you are not going to use an app to build the kind of standalone software business that was the standard fare of the industry only a few years ago.
The corollary, of course, is that almost every business will need its own app, sooner or later. That represents a boon for developers.
How Do We Fix Testing?
April 17, 2014. Posted by Peter Varhol in Software development, Software tools.
Here is a presentation abstract I hope to get accepted at a conference in the near future:
Perhaps in no other professional field is the dichotomy between theory and practice more stark than in the realm of software testing. Researchers and thought leaders claim that testing requires a high level of cognitive and interpersonal skills, in order to make judgments about the ability of software to fulfill its operational goals. In their minds, testing is about assessing and communicating the risks involved in deploying software in a specific state.
However, in many organizations, testing remains a necessary evil, and a cost to drive down as much as possible. Testing is merely a measure of conformance to requirements, without regard to the quality of requirements or how conformance is measured. This is certainly an important measure, but tells an incomplete story about the value of software in support of our business goals.
We as testers often help to perpetuate the status quo. Although in many cases we realize we can add far more value than we do, we continue to perform testing in a manner that reduces our value in the software development process.
This presentation looks at the state of the art as well as the state of common practice, and attempts to provide a rationale and roadmap whereby the practice of testing can be made more exciting and stimulating to the testing professional, as well as more valuable to the product and the organization.
Is Black Box Testing Dead?
November 11, 2012. Posted by Peter Varhol in Software development, Software tools.
Tags: code coverage, Petri Net, test coverage
Testing has traditionally focused on making sure that requirements have been implemented appropriately. In general, we refer to that as “black box testing”; we really don’t give a lot of thought to how something works. What we really care about is whether or not its behavior accurately reflects the stated requirements.
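In code terms, a black-box test asserts only on externally observable behavior against a stated requirement. The `apply_discount` function and its 10-percent rule below are invented purely for illustration:

```python
# Hypothetical requirement: orders over $100 get a 10 percent discount.
def apply_discount(total):
    return total * 0.9 if total > 100 else total

# Black-box checks: we verify behavior against the requirement,
# with no knowledge of how apply_discount works inside.
assert apply_discount(200) == 180   # discount applies above $100
assert apply_discount(50) == 50     # no discount at or below $100
print("requirement checks pass")
```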
But as we become more agile, and testing becomes more integrated with the development process, black box testing may be going the way of the dodo bird.
I was at the TesTrek conference last week, where keynote speaker Bruce Pinn, the CIO at International Financial Data Services Ltd., noted that it behooved testers to take a programming course, so that they could better interact with developers.
I have my doubts as to whether a single programming course could educate testers on anything useful, but his larger point was that testers needed to become more technically involved in product development. He wanted testers more engaged in the development process, to be able to make recommendations as to the quality or testability of specific implementations.
In particular, he noted that his emerging metric of choice was not test coverage, but rather code coverage. Test coverage asks whether there are functional tests for all stated requirements, while code coverage measures the execution pathways through the code. The idea is that all relevant lines of code get executed at least once during testing. In practice, it’s not feasible to get 100 percent code coverage, because some code exists for very obscure error-handling situations. In other cases, as we used to say when I studied formal mathematical modeling using Petri Nets, some states simply aren’t reachable.
I asked the obvious question of whether this meant he was advocating gray box or white box testing, and he responded in the affirmative. Test coverage is a metric of black box testing, because it doesn’t provide insight into anything underneath the surface of the application. Code coverage is a measure of how effective testing is at reaching as much of the code as possible.
In fact, both types of coverage are needed. Testers will always have to know if the product’s functionality meets the business needs. But they also have to understand how effective that testing is in actually exercising the underlying code. Ideally, testers should be running code coverage at the same time they’re executing their tests, so that they understand how comprehensive their test cases really are.
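The difference between the two metrics can be sketched in a few lines. Real code-coverage tools (JaCoCo for Java, coverage.py for Python) do this at scale, but the core idea of line coverage fits in the standard library's trace hook. The `divide` function and its error branch are invented for illustration; the error-handling line is only reached by the test that deliberately exercises it:

```python
import sys

def trace_lines(func, *args):
    """Record which lines of func execute during one call (relative offsets)."""
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def divide(a, b):
    if b == 0:
        return None  # error-handling path: missed unless a test forces b == 0
    return a / b

happy = trace_lines(divide, 6, 3)       # exercises only the happy path
error_path = trace_lines(divide, 6, 0)  # exercises only the error branch
# Each call covers a different subset of divide's lines:
print(sorted(happy), sorted(error_path))
```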
Phil Zimmermann Is At It Again
October 19, 2012. Posted by Peter Varhol in Software tools, Technology and Culture.
Tags: PGP, security
I am old enough to remember when Phil Zimmermann released Pretty Good Privacy, or PGP, as open source, circa 1991. I followed his strange but true legal travails with the US government for several years, in which he was under investigation for illegal munitions exports (PGP encryption), yet never arrested. It was only after three stressful years that the US government concluded, well something, and told him that he was able to go about his business.
Now there is a mobile app called Silent Circle that employs the same encryption on a phone, for voice, email, and text. PGP remains, well, pretty good, with an awful lot of computing horsepower and time required to break it.
PGP employs public/private key encryption: anyone can use my public key to encrypt a message to me, but only I can decrypt it, using the private key that I alone hold. In practice the scheme is hybrid, encrypting the message itself with a symmetric session key (typically 128 bits or more) and then encrypting that session key with the recipient’s public key, which is typically 1024 bits or longer.
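The asymmetry is easy to demonstrate with textbook RSA and deliberately tiny primes. This toy sketch is invented purely for illustration; real PGP keys are vastly larger, and OpenPGP layers a symmetric session key on top rather than encrypting messages with RSA directly:

```python
# Toy RSA: anyone can encrypt with the public key (e, n);
# only the holder of the private exponent d can decrypt.
p, q = 61, 53                      # real keys use primes hundreds of digits long
n = p * q                          # public modulus
phi = (p - 1) * (q - 1)
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent: modular inverse of e

message = 65                       # a message encoded as a number below n
ciphertext = pow(message, e, n)    # encrypt: needs only the public key
recovered = pow(ciphertext, d, n)  # decrypt: needs the private key
print(message, ciphertext, recovered)
```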
I’m also old enough to remember the Clipper chip, a microprocessor that had embedded strong encryption for communications purposes. The catch was that the chip was designed by the NSA, and while the encryption was valid, it also included a “back door” that enabled the US Government to tap into it (purportedly with a court order, though I have no doubt that it could be done otherwise, for purposes other than criminal prosecution). The effort failed miserably, as computer and phone makers declined to use it, and other parties railed against it.
The Clipper chip was obviously ill-conceived (though, oddly enough, apparently not to the government). But I am in favor of law enforcement, though without the spectre of big brother government. These trends will always conflict, and it is right that they do so. Still, it is also right that freedom of, well, speech win out in this argument, even in the face of criminal activity. Let us find a different way to catch our criminals.
Is Impatience a Virtue?
September 7, 2012. Posted by Peter Varhol in Software tools, Technology and Culture.
I grew up in a household of very limited means. If I wanted something beyond the basics, I had to save my paper route (do those still exist?) money, do odd jobs for pocket change, and in general deprive myself for weeks or months until I had the funds necessary. I waited, somewhat patiently, until a desired goal was within my grasp. Such an upbringing probably contributed to my not getting caught up in the credit economy, and coming out of the economic shocks of the last decade or so relatively unscathed.
But there are technology trends favoring impatience. Thanks to the speed and ubiquity of Google, we have access to information that in the past may have been completely unavailable, or at least would have required hours or days of research in the local library.
Now Evan Selinger makes the claim that tools such as the iPhone’s Siri are turning impatience into a virtue. When we want to know an answer, we ask Siri. We may not even trust our own senses, instead preferring to ask the one who has all of the information (“Siri, is it raining outside?”).
He quotes MIT Research Fellow Michael Schrage (who was the only columnist worth reading in Computerworld circa early 1990s) as saying “How would you be different if you regularly had seven or eight conversations a day with your smartphone?”
I’m not calling any of this a bad thing. We have the tools to be more knowledgeable and informed individuals, which may make us better consumers, better citizens, and more tolerant of other points of view. Technology that aims to please ultimately makes information more accessible to more people. These are generally good outcomes.
But it is different from the way we functioned in the past, and may have implications for our daily lives, from how we process information to how we make decisions. I’ve always believed that no decision should be made before it has to be made, so that we can watch how information plays out over time. Having information so seamlessly available may mean that we’ll think we know more than we do, and make decisions more quickly. That may not be the best outcome.
How Do You Marry Java and .NET?
April 13, 2012. Posted by Peter Varhol in Software development, Software platforms, Software tools.
Years ago, I did some work for Mainsoft, which had a technically cool way of running .NET code on Java application servers. This involved dynamically cross-compiling .NET into Java. The idea was that you could create your application (typically a web application) using Visual Studio, then with a minimum of effort, configure it to run on a JVM and application server.
I would provide a link for Mainsoft, except that the company has changed names and markets (www.harmon.ie). Apparently it wasn’t a good enough idea for a company to make money from. Part of the problem was that there were two paths to getting .NET to run on Java: you either did byte code translation, or you implemented some or all of the .NET Common Language Runtime in Java. Mainsoft did mostly the first, but also found that it was easier to use Mono classes for a lot of the CLR.
But now there seems to be a way to do it in the opposite direction; that is, running a Java application in .NET.
Mainsoft’s strategy was easily comprehensible but rather niche: developers were experienced in .NET and wanted to use the best .NET development tools possible, but the enterprise wanted flexibility in deployment.
IKVM, an open source project, uses some of the Mono classes to enable Java code to run on .NET. It also includes a .NET implementation of some Java classes. I’m not sure why Mono is needed in this case (and in fact, it’s likely that the Mono project has largely run its course). IKVM lists three components to the project:
• A Java Virtual Machine implemented in .NET
• A .NET implementation of the Java class libraries
• Tools that enable Java and .NET interoperability
Microsoft was promoting IKVM during its language conference, obviously as an existence proof for the concept that there may be people who are interested in porting from Java to .NET in this manner.
Still, as a practical matter, it doesn’t seem worthwhile doing. There are plenty of JVMs for Windows (I realize that .NET is a subset of Windows, but as a practical matter is pretty well tied to it). The distinction of running on .NET is one that most won’t bother to make. Perhaps someone out there can offer an explanation.