Artificial Intelligence and the Real Kind
July 11, 2016 | Posted by Peter Varhol in Software development, Software platforms, Uncategorized.
Tags: artificial intelligence, Machine Learning
Over the last couple of months, I’ve been giving a lot of thought to robots, artificial intelligence, and the potential for replacing human thought and action. A part of that comes from the announcement by the European Union that it had drafted a “bill of rights” for robots as potential cyber-citizens of a more egalitarian era. A second part comes from my recent article on TechBeacon, which I titled “Testing a Moving Target”.
The computer scientist in me wants to call bullshit. Computer programs do what we instruct them to do, no more and no less. We can’t instruct them to think, because we can’t algorithmically (or in any other way) define thinking. There is no objective or intuitive explanation for human thought.
The distinction is both real and important. Machines aren’t able to look for anything that their programmers don’t tell them to (I wanted to say “will never be able” there, but I have given up the word “never” in informed conversation).
There is, of course, the Turing Test, which purports to offer a way to determine whether you are interacting with a real person or a computer program. In limited ways, a program (Eliza was the first, though it was an easy trick) can fool a person.
Here is how I think human thought differs from computer programming. I can look at something seemingly unstructured and build a structure out of it. A computer can’t, unless I as a programmer tell it what to look for. Sure, I can program generic learning algorithms, and have a computer run data through those algorithms to try to match it up as closely as possible. I can run an almost infinite number of training sequences, as long as I have enough data on how the system is supposed to behave.
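To make that last point concrete, here is a minimal sketch of running data through a generic learning algorithm until it matches the known answers; the toy data and the scikit-learn model are chosen purely for illustration.

    # Hedged sketch: a "generic learning algorithm" matching inputs to known answers.
    # The data, labels, and choice of model are made up for illustration.
    from sklearn.tree import DecisionTreeClassifier

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # only the structure the programmer chose to expose
    y = [0, 1, 1, 0]                        # how the system is "supposed to behave"

    model = DecisionTreeClassifier()
    model.fit(X, y)                         # training: match the data as closely as possible

    print(model.predict([[1, 0]]))          # reproduces the pattern it was shown, nothing more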
Of course, as a human I need the imagination and experience to see patterns that may be hidden, and that others can’t see. Is that really any different than algorithm training (yes, I’m attempting to undercut my own argument)?
I would argue yes. Our intelligence is not derived from thousands of interactions with training data. Rather, well, we don’t really know where it comes from. I’ll offer a guess that it comes from a period of time in which we observe and make connections between very disparate bits of information. Sure, the neurons and synapses in our brain may bear a surface resemblance to the algorithms of a neural network, and some talent accrues through repetition, but I don’t think intelligence necessarily works that way.
All that said, I am very hesitant to declare that machine intelligence may not one day equal the human kind. Machines have certain advantages over us, such as incredible and accessible data storage capabilities, as well as almost infinite computing power that doesn’t have to be used on consciousness (or will it?). But at least today and for the foreseeable future, machine intelligence is likely to be distinguishable from the organic kind.
Some Hard Questions About Building a Team
April 23, 2016 | Posted by Peter Varhol in Software development, Technology and Culture.
Tags: meritocracy, team
I feel a certain kinship with Pieter Hintjens. From his blog, it sounds like his diagnosis was similar to mine, last year. My diagnosis was wrong, and I declined surgery. In the same universe, he had the Whipple procedure, and has had at least several years of life tacked onto the end of his existence. And they seem to have been productive years, in a professional and personal sense, although it sounds like he may have little time left on the mortal plane.
But, reading other posts of his, I would hesitate to place myself firmly in his camp. Among his posts, on the viability of GitHub moving forward:
“. . . a climate in which political outsiders use the weapons of gender and race against meritocracy.”
So what is meritocracy? And what are the weapons of gender and race?
There was a time when I believed in strict meritocracy, like it was something that was easily definable and measurable. Age and experience have cured me of that delusion. In fact, we can’t define meritocracy in any way that doesn’t include our own biases.
Let me explain. Certainly, we can devise a test to determine who is the best at a particular skill. Or can we?
I spent my formative years studying psychology, which is where I was introduced to the concept of bias. We have these things called IQ tests, which purport to measure innate intelligence. Or something like that. But whatever we are measuring is the end product of our own biases about what comprises intelligence. There is a question on the standard IQ test: What color is a banana? Seems straightforward. But to someone who grew up with spoiled bananas, or no bananas at all, or who is color-blind, the question becomes problematic. Irrespective of intelligence.
I would not bet on a team that had the ten best programmers. I would bet on a team that worked as a team, with strengths and weaknesses. To compensate for the weaknesses, we need different points of view. To get different points of view, we need team members that are different, yet are cohesive. That is harder, and we shy away from harder.
Yes, there are people who, in their ignorance or incompetence, brandish gender and race maxims as teleology. And yes, they are wrong in a fundamental sense. And it is unfortunate that we have to endure them.
But that doesn’t mean that there is not value here. We are lazy. We ascribe success to intelligence, or ability. I say no. Success means having teams with complementary skills, not the best skills necessarily, but skills that offer the best chance of working effectively together.
How can we tell the difference between real value and political one-upmanship? Ah, that is the rub. I won’t pretend to be able to do so. But I do know that choosing the ten best pure coders is a recipe for failure.
Testing and the Aftermath of Brussels
March 26, 2016 | Posted by Peter Varhol in Software development.
Tags: terror, testing
My collaborator Gerie Owen and I had the honor and privilege of speaking twice at the Belgium Testing Days conference. We are very familiar with the areas that suffered from indiscriminate terrorist bombings earlier this week, and our hearts go out to the victims and survivors.
Much can be said about this. I will limit my comments to its impact on the testing community in Belgium and in Europe in general. While the organizers of Belgium Testing Days have confirmed to us that they and their loved ones are well and safe, it seems like any organization that is trying to bring people together in Belgium, and to perhaps a lesser extent in other parts of Europe, is at risk.
And that sets back the testing community immensely. One of the important things that we do as a profession is to gather people of similar interest and enthusiasm to exchange knowledge and ideas to advance the field. It has been my perception that Belgium Testing Days was an important part of that exchange in the European testing community.
No matter where we work or live, we are influenced by outside events. In this case, these events can stunt the growth of our community, and the professional development of individuals.
Gathering in large groups may not be what we want to do right now, because of the risk of being involved in a terror attack. We all make our own decisions, of course. But I am going to continue to participate in testing and DevOps conferences, in Europe and beyond.
No, AI Cannot Solve the World’s Biggest Problems
February 23, 2016 | Posted by Peter Varhol in Software development, Software platforms.
Tags: Google Brain, neural networks
‘AI can solve world’s biggest problems’ – Google Brain engineer.
The headline screamed for a response. The type of work being done with Google Brain is impressive, and it relies on neural networks, which are cool things. I was recently reminded by a new LinkedIn connection of the work and writing that I did on neural networks in the 1990s.
Neural networks are exceptionally good algorithms. They use multiple layers of nonlinear equations, running in parallel and serially, that can approximate a given response to a set of data inputs, then adjust their variables so that each successive pass results in a slightly better approximation (or it may not; that’s just the way nonlinear algorithms work sometimes).
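For the curious, here is a minimal sketch of that mechanism in modern NumPy; the data, network size, and learning rate are made-up illustrations, not anything from my own projects:

    # Hedged sketch: a tiny network of nonlinear units, where each pass over the
    # data adjusts the weights toward a slightly better approximation.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, (64, 2))                   # made-up inputs
    y = (X[:, :1] * X[:, 1:] > 0).astype(float)           # target: do the two inputs share a sign?

    W1, b1 = rng.normal(0.0, 0.5, (2, 8)), np.zeros(8)    # hidden layer of 8 nonlinear units
    W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)    # output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(5000):                   # successive passes over the same data
        hidden = sigmoid(X @ W1 + b1)          # nonlinear equations evaluated in parallel
        out = sigmoid(hidden @ W2 + b2)        # the current approximation of the response
        err = out - y
        # Adjust the variables (weights) so the next pass fits a little better.
        d_out = err * out * (1.0 - out)
        d_hidden = (d_out @ W2.T) * hidden * (1.0 - hidden)
        W2 -= lr * hidden.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_hidden / len(X)
        b1 -= lr * d_hidden.mean(axis=0)

    print("mean absolute error after training:", float(np.abs(out - y).mean()))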
All of that may sound like just so much gobbledygook, but it’s not. In the 1990s, I used neural networks to design a set of algorithms to power an electronic wind sensor. In my aborted doctoral research, I was using neural networks to help dynamically determine the most efficient routings through large-scale networks (such as the nascent Internet at the time).
Let’s be clear. What I did had nothing to do with the world’s biggest problems. The world’s biggest problems don’t already have mountains of data that point to the correct answer. In fact, they rarely, if ever, have correct answers. What they have is a combination of analysis, guesswork, and the many compromises made by dozens of competing interests. And they will never result in even close to a best answer. But sometimes they produce a workable one.
From my own experiences, I came to believe that having an instinctive feel for the nature and behavior of your data gave you a leg up in defining your neural network. And they tended to work best when you could carefully bound the problem domain. That is hardly the stuff of something that can solve the world’s most difficult problems.
Neural networks can seemingly work magic, but only when you have plenty of data, and already know the right answers to different combinations of that data. And even then, you are unlikely to get the best possible solution.
What Are We Doing With AI and Machine Learning?
February 12, 2016 | Posted by Peter Varhol in Software development, Uncategorized.
Tags: AI, Machine Learning, testing
When I was in graduate school, I studied artificial intelligence (AI), as a means for enabling computers to make decisions and to identify images using symbolic computers and functional languages. It turned out that there were a number of things wrong with this approach, especially twenty-five years ago. Computers weren’t fast enough, and we were attacking the wrong problems.
But necessity is the mother of invention. Today, AI and machine learning are being used in what is being called predictive analytics. In a nutshell, it’s not enough to react to an application failure. Applications are complex to diagnose and repair, and any downtime on a critical application costs money and could harm people. Simply, we are no longer in a position to allow applications to fail.
Today we have the data and analysis available to measure baseline characteristics of an application, and to look for trends in a continual, real-time analysis of that data. We want to be able to predict if an application is beginning to fail, and we can use the data to diagnose just what is failing, so that the team can work on fixing it before something goes wrong.
What kind of data am I talking about? Have you ever looked at Perfmon on your computer? In a console window, simply type perfmon at the command prompt. You will find a tool that lets you collect and plot an amazing number of different system and application characteristics. Common ones are CPU utilization, network traffic, disk transfers, and page faults, but there are literally hundreds more.
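As an illustration, and not a production monitor, here is a minimal sketch that samples a couple of the same kinds of counters from Python using the psutil library, builds a rough baseline, and flags readings that drift well away from it; the counters, window size, and threshold are my own illustrative choices:

    # Hedged sketch: baseline a few health counters, then flag unusual drift.
    import time
    import statistics
    import psutil

    def sample():
        """One reading of a couple of system health counters."""
        return {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "mem_percent": psutil.virtual_memory().percent,
        }

    # Establish a baseline from an initial observation window.
    window = [sample() for _ in range(30)]
    baseline = {k: statistics.mean(s[k] for s in window) for k in window[0]}
    spread = {k: statistics.stdev(s[k] for s in window) or 1.0 for k in window[0]}

    # Continual check: warn when a reading drifts well past its baseline.
    while True:
        reading = sample()
        for name, value in reading.items():
            if abs(value - baseline[name]) > 3 * spread[name]:
                print(f"warning: {name}={value:.1f}, baseline {baseline[name]:.1f}")
        time.sleep(5)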
This is a Big Data sort of thing; a server farm can generate terabytes of log and other health data every day. It is also a DevOps initiative. We need tools to be able to aggregate and analyze the data, and to present it in a format understandable by humans (at the top level, usually a dashboard of some sort).
How does testing fit in? Well, we’ve typically been very siloed – dev, test, ops, network, security, etc. A key facet of DevOps is to get these silos working together as one team. And that may mean that testing has responsibilities after deployment as well as before. They may establish the health baseline during the testing process, and also be the ones to monitor that health during production.
Microsoft, Phone Home
July 12, 2015 | Posted by Peter Varhol in Software development, Software platforms.
Tags: Computerworld, SJVN, Windows Phone
I found this article on Computerworld about Microsoft’s retrenchment from Nokia’s phone business a worthwhile read, because of the company’s clear acknowledgment that it should never have spent billions of dollars to be in the handset business. Or, at best, perhaps it should have, but it didn’t have the will to turn a declining acquired business into a successful one.
I was nonplussed at the vitriol spewed in the comments section, personally attacking Steven J. Vaughan-Nichols in more ways than I cared to see. I was expecting to find an intelligent debate on the merits of his article; instead, it was pretty much a trashing of his take on the topic.
To be fair, I think I may have hired SJVN for his first freelance assignment, circa 1990. And while the effort over the years was entirely his own, I am proud of the way that he has developed himself into a good and technical journalist in an era where neither quality is respected. Yes, he is a lifelong Linux guy, but that doesn’t make him a bad guy.
And I have to pretty much agree with him. That’s not an anti-Microsoft bias. That’s simply an acknowledgement that, despite spending tens of billions of dollars over a period of well over ten years, Microsoft still seems to lack the ability to commercialize its mobile software technology.
I confess that at one level I don’t understand its problem. Microsoft is technically very good, and in particular I love Cortana. Microsoft does really cool stuff, but outside of the enterprise realm it seems to have lost the will to market effectively.
I will say that a part of the company’s problem is its obsession with the same look and feel across all devices. Windows on the desktop should never have translated to Windows on the phone. I had the opportunity to use a Microsoft Surface for several months, and while I admire the hardware design, the look and feel is far too crowded for the screen real estate. You may argue that it was intended to be used with an external monitor, but it’s mobile, and the UI just doesn’t work for its screen form factor.
But there’s more than that, and I think it is a corporate culture thing, rather than a technology thing. Microsoft as a 100,000+ employee company simply has desktop too embedded into its DNA to pivot to anything else. Perhaps Nadella can change that, over time.
Some of the commenters lump Embedded and CE and Pocket PC and Surface in the same category in an attempt to say that Microsoft has been enormously successful in mobile devices, but that’s at best disingenuous, and at worst delusional.
I’m getting ready to buy about my fifteenth Windows PC (mostly laptops) in the last 20 years. I have mixed opinions of Microsoft’s technology during that time, but it is a necessity for many users (very much including me). I fully accept Microsoft and its role in computing, but I’m neither a cheerleader nor a detractor.
So, while SJVN’s biases are evident, it is still a worthwhile article. And while the commenters’ biases are even more evident, they do not in any way add to the debate.
Should Coders Learn to Test?
July 8, 2015 | Posted by Peter Varhol in Software development.
Tags: coders, testing
This topic occurred to me in response to another posting that asked if testers should learn to code, and is a follow-on to my previous post on the unique skill set that software testers possess. If we seriously question whether testers should learn to code (and most opinions fall on the ‘yes’ side of this question), then it is relevant to ask the corresponding question.
Wait a minute, I hear you thinking. That’s a stupid question. Coders already know how to test; they are coders, after all. Testing is simply a subset of coding, the act of making sure that the code that they have written works properly.
It’s nice to believe that coders know how to test because they know how to code, but that’s fallacious reasoning. First, it’s a different skill set entirely. Coding is highly detail-oriented and focused on making sure the code is grammatically and logically correct. Testing asks whether that code covers all requirements, is usable, and is fit for its intended purpose.
Second, while it’s a cliché that coders can’t test their own code, that doesn’t make it any less true. I’ve tested my own code, and I shy away from edge cases or unusual inputs. Testers bravely go where coders don’t.
Third, testers do much more than checking code for logic bugs. Because of their broad mandate encompassing correctness as well as domain expertise and information from end users, testers must be both detail-oriented and focused on the end result, yet able to be flexible in terms of their goals.
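To make the second point concrete, here is a small, entirely hypothetical pytest sketch of the kinds of edge cases and unusual inputs a tester reaches for; the function under test and the cases are invented for the example:

    # Hedged sketch: edge cases and unusual inputs for a hypothetical function.
    import pytest

    def normalize_name(name: str) -> str:
        """Hypothetical code under test: collapse whitespace and title-case a name."""
        return " ".join(name.split()).title()

    @pytest.mark.parametrize("raw, expected", [
        ("ada lovelace", "Ada Lovelace"),        # the happy path a coder usually tests
        ("", ""),                                 # empty input
        ("   ", ""),                              # whitespace only
        ("  grace   hopper ", "Grace Hopper"),    # messy spacing
        ("élodie durand", "Élodie Durand"),       # non-ASCII characters
    ])
    def test_normalize_name(raw, expected):
        assert normalize_name(raw) == expected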
So should coders learn the skills of testers? The answer is, of course, it depends. We should all be learning additional skills, but we only have a finite amount of time, so there is a cost/benefit tradeoff for any professional in learning (and presumably practicing) a collateral skill. By learning how to test, coders necessarily give up time they could spend learning a new language or learning continuous integration, for example.
But there are technical advantages for coders to learn testing. At the very least, it will make them more thoughtful coders. At best, it can help them write better code, as they become experienced with knowing what testers look for.
If coders do decide to learn to test, they have to give up certain biases to effectively learn the skill set. Here are just a few of those biases.
- Users would never do that! That’s simply not true. Users will do everything you can imagine and many things you can’t.
- It’s not a bug, it’s a feature. Stop arguing, and start working together to determine if it’s a feature needed by the users.
- Testers just slow us down. That may be true in some cases, but most of the time testers speed up the application delivery process. If they seem to slow you down, perhaps it’s because developers didn’t do their jobs right to begin with.
Probably the best way for coders to learn how to test is to do pair-testing with an experienced tester. Participating in testing with someone who already has the skill set will help a developer learn how to spot weaknesses in their code, how end users approach their tasks with software, and how to assess risks and determine testing strategies.
The goal isn’t to turn coders into testers, but to make them better coders. For many coders, that’s a skill worth having.
Software Testing is a State of Mind
July 2, 2015 | Posted by Peter Varhol in Software development.
Tags: coding, skills, testing
I was prompted to write this by yet another post on whether or not testers should learn to code. While it gave cogent arguments on both sides, it (prematurely, I believe) concluded that coding is a fundamental skill for testers, and discussed how testers could develop their coding skills.
The reality is much more nuanced. There are different types of testers. An automation engineer is likely to be coding, or scripting, on a daily basis. A functional tester using an automated testing tool (commonly Selenium, or perhaps a commercial one) will write or modify scripts generated by the tool.
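For context, here is a minimal sketch of the kind of script such a tester might write or modify; the page, element IDs, and credentials are hypothetical, and it assumes Selenium with a locally available Chrome driver:

    # Hedged sketch: a simple functional test script of the sort a tester might maintain.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")                  # hypothetical page
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # Functional check: did the login land on the dashboard?
        assert "Dashboard" in driver.title
    finally:
        driver.quit()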
And in general, we try to automate repeatable processes. Often this can be done with customizable workflows in an ALM product, but there might be some amount of scripting required.
But while coding knowledge can improve a tester’s skill set, it’s not required for all roles, and sometimes it can detract from other, more important skills. That got me thinking about the unique skill sets of testers. There are unique mental and experiential skills that testers bring to their job. The best testers intuitively recognize the skills needed, and work hard to develop them.
- Curiosity. Good testers do more than execute test cases. They look for inconsistencies or incongruities, question why, and don’t stop looking for improvements until the software is deployed.
- Logic and Discipline. Testers do very detailed work, and approach that work in a step-by-step logical fashion. Their thought processes have to be clear and sharp, and they have to move methodically from one step to the next.
- Imagination. Testers understand the user personas and put themselves in their role. The best can work through the application as both a new and experienced user, and find things no one ever considered.
- Confidence. Testers often have to present unpopular points of view. If they can do so while believing in their own skills and conclusions, while also taking into account differing points of view, they can be a successful voice for both the user and application quality.
- Dealing with Ambiguity. It’s rarely clear what a requirement says, whether the test case really addresses it, whether an issue is really an issue, and what priority it is. Testers have to be ready to create a hypothesis, and provide evidence to support or reject that hypothesis.
These tend to be different skill sets than those possessed by coders. In particular, many coders tend to focus very narrowly on their particular assignments, because of the level of detail required to understand and successfully implement their part of the application. Coders also dislike ambiguity; code either works or it doesn’t, it either satisfies a requirement or it doesn’t. Computers aren’t ambiguous, so coders can’t write code that doesn’t clearly produce a specific end result.
Coders may argue that they produce a tangible result: source code that makes an application concept a reality. The work product of testers is usually a bit more amorphous. Ideally, a tester would like to say that software meets requirements and is of high quality, in which case few defects will be found. If testers file many defect reports, it is interpreted as negative news rather than a desired result.
But organizations can’t look at testers as second class participants because of that. Testers have a unique skill set that remains essential in getting good and useful software into the hands of users. I don’t think that skill set has been very well documented to date, however. And it may not be appreciated because of that.