
SpamCast on Machine Learning September 20, 2017

Posted by Peter Varhol in Software platforms.

Not really spam, of course, but Software Process and Measurement, the weekly podcast from Tom Cagley, whom I met at the QUEST conference this past spring.  The conversation turned out surprisingly well, and Tom posted it this past weekend.  If you have a few minutes, listen in.  It’s a good introduction to machine learning and the issues of testing machine learning systems, as well as the skills needed to understand and work with these systems.  http://spamcast.libsyn.com/spamcast-460-peter-varhol-machine-learning-ai-testing-careers


What Brought About our AI Revolution? July 22, 2017

Posted by Peter Varhol in Algorithms, Software development, Software platforms.

Circa 1990, I was a computer science graduate student, writing forward-chaining rules in Lisp for AI applications.  We had Symbolics Lisp workstations, but I did most of my coding on my Mac, using ExperLisp or the wonderful XLISP written by my friend and colleague David Betz.

Lisp was convoluted to work with, and rules-based systems in general required that an expert be available to develop the rules.  It turns out that it’s very difficult for any human expert to describe in rules how they arrived at a particular answer.  And those rules generally couldn’t take into account any data that might help the system learn and refine over time.

As a result, most rules-based systems fell by the wayside.  While they could work for discrete problems where the steps to a conclusion were clearly defined, they weren’t very useful when the problem domain was ambiguous or there was no clear yes or no answer.
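To make the rules-based idea concrete, here is a minimal forward-chaining sketch in Python rather than Lisp; the facts and rules are purely illustrative, not anything I actually built back then:

```python
# A toy forward-chaining engine: a rule fires whenever all of its
# conditions are already in the fact base, adding its conclusion,
# until no rule can add anything new.
rules = [
    ({"engine_cranks", "no_start"}, "fuel_or_spark_problem"),
    ({"fuel_or_spark_problem", "fuel_gauge_empty"}, "out_of_fuel"),
]
facts = {"engine_cranks", "no_start", "fuel_gauge_empty"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the derived facts
```

The engine itself is the easy part; the hard part, as noted above, was getting a human expert to articulate rules like these in the first place.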

A couple of years later I moved on to working with neural networks.  Neural networks require data for training purposes.  These systems are made up of layered networks of equations (I used mostly fairly simple polynomial expressions, but sometimes the algorithms can get pretty sophisticated) that adapt based on known inputs and outputs.

Neural networks have the advantage of obtaining their expertise through the application of actual data.  However, due to the multiple layers of algorithms, it is usually impossible to determine how the system arrives at the answers it does.
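For the curious, here is a minimal sketch of that kind of layered, adaptive network in Python with NumPy.  It uses sigmoid units rather than the polynomial expressions I worked with, and it is a modern toy example, not a reconstruction of my old code:

```python
import numpy as np

# Toy feed-forward network: 2 inputs -> 4 hidden units -> 1 output,
# trained on XOR with plain gradient descent and backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    hidden = sigmoid(X @ W1 + b1)            # forward pass
    out = sigmoid(hidden @ W2 + b2)

    d_out = (out - y) * out * (1 - out)      # backward pass, squared-error loss
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)

    W2 -= lr * hidden.T @ d_out              # weight updates
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [0, 1, 1, 0] as training converges
```

Notice that nothing in the trained weights tells you why the network answers the way it does, which is exactly the opacity problem described above.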

Recently I presented on machine learning at the QUEST Conference in Chicago and at Expo:QA in Spain.  In interacting with the attendees, I realized something.  While some data scientists tend to use more complex algorithms today, the techniques involved in neural networks for machine learning are pretty much the same as they were when I was doing it, now 25 years ago.

So why are we having the explosion in machine learning, AI, and intelligent systems today?  When I was asked that question recently, I realized that there was only one possible answer.

Computer processing speeds continue to follow Moore’s Law (more or less), especially when we’re talking about floating-point SIMD/parallel processing operations.  Moore’s Law doesn’t directly describe speed or performance, but there is a strong correlation.  And processors today are fast enough to execute complex algorithms with data applied in parallel.  Some vendors, like Nvidia, make GPUs that turn out to work very well on this type of problem.  Others, like Intel, have released an entire processor line dedicated to AI algorithms.

In other words, what has happened is that the hardware caught up to the software.  The software (and mathematical) techniques are fundamentally the same, but now the machine learning systems can run fast enough to actually be useful.
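As a rough illustration of what that hardware speedup buys, here is a small Python/NumPy sketch comparing the same multiply-accumulate work done one element at a time versus handed off to optimized, vectorized code (the exact timings will vary by machine):

```python
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter()
total = 0.0
for x, v in zip(a, b):            # scalar: one element at a time
    total += x * v
t1 = time.perf_counter()

t2 = time.perf_counter()
vec_total = float(np.dot(a, b))   # vectorized: whole arrays at once
t3 = time.perf_counter()

print(f"loop: {t1 - t0:.3f}s   vectorized: {t3 - t2:.3f}s")
```

Neural network training is essentially this kind of arithmetic repeated billions of times, which is why GPUs and dedicated AI processors matter so much.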

Analytics Don’t Apply in the Clutch June 21, 2017

Posted by Peter Varhol in Architectures, Strategy, Technology and Culture.

I was 13 years old, at Forbes Field, and rose with the crowd as Roberto Clemente hit a walk-off home run in the ninth inning to win an important game in the 1971 World Series hunt.  Clemente was a very good hitter for average, but had relatively few home runs.  He delivered in the clutch, as we say.

Moneyball ultimately works in baseball because of the importance of individual achievement in the outcome of games, and because of the length of the season.  A 162-game season enables carefully thought-out probabilities to win out over the long haul.

But teams practicing Moneyball learned that analytics weren’t enough once you got into the postseason.  Here’s the problem.  Probabilities are just that; they indicate a tendency or a trend over time, but don’t effectively predict the result of an individual event in that time series.  Teams such as the Boston Red Sox were World Series winners because they both practiced Moneyball and had high-priced stars proven to deliver results when the game was on the line.
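A quick, hypothetical simulation makes the point: a .300 hitter is quite predictable over a season’s worth of at-bats, and completely unpredictable in any single one.

```python
import random

random.seed(1)
# Roughly a season's worth of at-bats for a .300 hitter
at_bats = [random.random() < 0.300 for _ in range(600)]

print(sum(at_bats) / len(at_bats))  # close to .300 over the long haul
print(at_bats[0])                   # any single at-bat is simply a hit or an out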

Machine learning and advanced analytics have characteristics in common with Moneyball.  They provide you with the best answer, based on the application of the algorithms and the data used to train them.  Most of the time, that answer is correct within acceptable limits.  Occasionally, it is not.  That failure may simply be an annoyance, or it may have catastrophic consequences.

I have disparaged Nicholas Carr in these pages in the past.  My opinion of him changed radically as I watched his keynote address at the Conference for the Association of Software Testing in 2016 (this talk is similar).  In a nutshell, Carr says that we can’t automate, and trust that automation, without first having experience with the activity itself.  Simply, we can’t automate something that we can’t do ourselves.

Not all events are created equal.  Many are routine, but a few might have significant consequences.  Yet analytics and AI treat all events within their problem domain the same.  The human knows the difference, and can rise to the occasion with a higher probability than any learning system.

Learning systems are great.  On average, they will produce better results than a human over time.  However, the human is more likely to deliver when it counts.

Has Moneyball Killed Baseball? June 20, 2017

Posted by Peter Varhol in Education, Publishing, Strategy.

Moneyball was a revelation to me.  It taught me that the experts could not effectively evaluate talent, and opened my own mind to the biases found in software development, testing, and team building.  Some of my best conference presentations and articles have been in this area.

But while Moneyball helped the Oakland Athletics, and eventually some other teams, it seems to be well on its way to killing the sport.  I’ve never been a big sports fan, but there were few other activities that could command the attention of a 12-year-old in the late 1960s.

I grew up in the Pittsburgh area, and while I was too young to see the dramatic Bill Mazeroski home run in the 1960 World Series, I did see the heroics of Roberto Clemente and Willie Stargell in the 1971 World Series (my sister was the administrative assistant at the church in Wilmington, NC, where Stargell’s funeral was held).  I lived in Baltimore where the Pirates won a Game 7 in dramatic fashion in 1979 (Steve Blass at the helm for his third game of the series, with Dave Giusti in relief).

But baseball has changed, and not in a good way.  Today, Moneyball has produced teams that focus on all-or-nothing outcomes like strikeouts, walks, and home runs.  I doubt this is what Billy Beane wanted to happen.  It makes baseball boring; the game now lacks much of the strategy it was once best at.

As we move toward a world where we are increasingly using analytics to evaluate data and make decisions, we may be leaving the interesting parts of our problem domain behind.  I would like to think that machine learning and analytics are generally good for us, but perhaps they provide a crutch that ultimately makes our world less than it could be.  I hope we find a way to have the best of both.

Artificial Intelligence and the Real Kind July 11, 2016

Posted by Peter Varhol in Software development, Software platforms, Uncategorized.

Over the last couple of months, I’ve been giving a lot of thought to robots, artificial intelligence, and the potential for replacing human thought and action. A part of that comes from the announcement by the European Union that it had drafted a “bill of rights” for robots as potential cyber-citizens of a more egalitarian era.  A second part comes from my recent article on TechBeacon, which I titled “Testing a Moving Target”.

The computer scientist in me wants to call bullshit. Computer programs do what we instruct them to do, no more and no less.  We can’t instruct them to think, because we can’t algorithmically (or in any other way) define thinking.  There is no objective or intuitive explanation for human thought.

The distinction is both real and important. Machines aren’t able to look for anything that their programmers don’t tell them to (I wanted to say “will never be able” there, but I have given up the word “never” in informed conversation).

There is, of course, the Turing Test, which purports to offer a way to determine whether you are interacting with a real person or a computer program.  In limited ways, a program (Eliza was the first, though it relied on an easy trick) can fool a person.

Here is how I think human thought is different than computer programming. I can look at something seemingly unstructured, and build a structure out of it.  A computer can’t, unless I as a programmer tell it what to look for.  Sure, I can program generic learning algorithms, and have a computer run data through those algorithms to try to match it up as closely as possible.  I can run an almost infinite number of training sequences, as long as I have enough data on how the system is supposed to behave.

Of course, as a human I need the imagination and experience to see patterns that may be hidden, and that others can’t see. Is that really any different than algorithm training (yes, I’m attempting to undercut my own argument)?

I would argue yes. Our intelligence is not derived from thousands of interactions with training data.  Rather, well, we don’t really know where it comes from.  I’ll offer a guess that it comes from a period of time in which we observe and make connections between very disparate bits of information.  Sure, the neurons and synapses in our brain may bear a surface resemblance to the algorithms of a neural network, and some talent accrues through repetition, but I don’t think intelligence necessarily works that way.

All that said, I am very hesitant to declare that machine intelligence may not one day equal the human kind. Machines have certain advantages over us, such as incredible and accessible data storage capabilities, as well as almost infinite computing power that doesn’t have to be used on consciousness (or will it?).  But at least today and for the foreseeable future, machine intelligence is likely to be distinguishable from the organic kind.

What Are We Doing With AI and Machine Learning? February 12, 2016

Posted by Peter Varhol in Software development, Uncategorized.

When I was in graduate school, I studied artificial intelligence (AI), as a means for enabling computers to make decisions and to identify images using symbolic computers and functional languages. It turned out that there were a number of things wrong with this approach, especially twenty-five years ago.  Computers weren’t fast enough, and we were attacking the wrong problems.

But necessity is the mother of invention. Today, AI and machine learning are being used in what is being called predictive analytics.  In a nutshell, it’s not enough to react to an application failure.  Applications are complex to diagnose and repair, and any downtime on a critical application costs money and could harm people.  Simply, we are no longer in a position to allow applications to fail.

Today we have the data and analysis available to measure baseline characteristics of an application, and to look for trends in a continual, real-time analysis of that data.  We want to be able to predict whether an application is beginning to fail, and we can use the data to diagnose just what is failing, so that the team can work on fixing it before something goes wrong.
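Here is a minimal sketch of the idea, with purely hypothetical numbers and an illustrative function name: compare a live reading against a previously established baseline and alert when it drifts too far.

```python
import statistics

def check_metric(baseline_samples, live_value, n_sigma=3.0):
    """Flag a live reading that drifts well outside the baseline."""
    mean = statistics.mean(baseline_samples)
    stdev = statistics.stdev(baseline_samples)
    if abs(live_value - mean) > n_sigma * stdev:
        return "alert: investigate before the application actually fails"
    return "ok"

# CPU utilization baseline of roughly 40 percent, then a live reading of 92
print(check_metric([38, 41, 43, 39, 40, 42], 92))
```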

What kind of data am I talking about?  Have you ever looked at Perfmon on your computer?  In a console window, simply type perfmon at the command prompt.  You will find a tool that lets you collect and plot an amazing number of different system and application characteristics.  Common ones are CPU utilization, network traffic, disk transfers, and page faults, but there are literally hundreds more.
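If you would rather collect a few counters like these programmatically, the third-party psutil package (my suggestion here, not something Perfmon requires) exposes similar numbers from Python on most platforms:

```python
import psutil  # third-party library: pip install psutil

# A rough, cross-platform stand-in for a few Perfmon-style counters.
sample = {
    "cpu_percent": psutil.cpu_percent(interval=1),             # CPU utilization
    "memory_percent": psutil.virtual_memory().percent,         # memory pressure
    "net_bytes_sent": psutil.net_io_counters().bytes_sent,     # network traffic
    "disk_read_count": psutil.disk_io_counters().read_count,   # disk transfers
}
print(sample)
```

Collected on a schedule and fed into a baseline check like the one sketched above, samples like these are the raw material that predictive analytics works from.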

This is a Big Data sort of thing; a server farm can generate terabytes of log and other health data every day.  It is also a DevOps initiative.  We need tools to aggregate and analyze the data, and present it in a format understandable by humans (at the top level, usually a dashboard of some sort).

How does testing fit in?  Well, we’ve typically been very siloed – dev, test, ops, network, security, and so on.  A key facet of DevOps is to get these silos working together as one team.  And that may mean that testing has responsibilities after deployment as well as before.  Testers may establish the health baseline during the testing process, and also be the ones to monitor that health in production.