
Let’s Have a Frank Discussion About Complexity December 7, 2017

Posted by Peter Varhol in Algorithms, Machine Learning, Strategy, Uncategorized.

And let’s start with human memory.  “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” is one of the most highly cited papers in psychology.  The title is rhetorical, of course; there is nothing magical about the number seven.  But the paper and the psychological studies that followed it map out real limits on the human mind’s capacity to process increasingly complex information.

The short answer is that the human mind is a wonderful mechanism for some types of processing.  We can very rapidly process a large number of sensory inputs and draw very quick but not terribly accurate conclusions (Kahneman’s System 1 thinking), but we can’t take in an overwhelming amount of quantitative data and expect to make any sense of it.

In discussing machine learning systems, I often say that we as humans have too much data to reliably process ourselves.  So we set (mostly artificial) boundaries that let us ignore a large amount of data, so that we can pay attention when the data clearly signify a change in the status quo.

The point is that I don’t think there is a way for humans to deal directly with a lot of complexity.  And if we employ systems to evaluate that complexity and present it in human-understandable concepts, we are necessarily losing information in the process.

This, I think, is a corollary of Joel Spolsky’s Law of Leaky Abstractions, which says that anytime you abstract away from what is really happening with hardware and software, you lose information.  In many cases, that information is fairly trivial, but in some cases, it is critically valuable.  If we miss it, it can cause a serious problem.

While Joel was describing abstraction in a technical sense, I think his law applies beyond that.  Any time you add layers of abstraction in order to better understand a scenario, you necessarily lose information.  We look at the Dow Jones Industrial Average as a measure of the stock market, for example, rather than minutely examining every stock traded on the New York Stock Exchange.

That’s not a bad thing.  Abstraction makes it possible for us to better comprehend the world around us.

But it also means that we are losing information.  Most times, that’s not a disaster.  Sometimes it can lead us to disastrously bad decisions.

So what is the answer?  Well, abstract, but doubt.  And verify.


Are Engineering and Ethics Orthogonal Concepts? November 18, 2017

Posted by Peter Varhol in Algorithms, Technology and Culture.

Let me explain through example.  Facebook has a “fake news” problem.  Users sign up for a free account, then post, well, just about anything.  If it violates Facebook’s rules, the platform generally relies on users to report, although Facebook also has teams of editors and is increasingly using machine learning techniques to try to (emphasis on try) be proactive about flagging content.

(Developing machine learning algorithms is a capital expense, after all, while employing people is an operational one.  But I digress.)

But something can be clearly false while not violating Facebook guidelines.  Facebook is in the very early stages of attempting to authenticate the veracity of news (it will take many years, if it can be done at all), but it almost certainly won’t remove that content.  It will be flagged as possibly false, but still available for those who want to consume it.

It used to be that we as a society confined our fake news to outlets such as The Globe or the National Enquirer, tabloid papers typically sold at check-out lines at supermarkets.  Content was mostly about entertainment personalities, and consumption was limited to those who bothered to purchase it.

Now, however, anyone can be a publisher*.  And can publish anything.  Even at reputable news sources, copy editors and fact checkers have gone the way of the dodo bird.

It gets worse.  Now entire companies exist to write and publish fake news and outrageous views online.  Thanks to Google’s ad placement strategy, the more successful ones may actually get paid by Google to do so.

By orthogonal, I don’t mean contradictory.  At the fundamental level, orthogonal means “at right angles to.”  Variables that are orthogonal are statistically independent, in that changes in one don’t at all affect the other.

So let’s translate that to my point here.  Facebook, Google, and the others don’t see this as a societal problem, which is difficult and messy.  Rather they see it entirely as an engineering problem, solvable with the appropriate application of high technology.

At best, it’s both.  At worst, it is entirely a societal problem, to be solved with an appropriate (and messy) application of understanding, negotiation, and compromise.  That’s not Silicon Valley’s strong suit.

So they try to address it with their strength, rather than acknowledging that their societal skills as they exist today are inadequate to the immense task.  I would be happy to wait, if Silicon Valley showed any inclination to acknowledge this and try to develop those skills, but all I hear is crickets chirping.

These are very smart people, certainly smarter than me.  One can hope that age and wisdom will help them recognize and overcome their blind spots.  One can hope, can’t one?

*(Disclaimer:  I mostly publish my opinions on my blog.  When I use a fact, I try to verify it.  However, as I don’t make any money from this blog, I may occasionally cite something I believe to be a fact, but is actually wrong.  I apologize.)

In the Clutch September 28, 2017

Posted by Peter Varhol in Algorithms, Machine Learning, Software development, Technology and Culture.

I wrote a little while back about how some people are able to recognize the importance of the right decision or action in a given situation, and respond in a positive fashion.  We often call that delivering in the clutch.  This is as opposed to machine intelligence, which at least right now is not equipped to recognize that one event in a sequence matters more than another, much less respond to it.

The question is whether these systems will ever be able to tell that a particular event has outsized importance, and if they can, whether they can use this information to, um, try harder.

I have no doubt that we will be able to come up with metrics that can inform a machine learning system of a particularly critical event or events.  Taking an example from Moneyball of an at-bat, we can incorporate the inning, score, number of hits, and so on.  In other problem domains, such as application monitoring, we may not yet be collecting the data that we need, but given a little thought and creativity, I’m sure we can do so.
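To make that concrete, here is a minimal sketch (mine, not anything from Moneyball itself) of how situational inputs like inning, score, and runners on base could be collapsed into a single “criticality” signal that a learning system could consume.  The feature names, formula, and weights are all invented for illustration:

```python
# Hypothetical sketch: scoring how "critical" an at-bat is, so a
# learning system could at least be told that an event matters.

def leverage_score(inning: int, score_diff: int, runners_on: int, outs: int) -> float:
    """Rough leverage heuristic: late innings, close scores, runners on
    base, and dwindling outs all raise the stakes of a single at-bat."""
    late = max(0, inning - 6) / 3               # 0 in innings 1-6, up to 1 in the 9th
    close = max(0.0, 1 - abs(score_diff) / 4)   # 1 when tied, 0 when up/down by 4+
    threat = runners_on / 3                     # 0..1 for 0..3 runners
    urgency = outs / 2                          # more outs, fewer chances left
    return round(late * 0.4 + close * 0.3 + threat * 0.2 + urgency * 0.1, 3)

# A 9th-inning, tie-game at-bat with the bases loaded scores 1.0;
# a 2nd-inning at-bat in a blowout scores 0.0.
print(leverage_score(inning=9, score_diff=0, runners_on=3, outs=2))
print(leverage_score(inning=2, score_diff=5, runners_on=0, outs=0))
```

Such a score could be appended to each training example, giving the model at least a numeric notion of which events are the important ones.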

But I have difficulty imagining that machine learning systems will be able to rise to the occasion.  There is simply no mechanism in computer programming for that to happen.  You don’t save your best algorithms for important events; you use them all the time.  For a long-running computation, it may be helpful to add to the server farm, so you can finish more quickly or process more data, but most learning systems won’t be able or equipped to do that.

But code is not intelligence.  Algorithms cannot feel a sense of urgency to perform at the highest level; they are already performing at the highest level of which they are capable.

To be fair, at some indeterminate point in the future, it may be possible for algorithms to detect the need for new code pathways, and call subroutines to make those pathways a reality (or ask for humans to program them).  They may recognize that a particular result is suboptimal, and “ask” for additional data to make it better.  But why would that happen only for critical events?  We would create our systems to do that for any event.

Today, we don’t live in the world of Asimov’s positronic brains and the Three Laws of Robotics.  It will be a while before science is at that point, if ever.

Is this where human achievement can perform better than an algorithm?  Possibly, if we have the requisite human expertise.  There are a number of well-known examples where humans have had to take over when machines failed, some successfully, some unsuccessfully.  But the human has to be there, and has to be equipped professionally and mentally to do so.  That is why I am a strong believer in the human in the loop.

SpamCast on Machine Learning September 20, 2017

Posted by Peter Varhol in Software platforms.

Not really spam, of course, but Software Process and Measurement, the weekly podcast from Tom Cagley, whom I met at the QUEST conference this past spring.  This turned out surprisingly well, and Tom posted it this past weekend.  If you have a few minutes, listen in.  It’s a good introduction to machine learning and the issues of testing machine learning systems, as well as skills needed to understand and work with these systems.  http://spamcast.libsyn.com/spamcast-460-peter-varhol-machine-learning-ai-testing-careers

What Brought About our AI Revolution? July 22, 2017

Posted by Peter Varhol in Algorithms, Software development, Software platforms.

Circa 1990, I was a computer science graduate student, writing forward-chaining rules in Lisp for AI applications.  We had Symbolics Lisp workstations, but I did most of my coding on my Mac, using ExperLisp or the wonderful XLISP written by my friend and colleague David Betz.

Lisp was convoluted to work with, and in general rules-based systems required that an expert be available to develop the rules.  It turns out that it’s very difficult for any human expert to describe in rules how they arrived at a particular answer.  And those rules generally couldn’t take into account any data that might help the system learn and refine itself over time.

As a result, most rules-based systems fell by the wayside.  While they could work for discrete problems where the steps to a conclusion were clearly defined, they weren’t very useful when the problem domain was ambiguous or there was no clear yes or no answer.
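For readers who never met one, the core loop of a forward-chaining system fits in a few lines.  This toy version is in Python rather than Lisp, and the facts and rules are invented purely for illustration:

```python
# Minimal forward-chaining sketch: rules fire when all of their
# premises are in working memory, adding their conclusion, until no
# rule can fire any longer (a fixpoint).

def forward_chain(facts: set, rules: list) -> set:
    """Repeatedly apply (premises, conclusion) rules until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented example rules: note the second rule chains off the first.
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]
print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules))
```

The brittleness is visible even at this scale: every conclusion the system can ever reach has to be anticipated by whoever wrote the rules.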

A couple of years later I moved on to working with neural networks.  Neural networks require data for training purposes.  These systems are made up of layered networks of equations (I used mostly fairly simple polynomial expressions, but sometimes the algorithms can get pretty sophisticated) that adapt based on known inputs and outputs.

Neural networks have the advantage of obtaining their expertise through the application of actual data.  However, due to the multiple layers of algorithms, it is usually impossible to determine how the system arrives at the answers it does.
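A minimal sketch of what “layered networks of equations that adapt based on known inputs and outputs” looks like in code, using NumPy and the classic XOR toy problem (my choice of example here, not one from my old work):

```python
# A tiny two-layer network trained by gradient descent on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)       # XOR targets

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)                         # hidden layer
    out = sigmoid(h @ W2)                       # output layer
    # Backpropagate the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= h.T @ d_out
    W1 -= X.T @ d_h

print(np.round(out.ravel(), 2))                 # trained predictions for the four inputs
```

The opacity I mention above is visible even here: after training, the “knowledge” is smeared across eight weights in W1 and four in W2, and nothing in those numbers explains why a given answer came out.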

Recently I presented on machine learning at the QUEST Conference in Chicago and at Expo:QA in Spain.  In interacting with the attendees, I realized something.  While some data scientists tend to use more complex algorithms today, the techniques involved in neural networks for machine learning are pretty much the same as they were when I was doing it, now 25 years ago.

So why are we having the explosion in machine learning, AI, and intelligent systems today?  When I was asked that question recently, I realized that there was only one possible answer.

Computer processing speeds continue to follow Moore’s Law (more or less), especially for floating point SIMD/parallel processing operations.  Moore’s Law doesn’t directly address speed or performance, but there is a strong correlation.  And processors today are fast enough to execute complex algorithms with data applied in parallel.  Some companies, like Nvidia, have wonderful GPUs that turn out to work very well with this type of problem.  Others, like Intel, have released an entire processor line dedicated to AI algorithms.

In other words, what has happened is that the hardware caught up to the software.  The software (and mathematical) techniques are fundamentally the same, but now the machine learning systems can run fast enough to actually be useful.
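The difference is easy to demonstrate even on a laptop: the same multiply-accumulate computed one scalar at a time versus handed off to vectorized, SIMD-friendly routines.  A rough sketch (exact timings will vary by machine):

```python
# The same dot product, scalar-at-a-time versus vectorized.
import time
import numpy as np

a = np.random.rand(200_000)
b = np.random.rand(200_000)

t0 = time.perf_counter()
dot_loop = 0.0
for i in range(len(a)):          # one scalar operation per step
    dot_loop += a[i] * b[i]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
dot_vec = float(a @ b)           # delegated to optimized BLAS routines
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.5f}s")
```

The two answers agree, but the vectorized version is typically orders of magnitude faster, and that gap is precisely what makes training-by-repeated-matrix-multiply practical today when it wasn’t in 1990.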

Analytics Don’t Apply in the Clutch June 21, 2017

Posted by Peter Varhol in Architectures, Strategy, Technology and Culture.

I was 13 years old, at Forbes Field, and rose with the crowd as Roberto Clemente hit a walk-off home run in the ninth inning to win an important game in the 1971 World Series hunt.  Clemente was a very good hitter for average, but had relatively few home runs.  He delivered in the clutch, as we say.

Moneyball ultimately works in baseball because of the importance of individual achievement in the outcome of games, and because of the length of the season.  A 162-game schedule enables carefully thought-out probabilities to win out over the long haul.

But teams practicing Moneyball learned that analytics weren’t enough once you got into the postseason.  Here’s the problem.  Probabilities are just that; they indicate a tendency or a trend over time, but don’t effectively predict the result of an individual event in that time series.  Teams such as the Boston Red Sox were World Series winners because they both practiced Moneyball and had high-priced stars proven to deliver results when the game was on the line.

Machine learning and advanced analytics have characteristics in common with Moneyball.  They provide you with the best answer, based on the application of the algorithms and the data used to train them.  Most of the time, that answer is correct within acceptable limits.  Occasionally, it is not.  That failure may simply be an annoyance, or it may have catastrophic consequences.
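A quick simulation illustrates the gap between a tendency over time and a single event; the batting averages here are invented for illustration:

```python
# A probability describes a tendency over many trials, not the
# outcome of any single one.
import random

random.seed(42)

def season_hits(avg: float, at_bats: int = 600) -> int:
    """Simulate a season's hit total for a hitter with the given average."""
    return sum(random.random() < avg for _ in range(at_bats))

good, ok = 0.300, 0.260
# Over a full season, the better hitter almost always ends up ahead...
seasons_won = sum(season_hits(good) > season_hits(ok) for _ in range(1000))
# ...but in any single at-bat, he still fails roughly 70% of the time.
single = sum(random.random() < good for _ in range(1000))

print(f"better hitter out-hits the other over a season: {seasons_won / 10:.0f}% of simulations")
print(f"single at-bat success rate: {single / 10:.0f}%")
```

That is the Moneyball problem in miniature: the aggregate is nearly deterministic, while the individual event, the one the game hinges on, remains mostly chance.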

I have disparaged Nicholas Carr in these pages in the past.  My opinion of him changed radically as I watched his keynote address at the Conference for the Association of Software Testing in 2016 (this talk is similar).  In a nutshell, Carr says that we can’t automate, and trust that automation, without first having experience with the activity itself.  Simply, we can’t automate something that we can’t do ourselves.

All events are not created equal.  Many are routine, but a few might have significant consequences.  But analytics and AI treat all events within their problem domain as the same.  The human knows the difference, and can rise to the occasion with a higher probability than any learning system.

Learning systems are great.  On average, they will produce better results than a human over time.  However, the human is more likely to deliver when it counts.

Has Moneyball Killed Baseball? June 20, 2017

Posted by Peter Varhol in Education, Publishing, Strategy.

Moneyball was a revelation to me.  It taught me that the experts could not effectively evaluate talent, and opened my own mind to the biases found in software development, testing, and team building.  Some of my best conference presentations and articles have been in this area.

But while Moneyball helped the Oakland Athletics, and eventually some other teams, it seems to be well on its way to killing the sport.  I’ve never been a big sports fan, but there were few other activities that could command the attention of a 12-year old in the late 1960s.

I grew up in the Pittsburgh area, and while I was too young to see the dramatic Bill Mazeroski home run in the 1960 World Series, I did see the heroics of Roberto Clemente and Willie Stargell in the 1971 World Series (my sister was the administrative assistant at the church in Wilmington NC where Stargell had his funeral).  I lived in Baltimore when the Pirates won a Game 7 in dramatic fashion in 1979 (Steve Blass at the helm for his third game of the series, with Dave Giusti in relief).

But baseball has changed, and not in a good way.  Today, Moneyball has produced teams that focus on discrete outcomes like strikeouts, walks, and home runs.  I doubt this was what Billy Beane wanted to happen.  That makes baseball boring.  The game now lacks much of the strategy it was once best at.

As we move toward a world where we are increasingly using analytics to evaluate data and make decisions, we may be leaving the interesting parts of our problem domain behind.  I would like to think that machine learning and analytics are generally good for us, but perhaps they provide a crutch that ultimately makes our world less than it could be.  I hope we find a way to have the best of both.

Artificial Intelligence and the Real Kind July 11, 2016

Posted by Peter Varhol in Software development, Software platforms, Uncategorized.

Over the last couple of months, I’ve been giving a lot of thought to robots, artificial intelligence, and the potential for replacing human thought and action. A part of that comes from the announcement by the European Union that it had drafted a “bill of rights” for robots as potential cyber-citizens of a more egalitarian era.  A second part comes from my recent article on TechBeacon, which I titled “Testing a Moving Target”.

The computer scientist in me wants to say “bullshit”.  Computer programs do what we instruct them to do, no more and no less.  We can’t instruct them to think, because we can’t algorithmically (or in any other way) define thinking.  There is no objective or intuitive explanation for human thought.

The distinction is both real and important. Machines aren’t able to look for anything that their programmers don’t tell them to (I wanted to say “will never be able” there, but I have given up the word “never” in informed conversation).

There is, of course, the Turing Test, which purports to offer a way to determine whether you are interacting with a real person or a computer program.  In limited ways, a program (ELIZA was the first, but it was an easy trick) can fool a person.

Here is how I think human thought is different than computer programming. I can look at something seemingly unstructured, and build a structure out of it.  A computer can’t, unless I as a programmer tell it what to look for.  Sure, I can program generic learning algorithms, and have a computer run data through those algorithms to try to match it up as closely as possible.  I can run an almost infinite number of training sequences, as long as I have enough data on how the system is supposed to behave.
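Here is a sketch of what one of those generic learning algorithms looks like: k-means clustering finds groups in data it was never told the meaning of, but a human still has to say what the groups signify.  The data and parameters below are invented for illustration:

```python
# k-means on 1-D data: the algorithm finds structure, but only the
# structure its programmer told it to look for (k groups by distance).
import random

random.seed(1)
# Two invented clusters of points; the algorithm is not told there are two.
data = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(10, 1) for _ in range(50)]

def kmeans_1d(points, k=2, iters=20):
    """Plain k-means on 1-D points, seeded deterministically from the extremes."""
    centers = [min(points), max(points)]        # simple init; assumes k == 2
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                        # assign each point to nearest center
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(g) / len(g) if g else centers[i]   # recompute centers
                   for i, g in enumerate(groups)]
    return sorted(centers)

centers = kmeans_1d(data)
print([round(c, 1) for c in centers])
```

The algorithm dutifully recovers the two groups, but it can only ever answer the question it was built to ask; deciding that grouping was the right question, and what the groups mean, remains a human act.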

Of course, as a human I need the imagination and experience to see patterns that may be hidden, and that others can’t see. Is that really any different than algorithm training (yes, I’m attempting to undercut my own argument)?

I would argue yes. Our intelligence is not derived from thousands of interactions with training data.  Rather, well, we don’t really know where it comes from.  I’ll offer a guess that it comes from a period of time in which we observe and make connections between very disparate bits of information.  Sure, the neurons and synapses in our brain may bear a surface resemblance to the algorithms of a neural network, and some talent accrues through repetition, but I don’t think intelligence necessarily works that way.

All that said, I am very hesitant to declare that machine intelligence may not one day equal the human kind. Machines have certain advantages over us, such as incredible and accessible data storage capabilities, as well as almost infinite computing power that doesn’t have to be used on consciousness (or will it?).  But at least today and for the foreseeable future, machine intelligence is likely to be distinguishable from the organic kind.