
What Brought About our AI Revolution? July 22, 2017

Posted by Peter Varhol in Software development, Software platforms, Algorithms.

Circa 1990, I was a computer science graduate student, writing forward-chaining rules in Lisp for AI applications.  We had Symbolics Lisp workstations, but I did most of my coding on my Mac, using ExperLisp or the wonderful XLisp written by my friend and colleague David Betz.

Lisp was convoluted to work with, and rules-based systems generally required that an expert be available to develop the rules.  It turns out that it’s very difficult for any human expert to describe in rules how they arrived at a particular answer.  And those rules generally couldn’t take into account data that might help the system learn and refine over time.

As a result, most rules-based systems fell by the wayside.  While they could work for discrete problems where the steps to a conclusion were clearly defined, they weren’t very useful when the problem domain was ambiguous or there was no clear yes or no answer.
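To make the idea concrete, here is a minimal sketch of a forward-chaining engine in Python rather than Lisp.  The rules and fact names are hypothetical, invented for illustration, not from my original work: each rule fires when all of its antecedent facts are known, adding its consequent to the fact base until nothing new can be derived.

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose antecedents are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and antecedents <= facts:
                facts.add(consequent)   # rule fires: assert the consequent
                changed = True
    return facts

# Hypothetical rules: (set of antecedent facts, consequent fact)
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

print(forward_chain({"has_fever", "has_cough"}, rules))
```

The engine is easy to write; the hard part, as noted above, is getting an expert to articulate the rules in the first place.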

A couple of years later I moved on to working with neural networks.  Neural networks require data for training purposes.  These systems are made up of layered networks of equations (I used mostly fairly simple polynomial expressions, but sometimes the algorithms can get pretty sophisticated) that adapt based on known inputs and outputs.

Neural networks have the advantage of obtaining their expertise through the application of actual data.  However, due to the multiple layers of algorithms, it is usually impossible to determine how the system arrives at the answers it does.
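A toy version of that idea can be sketched in pure Python.  This is not my original code, and the network shape, learning rate, and task (XOR) are all illustrative assumptions: a small stack of simple equations whose weights adapt by gradient descent from known inputs and outputs.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 2 hidden units -> 1 output, trained on XOR.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)            # output error term
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])   # hidden error term
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
after = loss()
print(before, "->", after)
```

Notice that nothing in the trained weights explains *why* the network answers as it does, which is exactly the opacity problem described above.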

Recently I presented on machine learning at the QUEST Conference in Chicago and at Expo:QA in Spain.  In interacting with the attendees, I realized something.  While some data scientists tend to use more complex algorithms today, the techniques involved in neural networks for machine learning are pretty much the same as they were when I was doing it, now 25 years ago.

So why are we having the explosion in machine learning, AI, and intelligent systems today?  When I was asked that question recently, I realized that there was only one possible answer.

Processing speeds continue to follow Moore’s Law (more or less), especially for floating-point SIMD and parallel operations.  Moore’s Law doesn’t directly describe speed or performance, but there is a strong correlation.  And processors today are fast enough to execute complex algorithms with data applied in parallel.  Some vendors, like Nvidia, have wonderful GPUs that turn out to work very well with this type of problem.  Others, like Intel, have released an entire processor line dedicated to AI algorithms.

In other words, what has happened is that the hardware caught up to the software.  The software (and mathematical) techniques are fundamentally the same, but now the machine learning systems can run fast enough to actually be useful.
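The reason the hardware helps so much is structural: each output of a network layer is an independent dot product, so they can all be computed at once.  A rough sketch, with made-up numbers, using Python's standard thread pool (the threads here only illustrate the independence; in pure Python the GIL prevents a real speedup, which is exactly why GPUs and SIMD units matter):

```python
from concurrent.futures import ThreadPoolExecutor

def dot(row, x):
    return sum(r * v for r, v in zip(row, x))

weights = [[1, 2], [3, 4], [5, 6]]   # 3 output units, 2 inputs (illustrative)
x = [10, 20]

# Each output depends only on its own weight row -- embarrassingly parallel.
with ThreadPoolExecutor() as pool:
    outputs = list(pool.map(lambda row: dot(row, x), weights))

print(outputs)  # [50, 110, 170]
```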

The Future is Now June 23, 2017

Posted by Peter Varhol in Algorithms, Technology and Culture.

And it is messy.  This article notes that it has been 15 years since the release of Minority Report, and today we are using predictive analytics to determine who might commit a crime, and where.

Perhaps it is the sign of the times.  Despite being safer than ever, we are also more afraid than ever.  We may not let our electronics onto commercial planes (though they are presumably okay in cargo).  We want to flag and restrict contact with people deemed high-risk.  We want to stay home.  We want the police to have more powers.

In a way it’s understandable.  This is a bias described aptly by Daniel Kahneman.  We readily infer the general from the particular, but are reluctant to deduce the particular from the general.  And there is also the availability bias.  When we see a mass attack, we are likely to instinctively interpret it as an increase in attacks in general, rather than looking at the trends over time.

I’m reminded of the Buffalo Springfield song “For What It’s Worth”: “Paranoia strikes deep, into your life it will creep.”

But there is a problem using predictive analytics in this fashion, as Tom Cruise discovered.  And this gets back to Nicholas Carr’s point – we can’t effectively automate what we can’t do ourselves.  If a human cannot draw the same or more accurate conclusions, we have no right to rely blindly on analytics.

I suspect that we are going to see increasing misuse of analytics in the future, and that is regrettable.  Data scientists, economists, and computer professionals have to step up and say when a particular application is inappropriate.

I will do so when I can.  I hope others will, too.