Artificial Intelligence and the Real Kind
July 11, 2016 · Posted by Peter Varhol in Software development, Software platforms, Uncategorized.
Tags: artificial intelligence, Machine Learning
Over the last couple of months, I’ve been giving a lot of thought to robots, artificial intelligence, and the potential for replacing human thought and action. A part of that comes from the announcement by the European Union that it had drafted a “bill of rights” for robots as potential cyber-citizens of a more egalitarian era. A second part comes from my recent article on TechBeacon, which I titled “Testing a Moving Target”.
The computer scientist in me wants to say "bullshit". Computer programs do what we instruct them to do, no more and no less. We can't instruct them to think, because we can't define thinking algorithmically (or in any other way). There is no objective or intuitive explanation for human thought.
The distinction is both real and important. Machines aren't able to look for anything that their programmers don't tell them to look for (I wanted to say "will never be able to" there, but I have given up the word "never" in informed conversation).
There is, of course, the Turing Test, which purports to offer a way to determine whether you are interacting with a real person or a computer program. In limited ways, a program can fool a person (Eliza was the first, but it was an easy trick).
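To make the "easy trick" concrete, here is a minimal sketch (not Eliza's actual rules, just the same style of technique): the program matches surface patterns and reflects words back, with no understanding at all.

```python
import re

# Toy Eliza-style responder: canned reflections keyed on surface patterns.
# The rules and phrasings here are hypothetical illustrations.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Your {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Reflect the matched fragment back at the user.
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am tired of debugging"))  # Why do you say you are tired of debugging?
print(respond("What's for lunch?"))        # Please go on.
```

A few lines of pattern matching can sustain a surprisingly long conversation, which is exactly why fooling a person in a narrow setting proves so little about thinking.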
Here is how I think human thought differs from computer programming. I can look at something seemingly unstructured and build a structure out of it. A computer can't, unless I as a programmer tell it what to look for. Sure, I can program generic learning algorithms and have a computer run data through those algorithms to fit the data as closely as possible. I can run an almost infinite number of training sequences, as long as I have enough data on how the system is supposed to behave.
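That "generic algorithm plus training data" idea can be sketched in a few lines. This is an illustrative example, not anything from the post: a perceptron, one of the simplest learning algorithms, learns the logical AND function purely from repeated passes over examples. The algorithm itself is generic; only the data tells it what to fit.

```python
# Training examples: inputs and the target output (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, adjusted by training
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # Fire (output 1) if the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):  # many repetitions over the same examples
    for x, target in data:
        error = target - predict(x)
        # Nudge the weights toward whatever reduces the error.
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

The program ends up computing AND, but only because the training data encoded AND; hand it different data and the same code dutifully fits that instead. Nothing in the loop decides what is worth looking for.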
Of course, as a human I need the imagination and experience to see patterns that may be hidden, and that others can't see. Is that really any different from algorithm training (yes, I'm attempting to undercut my own argument)?
I would argue yes. Our intelligence is not derived from thousands of interactions with training data. Rather, well, we don’t really know where it comes from. I’ll offer a guess that it comes from a period of time in which we observe and make connections between very disparate bits of information. Sure, the neurons and synapses in our brain may bear a surface resemblance to the algorithms of a neural network, and some talent accrues through repetition, but I don’t think intelligence necessarily works that way.
All that said, I am very hesitant to declare that machine intelligence may not one day equal the human kind. Machines have certain advantages over us, such as incredible and accessible data storage capabilities, as well as almost infinite computing power that doesn’t have to be used on consciousness (or will it?). But at least today and for the foreseeable future, machine intelligence is likely to be distinguishable from the organic kind.