
More on AI and the Turing Test May 20, 2018

Posted by Peter Varhol in Architectures, Machine Learning, Strategy, Uncategorized.

It turns out that most people who care to comment are, to use the common phrase, creeped out at the thought of not knowing whether they are talking to an AI or a human being.  I get that, although I don’t think I’m bothered by the notion myself.  After all, what do we know about people during a casual phone conversation?  Many of them probably sound like robots to us anyway.

And this article in the New York Times notes that Google was only able to accomplish this feat by severely limiting the domain in which the AI could interact – in this case, making a dinner reservation or a hair appointment.  The demonstration was still significant, but it isn’t yet a truly practical application, even within a limited domain space.

Well, that’s true.  The era of an AI program interacting like a human across multiple domains is far away, even with the advances we’ve seen over the last few years.  And this is why I even doubt the viability of self-driving cars anytime soon.  The problem domains encountered by cars are enormously complex, far more so than any current tests have attempted.  From road surface to traffic situation to weather to individual preferences, today’s self-driving cars can’t deal with being in the wild.

You may retort that all of these conditions are objective and highly quantifiable, making them possible to anticipate and program for.  But we come across driving situations almost daily that contain new elements that must be instinctively integrated into our body of knowledge and acted upon.  Computers certainly have the speed to do so, but they lack a good learning framework to identify critical data and integrate that data into their neural networks to respond in real time.

Author Gary Marcus argues that this means the deep learning approach to AI has failed.  I laughed when I came to the solution proposed by Dr. Marcus – that we return to the backward-chaining rules-based approach of two decades ago.  This was what I learned during much of my graduate studies, and it was largely given up on in the 1990s as unworkable.  Building layer upon layer of interacting rules was tedious and error-prone, and it required an exacting understanding of just how backward chaining worked.
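For those who never worked with such systems, the core mechanism of backward chaining fits in a few lines of Python.  This is only an illustrative sketch – the rules and facts here are hypothetical, not from any real expert system – but it shows how a goal is proven by working backward to known facts:

```python
# Each rule maps a conclusion to the premises that must hold for it.
RULES = {
    "make_reservation": ["restaurant_open", "table_available"],
    "table_available": ["party_size_known"],
}

# Facts we already know to be true.
FACTS = {"restaurant_open", "party_size_known"}

def prove(goal, rules=RULES, facts=FACTS):
    """Backward chaining: a goal holds if it is a known fact, or if some
    rule concludes it and all of that rule's premises can themselves be proven."""
    if goal in facts:
        return True
    premises = rules.get(goal)
    if premises is None:
        return False
    return all(prove(p, rules, facts) for p in premises)

print(prove("make_reservation"))  # chains back through two rules to known facts
```

The tedium I remember comes from scaling this up: real systems had thousands of interacting rules, and a single wrong or missing premise could silently break an entire chain.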

Ultimately, I think that the next generation of AI will incorporate both types of approaches: a neural network to process data and come to a decision, and a rules-based system to provide the learning foundation and structure.


Are Engineering and Ethics Orthogonal Concepts? November 18, 2017

Posted by Peter Varhol in Algorithms, Technology and Culture.

Let me explain through example.  Facebook has a “fake news” problem.  Users sign up for a free account, then post, well, just about anything.  If it violates Facebook’s rules, the platform generally relies on users to report, although Facebook also has teams of editors and is increasingly using machine learning techniques to try to (emphasis on try) be proactive about flagging content.

(Developing machine learning algorithms is a capital expense, after all, while employing people is an operational one.  But I digress.)

But something can be clearly false while not violating Facebook guidelines.  Facebook is in the very early stages of attempting to authenticate the veracity of news (it will take many years, if it can be done at all), but it almost certainly won’t remove that content.  It will be flagged as possibly false, but still available for those who want to consume it.

It used to be that we as a society confined our fake news to outlets such as The Globe or the National Enquirer, tabloid papers typically sold at check-out lines in supermarkets.  Content was mostly about entertainment personalities, and consumption was limited to those who bothered to purchase it.

Now, however, anyone can be a publisher*.  And can publish anything.  Even at reputable news sources, copy editors and fact checkers have gone the way of the dodo bird.

It gets worse.  Now entire companies exist to write and publish fake news and outrageous views online.  Thanks to Google’s ad placement strategy, the more successful ones may actually get paid by Google to do so.

By orthogonal, I don’t mean contradictory.  At the fundamental level, orthogonal means “at right angles to.”  Variables that are orthogonal are statistically independent, in that changes in one don’t at all affect the other.
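To make that definition concrete, here is a quick sketch of what statistical independence looks like in practice.  The two variables are randomly generated for illustration; their correlation comes out near zero, which is what orthogonality means in the statistical sense:

```python
import random

random.seed(0)
# Two independently generated variables: changes in x tell you nothing about y.
x = [random.gauss(0, 1) for _ in range(10000)]
y = [random.gauss(0, 1) for _ in range(10000)]

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

print(abs(corr(x, y)) < 0.1)  # near zero: the variables are orthogonal
```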

So let’s translate that to my point here.  Facebook, Google, and the others don’t see this as a societal problem, which is difficult and messy.  Rather they see it entirely as an engineering problem, solvable with the appropriate application of high technology.

At best, it’s both.  At worst, it is entirely a societal problem, to be solved with an appropriate (and messy) application of understanding, negotiation, and compromise.  That’s not Silicon Valley’s strong suit.

So they try to address it with their strength, rather than acknowledging that their societal skills as they exist today are inadequate to the immense task.  I would be happy to wait, if Silicon Valley showed any inclination to acknowledge this and try to develop those skills, but all I hear is crickets chirping.

These are very smart people, certainly smarter than me.  One can hope that age and wisdom will help them recognize and overcome their blind spots.  One can hope, can’t one?

*(Disclaimer:  I mostly publish my opinions on my blog.  When I use a fact, I try to verify it.  However, as I don’t make any money from this blog, I may occasionally cite something I believe to be a fact, but is actually wrong.  I apologize.)

Google Blew It August 12, 2017

Posted by Peter Varhol in Technology and Culture, Uncategorized.

I don’t think that statement surprises anyone.  Google had the opportunity to make a definitive statement about the technology industry, competence, inclusion, ability, and teamwork, and instead blew it as only a big, bureaucratic company could.  Here is why I think so.

First, Google enabled and apparently supported a culture in which views colored by politics are freely promoted.  That was simply stupid.  No one wins at the politics game (and mostly everyone loses).  We believe what we believe.  If we are thoughtful human beings with a growth mindset, our beliefs are likely to change, but over a period of years, not overnight.

Second, Google let the debate be framed as a liberal versus conservative one.  It is most emphatically not.  I hate those labels.  I am sure I have significant elements of each in my psyche, along with perhaps a touch of libertarianism.  To throw about such labels is insulting and ludicrous, and Google as a company and a culture enabled it.

Okay, then what is it, you may ask.  It is about mutual respect, across jobs, roles, product lines, and level of responsibility.  It is working with the person, regardless of gender, race, age, orientation, or whatever.  You don’t know their circumstances, you may not even know what they have been assigned to do.  Your goal is to achieve a robust and fruitful working relationship.  If you can’t, at least some of that may well be on you.

The fact that you work together at Google gives you more in common with each other than almost anyone else in the world.  There are so many shared values there that have nothing to do with political beliefs, reflexive or well-considered.  Share those common goals; all else can be discussed and bridged.  It’s only where you work, after all.

You may think poorly of a colleague.  God knows I have in the past, whether it be for perceived competence, poor work habits, skimpy hours, or seeming uninspired output (to be fair, over the years a few of my colleagues may have thought something similar about me).  They are there for a reason.  Someone thought they had business value.  Let’s expend a little more effort trying to find it.  Please.

So what would I have done, if I were Sundar Pichai?  Um, first, how about removing politics from the situation?  Get politics out of office discussions in general, and out of this topic in particular.  All too often, doctrinaire people (on both sides of the aisle) simply assume that everyone thinks their ideas are inevitably right.  Try listening more and assuming less.  If you can’t, Sundar, it is time to move aside and let an adult take over.

Second, Google needs everyone to understand what it stands for.  And I hope it does not stand for liberal or conservative.  I hope it wants everyone to grow, professionally, emotionally, and in their mindsets.  We can have an honest exchange of ideas without everyone going ballistic.

Get a grip, folks!  There is not a war on, despite Google’s ham-handed attempts to make it one.  We have more in common than we are different, and let’s work on that for a while.

I can’t fix Google’s monumental screw-up.  But I really hope I can move the dial ever so slightly toward respect and rational discourse.

An Open Letter to Nicholas Carr January 26, 2016

Posted by Peter Varhol in Uncategorized.

Mr. Carr – I understand that you are not able to respond to everything, or anything, but I hope you will be able to take this in the vein in which it is offered.

Last year, I was diagnosed with pancreatic cancer.  I was at a reasonably good hospital (Lahey Medical Center in Burlington, MA), and the head of surgery there strongly recommended a Whipple Procedure, which involves taking out about a third of my insides.  It was the only way I would live out the year.

You are right.  I should have listened to this professional advice, and taken the surgery.  It would have laid me up for several months, and given me a lower quality of life for a couple of decades until I died.

But I turned to Doctor Google, for a number of hours.  I can’t say I understood all of it, but I determined that it was premature to talk about surgery, and declined.  The diagnosis was wrong.  Today, I am a distance runner, and continually challenge myself physically.  I am in the best shape of my life.

You believe you are so right in your convictions.  You raise reasonable points.  I can respect that.  But reality is more nuanced than you give it credit for.  Without Doctor Google, I would have had the Whipple, and it would have been the wrong decision.  It would have been very detrimental for the rest of my life.  I realize that there is bad and contradictory information on the Internet.  We, as intelligent and reasonable people, can use it to help make our own decisions.  In another era (yes, an era I was also in; I am older than you), I would have followed the doctors’ (plural) advice, and had unnecessary and debilitating surgery.  Please let us use the Internet as it was intended, to inform and educate.  Of course, the decisions are ours.

Does Google make us stupid? No it does not.

That is all.


I Am Not Dead October 7, 2012

Posted by Peter Varhol in Technology and Culture.

That’s a strange thing to say, but the question has been coming up this summer and into the fall.  The reason is Google.  I occasionally google myself (it is a verb now, right?) to see where I might appear, and since about June the Peter Varhol obituary has been trending toward the top of the results.  It seems to be that of an 83-year-old man in Minnesota (I am about 30 years younger, and live in New Hampshire).

All of this is thanks to Google, which lists sites that claim there are a total of six people with my name in the US, including my father, who died in 1994 (he’s the one in Pennsylvania, FYI).  Beyond the other deceased fellow, there seem to be identically named men in Texas (owner of a septic installation company; read into that what you will), Connecticut, and Illinois.  Other than my father, I have never met any of them, and have no idea whether or not we are related.

And, by the way, I am not on Facebook, and will not be on Facebook, lest a future employer demand the password to my (nonexistent) Facebook account.  There is a Peter Varhol in Brno, Czech Republic, who is originally from Bratislava, Slovakia (where at least one side of my family can be definitively traced to).  He seems to be prominent on Facebook.  He may actually be a distant relative; there is a resemblance, but I can’t tell if it’s family, or ethnic.  If you search for Peter Varhol, Google will return Facebook profiles, none of which are mine.

The ability to find out things about me that are, well, not about me has implications for my life and all of our lives.  Even if we don’t have a common name (I don’t think I do), we may find that employers or potential employers look for us on the Internet and think that we are somebody we are not.  If we are interviewing for a job, we may not even get a chance to say that we aren’t the person their search returned.

But it’s broader than that.  The people in our lives may think we are someone we are not, based on a Google (or Bing, or Yahoo) search.  That may affect how the people around us think of us, and how they interact with us in the future.  A potential new friend may find someone with the same name who has a criminal complaint for stalking, for example.

I don’t know if there is a solution to this, and I don’t know how often it is an issue.  In general I believe that more information is better than less.  But I have heard from old friends and colleagues asking if I were in fact dead (if I don’t answer, does that lead to the default conclusion?), and I would prefer not to receive such queries.