
In the Clutch

September 28, 2017

Posted by Peter Varhol in Algorithms, Machine Learning, Software development, Technology and Culture.

I wrote a little while back about how some people are able to recognize the importance of the right decision or action in a given situation, and respond in a positive fashion. We often call that delivering in the clutch. Machine intelligence, by contrast, is not currently equipped to recognize or respond to the importance of a particular event in a sequence.

The question is whether these systems will ever be able to tell that a particular event has outsized importance, and whether they can use that information to, um, try harder.

I have no doubt that we will be able to come up with metrics that can inform a machine learning system of a particularly critical event or events. To take an example from Moneyball, for a given at-bat we can incorporate the inning, the score, the number of hits, and so on. In other problem domains, such as application monitoring, we may not yet be collecting the data we need, but given a little thought and creativity, I'm sure we can do so.
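As a minimal sketch of what such a metric might look like (the feature names, weights, and formula here are hypothetical, not drawn from any real system), a leverage-style criticality score for an at-bat could combine those same signals:

```python
# Hypothetical sketch: scoring how "critical" an at-bat is so a learning
# system can weight it. The features and weights are illustrative only.

def at_bat_criticality(inning: int, score_margin: int,
                       runners_on: int, outs: int) -> float:
    """Return a rough 0-1 criticality score for an at-bat.

    score_margin is the batting team's runs minus the opponent's;
    late innings, close games, and runners on base raise the score.
    """
    lateness = min(inning, 9) / 9.0                # later innings matter more
    closeness = 1.0 / (1.0 + abs(score_margin))    # close games matter more
    pressure = (runners_on + outs / 3.0) / 4.0     # base/out state
    score = 0.5 * lateness + 0.35 * closeness + 0.15 * pressure
    return max(0.0, min(1.0, score))

# A training pipeline could then weight critical situations more heavily,
# e.g. model.fit(X, y, sample_weight=[at_bat_criticality(*s) for s in situations])
```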

But I have difficulty imagining that machine learning systems will be able to rise to the occasion. There is simply no mechanism in computer programming for that to happen. You don't save your best algorithms for important events; you use them all the time. For a long-running computation, it may be helpful to add to the server farm, so you can finish more quickly or process more data, but most learning systems aren't equipped to do that.

Code, in any case, is not intelligence. Algorithms cannot feel a sense of urgency to perform at the highest level; they are already performing at the highest level of which they are capable.

To be fair, at some indeterminate point in the future, it may be possible for algorithms to detect the need for new code pathways, and call subroutines to make those pathways a reality (or ask for humans to program them).  They may recognize that a particular result is suboptimal, and “ask” for additional data to make it better.  But why would that happen only for critical events?  We would create our systems to do that for any event.
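The "ask for help" part of that idea already has a familiar shape in practice. A minimal sketch of the pattern (the model interface is in the style of scikit-learn's predict_proba, and the threshold and escalate_to_human callback are assumptions for illustration, not anyone's production design):

```python
# Hypothetical sketch: when the model's confidence in a prediction is low,
# route the case to a human or queue it for more data instead of acting.

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff; would be tuned per domain

def decide(model, features, escalate_to_human):
    probs = model.predict_proba([features])[0]  # per-class probabilities
    confidence = max(probs)
    if confidence < CONFIDENCE_THRESHOLD:
        # The system "recognizes" its answer may be suboptimal and defers.
        return escalate_to_human(features, probs)
    return probs.argmax()                       # act autonomously
```

Note that, as the paragraph above argues, nothing here is special to critical events: a system built this way would defer on any low-confidence input, important or not.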

Today, we don’t live in the world of Asimov’s positronic brains and the Three Laws of Robotics.  It will be a while before science is at that point, if ever.

Is this where human achievement can perform better than an algorithm? Possibly, if we have the requisite human expertise. There are a number of well-known examples where humans have had to take over when machines failed, some successfully, some not. But the human has to be there, and has to be professionally and mentally equipped to do so. That is why I am a strong believer in keeping a human in the loop.
