
Interlude For, Of All Things, Corn on the Cob October 19, 2017

Posted by Peter Varhol in Uncategorized.

I grew up, well, not quite on a farm, but definitely not in suburbia.  We had large gardens (and cows, chickens, and even a pig), of which, to my adult regret, I partook little.  But I devoured corn on the cob, and still do to this day, now in New England.

I have tried boiled and grilled, and my preference is grilled, although you need a grill, of course.

But as we move into a world of genetically modified crops, I am okay with that.  Really.  I dislike the non-GMO labels on my food; I think they pander to those who don’t know that our crops have always been modified.  But I have a request.

As I shuck them, I cannot get rid of the silk.  If there is anything you can do about the “hair” on corn on the cob, I would appreciate it.


Bias and Truth and AI, Oh My October 4, 2017

Posted by Peter Varhol in Machine Learning, Software development, Technology and Culture.

I was just accepted to speak at the Toronto Machine Learning Summit next month, a circumstance that I never thought would happen.  I am not an academic researcher, after all, and while I have jumped back into machine learning after a hiatus of two decades, many others are fundamentally better at it than I am.

The topic is Cognitive Bias in AI:  What Can Go Wrong?  It’s rather a follow-on from the presentations I’ve done on bias in software development and testing, but it doesn’t really fit into my usual conferences, so I attempted to cast my net into new waters.  For some reason, the Toronto folks said yes.

But it mostly means that I have to actually write the presentation.  And here is the rub:  We tend to believe that intelligent systems are always correct, and that in those rare circumstances where they are not, it is simply the result of a software bug.

No.  A bug is a one-off error that can be corrected in code.  A bias is a systematic skew toward a predetermined conclusion that cannot be fixed with a code change.  At the very least, the training data and the machine learning architecture have to be rethought.
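
To make the distinction concrete, here is a minimal sketch in Python, using entirely synthetic data invented for illustration, of a model whose code contains no bug at all, yet whose output is systematically skewed because its training data is:

```python
# A minimal sketch of bias-without-a-bug, on invented synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)      # a demographic attribute, 0 or 1
skill = rng.normal(0.0, 1.0, size=n)    # the signal we actually care about

# Biased historical labels: past decisions favored group 1 regardless of skill.
hired = (skill + 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0.5

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Identical skill, different group: the predictions diverge systematically.
print(model.predict_proba([[0, 0.0]])[0, 1])  # group 0, average skill
print(model.predict_proba([[1, 0.0]])[0, 1])  # group 1, average skill
# No one-line bug fix closes this gap; the training data (and perhaps the
# features themselves) have to be rethought.
```

Every line of that code is “correct”; the skew lives in the data, which is why a bias cannot be patched the way a bug can.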

And we have examples such as these:

If you’re not a white male, artificial intelligence’s use in healthcare could be dangerous.

When artificial intelligence judges a beauty contest, white people win.

But the fundamental question, as we pursue these solutions across a wide range of applications, is:  Do we want human decisions, or do we want correct ones?  That’s not to say that all human decisions are incorrect, only that much of what we decide is colored by our biases.

I’m curious what AI applications will decide about this one.  Do we want to eliminate the bias, or do we want to reflect the values of the data we choose to use?  I hope for the former, but the latter may win out, for a variety of reasons.

In the Clutch September 28, 2017

Posted by Peter Varhol in Algorithms, Machine Learning, Software development, Technology and Culture.

I wrote a little while back about how some people are able to recognize the importance of the right decision or action in a given situation, and respond in a positive fashion.  We often call that delivering in the clutch.  This is in contrast to machine intelligence, which, at least right now, is not equipped to understand or respond to the importance of a particular event in a sequence.

The question is whether these systems will ever be able to tell that a particular event has outsized importance, and whether they can use this information to, um, try harder.

I have no doubt that we will be able to come up with metrics that can inform a machine learning system of a particularly critical event or events.  Taking an example from Moneyball of an at-bat, we can incorporate the inning, score, number of hits, and so on.  In other problem domains, such as application monitoring, we may not yet be collecting the data that we need, but given a little thought and creativity, I’m sure we can do so.
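
As a sketch of what such a metric might look like, here is a toy “criticality” score for an at-bat.  The features echo the ones above, but the formula and weights are invented purely for illustration; a real system would learn them from historical outcomes:

```python
# Toy "criticality" feature for an at-bat, in the spirit of the Moneyball
# example. The formula and weights are invented for illustration only.
def at_bat_criticality(inning: int, score_diff: int, runners_on: int) -> float:
    """Return a rough 0-to-1 score for how much this at-bat matters."""
    late = min(inning, 9) / 9.0             # later innings weigh more
    close = 1.0 / (1.0 + abs(score_diff))   # close games weigh more
    pressure = runners_on / 3.0             # more runners, more pressure
    return late * close * (0.5 + 0.5 * pressure)

# Bottom of the ninth in a tie game with the bases loaded, versus a
# third-inning blowout with no one on base:
print(at_bat_criticality(inning=9, score_diff=0, runners_on=3))  # 1.0
print(at_bat_criticality(inning=3, score_diff=8, runners_on=0))  # ~0.02
```

A score like this could be fed to the learning system as just another feature, or used to weight training examples.  But knowing that an event is critical is not the same as being able to try harder.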

But I have difficulty imagining that machine learning systems will be able to rise to the occasion.  There is simply no mechanism in computer programming for that to happen.  You don’t save your best algorithms for important events; you use them all the time.  For a long-running computation, it may be helpful to add servers to the farm so you can finish more quickly or process more data, but most learning systems aren’t equipped to do that.

But code is not intelligence.  Algorithms cannot feel a sense of urgency to perform at the highest level; they are already performing at the highest level of which they are capable.

To be fair, at some indeterminate point in the future, it may be possible for algorithms to detect the need for new code pathways, and to call subroutines to make those pathways a reality (or to ask humans to program them).  They may recognize that a particular result is suboptimal, and “ask” for additional data to improve it.  But why would that happen only for critical events?  We would build our systems to do that for any event.

Today, we don’t live in the world of Asimov’s positronic brains and the Three Laws of Robotics.  It will be a while before science is at that point, if ever.

Is this where human achievement can perform better than an algorithm?  Possibly, if we have the requisite human expertise.  There are a number of well-known examples where humans have had to take over when machines failed, some successfully, some unsuccessfully.  But the human has to be there, and has to be equipped professionally and mentally to do so.  That is why I am a strong believer in the human in the loop.

SpamCast on Machine Learning September 20, 2017

Posted by Peter Varhol in Software platforms.

Not really spam, of course, but Software Process and Measurement, the weekly podcast from Tom Cagley, whom I met at the QUEST conference this past spring.  This turned out surprisingly well, and Tom posted it this past weekend.  If you have a few minutes, listen in.  It’s a good introduction to machine learning and the issues of testing machine learning systems, as well as the skills needed to understand and work with these systems.  http://spamcast.libsyn.com/spamcast-460-peter-varhol-machine-learning-ai-testing-careers

The Human In the Loop September 19, 2017

Posted by Peter Varhol in Software development, Strategy, Technology and Culture.

A couple of years ago, I did a presentation entitled “Famous Software Failures”.  It described six events in history where poor-quality or untested software caused significant damage, monetary loss, or death.

It was really more about system failures in general, or the interaction between hardware and software.  And ultimately it was about learning from these failures to help prevent future ones.

I mention this because the protagonist in one of these failures passed away earlier this year:  Stanislav Petrov, the Soviet military officer who declined to report a launch of five ICBMs from the United States, as reported by his country’s defense systems.  Believing that a real American offensive would involve many more missiles, Lieutenant Colonel Petrov refused to acknowledge the threat as legitimate and contended to his superiors that it was a false alarm (he was reprimanded for his actions, incidentally, and permitted to retire at his then-current rank).  The false alarm had been created by a rare alignment of sunlight on high-altitude clouds above North Dakota.

There is also a novel by Daniel Suarez, entitled Kill Decision, that postulates the rise of autonomous military drones that are empowered to make a decision on an attack without human input and intervention.  Suarez, an outstanding thriller writer, writes graphically and in detail of weapons and battles that we are convinced must be right around the next technology bend, or even here today.

As we move into a world where critical decisions have to be made instantaneously, we should not underestimate the value of the human in the loop.  Whether the decision is made with a focus on logic (“They wouldn’t launch just five missiles”) or emotion (“I will not be remembered for starting a war”), the human puts it in a larger and far more real context than a collection of anonymous algorithms can.

The human can certainly be wrong, of course.  And no one person should be responsible for a decision that can cause the death of millions of people.  And we may find ourselves outmaneuvered by an adversary who relies successfully on instantaneous, autonomous decisions (as almost happened in Kill Decision).

As algorithms and intelligent systems become faster and better, human decisions aren’t necessarily needed or even desirable in a growing number of split-second situations.  But while they may be pushed to the edges, human decisions should not be pushed entirely off the page.


Are We Wrong About the Future of Digital Life? September 14, 2017

Posted by Peter Varhol in Technology and Culture.

Digital life offers the promise of removing friction from our lives; that is, the difficulty of doing anything ordinary, such as shopping, meeting people, or traveling.  I have written about the idea of friction before, thinking that at least some friction is necessary for us to grow and develop as human beings.

Further, science fiction author Frank Herbert had some very definite ideas about friction more than 50 years ago.  He invented a protagonist named Jorg X. McKie, a saboteur for the Bureau of Sabotage.  At some indeterminate time in the future, galactic government had become so efficient that laws were conceived in the morning, passed in the afternoon, and effective in the evening.  McKie’s charter was to toss a monkey wrench into the workings of government, to slow it down so that people would be able to consider the impact of rash decisions.

But let’s fast forward (or fast backward) to Bodega, the Silicon Valley startup that is trying to remove friction from convenience store stops.  Rather than visit a tiny hole-in-the-wall shop, patrons pick up their goods from a cabinet at the gym, in their apartment building, or anywhere else willing to host one.  Customers use an app to unlock the cabinet, and their purchases are automatically recorded and charged.

It turns out that people are objecting.  Loudly.  The bodega (the Spanish term for these tiny shops) is more than just a convenience.  It is where neighborhood residents go to find out what is happening with other people, and what is going on in general.  In an era when we are trying to remove interpersonal interaction, some of us also seem to be trying to restore it.

My point is that maybe we want to see our neighbors, or at least hear about them.  And the bodega turns out to be an ideal clearing house, so to speak.  I’ve seen something similar in northern Spain, where the tiny pintxo shops serve pintxos from morning until late afternoon, then transition into bars for the evening.  We visit one such place every morning when we are in Bilbao.  They don’t speak any English, and my Spanish is limited (and I have no Basque), but there is a certain community.

That is encouraging.  Certainly there is some friction in actually having a conversation, but there is also a great deal of value in obtaining information in this manner.  We establish a connection, but we also don’t know what we’re going to hear from visit to visit.

I wonder if there is any way that the company Bodega can replicate such an experience.  Perhaps not, and that is one strong reason why we will continue to rely on talking to other people.

More About Friction and Life September 5, 2017

Posted by Peter Varhol in Technology and Culture.

Apparently the next wave of removing friction from our lives is to text the people we are visiting, rather than ring the doorbell (paywall).  It seems that doorbells disturb people (okay, in particular young people), in some cases apparently seriously.

I’m ambivalent about this.  As one generation passes on to the next, customs change, and it is entirely likely that texting to let someone know you are outside of their door will become the new normal.  On the surface, it may not be a bad thing.

But there’s always a but.  It turns out that texting is an excuse for not seeing someone in person.  And there are plenty of places I go where I may not know the phone number of the person inside.

But more about friction in general.  Friction is the difference between gliding through life unimpeded and hitting roadblocks that prevent us from doing some of what we would like.  None of us likes friction.  All of us need it.

Whatever else I may doubt, I am certain that friction is an essential part of a rich and fulfilling life.

If you are afraid of something, then there is good reason to face it.

First, friction teaches us patience and tolerance.  It teaches us how to wait for what we decide is important in life.

Second, it teaches us what is important in our lives.  We don’t know what is important unless we have to work for it.

Third, it teaches us that we may have to change our needs and goals, based on the feedback we get in life.

Many of the Silicon Valley startups today are primarily about getting rid of friction in our lives.  Uber (especially), Blue Apron, really just about any phone app-based startup is about making our daily existence easier.

You heard it here first, folks.  Easier is good, but we also need to work, even for the daily chores that seem like they should be easier.  We may have to call for a cab, or look up a menu and pick up a meal.  Over the course of our lives, we learn many life lessons from experiences like that.

Do me a favor this week.  Try the doorbell.

Rage Against the Machine August 22, 2017

Posted by Peter Varhol in Technology and Culture.

No, not them.  Rather, it is the question of our getting frustrated with our devices.  I might have an appointment that my device fails to remind me of (likely a setting I forgot, or something that was inadvertently turned off), and I snap at the device, rather than chastising myself for forgetting.  Or I get poor information from my electronic assistant because it misinterprets the question.

And because our devices are increasingly talking to us, we might feel an additional urge to talk back.  Or yell back, if we don’t like the answers.

There are two schools of thought on this.  One is that acting out frustration against an inanimate object is an acceptable release valve and lessens our aggression against human recipients (a variation of this is the whole displacement syndrome in psychology), making it easier for us to deal with others.

The second is that acting aggressively toward an electronic assistant that speaks can actually make us more aggressive with actual people, because we become used to yelling at the device.

MIT researcher Sherry Turkle tends toward the latter view.  She says that yelling at our machines could lead to a “coarsening of how people treat each other.”

I’m not sure what the right answer is here; perhaps a bit of both, depending upon other personal variables and circumstances.

But I do know that yelling at an inanimate object, even one with a voice, will accomplish nothing productive.  Unlike the old saw that trying to teach a pig to sing wastes your time and annoys the pig, yelling at your electronic assistant won’t even annoy it.

Don’t do it, folks.