
The Human In the Loop September 19, 2017

Posted by Peter Varhol in Software development, Strategy, Technology and Culture.

A couple of years ago, I did a presentation entitled “Famous Software Failures”.  It described six events in history where poor quality or untested software caused significant damage, monetary loss, or death.

It was really more about system failures in general, or the interaction between hardware and software.  And ultimately it was about learning from these failures to help prevent future ones.

I mention this because the protagonist in one of these failures passed away earlier this year: Stanislav Petrov, the Soviet military officer who in 1983 declined to report an apparent launch of five ICBMs from the United States, as detected by Soviet early-warning systems.  Believing that a real American offensive would involve many more missiles, Lieutenant Colonel Petrov refused to acknowledge the threat as legitimate and contended to his superiors that it was a false alarm (he was reprimanded for his actions, incidentally, and permitted to retire at his then-current rank).  The false alarm had been created by a rare alignment of sunlight on high-altitude clouds above North Dakota.

There is also a novel by Daniel Suarez, entitled Kill Decision, that postulates the rise of autonomous military drones that are empowered to make a decision on an attack without human input and intervention.  Suarez, an outstanding thriller writer, writes graphically and in detail of weapons and battles that we are convinced must be right around the next technology bend, or even here today.

As we move into a world where critical decisions have to be made instantaneously, we must not underestimate the value of the human in the loop.  Whether the decision is made with a focus on logic (“They wouldn’t launch just five missiles”) or emotion (“I will not be remembered for starting a war”), the human puts any decision in a larger and far more real context than a collection of anonymous algorithms.

The human can certainly be wrong, of course.  And no one person should be responsible for a decision that can cause the death of millions of people.  And we may find ourselves outmaneuvered by an adversary who relies successfully on instantaneous, autonomous decisions (as almost happened in Kill Decision).

As algorithms and intelligent systems become faster and better, human decisions aren’t necessarily needed or even desirable in a growing number of split-second situations.  But while they may be pushed to the edges, human decisions should not be pushed entirely off the page.


Are We Wrong About the Future of Digital Life? September 14, 2017

Posted by Peter Varhol in Technology and Culture.

Digital life offers the promise of removing friction from our lives; that is, any difficulty in doing ordinary things such as shopping, meeting people, or traveling.  I have written about the idea of friction before, arguing that at least some friction is necessary for us to grow and develop as human beings.

Further, science fiction author Frank Herbert had some very definite ideas about friction, now over 50 years ago.  He invented a protagonist named Jorj X. McKie, a saboteur working for the Bureau of Sabotage.  At some indeterminate time in the future, galactic government had become so efficient that laws were conceived in the morning, passed in the afternoon, and effective in the evening.  McKie’s charter was to toss a monkey wrench into the workings of government, to slow it down so that people would be able to consider the impact of their rash decisions.

But let’s fast forward (or fast backward) to Bodega, the Silicon Valley startup that is trying to remove friction from convenience store stops.  Rather than visit a tiny hole-in-the-wall shop, patrons can pick up their goods at the gym, in their apartment building, or anywhere that is willing to accept a cabinet.  Customers use their app to unlock it, and their purchases are automatically recorded and charged.

It turns out that people are objecting.  Loudly.  The bodega (the Spanish term for such tiny neighborhood shops) is more than just a convenience.  It is where neighborhood residents go to find out what is happening with other people, and what is going on in general.  In an era when we are trying to remove interpersonal interaction, some of us also seem to be trying to restore it.

My point is that maybe we want to see our neighbors, or at least hear about them.  And the bodega turns out to be an ideal clearing house, so to speak.  I’ve seen something similar in northern Spain, where the tiny pintxos bars serve pintxos from morning until late afternoon, then transition into bars for the evening.  We visit one such place every morning when we are in Bilbao.  The staff don’t speak any English, and my Spanish is limited (and I have no Basque), but there is a certain community.

That is encouraging.  Certainly there is some friction in actually having a conversation, but there is also a great deal of value in obtaining information in this manner.  We establish a connection, but we also don’t know what we’re going to hear from visit to visit.

I wonder if there is any way that the company Bodega can replicate such an experience.  Perhaps not, and that is one strong reason why we will continue to rely on talking to other people.

More About Friction and Life September 5, 2017

Posted by Peter Varhol in Technology and Culture.

Apparently the next wave of getting friction out of our lives is to text the people we are visiting, rather than ringing a doorbell (paywall).  It seems that doorbells disturb people (okay, young people in particular).  In some cases, apparently, seriously.

I’m ambivalent about this.  As one generation passes on to the next, customs change, and it is entirely likely that texting to let someone know you are outside of their door will become the new normal.  On the surface, it may not be a bad thing.

But there’s always a but.  It turns out that texting can become an excuse for never seeing someone in person.  And there are plenty of places I visit where I don’t know the phone number of the person inside.

But more about friction in general.  Friction is the difference between gliding through life unimpeded and hitting roadblocks that prevent us from doing some of what we would like.  None of us likes friction.  All of us need it.

Whatever else I may doubt, I am certain that friction is an essential part of a rich and fulfilling life.

If you are afraid of something, then there is good reason to face it.

First, friction teaches us patience and tolerance.  It teaches how to wait for what we decide is important in life.

Second, it teaches us what is important in our lives.  We don’t know what is important unless we have to work for it.

Third, it teaches us that we may have to change our needs and goals, based on the feedback we get in life.

Many of the Silicon Valley startups today are primarily about getting rid of friction in our lives.  Uber (especially), Blue Apron, and really just about any phone app-based startup are about making our daily existence easier.

You heard it here first, folks.  Easier is good, but we also need to work, even for the daily chores that seem like they should be easier.  We may have to call for a cab, or look up a menu and pick up a meal.  Over the course of our lives, we learn many life lessons from experiences like that.

Do me a favor this week.  Try the doorbell.

Rage Against the Machine August 22, 2017

Posted by Peter Varhol in Technology and Culture.

No, not them.  Rather, it is the question of our getting frustrated with our devices.  I might have an appointment that my device fails to remind me of (likely a setting I forgot, or something that was inadvertently turned off), and I snap at the device rather than chastising myself for forgetting.  Or I get poor information from my electronic assistant because it misinterprets the question.

And because our devices are increasingly talking to us, we might feel an additional urge to talk back.  Or to yell back if we don’t like the answers.

There are two schools of thought on this.  One is that acting out frustration against an inanimate object is an acceptable release valve and lessens our aggression against human recipients (a variation of this is the whole displacement syndrome in psychology), making it easier for us to deal with others.

The second is that acting aggressively toward an electronic assistant that speaks can actually make us more aggressive with actual people, because we become used to yelling at the device.

MIT researcher Sherry Turkle tends toward the latter view.  She says that yelling at our machines could lead to a “coarsening of how people treat each other.”

I’m not sure what the right answer here is; perhaps a bit of both, depending upon other personal variables and circumstances.

But I do know that yelling at an inanimate object, even if it does have a voice, will accomplish nothing productive.  Unlike the old saw that trying to teach a pig to sing won’t succeed and only annoys the pig, yelling at your electronic assistant won’t even annoy it.

Don’t do it, folks.

Google Blew It August 12, 2017

Posted by Peter Varhol in Technology and Culture, Uncategorized.

I don’t think that statement surprises anyone.  Google had the opportunity to make a definitive statement about the technology industry, competence, inclusion, ability, and teamwork, and instead blew it as only a big, bureaucratic company could.  Here is why I think so.

First, Google enabled and apparently supported a culture in which views colored by politics are freely promoted.  That was simply stupid.  No one wins at the politics game (and nearly everyone loses).  We believe what we believe.  If we are thoughtful human beings with a growth mindset, our beliefs are likely to change, but over a period of years, not overnight.

Second, Google let the debate be framed as a liberal versus conservative one.  It is most emphatically not.  I hate those labels.  I am sure I have significant elements of each in my psyche, along with perhaps a touch of libertarianism.  To throw about such labels is insulting and ludicrous, and Google as a company and a culture enabled it.

Okay, then what is it, you may ask.  It is about mutual respect, across jobs, roles, product lines, and level of responsibility.  It is working with the person, regardless of gender, race, age, orientation, or whatever.  You don’t know their circumstances, you may not even know what they have been assigned to do.  Your goal is to achieve a robust and fruitful working relationship.  If you can’t, at least some of that may well be on you.

The fact that you work together at Google gives you more in common with each other than almost anyone else in the world.  There are so many shared values there that have nothing to do with political beliefs, reflexive or well-considered.  Share those common goals; all else can be discussed and bridged.  It’s only where you work, after all.

You may think poorly of a colleague.  God knows I have in the past, whether it be for perceived competence, poor work habits, skimpy hours, or seeming uninspired output (to be fair, over the years a few of my colleagues may have thought something similar about me).  They are there for a reason.  Someone thought they had business value.  Let’s expend a little more effort trying to find it.  Please.

So what would I have done, if I were Sundar Pichai?  Um, first, how about removing politics from the situation?  Get politics out of office discussions in general, and out of this topic in particular.  All too often, doctrinaire people (on both sides of the aisle) simply assume that everyone thinks their ideas are inevitably right.  Try listening more and assuming less.  If you can’t, Sundar, it is time to move aside and let an adult take over.

Second, Google needs everyone to understand what it stands for.  And I hope it does not stand for liberal or conservative.  I hope it wants everyone to grow, professionally, emotionally, and in their mindsets.  We can have an honest exchange of ideas without everyone going ballistic.

Get a grip, folks!  There is not a war on, despite Google’s ham-handed attempts to make it one.  We have more in common than we are different, and let’s work on that for a while.

I can’t fix Google’s monumental screw-up.  But I really hope I can move the dial ever so slightly toward respect and rational discourse.

The Incorrect Assumptions Surrounding Diversity in Tech August 7, 2017

Posted by Peter Varhol in Uncategorized.

There was a time in my life when I believed that the tech industry was a strict meritocracy, that the best would out.  At this stage of my life, I now realize that is a pipe dream.

Can we define the best software engineers?  We can perhaps define good ones, and perhaps also define poor ones, in a general sense.  “I know it when I see it,” said former Supreme Court Justice Potter Stewart, speaking of pornography.  Which may not be that different from speaking of code.

The problem is that those are very subjective and biased measures.  The person who writes fast code may not write the best code.  The person who writes the best code may be slow as molasses.  Which is better?  There are certainly people who write the best code fast, but are they writing code that will make the company and product successful?

There are a thousand tech startups born every year.  They think they have a great idea, but all ideas are flawed.  A few are flawed technically, but most are flawed in terms of understanding the need or the market.  Those ideas have blind spots that others outside the creative process can often readily recognize.

The ultimate question for companies is what they want to be when they grow up.  Companies build applications that reflect their market focus.  But they also build applications that reflect their teams.  When we build products, we do so for people like us.

In tech companies, we are building a product.  I have built products before.  Software engineers make hundreds of tactical decisions every day on how to implement the product.  Product managers make dozens of strategic decisions on what products to build, what they run on, and what features to include.

I have made those decisions.  I am painfully aware that every single decision I make has an accompanying bias.  I dislike that, because I know that decisions I have made can foil the larger goals of being successful and profitable.

I want a diverse team participating in those decisions, because I don’t trust my own biases to produce decisions that build the best product for the widest customer base.  By diverse I mean gender, race, economic status, orientation, age, everything I can include.  Many tech companies use the term “cultural fit” to eliminate diversity from their teams.  Diverse teams may have more tension, because their members have different experiences and think differently, but they make better decisions in the end.  I’m pretty sure that’s demonstrable in practice.

You may believe that you know everything, and are the best at any endeavor you pursue.  Let me let you in on something: you are not.  We would all be amazed at what everyone around us can contribute.  If we just let them.

The Internet is After Me August 4, 2017

Posted by Peter Varhol in Uncategorized.

The Internet is after me.  It’s trying to grab my attention, for a few seconds, so I can dote on cute little kittens scurrying across my dinner, in exchange for my unconscious watching an ad for Tidy Bowl for half a minute.

Hey, I’m old.  Did I tell you that?  I must have forgotten.  Kittens are the only thing I recognize on the Internet, but boy, are they cute.  I have cats, and they sure are cute, but the ones on the Internet are so highly trained in the art and science of cuteness that I simply can’t stop looking at them.  But there is a problem.

I walk down the street, looking at my kittens on a screen that my eyes can barely discern, and I see random ads popping up on the screen.  In fact, the ads are covering up the kittens!  I stop abruptly, causing a little boy to crash right into my backside, and look around me.

Yes!  There is the Hamburger Haven, right across the street.  I look back down at my screen, and yes, the kittens were chasing hamburgers!  And when a kitten caught one, it polished it off with a lick of its lips and a smirk on its face.  What a cute kitten!

It made me hungry just watching.  I looked up again at the Hamburger Haven, then started crossing the street.  I didn’t get far before I got clipped by a car.  I spun around and fell, but was still focused on getting to that hamburger.  Double meat, double cheese, bacon, oh yes bacon, lettuce, and ketchup.  A tomato would not be overkill, would it?

I wasn’t hurt, more startled, but the car screeched to a stop, and a young guy got out.  He was looking at his phone as he rushed to my side, but I don’t think he was calling 9-1-1.  No, he showed me the screen, and said, “Let’s go get a hamburger.”  Damn.

So we had a hamburger.  And super fries.  And a drink.  But as I licked my lips, I realized an essential truth.

They not only know who I am, they know where I am.  And they want to sell me hamburgers.  Maybe panty hose, though I hope not.  But kittens?  Oh yes, show me more kittens.

What Brought About our AI Revolution? July 22, 2017

Posted by Peter Varhol in Algorithms, Software development, Software platforms.

Circa 1990, I was a computer science graduate student, writing forward-chaining rules in Lisp for AI applications.  We had Symbolics Lisp workstations, but I did most of my coding on my Mac, using ExperLisp or the wonderful XLisp written by my friend and colleague David Betz.

Lisp was convoluted to work with, and in general rules-based systems required that an expert be available to develop the rules.  It turns out that it’s very difficult for any human expert to describe in rules how they arrived at a particular answer.  And those rules generally couldn’t take into account any data that might help the system learn and refine over time.

As a result, most rules-based systems fell by the wayside.  While they could work for discrete problems where the steps to a conclusion were clearly defined, they weren’t very useful when the problem domain was ambiguous or there was no clear yes or no answer.
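The idea of a forward-chaining system of that era can be sketched in a few lines of Python (a hypothetical toy, not the Lisp tools mentioned above; the medical-style rules are invented for illustration): rules fire whenever all of their premises are present in working memory, adding their conclusions until nothing new can be derived.

```python
# Toy forward-chaining inference engine.  Each rule is (premises, conclusion);
# hand-writing rules like these is exactly the step that required a human expert.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises all hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough", "fatigue"}, RULES)
# "recommend_rest" is reachable only via the intermediate "possible_flu" fact.
```

Note that the chain of reasoning is fully transparent (every conclusion traces back to explicit rules), which is the flip side of the rigidity described above: the system knows nothing the expert didn’t encode.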

A couple of years later I moved on to working with neural networks.  Neural networks require data for training purposes.  These systems are made up of layered networks of equations (I used mostly fairly simple polynomial expressions, but sometimes the algorithms can get pretty sophisticated) that adapt based on known inputs and outputs.

Neural networks have the advantage of obtaining their expertise through the application of actual data.  However, due to the multiple layers of algorithms, it is usually impossible to determine how the system arrives at the answers it does.
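The training process described above can be illustrated with a minimal pure-Python sketch (a toy, not the polynomial networks the author used): a single hidden layer of sigmoid units, adjusted by gradient descent against known inputs and outputs for XOR, a classic problem no single-layer rule fits.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Known inputs and outputs: the network's "expertise" comes only from this data.
DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

H = 3                                                                # hidden units
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 weights + bias
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H weights + bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
    return h, o

def total_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA)

err_before = total_error()

LR = 0.5
for _ in range(5000):                    # repeated passes over the training data
    for x, y in DATA:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)                            # output gradient
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(H)]
        for i in range(H):
            w_o[i] -= LR * d_o * h[i]
            w_h[i][0] -= LR * d_h[i] * x[0]
            w_h[i][1] -= LR * d_h[i] * x[1]
            w_h[i][2] -= LR * d_h[i]
        w_o[H] -= LR * d_o

err_after = total_error()                # error shrinks as the weights adapt
```

The final weights encode what was learned, but, as the paragraph above notes, inspecting them tells you almost nothing about *why* the network answers as it does.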

Recently I presented on machine learning at the QUEST Conference in Chicago and at Expo:QA in Spain.  In interacting with the attendees, I realized something.  While some data scientists tend to use more complex algorithms today, the techniques involved in neural networks for machine learning are pretty much the same as they were when I was doing it, now 25 years ago.

So why are we having the explosion in machine learning, AI, and intelligent systems today?  When I was asked that question recently, I realized that there was only one possible answer.

Computer processing speeds continue to follow Moore’s Law (more or less), especially when we’re talking about floating point SIMD/parallel processing operations.  Moore’s Law doesn’t directly relate to speed or performance, but there is a strong correlation.  And processors today are now fast enough to execute complex algorithms with data applied in parallel.  Some companies, like Nvidia, have wonderful GPUs that turn out to work very well with this type of problem.  Others, like Intel, have released entire processor lines dedicated to AI algorithms.

In other words, what has happened is that the hardware caught up to the software.  The software (and mathematical) techniques are fundamentally the same, but now the machine learning systems can run fast enough to actually be useful.
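The point about data-parallel floating point work can be sketched briefly (using NumPy as a stand-in for the SIMD and GPU libraries mentioned above; the layer sizes are arbitrary): a neural-network layer is just a matrix multiply, and expressing it as one lets the hardware apply the same multiply-add to many numbers at once.

```python
import numpy as np

rng = np.random.default_rng(42)
inputs = rng.standard_normal((200, 32))   # 200 samples, 32 features each
weights = rng.standard_normal((32, 16))   # one layer: 32 inputs -> 16 units

# One multiply-add at a time, the way a scalar processor works:
slow = np.zeros((200, 16))
for i in range(200):
    for j in range(16):
        for k in range(32):
            slow[i, j] += inputs[i, k] * weights[k, j]

# The same arithmetic as a single matrix multiply, expressed so that
# SIMD units and GPUs can perform many multiply-adds in parallel:
fast = inputs @ weights
```

Both paths compute identical results; the difference is purely how much of the arithmetic the hardware can do simultaneously, which is exactly the gap that closed between the early 1990s and today.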