
Solving a Management Problem with Automation is Just Plain Wrong January 18, 2018

Posted by Peter Varhol in Strategy, Technology and Culture.

This article is so fascinatingly wrong on so many levels that it is worth your time to read it.  On the surface, it may appear to offer some impartial logic, that we should automate because humans don’t perform consistently.

“At some point, every human being becomes unreliable.”  Well, yes.  Humans aren’t machines.  They have good days and bad days.  They have exceptional performances and poor performances.

Machines, on the other hand, are stunningly consistent, at least under most circumstances.  Certainly software bugs, power outages, and hardware breakdowns happen, and machines will fail to perform under many of those circumstances, but such failures are relatively rare.

But there is a problem here.  Actually, several problems.  The first is that machines will do exactly the same thing, every time, until the cows come home.  That’s what they are programmed to do, and they do it reasonably well.

Humans, on the other hand, experiment.  And through experimentation and inspiration come innovation, a better way of doing things.  Sometimes that better way is evolutionary, and sometimes it is revolutionary.  But that’s how society evolves and becomes better.  The machine will always do exactly the same thing, so there will never be better and innovative solutions.  We become static, and as a society old and tired.

Second, humans connect with other humans in a way machines cannot (the movie Robot and Frank notwithstanding).  This article starts with the story of a restaurant whose workers showed up when they felt like it.  Rather than addressing that problem directly, the owner implemented a largely automated (and hands-off) food assembly line.

What has happened here is that the restaurant owner has taken a management problem and attempted to solve it with the application of technology.  And by not acknowledging his own management failings, he will almost certainly fail in his technology solution too.

Except perhaps at fast food restaurants, people eat out in part for the experience.  We do not eat out only, and probably not even primarily, for sustenance, but rather to connect with our family and friends, and with the random people we encounter.

If we cannot do that, we might as well just have glucose and nutrients pumped directly into our veins.


Tech Products That Should Never Have Been Conceived January 11, 2018

Posted by Peter Varhol in Technology and Culture.

I say that with some trepidation, for a number of reasons.  First, for the last 30 years I’ve made my living off of tech in some capacity or other.  Second, I’m all in favor of progress in technology.  It makes those of us who work in it wealthier, and it has the potential to significantly benefit society.

It’s the direction of progress that sometimes concerns me.  There are a number of things that can be invented, but probably should not be.  One is the Gita, from Boston-based Vespa subsidiary Piaggio Fast Forward.  Gita is a mobile cargo carrier that can haul up to 44 pounds, simply rolling along behind its owner.

A close second is the auto-following suitcase.  A young WSJ writer covering CES (paywall) writes about her experiences with these, and likens pulling your own carry-on through an airport to hiking the Oregon Trail.  Um.  She points out that it’s practical, in that you can have an iced latte in each hand and not worry about losing your bag.  Right.

What’s even worse is the Modobag, a ridable piece of luggage.  And the images on the website show young people using it.  I am imagining playing bumper cars, so to speak, in a crowded airport concourse.

I recognize that there is a niche, though possibly legitimate, use for products like these.  The elderly or infirm might find them useful, although they represent a pretty small percentage of the traveling public.  And despite an occasional marketing word to the contrary, these products are clearly aimed at an entirely different demographic.

And I recognize that at least a few of the articles are intended to be partly tongue-in-cheek.  But that’s no reason not to conclude that these particular emperors have no clothes.

But we have reached an era where tech companies don’t particularly care about benefitting society.  They think they can make their fortunes from young people who believe that spending thousands of dollars on the latest gadgets makes them hip.

Gita was announced a year ago, and doesn’t even seem to be in beta yet, so perhaps it will never see daylight.  Good.  And most airlines have said that they will not accept motorized bags whose batteries cannot be removed.  As these bags take up more space and weight than a conventional bag, I see this as only a half measure, but it is causing some rethinking among the companies making them.

Folks, forget the stupid iced latte.  Stuff like this serves no purpose whatsoever except to make you look silly.

Revisiting Net Neutrality December 14, 2017

Posted by Peter Varhol in Software platforms, Technology and Culture.

I wrote on this about three years ago.  As it seems that so-called net neutrality may be reaching the end of the road, at least for the near term, it is worthwhile cutting through the crap to examine what is really going on.

You know, I think that net neutrality has merits.  It certainly has marketing on its side; according to CNN, it means “to keep the internet open and fair.”

Ah, it doesn’t, and that is the problem.  It means that streaming services such as Netflix and Amazon can hog bandwidth with impunity, and without paying a premium.  I am certain that CNN has a business reason to support net neutrality, and it is unfortunate that they are letting that business reason leak into their reporting.

The Internet is a finite resource.  There are some companies that use a great deal of it.  Should they pay more for doing so?  Perhaps, but the “net neutrality” supporters don’t want to have that conversation.  I say let’s talk about it, but the news establishment doesn’t want to do so.  They give it a high-sounding label, and proclaim it good.  The ones who oppose it are bad.  Case closed.

Net neutrality does (maybe) mean that the Internet is basically a utility, like electricity or water.  That isn’t necessarily a bad thing, but I am not sure it reflects reality.  Those companies, mostly the telecom folks, have invested billions of dollars, and are not at all guaranteed a profit.  It is a risk, and when individuals or companies take risks, they succeed or fail according to the market.  Yet the likes of CNN are treating them as your electric utility, guaranteed to make a set amount of money from the state Public Utilities Commission.  That doesn’t reflect their reality at all.

I think that net neutrality is ultimately the way to go.  But it supports some businesses at the expense of others.  Just like the alternative.

But I have to ask, CNN, why are you afraid to even have the conversation?  You have declared net neutrality to be The Way, and you will brook no further discussion.

Update:  And now the CNN headline reads:  End of the Internet as we know it.  Can we get any more biased, CNN?

Who Will Thrive in an AI World? November 26, 2017

Posted by Peter Varhol in Machine Learning, Technology and Culture.

Software engineers, of course, who understand both relevant programming languages and the math behind the algorithms.  That is significantly less than the universe of software engineers in general, but I don’t see even those math-deprived programmers having a big problem, at least in the short term.

Beyond that?  Are we all toast?

Well, no.  Today someone asked me how machine learning would affect health care jobs.  I thought about where health care was going with machine learning, and about my own experiences with health care.  “The survivors will be those who can understand what the algorithms tell them, but also talk with the patients those results affect.”

I have dealt with doctors (such as my current PCP, who could be a much better doctor if she simply trusted herself) who simply look at test results and parrot them back to you.  I had a doctor who I liked and trusted, who could not find cancer but insisted it was there, based on photographs (it was not).

These are not health care professionals who will thrive in an era of AI-assisted medical evaluation and diagnosis.  They simply parrot test results, without adding value or effectively communicating with the patient.

To be fair, our system has created this kind of doctor, who is afraid of using their expertise to express an independent opinion.  I had one who did employ his expertise, during my cancer scare.  He came into my room, and said, “Where is your nose drain?  How come you’re not choking?”  Then “I looked at your MRI from six years ago, and you had indications then.  Whatever this is, it probably isn’t cancer.”  It wasn’t.

Doctors have become afraid to use their expertise, because of the fear of lawsuits and other recriminations.  That is unfortunate, and of course not entirely their fault.  But this is just the kind of doctor who will not survive the machine learning revolution.

I think that general conclusion can be extended to other fields.  Those that become overly reliant on machine results, and decline to employ their own expertise, will ultimately be left behind.  Those who are willing to use those results, yet supplement them with their own expertise, and effectively explain it to their patients, will succeed.  We are still people, after all, and need to communicate with one another.

Pay for Performance, Mathematics Edition November 21, 2017

Posted by Peter Varhol in Education, Technology and Culture.

I’ve always been suspicious of standardized tests that conclude that US students are average or worse in mathematics than their international peers.  My primary issue is that many more US students likely took these types of comparison tests than students in other countries.  While the US mean tended to be average, the standard deviation was larger than average, meaning that many students did much more poorly, but many also did much better.  The popular press tends to find fault with anything that reeks of US influence, and neglects to mention such a basic measure that would allow a better comparison.
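To make the statistical point concrete, here is a small sketch using the standard library.  The score numbers are entirely made up for illustration, not drawn from any real test: two populations share the same mean, but the one with the larger standard deviation has noticeably more students in both the high and the low tails.

```python
from statistics import NormalDist

# Hypothetical score distributions (illustrative numbers, not real test data):
# identical means, but different standard deviations.
narrow = NormalDist(mu=500, sigma=80)   # stand-in for "other countries"
wide = NormalDist(mu=500, sigma=110)    # stand-in for the US cohort

high, low = 650, 350  # arbitrary cutoffs for "high" and "low" performers

# Fraction of each population above the high cutoff...
frac_narrow = 1 - narrow.cdf(high)
frac_wide = 1 - wide.cdf(high)

print(f"narrow sd, above {high}: {frac_narrow:.1%}")
print(f"wide sd,   above {high}: {frac_wide:.1%}")

# ...and below the low cutoff.  The wider distribution has more of both,
# which an average-only comparison completely hides.
print(f"narrow sd, below {low}: {narrow.cdf(low):.1%}")
print(f"wide sd,   below {low}: {wide.cdf(low):.1%}")
```

Reporting only the mean makes these two populations look identical; the standard deviation is exactly the "basic measure" the comparison needs.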

There is a study that offers a different but related conclusion, however.  It claims that US students are competitively capable, but only when sufficiently motivated.  How do you motivate them?  Well, by paying them, of course.  When students are financially rewarded, their math results are significantly elevated.

This means that US students aren’t (necessarily) stupid, or undereducated, just unmotivated.  It’s an intriguing proposition, one that I think deserves more study.

Are Engineering and Ethics Orthogonal Concepts? November 18, 2017

Posted by Peter Varhol in Algorithms, Technology and Culture.

Let me explain through example.  Facebook has a “fake news” problem.  Users sign up for a free account, then post, well, just about anything.  If content violates Facebook’s rules, the platform generally relies on users to report it, although Facebook also has teams of editors and is increasingly using machine learning techniques to try (emphasis on try) to be proactive about flagging content.

(Developing machine learning algorithms is a capital expense, after all, while employing people is an operational one.  But I digress.)

But something can be clearly false while not violating Facebook guidelines.  Facebook is in the very early stages of attempting to authenticate the veracity of news (it will take many years, if it can be done at all), but it almost certainly won’t remove that content.  It will be flagged as possibly false, but still available for those who want to consume it.

It used to be that we as a society confined our fake news to outlets such as The Globe or the National Enquirer, tabloid papers typically sold at check-out lines in supermarkets.  Content was mostly about entertainment personalities, and consumption was limited to those who bothered to purchase it.

Now, however, anyone can be a publisher*.  And can publish anything.  Even at reputable news sources, copy editors and fact checkers have gone the way of the dodo bird.

It gets worse.  Now entire companies exist to write and publish fake news and outrageous views online.  Thanks to Google’s ad placement strategy, the more successful ones may actually get paid by Google to do so.

By orthogonal, I don’t mean contradictory.  At the fundamental level, orthogonal means “at right angles to.”  Variables that are orthogonal are statistically independent, in that changes in one don’t at all affect the other.
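As a quick illustration of that definition, the sketch below generates two independently sampled variables and computes their sample correlation.  Because neither variable carries any information about the other, the correlation comes out near zero, which is the statistical sense of "orthogonal" used here:

```python
import random

random.seed(42)  # fixed seed so the result is reproducible

# Two independently generated variables: changes in one tell you
# nothing about the other.
n = 100_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]

mean_x = sum(x) / n
mean_y = sum(y) / n

# Sample covariance and variances, then the correlation coefficient.
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
var_x = sum((a - mean_x) ** 2 for a in x) / n
var_y = sum((b - mean_y) ** 2 for b in y) / n
corr = cov / (var_x * var_y) ** 0.5

print(f"sample correlation: {corr:.4f}")  # close to zero
```

With 100,000 samples the correlation of independent variables lands within a few thousandths of zero, which is the point: one variable moving tells you nothing about the other.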

So let’s translate that to my point here.  Facebook, Google, and the others don’t see this as a societal problem, which is difficult and messy.  Rather they see it entirely as an engineering problem, solvable with the appropriate application of high technology.

At best, it’s both.  At worst, it is entirely a societal problem, to be solved with an appropriate (and messy) application of understanding, negotiation, and compromise.  That’s not Silicon Valley’s strong suit.

So they try to address it with their strength, rather than acknowledging that their societal skills as they exist today are inadequate to the immense task.  I would be happy to wait, if Silicon Valley showed any inclination to acknowledge this and try to develop those skills, but all I hear is crickets chirping.

These are very smart people, certainly smarter than me.  One can hope that age and wisdom will help them recognize and overcome their blind spots.  One can hope, can’t one?

*(Disclaimer:  I mostly publish my opinions on my blog.  When I use a fact, I try to verify it.  However, as I don’t make any money from this blog, I may occasionally cite something I believe to be a fact, but is actually wrong.  I apologize.)

Facebook, Fake News and Accounts, and Where Do We Go From Here? October 31, 2017

Posted by Peter Varhol in Technology and Culture.

Those of you who read me know that I am no fan of Facebook, for a wide variety of reasons.  I am not a member, and will never be one, even though it may hurt me professionally.  In short, I believe that Mark Zuckerberg is a megalomaniac who fancies Facebook as a modern religion, and himself as god, or at least the living prophet.

And regrettably, he may be right.  Because Facebook is far more than the “personal-ad-in-your-face” that I thought when I presented past objections.  Over the past 10 months, it has become pretty clear that Facebook is allowing itself to be used for purposes of influencing elections and sowing strife, sometimes violently.

The fact of the matter is that Zuckerberg and Facebook worship at the altar of the dollar, and everything else be damned.

Worse, from a technology standpoint, Facebook treats its probably-fatal flaws as mere software bugs, an inconvenience that it may fix if they rise far enough in the priority queue.

Still worse, the public-facing response is “We can’t be expected to police everything that happens on our site, can we?”

Well, yes, you can.  It is not “We can fix this,” or “We don’t think this is a problem.”  It is “You are at fault.”

In an earlier era of media (like, 10 years ago), publishers used to examine and vet every single advertisement.  Today it’s too hard?  That’s what Zuckerberg says.  That is the ultimate cop-out.  And that sick attitude is a side effect of worshiping at the altar of the dollar.

On Facebook, we are hearing louder echoes of our own voices.  Not different opinions.  And Facebook will not change that, because it will hurt their revenue.  And that is wrong in the most fundamental way.

So where do we go from here?  I would like to argue for people to stop using Facebook completely, but I know that’s not going to happen.  Maybe we should just use Facebook to keep in touch with friends, as was originally intended.  We really don’t have ten thousand friends; I have about 900 connections on LinkedIn, and probably don’t even remember half of them.  And I don’t read news from them.

Can we possibly cool the addiction that millions of people seem to have to Facebook?  I don’t know, but for the sake of our future I think we need to try.

Bias and Truth and AI, Oh My October 4, 2017

Posted by Peter Varhol in Machine Learning, Software development, Technology and Culture.

I was just accepted to speak at the Toronto Machine Learning Summit next month, a circumstance that I never thought would happen.  I am not an academic researcher, after all, and while I have jumped back into machine learning after a hiatus of two decades, many others are fundamentally better at it than me.

The topic is Cognitive Bias in AI:  What Can Go Wrong?  It’s rather a follow-on from the presentations I’ve done on bias in software development and testing, but it doesn’t really fit into my usual conferences, so I attempted to cast my net into new waters.  For some reason, the Toronto folks said yes.

But it mostly means that I have to actually write the presentation.  And here is the rub:  We tend to believe that intelligent systems are always correct, and in those rare circumstances where they are not, it is simply the result of a software bug.

No.  A bug is a one-off error that can be corrected in code.  A bias is a systematic skew toward a predetermined conclusion, and it cannot be fixed with a code change.  At the very least, the training data and machine learning architecture have to be rethought.
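A toy sketch of the distinction, with entirely made-up groups and labels: the code below "trains" a trivial majority-label model on deliberately skewed data.  No line of the code is wrong, yet the model systematically denies one group; the fix lies in the training data, not in patching any single line.

```python
from collections import Counter

# A deliberately skewed (hypothetical) training set: group A is almost
# always labeled "approve", group B almost always "deny".
training = (
    [("A", "approve")] * 90
    + [("B", "deny")] * 8
    + [("B", "approve")] * 2
)

def train_majority(data):
    """Learn the most common label per group -- a stand-in for a real model."""
    by_group = {}
    for group, label in data:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority(training)
print(model)  # {'A': 'approve', 'B': 'deny'}

# The training function is bug-free: it faithfully learns what the data
# shows.  The systematic skew against group B is a bias, and removing it
# means changing what the model is trained on, not fixing the code.
```

The same logic applied to balanced data would produce an unbiased model, which is exactly why a bias cannot be closed out like an ordinary bug report.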

And we have examples such as these:

If you’re not a white male, artificial intelligence’s use in healthcare could be dangerous.

When artificial intelligence judges a beauty contest, white people win.

But the fundamental question, as we pursue solutions across a wide range of applications, is:  Do we want human decisions, or do we want correct ones?  That’s not to say that all human decisions are incorrect, but only to point out that much of what we decide is colored by our bias.

I’m curious about what AI applications decide about this one.  Do we want to eliminate the bias, or do we want to reflect the values of the data we choose to use?  I hope the former, but the latter may win out, for a variety of reasons.