
Empathetic Technology is an Idea Whose Time Should Never Come June 20, 2018

Posted by Peter Varhol in Technology and Culture.

I love TED talks.  They are generally well thought out and well-presented, and offer some significant insights on things that may not have occurred to me before.

I really, really wanted to give a thumbs-up to Poppy Crum’s talk on empathetic technology, but it contradicted some of my fundamental beliefs on human behavior and growth.  She talks about how measuring and understanding the physical attributes of emotion will help draw us together, so that we don’t have to feel so alone and misunderstood.

Well, I suppose that’s one way to look at it.  I rather look at it as wearing a permanent lie detector.  Now, that may not be a bad thing, unless you are playing poker or negotiating a deal.  But exposing our innermost emotions to others is rightly a gradual thing, and should be under our control, rather than immediately available through technology.

Also, the example she demonstrated required data from the entire audience, rather than from a single individual.  And it was highly contrived; it's not at all clear that it would work in practice.  It involved measuring changes in CO2 emissions from the audience as it reacted to something unexpected.

But in general, her thesis violates my thoughts on emotional friction.  Other people don’t understand us.  Other people do things that make us feel uncomfortable.  Guess what?  Adapting to that is how we grow as human beings.  And growth is what makes us human.  Now, granted, in a few cases where attempts at emotional growth result in psychopathologies, it seems like there could be value here.  But . . .

I recall the Isaac Asimov novel The Naked Sun, where humans who interact physically with others are considered pathologic.  So we become content to view each other electronically, rather than interact physically.  I see a significant loss of humanity there.

And despite how Poppy Crum paints it, I see a significant loss of humanity with her plan, too.  She is correct in that empathetic technology can help identify those whose psyches may break under the strain of adapting to friction, but I think the loss of our humanity in general overwhelms this single good.


Here’s Looking At You June 18, 2018

Posted by Peter Varhol in Algorithms, Machine Learning, Software tools, Technology and Culture.

I studied a rudimentary form of image recognition when I was a grad student.  While I could (sometimes) identify simple images based on obviously distinguishing characteristics, between the limitations of rule-based systems and the computing power of Lisp Machines and early Macs, facial recognition was well beyond the capabilities of the day.

Today, facial recognition has benefitted greatly from better algorithms and faster processing, and is available commercially from several different companies.  There is some question as to its reliability, but at this point it’s probably better than any manual approach to comparing photos.  And that seems to be a problem for some.

Recently the ACLU and nearly 70 groups sent a letter to Amazon CEO Jeff Bezos, alongside another from 20 shareholder groups, arguing that Amazon should not provide surveillance systems such as facial recognition technology to the government.  Amazon has a facial recognition system called Rekognition (why would you use a spelling that is more reminiscent of evil times in our history?).

Once again, despite the Hitleresque product name, I don’t get the outrage.  We give the likes of Facebook our life history in detail, in pictures and video, and let them sell it on the open market, but the police can’t automate the search of photos?  That makes no sense.  Facebook continues to get our explicit approval for the crass but grossly profitable commercialization of our most intimate details, while our government cannot use commercial and legal software tools?

Make no mistake; I am troubled by our surveillance state, probably more than most people, but we cannot deny tools to our government that the Bad Guys can buy and use legally.  We may not like the result, but we seem happy to go along like sheep when it’s Facebook as the shepherd.

I tried for the life of me to curse our government for its intrusion in our lives, but we don’t seem to mind it when it’s Facebook, so I just can’t get excited about the whole thing.  I cannot imagine Zuckerberg running for President.  Why should he give up the most powerful position in the world to face the checks and balances of our government?

I am far more concerned about individuals using commercial facial recognition technology to identify and harass total strangers.  Imagine an attractive young lady (I am a heterosexual male, but it’s also applicable to other combinations) walking down the street.  I take her photo with my phone, and within seconds have her name, address, and life history (quite possibly from her Facebook account).  Were I that type of person (I hope I’m not), I could use that information to make her life difficult.  While I don’t think I would, there are people who would think nothing of doing so.

So my take is that if you don’t want the government to use commercial facial recognition software, demonstrate your honesty and integrity by getting the heck off of Facebook first.

Update:  Apple will automatically share your location when you call 911.  I think I’m okay with this, too.  When you call 911 for an emergency, presumably you want to be found.

Too Many Cameras June 15, 2018

Posted by Peter Varhol in Software platforms, Strategy, Technology and Culture.

The title above is a play off of the “Too Many Secrets” revelation in the 1992 movie Sneakers, in which Robert Redford’s character, who has a secret or two himself, finds himself in possession of the ultimate decryption device, and everyone wants it.

Today we have too many cameras around us.  This was brought home to me rather starkly when I received an email that said:

I’ve been recording you with your computer camera and caught you <censored>.  Shame on you.  If you don’t want me to send that video to your family and employer, pay me $1000.

I paused.  Did I really do <censored> in front of my computer camera?  I didn’t think so, but I do spend a lot of time in front of the screen.  In any case, <censored> didn’t quite rise to the level of blackmail concern, in my opinion, so I ignored it.

But is this scenario so completely far-fetched?  This article lists all of the cameras that Amazon can conceivably put in your home today, and in the near future, that list will certainly grow.  Other services, such as your PC vendor and security system provider, will add even more movie-ready devices.

In some ways, the explosion of cameras looking at our actions is good.  Cameras can nudge us to drive more safely, and to identify and find thieves and other bad guys.  They can help find lost or kidnapped children.

But even outside our home, they are a little creepy.  You don’t want to stop in the middle of the sidewalk and think, I’m being watched right now.  The vast majority of people simply don’t have any reason to be observed, and thinking about it can be disconcerting.

Inside, I simply don’t think we want them, phone and PC included.  I do believe that people realize it is happening, but in the short term, think the coolness of the Amazon products and the lack of friction in ordering from Amazon supersedes any thoughts about privacy.  They would rather have computers at their beck and call than think about the implications.

We need to do better than that if we want to live in an automated world.

Cognitive Bias in Machine Learning June 8, 2018

Posted by Peter Varhol in Algorithms, Machine Learning.

I’ve danced around this topic over the last eight months or so, and now think I’ve learned enough to say something definitive.

So here is the problem.  Neural networks are sets of layered algorithms.  A network might have three layers, or it might have over a hundred.  These algorithms, which can be as simple as polynomials, or as complex as partial derivatives, process incoming data and pass it up to the next level for further processing.
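To make that layered structure concrete, here is a minimal sketch in pure Python.  The weights here are entirely made up by me; they stand in for the data scientists’ secret sauce.  Each layer computes weighted sums of its inputs and passes the result up to the next layer:

```python
def relu(xs):
    # A common, simple per-element activation: zero out negatives.
    return [max(0.0, x) for x in xs]

def layer(inputs, weights, biases):
    # Each output is a weighted sum of the inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A toy three-layer network with arbitrary, hand-written weights.
x = [0.5, -1.2]
h1 = relu(layer(x, [[0.3, -0.8], [1.1, 0.2]], [0.1, -0.1]))
h2 = relu(layer(h1, [[0.7, 0.5], [-0.4, 0.9]], [0.0, 0.2]))
out = layer(h2, [[1.0, -1.0]], [0.0])
print(out)
```

In a real network, of course, the weights would be learned from training data rather than written by hand.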

Where do these layers of algorithms come from?  Well, that’s a much longer story.  For the time being, let’s just say they are the secret sauce of the data scientists.

The entire goal is to produce an output that accurately models the real-life outcome.  So we run our independent variables through the layers of algorithms and compare the output to the reality.

There is a problem with this.  Given a complex enough neural network, it is entirely possible to train it on almost any data set and get an acceptable output, even if that data is not related to the problem domain.

And that’s the problem.  If any random data set will work for training, then choosing a truly representative data set can be a real challenge.  Of course, we would never use a random data set for training; we would use something that was related to the problem domain.  And here is where the potential for bias creeps in.
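To see why a sufficiently complex model can fit data that has nothing to do with the problem domain, consider this contrived sketch of my own (not from any real system).  An over-parameterized model with one “bump” unit per training point can reproduce a completely random labeling perfectly:

```python
import random

random.seed(0)
# Inputs with no structure relating them to their labels:
# evenly spaced points, labels assigned purely at random.
xs = [i / 20 for i in range(20)]
ys = [random.choice([0.0, 1.0]) for _ in range(20)]

def bump(x, center, width=0.01):
    # Fires only in a narrow window around one training point.
    return 1.0 if abs(x - center) < width else 0.0

def model(x):
    # One memorization unit per training example.
    return sum(y * bump(x, c) for y, c in zip(ys, xs))

# Training accuracy is perfect even though the labels are pure noise.
correct = sum(1 for x, y in zip(xs, ys) if model(x) == y)
print(correct, "of", len(xs))  # 20 of 20
```

Perfect training accuracy here says nothing about the model working on new data; that is exactly the trap.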

Bias is disproportionate weight in favor of or against one thing, person, or group compared with another.  It’s when we make one choice over another for emotional rather than logical reasons.  Of course, computers can’t show emotion, but they can reflect the biases of their data, and the biases of their designers.  So we have data scientists either working with data sets that don’t completely represent the problem domain, or making incorrect assumptions about the relationships between data and results.

In fact, depending on the data, the bias can be drastic.  MIT researchers have recently demonstrated Norman, the psychopathic AI.  Norman was trained with written captions describing graphic images about death from the darkest corners of Reddit.  Norman sees only violent imagery in Rorschach inkblot cards.  And of course there was Tay, the artificial intelligence chatter bot that was originally released by Microsoft Corporation on Twitter.  After less than a day, Twitter users discovered that Tay could be trained with tweets, and trained it to be obnoxious and racist.

So the data we use to train our neural networks can make a big difference in the results.  We might pick out terrorists based on their appearance or religious affiliation, rather than any behavior or criminal record.  Or we might deny loans to people based on where they live, rather than their ability to pay.

On the one hand, biases may make machine learning systems seem more, well, human.  On the other, we want outcomes from our machine learning systems that accurately reflect the problem domain, not biased ones.  We don’t want our human biases to be inherited by our computers.

Can Machines Learn Cause and Effect? June 6, 2018

Posted by Peter Varhol in Algorithms, Machine Learning.

Judea Pearl is one of the giants of what started as an offshoot of classical statistics, but has evolved into the machine learning area of study.  His actual contributions deal with Bayesian statistics, along with prior and conditional probabilities.

If it sounds like a mouthful, it is.  Bayes Theorem and its accompanying statistical models are at the same time surprisingly intuitive and mind-blowingly obtuse (at least to me, of course).  Bayes Theorem describes the probability of a particular outcome, based on prior knowledge of conditions that might be related to the outcome.  Further, we update that probability when we have new information, so it is dynamic.
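Here is a quick numerical sketch of that updating process, with hypothetical numbers of my own choosing: a condition with a 1% prior probability, and a test that detects 90% of real cases but also flags 5% of non-cases.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# One positive test: the 1% prior jumps, but only to about 15%.
p = posterior(0.01, 0.90, 0.05)
print(round(p, 3))   # about 0.154

# The dynamic part: a second positive test updates the belief again.
p2 = posterior(p, 0.90, 0.05)
print(round(p2, 3))  # about 0.766
```

Note how the first positive result moves the probability far less than intuition suggests; the low prior dominates, which is the surprisingly intuitive and mind-blowingly obtuse part all at once.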

So when Judea Pearl talks, I listen carefully.  In this interview, he points out that machine learning and AI as practiced today are limited by the techniques we are using.  In particular, he claims that neural networks simply “do curve fitting,” rather than understanding relationships.  His goal is for machines to discern cause and effect between variables, that is, “A causes B to happen, B causes C to happen, but C does not cause A or B”.  He thinks that Bayesian inference is ultimately a way to do this.

It’s a provocative statement to say that we can teach machines about cause and effect.  Cause and effect is a very situational concept.  Even most humans stumble over it.  For example, does more education cause people to have a higher income?  Well, maybe.  Or it may be that more intelligence causes a higher income, but more intelligent people also tend to have more education.  I’m simply not sure about how we would go about training a machine, using only quantitative data, about cause and effect.
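The education-and-income example can be simulated.  In this toy sketch (my own assumption, not data from anywhere), a hidden “aptitude” factor drives both education and income, while education has no direct effect on income at all, yet the two still correlate strongly:

```python
import random

random.seed(1)
samples = []
for _ in range(10000):
    aptitude = random.gauss(0, 1)
    education = aptitude + random.gauss(0, 1)
    income = aptitude + random.gauss(0, 1)  # depends on aptitude only
    samples.append((education, income))

def corr(pairs):
    # Plain Pearson correlation coefficient, computed by hand.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# Education and income correlate (about 0.5 here) even though
# neither causes the other; the number alone cannot reveal that.
print(round(corr(samples), 2))
```

Quantitative data like this is exactly what the machine sees, and nothing in it distinguishes the confounded story from a causal one.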

As for neural networks being mere curve-fitting, well, okay, in a way.  He is correct to point out that what we are doing with these algorithms is not finding Truth, or cause and effect, but rather looking at the best way of expressing a relationship between our data and the outcome produced (or desired, in the case of unsupervised learning).

All that says is that there is a relationship between the data and the outcome.  Is it causal?  It’s entirely possible that not even a human knows.

And it’s not at all clear to me that this is what Bayesian inference is saying.  And in fact I don’t see anything in any statistical technique that allows us to assume cause and effect.  Right now, the closest we come to this in simple correlation is R-squared, which allows us to say how much of a statistical correlation is “explained” by the data.  But “explained” doesn’t mean what you think it means.
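For what it’s worth, R-squared is easy to compute by hand.  This sketch (with contrived numbers of mine) fits a least-squares line and reports how much of the variance the line “explains”, which, as noted above, is a statement about fit, not about cause:

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least-squares slope and intercept.
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# R-squared: 1 minus (residual variance / total variance).
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 4))  # close to 1: the line fits well
```

A value near 1 means the line tracks the data closely, and nothing more; swap cause and effect in the data and R-squared comes out the same.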

As for teaching machines cause and effect, I don’t discount it eventually.  Human intelligence and free will is an existence proof; we exhibit those characteristics, at least some of the time, so it is not unreasonable to think that machines might someday also do so.  That said, it certainly won’t happen in my lifetime.

And about data.  We fool ourselves here too.  More on this in the next post.

Facebook and the Cult of Secrecy June 5, 2018

Posted by Peter Varhol in Publishing, Technology and Culture.

I recall the worldwide controversy in 2013 surrounding National Security Agency contractor Edward Snowden, who published secret (and above) information about the NSA listening programs to the world at large.  These revelations prompted some worldwide protests against the data collection by the NSA (and by extension GCHQ in the UK and others).

I gave the entire Snowden mess a shrug of my shoulders.  I am not a big fan of secrets, personal or institutional.  I do think that there are things in life that we justifiably attempt to keep secret, for a variety of reasons.  However, I also believe that any attempt to keep something a secret for any significant period of time is ultimately futile.  “Three people can keep a secret, if two are dead” represents my belief in the longevity of secrets.

However, I can’t help but marvel at people protesting against government data collection, while those same people, and many more, willingly give far more personal data to Facebook.  I simply don’t get why Facebook, which is undeniably more effective than the NSA, gets a pass on its deeper intrusions into our lives.

Facebook should have taught us that there are no secrets.  I don’t think that we’ve learned that lesson, and I certainly don’t think Facebook has.  This article notes the company’s duplicitous behavior regarding what it says and what it actually does.  In this case, it was Zuckerberg himself who told Congress that they no longer shared user and friend information with third parties.

It turns out that Facebook deliberately decided not to classify 60 (yes, 60) phone manufacturers as third parties.  Zuckerberg’s excuse: they needed to provide them with real user data in order to test the integration with the app on their devices.  Um, no.

Now, I am a tester by temperament, and know darn well that the normal practice is to munge data used for testing.  Facebook providing 60 vendors with real data is not testing, it is yet another violation of their terms of service.  Oh, but Facebook is allowed to do that as long as someone (the janitor, perhaps) apologizes and says it won’t happen again.

So here you have it – Facebook lies, and will continue lying as long as they can get away with it.  And who lets them get away with it?  You do.

Update: Facebook bug set 14 million users’ sharing settings to public.  I really don’t at all understand why people put up with this.

Memorial Day 2018 May 28, 2018

Posted by Peter Varhol in Uncategorized.

I am a veteran.  I served six years as an Air Force officer, separating as a captain.  I wanted to fly; I had my private ticket at 17, but lacked the perfect eyesight needed to fly in the military.  So I flew a desk, got two masters degrees, and eventually got past the stage of my life where flying was important.

I was in San Antonio this past weekend, on a riverboat cruise, when the guide asked how many on the tour were active duty or veterans.  Despite the fact that San Antonio stands on the pillars of multiple Army and Air Force bases, only three of the 50 or so raised their hands (and one of them was a just-graduated ROTC cadet in uniform).  I was at a DevOps conference in Nashville last fall, in a room of 300 mostly young people, where the Iraqi War vet organizer asked how many were veterans.  My hand went up.  Period.

I served my country honorably (the DD-214 says so), but thinking back, I could have done so better.  I may not have been motivated by patriotism, but over the years that initial service has made me a different, and I think better, person.

We’ve had stupid wars (Spanish-American War, anyone?) and we’ve had unpopular wars (Vietnam certainly takes the cake here), and will continue to do so.  That is not for those who have chosen to serve to decide, although as human beings, many I’m sure have had opinions in the matter.  That’s what veterans have helped to protect, current events notwithstanding.

Service to our country would do all of us good.  It does not mean love, or patriotism; rather, it means that we recognize that we could not have our freedoms without sacrifice.  For most of us in the military, the sacrifices are minimal – a regimented lifestyle, a nod to authority, restrictions on our time and efforts.  But service doesn’t have to be in the military; all adults should seek out any opportunities to preserve our freedoms and ideals.

Those who have fallen in battle made the ultimate sacrifice.  I’m pretty sure that none intended to die for their country, but they did, and today is the day we remember them.  We may object to war in general, or government in general, or a specific war or government, but those who have died don’t deserve to be in that discussion.  So for one day, put aside politics and beliefs, and remember those who have died so that we could have the rights and privileges that we do.  Thank you.

Alexa, Phone Joe May 28, 2018

Posted by Peter Varhol in Algorithms, Software platforms, Technology and Culture.

By now, the story of how Amazon Alexa recorded a private conversation and sent the recording off to a colleague is well-known.  Amazon has said that the event was a highly unlikely series of circumstances that will only happen very rarely.  Further, it promised to try to adjust the algorithms so that it didn’t happen again, but no guarantees, of course.

Forgive me if that doesn’t make me feel better.  Now, I’m not blaming Amazon, or Alexa, or the couple involved in the conversation.  What this scenario should be doing is radically readjusting what our expectations of a private conversation are.  About three decades ago, there was a short-lived (I believe) reality TV show called “Children Say the Funniest Things.”  It turned out that most of the funniest things concerned what they repeated from their parents.

Well, it’s not only our children that are in the room.  It’s also Internet-connected “smart” devices that can reliably digitally record our conversations and share them around the world.  Are we surprised?  We shouldn’t be.  Did we really think that putting a device that we could talk to in the room wouldn’t drastically change what privacy meant?

Well, here we are.  Alexa is not only a frictionless method of ordering products.  It is an unimpeachable witness listening to “some” conversations in the room.  Which ones?  Well, that’s not quite clear.  There are keywords, but depending on location, volume, and accent, Alexa may hear keywords where none are intended.

And it will decide who to share those conversations with, perhaps based on pre-programmed keywords.  Or perhaps based on an AI-type natural language interpretation of a statement.  Or, most concerning, based on a hack of the system.

One has to ask whether, in the very near future, Alexa may be subject to a warrant in a criminal case.  Guess what?  It has already happened.  And unintended consequences will continue to occur, and many of those consequences will continue to be more and more public.

We may well accept that tradeoff – more and different unintended consequences in return for greater convenience in ordering things.  I’m aware that Alexa can do more than that, and that its range of capability will only continue to expand.  But so will the range of unintended consequences.