
Facebook and Zuckerberg Offend Yet Again February 3, 2020

Posted by Peter Varhol in Software platforms, Technology and Culture.

I admit that I criticize Mark Zuckerberg on a pretty regular basis.  My primary defense is that he deserves it.  In attempting to (yet again) redefine the scope and mission of Facebook and its associated properties (Instagram et al.), he has said that he will use his own guiding principles.  Those principles are:

  1. Free expression. In other words, Facebook users will be able to say whatever they want, within the scope of applicable law, without interference from Facebook. Get ready for a Facebook that doesn’t even bother to give lip service to truth.
  2. Privacy. Ah, not private from Facebook, who wants to monetize your most intimate details, but private from outside requests for transparency, including from law enforcement agencies.

So here’s the problem.  Zuckerberg is welcome to express his personal principles, and I might even agree with some of them (though I doubt it).  The problem is that principles don’t get you very far when you’re trying to define workable solutions in real life for millions of diverse users and other stakeholders.  Real life, with multiple concerns and stakeholders, doesn’t easily lend itself to clean and obvious answers.  His pious spouting of so-called principles is really a weak justification for exploiting the billions of Facebook and Instagram users to the max.

Tellingly, as I am writing this, writer Stephen King has bailed from Facebook with the same thoughts, that there is far too much obviously false information on the site, and that he has grave doubts about their desire and ability to offer privacy.

All this brings me to conclude that Zuckerberg has just one guiding principle in life – to make as much money as possible.

Face it, Facebook is a rogue company, led by a sociopath who cares only about himself.  Yet so many people let it control so much of their lives.

About Tweeting December 6, 2019

Posted by Peter Varhol in Software platforms, Technology and Culture.

I’ve given a presentation entitled “Talking to People: The Forgotten Software Tool.”  The time I gave it, at DevOps Days Berlin 2016, was probably the closest I’ve ever come to getting a standing ovation.  The thesis of the talk, based in part on MIT professor Sherry Turkle’s book Reclaiming Conversation, is that we as a society increasingly prefer digital means of communication to physical ones.  For generations raised with smartphones, tablets, and (legacy) computers, face-to-face communication can be a struggle.

I am not a digital native when it comes to broadcasting my thoughts and activities to the rest of the world.  I have held jobs where tweeting, for example, was a job requirement, in order to help build the company brand or get more page views.  I did so, even willingly, but my efforts were not nearly as voluminous as those of some of my colleagues.

I have to remember to tweet, or blog.  While I have been an introverted person throughout my life, decades ago I reluctantly recognized the need to reach out to others.  At the time, all of that was face to face, because digital connections didn’t exist.

Now there are so many ways to communicate without looking at someone.  I’ve had a number of video calls lately using Zoom, often with people who are using dual monitors.  They have the video showing on the large screen to one side, and look at that screen, and seemingly away from me.  It was funny, once I realized what was happening.

By itself, that’s not a bad thing, and in fact those with dual screens may not even realize they’re not really looking at you.  But it does damage the trust you try to build by looking someone in the eye, and the ability to read nonverbal cues is degraded even further with many digital forms of communication.

And tweeting is one of them.  Because we don’t know many (if any) of the people who read our tweets, and don’t have to look them in the eye, we don’t feel obliged to be respectful (witness many of Elon Musk’s more bizarre tweets).  That’s true even for those of us whose tweets are almost entirely professional.

Speech is not free.  We pay for it with everything we say.  Our reputations, the trust other people have in us, our ability to communicate effectively, and even our exposure to lawsuits all depend on not using Twitter as an attack platform.

Okay, here’s my solution.  Twitter needs to be banned from normal discourse.  In fact, Twitter is incompatible with normal discourse.  It should be an entirely professional platform.  I realize that this isn’t going to happen, but Twitter is too dangerous to our means of communication to simply dismiss.

The Path to Autonomous Automobiles Will Be Longer Than We Think July 14, 2019

Posted by Peter Varhol in Algorithms, Machine Learning, Software platforms, Technology and Culture.

I continue to be amused by people who believe that fully autonomous automobiles are right around the corner.  “They’re already in use in many cities!” they exclaim (no, they’re not).  In a post earlier this year, I listed four reasons why we are unlikely to see fully autonomous vehicles in my lifetime; at the very top of the list are mapping technology and geospatial information.

That makes the story of hundreds of cars being misdirected by Google Maps on their way to Denver International Airport all the more amusing.  Due to an accident on the main highway to DIA, Google Maps suggested an alternative route, a dirt road that became a muddy mess and trapped more than 100 cars in the middle of the prairie.

Of course, Google disavowed any responsibility, claiming that it makes no promises with regard to road conditions, and that users should check road conditions ahead of time.  Except that it did say that this dirt road would take about 20 minutes less than the main road.  Go figure.  While not a promise, it does sound like a factual statement on, well, road conditions.  And, to be fair, they did check road conditions ahead of time – with Google!

While this is funny (at least to read about), it points starkly to the limitations of digital maps for car navigation.  Autonomous cars require maps with exacting detail, accurate to feet or even inches.  Yet if Google, one of the best mapping providers, cannot get an entire route right, there is no hope for fully autonomous cars to use these same maps sans driver.

But, I hear you say, how often does this happen?  Often.  I’ve frequently taken a Lyft to a particular street address in Arlington, Massachusetts, a close-in suburb of Boston.  The Lyft (and, I would guess, Uber) maps have it as a through street, but on the ground it is bisected by the Minuteman Bikeway and blocked to vehicular traffic.  Yet every single Lyft tries to take me down one end of that street in vain.  Autonomous cars need much better navigation than this, especially in and around major cities.

And Google can’t have it both ways, supplying us with traffic conditions yet disavowing any responsibility for doing so.  Of course, that approach is part and parcel of any major tech company, so we shouldn’t be surprised.  But we should be very wary of the geospatial information they provide.

Should We Let Computers Control Aircraft? March 23, 2019

Posted by Peter Varhol in Algorithms, Software platforms.

Up until the early 1990s, pilots controlled airliners directly, using hydraulic systems.  A hydraulic system contains a heavy fluid (hydraulic oil) in tubes whose pressure is used to physically push control surfaces in the desired direction.  In other words, the pilots directly manipulated the aircraft control surfaces.

There is some comfort in direct control, in that we are certain that our commands translate directly into control surface motion.  There have been only a few instances where aircraft have completely lost hydraulics.  The best known is United Flight 232, in 1989, where an exploding engine on the DC-10 punctured lines in all three hydraulic systems.  The airliner crash-landed in Sioux City, Iowa, with the loss of about a third of the passengers and crew, yet, given the circumstances, the landing was considered a success.

A second was a DHL A300 cargo plane hit by a missile after takeoff from Baghdad Airport in 2003.  It managed to return to the airport without loss of life (there was only a crew of three on board), although it ended up off the runway.

In 1984, Airbus launched the A320, the first fly-by-wire airliner.  This aircraft replaced direct hydraulic linkages with electrical signals between the pilot’s flight controls and the control surfaces, with computers sitting in the middle.  The computers accept a control request from the pilot, interpret it in light of all other available flight data, and decide if and how to carry out the request (note the term “request”).  There were a few incidents with early A320s, but the design was generally successful.

Today, nearly all new airliners are fly-by-wire.  Cockpit controls request changes in control surfaces, and the computer decides if it is safe to carry them out.  The computers also make continuous adjustments to the control surfaces, enabling smooth flight without pilot intervention.  In practice, pilots (captain or first officer) fly manually for perhaps only a few minutes of every flight.  Even when they fly manually, they are using the fly-by-wire system, albeit with less computer intervention.  Oh, and if the computer determines that a request cannot be executed safely, it won’t execute it.
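The request-then-decide loop can be sketched in a few lines.  This is a hypothetical simplification for illustration only; the names and envelope limits are invented and bear no relation to any real avionics code:

```python
# Hypothetical sketch of a fly-by-wire decision loop: the pilot's input is a
# request, and the computer decides how much of it to honor.  The limits
# below are illustrative, not from any real aircraft.

MAX_BANK_DEG = 67        # invented bank-angle limit
MAX_PITCH_UP_DEG = 30    # invented pitch-up limit
MAX_PITCH_DOWN_DEG = -15 # invented pitch-down limit

def clamp(value, low, high):
    """Constrain a value to the closed interval [low, high]."""
    return max(low, min(high, value))

def handle_request(requested_bank, requested_pitch):
    """Return the bank/pitch actually commanded to the control surfaces.

    The pilot's request is honored only within the protected flight
    envelope; anything beyond the limits is silently clipped.
    """
    commanded_bank = clamp(requested_bank, -MAX_BANK_DEG, MAX_BANK_DEG)
    commanded_pitch = clamp(requested_pitch, MAX_PITCH_DOWN_DEG, MAX_PITCH_UP_DEG)
    return commanded_bank, commanded_pitch

# A pilot requesting a 90-degree bank gets only the envelope limit.
print(handle_request(90, 10))  # (67, 10)
```

The point is the shape of the logic, not the numbers: the pilot never commands the surfaces directly; the computer mediates every input.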

Fly-by-wire is inarguably safer than direct-fly hydraulic systems in controlling an aircraft.  Pilots make mistakes, and a few of those mistakes can have serious consequences.  But fewer mistakes can be made if the computer is in charge.  Another but:  Anyone who says that no mistakes can be made by the computer is on drugs.

Fly-by-wire systems are controlled by complex software, and software has an inherent problem: it isn’t and can’t be perfect.  And while aircraft software is developed under strict safety protocols, that doesn’t prevent bugs.  With the 737 MAX MCAS software, Boeing seems to have forgotten that, and made the system difficult to override.  And it didn’t document the changes in the pilot manuals.  And that, apparently, is why we are here.  I am not even convinced that the MCAS software is buggy; instead, it seems to have performed as designed, but the design was crappy.

The real solution is that yes, the computer has to fly the airplane under most circumstances.  The aircrew in that case are flight managers, not pilots in the traditional sense.  But if there is an unusual situation (bad storm, computer or sensor failure, structural failure, or more), the pilots must be trained to take over and fly the plane safely.  That is where both airliner manufacturers and airlines are falling down right now.

Aircrews are forgetting, or never learning, how to fly planes.  Nor are they learning situational awareness: the ability to comprehend when something is going wrong and when they need to intervene.  It’s not their fault; aircraft and flying have changed enormously in the last two decades, and there is a generation of younger pilots who may not be able to recognize a deteriorating situation, or know what to do about it.

Will Self-Driving Cars Ever Be Truly So? January 7, 2019

Posted by Peter Varhol in Architectures, Machine Learning, Software platforms, Technology and Culture.

The quick answer is that we will not be in self-driving cars during my lifetime.  Nor your lifetime.  Nor any combination of the two.  Despite pronouncements by so-called pundits, entrepreneurs, reporters, and GM, there is no chance of a self-driving car truly being so under all conditions, let alone of everyone being in a self-driving car, with all that that implies.

The fact of the matter is that the Waymo CEO has come out and said that he doesn’t imagine a scenario where self-driving cars will operate under all conditions without occasional human intervention.  Ever.  “Driverless vehicles will always have constraints,” he says.  Most of his competitors now agree.

So what do we have today?  We have some high-profile demonstrations under ideal conditions, and some high-profile announcements claiming that we will all be in self-driving cars within a few years.  And one completely preventable death.  That’s about it.  I will guess that we are about 70 percent of the way there, but that last 30 percent is going to be a real slog.

What are the problems?

  1. Mapping.  Today, self-driving cars operate only on routes that have been mapped in detail.  I’ll give you an example.  I was out running in my neighborhood one morning, and was stopped by someone looking for a specific street.  I realized that there was a barricaded fire road from my neighborhood leading to that street.  His GPS showed it as a through street, which was wrong (he preferred to believe his GPS rather than me).  If GPS and mapping cannot get every single street right, self-driving cars won’t work.  Period.
  2. Weather.  Rain or snow interrupts GPS signals, as does certain terrain.  It’s unlikely that we will ever have reliable GPS, Internet, and sensor data under extreme weather conditions, which in most of the country occur several months a year.
  3. Internet.  A highway of self-driving cars must necessarily communicate with one another.  This map (paywall) pretty much explains it all.  There are large swaths of America, especially in rural areas, that lack a reliable Internet connection.
  4. AI.  Self-driving cars rely on AI to identify objects in the road.  This technology has the most potential to improve over time.  Except in bad weather.  And on poorly mapped streets.
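The four constraints above compound: full autonomy requires all of them to hold at once, so the weakest link governs.  A toy sketch of that gating logic, with entirely invented thresholds, makes the point:

```python
# Toy illustration: autonomy is gated on every constraint holding
# simultaneously, so one weak link (a wrong map, bad weather, no
# connectivity, low perception confidence) disables full self-driving.
# The thresholds are invented for illustration.

def can_drive_autonomously(map_confidence, weather_ok, connectivity_ok,
                           perception_confidence):
    return (map_confidence >= 0.99          # near-perfect map data required
            and weather_ok                  # no rain/snow degrading GPS and sensors
            and connectivity_ok             # reliable network links available
            and perception_confidence >= 0.95)

# Even excellent perception cannot compensate for an unreliable map.
print(can_drive_autonomously(0.90, True, True, 0.999))  # False
```

This is why progress on AI perception alone (item 4) doesn’t close the gap: the conjunction fails whenever any single condition fails.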

So right now we have impressive demonstrations that have no basis in reality.  I won’t discount the progress that has been made.  But we should be under no illusions that self-driving cars are right around the corner.

The good news is that we will likely see specific applications in practice in a shorter period of time.  Long-haul trucking is one area with great potential for the shorter term.  It will involve re-architecting our trucking system to create terminals around the Interstate highway system, but that seems doable, and would be a nice application of this technology.

I Don’t Need a Hero October 23, 2018

Posted by Peter Varhol in Software development, Software platforms, Strategy.

Apologies to Bonnie Tyler, but we don’t need heroes, as we have defined them in our culture.  “He’s got to be strong, he’s got to be fast, and he’s got to be fresh from the fight.”  Um, no.

Atul Gawande, author of The Checklist Manifesto, makes it clear that the heroes, those in any profession who create a successful outcome primarily on the strength of their superhuman effort, don’t deserve to be recognized as true heroes.  In fact, we should try to avoid circumstances that appear to require a superhuman effort.

So what are heroes?  We would like to believe that they exist.  Myself, I am enamored with the astronauts of a bygone era, who faced significant uncertainties in pushing the envelope of technology and accepted that their lives were perpetually on the line.  But, of course, they were the same ones who thought, because they survived, that they were better than those who sacrificed their lives.

Today, according to Gawande, the heroes are those who can follow checklists in order to make sure that they don’t forget any step in a complex process.  The checklists themselves can be simple, in that they exist to prompt professionals to remember and execute seemingly simple steps that are often forgotten in the heat of crisis.

In short, Gawande believes in commercial airline pilots, such as Chesley (Sully) Sullenberger, who with his copilot Jeffrey Skiles glided their wounded plane to a ditching in the Hudson River off Midtown Manhattan.  Despite the fact that we all know Sully’s name in the Miracle on the Hudson, it was a team effort by the entire flight crew.  And they were always calm, and in control.

Today, software teams are made up of individuals, not close team members.  Because they rarely work as a true team, it’s easy for one or more individuals to step up and fix a problem without the help of the others.

There are several problems with that approach, however.  First, if an extra effort by one person is successful, the team may not try as hard in the future, knowing that they will be bailed out of difficult situations.  Second, the hero is not replicable; you can’t count on it again and again in those situations.  Third, the hero can’t solve every problem; other members of the team will eventually be needed.

It feels good to be the hero, the one who by virtue of extreme effort fixes a bad situation.  The world loves you.  You feel like you’ve accomplished something significant.  But you’re not at all a hero if your team wasn’t there for you.

Too Many Cameras June 15, 2018

Posted by Peter Varhol in Software platforms, Strategy, Technology and Culture.

The title above is a play off of the “Too Many Secrets” revelation in the 1992 movie Sneakers, in which Robert Redford’s character, who has a secret or two himself, finds himself in possession of the ultimate decryption device, and everyone wants it.

Today we have too many cameras around us.  This was brought home to me rather starkly when I received an email that said:

I’ve been recording you with your computer camera and caught you <censored>.  Shame on you.  If you don’t want me to send that video to your family and employer, pay me $1000.

I pause.  Did I really do <censored> in front of my computer camera?  I didn’t think so, but I do spend a lot of time in front of the screen.  In any case, <censored> didn’t quite rise to the level of blackmail concern, in my opinion, so I ignored it.

But is this scenario so completely far-fetched?  This article lists all of the cameras that Amazon can conceivably put in your home today, and in the near future, that list will certainly grow.  Other services, such as your PC vendor and security system provider, will add even more movie-ready devices.

In some ways, the explosion of cameras looking at our actions is good.  Cameras can nudge us to drive more safely, and to identify and find thieves and other bad guys.  They can help find lost or kidnapped children.

But even outside our home, they are a little creepy.  You don’t want to stop in the middle of the sidewalk and think, I’m being watched right now.  The vast majority of people simply don’t have any reason to be observed, and thinking about it can be disconcerting.

Inside, I simply don’t think we want them, phone and PC included.  I do believe that people realize it is happening, but in the short term, think the coolness of the Amazon products and the lack of friction in ordering from Amazon supersedes any thoughts about privacy.  They would rather have computers at their beck and call than think about the implications.

We need to do better than that if we want to live in an automated world.

Alexa, Phone Joe May 28, 2018

Posted by Peter Varhol in Algorithms, Software platforms, Technology and Culture.

By now, the story of how Amazon Alexa recorded a private conversation and sent the recording off to a colleague is well-known.  Amazon has said that the event was a highly unlikely series of circumstances that will only happen very rarely.  Further, it promised to try to adjust the algorithms so that it didn’t happen again, but no guarantees, of course.

Forgive me if that doesn’t make me feel better.  Now, I’m not blaming Amazon, or Alexa, or the couple involved in the conversation.  What this scenario should be doing is radically readjusting our expectations of what a private conversation is.  Decades ago, there was a TV show called “Kids Say the Darndest Things.”  It turned out that most of the darndest things concerned what the children repeated from their parents.

Well, it’s not only our children that are in the room.  It’s also Internet-connected “smart” devices that can reliably digitally record our conversations and share them around the world.  Are we surprised?  We shouldn’t be.  Did we really think that putting a device that we could talk to in the room wouldn’t drastically change what privacy meant?

Well, here we are.  Alexa is not only a frictionless method of ordering products.  It is an unimpeachable witness listening to “some” conversations in the room.  Which ones?  Well, that’s not quite clear.  There are keywords, but depending on location, volume, and accent, Alexa may hear keywords where none are intended.
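To see how a keyword spotter can hear keywords where none are intended, consider a toy sketch.  Real devices match acoustic features, not text, and the function and threshold here are invented, but the failure mode is the same: a tolerant match accepts near-misses.

```python
# Hypothetical illustration of wake-word false positives: a tolerant
# similarity match (here, on a text transcript) accepts words that were
# never meant as commands.  The threshold is invented for illustration.

from difflib import SequenceMatcher

WAKE_WORD = "alexa"

def sounds_like_wake_word(word, threshold=0.7):
    """Return True if the word is 'close enough' to the wake word."""
    return SequenceMatcher(None, word.lower(), WAKE_WORD).ratio() >= threshold

print(sounds_like_wake_word("alexa"))   # True
print(sounds_like_wake_word("alexis"))  # True  (false positive)
print(sounds_like_wake_word("dinner"))  # False
```

Loosen the threshold and ordinary conversation starts triggering the device; tighten it and the device misses real commands spoken with an unfamiliar accent.  There is no setting that eliminates both errors.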

And it will decide who to share those conversations with, perhaps based on pre-programmed keywords.  Or perhaps based on an AI-type natural language interpretation of a statement.  Or, most concerning, based on a hack of the system.

One has to ask whether, in the very near future, Alexa recordings may be subject to a warrant in a criminal case.  Guess what: it has already happened.  And unintended consequences will continue to occur, and many of them will become more and more public.

We may well accept that tradeoff – more and different unintended consequences in return for greater convenience in ordering things.  I’m aware that Alexa can do more than that, and that its range of capability will only continue to expand.  But so will the range of unintended consequences.