
Flying, Now and Then January 16, 2020

Posted by Peter Varhol in travel.

First, let me say that Chris Elliott does good work.  Really good work.  If I had a beef about travel, I would want his team on my side.  But occasionally, for whatever reason, he goes off the rails.  I guess I get it; in a larger sense, he’s got business reasons for catering to those who expect Cadillac service on a Yugo budget, but I have to call him on this.

In this article for USA Today, he bemoans the fact that what used to come with your airline ticket is now a group of optional extras, at an additional price.  I get what he’s saying, but Chris knows better than most people that he is being misleading here.

But basically he’s correct in his details, although he leaves significant gaps in his explanations.

Now, let’s take a look at the airline industry over the last fifty years.  Fifty years ago, the closest I got to a commercial airliner was when my father took me out to the end of the runway at Greater Pittsburgh International, and we spent the afternoon watching jets (mostly 707s) take off.  Occasionally there would be a C-123 or C-97 from the Air National Guard base there too.  I had no expectation of actually flying in a commercial aircraft; that was for people of means, and we most certainly were not that.

And yes, they had things that were included in the ticket, including meals, drinks, seat assignments, and luggage handling.  I get that.  Once again, they served people of means, with very good service.

A lot has changed since the Airline Deregulation Act of 1978.  Mostly, airline fares (and to some extent service) have been a race to the bottom.  That’s not necessarily a bad thing.  Far more people who could not dream of flying before this do so, and many do so often.  I count that as a net positive, although it involved tradeoffs.

Second, in getting to a reasonable point of profitability today, just about every airline has filed Chapter 11 bankruptcy.  Some storied names (PanAm, TWA, Braniff, etc.) went under completely.  Costs had to come down.  9/11 almost destroyed the commercial airline industry in the United States.  Those that survived reassembled the pieces, and are once again going concerns.

I won’t say that it was easy.  Many hardworking people saw their pay and benefits slashed (although this also happened to people in many other industries over the last twenty years).  Today, we look at an airline making a couple billion dollars in profit, and we say they are cheating the public.  However, their profit is also on the low end for US corporations in general, even though a couple billion dollars sounds like a lot of money.

And seriously, even factoring in the fees that have come with decoupled fares, those fares are in general the same, or even less (accounting for inflation), than what the elitist flying public paid in the 1960s and 1970s.

Now, I am most definitely not a shill for the US airlines.  I do frequent one airline group more than others, and fly frequently enough to at least occasionally get many of these perks for free.  But people simply can’t expect the service of fifty years ago at the fares in force today.

Chris, give it a break.  You’re not telling the whole story.  I, and many others, would stand no chance of flying today without the drastic shift we have seen in the industry.

Are Cell Phones the Cause of Society’s Schisms? January 9, 2020

Posted by Peter Varhol in Technology and Culture, Uncategorized.

More broadly, we might ask this question of social media in general, since the phone is simply a proxy for a wide range of services.  This intriguing article in the MIT Technology Review provides an anecdotal tale of a philosophy professor who, believing that he wasn’t communicating with his students effectively, offered extra credit to those who would give up their use of cell phones for nine days, and write about the experience.

While it doesn’t have the same academic rigor as Sherry Turkle’s Reclaiming Conversation, it is a telling story of teens finding out that there is more to the world than is available from their phone screens.

And it’s not a new thesis, but stories like these also reinforce that there have been drastic changes in society and culture in a short period of time.

At one level, social media lets us engage many people without actually seeing them.  When we look someone in the eye and gauge their reaction in real time, what we say can be very different.  When we don’t, negative messages seem to be magnified.

At another level, social media lets us pick and choose who we communicate with.  Generally, that means we are less likely to be exposed to different ideas, and more inclined to believe unreliable or bogus sources.  I would like to say that is our choice, except that it’s not clear we easily have any other choice.

What we have created is a massive societal experiment in which, within a decade, we have dramatically shifted the nature of interpersonal interactions.  Whereas the majority of interaction was once face to face, today it is largely remote.  Where most interactions were one on one, we find that remote interactions are more often one on many.  And where many of our interactions were casual encounters with random people, today they are with people we already know and associate with.

Mark Zuckerberg and Facebook say it’s all for the good of society, and that’s what they stand for.  They are too biased to offer an honest take, with hundreds of billions of dollars at stake.  I will say that it’s instructive that Zuckerberg, while publicly promoting openness and sharing, has chosen to build his own personal estate behind walls.  Let him live in a walkup for a few years; he never has, and never will.  Live like your users live, Mark, is my final word to him.  You have created this world; you are not responding to it.

In the meantime, are cell phones good or bad?  I will offer that they are a tool, and it is the apps we choose to use that make them one or the other.

Will We Have Completely Autonomous Airliners? January 2, 2020

Posted by Peter Varhol in aviation, Machine Learning, Technology and Culture.

This has been the long-term trend, and two recent stories have added to the debate.  First, the new FAA appropriations bill includes a directive to study single-pilot airliners for cargo use.  Second is this story in the Wall Street Journal (paywall), discussing how the Boeing 737 MAX crashes have caused the company to advocate even more strongly for fully autonomous airliners.

I have issues with that.  First, Boeing’s reasoning is fallacious.  The 737 MAX crashes were not pilot error, but rather design and implementation errors, compounded by inadequate documentation and training.  Boeing as a culture apparently still refuses to acknowledge that.

Second, as I have said many times before, automation is great when used in normal operations.  When something goes wrong, automation more often than not does the opposite of the right thing, attempting to continue normal operations in an abnormal situation.

As for a single pilot, when things go wrong, a single pilot is likely to be too focused on the most immediate problem, rather than carrying out a division of labor.  In an emergency, two experienced heads are better than one.  And there are instances, albeit rare, where a pilot becomes incapacitated and a second person is needed.

Boeing is claiming that AI will provide the equivalent of a competent second pilot.  That’s not what AI is all about.  Despite the ability to learn, a machine learning system would have to have seen the circumstances of the failure before, and have a solution, or at least an approximation of a solution, as a part of its training.  This is not black magic, as Boeing seems to think.  It is a straightforward process of data and training.

AI does only what it is trained to do.  Boeing says that pilot error is the leading cause of airliner incidents.  They are correct, but it’s not as simple as that.  Pilot error is a catch-all phrase that covers a number of different things, including wrong decisions, poor information, and inadequate training, among others.  While these can easily be traced back to the pilot, they are the result of several distinct causes of errors and omissions.

So I have my doubts as to whether full automation is possible or even desirable.  And the same applies to a single pilot.  Under normal operations, it might be a good approach.  But life is full of unexpected surprises.

Statistical Significance and Real Life December 23, 2019

Posted by Peter Varhol in Algorithms, Education, Technology and Culture.

I have a degree in applied math, and have taught statistics for a number of years.  I like to think that I have an intuitive feel for numbers and how they are best interpreted (of course, I also like to think that I am handsome and witty).

Over the last few years there has been concern among the academic community that most people massively misinterpret what statistical significance is telling them.  Most research is done by comparing two separate groups (people, drugs, ages, treatments, and so on), one of which is not changed, while the other of which undergoes a change (most experiments are actually more complex than this, with multiple change groups representing different stimuli, different doses, or different behaviors).  The two groups are then compared through a quantitative measurement of the characteristic under test.

Because we are sampling the population, there is some uncertainty in the result.  Only if we have complete information (a census) can we make a statement with certainty, and we almost never have that.  Statistical significance means there is only a small probability (usually one or five percent) that a result this extreme would be found by chance alone, suggesting that there is a real difference between the control and experimental groups.

Statistical significance is a narrow mathematical term.  It refers to interpreting the mathematics, not applying the result to the real world.  I try to make the distinction between statistical significance and practical significance.  Practical significance is when the experimental conclusion can result in meaningful action in the problem domain.  “This drug always cures cancer”, for example, can never be true, for multiple reasons.  But we might like to make the statement that we can save twenty thousand lives a year; that might result in action in promoting a cure.

The problem is that many policy makers and the general public conflate the two.  If something is statistically significant, how can it also not be practically significant?  A large sample size can identify and amplify tiny differences that in many cases don’t matter in the grand scheme of things.
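To make the distinction concrete, here is a small sketch (my own toy simulation, with invented numbers, not drawn from any real study): with a large enough sample, even a difference of 0.03 standard deviations – far too small to matter in most real-world settings – comes back as statistically significant.

```python
import random
import statistics

random.seed(1)

# Two groups drawn from nearly identical populations: the "treated"
# group's true mean is only 0.03 standard deviations higher.
n = 100_000
control = [random.gauss(0.00, 1) for _ in range(n)]
treated = [random.gauss(0.03, 1) for _ in range(n)]

diff = statistics.mean(treated) - statistics.mean(control)
se = (1 / n + 1 / n) ** 0.5   # standard error of the difference, known sigma = 1
z = diff / se                 # two-sided z-test statistic

print(f"difference in means: {diff:.4f}")
print(f"z = {z:.2f}, statistically significant at 5%: {abs(z) > 1.96}")
```

The z statistic comfortably clears the 1.96 cutoff, yet a 0.03 standard deviation difference would rarely justify any real-world action – statistical significance without practical significance.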

And there is such a thing as the Type I error (there is also a Type II error, which I’ll write about later).  A Type I error means that we falsely reject the hypothesis that there is no difference between the groups.  And what are the odds of that?  Higher than you might think.  A statistically significant result may well be the product of random chance, not a real difference.

Many studies employ multiple statistical tests, sometimes numbering in the hundreds.  If you do a hundred statistical tests, and you find five that give you statistically significant results at the 95 percent level, what do you conclude?  Many researchers breathe a sigh of relief and exclaim “Publish!”  Because in many cases their jobs depend on publishable results.
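The multiple-testing trap is easy to demonstrate.  In this sketch (again, my own toy simulation), the control and experimental groups are drawn from the same population every time, so there is never a real difference – yet roughly five of a hundred tests will still come back “significant” at the 95 percent level.

```python
import random
import statistics

random.seed(42)

def significant(n=50):
    """Run one experiment: draw two groups from the SAME N(0, 1)
    population and apply a two-sided z-test at the 5 percent level."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (1 / n + 1 / n) ** 0.5    # known sigma = 1
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96

# 100 tests, all on pure noise -- expect about 5 false positives.
false_positives = sum(significant() for _ in range(100))
print(f"{false_positives} of 100 null tests were 'significant' by chance")
```

Run enough tests on pure noise and a few will always “succeed” – which is exactly why five hits out of a hundred tests is nothing to publish.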

While we can use statistics and mathematics in general to help us understand complex problems, we have to mentally separate the narrow mathematical interpretations from the broader solution and policy ones.  But many researchers, either through ignorance or because publishing behooves their careers, fail to make that separation.  And the lay public and policy makers bow to the cult of statistical significance, making things worse rather than better.

What Will History Write of Today’s Robber Barons? December 10, 2019

Posted by Peter Varhol in Technology and Culture.

I love studying history, and have not done anywhere near as much of it as I would like.  In my public school youth, I studied Andrew Carnegie, John Rockefeller, George Westinghouse, and others that came to be known collectively as the “robber barons”.  They and others became symbols of the abuse of common labor (my ancestors), even as they drove the expansion and modernization of the United States.

Now, in my formative years, we had massive industrial companies that became the leaders of the world – US Steel, Standard Oil (granted, that was the Rockefellers, but it persists through successor companies), and others.  In those years, I realized that while they were titans of industry, they also created a very polluted world.  I recognize that we didn’t necessarily know better as a society, but I also recognize that these industrialists took advantage of the Commons for their own extreme financial benefit.

(As a Boy Scout at perhaps 13, my troop was taken on a tour, arranged through a parent, of the St. Joseph zinc smelting plant in Shippingport, PA.  Yes, the same town that had the first commercial atomic power plant in the US, and where I was a Pinkerton night watchman at 20.  We had to climb a 400-foot open staircase to get into the plant.  I, with a fear of open heights at the time, held on to the railing for dear life.  In doing so, I scraped several inches of industrial grit, likely hazardous, off that handrail.)

Today, we have our own set of robber barons – Bill Gates, Steve Jobs, Larry Ellison, Larry Page, Mark Zuckerberg, and others.  They too broke new ground in innovation, in definitely cleaner industries, yet may not be leaving the world in better shape than it was.  I use the term robber barons non-judgmentally, but rather in comparison to similar figures in the past.

Fifty years from now will bring a new generation of robber barons.  I will be dead at that point, as will most or all of the current generation (Bill Gates is two years older than I, and Steve Jobs has already passed), but I’m curious as to how history will treat them.  A lot depends on what the world looks like at that time, but I would guess that their companies would represent legacy industries, with entirely new ways of computing coming into being.

Their wealth is clearly generational, although Gates has said that he and Melinda will give away 95 percent of their wealth during their lives.  I’ve had recent exposure to the Bill and Melinda Gates Foundation, and I can believe they are serious about it.  I think the foundational aspects of traditional Microsoft computing (Windows, Office) will be eclipsed by then, but its approach to cloud computing may be adaptable enough to last fifty years.

Databases are a different story.  There will always be the need to store, access, and process data, but the technology has become fragmented by different approaches – SQL, NoSQL, time series, graph, and so on.  Search will also likely change and fragment, and may no longer be driven by an advertiser model.  In fact, search may become a public utility.  Facebook is fundamentally flawed in its current iteration, a flaw Zuckerberg refuses to acknowledge, and will almost certainly be supplanted by other ways to connect with people.

The question of whether the current generation of robber barons did fundamental good is more difficult to answer.  Like most robber barons of the past, they attempted to maximize their net worth, sometimes through questionable means.  They are cleaner than past industrial corporations, yet still have significant deficiencies (equality, advocacy for modern employment standards) that they will likely not account for anytime soon.  And they may influence laws and regulations in undesirable ways.

So today’s robber barons will likely be held in the same mixed contempt as those in the past.  While the unbiased judgment may be a bit more nuanced, they will be held to the standards of the future, not of today.  And the future will not look kindly on some of the actions that may seem acceptable today.

When AI is the Author December 9, 2019

Posted by Peter Varhol in Algorithms, Machine Learning.

I have been a professional writer (among many other things) since 1988.  I’ve certainly written over a couple thousand articles and blog posts, and in my free time have authored two fiction thrillers, with several more on the way.  I have found over the years that I write fast, clearly, and when called for, imaginatively.

Now there are machine learning systems that can do all of that.

I knew that more recent ML systems had been able to automatically write news articles for publication on news sites.  At the beginning of this year, OpenAI, a research foundation committed to ethical uses of AI, announced that they had produced a text generator so good that they weren’t going to release the trained model, fearful that it would be used to spread fake news.  They instead released a much smaller model for researchers to experiment with, as well as a technical paper.

Today, though, they are using the same GPT-2 model to create creative works such as fiction and poetry.  I started wondering if there was nothing machine learning could not do, or at least mimic.

But there’s a catch.  The ML systems have to be given a starting point.  That starting point seems to be the beginning of the story, which has to be provided to these systems.  Once they have a starting point, they can come up with follow-on sentences that can be both creative and factual.

But they can’t do so without that starting point.  They can provide the middle, and perhaps the end, although I suspect that the end would have to be tailored toward a particular circumstance.
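To make that concrete with a deliberately tiny example (my own toy, orders of magnitude simpler than GPT-2): even a bigram Markov chain can produce follow-on text, but only once you hand it a starting point.

```python
import random
from collections import defaultdict

random.seed(7)

# A toy bigram Markov chain -- nothing like GPT-2 in sophistication,
# but it illustrates the same constraint: generation must be seeded
# with a starting point, and everything after is follow-on text.
corpus = (
    "the pilot flew the plane and the pilot landed the plane "
    "then the pilot wrote about the flight and the long landing"
).split()

model = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    model[prev].append(word)

def generate(start, length=8):
    """Produce follow-on words from a required starting word."""
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:          # dead end: no observed continuation
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Remove the `start` argument and the model has nowhere to begin – the same structural limitation, in miniature, as needing to feed GPT-2 the opening of the story.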

Decades ago, I read a short story titled “In Medias Res”.  That refers to a writing technique where the actual writing begins in the middle of the story, and fills in the backstory through several possible techniques, such as flashback.  In this case, however, it was about a commercial writer who could only write the middle of stories.  Other writers, who could see their beginning and end, but not the middle, hired him to write the middle of their stories.  He was troubled that he always came in the middle, and was incapable of writing a complete story.

While I’m guessing that ML techniques will eventually be good enough to compose much of our news and fiction, the advances of today are only capable of in medias res.  So I will continue writing.

About Tweeting December 6, 2019

Posted by Peter Varhol in Software platforms, Technology and Culture.

I’ve done a presentation generically entitled “Talking to People: The Forgotten Software Tool.”  The time I gave it at DevOps Days Berlin 2016 was probably the closest I’ve ever come to getting a standing ovation.  The thesis of the talk, based in part on MIT professor Sherry Turkle’s book Reclaiming Conversation, is that we as a society increasingly prefer digital means of communication to physical ones.  For generations raised with smartphones, tablets, and (legacy) computers, face-to-face communication can be a struggle.

I am not a digital native when it comes to broadcasting my thoughts and activities to the rest of the world.  I have held jobs where tweeting, for example, was a job requirement in order to help build the company brand or get more page views.  I did so, even willingly, but my efforts were not nearly as voluminous as those of some of my colleagues.

I have to remember to tweet, or blog.  While I have been an introverted person throughout my life, decades ago I reluctantly recognized the need to reach out to others.  At the time, all of that was face to face, because digital connections didn’t exist.

Now there are so many ways to communicate without looking at someone.  I’ve had a number of video calls lately using Zoom, often with people who are using dual monitors.  They have the video showing on the large screen to one side, and look at that screen, and seemingly away from me.  It was funny, once I realized what was happening.

By itself, that’s not a bad thing, and in fact those with dual screens may not even realize they’re not really looking at you.  But it does damage the trust you try to build by looking someone in the eye, and the reading of nonverbal communication is degraded even further with many digital forms of communication.

And tweeting is one of them.  Because we don’t know many (if any) of the people who are reading our tweets, and don’t have to look them in the eye, we don’t feel obliged to be respectful (witness many of Elon Musk’s more bizarre tweets).  That’s true even for those of us whose tweets are almost entirely professional.

Speech is not free.  We pay for it with everything we say.  Our reputations, the trust other people have in us, our ability to communicate effectively, and even our exposure to lawsuits all depend on not using Twitter as an attack platform.

Okay, here’s my solution.  Twitter needs to be banned from normal discourse.  In fact, Twitter is incapable of normal discourse.  It should be entirely a professional platform.  I realize that this isn’t going to happen, but Twitter is too dangerous to our means of communication to simply dismiss.

Capricorn One and Other Conspiracies November 18, 2019

Posted by Peter Varhol in Technology and Culture.

Capricorn One was a 1977 movie about a purported Mars space mission.  It turned out that the three astronauts were escorted off the spacecraft just before liftoff, and were told that the spacecraft was not capable of supporting life during the journey.  Months (more like two years) later, the empty spacecraft malfunctioned on the way home, and the astronauts realized that, of necessity, they must die to maintain the illusion of success.  Thanks to an intrepid reporter and one astronaut who briefly made it to freedom, the conspiracy to fake the Mars landing was exposed.

It was a really good conspiracy movie.  But there were many situations where in real life the conspiracy could have been exposed.  Essentially, it took all of NASA to maintain the illusion of a successful mission up until the point the capsule malfunctioned.

That simply won’t happen, in this or any secret conspiracy.  In Capricorn One, it couldn’t have happened like this, because much of NASA would have known.  Yes, there was a NASA worker who thought something was suspicious, but in reality it would be far more than a single control room worker.  And those workers can simply speak up, without threat of an untimely death.  There is absolutely nothing motivating them to keep the conspiracy secret.

That’s not to say that people won’t try to conspire.  It simply means that they are all doomed to failure.

So this leads us to the Flat Earth conspiracy.  You can personally see the Earth curve at altitude, and the images of Earth from space are convincing to all but the conspiracy-minded.  Yet people, for reasons unknown, are convinced that the Earth is flat.

I’m sorry, but the physics of rotation and gravity and the like are pretty unambiguous (some apparently claim that because they haven’t seen gravity, they are convinced it doesn’t exist; whatever that means).  Apparently membership in the Flat Earth movement is growing rapidly.  It’s really a shame that people, even seemingly intelligent people, don’t have a fundamental grasp of science.

The psychology behind this and other conspiracies is fascinating.  According to the experts, conspiracy theories are a way for people to believe they are in control of events.  Yet it’s not at all clear to me how believing in a massive conspiracy puts people in control.  So in reality, we have people rejecting hard science because, well, because they want to.  And that’s not a reasoning process that will help them through life at all.