
Automation Can Be Dangerous December 6, 2018

Posted by Peter Varhol in Software development, Software tools, Strategy, Uncategorized.

Boeing has a great way to prevent aerodynamic stalls in its 737 MAX aircraft.  A set of sensors uses airspeed and angle of attack to determine that the aircraft is about to stall (that is, lose lift on its wings), and the system automatically pitches the nose down to recover.

Malfunctioning sensors on Lion Air Flight 610 apparently caused the aircraft’s nose to pitch sharply down absent any indication of a stall.  Preliminary analysis indicates that the pilots were unable to overcome the nose-down attitude, and the aircraft dove into the sea.  Boeing’s solution to this automation fault was explicit, even if its documentation wasn’t: turn off the system.

And this is what software developers, testers, and their bosses don’t get.  Everyone thinks that automation is the silver bullet: automation is inherently superior to manual testing; automation will speed up testing, reduce costs, and increase quality; we must have more automation engineers, and everyone who isn’t one should just go away now.

There are many lessons here for software teams.  Automation is great when consistency in operation is required.  Automation will execute exactly the same steps until the cows come home.  That’s a great feature to have.

But many testing activities are not at all about consistency in operation.  In fact, relatively few are.  It would be good for smoke tests and regression tests to be consistent.  Synthetic testing in production also benefits from automation and consistency.
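For those consistent cases, an automated check is cheap to write and pays for itself quickly.  Here is a minimal smoke-test sketch in Python using pytest conventions (the base URL and endpoints are hypothetical, not from any particular product):

import requests

BASE_URL = "https://example.com"  # hypothetical application under test

def test_health_endpoint_responds():
    # A smoke test asks one question, the same way, every run:
    # is the application alive and answering?
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_login_page_loads():
    # A second quick check that a core page still renders.
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200

Run under pytest, these execute identically every time, which is exactly the consistency that smoke and regression testing want.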

Other types of testing?  Not so much.  The purpose of regression testing, smoke testing, and testing in production is to validate the integrity of the application and to make sure nothing bad is currently happening.  Those are valid goals, but they are only the start of testing.

Instead, testing is really about individual users and how they interact with an application.  Every person does things on a computer just a little differently, so it behooves testers to do the same.  This isn’t harking back to the days of weeks or months of testing, but rather acknowledging that the purpose of testing is to ensure an application is fit for use.  Human use.

And sometimes, whether through fault or misuse, automation breaks down, as in the case of the Lion Air 737.  Teams need to know what to do when that happens.

Now, when you are deploying software perhaps multiple times a day, it seems like it can take forever to sit down and actually use the product.  But remember the thousands of users who depend on the software and the effort that goes into it.

In addition to knowing when and how to use automation in software testing, we also need to know when to shut it off, and use our own analytical skills to solve a problem.  Instead, all too often we shut down our own analytical skills in favor of automation.
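In software terms, every piece of automation deserves the equivalent of Boeing’s cutoff switch: a documented, obvious way to disable it and hand control back to a person.  A minimal sketch of the idea (the environment flag and deployment routine are invented for illustration):

import os

def deploy(build):
    # Check a manual kill switch before taking any automated action.
    if os.environ.get("AUTOMATION_ENABLED", "true").lower() != "true":
        print(f"Automation disabled; routing build {build} to a human for review.")
        return
    print(f"Deploying build {build} automatically...")
    # ... actual deployment steps would go here ...

deploy("1.4.2")

The point is not the few lines of code; it is that the off switch exists, is tested, and is documented before anyone needs it.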


Does Social Media Need to Go? October 27, 2018

Posted by Peter Varhol in Technology and Culture, Uncategorized.

I have been in tech publishing since 1988.  Full-time work – as an editor, senior editor, executive editor, editor in chief, and editorial director – has encompassed, um, perhaps nine years of that.  I’ve freelanced in the interim.

In that time, I’ve learned something about publishing in general.  Publishers have a certain responsibility to their readers.  That responsibility, in a nutshell, is to curate content in an honest way, and to present that content as representative of what the publisher stands for.  Publishers stand by what is on their platform.

Social media emphatically does not curate.  Not only that, but it celebrates the fact that it does not curate.  Instead, it says that it cannot possibly curate, and it requires its users to self-curate.  But, of course, it doesn’t provide users a reliable means of reporting content that needs curating.

Facebook and other social media platforms have accepted the mantle of publishers without accepting the responsibility of being publishers.  That has made them enormously profitable.  In fact, they even take our intimate personal data and sell it to any buyer.  We seem to be okay with that.

I am emphatically not.  Today, someone has published their murderous intentions on social media, then carried them out.  How can we be okay with this?

Unless you disavow social media right now, I will argue that you are complicit in murder and other heinous crimes.  Are you okay with that?

Health Care Doesn’t Care October 7, 2018

Posted by Peter Varhol in Uncategorized.

A couple of incidents last week reminded me that while the U.S. might have the “health” part down pretty well, it is very much lacking in the “care” part.  The first incident involved a hospital appointment I have later this week.  On Thursday I received an automated text asking me to confirm the appointment.

On the surface, this sounds like a good application of technology.  However, the text told me to respond by 9 PM.  I happened to be several time zones away, busy at a conference, and didn’t see the text until after 9 PM EDT.  I responded anyway, and my response was rejected.  And the text contained no phone number to call to confirm my appointment.  I hope they haven’t cancelled it, as it took me about two months to get this appointment, but I have to wait until Monday, when the office is staffed, to find out.

Second, also on Thursday, I received a call from another doctor’s office and was told that I needed to consult with the doctor before renewing a prescription.  I explained that I was traveling almost every day between now and mid-November (about six weeks).  She repeated that my prescription wouldn’t be renewed until I saw the doctor.  I asked if I could schedule an appointment for mid-November.  No, I was told; that schedule wasn’t available yet.

I’m not unduly concerned, as the condition this prescription treats is much better, and I would only need to take it occasionally.  But here is the problem.  Our health care system is concerned only about itself, not its customers (patients).  The hoops they make their customers jump through are almost entirely for their own convenience.  In my stories above, there is no apparent rationale for requiring a response within four hours, and failing to provide any other contact information is simply criminal.  While I appreciate that a doctor might want to consult on my condition and make adjustments to the prescription, there is no earthly reason why their schedule does not extend six weeks into the future.

And regrettably, there is no alternative for customers except to deal with the system.

Let me also say that I have encountered a number of fine and caring individual health care professionals.  It’s not the individuals that are the problem (for the most part); it is the system.  Now, you may argue that the people are the system, and I might agree with you.  But most of the health care professionals I talk to feel helpless to change it.

Both health care professionals and their customers have to rise up in revolution and take control.  It is the only thing we can do.  Together, we can reinsert the “care” into health care.

This Year I’m Doing 9/11 Different September 10, 2018

Posted by Peter Varhol in Uncategorized.

I lost two coworkers on 9/11, one on each of the planes from Boston to LA that were hijacked and flown into the World Trade Center towers.  I and others in the office have vivid, permanent memories of the events of that day.  I’ve written about that in the past.  Now I have something that I think is better, and more useful.

I have registered for the 9/11 Heroes Run.  It honors military, first responders, and victims of 9/11.  There are nationwide runs, both 5K and 10K, but there are none local to me in New England, so I have made it a virtual run.  I can run any race between September 1 and October 14 and it will count.  I also made a small donation to the charity.  For the first time, I think I might help make a difference on this day.

Upon registering, you are asked to designate your hero.  I chose Graham Berkeley, one of my fallen coworkers.  I didn’t know Graham well, but he seemed to be a unique and multi-talented individual.

I know it’s an overused idiom, but Never Forget.

Get Thee to a Spaceport! August 13, 2018

Posted by Peter Varhol in Uncategorized.

To many of my generation, the United States has a single space launch facility, at Cape Canaveral, Florida.  Thanks to my time in the Air Force, I know of at least three others – Wallops Island, Virginia (a NASA complex); Vandenberg AFB, California; and the Kodiak Launch Complex on Kodiak Island, Alaska.  Thanks to my personal interest in space exploration, I know of two more – Spaceport America, in Truth or Consequences, New Mexico, and the Blue Origin launch complex near Van Horn, Texas.

But wait!  There are more.  Elon Musk has his own with SpaceX, of course, on the Texas coast (although SpaceX and Blue Origin use Cape Canaveral for operational launches right now).  Oddly, there are also the Oklahoma Spaceport; Cecil Spaceport in Jacksonville, Florida; and the Mojave Air and Space Port in the California desert.  The newest licensed spaceport is at Ellington Field in Houston, although it cannot yet support launches or recoveries.

Complicated?  Yeah.

The dynamics of achieving orbit are complex but, like any physics problem, consistent.  There is a small but distinct advantage in launching close to the Equator (at least for eastward launches), in effect using the Earth’s rotation to give a rocket a head start toward orbital velocity.  Probably the most efficient site is the Guiana Space Center, in French Guiana, within about five degrees of the Equator and used by the European Space Agency for many manned and unmanned launches.  Tyura Tam (Baikonur), in Kazakhstan, sits much farther north, at about 46 degrees latitude; once a part of the larger Soviet Union, it is now leased by the Russians for their launches.
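The size of that advantage is easy to estimate: a site’s free eastward boost is the Earth’s equatorial rotation speed, roughly 465 meters per second, scaled by the cosine of the site’s latitude.  A quick sketch in Python (the latitudes are approximate):

import math

EQUATORIAL_SPEED = 465.1  # Earth's surface rotation speed at the Equator, m/s

def rotation_boost(latitude_deg):
    # Free eastward velocity contributed by the Earth's rotation, in m/s.
    return EQUATORIAL_SPEED * math.cos(math.radians(latitude_deg))

for site, lat in [("Guiana Space Center", 5.2),
                  ("Kennedy Space Center", 28.5),
                  ("Tyura Tam / Baikonur", 45.9)]:
    print(f"{site}: {rotation_boost(lat):.0f} m/s")

The Guiana site keeps nearly all 465 m/s; Baikonur keeps only about 325 m/s, which is why latitude matters to launch economics.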

Here in the US, the Kennedy Space Center is used for all manned launches (regrettably, none over the last several years).  It launches to the east, over the Atlantic Ocean, to minimize the chance of failures over populated areas.  Vandenberg and Kodiak both launch into polar orbits, once again over the ocean.

There have been many other sites around the world that have been used for space launches.  China, Japan, and India have all launched unmanned satellites into orbit, and many other countries have designated spaceports.  Certainly over one hundred sites worldwide have either launched vehicles or are capable of doing so.

That raises the question: why so many?  The manned space program has certainly garnered the lion’s share of popular attention, but hundreds of satellites are launched into space every year.  While many of these are launched from Cape Canaveral or Vandenberg, the volume is simply too great for those two sites alone.  Navigation, geophysical and environmental monitoring (including agriculture), Internet access, and of course military applications are just a few of the uses for satellites today.

In an era where the US has largely depended upon commercial firms to deliver satellites and other payloads, the proliferation of US spaceports both lowers costs and gets satellites in orbit faster.  It also helps develop an industrial base in nontraditional parts of the country.

The majority of US spaceports today are spaceports in name only; few if any launches occur outside of Cape Canaveral/Kennedy, Wallops Island, and Vandenberg.  But as the need for orbital launch capabilities heats up, some of the others are getting in on the ground floor.

Memorial Day 2018 May 28, 2018

Posted by Peter Varhol in Uncategorized.

I am a veteran.  I served six years as an Air Force officer, separating as a captain.  I wanted to fly; I had my private ticket at 17, but lacked the perfect eyesight needed to fly in the military.  So I flew a desk, got two master’s degrees, and eventually got past the stage of my life where flying was important.

I was in San Antonio this past weekend, on a riverboat cruise, when the guide asked how many on the tour were active duty or veterans.  Despite the fact that San Antonio stands on the pillars of multiple Army and Air Force bases, only three of the 50 or so raised their hands (and one of them was a just-graduated ROTC cadet in uniform).  I was at a DevOps conference in Nashville last fall, in a room of 300 mostly young people, where the Iraqi War vet organizer asked how many were veterans.  My hand went up.  Period.

I served my country honorably (the DD-214 says so), but thinking back, I could have done so better.  I may not have been motivated by patriotism, but over the years that initial service has made me a different, and I think better, person.

We’ve had stupid wars (Spanish-American War, anyone?) and we’ve had unpopular wars (Vietnam certainly takes the cake here), and will continue to do so.  That is not for those who have chosen to serve to decide, although as human beings, many I’m sure have had opinions in the matter.  That’s what veterans have helped to protect, current events notwithstanding.

Service to our country would do all of us good.  It does not mean love, or patriotism; rather, it means that we recognize that we could not have our freedoms without sacrifice.  For most of us in the military, the sacrifices are minimal – a regimented lifestyle, a nod to authority, restrictions on our time and efforts.  But service doesn’t have to be in the military; all adults should seek out any opportunities to preserve our freedoms and ideals.

Those who have fallen in battle made the ultimate sacrifice.  I’m pretty sure that none intended to die for their country, but they did, and today is the day we remember them.  We may object to war in general, or government in general, or a specific war or government, but those who have died don’t deserve to be in that discussion.  So for one day, put aside politics and beliefs, and remember those who have died so that we could have the rights and privileges that we do.  Thank you.

More on AI and the Turing Test May 20, 2018

Posted by Peter Varhol in Architectures, Machine Learning, Strategy, Uncategorized.

It turns out that most people who care to comment are, to use the common phrase, creeped out at the thought of not knowing whether they are talking to an AI or a human being.  I get that, although I don’t think I myself am bothered by the notion.  After all, what do we know about people during a casual phone conversation?  Many of them probably sound like robots to us anyway.

And this article in the New York Times notes that Google was only able to accomplish this feat by severely limiting the domain in which the AI could interact – in this case, making dinner reservations or a hair appointment.  The demonstration was still significant, but it isn’t a truly practical application, even within a limited domain space.

Well, that’s true.  The era of an AI program interacting like a human across multiple domains is far away, even with the advances we’ve seen over the last few years.  And this is why I even doubt the viability of self-driving cars anytime soon.  The problem domains encountered by cars are enormously complex, far more so than any current tests have attempted.  From road surface to traffic situation to weather to individual preferences, today’s self-driving cars can’t deal with being in the wild.

You may retort that all of these conditions are objective and highly quantifiable, making it possible to anticipate and program for them.  But we come across driving situations almost daily that contain new elements, which must be instinctively integrated into our body of knowledge and acted upon.  Computers certainly have the speed to do so, but they lack a learning framework good enough to identify critical data, integrate it into their neural networks, and respond in real time.

Author Gary Marcus argues that this means the deep learning approach to AI has failed.  I laughed when I came to the solution proposed by Dr. Marcus – that we return to the backward-chaining, rules-based approach of two decades ago.  This was what I learned during much of my graduate studies, and it was largely given up on in the 1990s as unworkable.  Building layer upon layer of interacting rules was tedious and error-prone, and it required an exacting understanding of just how backward chaining worked.
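For readers who never lived through that era, backward chaining starts from a goal and works backward: find a rule whose conclusion matches the goal, then recursively try to prove that rule’s premises.  A toy sketch (the rules and facts are invented for illustration):

# Hypothetical rule base: each conclusion maps to lists of premises.
RULES = {
    "can_fly": [["is_bird", "not_penguin"]],
    "is_bird": [["has_feathers"], ["lays_eggs", "has_wings"]],
}
FACTS = {"has_feathers", "not_penguin"}

def prove(goal):
    # A known fact proves itself.
    if goal in FACTS:
        return True
    # Otherwise, try each rule concluding the goal and
    # recursively prove all of its premises.
    return any(all(prove(p) for p in premises)
               for premises in RULES.get(goal, []))

print(prove("can_fly"))  # True: has_feathers -> is_bird, plus not_penguin

With a dozen rules this is charming; with ten thousand interacting rules, the tedium and fragility described above set in quickly.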

Ultimately, I think that the next generation of AI will incorporate both approaches: a neural network to process data and come to a decision, and a rules-based system to provide the learning foundation and structure.

Google AI and the Turing Test May 12, 2018

Posted by Peter Varhol in Algorithms, Machine Learning, Software development, Technology and Culture, Uncategorized.

Alan Turing was a renowned British mathematician who worked in cryptography at Bletchley Park during World War II.  He was an early computer pioneer, and today is probably best known for the Turing Test, a way of distinguishing between computers and humans (hypothetical at the time).

More specifically, the Turing Test was designed to see if a computer could pass for a human being, and was based on having a conversation with the computer.  If the human could not distinguish between talking to a human and talking to a computer, the computer was said to have passed the Turing Test.  No computer has ever done so, although Joseph Weizenbaum’s Eliza psychotherapist program in the 1960s was pretty clever (think Carl Rogers, whose nondirective style it mimicked).
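Eliza’s trick was not understanding but simple pattern matching with canned responses, which is why it could seem clever within a narrow therapeutic script.  A toy sketch of the technique (these patterns are invented, not Weizenbaum’s originals):

import re

# A few Rogerian-style rewrite rules: pattern -> response template.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am afraid of flying"))
# -> How long have you been afraid of flying?

A few dozen such rules were enough to convince some 1960s users they were talking to a therapist; they were not remotely enough to pass the Turing Test.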

The Google AI passes the Turing Test.  https://www.youtube.com/watch?v=D5VN56jQMWM&feature=youtu.be.

I’m of two minds about this.  First, it is a great technical and scientific achievement.  This is a problem that for decades was thought to be intractable.  Syntax has definite structure and is relatively easy to parse.  While humans seem to understand language semantics instinctively, there are ambiguities that can only be learned through training.  That’s where deep learning through neural networks comes in.  And to respond in real time is a testament to today’s computing power.

Second, do we really need this because we don’t want to have phone conversations?  Of course, the potential applications go far beyond calling to make a hair appointment.  For a computer to understand human speech and respond intelligently to the semantics of human words requires significant training in human conversation.  That certainly implies deep learning, along with highly sophisticated algorithms.  It can apply to many different types of human interaction.

But no computing technology is without tradeoffs, and intelligent AI conversation is no exception.  I’m reminded of Sherry Turkle’s book Reclaiming Conversation.  It posits that people are increasingly afraid of having spontaneous conversations with one another, mostly because we cede control of the situation.  We prefer communications where we can script our responses ahead of time to conform to our expectations of ourselves.

Having our “AI assistant” conduct many of those conversations for us seems like simply one more step in our abdication as human beings, unwilling to face other human beings in unscripted communications.  Also, it is a way of reducing friction in our daily lives, something I have written about several times in the past.

Reducing friction is also a tradeoff.  It seems worthwhile to make day to day activities easier, but as we do, we also fail to grow as human beings.  I’m not sure where the balance lies here, but we should not strive single-mindedly to eliminate friction from our lives.

5/14 Update:  “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding “ummm” and “aaah” to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing…As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay.” – Zeynep Tufekci, Professor & Writer