
Memorial Day 2018 May 28, 2018

Posted by Peter Varhol in Uncategorized.

I am a veteran.  I served six years as an Air Force officer, separating as a captain.  I wanted to fly; I had my private ticket at 17, but lacked the perfect eyesight needed to fly in the military.  So I flew a desk, got two master’s degrees, and eventually got past the stage of my life where flying was important.

I was in San Antonio this past weekend, on a riverboat cruise, when the guide asked how many on the tour were active duty or veterans.  Despite the fact that San Antonio stands on the pillars of multiple Army and Air Force bases, only three of the 50 or so raised their hands (and one of them was a just-graduated ROTC cadet in uniform).  I was at a DevOps conference in Nashville last fall, in a room of 300 mostly young people, where the Iraqi War vet organizer asked how many were veterans.  My hand went up.  Period.

I served my country honorably (the DD-214 says so), but thinking back, I could have done so better.  I may not have been motivated by patriotism, but over the years that initial service has made me a different, and I think better, person.

We’ve had stupid wars (Spanish-American War, anyone?) and we’ve had unpopular wars (Vietnam certainly takes the cake here), and will continue to do so.  That is not for those who have chosen to serve to decide, although as human beings, many I’m sure have had opinions on the matter.  That’s what veterans have helped to protect, current events notwithstanding.

Service to our country would do all of us good.  It does not mean love, or patriotism; rather, it means that we recognize that we could not have our freedoms without sacrifice.  For most of us in the military, the sacrifices are minimal – a regimented lifestyle, a nod to authority, restrictions on our time and efforts.  But service doesn’t have to be in the military; all adults should seek out any opportunities to preserve our freedoms and ideals.

Those who have fallen in battle made the ultimate sacrifice.  I’m pretty sure that none intended to die for their country, but they did, and today is the day we remember them.  We may object to war in general, or government in general, or a specific war or government, but those who have died don’t deserve to be in that discussion.  So for one day, put aside politics and beliefs, and remember those who have died so that we could have the rights and privileges that we do.  Thank you.


Alexa, Phone Joe May 28, 2018

Posted by Peter Varhol in Algorithms, Software platforms, Technology and Culture.

By now, the story of how Amazon Alexa recorded a private conversation and sent the recording off to a colleague is well-known.  Amazon has said that the event was a highly unlikely series of circumstances that will only happen very rarely.  Further, it promised to try to adjust the algorithms so that it didn’t happen again, but no guarantees, of course.

Forgive me if that doesn’t make me feel better.  Now, I’m not blaming Amazon, or Alexa, or the couple involved in the conversation.  What this scenario should be doing is radically readjusting our expectations of what a private conversation is.  Decades ago, there was a reality TV show called “Kids Say the Darndest Things.”  It turned out that much of the funniest material was what the children repeated from their parents.

Well, it’s not only our children that are in the room.  It’s also Internet-connected “smart” devices that can reliably digitally record our conversations and share them around the world.  Are we surprised?  We shouldn’t be.  Did we really think that putting a device that we could talk to in the room wouldn’t drastically change what privacy meant?

Well, here we are.  Alexa is not only a frictionless method of ordering products.  It is an unimpeachable witness listening to “some” conversations in the room.  Which ones?  Well, that’s not quite clear.  There are keywords, but depending on location, volume, and accent, Alexa may hear keywords where none are intended.

And it will decide who to share those conversations with, perhaps based on pre-programmed keywords.  Or perhaps based on an AI-type natural language interpretation of a statement.  Or, most concerning, based on a hack of the system.

One has to ask whether, in the very near future, Alexa may be subject to a warrant in a criminal case.  Guess what: it has already happened.  And unintended consequences will continue to occur, and many of those consequences will become more and more public.

We may well accept that tradeoff – more and different unintended consequences in return for greater convenience in ordering things.  I’m aware that Alexa can do more than that, and that its range of capability will only continue to expand.  But so will the range of unintended consequences.

How Do We Learn Languages? May 26, 2018

Posted by Peter Varhol in Technology and Culture.

Aaron Schlossberg’s anti-Spanish rant is darkly amusing in its naivety, shocking in its explicitness.  Would I have been subject to the same treatment if found speaking English in a restaurant in Spain?  I don’t think so.

My grandparents came from Bratislava, in what is today the Republic of Slovakia, but at the time was Austria-Hungary.  All four of them are listed on the Ellis Island rolls (to be fair, I have only found three of them, but both spellings and the past are vague at best).  And yes, apparently Andy Warhol was a second cousin or something (thanks, Karen); names were pronounced and spelled differently at different times.

My parents spoke some Slovak, but rather than pass on the language to their children, used it to hide what they were saying from the children.  Today I regret this.  In general, I wish I had had the opportunity to learn different languages growing up.

Many of my school classmates were children or grandchildren of immigrants, mostly from central, southern, and eastern Europe.  There was one classmate I remember who was a very good student and spoke good English with a slight accent.  I learned that English was his second language, and that only Ukrainian was spoken in his household.

I travel quite a bit today.  I took Spanish in high school (now 40 years ago), and having been in Spain once or twice a year for several years now, my understanding of Spanish is coming along nicely.  I know a few words of German, and gave my twelve words of Russian a workout in Kiev two weeks ago (and even learned a word or two of Ukrainian).

As Hiro Protagonist noted in Neal Stephenson’s wonderful grunge novel Snow Crash, America in the near future is good at only four things – music, movies, microcode, and fast pizza delivery.  But it is precisely those things (I will also add aviation) that make the English language known throughout the world.

So how do we learn other languages?  We learn through practice, pure and simple.  Years ago, my sister took a degree in French, never used it, and today cannot remember a single word.  I meet people in Europe who know three or four languages well, because they can travel two hundred miles and hear several different languages.  Switzerland has four national languages.

We don’t have an official language, English or otherwise.  Let’s keep it like that, and let’s hear and practice other languages in the United States.  It will make us better citizens.

More on AI and the Turing Test May 20, 2018

Posted by Peter Varhol in Architectures, Machine Learning, Strategy, Uncategorized.

It turns out that most people who care to comment are, to use the common phrase, creeped out at the thought of not knowing whether they are talking to an AI or a human being.  I get that, although I don’t think I myself am bothered by such a notion.  After all, what do we know about people during a casual phone conversation?  Many of them probably sound like robots to us anyway.

And this article in the New York Times notes that Google was only able to accomplish this feat by severely limiting the domain in which the AI could interact – in this case, making dinner reservations or a hair appointment.  The demonstration was still significant, but isn’t a truly practical application, even within a limited domain space.

Well, that’s true.  The era of an AI program interacting like a human across multiple domains is far away, even with the advances we’ve seen over the last few years.  And this is why I even doubt the viability of self-driving cars anytime soon.  The problem domains encountered by cars are enormously complex, far more so than any current tests have attempted.  From road surface to traffic situation to weather to individual preferences, today’s self-driving cars can’t deal with being in the wild.

You may retort that all of these conditions are objective and highly quantifiable, making them possible to anticipate and program for.  But we come across driving situations almost daily that have new elements that must be instinctively integrated into our body of knowledge and acted upon.  Computers certainly have the speed to do so, but they lack a good learning framework to identify critical data and integrate it into their neural networks to respond in real time.

Author Gary Marcus notes that what this means is that the deep learning approach to AI has failed.  I laughed when I came to the solution proposed by Dr. Marcus – that we return to the backward-chaining rules-based approach of two decades ago.  This was what I learned during much of my graduate studies, and was largely given up on in the 1990s as unworkable.  Building layer upon layer of interacting rules was tedious and error-prone, and it required an exacting understanding of just how backward chaining worked.
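For readers who never worked with such systems, backward chaining is easy to sketch: start from a goal and recursively try to prove the premises of any rule that concludes it.  Here is a minimal illustration in Python; the rules and fact names are invented for the example, not drawn from any real expert system.

```python
# Minimal backward-chaining inference sketch.
# RULES maps a conclusion to alternative lists of premises that would
# establish it; FACTS holds statements known to be true.
# All rule and fact names here are purely illustrative.

RULES = {
    "stop_car": [["obstacle_ahead"], ["light_is_red"]],
    "obstacle_ahead": [["sensor_detects_object", "object_in_lane"]],
}
FACTS = {"sensor_detects_object", "object_in_lane"}

def prove(goal, facts, rules):
    """Return True if goal is a known fact, or if all premises of
    some rule concluding goal can themselves be proved."""
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        # Recurse backward from the goal into its premises.
        if all(prove(p, facts, rules) for p in premises):
            return True
    return False

print(prove("stop_car", FACTS, RULES))  # True, via obstacle_ahead
```

Even in this toy form, the tedium the post describes is visible: every layer of rules must be written by hand, and the chain of reasoning is only as good as the rule author’s foresight.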

Ultimately, I think that the next generation of AI will incorporate both types of approaches.  The neural network to process data and come to a decision, and a rules-based system to provide the learning foundation and structure.

Google AI and the Turing Test May 12, 2018

Posted by Peter Varhol in Algorithms, Machine Learning, Software development, Technology and Culture, Uncategorized.

Alan Turing was a renowned British mathematician who worked on cryptography at Bletchley Park during World War II.  He was an early computer pioneer, and today is probably best known for the Turing Test, a way of distinguishing between computers and humans (hypothetical at the time).

More specifically, the Turing Test was designed to see if a computer could pass for a human being, based on having a conversation with the computer.  If the human could not distinguish between talking to a human and talking to a computer, the computer was said to have passed the Turing Test.  No computer has ever done so, although Joseph Weizenbaum’s Eliza psychotherapist in the 1960s was pretty clever (think Carl Rogers).
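What made Eliza clever was how little machinery it needed: keyword patterns plus pronoun reflection.  Here is a minimal sketch of that style of program in Python; the patterns below are invented for illustration, not Weizenbaum’s actual script.

```python
import re

# Swap first-person words for second-person ones before echoing back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few illustrative Rogerian-style rules: a keyword pattern and a
# template that reflects the captured fragment back at the speaker.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # default non-committal prompt

print(respond("I feel trapped by my job"))
# Why do you feel trapped by your job?
```

The contrast with Google’s demonstration is the point: Eliza understood nothing and merely mirrored its interlocutor, while modern systems are trained on conversation at scale — yet both can momentarily pass for human.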

The Google AI passes the Turing Test.  https://www.youtube.com/watch?v=D5VN56jQMWM&feature=youtu.be.

I’m of two minds about this.  First, it is a great technical and scientific achievement.  This is a problem that for decades was thought to be intractable.  Syntax has definite structure and is relatively easy to parse.  While humans seem to understand language semantics instinctively, there are ambiguities that can only be learned through training.  That’s where deep learning through neural networks comes in.  And to respond in real time is a testament to today’s computing power.

Second, do we need this because we don’t want to have phone conversations?  Of course, the potential applications go far beyond calling to make a hair appointment.  For a computer to understand human speech and respond intelligently to the semantics of human words, it requires significant training in human conversation.  That certainly implies deep learning, along with highly sophisticated algorithms.  It can apply to many different types of human interaction.

But no computing technology is without tradeoffs, and intelligent AI conversation is no exception.  I’m reminded of Sherry Turkle’s book Reclaiming Conversation.  It posits that people are increasingly afraid of having spontaneous conversations with one another, mostly because we cede control of the situation.  We prefer communications where we can script our responses ahead of time to conform to our expectations of ourselves.

Having our “AI assistant” conduct many of those conversations for us seems like simply one more step in our abdication as human beings, unwilling to face other human beings in unscripted communications.  Also, it is a way of reducing friction in our daily lives, something I have written about several times in the past.

Reducing friction is also a tradeoff.  It seems worthwhile to make day to day activities easier, but as we do, we also fail to grow as human beings.  I’m not sure where the balance lies here, but we should not strive single-mindedly to eliminate friction from our lives.

5/14 Update:  “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding “ummm” and “aaah” to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing…As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay.” – Zeynep Tufekci, Professor & Writer 

The Golden Age of Databases May 10, 2018

Posted by Peter Varhol in Architectures, Software platforms, Software tools.

Let’s face it, to most developers, databases are boring and opaque.  As long as I can create a data object to call the database and bring data into my application, I really don’t care about the underlying structures.  And many of us have an inherent bias against DBAs, for multiple reasons.  Years ago, one of my computer science graduate students made the proclamation, “I’m an engineer; I write technical applications.  I have no need for databases at all.”

I don’t think this is true anymore, if it ever was.  The problem lay in the predominance of SQL relational databases.  The mathematical and logical foundation of relational databases is actually quite interesting, but from a practical standpoint, actually setting up a database, whether through E-R diagrams or another approach, is pretty cut and dried.  And maintaining and performance-tuning databases can often seem like an exercise in futility.

Certainly there were other types of databases and derivative products 20 or 30 years ago.  My old company, Progress Software, still makes a mint off its OpenEdge database and 4GL environment.  Sybase PowerBuilder was popular for at least two decades, and Borland Delphi still has a healthy following.  OLAP engines were available in the 1990s, working with SQL relational databases to quickly extract and report on relational data.

But traditional relational databases have disadvantages for today’s uses.  They are meant to be a highly reliable storage and retrieval system.  They tend to have the reliable part down pat, and there are almost universal means of reading, writing, modifying, and monitoring data in relational tables.

The world of data has changed.  While reliability and programming access of relational databases remains important in traditional enterprise applications, software has become essential in a wide variety of other areas.  This includes self-driving cars, financial trading, manufacturing, retail, and commercial applications in general.

Relational databases have been used in these areas, but they have limitations that are becoming increasingly apparent as we stress them in ways they weren’t designed for.  So instead we are seeing alternatives that specialize in a specific area of storage and retrieval.  For example, the NoSQL MongoDB, and MapReduce in general, make it possible to store large amounts of unstructured data, and to quickly search and retrieve data from that storage.  The open source InfluxDB provides a ready store for event-driven data, enabling applications to stream data based on a time series.  Databases such as FaunaDB can be used to implement blockchain.
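The difference in flavor is easy to see in a few lines of Python.  The sketch below contrasts a rigid relational schema (using the standard library’s sqlite3 in memory) with schemaless document storage in the NoSQL style — the plain dicts here merely stand in for what a store like MongoDB would hold; no actual MongoDB API is shown.

```python
import sqlite3

# Relational: every row must fit a schema declared up front.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES ('temp', 21.5)")
rows = conn.execute(
    "SELECT value FROM readings WHERE sensor = 'temp'"
).fetchall()
print(rows)  # [(21.5,)]

# Document style: each record is self-describing, and fields can vary
# from one document to the next without a schema migration.
documents = [
    {"sensor": "temp", "value": 21.5},
    {"sensor": "gps", "lat": 52.3, "lon": 4.9, "tags": ["vehicle"]},
]
temps = [d["value"] for d in documents if d.get("sensor") == "temp"]
print(temps)  # [21.5]
```

Adding a differently shaped record to the relational table would mean altering the schema first; the document list simply accepts it.  That flexibility is exactly what makes these stores attractive for unstructured or fast-evolving data, at the cost of the guarantees a fixed schema provides.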

All of these databases can run in the cloud, or on premises.  They tend to be easy to set up and use, and you can almost certainly find one to meet your specific needs.

So as you develop your next ground-breaking application, don’t find yourself limited by a relational database.  You’re not stuck in the same rut that you were ten years ago.  Take a look at what has to be called the Golden Age of databases.