
Education Should Make Us Uncomfortable May 3, 2021

Posted by Peter Varhol in Education, travel.

I value education more than almost anything else in life.  It is the mechanism by which we grow as individuals, and become better people and citizens.  There are few higher purposes in life.

In the seventh grade, I was required to memorize Lincoln’s Gettysburg Address.  I did so, but I didn’t know what it meant.  It was only through subsequent education that I understood just what every single word in the speech said.  Today, 50 years later, I can still recite it verbatim, and tell you a powerful story of the meaning behind it.

I am reading stuff today referring to critical race theory, and the 1619 Project, and about how various states are passing laws against teaching such controversial topics.

I don’t quite get what critical race theory and the 1619 Project are all about, but I can tell you one thing.  If they make you uncomfortable, you should learn about them.

I was made comfortable in my public school education in history, civics, social studies, and current events.  In retrospect, some of the approaches to learning those topics are laughable to me today.  I became uncomfortable when I learned that we sent Japanese immigrants to concentration camps, that we more or less massacred most Native Americans through the early 1900s, and much more.  And worse, that we vilified and murdered workers who were seeking decent pay and reasonable work hours in the 1900s.

So my understanding of my country became more nuanced.  I still think the United States has the best system and greatest democracy ever, while recognizing that there is substantial room for improvement.  That’s what ongoing education buys you – an appreciation of what you have, coupled with a desire to make it better.

Most people don’t go that route.  Instead, they like narratives that make Americans the unabashed good guys.  Well, guess what?  We weren’t, and we aren’t.

That doesn’t make us evil; it makes us human.  Like people everywhere, we do the best we can.  I have had the opportunity to meet and talk to people in many different countries, on their home turf.  In many cases (Spain under Franco, Ukraine over the past decade, Serbia in the late 1990s, Sweden in World War II), their national histories contain ugly periods.  Yet the people I talk to strive to make the future better.  Mykola, in Ukraine, was a Chernobyl baby, born within the restricted zone, and told me that he snuck out to participate in the massive anti-government protests of 2012.  “I supported my extended family and couldn’t get into trouble,” he told me.  “But I needed to be there.”

That’s what good, ordinary people do when confronted with their national history – they work to make things better.  If we know and appreciate our history, we can too.

Our COVID Messaging Continues to Be Awful May 1, 2021

Posted by Peter Varhol in Uncategorized.

This morning, I received a call from a distraught friend.  She was on her morning constitutional in a Boston suburb when she was accosted and harassed by two separate women.  For wearing a mask.

Yes, that’s right.  Both yelled at her that masks were no longer required outdoors, and that she shouldn’t be wearing one.

Well, first, that’s not true.  What is true is that the Centers for Disease Control has issued a recommendation to the effect that fully vaccinated individuals can choose not to wear a mask outdoors (my friend is vaccinated, but still chooses to wear a mask, as do I).  However dubious that recommendation, neither Massachusetts nor my home state of New Hampshire has lifted its outdoor mask mandate.  Those mandates are what have the force of law, not a recommendation by the CDC.

This brings me to my fundamental point.  Our national and state governments continue to do a criminally poor job of communicating facts about COVID, testing, and vaccination.  For the last year, mixed messages have emerged from different sources on just about every aspect of the pandemic, leaving honest people lost and confused about just what to believe.  It certainly happened under the Trump administration, and it continues under Biden.  Folks, if you want society to survive this, get your messages clear, simple, and consistent now.

I have a secondary point too.  When messages are mixed and contradictory, people will choose the ones most convenient for their purposes.  That means that someone can say that you don’t have to wear masks outdoors, and be misleading rather than lying.  And, once again, our government is letting this happen.

Yelling at people on the street is something that I have never seen happen in New England, where civility is bred of coolness toward our neighbors.  This kind of emotion is rare in this part of the country.

I told my friend that the next time she is accosted and harassed, she should tell them to chill out and direct them to the legal pot shop down the street.

On Deepfakes and Geography April 27, 2021

Posted by Peter Varhol in Machine Learning, Technology and Culture.

When I was in the sixth grade, my science teacher sat me down with a book of overlapping aerial images of terrain, and handed me a pair of stereoscopic glasses.  “I want you to draw topographical maps of these terrains,” he said.  I took to the task assiduously, and in a few weeks had over a dozen hand-drawn topographical maps of well-known terrains.  Of course, they were likely pretty inaccurate, as I was working with only my eyesight to add quantitative information to my estimates of depth and slope.

But they weren’t fake, by any means.  They were the product of careful observation of the photos, and drawing of the representations.  Now we have this thing called deepfakes, where we are able to display highly realistic geographical information and images that have been faked.

It turns out that people aren’t the only raw material for deepfakes.  Maps and satellite images can also be faked.  Your first thought may be “why?”  I know mine was.  But the more interesting question may well be who would bother to produce them?  Consider these scenarios:

Well, let’s say there is a secret military installation, somewhere in the mountains.  We can use deepfakes to hide it entirely in perfectly realistic images shown to the public or to adversaries.  Likewise, we can put a fake base somewhere and use its images for similar purposes.  Who is to know, unless someone tries to visit in person?

Let’s say we have a natural disaster, a destructive hurricane or earthquake.  For PR or political purposes, we may want to show far less damage than actually occurred.  Geographical deepfakes give us the ability to do so.

In both cases, governments can control access to the physical spaces and overhead airspace, so the images can reasonably be presented as ground truth.  Except that they’re not.  Instead, they are carefully crafted messages, communicating only what their creators want known.

These seem like relatively small transgressions in the grand scheme of things.  I’m not at all a conspiracy theorist, so I doubt that such activities can be carried out on a grand scale.  We can’t keep people out of, for example, Los Angeles if there is a major earthquake.

But geographical deepfakes take us one step closer to what people are calling the “post-truth” era.  There is the saying that “you are entitled to your own opinion, but not your own facts.”  Well, with deepfakes, perhaps you are, if you prepare and present them well enough.  To me, that is a frightening proposition.

A Path to AI Explainability April 23, 2021

Posted by Peter Varhol in Machine Learning, Technology and Culture.

I mentioned a while ago that I once met Pepper, the SoftBank robot that responds to tactile stimulation.  This article notes that Pepper can now introspect, that is, work through his “thought” processes aloud as he performs his given activities.  I find that fascinating, in that what Pepper is really doing is explainable AI, which I have been writing about recently.

The result is that Pepper can not only carry out verbal instructions, but also describe what he is doing in the process.  I don’t really understand how to code this, but I do appreciate the result.

Explainable AI is the ability of an AI system to “describe” how it arrived at a particular result, given the input data.  It actually consists of three separate parts: transparency, interpretability, and explainability.  Transparency means that we need to be able to look into the algorithms to clearly discern how they are processing input data.  Interpretability means that the results should be presented so a person can follow a logical pathway from the input data to the output.  Explainability means that we might want to support queries into our results, or to get detailed explanations into more specific phases of the processing.

Further, it appears that Pepper, through talking out his instructions (I really dislike using human pronouns here, but it’s convenient) is able to identify contradictions or inconsistencies that prevent him from completing the activity.  That frees Pepper to ask for additional instructions.

That’s an innovative and cool example of explainability, and extends to the ability of the AI to ask questions if the data are ambiguous or incomplete.  We need more applications like this.

On Delegating Responsibilities for Work April 22, 2021

Posted by Peter Varhol in Uncategorized.

I worked for a large commercial software company earlier in my career.  As was true of many organizations at the time, we were matrixed between functional departments (Dev, QA, marketing, etc.) and cross-functional projects.  At one point, my team was assigned a new member from a functional department.  While I was senior to her, I didn’t supervise her.

I had some struggles with her at first, but once we communicated well she turned out to be an outstanding contributor.  The interesting thing here was that she didn’t want to be told how to do something; she would figure out that part, often in innovative ways.  I learned not to give her detailed instructions, but rather to just keep her informed as to project direction and needs, and trust that the results would be fast and high-quality.

That was the basis of her conflict with her functional manager.  He was aghast at some of the approaches she took, and instead tried to tell her the acceptable way of performing her job.  They argued frequently, with him wanting her tasks done in a way he found acceptable, and her insisting on finding her own way forward.

She actually outlasted her manager at the company (I was long since gone), not least because she was flexible enough to do what was needed in different ways.

When I ask someone to do something, I’ve usually been comfortable looking at results, not process.  Of course there are limits to that.  Don’t do anything illegal, don’t bully other people.  I also once had a subordinate who would work the hours necessary during the week to get the job done, but refused to work weekends.  I was okay with that too.

And some things you have to do for yourself, for speed or symbolic purposes.  If there is something unpleasant to be done, then do it yourself.

But in general, people work in different ways, and in ways we may not have seen before.  That doesn’t make them wrong.  In fact, we may learn something from them.

Why Testing Needs Explainable Artificial Intelligence April 19, 2021

Posted by Peter Varhol in Algorithms, Machine Learning, Software development.

Many artificial intelligence/machine learning (AI/ML) applications produce results that are not easily understandable from their training and input data.  This is because these systems are largely black boxes that use multiple algorithms (sometimes hundreds) to process data and return a result.  Tracing how this data is processed, in mathematical algorithms, is an impossible task for a person.

Further, these algorithms were “trained”, or adjusted, based on the data used as the foundation of learning.  What is really happening there is that the data is adjusting the algorithms to reflect what we already know about the relationship between inputs and outputs.  In other words, we are doing a very complex type of nonlinear regression, without any inherent knowledge of a causal relationship between inputs and outputs.
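To illustrate the point, here is a minimal sketch in Python with scikit-learn; the data and the roles of the two features are entirely made up.  A model can fit the input-output relationship very well while having no idea which input actually causes the output:

    # Sketch: training adjusts weights until inputs map to outputs well,
    # with no notion of causality.  Data and features are hypothetical.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(42)
    causal = rng.uniform(0, 1, size=(500, 1))                  # truly drives the output
    correlated = causal + rng.normal(0, 0.05, size=(500, 1))   # merely correlated with it
    X = np.hstack([causal, correlated])
    y = np.sin(3 * causal).ravel() + rng.normal(0, 0.01, size=500)

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X, y)

    # The fit can be excellent even though the model has learned nothing about
    # which of the two inputs is the causal one.
    print("R^2 on training data:", model.score(X, y))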

At worst, the outputs from AI systems can sometimes seem nonsensical, based on what is known about the problem domain.  Yet because those outputs come from software, we are inclined to trust them and apply them without question.  Maybe we shouldn’t.

But it can be more subtle than that.  The results could reflect a systemic bias that makes outputs seem correct, or at least plausible, when they are not, or at least not ethically right.  And users rarely have recourse to question the outputs, making the system a black box.

This is where explainable AI (XAI) comes in.  In cases where the relationship between inputs and outputs is complex and not especially apparent, users need the application to explain why it delivered a certain output.  It’s a matter of trusting the software to do what we think it is doing.  Ethical AI also plays into this concept.

So how does XAI work?  There is a long way to go here, but there are a couple of techniques that show some promise.  XAI operates on the principles of transparency, interpretability, and explainability.  Transparency means that we need to be able to look into the algorithms to clearly discern how they are processing input data.  While that may not tell us how those algorithms are trained, it provides insight into the path to the results, and is intended for interpretation by the design and development team.
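As a rough illustration of transparency (a sketch only; scikit-learn’s iris dataset and a small decision tree stand in for whatever model a real system uses), some models let you print their internal processing outright:

    # Transparency sketch: a model whose internal processing can be read directly.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(data.data, data.target)

    # export_text prints the exact decision rules applied to the input data,
    # which is the kind of view a design and development team needs.
    print(export_text(model, feature_names=list(data.feature_names)))

Deep neural networks do not give up their processing this easily, which is exactly why transparency is hard for them.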

Interpretability is how the results might be presented for human understanding.  In other words, if you have an application and are getting a particular result, you should be able to see and understand how that result was achieved, based on the input data and processing algorithms.  There should be a logical pathway between data inputs and result outputs.
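Here is a sketch of what that pathway might look like for a single prediction, again using a decision tree on the iris data purely as a stand-in:

    # Interpretability sketch: for one particular input, show the path from
    # the data to the result.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    data = load_iris()
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

    sample = data.data[:1]                  # one input record
    path = model.decision_path(sample)      # the tree nodes visited for this record
    prediction = model.predict(sample)[0]

    print("Nodes visited:", path.indices.tolist())
    print("Predicted class:", data.target_names[prediction])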

Explainability remains a vague concept while researchers try to define exactly how it might work.  We might want to support queries into our results, or to get detailed explanations into more specific phases of the processing.  But until there is better consensus, this feature remains a gray area.

The latter two characteristics are more important to testers and users.  How you do this depends on the application.  Facial recognition software can usually be built to describe facial characteristics and how they match up to values in an identification database.  It becomes possible to build at least interpretability into the software.

But interpretability and explainability are not as easy when the problem domain is more ambiguous.  How can we interpret an e-commerce recommendation that may or may not have anything to do with our product purchase?  I have received recommendations on Amazon that clearly bear little relationship to what I have purchased or examined, so we don’t always have a good path between source and destination.

So how do we implement and test XAI? 

Where Testing Gets Involved

Testing AI applications tends to be very different than testing traditional software.  Testers often don’t know what the right answer is supposed to be.  XAI can be very helpful in that regard, but it’s not the complete answer.

Here’s where XAI can help.  If the application is developed and trained in a way where algorithms show their steps in coming from problem to solution, then we have something that is testable.
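For example, a tester can write a test that asserts the explanation points at the input that should matter.  This is a hedged sketch, not a prescription: the data is synthetic, and permutation importance stands in for whatever explanation mechanism the application actually exposes.

    # Test sketch: assert that the explanation identifies the feature that
    # domain knowledge says should drive the result.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    def test_relevant_feature_dominates_explanation():
        rng = np.random.default_rng(0)
        relevant = rng.normal(size=(400, 1))   # the feature that determines the label
        noise = rng.normal(size=(400, 3))      # irrelevant features
        X = np.hstack([relevant, noise])
        y = (relevant.ravel() > 0).astype(int)

        model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
        result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

        # The explanation should rank column 0 as the most important input.
        assert result.importances_mean.argmax() == 0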

Rule-based systems can make it easier, because the rules form a big part of the knowledge.  In neural networks, however, the algorithms rule, and they bear little relationship to the underlying intelligence.  But rule-based intelligence is much less common today, so we have to go back to the data and algorithms.
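As a contrast, here is a minimal sketch of a rule-based decision; the rules, thresholds, and the approve_loan name are hypothetical, but the point is that the explanation comes almost for free:

    # Rule-based sketch: the knowledge is in the rules, so the system can
    # simply report which rules fired.
    def approve_loan(income, debt_ratio):
        reasons = []
        if income < 30000:
            reasons.append("income below 30,000")
        if debt_ratio > 0.4:
            reasons.append("debt ratio above 0.4")
        decision = "deny" if reasons else "approve"
        return decision, reasons

    print(approve_loan(income=25000, debt_ratio=0.5))
    # ('deny', ['income below 30,000', 'debt ratio above 0.4'])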

Testers often don’t have control over how AI systems work to create results.  But they can delve deeply into both data and algorithms to come up with ways to understand and test the quality of systems.  It should not be a black box to testers or to users.  How do we make it otherwise?

Years ago, I wrote a couple of neural network AI applications that simply adjusted the algorithms in response to training, without any insight on how that happened.  While this may work in cases where the connection isn’t important, knowing how our algorithms contribute to our results has become vital.

Sometimes AI applications “cheat”, using cues that do not accurately reflect the knowledge within the problem domain.  For example, it may be possible to facially recognize people, not through their characteristics, but through their surroundings.  You may have data to indicate that I live in Boston, and use the Boston Garden in the background as your cue, rather than my own face.  That may be accurate (or may not be), but it’s not facial recognition.

A tester can use an XAI application here to help tell the difference.  That’s why developers need to build in this technology.  But testers need deep insight into both the data and the algorithms.
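Even without a full XAI application, a tester can probe for this kind of cheating with a behavioral check.  A hedged sketch, assuming a recognize() function as a hypothetical stand-in for the system under test and a known face bounding box:

    # Check sketch: if recognition really depends on the face, blanking the
    # background should not change the result.  recognize() is hypothetical.
    import numpy as np

    def blank_background(image, face_box):
        # Zero out everything outside the face bounding box (x0, y0, x1, y1).
        x0, y0, x1, y1 = face_box
        masked = np.zeros_like(image)
        masked[y0:y1, x0:x1] = image[y0:y1, x0:x1]
        return masked

    def recognition_ignores_background(recognize, image, face_box):
        original = recognize(image)
        masked = recognize(blank_background(image, face_box))
        # If these differ, the model is likely keying on the surroundings
        # (the Boston Garden in the background) rather than the face.
        return original == masked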

Overall, a human in the loop remains critical.  Unless someone is looking critically at the results, they can be wrong, and quality will suffer.

There’s no one correct answer here.  Instead, testers need to be intimately involved in the development of AI applications, and insist on an explanatory architecture.  Without that, there is no way of ensuring the quality that these applications need in order to deliver actionable results.

Deepfakes and Our Belief Systems April 2, 2021

Posted by Peter Varhol in Publishing, Technology and Culture.

What we believe has become very controversial in recent years.  There is the old saw that we can choose what we believe, but we can’t choose our facts.  Or can we?

Deepfakes are images or videos that have been manipulated to show false narratives.  Probably the most famous example is the video of Speaker of the House Nancy Pelosi that circulated on social media, simply slowed down to the point where she seemed to be slurring her words and appeared to be drunk.

But videos can also be entirely synthesized, through a combination of existing digital content, manipulating how that content is displayed, editing that content, and adding new content.  While I’m not artist enough to do it, the tools are within reach for plenty of people.

Certainly in politics, international diplomacy, and maybe even in business, deepfakes have the clear potential to change the narratives of debates.  And they will, as people will continue to believe what they see, because how can our eyes lie?  (Cue the Eagles’ “Lyin’ Eyes”.)

But deepfakes can also be used for more prosaic purposes, such as product placement.  Or, as the above example suggests, trashing people we want to trash, for our own personal purposes.

You, and perhaps even I, can come up with deepfakes.  And we will.  I would like to think that I have a little more ethics than that, but it’s still problematic.  So where, precisely, are we going with deepfakes?  I don’t think it’s a good place.

True North and Life: An Allegory April 1, 2021

Posted by Peter Varhol in Uncategorized.

True North is this, from Google:  Geographic (or True) North Pole: It is the point on the Earth which is calculated as the northerly point which is furthest away from the Equator.  It is defined as 90°N.  It is located in the middle of the Arctic Ocean.

As a personal life goal, it means trying to be the person that you can be, without excuses, to point unambiguously North, toward a clear goal.  It is different, somewhat, for every person, but everyone needs a true North in life.

I have an internal compass.  Compasses do not point to true North; they point to magnetic north.  I don’t understand the geophysics, but magnetic north moves.  Even in my lifetime, magnetic north has shifted several hundred miles.  So we wander around over the period of our lives, maybe not far from North, but maybe not particularly close to it, either.  Look it up.

If we are defining our lives by true North, but have an internal compass, this is a problem, because it means that our life cannot automatically point to true North.  We need to, I think, figure out how to get there through dead reckoning.

My internal compass is wrong, mostly.  I don’t know at any particular time whether or not my life is pointing to true North.  In fact, it’s probably not.  How can I make myself into the person who points to their true North?  And gets there, if not right away, then by the end of life?

I also have a directional gyroscope inside of me.  I can set that gyro to North, periodically.  But it does something called precession.  It is a mechanical device that eventually becomes inaccurate, and needs to be reset.

And I think that for all of us, if we cannot find and reach true North in our lifetimes, yes, we may crash.  I will repeat myself.  Life is mostly about dead reckoning, which means navigating from one point to the other, using our compass and directional gyro.  There are no other tools in life.  If I am no longer dead reckoning, I am dead.

And.  This is the most important thing in my life.