I Suppose It Was Inevitable February 19, 2020

Posted by Peter Varhol in Machine Learning, Technology and Culture, Uncategorized.

I read this article on the development of robots for sex purposes with a certain amount of incredulity.  We probably all have deep fantasies (mine are pretty tame compared to stuff like this), and perhaps at first glance such a robot represents a harmless (but expensive) way of acting them out.

But is it harmless?  We learn about ourselves and human nature in general through feedback from real live humans.  And they learn from us in the same way.  Certainly artificial general intelligences (AGI), even in their infancy, can learn things from human interaction and respond in perhaps unexpected ways.

But if you buy (or rent, I suppose) a sex robot, ultimately you are going to get your way.  That’s the whole purpose.  The robot may look like a child, or it may shout “Rape!”, but you will not be arrested.  How could you be?  And with that, how will you respond in other circumstances when the robot is actually a real live human being?  That is the Big Problem here.

We interact with other people, casually and intimately, for multiple reasons.  Sometimes we lack the interpersonal skills to do it well, and the relationship breaks down.  We fail, and failure is a part of life.  We learn (or not), and move on.  I hope we learn a lot.  But learning from an AGI isn’t nearly the same as learning from a real live human being.

With an AGI sex robot, we may learn that we just have to keep forcing the issue, that the entity will eventually give us what we want.  In other words, we won’t fail with the robot, no matter what our goal.  And that is the wrong lesson.

The cited article talks about legislation against AGI sex robots.  I’m not sure legislation is the answer.  How about common sense?

Over the last couple of years, I have read with some amusement about various Bills of Rights for AI entities.  Silly and stupid, I thought.  Now I am questioning that stance, not for the benefit of the robot, but for the benefit of society as a whole.  I find myself upset over this, and I hope you might consider being so too.

Will We Have Completely Autonomous Airliners? January 2, 2020

Posted by Peter Varhol in aviation, Machine Learning, Technology and Culture.

This has been the long-term trend, and two recent stories have added to the debate.  First, the new FAA appropriations bill includes a directive to study single-pilot airliners used for cargo.  Second is this story in the Wall Street Journal (paywall), discussing how the Boeing 737 MAX crashes have pushed the company to advocate even more strongly for fully autonomous airliners.

I have issues with that.  First, Boeing’s reasoning is fallacious.  The 737 MAX crashes were not pilot error, but rather design and implementation errors, compounded by inadequate documentation and training.  Boeing as a culture apparently still refuses to acknowledge that.

Second, as I have said many times before, automation is great when used in normal operations.  When something goes wrong, automation more often than not does the opposite of the right thing, attempting to continue normal operations in an abnormal situation.

As for a single pilot, when things go wrong, a single pilot is likely to fixate on the most immediate problem rather than dividing the workload.  In an emergency, two experienced heads are better than one.  And there are instances, albeit rare, where a pilot becomes incapacitated and a second person is needed.

Boeing is claiming that AI will provide the equivalent of a competent second pilot.  That’s not what AI is all about.  Despite the ability to learn, a machine learning system would have to have seen the circumstances of the failure before, and have a solution, or at least an approximation of a solution, as a part of its training.  This is not black magic, as Boeing seems to think.  It is a straightforward process of data and training.
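To make that concrete, here is a toy sketch (all scenarios, numbers, and responses invented for illustration) of why a trained system is bounded by its training data: if responses are, in effect, retrieved by similarity to known failure scenarios, a genuinely novel emergency has no close match to retrieve.

```python
# A toy sketch, with hypothetical data: a "trained" system that maps an
# aircraft state to the response for the most similar known failure.
import numpy as np

# Each known failure is a feature vector (say, normalized airspeed, pitch,
# and trim state) paired with a response learned in training.
known_failures = {
    "engine_out":    (np.array([0.2, 0.1, 0.0]), "run the engine-out checklist"),
    "runaway_trim":  (np.array([0.5, 0.9, 1.0]), "cut out the trim, trim manually"),
    "cabin_depress": (np.array([0.8, 0.3, 1.0]), "descend to 10,000 feet"),
}

def recommend(state, max_distance=0.3):
    """Return the trained response for the nearest known scenario,
    or None when the state is too far from anything seen in training."""
    best_name, best_dist = None, float("inf")
    for name, (features, _) in known_failures.items():
        dist = np.linalg.norm(state - features)
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist > max_distance:
        return None  # a novel emergency: no confident recommendation exists
    return known_failures[best_name][1]

print(recommend(np.array([0.52, 0.88, 1.0])))  # near runaway_trim -> its response
print(recommend(np.array([0.0, 0.9, 0.2])))    # unlike anything trained -> None
```

Real systems are vastly more sophisticated than a nearest-neighbor lookup, but the constraint is the same: the answer has to lie somewhere within the training data.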

AI does only what it is trained to do.  Boeing says that pilot error is the leading cause of airliner incidents.  That is correct, but it’s not as simple as that.  Pilot error is a catch-all that covers a number of different things, including wrong decisions, poor information, and inadequate training, among others.  While these can all be traced back to the pilot, they stem from several different underlying errors and omissions.

So I have my doubts as to whether full automation is possible or even desirable.  And the same applies to a single pilot.  Under normal operations, it might be a good approach.  But life is full of unexpected surprises.

When AI is the Author December 9, 2019

Posted by Peter Varhol in Algorithms, Machine Learning.

I have been a professional writer (among many other things) since 1988.  I’ve certainly written over a couple thousand articles and blog posts, and in my free time have authored two fiction thrillers, with several more on the way.  I have found over the years that I write fast, clearly, and when called for, imaginatively.

Now there are machine learning systems that can do all of that.

I knew that more recent ML systems had been able to automatically write news articles for publication on news sites.  At the beginning of this year, OpenAI, a research foundation committed to ethical uses of AI, announced that they had produced a text generator so good that they weren’t going to release the trained model, fearful that it would be used to spread fake news.  They instead released a much smaller model for researchers to experiment with, as well as a technical paper.

Today, though, the same GPT-2 model is being used to create creative works such as fiction and poetry.  I started wondering if there was anything machine learning could not do, or at least mimic.

But there’s a catch.  These ML systems have to be given a starting point, typically the beginning of the story.  Once they have one, they can come up with follow-on sentences that can be both creative and factual.
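For the curious, here is a minimal sketch of that workflow using the Hugging Face transformers library and the small, publicly released GPT-2 model (the tooling choice is mine, not something from the original coverage).  Note that generation only begins once we hand the model an opening line.

```python
# A minimal sketch: GPT-2 continues a story only from a prompt we supply.
from transformers import pipeline

# Load the small GPT-2 model that OpenAI released publicly.
generator = pipeline("text-generation", model="gpt2")

# The "starting point": the model cannot begin from nothing.
prompt = "The detective stepped off the train into the fog, knowing"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])  # the prompt plus the model's continuation
```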

But they can’t do so without that starting point.  They can provide the middle, and perhaps the end, although I suspect that the end would have to be tailored toward a particular circumstance.

Decades ago, I read a short story titled “In Medias Res”.  The term refers to a writing technique in which the narrative begins in the middle of the story and fills in the backstory through devices such as flashback.  In this case, however, the story was about a commercial writer who could only write the middle of stories.  Other writers, who could see their beginning and end but not the middle, hired him to write the middles of their stories.  He was troubled that he always came in at the middle, incapable of writing a complete story.

While I’m guessing that ML techniques will eventually be good enough to compose much of our news and fiction, the advances of today are only capable of in medias res.  So I will continue writing.

Minority Report Has Arrived August 19, 2019

Posted by Peter Varhol in Machine Learning, Technology and Culture.

It’s not quite the Tom Cruise movie, where the precrime detective is himself pursued for a crime he will commit in the future, but it is telling nonetheless.  In this case, the Portland (OR) police department digitally altered a mug shot to remove facial tattoos from a bank robbery suspect before showing it to witnesses who had reported no such tattoos.  The man was arrested and tried, based partly on the doctored photo.

The technology used was not at all cutting edge (it was Photoshopped), but the alteration was intended to make the mug shot look more like the descriptions provided by the witnesses.  It’s not clear that the police and prosecutors tried to hide this fact, but they justified it by saying the suspect could have used makeup prior to the robberies.  The public defender is, of course, challenging its admissibility, but as of now there has been no ruling on the matter.  The police also say that they have made similar adjustments to photos in other cases.  Hmmm.

This specific instance is troubling in that we expect legal evidence not to be Photoshopped, especially for the purpose of pointing the finger at a specific suspect.  The more strategic issue is how law enforcement, and society in general, will use newer technologies to craft evidence advocating or rejecting certain positions.  I don’t expect Congress or any other legislative body to craft even imperfect laws regulating this until it is far too late.

I can envision a future where law, politics, and even news in general come to rely on deep fakes as a way of influencing public opinion, votes, and society as a whole.  We certainly see enough of that today, and faked videos and voices will simply make it more difficult to tell the difference between honest and made-up events.  Social media, with its fake-news rules applied inconsistently, makes matters worse.

I’m rather reminded of the old (1980s) TV show Max Headroom, in which a comatose investigative reporter lends his knowledge and personality to a lifelike AI that broadcasts in his stead.  The name comes from the last thing the reporter saw before his coma – a sign reading MAX HEADROOM 2.3M, which his head hit at high speed and which became his AI nom de guerre.

We wonder why so many people persist in believing clearly unsupported statements, and at least part of that has to do with the ability of anyone to express anything and attract a wide audience (“It’s on the Internet so it must be true!”).  Doctored words, photos, and video will eventually make nothing believable.

Congestion Pricing and the Surveillance State July 23, 2019

Posted by Peter Varhol in Machine Learning, Technology and Culture.

Not many people are aware that New York City is instituting congestion pricing, both to ease traffic (mostly in Manhattan) and to provide additional funding for public transportation.  Both are laudable goals.  This is to be implemented by using one or more cameras to take a photo of each car’s license plate, and generate a bill that can be sent to the owner of that car.  One of the proposals for the technology to implement it was delivered by a company called Perceptics.
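Before getting to what Perceptics actually proposed, it’s worth noting how little the stated purpose requires.  A minimal sketch (all names, plates, and rates hypothetical): a plate read, an owner lookup, and a bill.

```python
# A minimal sketch of the stated mechanism: photographed plate -> owner -> bill.
# All names, plates, and the toll amount are hypothetical.
from typing import Optional

CONGESTION_TOLL = 11.52  # hypothetical charge, in dollars

# Registration lookup: plate -> registered owner.
registry = {"ABC1234": "J. Doe, 42 Hypothetical Ave"}

def bill_for_plate(plate: str) -> Optional[str]:
    """Format a congestion bill for the registered owner of a plate."""
    owner = registry.get(plate)
    if owner is None:
        return None  # unreadable or unregistered plate: flag for manual review
    return f"Bill ${CONGESTION_TOLL:.2f} to {owner} (plate {plate})"

print(bill_for_plate("ABC1234"))
```

That is the entire data requirement for the stated purpose; keep it in mind for what follows.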

Perceptics proposed a solution that included the ability to identify cars not only by license plate number, but also by the characteristics of the car itself.  The car is like a fingerprint, the company says.  It has characteristics that provide a unique identification.

Still okay, perhaps, but it’s starting to strain credulity.

Then we find out that Perceptics proposes using a large number of cameras across the congestion zone, back-ended by AI-type algorithms and Big Data analytics whose purpose is to determine the number of people in the car, who they are (through facial recognition), where they came from, where they are going, and how often they do so.

Go back and read that sentence again.  To send out congestion pricing bills, they want to know who is in the car, where it is going, and how often it does so, among other things.  Even I know that is serious overkill for the stated purpose.  But that data can be used for a lot of other things that have nothing to do with congestion pricing.  In fact, it provides a window into almost everything that someone who drives into the congestion area does.

London has been wired extensively with CCTV cameras since at least the 1990s.  Today, the best estimate for the number of CCTV cameras in London is 500,000, and the average person there is caught on camera 300 times a day.  Thanks to facial recognition and analytics, you no longer need dozens of analysts sorting through tapes to find a particular person; with the right search algorithms, a cloud database can place a particular person in a particular location within seconds.

Those numbers alone should blow your mind.
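To see why such a search takes seconds rather than analyst-hours, consider this minimal sketch (all sizes and data are synthetic): once every camera sighting has been reduced to a face-embedding vector, finding a person is a single vectorized similarity scan over the entire archive.

```python
# A minimal sketch of embedding-based search over a sighting archive.
# All data is synthetic; real systems index far more frames than this.
import numpy as np

rng = np.random.default_rng(0)

# Pretend archive: 200,000 sightings, each a 128-dim face embedding,
# normalized so that a dot product is cosine similarity.
archive = rng.normal(size=(200_000, 128))
archive /= np.linalg.norm(archive, axis=1, keepdims=True)

# Probe: a noisy new capture of a face already in the archive.
probe = archive[123_456] + rng.normal(scale=0.01, size=128)
probe /= np.linalg.norm(probe)

scores = archive @ probe              # similarity to every sighting at once
top5 = np.argsort(scores)[-5:][::-1]  # closest matches, best first
print(top5, scores[top5])             # index 123456 comes out on top
```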

I don’t know why proposals like this don’t bother people.  I’m guessing that because the cameras are unobtrusive, people don’t see them and can push the abstract thought of surveillance out of their minds.  Further, they reason that they are not criminals, and that these technologies serve to protect rather than harm them.  It’s much the same reason they post anything they like on Facebook, without a thought for the consequences.

Today, the cacophony of opinion says that if you’re not doing anything wrong, you are silly or unpatriotic to be afraid of the surveillance state.  Wrong.  The algorithms are by no means perfect, and could mistake you for someone else, sometimes with disastrous consequences.  Further, the data could be stolen and used against you.  As a guiding principle, we as free individuals in a free country should not be subject to constant video scrutiny.

Yet here we are.

Is Elon Musk Prescient, or Just Scary? July 19, 2019

Posted by Peter Varhol in Machine Learning, Technology and Culture.

The headline blared out at me yesterday:  Elon Musk has formed a company to make implants to link the brain with a smartphone.  The article offered no insight into how that might be done, but did say that Elon Musk wants to insert Bluetooth-enabled implants into the brain, claiming the devices could enable telepathy (???) and repair motor function in people with injuries.  Further, he says it could be used by stroke victims, cancer patients, quadriplegics, and others with congenital defects.  The implant connects via Bluetooth to a small computer worn over the ear and to a smartphone.

The part about helping the disabled sounds really good, but once again, it’s not clear how that might happen.  And telepathy?  Seriously?  It’s not at all clear what having a smartphone wired to your brain is supposed to accomplish in terms of higher cognitive function.

All this leaves me shrugging my shoulders.  On the one hand, I like the idea of technology that might help the disabled, especially those with brain or motor damage.  On the other hand, it is not at all clear how this whole thing might work in practice.

(I am tempted to go third or fourth hand at this point, but maybe I should use the Gripping Hand).  And then there’s Musk.  He has proven himself to be a brilliant and driven innovator, but also seems less than stable when obstacles lie in his path.

Color me dubious.  There are possibly some advantages here in helping disabled people; I don’t think they will amount to much, but much more information is needed.  And the idea of anyone, whether disabled or healthy, thinking it’s a good idea to put computer chips in their brain (up to 10, says Musk) seems to be the height of folly.

Then there is the whole question of who owns the data coming from your brain.  I don’t even want to touch that one.  If we thought ownership and use of Facebook and Google data was controversial, this takes the cake.

So here is another technology I’m not ready for.  I’ve always been cautious in adopting new technologies, but I think I draw the line at computer chips in my brain.

The Path to Autonomous Automobiles Will Be Longer Than We Think July 14, 2019

Posted by Peter Varhol in Algorithms, Machine Learning, Software platforms, Technology and Culture.

I continue to be amused by people who believe that fully autonomous automobiles are right around the corner.  “They’re already in use in many cities!” they exclaim (no, they’re not).  In a post earlier this year, I listed four reasons why we are unlikely to see fully autonomous vehicles in my lifetime; at the very top of the list are mapping technology and geospatial information.

That makes the story of hundreds of cars bound for Denver International Airport being misdirected by Google Maps all the more amusing.  Due to an accident on the main highway to DIA, Google Maps suggested an alternative route down a dirt road, which eventually became a muddy mess that trapped over 100 cars in the middle of the prairie.

Of course, Google disavowed any responsibility, claiming that it makes no promises with regard to road conditions, and that users should check road conditions ahead of time.  Except that it did say that this dirt road would take about 20 minutes less than the main road.  Go figure.  While not a promise, it does sound like a factual statement on, well, road conditions.  And, to be fair, they did check road conditions ahead of time – with Google!

While this is funny (at least to read about), it points starkly to the limitations of digital maps for car navigation.  Autonomous cars require maps with exacting detail, within feet or even inches.  Yet if Google, one of the best mapping providers, cannot get an entire route right, then there is no hope for fully autonomous cars to use these same maps sans driver.

But, I hear you say, how often does this happen?  It happens often.  I’ve often taken a Lyft to a particular street address in Arlington, Massachusetts, a close-in suburb of Boston.  The Lyft (and, I would guess, Uber) maps have it as a through street, but in ground truth it is bisected by the Minuteman Bikeway and blocked to vehicular traffic.  Yet every single Lyft tries to take me down one end of that street in vain.  Autonomous cars need much better navigation than this, especially in and around major cities.
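Here’s a toy sketch of that failure mode (streets and weights invented): the router’s graph still contains an edge the real world has closed, so the “best” route is one a car cannot actually drive.

```python
# A toy sketch: routing over map data that no longer matches the ground.
import heapq

# Map data as the provider has it; "lake_st" is recorded as a through
# street even though, on the ground, a bikeway blocks it to cars.
map_graph = {
    "start":     {"lake_st_w": 1, "main_rd": 3},
    "lake_st_w": {"lake_st_e": 1},   # stale edge: actually blocked
    "lake_st_e": {"destination": 1},
    "main_rd":   {"destination": 4},
}

def shortest_path(graph, src, dst):
    """Plain Dijkstra over the (possibly stale) map graph."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

# The router happily picks the undrivable street: cost 3 via lake_st
# beats cost 7 via main_rd.
print(shortest_path(map_graph, "start", "destination"))
```

No amount of clever routing fixes stale map data; garbage in, garbage out.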

And Google can’t have it both ways, supplying us with traffic conditions yet disavowing any responsibility for doing so.  Of course, that approach is part and parcel of how any major tech company operates, so we shouldn’t be surprised.  But we should be very wary of the geospatial information they provide.

Will We Ever Be Ready for Smart Cities? July 12, 2019

Posted by Peter Varhol in Machine Learning, Technology and Culture.

In theory, a smart city is a great idea.  With thousands of sensors and real time data analytics, the city and its inhabitants can operate far more efficiently than they do today.  We have detailed traffic, pedestrian, and shopping patterns, right down to the individual if we so choose.

We can use data on traffic flows to route traffic and coordinate traffic lights.  Stores can operate at times that are convenient to people.  Power plants can generate electricity based on actual real time usage.  Crime patterns can be easily identified, with crime avoidance and crimefighting strategies applied accordingly.  The amount of data that can be collected in a city with tens of thousands of sensors all feeding into a massive database is enormous.

This is what Google (Alphabet), through its company Sidewalk Labs, wants to do in a development in Toronto; last year it won the right to take a neighborhood under development and make it a smart city.  This article notes that urban planners have rushed to develop the waterfront area and build the infrastructure needed to create at least a smart neighborhood that demonstrates many of the concepts.

But now Toronto is pushing back on the whole idea.  The primary issue is one of data control and use.  A smart city will generate enormous amounts of data, not just on aggregates of people, but on identifiable images and people.  It seems this was left as a “to be determined” item in initial selection and negotiations.  Now that Sidewalk Labs is moving forward to build out the plan, the question of the data has come to the forefront.  And what is occurring isn’t pretty.

The answer that seems to be popular is called a “data trust”, a storage and access entity that protects the data from both the government and the vendor supplying the smart services.  Alphabet’s Sidewalk Labs claims to have produced the strongest possible data protection plan; Toronto and activist groups strongly disagree.  Without seeing the plan, I can’t say, but I can say that I would be concerned about a commercial vendor (especially one connected to Google) having any access to this level of data for any purpose.  It is truly the next level of potentially breaching privacy to obtain deeper commercial data.  And do any of us really think that Google won’t ultimately do so?

Now, I was raised in rural America, and while I am comfortable enough whenever I am in a city, it is not my preferred habitat.  It seems to me that there is a tradeoff between privacy and the ability to use data on individual activities (even aggregated) to make day-to-day life more efficient for the city and its occupants.  Despite the abstract advantages of the smart-city approach, I don’t think we have the trust necessary to carry it out.