
Minority Report Has Arrived August 19, 2019

Posted by Peter Varhol in Machine Learning, Technology and Culture.

It’s not quite the Tom Cruise movie, in which the temporal detective is himself pursued for a crime he will commit in the future, but it is telling nonetheless.  In this case, the Portland (OR) police department digitally altered a mug shot to remove facial tattoos from a suspected bank robber before showing it to witnesses who had reported no such tattoos.  This led to the arrest and trial of that man, based partly on the doctored photo.

The technology used was not at all cutting edge (it was Photoshopped), but it was intended to make the mug shot look more like the descriptions provided by the witnesses.  It’s not clear that the police and prosecutors tried to hide this fact, but they justify it by saying the suspect could have used makeup prior to the robberies.  The public defender is, of course, challenging its admissibility, but as of now there has been no ruling on the matter.  The police also say that they have done similar adjustments to photos in other cases.  Hmmm.

This specific instance is troubling because we expect legal evidence not to be Photoshopped, especially for the purpose of pointing the finger at a specific suspect.  The more strategic issue is how law enforcement, and society in general, will use newer technologies to craft evidence supporting or rejecting certain positions.  I don’t expect Congress or any other legislative body to craft even imperfect laws regulating this until it is far too late.

I can envision a future where law, politics, and even news in general come to rely on deep fakes as a way of influencing public opinion, votes, and society as a whole.  We certainly see enough of that today, and the use of faked videos and voices will simply make it more difficult to tell the difference between honest and made-up events.  Social media, with its fake-news rules applied inconsistently, makes matters worse.

I’m rather reminded of the old (1980s) TV show Max Headroom, in which a comatose investigative reporter lends his knowledge and personality to a lifelike AI that broadcasts in his stead.  The name comes from the last thing the reporter saw before his coma: a sign reading MAX HEADROOM 2.3M, which his head hit at high speed, and which became his AI counterpart’s nom de guerre.

We wonder why so many people persist in believing clearly unsupported statements, and at least part of that has to do with the ability of anyone to express anything and attract a wide audience (“It’s on the Internet so it must be true!”).  Doctored words, photos, and video will eventually make nothing believable.

Congestion Pricing and the Surveillance State July 23, 2019

Posted by Peter Varhol in Machine Learning, Technology and Culture.

Not many people are aware that New York City is instituting congestion pricing, both to ease traffic (mostly in Manhattan) and to provide additional funding for public transportation.  Both are laudable goals.  This is to be implemented by using one or more cameras to take a photo of each car’s license plate, and generate a bill that can be sent to the owner of that car.  One of the proposals for the technology to implement it was delivered by a company called Perceptics.
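Note how little the stated purpose actually requires: a plate read, a timestamp, and a registry lookup.  A rough sketch of that core billing loop (the fee, plate numbers, and camera names here are hypothetical, not taken from any actual tolling system):

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical flat charge per entry into the congestion zone, in cents.
ZONE_ENTRY_FEE_CENTS = 1152

@dataclass
class PlateRead:
    plate: str        # text extracted from the license-plate photo
    timestamp: str    # ISO-8601 time of the camera capture
    camera_id: str    # which zone-boundary camera saw the car

def generate_bills(reads, registry):
    """Aggregate camera plate reads into one bill per registered owner.

    `registry` maps plate numbers to owner names; unmatched plates are
    returned separately for manual review.
    """
    bills = defaultdict(int)
    unmatched = []
    for read in reads:
        owner = registry.get(read.plate)
        if owner is None:
            unmatched.append(read)
        else:
            bills[owner] += ZONE_ENTRY_FEE_CENTS
    return dict(bills), unmatched

reads = [
    PlateRead("ABC1234", "2019-07-23T08:15:00", "cam-61st-st"),
    PlateRead("ABC1234", "2019-07-23T17:40:00", "cam-61st-st"),
    PlateRead("ZZZ9999", "2019-07-23T09:02:00", "cam-fdr-dr"),
]
registry = {"ABC1234": "J. Smith"}

bills, unmatched = generate_bills(reads, registry)
print(bills)           # {'J. Smith': 2304}
print(len(unmatched))  # 1
```

Nothing in this loop needs to know who is in the car, where it came from, or where it is going.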

Perceptics proposed a solution that included the ability to identify cars not only by license plate number, but also by the characteristics of the car itself.  The car is like a fingerprint, the company says.  It has characteristics that provide a unique identification.

Still okay, but it’s starting to strain credulity.

Then we find out that Perceptics proposes using a large number of cameras across the congestion zone, back-ended by AI-type algorithms and Big Data analytics whose purpose is to determine the number of people in the car, who they are (through facial recognition), where they came from, where they are going, and how often they do so.

Go back and read that sentence again.  To send out congestion pricing bills, they want to know who is in the car, where it is going, and how often it does so, among other things.  Even I know that is some serious overkill for the stated purpose.  But that data can be used for a lot of other things that have nothing to do with congestion pricing.  In fact, it provides a window into almost everything that someone who drives into the congestion area does.

London has been wired extensively with CCTV cameras since at least the 1990s.  Today, the best estimate for the number of CCTVs in London is 500,000, and the average person in London is caught on camera 300 times a day.  Thanks to facial recognition and analytics, you no longer need dozens of analysts desperately sorting through tapes to find a particular person; instead, a cloud database with the right search algorithms can find a particular person in a particular location within seconds.

Those numbers alone should blow your mind.
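The “within seconds” part is easy to believe.  Once each archived frame has been reduced to a face embedding, finding a person is a single vectorized similarity search.  A minimal sketch, assuming unit-length embeddings from some face-recognition model (the gallery here is random data standing in for real embeddings):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical gallery: one 128-d embedding per archived camera frame,
# as produced by some face-recognition model (not shown here).
rng = np.random.default_rng(0)
gallery = normalize(rng.standard_normal((50_000, 128)))

def find_person(probe, gallery, top_k=5):
    """Return the frames whose face embeddings best match the probe.

    With unit-length embeddings, cosine similarity is a single
    matrix-vector product, so scanning tens of thousands of frames is
    one vectorized pass rather than hours of human review.
    """
    scores = gallery @ normalize(probe)
    best = np.argsort(scores)[::-1][:top_k]
    return list(zip(best.tolist(), scores[best].tolist()))

# A slightly noisy view of the person captured in frame 12345.
probe = gallery[12_345] + 0.05 * rng.standard_normal(128)
matches = find_person(probe, gallery)
print(matches[0][0])  # frame 12345 should rank first
```

Real systems put an approximate-nearest-neighbor index in front of this so the scan is not even linear, which is exactly what makes retroactive search over months of footage cheap.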

I don’t know why proposals like this don’t bother people.  I’m guessing that because the cameras are unobtrusive, people don’t see them and can push the abstract threat of surveillance out of their minds.  Further, they reason that they are not criminals, and that these types of technology serve to protect rather than harm them.  It’s much the same reason they post anything they like on Facebook, without a thought for the consequences.

Today, the cacophony of opinion says that if you’re not doing anything wrong, you are silly or unpatriotic to be afraid of the surveillance state.  Wrong.  The algorithms are by no means perfect, and could mistake you for someone else, sometimes with disastrous consequences.  Further, the data could be stolen and used against you.  As a guiding principle, we as free individuals in a free country should not be subject to constant video scrutiny.

Yet here we are.

Is Elon Musk Prescient, or Just Scary? July 19, 2019

Posted by Peter Varhol in Machine Learning, Technology and Culture.

The headline blared out at me yesterday:  Elon Musk has formed a company to make implants that link the brain with a smartphone.  The article offered no insight on how that might be done, but did say that Elon Musk wants to insert Bluetooth-enabled implants into the brain, claiming the devices could enable telepathy (???) and repair motor function in people with injuries.  Further, he says that it could be used by stroke victims, cancer patients, quadriplegics, and others with congenital defects.  It connects via Bluetooth to a small computer worn over the ear, and from there to a smartphone.

The part about helping the disabled sounds really good, but once again, it’s not clear how that might happen.  And telepathy?  Seriously?  It’s not at all clear what having a smartphone wired to your brain is supposed to accomplish in terms of higher cognitive function.

All this leaves me shrugging my shoulders.  On the one hand, I like the idea of technology that might help the disabled, especially those with brain or motor damage.  On the other hand, it is not at all clear how this whole thing might work in practice.

(I am tempted to go to a third or fourth hand at this point, but maybe I should just use the Gripping Hand.)  And then there’s Musk.  He has proven himself to be a brilliant and driven innovator, but he also seems less than stable when obstacles lie in his path.

Color me dubious.  There are possibly some advantages here in helping disabled people; I don’t think they will amount to all that much, but much more information is needed.  Still, the idea of anyone, whether disabled or healthy, thinking it’s a good idea to put computer chips in their brain (up to 10, says Musk) seems the height of folly.

Then there is the whole question of who owns the data coming from your brain.  I don’t even want to touch that one.  If we thought ownership and use of Facebook and Google data was controversial, this takes the cake.

So here is another technology I’m not ready for.  I’ve always been cautious in adopting new technologies, but I think I draw the limit at computer chips in my brain.

The Path to Autonomous Automobiles Will Be Longer Than We Think July 14, 2019

Posted by Peter Varhol in Algorithms, Machine Learning, Software platforms, Technology and Culture.

I continue to be amused by people who believe that fully autonomous automobiles are right around the corner.  “They’re already in use in many cities!” they exclaim (no, they’re not).  In a post earlier this year, I listed four reasons why we are unlikely to see fully autonomous vehicles in my lifetime; at the very top of the list are mapping technology, maps, and geospatial information.

That makes the story of hundreds of cars being misdirected by Google Maps on their way to Denver International Airport all the more amusing.  Due to an accident on the main highway to DIA, Google Maps suggested an alternative route, which turned into a muddy mess that trapped over 100 cars in the middle of the prairie.

Of course, Google disavowed any responsibility, claiming that it makes no promises with regard to road conditions, and that users should check road conditions ahead of time.  Except that it did say that this dirt road would take about 20 minutes less than the main road.  Go figure.  While not a promise, it does sound like a factual statement on, well, road conditions.  And, to be fair, they did check road conditions ahead of time – with Google!

While this is funny (at least to read about), it points starkly to the limitations of digital maps for use in car navigation.  Autonomous cars require maps with exacting detail, within feet or even inches.  Yet if Google, one of the best mapping providers, cannot get an entire route right, then there is no hope for fully autonomous cars to use these same maps sans driver.

But, I hear you say, how often does this happen?  Often.  I’ve frequently taken a Lyft to a particular street address in Arlington, Massachusetts, a close-in suburb of Boston.  The Lyft (and, I would guess, Uber) maps show it as a through street, but in ground truth it is bisected by the Minuteman Bikeway and blocked to vehicular traffic.  Yet every single Lyft tries in vain to take me down one end of that street.  Autonomous cars need much better navigation than this, especially in and around major cities.

And Google can’t have it both ways, supplying us with traffic conditions yet disavowing any responsibility for doing so.  Of course, that approach is part and parcel of any major tech company, so we shouldn’t be surprised.  But we should be very wary of the geospatial information they provide.

Will We Ever Be Ready for Smart Cities? July 12, 2019

Posted by Peter Varhol in Machine Learning, Technology and Culture.

In theory, a smart city is a great idea.  With thousands of sensors and real time data analytics, the city and its inhabitants can operate far more efficiently than they do today.  We have detailed traffic, pedestrian, and shopping patterns, right down to the individual if we so choose.

We can use data on traffic flows to route traffic and coordinate traffic lights.  Stores can operate at times that are convenient to people.  Power plants can generate electricity based on actual real time usage.  Crime patterns can be easily identified, with crime avoidance and crimefighting strategies applied accordingly.  The amount of data that can be collected in a city with tens of thousands of sensors all feeding into a massive database is enormous.
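To make one of those uses concrete: coordinating traffic lights from sensor counts can be as simple as splitting each signal cycle in proportion to measured flow.  A toy sketch (the cycle length, minimum green time, and counts are all hypothetical; real signal-timing models are far richer):

```python
def green_splits(flows, cycle_s=90, min_green_s=10):
    """Split one signal cycle among approaches in proportion to
    measured vehicle counts, with a floor so no approach starves.
    """
    total = sum(flows.values())
    spare = cycle_s - min_green_s * len(flows)
    splits = {}
    for approach, count in flows.items():
        share = count / total if total else 1 / len(flows)
        splits[approach] = min_green_s + spare * share
    return splits

# Hypothetical five-minute counts from intersection sensors.
print(green_splits({"northbound": 120, "eastbound": 40}))
# {'northbound': 62.5, 'eastbound': 27.5}
```

The point is that even this trivial optimization consumes per-vehicle sensor data; the city-scale version consumes vastly more.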

This is what Google (Alphabet) wants to do in a Toronto development through its company Sidewalk Labs, which last year won the right to take a neighborhood under development and make it a smart city.  This article notes that urban planners have rushed to develop the waterfront area and build the infrastructure needed to create at least a smart neighborhood that demonstrates many of the concepts.

But now Toronto is pushing back on the whole idea.  The primary issue is one of data control and use.  A smart city will generate enormous amounts of data, not just on aggregates of people, but on identifiable images and people.  It seems this was left as a “to be determined” item in initial selection and negotiations.  Now that Sidewalk Labs is moving forward to build out the plan, the question of the data has come to the forefront.  And what is occurring isn’t pretty.

The answer that seems to be popular is called a “data trust”, a storage and access entity that protects the data from both the government and the vendor supplying the smart services.  Alphabet’s Sidewalk Labs claims to have produced the strongest possible data protection plan; Toronto and activist groups strongly disagree.  Without seeing the plan, I can’t say, but I can say that I would be concerned about a commercial vendor (especially one connected to Google) having any access to this level of data for any purpose.  It is truly the next level of breaching privacy to obtain deeper commercial data.  And do any of us really think that Google won’t ultimately do so?

Now, I was raised in rural America, and while I am comfortable enough whenever I am in a city, it is not my preferred habitat.  It seems to me that there is a tradeoff between privacy and the ability to use data on individual activities (even aggregated) to make day to day activities more efficient for the city and its occupants.  Despite the abstract advantages in the smart cities approach, I don’t think we have the trust necessary to carry it out.

Deep Fakes and A Brave New World June 30, 2019

Posted by Peter Varhol in Algorithms, Machine Learning, Technology and Culture.

I certainly wasn’t the only one who did a double-take when I read about DeepNude, an AI application that could take a photograph of a woman and remove her clothing, creating a remarkably good facsimile of that woman without clothes.  At the drop of a hat, we are now in a world where someone can take a woman’s photo on the street, run it through facial recognition software to determine her name, age, address, and occupation, then use DeepNude to create realistic naked images that can be posted on the Internet, all without even meeting her.

The creator of this application (apparently from Estonia) took it down after a day, and in a subsequent interview said that he suddenly realized the ability of such a program to do great harm (duh!).  But from his description of its development, it didn’t seem that complex to replicate (I could probably do it, except for obtaining the 10,000 nude photos to train it.  Or one nude photo, for that matter).

One of my favorite thriller writers, James Rollins, recently wrote a novel titled Crucible, which personalizes the race toward creating Artificial General Intelligences, or AGI.  These types of AIs have the ability to learn new skills outside of their original problem domain, much like a human would.  His fictional characters point out that AGIs will eventually train one another (I’m not sure about that assertion), so it was critically important that the first AGIs were “good”, as opposed to “evil”.

The good versus evil aspects invite much more debate, so I’ll leave them to a later post, but I can’t imagine that such an application has any socially redeeming value.  Still, once one person has done it, others will surely copy.

To be clear, DeepNude is an Artificial Specialized Intelligence, not an AGI, and its problem domain is relatively straightforward.  It is not inherently evil by common definition, and is not thinking in any sense of the word.  But when DeepNude appeared the other day, the world changed irrevocably.

You’re Magnetic Tape April 4, 2019

Posted by Peter Varhol in Algorithms, Machine Learning, Technology and Culture.

That line, from the Moody Blues ‘In the Beginning’ album (yes, album, from the early 1970s), makes us out to be less than the sum of our parts, rather than more.  So logically, writer and professional provocateur Felix Salmon asks if we can prove who we say we are.

Today, in an era of high security, that question is more relevant than ever.  I have a current passport, a Real ID driver’s license, a Global Entry ID card, and even my original Social Security card, issued circa 1973 (not at birth, as they are today; I had to drive to obtain it).  Our devices include biometrics like fingerprint and facial recognition, and retina scans aren’t far behind.

On the other hand, I have an acquaintance (well, at least one) that I’ve never met.  I was messaging her the other evening when I noted, “If you are really in Barcelona, it’s 2AM (thank you, Francisco Franco), and you really should be asleep.”  She responded, “Well, I can’t prove that I’m not a bot.”

Her response raises a host of issues.  First, identity is on the cusp of becoming a big business.  If I know for certain who you are, then I can validate you for all sorts of transactions, and charge a small fee for the validation.  If you look at companies like LogMeIn, that may be their end game.

Second, as our connections become increasingly worldwide, do we really know if we are communicating with an actual human being?  With AI bots becoming increasingly sophisticated, they may be able to pass the Turing test.

Last, what will have higher value: our government-issued ID, or a private vendor’s ID?  I recently opined that I prefer the government, because it is far more disorganized than most private companies, but someone responded, “Government can give you an ID one day, and arbitrarily take it away the next.”  I prefer government silos and disorganization, because of security by obscurity, but is that really the best option anymore?

So, what is our ID?  And how can we positively prove we are who we say we are?  More to the point, how can we prove that we exist?  Those questions are starting to intrude on our lives, and may become central to our existence before we realize it.

Will Self-Driving Cars Ever Be Truly So? January 7, 2019

Posted by Peter Varhol in Architectures, Machine Learning, Software platforms, Technology and Culture.

The quick answer is that we will not be in self-driving cars during my lifetime.  Nor your lifetime.  Nor any combination thereof.  Despite pronouncements by so-called pundits, entrepreneurs, reporters, and GM, there is no chance of a car being fully self-driving under all conditions, let alone of everyone being in a self-driving car, with all that that implies.

The fact of the matter is that the Waymo CEO has come out and said that he doesn’t imagine a scenario where self-driving cars will operate under all conditions without occasional human intervention.  Ever.  “Driverless vehicles will always have constraints,” he says.  Most of his competitors now agree.

So what do we have today?  We have some high-profile demonstrations under ideal conditions, some high-profile announcements that say we are all going to be in self-driving cars within a few years.  And one completely preventable death.  That’s about it.  I will guess that we are about 70 percent of the way there, but that last 30 percent is going to be a real slog.

What are the problems?

  1. Mapping.  Today, self-driving cars operate only on routes that have been mapped in detail.  I’ll give you an example.  I was out running in my neighborhood one morning, and was stopped by someone looking for a specific street.  I realized that there was a barricaded fire road from my neighborhood leading to that street.  His GPS showed it as a through street, which was wrong (he preferred to believe his GPS rather than me).  If GPS and mapping cannot get every single street right, self-driving cars won’t work.  Period.
  2. Weather.  Rain or snow interrupts GPS signals, as does certain terrain.  It’s unlikely that we will ever have reliable GPS, Internet, and sensor data under extreme weather conditions, which in most of the country occur several months a year.
  3. Internet.  A highway of self-driving cars must necessarily communicate with each other.  This map (paywall) pretty much explains it all.  There are large swaths of America, especially in rural areas, that lack reliable Internet connection.
  4. AI.  Self-driving cars rely on AI to identify objects in the road.  This technology has the most potential to improve over time.  Except in bad weather.  And on poorly mapped streets.
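The mapping problem in particular is a data problem, not an algorithm problem: a route planner can only search the graph it is given, so if the map encodes a barricaded road as drivable, the planner will confidently produce an undrivable route.  A minimal sketch (the road graph is hypothetical, echoing the fire-road example above):

```python
from collections import deque

def route_exists(graph, start, goal):
    """Breadth-first search over a road graph given as an adjacency dict."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical neighborhood: the map data still shows the barricaded
# fire road between B and C as drivable; on the ground it is closed.
map_data = {"A": ["B"], "B": ["C"], "C": ["D"]}
ground_truth = {"A": ["B"], "B": [], "C": ["D"]}

print(route_exists(map_data, "A", "D"))      # True  (map says drivable)
print(route_exists(ground_truth, "A", "D"))  # False (road is closed)
```

The search itself is trivial; keeping the graph in sync with ground truth for every street in the country is the hard part.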

So right now we have impressive demonstrations that have no basis in reality.  I won’t discount the progress that has been made.  But we should be under no illusions that self-driving cars are right around the corner.

The good news is that we will likely see specific applications in practice much sooner.  Long-haul trucking is one area with great potential in the shorter term.  It will require re-architecting our trucking system to create terminals around the Interstate highway system, but that seems doable, and would be a nice application of this technology.