
Get Thee to a Spaceport! August 13, 2018

Posted by Peter Varhol in Uncategorized.

To many of my generation, the United States has a singular space launch facility, at Cape Canaveral, Florida.  Thanks to my time in the Air Force, I know of at least three others – Wallops Island, Virginia (a NASA complex); Vandenberg AFB, California; and the Kodiak Launch Complex on Kodiak Island, Alaska.  Thanks to my personal interest in space exploration, I know of two more – Spaceport America, in Truth or Consequences, New Mexico, and the Blue Origin launch complex near Van Horn, Texas.

But wait!  There are more.  Elon Musk has his own with SpaceX, of course, on the Texas coast (although SpaceX and Blue Origin use Cape Canaveral for operational launches right now).  Oddly enough, there are also the Oklahoma Spaceport; Cecil Spaceport in Jacksonville, Florida; and the Mojave Air and Space Port in the California desert.  The newest licensed spaceport is at Ellington Field in Houston, although it cannot yet support launches or recoveries.

Complicated?  Yeah.

The dynamics of achieving orbit are complex but, like any physics problem, consistent.  There is a small but distinct advantage in launching close to the Equator (at least for eastward launches), in effect using the Earth’s rotation to help propel a rocket toward orbital velocity.  Probably the best situated is the Guiana Space Center, in French Guiana and within about five degrees of the Equator, used by the European Space Agency for its launches.  Tyura Tam (Baikonur), in Kazakhstan, sits farther north, at about 46 degrees, but is still the southernmost of the major former Soviet sites.  Once a part of the larger Soviet Union, it is now leased by Russia for its launches.
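As a back-of-the-envelope sketch (my own, with approximate site latitudes), the free eastward velocity a launch site inherits from the Earth’s rotation falls off with the cosine of its latitude:

```python
# Sketch: "free" eastward velocity from Earth's rotation at a launch site,
# v = (2 * pi * R / T) * cos(latitude).  Constants are standard values;
# the site latitudes are approximate.
import math

EARTH_RADIUS_M = 6.371e6   # mean Earth radius, meters
SIDEREAL_DAY_S = 86164.0   # one full rotation, seconds

equatorial_speed = 2 * math.pi * EARTH_RADIUS_M / SIDEREAL_DAY_S  # ~465 m/s

for site, latitude_deg in [("Guiana Space Center", 5.2),
                           ("Cape Canaveral", 28.5),
                           ("Baikonur", 45.6)]:
    boost = equatorial_speed * math.cos(math.radians(latitude_deg))
    print(f"{site}: ~{boost:.0f} m/s free eastward velocity")
```

Guiana gets roughly 460 m/s for free, and Baikonur closer to 325 m/s.  That is small against the roughly 7,800 m/s needed for low Earth orbit, but it translates directly into payload.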

Here in the US, the Kennedy Space Center is used for all manned launches (regrettably none over the last several years).  It launches to the east, over the Atlantic Ocean, in order to minimize the chance of failures over populated areas.  Vandenberg and Kodiak both launch into polar orbits, once again over the ocean.

There have been many other sites around the world that have been used for space launches.  China, Japan, and India have all launched unmanned satellites into orbit, and many other countries have designated spaceports.  Certainly over one hundred sites worldwide have either launched vehicles or are capable of doing so.

That raises the question: why so many?  The manned space program has certainly garnered the lion’s share of popular attention, but hundreds of satellites are launched into orbit every year.  While many of these are launched from Cape Canaveral or Vandenberg, the volume is simply too great for those two sites alone.  Navigation, geophysical and environmental monitoring (including farming), Internet service, and of course military applications are just a few of the uses for satellites today.

In an era where the US has largely depended upon commercial firms to deliver satellites and other payloads, the proliferation of US spaceports both lowers costs and gets satellites in orbit faster.  It also helps develop an industrial base in nontraditional parts of the country.

The majority of US spaceports today are spaceports in name only; few if any launches occur outside of Cape Canaveral/Kennedy, Wallops Island, and Vandenberg.  But as the need for orbital launch capabilities heats up, some of the others are positioned to get in on the ground floor.


My Boss is a Computer August 11, 2018

Posted by Peter Varhol in Machine Learning, Technology and Culture.

Well, not really, but if you can be fired by a computer, it must be your boss.  Not my story, but one that foretells the future nonetheless.  An apparently uncorrectable software defect led to a contract employee being locked out of his computer and his building, and labeled inactive in the payroll system.

It was almost comical that his manager and other senior managers and executives at the company, none of whom had fired him, could not get this fiat reversed.  A full three weeks passed, in which he received no pay and no explanation, before they were able to determine that his employment status had never been updated in their new HR management software.  Even after he was reinstated, his colleagues treated him as someone not entitled to work there, and he eventually left.

It seems that intelligent (or otherwise) software is encroaching on the ultimate and unabashedly people-oriented field – human resources.  And there’s not a darned thing we can do about it.  Software is not only conducting full interviews, but also managing the entire hiring process.  While we might hope that we aren’t actually selected (or rejected) by computer algorithms, that is precisely the goal of these software systems.

So here’s the problem.  Or several problems.  First, software isn’t perfect, and while most bugs in released software are no more than annoying, bugs in this kind of software can have drastic consequences for people.  Those consequences will likely spill over to the hiring company itself.

Second, these applications are usually machine learning systems that have had their algorithms trained through the application of large amounts of data.  The most immediate problem is that the use of biased data will simply perpetuate existing practices.  That’s a problem because everything about the interview and selection process is subjective and highly prone to bias.

Last, if the software doesn’t allow for human oversight and the ability to override, then in effect a company has ceded its hiring decisions to software that it most likely doesn’t understand.  That’s a recipe for disaster, as management has lost control over the reasons why management exists in the first place.

Now, there may be some who will say that’s actually a good thing.  Human management is, well, human, with human failings, and sometimes those failings manifest themselves in negative ways.  Bosses are dictatorial, or racist, or exhibit some combination of negative qualities, and are often capricious in dealing with others.  Computer software is at least consistent, if not necessarily fair as we might define it.

But no matter how poor the decisions that come from human managers, we own them.  If the decisions come from software, no one owns them.  When we are locked in to following the dictates of software, without any understanding of who programmed it to do what, then we give up on our fellow citizens and colleagues.  Worse, we give up the control that we are paid to maintain.

If we are not to face a dystopian future where computer software rules our working lives, and we are powerless to act as the humans we are, then we must control the software that is presumably helping us.

Can Amazon Replace Libraries? July 23, 2018

Posted by Peter Varhol in Education, Technology and Culture.

I was born and raised in Aliquippa, Pennsylvania.  It was a company town.  In 1905, the Jones and Laughlin Steel Corporation bought a tract of several thousand acres along the steep hills of the Ohio River, laid out some streets, built some houses and stores, and constructed a steel mill stretching six miles along the river.

The neighborhoods were called plans, because they were individual neighborhood plans conceived and built by the company.  My older sister grew up in the projects of Plan 11.  Pro Football Hall of Fame running back Tony Dorsett, two years my elder, grew up just a couple of blocks away.  We shopped in the company store, the largest building in town, until I was 13.  (Bear with me, please.)

B.F. Jones, in the style of the robber barons of an earlier era, built a grand library in his name right along Franklin Avenue, the main street: all marble and columns, the B.F. (for Benjamin Franklin) Jones Memorial Library.

It was a massive marble structure that frightened off most youngsters.  A homeless guy slept at a table in one corner.  In that library, I read Don Quixote, The Far Pavilions, just about everything by James Michener and Irving Stone, and much more.  It was a dismal company town, but through the library I escaped far beyond the boundaries of the drab community.

Recently, a since-yanked Forbes magazine op-ed by LIU Post economist Panos Mourdoukoutas opined that libraries are obsolete, and that they should be replaced by for-profit brick-and-mortar Amazon stores selling physical books.  Libraries are no longer relevant, Mourdoukoutas and Forbes claim, and Amazon can serve the need in a for-profit way that benefits everyone.  Libraries, in this view, are a waste of taxpayer funds.

Funny; today, 40 years later, my adopted town’s library is the hangout of middle and high school students.  Rather than the quiet place of reflection (and possibly stagnation) of the past, it is a vibrant, joyful place where parents are happy to see their children study together and socialize.  There are movies, crafts, classes, lectures, and games.  In an era where youngsters can escape to their phones, the Internet, video games, drugs, or worse, escaping to the library is a worthy goal.

There is one Starbucks in town, where Mourdoukoutas tells us anyone can get wifi, and most people use the drive-through.  I doubt it would let the throngs of youngsters cavort for the evening the way the library does.

Today I travel extensively.  I am enthralled by the amazing architecture of European cities, built when society was much poorer.  Yet today we cannot afford libraries?

I am sorry, I call bullshit.  Long and loud.  This type of trash deserves no serious discussion; in fact, no discussion whatsoever.  If we cannot afford libraries, we cannot afford imagination, we cannot afford, well, life.

To reinforce the point, please invest a few minutes to listen to Jimmy Buffett’s “Love in the Library.”  Thank you.

Empathetic Technology is an Idea Whose Time Should Never Come June 20, 2018

Posted by Peter Varhol in Technology and Culture.

I love TED talks.  They are generally well thought out and well-presented, and offer some significant insights on things that may not have occurred to me before.

I really, really wanted to give a thumbs-up to Poppy Crum’s talk on empathetic technology, but it contradicted some of my fundamental beliefs on human behavior and growth.  She talks about how measuring and understanding the physical attributes of emotion will help draw us together, so that we don’t have to feel so alone and misunderstood.

Well, I suppose that’s one way to look at it.  I rather look at it as wearing a permanent lie detector.  Now, that may not be a bad thing, unless you are playing poker or negotiating a deal.  But exposing our innermost emotions to others is rightly a gradual thing, and should be under our control, rather than immediately available through technology.

Also, the example she demonstrated required data from the entire audience, rather than from a single individual.  It was also highly contrived – it involved measuring changes in the CO2 exhaled by the audience in reaction to something unexpected – and it’s not at all clear that it would work in practice.

But in general, her thesis violates my thoughts on emotional friction.  Other people don’t understand us.  Other people do things that make us feel uncomfortable.  Guess what?  Adapting to that is how we grow as human beings.  And growth is what makes us human.  Now, granted, in the few cases where attempts at emotional growth result in psychopathologies, there could be value here.  But . . .

I recall the Isaac Asimov novel The Naked Sun, in which humans who interact physically with others are considered pathological.  So we become content to view each other electronically, rather than interact physically.  I see a significant loss of humanity there.

And despite how Poppy Crum paints it, I see a significant loss of humanity with her plan, too.  She is correct in that empathetic technology can help identify those whose psyches may break under the strain of adapting to friction, but I think the loss of our humanity in general overwhelms this single good.

Here’s Looking At You June 18, 2018

Posted by Peter Varhol in Algorithms, Machine Learning, Software tools, Technology and Culture.

I studied a rudimentary form of image recognition when I was a grad student.  While I could (sometimes) identify simple images based on obviously distinguishing characteristics, between the limitations of rule-based systems and the modest computing power of Lisp Machines and early Macs, facial recognition was well beyond the capabilities of the day.

Today, facial recognition has benefitted greatly from better algorithms and faster processing, and is available commercially from several different companies.  There is some question as to its reliability, but at this point it’s probably better than any manual approach to comparing photos.  And that seems to be a problem for some.

Recently the ACLU and nearly 70 other groups sent a letter to Amazon CEO Jeff Bezos, alongside one from 20 shareholder groups, arguing that Amazon should not provide surveillance systems such as facial recognition technology to the government.  Amazon has a facial recognition system called Rekognition (why would you use a spelling that is more reminiscent of evil times in our history?).

Once again, despite the Hitleresque product name, I don’t get the outrage.  We give the likes of Facebook our life history in detail, in pictures and video, and let them sell it on the open market, but the police can’t automate the search of photos?  That makes no sense.  Facebook continues to get our explicit approval for the crass but grossly profitable commercialization of our most intimate details, while our government cannot use commercial and legal software tools?

Make no mistake; I am troubled by our surveillance state, probably more than most people, but we cannot deny tools to our government that the Bad Guys can buy and use legally.  We may not like the result, but we seem happy to go along like sheep when it’s Facebook as the shepherd.

I tried for the life of me to curse our government for its intrusion into our lives, but we don’t seem to mind it when it’s Facebook, so I just can’t get excited about the whole thing.  I cannot imagine Zuckerberg running for President.  Why would he give up the most powerful position in the world to face the checks and balances of our government?

I am far more concerned about individuals using commercial facial recognition technology to identify and harass total strangers.  Imagine an attractive young lady (I am a heterosexual male, but it’s also applicable to other combinations) walking down the street.  I take her photo with my phone, and within seconds have her name, address, and life history (quite possibly from her Facebook account).  Were I that type of person (I hope I’m not), I could use that information to make her life difficult.  While I don’t think I would, there are people who would think nothing of doing so.

So my take is that if you don’t want the government to use commercial facial recognition software, demonstrate your honesty and integrity by getting the heck off of Facebook first.

Update:  Apple will automatically share your location when you call 911.  I think I’m okay with this, too.  When you call 911 for an emergency, presumably you want to be found.

Too Many Cameras June 15, 2018

Posted by Peter Varhol in Software platforms, Strategy, Technology and Culture.

The title above is a play off of the “Too Many Secrets” revelation in the 1992 movie Sneakers, in which Robert Redford’s character, who has a secret or two himself, finds himself in possession of the ultimate decryption device, and everyone wants it.

Today we have too many cameras around us.  This was brought home to me rather starkly when I received an email that said:

I’ve been recording you with your computer camera and caught you <censored>.  Shame on you.  If you don’t want me to send that video to your family and employer, pay me $1000.

I paused.  Did I really do <censored> in front of my computer camera?  I didn’t think so, but I do spend a lot of time in front of the screen.  In any case, <censored> didn’t quite rise to the level of blackmail concern, in my opinion, so I ignored it.

But is this scenario so completely far-fetched?  This article lists all of the cameras that Amazon can conceivably put in your home today, and in the near future, that list will certainly grow.  Other services, such as your PC vendor and security system provider, will add even more movie-ready devices.

In some ways, the explosion of cameras looking at our actions is good.  Cameras can nudge us to drive more safely, and to identify and find thieves and other bad guys.  They can help find lost or kidnapped children.

But even outside our home, they are a little creepy.  You don’t want to stop in the middle of the sidewalk and think, I’m being watched right now.  The vast majority of people simply don’t have any reason to be observed, and thinking about it can be disconcerting.

Inside our homes, I simply don’t think we want them, phone and PC cameras included.  I do believe that people realize what is happening, but in the short term they think the coolness of the Amazon products and the lack of friction in ordering from Amazon supersede any thoughts about privacy.  They would rather have computers at their beck and call than think about the implications.

We need to do better than that if we want to live in an automated world.

Cognitive Bias in Machine Learning June 8, 2018

Posted by Peter Varhol in Algorithms, Machine Learning.

I’ve danced around this topic over the last eight months or so, and now think I’ve learned enough to say something definitive.

So here is the problem.  Neural networks are sets of layered algorithms.  A network might have three layers, or it might have over a hundred.  These algorithms, which can be as simple as polynomials or as complex as partial derivatives, process incoming data and pass the result up to the next level for further processing.
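A minimal sketch of that layering (illustrative only; real networks learn their weights rather than using the random placeholders here):

```python
# Sketch: three layers, each transforming its input and passing the result
# up to the next level.  The weights are random placeholders, not learned.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 8)),   # layer 1: 4 inputs -> 8 outputs
          rng.normal(size=(8, 8)),   # layer 2
          rng.normal(size=(8, 1))]   # layer 3: a single final output

def forward(x):
    for weights in layers:
        x = np.tanh(x @ weights)     # simple per-layer transform, passed upward
    return x

print(forward(rng.normal(size=(1, 4))))  # one input flowing through all three layers
```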

Where do these layers of algorithms come from?  Well, that’s a much longer story.  For the time being, let’s just say they are the secret sauce of the data scientists.

The entire goal is to produce an output that accurately models the real-life outcome.  So we run our independent variables through the layers of algorithms and compare the output to reality.

There is a problem with this.  Given a complex enough neural network, it is entirely possible to train it on almost any data set and get an acceptable output, even if that data is unrelated to the problem domain.
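That claim is easy to demonstrate in miniature.  In this toy example of mine, the targets are pure noise, completely unrelated to the inputs, yet a flexible enough network still fits them on the training data:

```python
# Sketch: fitting targets that have no relationship to the inputs.  A big
# enough network can still memorize them; the training score is meaningless.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # inputs with no structure
y = rng.normal(size=200)        # targets unrelated to X

model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=5000)
model.fit(X, y)
print(model.score(X, y))        # training R^2 can approach 1.0 anyway
```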

And that’s the problem.  If any random data set will work for training, then choosing a truly representative data set can be a real challenge.  Of course, we would never use a random data set for training; we would use something that was related to the problem domain.  And here is where the potential for bias creeps in.

Bias is disproportionate weight in favor of or against one thing, person, or group compared with another.  It’s when we make one choice over another for emotional rather than logical reasons.  Of course, computers can’t show emotion, but they can reflect the biases of their data, and the biases of their designers.  So we have data scientists either working with data sets that don’t completely represent the problem domain, or making incorrect assumptions about the relationships between data and results.

In fact, depending on the data, the bias can be drastic.  MIT researchers have recently demonstrated Norman, the psychopathic AI.  Norman was trained with written captions describing graphic images about death from the darkest corners of Reddit.  Norman sees only violent imagery in Rorschach inkblot cards.  And of course there was Tay, the artificial intelligence chatter bot that was originally released by Microsoft Corporation on Twitter.  After less than a day, Twitter users discovered that Tay could be trained with tweets, and trained it to be obnoxious and racist.

So the data we use to train our neural networks can make a big difference in the results.  We might pick out terrorists based on their appearance or religious affiliation, rather than any behavior or criminal record.  Or we might deny loans to people based on where they live, rather than their ability to pay.

On the one hand, biases may make machine learning systems seem more, well, human.  On the other, we want outcomes from our machine learning systems that accurately reflect the problem domain, not ones that are biased.  We don’t want our computers to inherit our human biases.

Can Machines Learn Cause and Effect? June 6, 2018

Posted by Peter Varhol in Algorithms, Machine Learning.

Judea Pearl is one of the giants of what started as an offshoot of classical statistics, but has evolved into the machine learning area of study.  His actual contributions deal with Bayesian statistics, along with prior and conditional probabilities.

If it sounds like a mouthful, it is.  Bayes’ Theorem and its accompanying statistical models are at the same time surprisingly intuitive and mind-blowingly obtuse (at least to me, of course).  Bayes’ Theorem describes the probability of a particular outcome, based on prior knowledge of conditions that might be related to the outcome.  Further, we update that probability when we have new information, so it is dynamic.
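As a toy numeric illustration (the defect rate and test accuracies below are entirely invented), here is that dynamic updating in action, applying the theorem once per new piece of evidence:

```python
# Sketch of Bayes' Theorem as a dynamic update: P(H|E) = P(E|H) * P(H) / P(E).
# The defect rate and test accuracies are invented purely for illustration.
def posterior(prior, true_positive, false_positive):
    evidence = true_positive * prior + false_positive * (1 - prior)
    return true_positive * prior / evidence

belief = 0.01                           # prior: 1% of parts are defective
belief = posterior(belief, 0.95, 0.10)  # one positive test: belief ~0.09
belief = posterior(belief, 0.95, 0.10)  # a second positive: belief ~0.48
print(belief)                           # the probability updates with each observation
```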

So when Judea Pearl talks, I listen carefully.  In this interview, he points out that machine learning and AI as practiced today are limited by the techniques we are using.  In particular, he claims that neural networks simply “do curve fitting,” rather than understand relationships.  His goal is for machines to discern cause and effect between variables; that is, “A causes B to happen, and B causes C to happen, but C does not cause A or B.”  He thinks that Bayesian inference is ultimately a way to do this.

It’s a provocative statement to say that we can teach machines about cause and effect.  Cause and effect is a very situational concept.  Even most humans stumble over it.  For example, does more education cause people to have a higher income?  Well, maybe.  Or it may be that more intelligence causes a higher income, but more intelligent people also tend to have more education.  I’m simply not sure how we would go about training a machine, using only quantitative data, to recognize cause and effect.
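A toy simulation (mine, with invented numbers) makes the trap concrete: give intelligence a direct effect on both education and income, give education no direct effect on income at all, and the two still correlate strongly:

```python
# Sketch: a confounder (intelligence) drives both education and income, so the
# two correlate even though education has zero causal effect on income here.
import numpy as np

rng = np.random.default_rng(0)
intelligence = rng.normal(size=50_000)
education = intelligence + rng.normal(size=50_000)  # caused by intelligence
income = intelligence + rng.normal(size=50_000)     # also caused by intelligence

print(np.corrcoef(education, income)[0, 1])  # ~0.5, with no causal link at all
```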

As for neural networks being mere curve-fitting, well, okay, in a way.  He is correct to point out that what we are doing with these algorithms is not finding Truth, or cause and effect, but rather looking for the best way of expressing a relationship between our data and the outcome produced (or, in the case of supervised learning, desired).

All that says is that there is a relationship between the data and the outcome.  Is it causal?  It’s entirely possible that not even a human knows.

And it’s not at all clear to me that this is what Bayesian inference is saying.  In fact, I don’t see anything in any statistical technique that allows us to assume cause and effect.  Right now, the closest we come in simple correlation is R-squared, which tells us how much of the variance in the outcome is “explained” by the model.  But “explained” doesn’t mean what you think it means.
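For the record, R-squared is just one minus the ratio of residual variance to total variance.  A quick sketch with invented data shows how high it can be while saying nothing about cause:

```python
# Sketch: computing R^2 for a simple linear fit.  A high value means the line
# fits the data well; it implies nothing about what causes what.
import numpy as np

x = np.arange(20, dtype=float)
y = 3.0 * x + np.random.default_rng(1).normal(scale=2.0, size=20)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
r_squared = 1.0 - residuals.var() / y.var()
print(r_squared)  # close to 1.0: most of the variance is "explained" by the fit
```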

As for teaching machines cause and effect, I don’t discount it eventually.  Human intelligence and free will are an existence proof; we exhibit those characteristics, at least some of the time, so it is not unreasonable to think that machines might someday also do so.  That said, it certainly won’t happen in my lifetime.

And about data.  We fool ourselves here too.  More on this in the next post.