
The Internet and Health Care Live Together Uncomfortably February 18, 2019

Posted by Peter Varhol in Technology and Culture.

Several years ago, I had a very serious health scare.  Thanks in large part to my skepticism of the diagnosis and my desire to be my own health care project manager, and based on my own research using Dr. Google, I recovered completely, without the debilitating surgery recommended by several doctors.  I liked and trusted these doctors, even as I sought alternatives.

In short, it was a win for me personally, and for the notion that people can comprehend and act rationally upon their own health care information.

If only it were that simple.  Steve Jobs, for example, may have had a better chance at surviving pancreatic cancer if he had not insisted that he could be cured through his diet.

But today I speak of vaccinations for childhood diseases, such as measles, mumps, and chicken pox.  When I was in elementary school, everyone was vaccinated, and I can’t recall anyone turning it down.  Sometime in the 1970s, these childhood diseases were declared almost entirely eradicated in the US.

That is changing, and not for the better.

There has always been a small but vocal group of people who were opposed to vaccination.  Often the opposition was based on misinformation or a distrust of government intentions.  Recall the fluoridation controversy in the US in the 1950s and 1960s (I grew up in a rural area and drank untreated well water).

Today, measles, mumps, and chicken pox rates are rising as parents decline to have their children vaccinated.  In many cases, that decision is still based at least in part on distrust of government, but perhaps the biggest driver was a 1998 study, published in the British medical journal The Lancet, that tied the MMR vaccine to an increased occurrence of autism in children.

That study was immediately criticized on methodological grounds, and it was eventually retracted from publication.  But not before almost every parent in the US had heard about it and believed it to be ground truth.

And worse, the study, and other unscholarly opinions, live on forever through the Internet.  Everyone hears tales of other studies that support the original one and make even scarier claims.  For parents, autism is scarier than measles, and they do what they think is essential to protect their children.  So we have fewer children vaccinated, and the incidence of these diseases is growing again.

But the reaction by parents is based on hearsay, bad science, and a misinterpretation of good science.  It doesn’t help that researchers can’t absolutely say there isn’t a connection, not because there might be one, but because in logic, you cannot prove a negative.  That is, no number of studies, no matter how unequivocal, can logically prove that a relationship doesn’t exist.  All they can do is support the conclusion that there isn’t one.

Of course, the Internet is essential to spreading these tales.  I did a quick Google search on vaccination, and found several sites on the first page of results that were clearly portals and gathering points for anti-vaccination groups.

I wrote in these pages a few weeks ago that every adult should have a fundamental understanding of the scientific method, its advantages, and its limitations.  Perhaps fifteen years ago, then-Time columnist Barbara Kiviat suggested that people meet some minimum qualification, and hold a license, to use the Internet.  That wasn’t workable then, and certainly isn’t now.  But if it were, I would suggest training in the scientific method as the path to that license.


Genetic Editing: Next Decade, or Next Year, or Next Week? February 2, 2019

Posted by Peter Varhol in Technology and Culture.

Several years ago, I read Michael Crichton’s last novel published before his death.  It’s called Next, and while it’s a rather disjointed set of storylines, the one that resonated with me was the one in which Frank Burnet shows a remarkable resistance to leukemia, and as a result his cells are sold (without his knowledge or permission) to a commercial biotech company, BioGen.

The initial cells are lost, but BioGen consults lawyers, who advise that under United States law the company has the rights to all of Frank’s cell line, and thus the right to extract replacement cells, by force if necessary, from Frank or any of his descendants.  He and his family flee an onslaught of BioGen agents who claim the legal right to kidnap them and harvest cells.  BioGen’s lawyers apply for a warrant to arrest Frank’s daughter, on the grounds that she has stolen the company’s property, namely her own and her son’s cells.

The conclusion of this novel is the judge’s decision on the validity of the ownership claim, and it goes as we as human beings would hope.  Specifically, the judge rules in Frank’s daughter’s favor and rejects the cited precedents as attempts to abolish normal human rights by decree, in violation of the Thirteenth Amendment to the United States Constitution, which forbids slavery.

However, this is fiction, and fact is turning out to be a lot messier.

I’m not a geneticist, and I’m certainly not a lawyer, but as I understand it, cell and DNA ownership are still very much an open legal question.  If a biotech company sees a path to a genetic cure for a serious disease in a particular DNA or genetic sequence, I believe it will vie for legal ownership, in the courts, and spend a great deal of money and effort to achieve that ownership.

And let’s add CRISPR (clustered regularly interspaced short palindromic repeats) technologies into the mix, which provide the ability to edit individual DNA sequences in an embryo, perhaps to remove genetic diseases.

Novelist Daniel Suarez, in Change Agent, postulates CRISPR not only as a means of editing out genetic defects, but also of incorporating genetic enhancements, such as strength, speed, brains, or athleticism.  In fact, he goes still further, postulating that genetic editing can also be done on live subjects, to turn them into a completely different person.

My point is that the boundary between fiction and science is here, and we as a society have some big decisions to make.  This article postulates that CRISPR editing will become morally mandatory, and I am hard-pressed to disagree.

At the same time, we must decide who owns the genes, the person or the company doing the editing.  We may find that we are not the masters of our bodies.

As a youth 40+ years ago, I read Huxley, Orwell, Bradbury, and a host of other dystopian novelists, and was somewhat comforted by the gap between the fiction and the reality of the day.  Today, there seems to be no such gap, and that makes reading both fact and fiction a lot more difficult.  This world is almost upon us, if it isn’t already.  Are we prepared to make life’s choices in this world?

The Evolution of Finding Aircraft January 30, 2019

Posted by Peter Varhol in Technology and Culture, Uncategorized.

In 1937, Amelia Earhart and navigator Fred Noonan disappeared over the central Pacific Ocean near their Howland Island destination in their Lockheed Model 10-E Electra.  The only navigation aids available at that time were the compass and dead reckoning.  There are indications that Earhart crash-landed on or near Gardner Island, well to the south of Howland, but the aircraft has never been found.

In 1996, a Learjet 35A disappeared near Dorchester, New Hampshire, in the United States, while attempting to land at the Lebanon, NH airport.  There was radar contact with the plane, and the plane itself had navigation equipment that enabled it to use VOR for landing.  I selected this example because despite the fact that the crash happened during the day, in the relatively populated northeastern United States, it took three years to find the crash site.

And, of course, we all know about Malaysia Airlines Flight 370, which, flying from Kuala Lumpur to Beijing, somehow seems to have ended up crashing in the southern Indian Ocean, several thousand miles in the opposite direction.  The main debris field has never been found, but some positively identified debris has washed up on the shores of Réunion, Madagascar, and southern Africa.

People find it amazing that we can’t find lost aircraft under these circumstances, and we create conspiracy theories about the loss, but just about all of the technology deployed to date presumes that an aircraft wants to be found, or defaults to being found.  When you squawk your assigned four-digit code on your transponder, you are positively identified.  If you turn off your transponder, you are just another blip on the radar screen.
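
To illustrate how thin that identification really is, here is a minimal sketch of my own (not drawn from any real avionics code): a Mode A squawk code is just four octal digits, so there are only 4,096 possible codes, a few of which are reserved for special situations.

```python
# Minimal illustrative sketch (not real avionics code): a Mode A
# transponder "squawk" code is four octal digits, so there are only
# 8^4 = 4096 possible codes, a handful of which are reserved.
RESERVED = {
    "1200": "VFR flight (US)",
    "7500": "unlawful interference (hijacking)",
    "7600": "radio failure",
    "7700": "general emergency",
}

def is_valid_squawk(code: str) -> bool:
    """Valid Mode A codes are exactly four digits, each in the range 0-7."""
    return len(code) == 4 and all(c in "01234567" for c in code)

def describe(code: str) -> str:
    if not is_valid_squawk(code):
        return f"{code}: not a valid Mode A code"
    return f"{code}: {RESERVED.get(code, 'discrete code assigned by ATC')}"

for c in ("1200", "7700", "4321", "8000"):
    print(describe(c))
```

The point of the sketch is the size of the space: 4,096 codes identify an aircraft only for as long as the crew leaves the transponder on and squawking what ATC assigned.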

And, of course, radar doesn’t cover large stretches of ocean; it’s a line-of-sight technology.  We’ve never conceived of the need for positive control over all aspects of flight, because we thought that airliners would have the opportunity to communicate, even in distress.

The answer seems to be satellites, specifically designed to track aircraft around the globe.  In today’s world, we need to know where every aircraft is, and what that aircraft is doing.  Better satellite technology will hopefully get us there.

The Scientific Method Needs to Be Fundamental Education for Everyone January 15, 2019

Posted by Peter Varhol in Education, Technology and Culture.

We have a problem today.  Actually, we have many problems, but most of them boil down to the fact that we lack disciplined thinking.  As a result, we feel justified in believing any damned thing we like, whether or not it makes logical or evidentiary sense.  A common grounding in the scientific method can address that.

I’ll give an example.  I recently advised a PhD candidate on the use of statistics for his dissertation research.  He was planning on doing about 90 t-tests, plus a collection of ANOVAs.  I warned him that his results were likely to have at least a couple of Type I errors.  He replied, “What is that?”
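
For the record, a Type I error is a false positive: the test reports an effect where none exists.  A back-of-the-envelope sketch of my own (assuming the conventional alpha of 0.05 and, unrealistically, independent tests) shows why roughly 90 tests all but guarantee several:

```python
# Back-of-the-envelope sketch: the family-wise Type I error rate for
# many significance tests, assuming each test uses alpha = 0.05 and the
# tests are independent (real dissertation data rarely are, but the
# qualitative point survives).
alpha = 0.05
n_tests = 90

# P(at least one false positive) = 1 - P(no false positives anywhere)
fwer = 1 - (1 - alpha) ** n_tests
print(f"Chance of at least one Type I error in {n_tests} tests: {fwer:.3f}")
# -> about 0.990: a false positive is a near certainty

# Expected number of false positives if every null hypothesis is true
print(f"Expected false positives: {alpha * n_tests:.1f}")  # 4.5

# The crude Bonferroni fix: shrink the per-test significance threshold
print(f"Bonferroni-corrected per-test alpha: {alpha / n_tests:.5f}")
```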

Where is Martin Gardner when you need him?  (Yes, I know he passed away in 2010).  We lack the understanding of basic analytical statistics and how they influence our beliefs.  This is not rocket surgery, folks.  Anyone, and I mean anyone, who is doing primary research for a doctoral degree should understand the implications of their experimental design.

But we can extend belief well beyond that intellectual exercise.  A very large part of the reason many people feel free to believe things that are quite frankly difficult to believe is that belief is often a subjective thing, rather than based on any sort of scientific discipline.

You may argue that what any person believes is legitimate to that person.  Um, no.  Without a methodology of belief, that represents a lie and a cop-out by that person.  “I believe because I feel like it?”  That doesn’t cut the mustard in serious discussion.

So my point here is that everyone’s belief system has to begin with a disciplined foundation.  We believe something to be true because we have objective evidence, and that evidence allows us to formulate a hypothesis that is testable.  The test may be explicit, or the hypothesis may be supported or rejected based on additional evidence.  But we cannot believe something because we feel like it.  Life doesn’t work that way.

Few of us think this way in determining our beliefs, and that is unfortunate.

You might also argue that this is an amusing stance for me to be taking.  Decades ago, I learned, and internalized, the scientific method as an undergrad psychology student, which some may consider an odd field in which to learn that discipline.  But as a social science, psychology is probably the best discipline for employing the scientific method.  It meant a lot to me to begin my adult life with a foundation in the scientific method.  Others can benefit too.

Will Self-Driving Cars Ever Be Truly So? January 7, 2019

Posted by Peter Varhol in Architectures, Machine Learning, Software platforms, Technology and Culture.

The quick answer is that we will not be in self-driving cars during my lifetime.  Nor your lifetime.  Nor any combination of the two.  Despite pronouncements by so-called pundits, entrepreneurs, reporters, and GM, there is no chance of a self-driving car being truly so under all conditions, let alone of everyone being in a self-driving car, with all that that implies.

The fact of the matter is that the Waymo CEO has come out and said that he does not foresee a scenario in which self-driving cars will operate under all conditions without occasional human intervention.  Ever.  “Driverless vehicles will always have constraints,” he says.  Most of his competitors now agree.

So what do we have today?  We have some high-profile demonstrations under ideal conditions, and some high-profile announcements that say we are all going to be in self-driving cars within a few years.  And one completely preventable death.  That’s about it.  I will guess that we are about 70 percent of the way there, but that last 30 percent is going to be a real slog.

What are the problems?

  1. Mapping.  Today, self-driving cars operate only on routes that have been mapped in detail.  I’ll give you an example.  I was out running in my neighborhood one morning, and was stopped by someone looking for a specific street.  I realized that there was a barricaded fire road leading from my neighborhood to that street.  His GPS showed it as a through street, which was wrong (he preferred to believe his GPS rather than me).  If GPS and mapping cannot get every single street right, self-driving cars won’t work.  Period.  (A toy illustration of this failure mode follows this list.)
  2. Weather.  Rain or snow interrupts GPS signals, as does certain terrain.  It’s unlikely that we will ever have reliable GPS, Internet, and sensor data under extreme weather conditions, which in most of the country occur several months a year.
  3. Internet.  The self-driving cars on a highway must necessarily communicate with one another.  This map (paywall) pretty much explains it all.  There are large swaths of America, especially in rural areas, that lack a reliable Internet connection.
  4. AI.  Self-driving cars look to AI to identify objects in the road.  This technology has the most potential to improve over time.  Except in bad weather.  And on poorly mapped streets.
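
Here is the toy illustration promised in item 1, a sketch of my own (the street names are hypothetical): the same shortest-path routine, run over map data that wrongly records a barricaded fire road as drivable, confidently returns an impassable route.

```python
import heapq

# Toy sketch of the mapping problem: a router is only as good as its map.
# All street names below are hypothetical.

def shortest_path(graph, start, goal):
    """Plain Dijkstra over a dict-of-dicts graph; returns (cost, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), None

# What the map vendor believes: the fire road connects the neighborhoods.
map_data = {
    "Neighborhood": {"FireRoad": 1, "MainSt": 4},
    "FireRoad": {"ElmSt": 1},
    "MainSt": {"ElmSt": 3},
}

# Ground truth: the fire road is barricaded and cannot be driven.
reality = {
    "Neighborhood": {"MainSt": 4},
    "MainSt": {"ElmSt": 3},
}

print(shortest_path(map_data, "Neighborhood", "ElmSt"))  # (2, via FireRoad): wrong
print(shortest_path(reality, "Neighborhood", "ElmSt"))   # (7, via MainSt): right
```

Nothing in the router can detect that the cheaper route is fictitious; the error lives entirely in the data.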

So right now we have impressive demonstrations that bear little resemblance to real-world driving conditions.  I won’t discount the progress that has been made.  But we should be under no illusions that self-driving cars are right around the corner.

The good news is that we will likely see specific applications in practice in a shorter period of time.  Long-haul trucking is one area that has great potential for the shorter term.  It will involve re-architecting our trucking system to create terminals around the Interstate highway system, but that seems doable, and it would be a nice application of this technology.

Rejected! December 14, 2018

Posted by Peter Varhol in Technology and Culture.

I understand it is the college acceptance season for the high school class of 2018.  I confess I didn’t realize it, as I have no children, and my grandnephews still have a few more years in the soup that is middle/high school before they get to this point.

It was different, circa 1974-75.  Yes, Hopewell High School, 1975 (don’t laugh; my sister, six years my elder, still has an enormous amount of professional and personal energy).  Hopewell was an interesting microcosm: a decidedly blue-collar environment, where most went directly into indentured servitude in the steel mill, yet some of us endeavored beyond that.

First, my sister, Hopewell class of 1969.  It was difficult to be a woman then (to be fair, not that it’s much better today).  You married a steelworker at 18, kept a household, and raised 3 or 4 kids.  My sister was the first of our extended family to go to college – California State Teachers’ College (now California University of Pennsylvania).  We were separated in years, so there was much I didn’t understand at the time.  Our mother told her that she had to be a teacher or a nurse; she graduated in 2.5 (Karen, correct me if I am wrong) years with a degree in French Education.

It didn’t work out; she reverted to blue-collar work, but in her mid-life crisis she found her way to what I think has been a successful professional career in health care.

Now, me (be patient, please).  I applied to two colleges, based on I’m not really sure what criteria.  We didn’t have that sort of world view.  I can tell you that the $25-$50 application fee for yet another school might have meant the difference between our family eating that week (up yours, James Farley), so my choices were necessarily limited.

My schools were Allegheny College and Mansfield State.  Mansfield was on the list because I had met the admissions counselor as a high school junior, and he remembered me a year later.  As a teen with no discernible skills or proclivities, I gravitated toward the Air Force ROTC program at Allegheny, where, should I successfully graduate, I would at least have a job waiting for me.

It was slightly more convoluted than that, but I graduated in four years with a degree in psychology, from Grove City College, with a late but strong proclivity toward the life sciences.  I have three master’s degrees, some work toward a PhD, and a reasonably successful professional life.

So where is all of this going?  Apparently there are teens who have their hearts set on, well, Harvard, Dartmouth, Carnegie Mellon, Stanford, you name it.  They have visited these campuses and can feel themselves easing into an academic lifestyle in those locations.

Quoting the Bill Murray movie Meatballs, it just doesn’t matter.  Well, in edge cases it probably does.  If you want a career in national government, particularly the State Department, you must be Ivy League.  Get an East Coast law degree if you want the FBI.  Stanford or CMU in CS if you want Google.

But for the vast majority of us, it really doesn’t matter.  Get your degree.  The major doesn’t really matter.  You may do two years at Penn State Beaver (yes, that is a place) as a commuter, before moving to the main campus, or you may do four years in exile at North Adams State.  You’ll do fine.  It may not be your ideal way of starting out your journey, but it may be the best.

How Do You Pay Someone When Money Isn’t the Right Standard? December 10, 2018

Posted by Peter Varhol in Technology and Culture.

Trick question, you may respond.  Money, and especially more money, is always an appropriate reward.  Well, for some people (I am looking at you, Zuckerberg), that may be true.  And every year Forbes magazine lists the top 400 richest people.  Who wouldn’t want to be on a worldwide top 400 list of just about anything?

It has been reported that Tom Brady, quarterback of the New England Patriots football team, is unlikely to make any of the monetary incentives in his contract this year.  While Tom is paid handsomely by most people’s standards, he is relatively underpaid in comparison to the universe of NFL quarterbacks.  Brady has also in the past accepted below market deals with the expectation that what he was giving up might help build a better team.

Which leads me to the question of pay in general, particularly in the tech sector.  At some Silicon Valley companies, the average pay is well into the six figures, and options and other incentives add still more to the take.  Granted, in Silicon Valley, costs have more than kept pace with income growth, so that whole microcosm might be no more than a Red Queen’s Race.  But surely in all echelons of productive society there are people who say, “My material wants are more than met.”  So how do we compensate such people?

In fact, we tend to think of money not only in compensation terms, but also in motivation terms.  Is there a point at which another ten percent raise won’t deliver a commensurate increase in motivation?  I bet there is.  So what do we do about it?

I have made some money in my career, mostly through a series of decent but unexceptional jobs, plus adjunct teaching, plus freelance writing and consulting.  I also live relatively modestly.  To be fair, I don’t deprive myself, but I only recently gave up a 19-year-old daily-use car; it simply started every single time, and its needs for extraordinary maintenance were trivial.  My sports car days are in the past (yes, I once owned a classic Corvette), and today my choice of a ride is much more pragmatic.

So in seeking employment, I don’t feel the need to maximize my monetary take.  Recently I suggested my high water mark as a goal to a recruiter at a Silicon Valley company.  She chuckled involuntarily, and replied, “I’m sure we can do much better than that.”  (Nevertheless, I didn’t get the job).

I have created a fictional character, a minor employee in a small tech company, who foils a multi-billion dollar scam and rescues the fair maiden, both of which go a long way toward saving the company.  In the sequel, the company owner is exiting, and struggles with how to appropriately reward this character even as the character is being declared surplus to future needs.  I devise a cop-out, in which the character gets enough money for a sabbatical, along with a modest annuity for future material needs.

As a society, we are (somewhat) striving to provide equal pay for equal work, and I think that’s mostly a good thing.  But I think there’s a step beyond that, and that is an appropriate reward for a job well done.  That may not be money.  In some cases, it may be more vacation or sabbatical, or it may be something more creative.  The problem is that such solutions once again give those who are able to negotiate a distinct advantage over those who can’t.

Getting to an Era of Self-Driving Cars Will Be Messy November 30, 2018

Posted by Peter Varhol in Machine Learning, Technology and Culture.

In the 1970s, science fiction writer Larry Niven created a near-future world in which instantaneous matter transport had been invented.  People would use a “phone booth” to dial in their desired destination, and automatically appear at the vacant phone booth nearest that destination.  Cargo moved through specially designed phone booths that could transfer large or hazardous loads.

Of course, the changes in momentum attendant upon changing positions on the planet slowed the Earth’s rotation, as do jet aircraft today, and that momentum had to be dumped somewhere.  Niven used this invention as a way of exploring social phenomena, such as flash crowds (today we call them flash mobs) and ingenious ways of committing crimes.

Michael Crichton used both space and time travel in his novel Timeline (the movie was quite good too).  His technology actually copied the body at the cellular level, destroyed it at the source, then recreated it from the copy at the desired time and place.  Crichton described it by analogy, saying that it was similar to sending a fax.

The problem with this was that replication was, well, slightly less than perfect.  Cells became misaligned, which meant that cell structure was slightly off.  If you used Timeline’s time and space traveling gadget more than about half a dozen times, your body was misaligned enough so that you went crazy and/or died.

Today, we see self-driving cars as a panacea for much that ails society.  Self-driving cars are extremely safe, and they can be coordinated en masse to relieve traffic congestion.  They will obviously be electric, not spewing combustion gases into the atmosphere.  What could go wrong?

But none of this is remotely true, at least today and in the foreseeable future.  Although driverless cars claim an enviable safety record for miles driven, all of those miles have been on carefully mapped streets under ideal conditions.  The fact of the matter is that GPS, even with trilateration, does not give these vehicles the accuracy needed to actually travel through traffic.

Coordinated en masse?  Just what does that mean?  Even if we had cars communicating with each other on the highway, it would be 40 years before every car could do so.  And even if they were communicating, can we trust our communications systems enough to coordinate thousands of cars on a highway, feet from each other?  I can’t wait to try that one.

Electric cars.  Yes, the industry is moving that way.  I just bought a new combustion-engine car; my last one was still going strong at 19 years.  Will the government force me to buy an electric car in under 20 years?  I don’t think so.

Still, this is the end game, but the end game is a lot farther out than you think.  I’m going to say a hundred years, certainly after all of us have left the mortal plane.  Car companies are saying they will be fully electric in three years.  Um, no.  Electric car advocates are even more deluded.  Car companies are saying all cars will be autonomous by 2025.  Um, no again.  These pronouncements are stupid PR statements, not worth the bytes they take up.

Yet we lap it up.  I really don’t understand that.