Do We Really Hate Science? February 25, 2015. Posted by Peter Varhol in Technology and Culture.
Tags: national geographic, science
Despite the provocative title, the March cover story in National Geographic magazine, entitled The War On Science, is a well-conceived and thoughtful feature (in fairness, the website uses a much less controversial title – Why Do Many Reasonable People Doubt Science?). It points out that the making of accepted science isn’t something that happens overnight, but can take years, even decades of painstaking work by researchers in different fields around the world before it solidifies into mostly accepted theory. Even with that, there are contrary voices, even within the scientific community.
I think the explanation is slightly off-base. I learned the scientific method fairly rigorously, but in a very imprecise science – psychology. The field has entire courses on statistics and experimental design at the undergraduate level, and labs where students have to put the scientific method into practice.
Still, because psychology is an imprecise science, I was frustrated that we were usually able to interpret outcomes, especially those in real life, in ways that matched our theories and hypotheses. But our explanations had no predictive power; we could not with any degree of confidence predict an outcome to a given scenario. That failure led me away from psychology, to mathematics and ultimately computer science.
It’s true that science is messy. Researchers compete for grants. They stake out research areas that are likely to be awarded grants, and often design experiments with additional grants in mind. Results are inconclusive, and attempts at replication contradictory. Should we drink milk, for example? Yes, but no. In general, the lay public tries to do the right thing, and the science establishment makes it impossible to know what that is.
And the vast majority of scientists who purport to explain concepts to the lay public are, I’m sorry, arrogant pricks. We have lost the grand explainers, the Jacques Cousteau and the Carl Sagan of past generations. These scientists communicated first a sense of wonder and beauty, and rarely made grand statements about knowledge that brooked no discussion.
Who do we have today? Well, except for perhaps Bill Nye the Science Guy, no one, and I wouldn’t claim for a moment that Bill Nye is in anywhere near the same league as past giants.
The scientists who serve as talking heads on supposed news features and news opinions have their own agendas, which are almost invariably presented in a dour and negative manner. They are not even explaining anything, let alone predicting, and they certainly have no feel for the beauty and wonder of their work. Doom and catastrophe will be the end result, unless we do what they say we should do.
To be fair, this approach represents a grand alliance between the news agencies, which garner more attention when their message is negative, and the scientists, who promote their work as a way to gain recognition and obtain new grants.
In short, I would like to think that there is not a war upon science. Rather, there is a growing frustration that science is increasingly aloof, rather than participatory in larger society. Everything will be fine if you just listen to me, one might say. The next day, another says the opposite.
That’s not how science should be communicating to the world at large. And until science fixes that problem, it will continue to believe that there is a war on.
Who is the Bad Guy? February 21, 2015. Posted by Peter Varhol in Technology and Culture.
I’m not a conspiracy person. I don’t think governments, evil corporations, or anyone else have banded together to rob, cheat, or steal from me. It’s simply too difficult to keep such things a secret from the outside world. As the old saw goes, three people can keep a secret, if two of them are dead (and I’m not entirely convinced about the third one).
But stories like this, concerning US and UK agencies breaking into phone SIM cards, are eminently believable, and infinitely troubling, because we already know that bad hackers can get into servers and devices that we use in our daily lives for all manner of activities. We have had indications that governments are doing likewise, to each other, to companies, and to their own citizens and those of other nations. Governments generally won’t admit to doing so, and don’t have to. But you have to figure that any government can afford talent equivalent to those who do it for free.
There is no question that any government can employ some of the best hackers (other hackers are likely to avoid governments altogether). And make no mistake – there is a war out there, between nations, within nations and, well, I’ll say it, with cyber-terrorists. I’m okay with stopping the bad guys and, when possible, bringing them to justice.
But the larger issue is whether it is appropriate (I would like to find a stronger word here) for any government to do so. I get the fact that the nature of law enforcement has changed dramatically over the past decade. Some crimes that are common today didn’t exist at the turn of the century. Law enforcement has always been behind the technology curve from the bad guys, and I don’t have a problem with investigation, enforcement, and the legal environment catching up.
But we’re talking something different here. We’re talking about the governments becoming the bad guys, digging into people who are not suspected criminals or terrorists. And that is just plain wrong. I understand that the difference between law enforcement and law breaking can become blurred, but from our governments, it should be very well defined and communicated.
I understand why governments and legitimate law enforcement agencies believe they need to adopt these techniques to be proactive in an increasingly dangerous world. But guess what? They don’t. They have simply become lazy, adopting the games of the criminals under the rationale of keeping people safe and protected. No. They are better than that, and we are better than that.
It’s not even the journeyman law enforcement people at fault here. They are caught in a bind. If Something Bad happens on their watch, they take all of the heat, and who can blame them for bending the game a bit for their side. It’s the leadership that has to draw sharp lines, and it is not doing so. This failure harms all of us. Let’s get back to good, solid investigative work and not break into SIM cards and plant malware on computers.
We Have to Start to Choose Our Technologies February 19, 2015. Posted by Peter Varhol in Technology and Culture.
Tags: Jetsons, Lost in Space, Star Trek
I’m not a Luddite. I grew up with The Jetsons (which is now the brand of tire on my ancient Subaru), Lost in Space, and Star Trek. I want a flying car, a tricorder, and the ability to give commands to my personal robot. I have most of the toys most of us have these days, although rarely the latest models, because I have better things to spend my money on, and less of an emotional need to be first on the block.
But what these optimistic visions of the future never showed were the tradeoffs inherent in these wonders. If I have a flying car, the airspace becomes crowded as more people try to get to farther places. Giving commands to a robot means that those commands will be stored somewhere, and have the potential for misinterpretation. The late great Isaac Asimov had a long run with his Three Laws of Robotics.
What this means is that no new technology is a clean win; there are always tradeoffs. And the pace is only going to accelerate in the future. We’ll have smart homes, with connected utilities, refrigerators, microwaves, and cars. We have apps that record our heart rate and other essential bodily functions, tell us precisely where we are in the world, and let shops text us with deals as we walk by them. We post on social media for the world to see: a little, some, or all of our lives.
Mostly these are good things, or at least innocuous things. But as we increasingly tie ourselves to technology, we need to constantly consider the tradeoffs. What passes for news these days tells us about people losing jobs because of party photos on their Facebook page, or people scammed or worse by Craigslist ads.
But it’s more insidious than that. As our identities can be stolen for nefarious purposes, and our Web movements tracked for commercial ones, we have to understand those tradeoffs and make the right decisions for us as individuals. Your decisions are likely to be different than mine, but if you don’t understand the implications of your online actions, others will make them for you.
And the standard changes over time, often short periods of time. For years I didn’t mind online photos of me; there are several on my website. And most of the conferences I speak at require photos for their own websites. I was copacetic with that, until I realized that Google (or other) face recognition software would be able to identify me on the street in about three seconds. That genie can’t be put back into the bottle, but I would still like to have some control over my life.
So we shouldn’t consume technology without thought. In the 1980s, there was an acclaimed TV police drama called Hill Street Blues. In every episode, Sergeant Phil Esterhaus concluded the morning briefing by saying, “Let’s be careful out there.”
My Fitbit and I Are Headed for a Split January 7, 2015. Posted by Peter Varhol in Technology and Culture.
Tags: activity tracker, Fitbit
My motivation for physical activity over the last half-year has been largely driven by the acquisition of a Fitbit, the simple yet effective activity tracker that is almost always fastened to my belt except when I’m taking a shower.
At 5AM this morning, it was cold (5 degrees F). There was snow on the neighborhood streets. I wore my usual hoodie and sweats, with ear muffs and gloves, and ran 5K. When I returned, I discovered that my Fitbit had inexplicably reset itself and didn’t record any of the excursion. Perhaps it was the cold, but this had never happened before (it appears that there may have been a software update overnight).
Now that raises an interesting question. If I ran the 5K and there was no record, did I actually do it? I almost made an instant decision to go back out and do it again.
The question is, of course, more than philosophical (yes, I did run it). The Fitbit has been my primary motivator in activity. I feed off of the numbers, striving to maintain or even improve upon them on a daily basis. Without those numbers, will I be able to continue to compete with myself?
So I need a greater level of reliability in my life. Now that the Fitbit has failed me once, I don’t think I can trust it again.
Adding to this whole conundrum was an unpleasant experience I had with Fitbit customer support, in which an order for an additional charger cord somehow got lost in their system. It didn’t ship until I called to inquire about it, even though the website said it had shipped. I question whether Fitbit understands the concept of e-commerce, even as that is where most of its business is generated.
So I am ready to break up with my Fitbit. I’m looking at the Microsoft Band. It has a lot of features beyond simple activity tracking, but it’s pricey. Thoughts, anyone?
Algorithms Are Thoughtless December 30, 2014. Posted by Peter Varhol in Technology and Culture.
Tags: Asimov, robot
I’m reminded of the movie I, Robot. Will Smith (as Del Spooner, though it certainly should have been Elijah Baley) distrusts advanced, human-like robots because he has learned that while they make rational and largely correct decisions, on occasion a logically right but humanly wrong determination can be a matter of life or death under critical circumstances.
Of course, the late and lamented Isaac Asimov used the “I, Robot” science fiction series as a foil for logic puzzles surrounding his seemingly ironclad “Three Laws of Robotics.” Still, he was not unaware of the potential for wrong when the logic is right.
I came across this posting recently, commenting on a Facebook app that helps users build their “Year in Review.” Regrettably, this past year was not a bed of roses for this person, and the Facebook app selected images that brought back those unpleasant times.
Further, the Year in Review keeps rotating back to the person while in Facebook, serving as an unwanted reminder of events that don’t need reminders. The author called this “inadvertent algorithmic cruelty” (I wonder if there could also be “intentional algorithmic cruelty?”). Apparently, Facebook made the assumption that only good and memorable information is posted. In most cases, that may be true, but it wasn’t a particularly good assumption.
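The failure mode described above can be illustrated with a toy sketch (all data, captions, and field names here are hypothetical, not Facebook's actual code): an algorithm that ranks a year's posts purely by engagement will happily surface a painful post, because grief often draws the heaviest reactions of all.

```python
# Toy illustration of "inadvertent algorithmic cruelty": selecting a year's
# highlights purely by engagement, with no notion of what a post meant to
# the person. All data and field names are invented for illustration.

def year_in_review(posts, top_n=3):
    """Pick the most-engaged posts of the year -- the naive heuristic."""
    return sorted(posts,
                  key=lambda p: p["likes"] + p["comments"],
                  reverse=True)[:top_n]

posts = [
    {"caption": "Beach vacation",         "likes": 40, "comments": 5},
    {"caption": "New puppy",              "likes": 55, "comments": 12},
    {"caption": "In memory of my father", "likes": 90, "comments": 80},
    {"caption": "Office party",           "likes": 10, "comments": 2},
]

highlights = year_in_review(posts)
# The memorial post tops the list: condolences generate heavy engagement,
# so the "best of the year" leads with the year's worst moment.
print([p["caption"] for p in highlights])
```

The fix isn't obvious, either: any filter strong enough to catch the memorial post must infer sentiment the user never labeled, which is exactly the judgment the naive algorithm lacks.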
There seems to be a question surrounding whether the current focus of artificial intelligence (AI) is bad for humanity. AI is, and will continue to be, based on mathematical algorithms that assign probability and make determinations that guide decisions.
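As a minimal sketch of what "assign probability and make determinations" looks like in practice (the weights, features, and threshold here are invented for illustration, not taken from any real system), most such algorithms reduce to scoring evidence, converting the score to a probability, and committing to a choice at a cutoff:

```python
# A minimal probability-plus-threshold decision rule, the basic shape of
# many AI determinations. Weights and threshold are invented for
# illustration only.
import math

def sigmoid(x):
    """Map an unbounded score to a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def decide(features, weights, threshold=0.5):
    """Score the evidence, convert to a probability, then commit to a choice."""
    score = sum(w * f for w, f in zip(weights, features))
    probability = sigmoid(score)
    return ("act" if probability >= threshold else "defer"), probability

action, p = decide(features=[1.0, 0.5], weights=[2.0, -1.0])
print(action, round(p, 3))
```

Note that nothing in the rule knows whether "act" is humanly right; it only knows whether the probability cleared the threshold, which is the thoughtlessness the title refers to.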
Certainly, purely algorithmic intellectual, physical, or emotional responses in robots (however you might define that term) would result in an increasingly boring world: knowing the algorithm means knowing the response in advance. While Asimov’s logic puzzles toyed with a number of logical holes in those laws, for the most part real-life experiences are unlikely to be so ambiguous. His were edge cases.
In reality, it will be a long, long time, if ever, before we see anything resembling Asimov’s positronic brains encoded with the Three Laws of Robotics. Robots, and AI in general, will continue to become more complex and capable, and will continue to nibble around the edges of jobs that can be designed and executed according to rules.
But we won’t be replicating judgment in algorithms, because judgment is an individual characteristic. We make judgments in different ways, based on our backgrounds, experiences, age, and a myriad of other factors. The judgments and results will be different. That doesn’t make some judgments wrong, but it does make them human.
Net Neutrality and The Oatmeal November 21, 2014. Posted by Peter Varhol in Strategy, Technology and Culture.
Tags: net neutrality, The Oatmeal
I can understand why Matthew Inman doesn’t accept email on The Oatmeal, but it does make it difficult to raise an important issue. In this case, I would like to explore his take on net neutrality. Yes, I agree that Senator Cruz is probably an idiot, or at least pandering. But beyond that, I’m wondering just who the bad guy is here. Is it Comcast Xfinity, who would like to charge premium prices to companies with real time content delivery needs? Perhaps. Or is it Netflix, who is abusing an infrastructure not designed or operated for streaming high-quality video? Hmm.
Today we tend to think of the Internet as more or less a public utility, akin to our electrical service. That’s not quite correct. Actually, not at all correct. There was a time, when I was in college, when the Internet was a private, elitist academic network, yet funded entirely by the government. If you as an individual wanted access, you had to be an academic, a government-funded researcher, or at worst a paying student at a really good university. And there was some decent content on the Internet, albeit all text-based. That was the world at the time.
In the early 1990s, the powers-that-be (I really don’t know or care to assign it to one political party or the other, and neither should you; and no, Al Gore did not create the Internet, despite his resume) decided to commercialize it.
That, I think most people would agree, was a Good Thing. We got ISPs (okay, we had AOL and CompuServe before that, but they specifically weren’t on the Internet until later), we got decent graphics tools, and we got modems to use with our phone lines. It provided for a burst of innovation, an explosion of content, and a democratization of access.
The phone companies made a half-assed attempt to offer higher access speeds, but DSL was expensive, difficult to buy and configure, and slow. The cable companies realized that they already had fat pipes into homes, and rushed to compete, spending hundreds of billions of dollars (granted, our subscription dollars, but a significant investment nonetheless) on network upgrades.
So here the Internet ceased being a public utility, if it ever was one, and became a commercial venture. I agree that the exclusive contracts still oddly provided by municipalities to cable companies make it seem that way, but there is little reason for these to still exist. And in any case, they should be up for re-bid every few years, once again making it less of a utility.
So a business like Netflix comes along, and reduces its operating costs by delivering very high-usage content over what is, at worst, a low-cost, fixed-price medium. Is Comcast wrong to want more money for this type of use? To support the Netflix business model of making money from us?
I don’t know. Apparently Matthew Inman does. Good for him.
In theory, I believe net neutrality is the way to go. But it supports some businesses at the expense of others. Just like the alternative. So I simply don’t see a compelling reason to discard either concept.
I Have My Cell Phone, So I’m Safe November 20, 2014. Posted by Peter Varhol in Technology and Culture.
In the safety of my home in New Hampshire (where we are no strangers to snow), I am watching the largely lake-effect snowfall blanket western New York with over seven feet of snow, with more to come. And at least ten people have died, though some as the result of shoveling snow rather than being caught in it.
And I can’t help but think that technology, which has been of enormous help in saving lives, is also making us more susceptible to dying in extreme conditions. In particular, the easy availability of cell phones makes us think that someone will always come to get us, in just about any circumstances.
Yes, people died of weather tragedies prior to the advent of easy and instant communications. The unnamed hurricane that hit Galveston, Texas in 1900 killed 6,000 to 12,000 people; the bodies of most were never found. The New England hurricane of 1938 killed perhaps 700 people. In the early 1970s, as a youth, I recall reading a harrowing account of a massive snowstorm at the Batavia Rock Cut closing the New York Thruway for days, with hundreds of cars still on it.
But we lacked not only communications technology, but the ability to forecast and communicate such extreme weather. Today we have much of that information available to us, yet still choose to take risks. Perhaps more risks than we would otherwise.
And such technology is not a panacea. I also recall the John J. Nance book and subsequent TV miniseries, Pandora’s Clock, where a doomsday virus kept an airliner from landing, even for medical help. An ambassador on the plane, played by Robert Guillaume, was in contact with the highest levels of authority worldwide through a satellite phone. Fat lot of good it did him (Richard Dean Anderson was the hero, as he often was in MacGyver).
I am (not quite, I hope) as guilty as anyone else. I run very early in the morning, well before light, in a neighborhood known to be populated by certain wild animals. I take my cell phone with me, in a pouch. Like I will be able to make a phone call if attacked by a wild animal, or have a heart attack or encounter with an automobile. Still, I irrationally feel a little more secure with it than without.
So is there a lesson here? I think that technology such as easy communications is a cautious fallback strategy. But it doesn’t make us invincible. Sometimes the cavalry just isn’t coming, or can’t come. Even with our superior technology, Mother Nature will win, without cause and without regret.
Does Being Ethically Challenged Matter in Silicon Valley? November 19, 2014. Posted by Peter Varhol in Technology and Culture.
Investor Peter Thiel offers the opinion that ride service Uber is Silicon Valley’s most ethically challenged company, even before one of the company’s executives was quoted as saying it might hire private detectives to find and publicize dirt on journalists who criticize its business practices (the company CEO eventually delivered an apology in 14 tweets, which is just plain silly).
According to various reports, Uber doesn’t play well with local and regional regulatory agencies, treats its drivers poorly, and plays various dirty tricks on competitors. To some extent, friction with a startup in a new market is likely to occur, whether because of a new business model (anyone remember the controversy caused by open source software before it became widely accepted?) or because of legal or regulatory limitations.
I will likely never use Uber, for a variety of reasons (such as not frequenting places it serves), and it certainly appears that the company breaks the boundaries of accepted good business practices and regulatory statutes. But I don’t think that will in any way drive whether or not it succeeds. Its success will be driven primarily by its being able to service customers (riders) in a fast, clean, and courteous environment.
Of course, the legal and regulatory environment plays an important role here (Uber remains banned in certain German cities), and the company’s legal arguments in response are pretty ridiculous (“The Hamburg court rejected Uber’s arguments that the ban violated Uber’s professional freedom or European freedom to offer services.”) But rather than claim an absolute right that doesn’t exist in society, why not work within the legal framework, or work with authorities in a collaborative way to adjust that framework and experiment with new models? The regulations exist for a reason, usually what was once a good reason, so it makes sense to bring about change without negating the law.
I’ll also opine that it seems like the company has intentionally cultivated an arrogant, likely inappropriate business atmosphere. It’s almost certainly not a place where I could work. But that probably only has a small, if any, impact on its probability of business success.