
Of Fossil Fuels and Climate Change March 15, 2015

Posted by Peter Varhol in Technology and Culture.

I am not an energy or climate expert by any means. I guess I would call myself a reasonably intelligent layman with good education and experience in the scientific method and interpreting experimental results.

I’ll start with a couple of blanket statements. The Earth is probably undergoing climate change, and if so, it is likely at least partially the result of greenhouse gases. Notice that I express likelihoods, or possibilities, not definitive statements. No serious scientist, under just about any circumstances, would make flat and absolute statements about an ongoing research area, especially one as complex as this. Insofar as we may hear such statements, even from people who have scientific credentials, we should run away from them.

Second, it’s not at all clear that climate change is a bad thing. The world around us is not static, and if we expect things to remain as they are, we are deluding ourselves. We have had two documented “mini”-Ice Ages over the past millennium, and those clearly could not be ascribed to human industrial intervention. After all, the Vikings were able to grow crops in southern Greenland until a cooling of the climate, beginning around the fourteenth century, led them to abandon not only Greenland, but likely also Labrador and certainly Newfoundland.

In a longer sense, we may still be in an Ice Age that began over two million years ago.

If we are in the process of warming, it may be a part of the natural, well, circle of life. It is probably helped along by heat-trapping greenhouse gases, but it may or may not be unusual on the timescale of the planet.

But to think that we can draw firm conclusions from a hundred years of data may be intriguing, and may result in poorly thought-out conclusions and remedies, despite the sensationalist (and entertaining) movies to the contrary.

And I know there are winners and losers in this process. On the positive side, we may be able to grow tropical crops farther north in the Temperate Zone, ultimately being able to feed more of the planet. On the negative side, countries such as Tuvalu may ultimately be mostly flooded by a rise in sea level. While I feel for the 11,000 people in Tuvalu, I may feel more for the greater number of people we are able to feed.

All that said, I liked this article on the necessity of fossil fuels in the weekend Wall Street Journal. While it represents a biased point of view, it is likely no more biased than an opposing point of view.

It’s a good thing that we are looking toward using energy that doesn’t burn fossil fuels. But let’s not delude ourselves into believing that climate change won’t happen anyway; it’s simply the way massive complex systems work.  We called it catastrophe theory in the 1970s; now it goes by other names.

I recognize that others may disagree with my statements, and perhaps stridently. And I certainly agree that we should continue to explore renewable sources of energy (even if they are not renewable over a sufficiently long time scale). But this is a more difficult question than the popular media has been asking over the course of the last decade or so.

A Youth Guide to Digital Privacy March 14, 2015

Posted by Peter Varhol in Technology and Culture.

I’m an old(er) guy. I was well of age when Tim Berners-Lee published his seminal work on hypertext, and was probably on the leading edge of non-academic users when I loaded a third-party TCP/IP package (NetManage) onto my Windows 3.1 installation, dialed in to an Internet provider, and connected to the World Wide Web (hint: it wasn’t easy, and I know you just take this for granted today).

So I would like to offer the youth of today (to be fair, thirty years or more my junior, and I’m not sure why anyone would want to listen to me) some advice on navigating a digital life.

  1. Someone is always watching you. Seriously. If you are out of your residence and not already on CCTV, you will be within a few minutes.
  2. If your cell phone is on, anyone who cares knows where you are. If they don’t care at the moment, they will still know if it becomes important. If your cell phone is not on, the NSA can still find you.
  3. I’m not just talking about advertisers, with whom you may already have a relationship, or at least reached a détente. If it matters, you will be found, by authorities, friends, enemies, and spammers.
  4. Most important: if you do something bad, or stupid, you will almost certainly be found. Maybe prosecuted, more likely ridiculed, for the whole world to see if they desire. You may even be on a news website, not because what you did was in any way newsworthy, but because it offers offbeat or comic page views.
  5. You may or may not recover your life from that point.

I mention this because young people continue to do stupid things, just as they did when I was young. They may not have seemed stupid in my youth, when I did my share of stupid things but wasn’t caught, because they couldn’t catch me. Trust me; anything you do in public today is either on camera or identifiable through a GPS trace.

You might not think you will do anything stupid in public. Chances are, if you are under 30, you have already done so (over 30, the odds reach certainty). Circa 1980, I could drive down the wrong way on a freeway on-ramp, incredibly drunk, and not get caught unless there was a patrol car in the immediate vicinity (theoretical example; maybe). Don’t even think about it today.

Many people who have grown up with the Internet are seemingly comfortable with the positive aspects of universal connectivity, but don’t give any thought as to the other side of the coin.

Moral: We are being watched. And we don’t really understand the implications.

The Challenges of Concurrency in Software March 12, 2015

Posted by Peter Varhol in Software development, Software tools.

I learned how to program on an Apple Macintosh, circa late 1980s, using Pascal and C. As I pursued graduate work in computer science, I worked with Lisp and Smalltalk, running on the Motorola 680X0 and eventually the Intel architecture.

These were all single-threaded programs, meaning that they executed sequentially, one step at a time. As a CS grad student, and later as a university professor, I learned and taught about multi-threading and concurrent code execution.

But it was almost entirely theoretical. Until the turn of the century, almost no code was executed in parallel, in part because none but the most sophisticated computers could execute in parallel. Even as Intel and other vendors moved decisively into multi-core architectures, operating systems and programmers weren’t ready to take advantage of this hardware innovation.

But only by taking advantage of multi-core and hyper-threaded processors could developers improve the performance of increasingly complex applications. So, aided by modern programming languages such as Java and C#, developers have been cautiously working on applications that can take better advantage of these processors.

To do so, they’re dusting off their old textbooks and looking at concepts like “critical section”, “fork”, and “join”. They are deeply examining their code to determine which operations can occur simultaneously without producing errors.

To be fair, several tools came out in the mid-2000s claiming the ability to automatically parallelize existing code, mostly by analyzing the code and trying to parcel out threads based on an expectation that certain code segments can be parallelized. In practice, there was not a lot of code that could safely be parallelized in this way.

But most new applications are multithreaded, and the operating system can dispatch threads to different cores and CPUs for concurrent execution. Using today’s processors, this is the only way to get the best performance out of modern software.

The problem is that developers are still fumbling their way through the process of writing code that can execute in parallel. There are two types of problems. One is deadlock, where code can’t continue because another thread won’t give up a resource, such as a piece of data. This will stop execution altogether.
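The deadlock scenario can be sketched with two locks. This is a minimal illustration (in Python for brevity, with illustrative names, since the text's tooling discussion centers on Java): if one thread took the locks in the order A-then-B while another took B-then-A, each could end up holding one lock and waiting forever for the other. Acquiring the locks in a single consistent order is the standard remedy.

```python
import threading

# A minimal sketch of deadlock avoidance. If thread 1 acquired lock_a
# then lock_b while thread 2 acquired lock_b then lock_a, each could
# hold one lock and wait forever for the other. Here both threads
# acquire the locks in one global order, so that cycle cannot form.

lock_a = threading.Lock()
lock_b = threading.Lock()
completed = []

def transfer(name):
    # Consistent ordering: lock_a always before lock_b, in every thread.
    with lock_a:
        with lock_b:
            completed.append(name)

threads = [threading.Thread(target=transfer, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(completed))  # ['t1', 't2'] -- both threads finish; no deadlock
```

The fix is a design discipline, not a library feature: any global lock ordering works, as long as every thread honors it.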

Another is the race condition, where the result depends upon which thread completes first. This is insidious because an incorrect result can appear at random, and is often difficult to detect because it may not produce an obvious error.
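The classic race is easy to sketch (again in Python, with illustrative names): two threads increment a shared counter, and the increment is a read-modify-write that unsynchronized threads can interleave, losing updates. Guarding it with a lock, the “critical section” from those old textbooks, makes the final value deterministic.

```python
import threading

# Two threads each increment a shared counter 100,000 times. The
# increment is a read-modify-write, so without synchronization the
# threads can interleave and lose updates. The lock makes the
# increment a critical section, guaranteeing the expected total.

count = 0
lock = threading.Lock()

def worker(iterations):
    global count
    for _ in range(iterations):
        with lock:          # critical section: one thread at a time
            count += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:           # the textbook "fork"
    t.start()
for t in threads:           # the textbook "join"
    t.join()

print(count)  # 200000 with the lock; without it the result is not guaranteed
```

Remove the lock and the program may still print the right answer most of the time, which is exactly why race conditions are so hard to catch in testing.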

Fortunately, tools are emerging that help in the identification and analysis of concurrent software issues. One is Race Catcher, from Thinking Software. It can be used in two ways during the application lifecycle. During development and test, it can dynamically analyze Java code to look ahead for deadlocks and race conditions. You can’t predict the occurrence of a race condition, of course, but you can tell where the same data is being processed in different ways, at the same time.

In a headless version, it can run as an agent on production servers, doing the same thing. That’s a version of DevOps. We catch things in production before they become problems, and refer them back to development to be fixed.

In an era where software development is changing more quickly and dramatically than any time since the PC era, we need more tools like this.

And This is Why Government Has No Business Dictating Computer Security March 4, 2015

Posted by Peter Varhol in Uncategorized.

Governments can do an incredible amount of good. They provide services for the public good, such as law enforcement, transportation management, a legal framework, and so much more.

But government as an institution, or set of institutions, can also be incredibly stupid, especially where foresight is required. Especially in the realm of technology, which changes far more quickly than any government has the will to adapt.

So now we have a security hole in our web browsers, courtesy of the U.S. Government, which mandated that software products (such as web browsers) couldn’t use strong encryption.

This is the same battle that Phil Zimmermann, author of PGP (Pretty Good Privacy) encryption, fought years ago when he open-sourced his algorithm and in doing so made it available to the world. It turned out that Zimmermann was right, and the government was wrong. In this case, wrong enough to cause potential harm to millions of computer users.

At this point, the government doesn’t seem to be interested in enforcing this any more, but some web browsers are still delivered with weak encryption. The weak ciphers are a vestige of the intent to comply with the law, never removed as the law became, well, more flexible. But now they are doing some significant damage.

I am reminded, in a reverse way, of Frank Herbert’s science fiction character Jorj X. McKie, a card-carrying member of the Bureau of Sabotage, a multi-planet government agency whose role was to, well, sabotage government. In this hypothetical future, it needed to do so because government had become too fast, too efficient, and less deliberative in passing and enforcing laws. The Saboteurs threw a monkey wrench into government, slowing down the process.

But today we need to speed up government. Defining the boundaries of government is a debate that will continue on indefinitely. I generally agree that government should be a participant in this process. But it needs to be an informed and active participant, and not a domineering old grandparent.

Do We Really Hate Science? February 25, 2015

Posted by Peter Varhol in Technology and Culture.

Despite the provocative title, the March cover story in National Geographic magazine, entitled The War On Science, is a well-conceived and thoughtful feature (in fairness, the website uses a much less controversial title – Why Do Many Reasonable People Doubt Science?). It points out that the making of accepted science isn’t something that happens overnight, but can take years, even decades of painstaking work by researchers in different fields around the world before it solidifies into mostly accepted theory. Even with that, there are contrary voices, even within the scientific community.

I think the explanation is slightly off-base. I learned the scientific method fairly rigorously, but in a very imprecise science – psychology. The field has entire courses on statistics and experimental design at the undergraduate level, and labs where students have to put the scientific method into practice.

Still, because psychology is an imprecise science, I was frustrated that we were usually able to interpret outcomes, especially those in real life, in ways that matched our theories and hypotheses. But our explanations had no predictive power; we could not with any degree of confidence predict an outcome to a given scenario. That failure led me away from psychology, to mathematics and ultimately computer science.

It’s true that science is messy. Researchers compete for grants. They stake out research areas that are likely to be awarded grants, and often design experiments with additional grants in mind. Results are inconclusive, and attempts at replication contradictory. Should we drink milk, for example? Yes, but no. In general, the lay public tries to do the right thing, and the science establishment makes it impossible to know what that is.

And the vast majority of scientists who purport to explain concepts to the lay public are, I’m sorry, arrogant pricks. We have lost the grand explainers, the Jacques Cousteaus and Carl Sagans of past generations. Those scientists communicated first a sense of wonder and beauty, and rarely made grand statements about knowledge that brooked no discussion.

Who do we have today? Well, except for perhaps Bill Nye the Science Guy, no one, and I wouldn’t claim for a moment that Bill Nye is in anywhere near the same league as past giants.

The scientists who serve as talking heads on supposed news features and news opinions have their own agendas, which are almost invariably presented in a dour and negative manner. They are not even explaining anything, let alone predicting, and they certainly have no feel for the beauty and wonder of their work. Doom and catastrophe will be the end result, unless we do what they say we should do.

To be fair, this approach represents a grand alliance between the news agencies, which garner more attention when their message is negative, and the scientists, who promote their work as a way to gain recognition and obtain new grants.

In short, I would like to think that there is not a war upon science. Rather, there is a growing frustration that science is increasingly aloof, rather than participatory in larger society. Everything will be fine if you just listen to me, one might say. The next day, another says the opposite.

That’s not how science should be communicating to the world at large. And until science fixes that problem, it will continue to believe that there is a war on.

Who is the Bad Guy? February 21, 2015

Posted by Peter Varhol in Technology and Culture.

I’m not a conspiracy person. I don’t think governments, evil corporations, or anyone else have banded together to rob, cheat, or steal from me. It’s simply too difficult to keep such things a secret from the outside world. As the old saw goes, three people can keep a secret, if two of them are dead (and I’m not entirely convinced about the third one).

But stories like this, concerning US and UK agencies breaking into phone SIM cards, are eminently believable, and infinitely troubling, because we already know that bad hackers can get into the servers and devices that we use in our daily lives for all manner of activities. We have had indications that governments are doing likewise, to each other, to companies, and to citizens of their own and other countries. Governments generally won’t admit to doing so, and don’t have to. But you have to figure that any government can afford talent equivalent to those who do it for free.

There is no question that any government can employ some of the best hackers (other hackers are likely to avoid governments altogether). And make no mistake – there is a war out there, between nations, within nations, and, well, I’ll say it, with cyber-terrorists. I’m okay with stopping the bad guys and, when possible, bringing them to justice.

But the larger issue is whether it is appropriate (I would like to find a stronger word here) for any government to do so. I get the fact that the nature of law enforcement has changed dramatically over the past decade. Some crimes that are common today didn’t exist at the turn of the century. Law enforcement has always been behind the technology curve from the bad guys, and I don’t have a problem with investigation, enforcement, and the legal environment catching up.

But we’re talking something different here. We’re talking about the governments becoming the bad guys, digging into people who are not suspected criminals or terrorists. And that is just plain wrong. I understand that the difference between law enforcement and law breaking can become blurred, but from our governments, it should be very well defined and communicated.

I understand why governments and legitimate law enforcement agencies believe they need to adopt these techniques to be proactive in an increasingly dangerous world. But guess what? They don’t. They have simply become lazy, adopting the games of the criminals under the rationale of keeping people safe and protected. No. They are better than that, and we are better than that.

It’s not even the journeyman law enforcement people at fault here. They are caught in a bind. If Something Bad happens on their watch, they take all of the heat, and who can blame them for bending the game a bit for their side. It’s the leadership that has to draw sharp lines, and it is not doing so. This failure harms all of us. Let’s get back to good, solid investigative work and not break into SIM cards and plant malware on computers.

We Have to Start to Choose Our Technologies February 19, 2015

Posted by Peter Varhol in Technology and Culture.

I’m not a Luddite. I grew up with The Jetsons (which is now the brand of tire on my ancient Subaru), Lost in Space, and Star Trek. I want a flying car, tricorder, and the ability to give commands to my personal robot. I have most of the toys most of us have these days, although rarely the latest models, because I have better things to spend my money on, and less an emotional need to be first on the block.

But what these optimistic visions of the future never showed were the tradeoffs inherent in these wonders. If I have a flying car, the airspace becomes crowded as more people try to travel to farther places. Giving commands to a robot means that those commands will be stored somewhere, with the potential for misinterpretation. The late great Isaac Asimov had a great run with his Three Laws of Robotics.

What this means is that no new technology is a clean win; there are always tradeoffs. And the pace is only going to accelerate. We’ll have smart homes, with connected utilities, refrigerators, microwaves, and cars. We have apps that record our heart rate and other essential bodily functions, tell us precisely where we are in the world, and let shops text us with deals as we walk by. We post on social media for the world to see: a little, some, or all of our lives.

Mostly these are good things, or at least innocuous things. But as we increasingly tie ourselves to technology, we need to constantly consider the tradeoffs. What passes for news these days tells us about people losing jobs because of party photos on their Facebook page, or people scammed or worse by Craigslist ads.

But it’s more insidious than that. As our identities can be stolen for nefarious purposes, and our Web movements tracked for commercial ones, we have to understand those tradeoffs and make the right decisions for us as individuals. Your decisions are likely to be different than mine, but if you don’t understand the implications of your online actions, others will make them for you.

And the standard changes over time, often short periods of time. For years I didn’t mind online photos of me; there are several on my website. And most of the conferences I speak at require photos for their own websites. I was copasetic with that, until I realized that Google (or other) face recognition software would be able to identify me on the street in about three seconds. That genie can’t be put back into the bottle, but I would still like to have some control over my life.

So we shouldn’t consume technology without thought. Circa the 1980s, there was an acclaimed police drama on TV called Hill Street Blues. In every episode, Sergeant Phil Esterhaus concluded the morning briefing by saying, “Let’s be careful out there.”

A Road Less Taken January 26, 2015

Posted by Peter Varhol in Software development, Software platforms.

Among the best jobs that I have had in my long and interesting career is the role of technical evangelist. I’ve had that formal title three times, and have had related roles on other occasions.

As you might expect, a technical evangelist evangelizes, or enthusiastically promotes, a particular technology, product or solution to a wide range of people. They could be C-level executives, or industry thought leaders, managers, tech leads, or individual contributors.

This is done through a number of ways – through articles in industry publications (these days mostly websites), white papers, blog posts, webcasts, podcasts, and screencasts, as well as speaking at user groups and industry conferences.

What good is evangelism? Well, it can play a number of different but interrelated roles. In the places I was an evangelist, it was mostly about brand awareness, with some customer contact for demonstrations and consultation. In other roles, it could serve as a specialized technical resource to help customers solve difficult problems. In still others, it is primarily a technical sales function. The important thing to do is to understand what is expected of the role, and to deliver on those expectations. It sounds simple, but in reality it rarely is.

A little bit about me. I don’t consider myself to be especially outgoing, but I was a university professor for a number of years, and through interest and repetition became a very good public speaker. I learned the art of storytelling through trial and error, and found it to be an effective teaching technique.

I was fortunate enough to receive training in academic instruction through the Air Force, and found that I had a knack for relating to my audiences. I found that I loved the travel and interaction with experts, peers, and students of the field. I would like to think that I built brand awareness very well.

The downside of being an evangelist is that many technology companies are ambivalent about the role. I worked for a company that had twenty evangelists, then systematically dismantled the team in order to look attractive for an exit strategy. In doing so, they may have sacrificed long term growth for a short term balance sheet.

Yet it is an exciting and fulfilling role. I hope to be able to get back into it at some point.
