
Do I Know You? Tracking Activity on the Web July 31, 2010

Posted by Peter Varhol in Uncategorized.

This is one of the most significant articles on the Web that I’ve read in a long time.  If you’re interested in getting your hands around privacy and the Web, this is the place to start.

The Wall Street Journal performed what in most respects seems like a common sense experiment.  It visited the top 50 Web sites in terms of visitors, then counted the number of cookies installed on the test computer.

The number was 3,180.  Of these, 2,224 were installed by advertising networks and similar companies that collect data on visitors, follow them around the Internet to other sites using the same ad network, and make predictions about the demographics and buying habits of those visitors.  In some cases, they are tailoring ads, pages, and offers based on that identification.  For example, one person interviewed had at one time looked for weight loss alternatives; that person now gets weight loss ads served by one or more ad networks.
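
For the technically curious, here’s a rough sketch of how you might run a similar count on your own machine by reading the browser’s cookie store directly.  It assumes a Firefox profile (the path below is a placeholder), and it’s my own illustration, not the Journal’s methodology:

    import sqlite3
    from collections import Counter

    # Placeholder path -- point this at your own Firefox profile.  Close the
    # browser first (or work on a copy of the file), since Firefox locks this
    # database while it is running.
    PROFILE_DB = "/path/to/firefox-profile/cookies.sqlite"

    conn = sqlite3.connect(PROFILE_DB)
    rows = conn.execute("SELECT host, name FROM moz_cookies").fetchall()
    conn.close()

    # Group cookies by the host that set them; a leading dot marks a domain cookie.
    by_host = Counter(host.lstrip(".") for host, _name in rows)

    print("Total cookies:", len(rows))
    print("Distinct hosts setting cookies:", len(by_host))
    for host, count in by_host.most_common(10):
        print("  %s: %d" % (host, count))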

Amusingly (almost), several of the companies contacted professed no knowledge that their websites were installing that many cookies, or that this level of data was being collected on individual visitors.

Incredible as such a claim may sound, I tend to believe them; never ascribe something to conspiracy when sheer incompetence will do.  It seems negligent for a company not to know what is happening on its website, but I’m not surprised that’s the case.

Here’s where it gets very interesting.  Everyone involved, including (and especially) the ad networks collecting and selling this data, denies that these activities violate privacy, because none of the data is connected with an actual name.

Now, I honestly don’t know.  Is the door to our privacy opened by our name and nothing more, or is the act of collecting this level of information and using it to build a profile of specific individuals a violation of the privacy of those individuals?  Is associating actions with names the key, or is recording and analyzing those actions enough?

Even more, some of us may actually approve, even to the level of associating our name with our Web browsing actions, because it means the information we receive may be more relevant to us.  I personally prefer a degree of randomness in my interactions, in hopes of learning things that would never have occurred to me to learn, but others may be comforted to know that the Web knows who they are and is responding to their unique needs.

I’ve touched on the topic of privacy and the Web before; this article rips the topic wide open.  It provides no answers, but it lets everyone know the score.

The important thing to remember is that the use of this technology is in its infancy.  At least from the standpoint of commerce, we seem to be heading in the direction of a personal and dynamically customized Web.

It’s time we decided whether or not this is what we want.

In Defense of Modern Journalism July 24, 2010

Posted by Peter Varhol in Publishing.

I’ve been what you might call a professional journalist for a good part of my working life, but I’ve never had journalism training or really even considered myself a journalist.  Instead, I think of myself as a technologist who happens to play with hardware and software, and communicate my findings to others.

Over the years, I’ve learned much about technology, technology trends, and technology publishing, not all of which made it into my commercial writings.  So I sought to communicate my knowledge and experience in other ways.  Case in point, this blog.  I’m the only contributor to this content and its presentation, and it makes me no money, so I spend the vast majority of my time on other, profitable activities.

I describe it like this because I increasingly see professional online publications posting articles that are less researched, less formal, and less edited than what you might see in a paper publication.  And I see commenters who are increasingly critical of what is, in some cases, lower-quality content, whether in depth of coverage or in use of the language.

Give me a break, folks.  The likes of MSNBC, CNN/Money, Time, Newsweek, and their traditional media counterparts have been devastated over the last fifteen years by the Web, and are responding, albeit not all that well, by getting more content posted more quickly.  In many ways, this content isn’t investigative reporting or features, but rather a conversation with readers that may include some news, some insight, some opinion, and some speculation, in no particular order.  It’s not copyedited and it’s rarely proofed, but it is fast and often informative and entertaining.

It’s not a feature story in a five-dollar glossy print magazine; it’s a blog post using the same WordPress platform that I use.  We as consumers of this content can’t have it both ways.  It can’t be fast and informative, yet fully error-free and fact-checked.  Especially not in this day and age.  And that’s not a bad tradeoff.

And to be completely fair, the writers and editors who make these posts almost invariably have many other editorial responsibilities, and have taken on blogging (willingly or not) on top of them.  Some look down upon blogging for a variety of reasons, while others embrace it as one possible future of publishing.

But complaining about an occasional misspelled word or a not-fully-fact-checked blog post, even one presented by a professional publishing company, not only defeats the purpose of this medium but also makes it more difficult for publishing companies to experiment with new methods of communicating information.

Why Are We So Happy With Windows XP? July 15, 2010

Posted by Peter Varhol in Software platforms, Strategy.

If you’ve shopped for a PC anytime in the last couple of years, you’ve probably noticed that most of the vendors provide a “Windows XP downgrade license” as an option (oddly, usually for a nominal fee).  You can’t get a new PC with Windows XP, but you can pay for the privilege of removing Windows 7 and installing XP on it.

Now Microsoft is extending the availability of that downgrade license for another ten years, until 2020.  This is primarily for business customers, but in effect it means that many of us will be using XP far into the future, in the office if not in the home.

There are certainly reasons for this level of popularity of XP.  Businesses like maintaining a homogeneous computing environment, and XP is the lowest common denominator that works for many organizations.  Most experienced system administrators thoroughly understand XP, and gaining that level of skill for a new OS can take years.  Commercial software availability remains high.  Perhaps most important, it remains good enough for the vast majority of business computing uses.

There may be more insidious factors at work.  Microsoft’s software development model is predicated on Moore’s Law: Gordon Moore noted that the number of transistors on a processor was doubling approximately every eighteen months.  Over time, we’ve taken this to mean that processors are doubling in speed every eighteen months, and for a long time, that’s pretty much what happened.

Microsoft has made each successive operating system larger and arguably more feature-rich, knowing that Moore’s Law would keep up with its code bloat.  This worked until Windows Vista, which was so bloated that it couldn’t run well on any reasonable processor.  Microsoft learned that lesson and scaled back Windows 7, which appears to be more popular, but it’s not clear whether the damage can be undone.

(Arguably, the popular version of Moore’s Law also failed at this time, as processors got faster not through higher clock speeds or more complex pipelines, but rather through multiple processor cores, which Windows can’t effectively take advantage of.)
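
To put the popular version of the law in perspective, here’s a back-of-the-envelope calculation of what steady doubling every eighteen months would have implied.  The release years and the doubling period are just the commonly cited figures:

    # What steady doubling every eighteen months would imply for performance.
    def projected_speedup(years, doubling_period=1.5):
        """Performance multiple after `years` of doubling every `doubling_period` years."""
        return 2 ** (years / doubling_period)

    for label, years in (("XP to Vista (2001-2007)", 6),
                         ("XP to Windows 7 (2001-2009)", 8)):
        print("%s: ~%.0fx" % (label, projected_speedup(years)))

    # Prints roughly 16x and 40x -- the headroom each release was counting on.
    # After about 2005 that headroom arrived mostly as extra cores, which only
    # help software written to run in parallel.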

But the ultimate message is a frightening one – there is nothing that we (or more specifically, businesses) want to do with computers that we can’t do with XP.  There have been no drop-dead must-have applications for Vista and beyond, and few people feel that they absolutely need any of the new features.

For Microsoft, this is both a crowning success and an embarrassing failure.  Windows XP must be considered one of the most successful software products of all time, and Microsoft should be proud of its technical and business accomplishment.  But the fact that it has largely pushed aside later and more sophisticated replacements is strong evidence that something has gone very wrong.

It also means that the desktop OS innovations that began with Vista – most importantly the new user interface and the underlying Windows Presentation Foundation – aren’t considered compelling enough to justify moving to Vista or Windows 7.

If anything heralds the end of the PC era, this is it.

Are We Outgrowing the Need for Privacy? July 11, 2010

Posted by Peter Varhol in Uncategorized.

According to a new survey by the Pew Internet and American Life Project, the Millennials, that generation that grew up at the turn of the century (my god, I remember when that meant around 1900), are far more open to sharing personal information online than previous generations.

Further, the results of the survey suggest that such sharing will be a lifelong habit.  The report notes that people have generally become more private as they age, with more to protect.  However, the Millennial Generation may not have the same requirements for privacy as preceding generations.

I think there’s an element of balancing risk and reward here.  I probably don’t need a lot of privacy, and I have, in a minor way, cultivated an online persona (case in point, this blog).  I don’t mind personal and potentially embarrassing things about me being public; I’ve been embarrassed before, and I’ve gotten over it.

However, we’ve all heard stories about (or experienced first-hand) how online information can be used in ways that we don’t expect.  Our identities can be stolen, or information can be used against us in a variety of ways, such as in divorce proceedings or at our jobs.  One wonders if we are starting to accept these potential dangers as simply a cost of being a citizen of the Digital Era.

I wouldn’t call myself a digital native (as the Pew Internet and American Life Project refers to the Millennials), and I’m not an early adopter of most consumer technologies.  I share online, but very selectively.  I have no doubt that a more open approach to online sharing would have afforded me more friendships, job opportunities, and humorous moments.  Even with the potential dangers, on balance I’ve probably erred on the side of less exposure rather than more.  That likely says something about my personality, but more about the prevailing attitudes during my formative years.

Fundamentally, I’m prepared for the world to change as I grow older, and I’m prepared for the following generations to be smarter than me (hell, I’m prepared for anyone to be smarter than me).  But I’m not fully prepared to have the world search for details of my life.  I don’t think of such details as public or private, but neither would it occur to me to offer them up to strangers.

Is the ability to do just that, without regard to privacy, or public versus personal information, what defines the Millennial Generation?  Or is this a false dichotomy, and merely a more natural reaction to the greater reach of information in general?

GPU Technology Conference Approaches July 8, 2010

Posted by Peter Varhol in Architectures, Software platforms.

NVIDIA reminds me that its GPU Technology Conference is 11 weeks away (I submitted a proposal to speak; I’ll certainly mention it here if it’s accepted).  It’s September 20-23 at the San Jose Convention Center in San Jose, CA.  In the meantime, the company is hosting a Live Chat series, featuring high-profile GPU technology personalities.  Each Live Chat is 30 minutes long and open to the public.  This is a great opportunity for anyone interested in GPU computing to get some virtual one-on-one time with the industry’s top GPU scientists, researchers and developers.

The first Live Chat guest is Ian Buck, inventor of the Brook project and CUDA.  He’s currently Software Director of GPU Computing at NVIDIA.  Ian’s talk at GTC last year, From Brook to CUDA, offered a perspective on how GPU computing has evolved and where the current innovations in GPU technology lie.  During his Live Chat, Ian will give a preview of his GTC talk and will be taking questions about the future of CUDA and GPU computing.

I attended this conference last year, and found it to be one of the most energetic and informative conferences I have attended in many years.  You can tell the state of a particular technology by the enthusiasm of the attendees, and this conference has all the earmarks of a celebration for a significant new technology.

This year, keynote speakers include scientific computing authority Hanspeter Pfister from Harvard University and computer graphics pioneer Pat Hanrahan from Stanford University.

Anyone interested in high performance computing or GPU computing, whether specifically for graphics or for general-purpose computation, should check this out.
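
If you’ve never seen what general-purpose GPU code actually looks like, here’s a minimal sketch that adds two vectors on the GPU using PyCUDA.  It assumes PyCUDA, NumPy, and a CUDA-capable card are available; it’s purely my own illustration, not anything from the conference program:

    import numpy as np
    import pycuda.autoinit              # creates a CUDA context on the default GPU
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # Compile a trivial CUDA kernel: each GPU thread adds one pair of elements.
    mod = SourceModule("""
    __global__ void vector_add(float *a, float *b, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a[i] + b[i];
    }
    """)
    vector_add = mod.get_function("vector_add")

    n = 1 << 20
    a = np.random.randn(n).astype(np.float32)
    b = np.random.randn(n).astype(np.float32)
    out = np.empty_like(a)

    threads = 256
    blocks = (n + threads - 1) // threads
    vector_add(drv.In(a), drv.In(b), drv.Out(out), np.int32(n),
               block=(threads, 1, 1), grid=(blocks, 1))

    assert np.allclose(out, a + b)      # same result as the CPU would give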

Microsoft Kills the Kin July 1, 2010

Posted by Peter Varhol in Uncategorized.

Two months after its introduction, Microsoft pulled its Kin cell phone for the youth market.  Although I never tried the Kin, I liked the concept of a phone specifically designed as the first significant gadget for what will be the most wired and technologically savvy generation in history.  The Sidekick functionality from Microsoft’s Danger acquisition (called Kin Studio on this phone) got good reviews, but the lack of features and applications was odd, given that if any company knows the importance of applications, it’s Microsoft.

The analysts are being kind, noting that Microsoft wants to put all of its efforts into Windows Mobile 7 (also called Windows Phone, but the branding remains ambiguous), and that it may only have delivered the Kin in its present form in order to meet contractual obligations with carriers.  If that’s the case, it’s nothing but silly to spend money building and manufacturing a phone the company intended to kill.

I dislike piling on Microsoft.  There is still much to admire and appreciate in the world’s largest software company.  But if it can’t get a clear success in the device market, it is headed for the same cliff that IBM almost drove off in the early 1990s.

And regrettably, Steve Ballmer is no help.  It’s not clear if he truly believes that such products aren’t important to the long-term success of the company, or if he’s fighting a verbal rear-guard action in hopes that something in the pipeline will take off.  I think even the Microsoft board of directors must realize by now that his best role was an operating one, not a product leadership one.

It’s amazing if Ballmer and the rest of Microsoft’s leadership don’t see the parallels to the IBM of 20 years ago.  In 1990, IBM was by far the dominant mainframe computing vendor, and arguably still the first among equals in the PC market (Micro Channel notwithstanding).  But there were too many daunting challenges as computing shifted from Big Iron to a smaller and far more distributed model.  Lou Gerstner saved the company, but only by making it into something completely different, and shedding over 100,000 employees in the process.

We are fast reaching the day when Microsoft will require such surgery.  And that’s a shame.