Airline Security Failure A Black Mark for IT Management December 30, 2009

Posted by Peter Varhol in Strategy.

I’ve been following with interest as the details emerge from the attack on Northwest Flight 253 from Amsterdam to Detroit on Christmas Day.  I am a Northwest (now Delta) frequent flyer, and I’ve actually been on that particular flight before, so I’m curious as to what went wrong.  In particular, I’ve flown through Amsterdam on several occasions, and find its security procedures to be among the most organized and effective I’ve seen.

The fallout is predictable.  Security experts bemoan the lack of stricter screening procedures, politicians find yet another opportunity to engage in partisan finger-pointing, and law-abiding travelers are bemused to find that many of the time-consuming screening procedures are more appearance than reality.

Once again, I draw different conclusions.  Unknown to many of us, there are in fact multiple lists that designate terrorists, persons under suspicion, those denied the privilege of flying, and so on.  Further, many countries maintain their own lists.  Some sharing goes on between selected lists, but each has its own criteria for adding names.

According to published news sources, the presumed terrorist was on the British list that denied him admittance to that country, the result of a visa request to enroll as a student at a possibly nonexistent school.  This database doesn’t seem to have been shared with the U.S.  However, the suspect’s father warned the U.S. Embassy in Nigeria of his son’s possible extremist tendencies, and a report was filed with the State Department in Washington DC.  That report ultimately went to the National Counterterrorism Center for entry in its terrorism database.

That, apparently, is separate from the no-fly list maintained by the Department of Homeland Security.  Further, because this person had not been on the list before, there was no check to see if he had ever visited the U.S. in the past, or had a current visa (it turned out that he did).

This is a business and IT management problem; one that most people with some experience in the private sector have come across many times, and have even occasionally overcome.  It consists of poorly defined or incomplete business processes, coupled with business and IT system silos, and political disagreements over authority, prerogatives, and span of control.

These problems are actually very similar to what is faced in the private sector.  Databases are separate, and there are executive directives to join them to streamline business, but there are process, technical, and political obstacles to doing so.  When there is success in resolving these challenges and bringing together disparate systems, the enterprise as a whole invariably benefits.

Now, I like maintaining a certain amount of privacy, and subscribe at least in part to the concept that my privacy is better protected by confusion and silos of government information.  However, I also acknowledge that government would work better if its databases shared information more readily.  Whether you consider that a breach of trust or good management is open to debate.  I’m not sure what I’m willing to give up, if anything, for greater personal security, but I don’t think efficient IT systems should be a part of that tradeoff.

Another Tablet Computer? Bullwinkle, That Trick Never Works December 29, 2009

Posted by Peter Varhol in Software platforms, Strategy.

The computer industry seems to be gearing up for another round of tablet computing devices, which use a stylus or pen as input.  As someone who admits to being around for the first two or three rounds, I am dubious.  The first couple of rounds were led by proprietary systems such as the PenPoint OS, which was followed by Microsoft’s copycat technology, Windows for Pen Computing in the mid-1990s.

This doesn’t count the several different PDAs, such as the Apple Newton, that used pen input with a small form factor for activities like note-taking and recording appointments.

The tablet form factor today typically incorporates a standard laptop PC with a display that swivels down flat against the body of the laptop to create a writing and drawing surface.  Such systems are used in niche markets, such as medicine and graphic design, for very specific purposes.

Yet general-purpose pen computing is a holy grail to which the industry has aspired for decades.  That aspiration has probably been driven by three factors.  First, a keyboard, even with a pointing device, is a highly limiting way of providing input to a computer.  While appropriate for text, it can’t easily handle the input of drawings, objects, or abstract concepts.  Second, a keyboard limits computer use to those who have learned to type.  If the computer is to be a universal device, it needs an input method that any person with a basic education can use.  Third, a keyboard is simply the largest part of a computer today, and one that the industry has been trying to downsize without success.

Part of the problem has been the technical issue of handwriting recognition.  Recognition technology is reasonably good today, and systems have the computing power to resolve ambiguities through brute force, but people’s handwriting varies so greatly that there can still be a number of input errors.

But another part is that there is yet to be the proverbial killer application for the tablet.  There are certain activities that lend themselves to pen input, but for those who already know how to use a computer, the keyboard is at least adequate.

Perhaps it is different this time.  I can think of two reasons why that might be the case.  First, the Internet as a computing resource doesn’t require nearly as much text input as the standalone computer.  We can search, read, bookmark, and more with minimal use of a keyboard.

Second, the rumor mill surrounding Apple and an upcoming tablet computer launch indicates that the time may be right.  Apple has done a superior job over the last several years on hardware and systems design, and while it has had poor designs in the distant past, just about everything it’s done since the turn of the century has been right on.

But we still haven’t found a broad, compelling application for pen computing on a tablet form factor.  I’m sure I can list off eight or ten applications that are invaluable in specific vertical markets, but nothing that’s going to cause millions of ordinary users to go out and buy a tablet computer.  Even if it is an Apple.  One or more of the underlying market or technical issues I’ve sketched above have to be successfully addressed before tablet computing can reach the mainstream.

Programmers, Productivity, and Pay December 27, 2009

Posted by Peter Varhol in Software development, Strategy.

John Cook, Tyler Cowen (Tyler Cowen?), and James Kwak have all offered opinions in recent days on why programmers who are ten times more productive than other programmers aren’t paid ten times more, as, more or less, salespeople are.  Cowen, as is his bent, approaches the problem as one of economics, while Cook and Kwak look more at the environment in which programmers work.  I think all three touch on some good points, but miss the larger picture.

To backtrack a little, Ed Yourdon, citing research I believe was done by Capers Jones, noted in his book The Decline and Fall of the American Programmer that the most productive programmers were an order of magnitude better than the least productive.  While it is a fool’s errand to attempt any substantive measurement of programmer productivity, an observer of programmers at work over a period of time would say that there is a big difference in skill between the best and the worst.

In any industry, there is always a dichotomy between those who make the product and those who sell it.  For those senior executives responsible for revenue and costs, the sales operation is directly responsible for the revenue side of that equation.  Making the product to sell will always, regrettably, be considered a cost.

Sure, management knows it needs both, but when push comes to shove, you can sell inventory.  Or last year’s products.  Or promises for future products.  And those making the product can be subject to cuts, while the sales side can receive further incentives to enhance revenues.

In software, it’s even easier.  The thing that we make only has to be made once.  While that process is long and complex, additional copies of the completed software are available for the cost of a download.  Even in the past, when we made CDs and put them into boxes, the cost of additional copies was only a couple of dollars apiece.

And it’s easy to sell last year’s software rather than invest in a new version.  There are customers that still need last year’s features, and good salespeople can gloss over the fact that the product hasn’t been enhanced in a year or more.  In short, it is possible to wring more revenue out of software without spending a dime to improve it.

What does this have to do with the pay differential between the best and worst?  We all know there are the best and worst of every field.  Those responsible for revenues and costs in a company know it too.  But because programmers are a cost, not a source of revenue (yes, I know it’s not logical, but that’s the thought process), paying the best programmers at a level similar to star sales professionals simply increases that cost; programmers don’t close deals, so they don’t directly generate revenue.

Are Software Patents Helping or Killing Innovation? December 24, 2009

Posted by Peter Varhol in Strategy.

The big story of the last few days (other than NORAD tracking that mysterious object flying in from the North Pole) was that Microsoft lost its appeal in the patent infringement case brought by tiny (30 employees) i4i of Toronto, Canada.  The end result is that Microsoft must stop selling Word 2007 with the offending technology (a way of editing XML).

I sympathize with both sides.  Even in a company with the size and market power of Microsoft (perhaps especially in a company with the size and market power of Microsoft), it is difficult to know what is a true innovation on your part, and what is an approach similar to one someone else has already patented.

On the other hand, I may be one of the few people left standing who remember the Stac Electronics incident of the early 1990s.  Microsoft, in talks to acquire this maker of disk compression technology, had its engineers look at the details of Stac’s approach.  Microsoft then halted acquisition talks and in 1993 came out with its own compression technology with MS-DOS 6.0 that looked suspiciously similar to Stac’s.  It was not Microsoft’s finest moment.

i4i is not a patent troll, but by all accounts a small software company seeking to protect a technology it patented in good faith.  It’s especially difficult for a small company to do so, given the legal uncertainties and the resources required to face an adversary with virtually unlimited time and money.  Most choose to shrug and move on without a fight.

I would like to think that most innovation occurs in a strictly meritocratic way – we take the current state of the art, and improve upon it in imaginative and useful ways.  It looks like it works that way, but there is a lot of ambiguity and occasionally outright theft under the covers.

I think that most people agree that patenting business methods (think Amazon “1-Click” checkout) is a bad idea, and we seem to be moving ever so slowly away from that practice.

But beyond that, technology patents are a mixed blessing.  It used to be that companies interested in legal protection would copyright the code (with some redaction), but that still allowed for copying functionality and even appearance through reverse engineering.  Yet the U.S. patent rules and process are still largely mired in an era where inventions were physical things rather than ideas expressed as software.

Don’t count on that changing anytime soon.  Today, software companies that can afford to do so patent just about everything they can, not to protect their intellectual property, but rather to build a patent portfolio.  That patent portfolio is perhaps the best protection against litigation, as many lawsuits are settled with cross-licensing agreements.  It may not fit our definition of justice, but it seems to be the best way of resolving patent ambiguities today.

The Mono Project Releases Silverlight Clone Moonlight 2 December 22, 2009

Posted by Peter Varhol in Software development, Software platforms.

The Mono Project, an ambitious open source project to replicate in essential ways the Microsoft .NET Framework and associated tools, last week announced the availability of Moonlight 2, its version of Microsoft Silverlight.  It incorporates essentially all of the capabilities of Silverlight 2 and some of those of the newer Silverlight 3.

Mono is a bit of a quixotic effort, led by Miguel de Icaza, the originator of the project.  From the start, the goal was to create an open source version of the .NET Framework that could run on alternative systems, especially Linux.  Because Microsoft turned the .NET Framework specification over to ECMA (it used to stand for the European Computer Manufacturers Association, but shortened its name to ECMA International, at least in part due to its growth as a standards body) as a published standard, Mono is written from the spec, rather than reverse engineered.

De Icaza’s company was acquired by Novell several years ago, and Mono has found its way into several of Novell’s network administration tools, including parts of ZENworks.  Because it is open source, it has also been used in third-party projects, including the .NET-to-Java cross-compiler from my friends at Mainsoft.

I haven’t looked in detail at the structure of Silverlight, but I’m guessing that Moonlight is a pretty easy derivative.  Silverlight is a subset of the .NET Framework, intended for memory- and compute-limited devices and for Web pages seeking a rich look and feel.  Silverlight hasn’t caught on in a very big way, at least not yet, so it’s not clear that Moonlight makes sense from a market standpoint.  It just may have been something that was easy to do, because the heavy lifting of replicating the .NET Framework in a larger way was already done.

De Icaza and his development team have had difficulty in keeping up with .NET Framework development, because the technical problem is a hard one, and because they have to wait for the new specifications to be published before work on new .NET versions can begin in earnest.

I call the Mono effort quixotic because the impact it has made on development, and Microsoft development in particular, is pretty small.  It’s not clear that there is a big market for using the Microsoft framework and languages on non-Windows systems.  Still, as a technical achievement sustained over time, it is one of the most ambitious and focused projects of the decade.

The Power of the Bazaar December 20, 2009

Posted by Peter Varhol in Architectures, Strategy.

As I read the many stories (here and here, for example) of terrorist hackers intercepting the video feed from US drones being used in Afghanistan and Pakistan, I take away a much different message than those who conclude that additional security measures are needed on our systems to be able to combat all possible threats.  In this case, that conclusion dictates that the video feeds must be encrypted.

Well, yes, that’s the obvious answer.  But there is a deeper meaning when we see hundreds of millions of dollars of technology foiled, or at least compromised, by a $26 hack.  Stories like this remind me of the essential timelessness of Eric Raymond’s seminal work, The Cathedral and the Bazaar.  Raymond deftly makes the claim that proprietary software is analogous to a cathedral, built by a closed cohort of developers working within a traditional development lifecycle.

In contrast, open source software, developed in a dispersed manner by hundreds of different programmers (mostly volunteers), works in at best a loosely organized meritocracy, with initiative and fitness of code being the primary criteria for inclusion in the build.  Raymond claims, with some justification, that the latter approach produces superior software more quickly.

How does this apply to hacked drones?  Drones are developed in a cathedral – traditional, very expensive dedicated development teams with a long design cycle employing comprehensive specs and discrete design, development, and test phases.  You end up with pretty much the system that a very small group of acquisitions personnel specified and paid for.  If you didn’t happen to specify an encrypted video channel, it will cost a lot more, and take a lot longer, to add one.

On the other hand, those who are attempting to compromise such a system, terrorist or not, represent the bazaar.  Many across the world, for a variety of different motivations, are attempting a number of different ways of affecting the success of the drones’ missions.  One or more of them is bound to succeed.

This is not to say that defense systems should be open source, with contributions from a worldwide community.  And capturing the drone’s video in real time can’t be much more than marginal help in determining where it is heading, so this particular flaw was not particularly egregious.

But surely we’ve learned that while the cathedral can produce some very good systems and software, its flaw is one of vision.  You can spend a boatload of money, take years in development, get exactly the system you want, and it can still fail.  The bazaar, on the other hand, has strengths, such as a more flexible development model, that can likely be applied to complex and large-scale system development.  If we applied some of the techniques that make open source successful, we might discover that our systems can be more resilient.

What the Heck is No-SQL? December 18, 2009

Posted by Peter Varhol in Architectures, Software development, Software platforms.

In looking at data access in the cloud, I started by noting that there were good reasons why it was difficult or undesirable to run a SQL database in a server instance in a cloud.  This problem has led to something that might be termed the “no-SQL” movement.  The goal is to persist data in some form in the cloud, in a way that is also readily available to the application.  Of course, it’s also important to bring that data back into enterprise data repositories and warehouses at some point.

Because large applications are likely to be distributed across cloud clusters, the data store has to be fast.  To be fast, it is likely distributed; you don’t want writes to be your application bottleneck.  So for these and, I’m sure, other reasons, developers building large enterprise applications targeting the cloud prefer to avoid relational databases and SQL as a query language.

Probably the most common data management technique discussed in these circumstances is MapReduce, first used by Google to manage queries across its vast server farm.  MapReduce (Hadoop is an open source implementation of MapReduce for Java) breaks large problems up into small components and assigns them to individual servers in the cluster.  When the small components are solved, MapReduce reassembles them into the larger solution.  It is said to be especially useful for very large data sets and relatively simple queries.
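To make the idea concrete, here is a toy, single-process sketch of MapReduce-style word counting in Python.  It is only an illustration of the concept, not Hadoop’s actual API; in a real cluster the map and reduce steps would run in parallel on separate servers.

    # Toy, single-process sketch of the MapReduce idea: count words across
    # "documents."  In a real cluster, map and reduce run on separate servers.
    from collections import defaultdict

    def map_phase(document):
        # Map: break the big problem into small (key, value) pieces.
        return [(word, 1) for word in document.split()]

    def reduce_phase(word, counts):
        # Reduce: reassemble the pieces into the larger solution.
        return word, sum(counts)

    def map_reduce(documents):
        grouped = defaultdict(list)
        for doc in documents:                    # each doc could live on its own server
            for word, count in map_phase(doc):
                grouped[word].append(count)      # the "shuffle" step: group by key
        return dict(reduce_phase(w, c) for w, c in grouped.items())

    print(map_reduce(["the cloud is big", "the data is bigger"]))
    # {'the': 2, 'cloud': 1, 'is': 2, 'big': 1, 'data': 1, 'bigger': 1}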

MemcacheDB is another alternative.  It is a distributed key-value store built on Berkeley DB that adds a persistent store to the memcached-style approach of speeding up database-driven applications by caching data and objects in memory.
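To show why that matters to an application, here is a minimal sketch of the cache-aside pattern such a store enables.  It is illustrative only: a plain Python dictionary stands in for the MemcacheDB/memcached client, and fetch_user_from_db is a hypothetical placeholder for a slow relational query.

    # Sketch of the cache-aside pattern a memcached-style store enables.
    # The dict stands in for a MemcacheDB/memcached client, and
    # fetch_user_from_db is a hypothetical placeholder for a slow SQL query.
    cache = {}

    def fetch_user_from_db(user_id):
        # Placeholder for an expensive relational database lookup.
        return {"id": user_id, "name": "example"}

    def get_user(user_id):
        key = f"user:{user_id}"
        user = cache.get(key)                   # 1. try the fast in-memory store
        if user is None:
            user = fetch_user_from_db(user_id)  # 2. miss: fall back to the database
            cache[key] = user                   # 3. cache it for later readers
        return user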

There are other alternatives, although I’m not the person to judge them technically.  In-memory databases such as Oracle TimesTen offer high performance for a distributed system, but TimesTen is a commercial product requiring costly licenses.  Object-oriented databases such as ObjectStore from Progress Software can eliminate the need for object-relational mapping, but ObjectStore is once again a costly commercial product.

Incidentally, my friends at 1060 Research tell me that in the company’s NetKernel representational state transfer (REST) middleware (which they call Resource-Oriented Computing, or ROC), data caching is a fundamental part of the architecture, and almost seems to me to be a side effect of the very elegant design of the product.

Whichever technique you use in the cloud (and this list is by no means comprehensive), you still need an additional step – getting the data from a persistent store in the cloud to your enterprise databases, which are almost certainly relational and use SQL as the primary query language (you may also be using XQuery or another XML-based query mechanism in the enterprise, of course).
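As a toy illustration of that last step, the following Python sketch flattens some hypothetical key-value records into relational rows, with the built-in sqlite3 module standing in for the enterprise database; the record layout is invented for the example.

    # Toy sketch of the "last mile": flattening key-value records persisted
    # in the cloud into relational rows.  sqlite3 stands in for the
    # enterprise database; the record layout is invented for the example.
    import sqlite3

    cloud_records = [   # what a batch export from a key-value store might yield
        {"key": "order:1001", "customer": "acme", "total": 250.0},
        {"key": "order:1002", "customer": "globex", "total": 99.5},
    ]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, customer TEXT, total REAL)")
    for rec in cloud_records:
        # A real batch job would run off-hours rather than in real time.
        conn.execute("INSERT INTO orders VALUES (?, ?, ?)",
                     (rec["key"], rec["customer"], rec["total"]))
    conn.commit()
    print(conn.execute("SELECT * FROM orders").fetchall())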

So ultimately you have to map your cloud store into the relational store, albeit probably not in real time.  I’ll write about that next time.

Higher Mathematics Made Easy December 17, 2009

Posted by Peter Varhol in Software tools.

I have a graduate degree in applied mathematics.  I was never very good in mathematics, but my love of the subject matter managed to sustain me through three-plus years of advanced calculus, differential equations, statistics, forecasting, queuing theory, FFTs, Laplace transforms, and the like.  When I’ve taught these subjects, I’ve always stressed that they are both easier to understand, and more relevant, than people have been led to believe.

I didn’t mean that as hyperbole, but rather as a firm notion that useful mathematics didn’t have to be obtuse and out of reach of all but the nerds.  Maplesoft, with Maple 13 and MapleSim 3, has demonstrated that I must have been on to something.  I had the opportunity to examine these products and speak with Tom Lee, Vice President of Applications Engineering and Chief Evangelist at Maplesoft.

Maple, whose technology originally came out of Canada’s University of Waterloo, is the godfather of symbolic mathematics engines.  That means it can manipulate not only numbers, but also variables as parts of equations.  Those equations can be just about anything you would like, from integral calculus to differential equations.  You simply type in the equation, and Maple will solve it.
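Maple’s own syntax differs, but the flavor of a symbolic engine can be sketched with Python’s open source sympy library, which likewise manipulates variables rather than just numbers:

    # Maple's syntax differs, but Python's sympy library illustrates what a
    # symbolic engine does: it manipulates variables, not just numbers.
    import sympy as sp

    x = sp.symbols("x")
    f = sp.Function("f")

    print(sp.solve(x**2 - 4, x))             # algebra: [-2, 2]
    print(sp.integrate(x * sp.sin(x), x))    # calculus: -x*cos(x) + sin(x)
    print(sp.dsolve(f(x).diff(x) - f(x)))    # differential equation: f(x) = C1*exp(x)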

It may seem like little more than an intellectual exercise, except that there are plenty of fields where higher-level mathematics is needed to analyze and solve technical problems.  These include design engineering (civil, structural, aerospace, mechanical; really, just about any type of engineering), financial modeling, forecasting, operations research, and economics.

What did these professionals do in the era before Maple and its competitors?  “Estimated, guessed, or over-engineered,” according to Tom Lee.  It’s not that the math was particularly hard, but there was a lot of it, and with deadlines and other tasks requiring less concentration, a ballpark approach was used more often than not.

MapleSim is a simulation package that takes Maple equations and simulates a system based on those equations.  It includes both predefined equations and many components and examples that can be pieced together in different ways to build different systems.  These can be anything from an automobile carburetor to a bridge.  A simulation of a carburetor can test gasoline consumption or engine efficiency, for example, under different gasoline-air mixtures.  The savings in time and money over building physical prototypes can in some cases be measured in years and millions of dollars.
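MapleSim itself works at a much higher level, with predefined components, but the core idea of simulating a system directly from its equations can be sketched in a few lines of Python with scipy; the damped spring-mass system and its parameters below are made up for illustration, and are not MapleSim-specific.

    # The core idea of equation-based simulation: integrate a damped
    # spring-mass system m*x'' + c*x' + k*x = 0 over time instead of
    # building a physical prototype.  Parameters are hypothetical.
    import numpy as np
    from scipy.integrate import odeint

    m, c, k = 1.0, 0.5, 4.0      # made-up mass, damping, stiffness

    def system(state, t):
        x, v = state             # position and velocity
        return [v, -(c * v + k * x) / m]

    t = np.linspace(0, 10, 200)
    trajectory = odeint(system, [1.0, 0.0], t)   # start displaced, at rest
    print(trajectory[-1])        # by t = 10 the oscillation has mostly damped out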

I’ve been away from mathematics for a while, as have many professionals who learned it as a part of their education but don’t have the opportunity to use it in practice.  Lee points out that there can be a learning curve for Maple in such circumstances, but that ultimately it may drive a resurgence in math-based analytics.

Mathematics.  Good stuff.  Not always easy, but accessible.  With symbolic math engines like Maple, today you can almost call it easy.