Do We Care About Writing Parallel Code?
January 19, 2010 · Posted by Peter Varhol in Software development, Strategy.
My writings on developing code to run in parallel raise the question of just how important it is for developers to produce code that can execute on multiple processor cores. On the one hand, it seems like a fool's errand. Few individual applications seem to need to run different threads on different cores. Modern operating systems can, to some extent, dispatch different processes to different processors, although in practice those generally have to be entirely separate applications.
Certainly it’s important for scientific and engineering applications, where many well-known algorithms can be parallelized. As difficult as it is to break up code so that it can be dispatched safely to multiple processors, the techniques are in regular use in these fields.
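To make that concrete, here is a minimal data-parallel sketch in Python. The choice of language and the trivial `square` workload are my own illustrative assumptions, not anything drawn from a particular scientific code; the point is only the pattern of splitting independent work across processes so the operating system can place them on separate cores.

```python
from multiprocessing import Pool

def square(x):
    # CPU-bound work on one element. Each worker runs in its own
    # process, so the OS is free to schedule workers on separate cores.
    return x * x

def parallel_squares(values, workers=4):
    # The classic data-parallel pattern: split the input across a pool
    # of worker processes and collect the results in order.
    with Pool(processes=workers) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]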
But how about general-purpose business applications? Despite my playing up the need for software to take better advantage of multi-core processors, there doesn’t seem to be any hard evidence that businesses are clamoring for applications that run on all processor cores.
But is this really true? Randy Kahle of 1060 Research tells me that the enterprise IT leaders he talks to are desperate to hire developers who understand the issues involved in writing code that can execute in parallel, and who have experience doing so. He is not sure why this is the case.
Nor am I, but I’m going to take a stab at it. For a number of years, hardware advances have outpaced software. Processor performance, memory speed, and graphics display have all made significantly greater gains than software. For example, it took close to ten years after the widespread availability of 32-bit processors before we had a real 32-bit desktop operating system and a few desultory 32-bit applications.
So what happened? The software not only caught up, it thumbed its nose at the hardware as it flew by. It wasn’t because we got any better at writing software, but because hardware took a quick right turn off the track altogether. Hardware continues to accelerate, but on an entirely different racetrack, so to speak.
The end result is that software continues to become more complex, consuming ever more computing power. That computing power, while still growing in line with Moore’s Law, is delivered not through higher clock rates or deeper pipelines, but through multiple cores that look like separate processors to the application.
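As a sketch of what that shift means for application code, the snippet below (the `busy` workload and the function names are hypothetical) asks the operating system how many cores it sees and explicitly fans work out across them. Without that explicit step, a single-threaded program gets exactly one core's worth of the power the hardware offers.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy(n):
    # A small CPU-bound task standing in for real application work.
    total = 0
    for i in range(n):
        total += i
    return total

def fan_out(task_size=100_000):
    # Each core looks like a separate processor, so the application must
    # deliberately spread work across them; one worker per visible core.
    cores = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=cores) as ex:
        return list(ex.map(busy, [task_size] * cores))

if __name__ == "__main__":
    results = fan_out()
    print(f"ran {len(results)} tasks across {os.cpu_count()} cores")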
Experienced IT leaders understand this disconnect intuitively, even if they are not entirely clear on why or how it occurred. They understand that their applications need more power, and they look for ways to get it, whether from multiple cores, the cloud, or entirely new architectures. They also understand that they need developers who can harness the processing power already available, so they look for coders who can build applications that make the same right turn the hardware made.