
Can we use GPUs in Desktop Computing? February 25, 2010

Posted by Peter Varhol in Software development, Software platforms, Software tools.

Graphical Processing Units (GPUs) are clearly faster for certain types of computations than industry standard CPUs.  I’ve written about the potential of GPUs on several occasions over the last several months, specifically featuring the Nvidia GPUs, which are the most advanced and the most heavily marketed as general-purpose computational processors.

Yet there are significant barriers to making use of these processors.  Probably the highest barrier is that code has to be written and compiled specifically for the GPU.  For the most part, that’s C code (C++ can be compiled down to C, so it can be employed as well), but there are restrictions, such as the inability to use function pointers.  Rather, functions have to be called directly.  There are also Java bindings available for some libraries, so it is possible to get some benefit for Java applications.

But the big problem is that you need to recompile code in order to take advantage of GPU performance (and possibly make some code changes before doing so).  That means you either have to own your own code, or be dependent upon your software vendor to do the porting for you and offer it as a product.  Either way, it can be a difficult road.

But here’s an easier way of getting code to run on GPUs.  A startup company called AccelerEyes is working to ease the development burden of moving code over to GPUs.  It has started doing so with MATLAB, a special-purpose language from The MathWorks that is used extensively by scientists and engineers.  Its product is called Jacket.

Here’s how it works.  You examine your code and tag data structures that might execute more quickly on a GPU.  Jacket takes those tags and automatically compiles those data structures into GPU-executable code.  When functions operate on those tagged data structures, Jacket compiles the functions to GPU code and fetches the data into GPU memory space.  When the computation is complete, the data is returned to the CPU space.

The truly impressive thing about Jacket is that it’s completely transparent to the developer and user.  Once the developer tags the data structures, everything else is done under the covers.
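To make the tagging idea concrete, here is a minimal MATLAB sketch of what that workflow might look like.  The specific identifiers (a `gsingle` cast to tag data for the GPU, a `double` cast to bring results back) are my assumptions for illustration, not names confirmed above; check the Jacket documentation for the exact API.

```matlab
% Hypothetical sketch of the Jacket tagging pattern.
% The gsingle/double cast names are assumptions for illustration.

A = gsingle(rand(1024));   % tagging cast: data moves to GPU memory
B = gsingle(rand(1024));   % same for the second operand

C = A * B + sin(A);        % ordinary MATLAB code; because the operands
                           % are tagged, these operations compile to and
                           % execute as GPU code

result = double(C);        % casting back returns the data to CPU space
```

The appeal of this design is that everything after the tagging cast is unchanged MATLAB, which is exactly the transparency described above.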

The company is also working with the Nvidia Compute Unified Device Architecture (CUDA), which expands the use of GPUs into an architecture for running code in parallel.  That work can extend the single-GPU support of the base Jacket product to as many as eight GPUs in a single system image machine, and ultimately to GPU clusters.

MATLAB is an important first step.  If AccelerEyes can expand that to more general-purpose languages, it will encourage applications vendors to offer GPU-ready versions of their products.  Because GPUs are far faster for floating-point computations, yet less expensive than standard CPUs, they have the potential to significantly speed up computationally intensive operations.

If you’re using MATLAB and would like better performance, take a look at Jacket today.  If you develop in more general-purpose languages, keep an eye on this company.  When it releases Jacket for those languages, any vendor of computational applications will be able to use it to leverage GPU horsepower.  Way cool.

Comments»

1. Will Dwinnell - February 27, 2010

It’ll be interesting to see where this goes. As far as I know, the GPU-based solutions so far have been restricted to single-precision arithmetic. A competing solution comes from the parallel processing possible on multi-core processors. Exciting times, indeed.

