PARALLEL WORLDS

SASWAT RAJ

Things change fast in computer science, but odds are that they will change especially fast in the next few years. Much of this change centers on the shift toward parallel computing. In the short term, parallelism will take hold in massive-scale data analysis, but in the longer term the shift will affect all software, because most existing systems are ill-equipped to handle this new reality.

Like many changes in computer science, the rapid shift toward parallel computing is a function of technology trends in hardware. Most technology watchers are familiar with Moore’s Law, and the more general notion that computing performance doubles about every 18-24 months. This continues to hold for disk and RAM storage sizes, but a very different story has unfolded for CPUs in recent years, and it is changing the balance of power in computing — probably for good.

What Moore’s Law actually predicts is the steady doubling of the number of transistors that can be placed on an integrated circuit. Until recently, those extra transistors were used to increase CPU speed. But in recent years, limits on heat and power dissipation have prevented computer architects from continuing that trend: individual CPUs are simply not getting much faster. Instead, the extra transistors from Moore’s Law are being used to pack more CPUs into each chip.

Most computers being sold today have a single chip containing between two and eight processor “cores.” In the short term, this still seems to make our existing software go faster: one core can run operating system utilities, another can run the currently active application, another can drive the display, and so on. But remember, Moore’s Law continues to double the core count roughly every 18 months. Nine years is six doubling periods, so your laptop in nine years will have something like 128 processors, and a typical corporate rack of 40-odd computers will have something in the neighborhood of 20,000 cores.
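
The arithmetic behind those numbers is simple doubling, and the short sketch below works it out. It assumes, purely for illustration, a two-core laptop and eight-core servers today and an 18-month doubling period; the starting core counts are guesses for the sake of the example, not figures from any vendor.

    # Back-of-the-envelope projection of core counts under an assumed
    # 18-month doubling period. The starting counts are illustrative guesses.
    def projected_cores(cores_today, years, doubling_months=18.0):
        doublings = (years * 12) / doubling_months
        return int(cores_today * 2 ** doublings)

    if __name__ == "__main__":
        laptop = projected_cores(cores_today=2, years=9)     # 2 * 2**6 = 128
        rack = 40 * projected_cores(cores_today=8, years=9)  # 40 * 512 = 20,480
        print("Laptop in nine years: ~%d cores" % laptop)
        print("Rack of 40 servers:   ~%d cores" % rack)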

Parallel software should, in principle, take advantage not only of the hundreds of processors in a single machine, but of an entire rack, even an entire data center of machines. Since individual cores will not get appreciably faster, we need massively parallel software that can scale up with the growing number of cores, or we will effectively fall off the exponential growth curve of Moore’s Law. Unfortunately, the large majority of today’s software is written for a single processor, and there is no known technique to “auto-parallelize” these programs.
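
To make that gap concrete, the sketch below contrasts a sequential loop with a hand-parallelized version of the same work, using Python's standard concurrent.futures module. The work function and inputs are made-up stand-ins; the point is that the split across cores has to be written explicitly by the programmer, because nothing in the toolchain will discover it automatically.

    # A sequential loop and an explicitly parallelized version of the same
    # work. The work function and inputs are illustrative stand-ins.
    from concurrent.futures import ProcessPoolExecutor

    def expensive_step(x):
        # Stand-in for a CPU-bound computation.
        return sum(i * i for i in range(x))

    def run_sequential(inputs):
        # One core does all the work, no matter how many are available.
        return [expensive_step(x) for x in inputs]

    def run_parallel(inputs):
        # The programmer fans the work out across worker processes, roughly
        # one per core; the runtime does not discover this split on its own.
        with ProcessPoolExecutor() as pool:
            return list(pool.map(expensive_step, inputs))

    if __name__ == "__main__":
        data = [200_000] * 16
        assert run_sequential(data) == run_parallel(data)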