Why Parallel Processing? Why now?
Many software companies have applications in use by their customers that have significant runtimes, and for which fast runtime is a necessity or a competitive advantage. There has always been pressure to make such applications go faster. Historically, as processors increased in speed, the needed speedups could often be achieved by tuning the single-CPU performance of the program and by running on the latest and fastest hardware. In the Electronic Design Automation industry, it has always been the case that the newest machines had to be used to run the design tools that were designing the next generation of processors. The speed and memory capacity of the newest machines were always just enough to design the next generation of chips. Other types of CPU-intensive software have ridden the hardware performance curve in the same way.
We will no longer see significant increases in the clock speed of processors. The power consumed by the fastest possible processors generates too much heat to dissipate effectively with known technologies. Instead, processor manufacturers are adding multiple processor cores to each chip. Why does this help? Dynamic power consumed = Capacitance * Voltage^2 * Frequency. If a given calculation is moved, perfectly, from one processor running at N gigahertz to 2 parallel processors running at N/2 gigahertz, where do the savings come from? It would seem that each processor uses half the power, but now there are 2 processors, which would mean the same total power is used. The savings come from the fact that slower processors can run at a lower voltage. For example, a processor running at half the frequency can run at around 8/10 of the voltage. 0.8^2 is 0.64, which implies a 36% power savings. If you scale this up to 32 CPUs, it becomes possible to get a lot of compute power for much lower power consumption, and therefore much lower required heat dissipation. Eventually it seems that even cell phones and other embedded devices will move to multi-core processing for this reason: more compute capability, or longer battery life for the same capability. Both are compelling values.
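To make the arithmetic concrete, here is a small back-of-the-envelope sketch in Python. The capacitance value, the 3 GHz clock, and the 0.8x voltage-scaling factor are illustrative assumptions, not measurements of any real processor.

```python
# Rough sketch of the dynamic power argument above.
# Dynamic power ~ C * V^2 * f (capacitance, voltage, frequency).

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic power dissipation: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

C = 1.0    # arbitrary capacitance units (illustrative)
V = 1.0    # nominal voltage, normalized
F = 3.0e9  # a 3 GHz single core, for example

single_core = dynamic_power(C, V, F)

# Two cores at half the frequency, which lets the voltage drop to ~0.8x.
dual_core = 2 * dynamic_power(C, 0.8 * V, F / 2)

savings = 1 - dual_core / single_core
print(f"Power savings from 2 half-speed cores: {savings:.0%}")  # ~36%
```

Running this prints a savings of about 36%, matching the figure above.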
Part of the assumption behind this power savings is that the parallel program running on the 2 slower processors makes perfectly efficient use of them. Well, nothing in the real world is perfectly efficient. Even if the parallel code is not perfectly efficient, as long as it is reasonably efficient there is still a benefit. If the parallel code is inefficient, the parallel program may actually use more power on the slower processors than the serial program running on the single fast processor. However, since faster processors that won't melt can no longer be built, we are kind of stuck with going parallel and need to do our best.
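Here is a hedged extension of the same toy model that folds in parallel efficiency and compares total energy (power times runtime). The voltage-scaling factor and the efficiency values are made-up inputs used only to show the trend.

```python
# How parallel efficiency changes the comparison, using the same toy numbers.
# All figures are illustrative assumptions, not measurements.

def energy(power, runtime):
    """Energy = power * time."""
    return power * runtime

T = 1.0                  # serial runtime on the fast core (normalized)
P_fast = 1.0             # power of the fast core (normalized C * V^2 * f)
P_slow = 0.8 ** 2 * 0.5  # one half-speed core at ~0.8x voltage -> 0.32

serial_energy = energy(P_fast, T)

for efficiency in (1.0, 0.8, 0.64, 0.5):
    # With parallel efficiency E, two half-speed cores take T / E of wall-clock time.
    parallel_runtime = T / efficiency
    parallel_energy = energy(2 * P_slow, parallel_runtime)
    ratio = parallel_energy / serial_energy
    print(f"efficiency {efficiency:.2f}: parallel energy / serial energy = {ratio:.2f}")
```

With these made-up numbers, the break-even parallel efficiency is about 64%: below that, the two slower cores spend more energy on the job than the single fast core would have.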
We are stuck with it because, from a software development perspective, a large new burden is being placed on software developers: to write programs that are as efficient as possible and that make use of N processors, where ideally N is configurable by the user and can be increased as new processor chips with more cores become available. For most developers this is something really new and really complex. It also presents a huge discontinuity for software companies with large investments in legacy code.