Tuesday 28 October 2014

HYPER THREADING and MULTIPLE CORE

Today we had an introduction to CORBA (Common Object Request Broker Architecture) and the foundations of parallel computing. I want to touch on hyper-threading and multiple cores. Read carefully...

Hyper Threading

As has already been noted, memory delay has become an important problem for computer performance. When an instruction requires data that is in the second-level cache, it may have to wait a cycle or two.

During this time, the CPU will look for other instructions that do not depend on the result of the blocked instruction and execute them out of order. However, out-of-order execution is at best good for a dozen instructions. When an instruction needs data from DDR DRAM, it will be blocked for a length of time during which the CPU could have run hundreds of instructions.
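To get a feel for how expensive a trip to main memory is, here is a small Java sketch of my own (not from the lecture): it sums the same array once in order, which is cache-friendly, and once in a random order, which forces frequent main-memory accesses. The array size and the exact timings are machine-dependent assumptions; the point is only that the random walk is usually several times slower.

import java.util.Random;

// Rough illustration of memory delay: sequential vs random access over the
// same data. Numbers vary by machine; only the relative gap matters.
public class MemoryDelayDemo {
    static final int N = 1 << 24;   // ~16 million ints, larger than the caches

    public static void main(String[] args) {
        int[] data = new int[N];
        int[] order = new int[N];
        Random rnd = new Random(42);
        for (int i = 0; i < N; i++) {
            data[i] = i;
            order[i] = i;
        }
        // Shuffle the visit order so almost every access misses the cache.
        for (int i = N - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }

        long t0 = System.nanoTime();
        long sum1 = 0;
        for (int i = 0; i < N; i++) sum1 += data[i];         // sequential walk
        long t1 = System.nanoTime();
        long sum2 = 0;
        for (int i = 0; i < N; i++) sum2 += data[order[i]];  // random walk
        long t2 = System.nanoTime();

        System.out.printf("sequential: %d ms, random: %d ms (sums %d / %d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum1, sum2);
    }
}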

In 2002, Intel tried to address this memory delay problem with a trick called Hyper-Threading. Rather than duplicate the entire circuitry of a CPU, a Hyper-Threading processor simply duplicates the registers that hold all the data the OS would otherwise have to remove from the CPU in order to run a different thread. The OS thinks that there are two CPUs and assigns two different threads to them. All the registers and data needed to run each thread are loaded into the same CPU chip at the same time.
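As a quick illustration of the "OS sees two CPUs" point, the JVM call below reports the number of logical processors the operating system exposes; on a hyper-threaded machine this is typically twice the number of physical cores. This is just a sketch of mine, not part of the lecture material.

// Prints the logical processor count the OS reports to the JVM.
public class LogicalCpuCount {
    public static void main(String[] args) {
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors visible to the OS/JVM: " + logical);
    }
}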

When both threads are able to run at full speed, the CPU spends half its time running instructions for each thread. Unlike the OS, the CPU has no notion of "priority" and cannot favor one thread because it is more important. However, if one thread becomes blocked because it is waiting for data from the very slow main memory, the CPU can apply all of its resources to executing instructions for the other thread. Only when both threads are simultaneously blocked waiting for data from memory does the CPU become idle.

Multiple Core

Moore's Law says that roughly every 18 months the number of transistors on a chip can double. About one Moore generation after Intel introduced Hyper-Threading, both Intel and AMD (Advanced Micro Devices) decided to spend the extra transistors to take the next step and create two real CPUs in the same chip.

It has always been possible to do this in any 18-month cycle. However, vendors previously decided to use the transistors to make the single CPU run faster, by supporting out-of-order execution and register renaming.

A server tends to assign a thread to each incoming user request. Generally all network users are of equal priority, so threading is an obvious choice for server software. However, desktop users tend to do one primary thing at a time. If you are running a low-intensity job like word processing or Web browsing, CPU speed hardly matters. However, playing video games, retouching photographs, compressing TV programs, and a few other consumer programs will use a lot of one CPU, so making that one CPU run faster seemed more important.
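Here is a minimal, hypothetical sketch of that thread-per-request style in Java: an accept loop hands every incoming connection to its own thread, so many equal-priority requests can proceed in parallel across the available logical CPUs. The port number and the one-line reply are made-up placeholders; a real server would normally use a thread pool.

import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// One incoming connection -> one handler thread.
public class ThreadPerRequestServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090)) {   // placeholder port
            while (true) {
                Socket client = server.accept();               // one connection...
                new Thread(() -> handle(client)).start();      // ...one thread
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            out.println("hello from " + Thread.currentThread().getName());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}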

Engineers ran out of ideas for using transistors to make a single program run faster. So, starting around 2005, they began building "dual core" chips with two CPUs. That forced some of the software vendors, particularly the video game makers, to redesign their software to make better use of the second processor.

Two CPUs can do twice as much work as one CPU if you can keep both processors busy all the time. Unfortunately, that is not realistic. Even on a server, the value of each additional processor goes down, and on a desktop there just isn't enough work to distribute uniformly. So while Intel is beginning to show off a Core 2 Quad chip with four CPUs, it makes little sense to go much farther than that.
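One standard way to put numbers on this diminishing return is Amdahl's law, which the post does not mention by name: if a fraction p of a program can be spread over n processors, the best possible speedup is 1 / ((1 - p) + p/n). The little Java sketch below assumes p = 0.8 purely for illustration, not as a measured value.

// Ideal speedup under Amdahl's law for 1, 2, 4 and 8 processors.
public class DiminishingReturns {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        double p = 0.8;  // assume 80% of the work can run in parallel
        for (int n = 1; n <= 8; n *= 2) {
            System.out.printf("%d CPU(s): speedup %.2fx%n", n, speedup(p, n));
        }
    }
}

With that assumed 80% parallel fraction, two CPUs give about 1.67x, four give 2.5x, and eight only 3.33x, which is why each extra core buys less than the last.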

That's all....
