What are parallel computing, grid computing, and supercomputing?
Parallel computing
Parallel computing is the concurrent use of multiple processors (CPUs) to do computational work. In traditional (serial) programming, a single processor executes program instructions step by step. Some operations, however, have multiple steps that do not have time dependencies and therefore can be separated into multiple tasks to be executed simultaneously. For example, adding a number to all the elements of a matrix does not require that the sum for one element be available before the sum for the next is computed. The elements of the matrix can be distributed among several processors and the sums performed simultaneously, making the results available faster than if all the operations had been performed serially.
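As a concrete illustration of that matrix example, the following C sketch uses OpenMP, a widely used shared-memory parallel library; the matrix dimensions and the constant being added are arbitrary values chosen for the example.

    #include <stdio.h>
    #include <omp.h>

    #define ROWS 1000
    #define COLS 1000

    int main(void) {
        static double matrix[ROWS][COLS];

        /* Fill the matrix serially. */
        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                matrix[i][j] = (double)(i * COLS + j);

        /* Each element update is independent of the others, so the
           iterations can be divided among the available processors. */
        #pragma omp parallel for
        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                matrix[i][j] += 5.0;

        printf("matrix[0][0] = %f (up to %d threads used)\n",
               matrix[0][0], omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP-aware compiler (for example, gcc -fopenmp), the loop iterations run across multiple processors; the pragma, not the loop body, expresses the parallelism.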
Parallel computations can be performed on shared-memory systems with multiple CPUs, distributed-memory clusters made up of smaller shared-memory systems, or single-CPU systems. Coordinating the concurrent work of the multiple processors and synchronizing the results are handled by program calls to parallel libraries; these tasks usually require parallel programming expertise.
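On a distributed-memory cluster, that coordination and synchronization happen through explicit library calls. The minimal sketch below uses MPI, a standard message-passing library, to combine partial results computed by separate processes; the contribution each process makes (its rank plus one) is invented purely for the example.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

        /* Each process computes its own partial result; here it
           simply contributes rank + 1. */
        double partial = (double)(rank + 1);

        /* MPI_Reduce coordinates the processes and combines the
           partial results into a total on process 0. */
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum across %d processes: %f\n", size, total);

        MPI_Finalize();
        return 0;
    }

Such a program is typically compiled with mpicc and launched with something like mpirun -np 8 ./a.out; the exact commands depend on the MPI implementation and the site's batch system.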
At Indiana University, the UITS Scientific Applications and Performance Tuning (SciAPT) group can help programmers convert serial codes to parallel codes and optimize their performance. To arrange a consultation, email SciAPT.
Grid computing
The term "grid computing" denotes the connection of distributed computing, visualization, and storage resources to solve large-scale computing problems that otherwise could not be solved within the limited memory, computing power, or I/O capacity of a system or cluster at a single location. Much as an electrical grid provides power to distributed sites on demand, a computing grid can supply the infrastructure needed for applications requiring very large computing and I/O capacity.The creation of a functional grid requires a high-speed network and grid middleware that lets the distributed resources work together in a relatively transparent manner. For example, whereas sharing resources on a single large system may require a batch scheduler, scheduling and dispatching jobs that run concurrently across multiple systems in a grid requires a metascheduler that interacts with each of the local schedulers. Additionally, a grid authorization system may be required to map user identities to different accounts and authenticate users on the various systems.