
MULTIPROCESSING AND PROCESSOR COUPLING
Multiprocessing is a general term for the use of two or more CPUs within a single computer system. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined. The term multiprocessing is sometimes used to refer to the execution of multiple concurrent software processes in a system, as opposed to a single process at any one instant. However, the term multiprogramming is more appropriate for that concept, which is implemented mostly in software, whereas multiprocessing describes the use of multiple hardware CPUs. A system can be both multiprocessing and multiprogramming, only one of the two, or neither. Processor coupling describes how the processors in a multiprocessing system are connected; let us see it in the next section.
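To make the distinction concrete, here is a minimal Python sketch (the worker function and the amount of work are assumptions for illustration, not part of the original text) that starts one worker process per available CPU, so the operating system can schedule them on separate processors; the same workload run inside a single process would be multiprogrammed at best, never truly parallel.

import multiprocessing as mp

def busy_sum(n):
    # CPU-bound work; each worker process can occupy its own CPU.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cpus = mp.cpu_count()                   # number of CPUs the OS reports
    work = [2_000_000] * cpus               # one chunk of work per CPU (assumed size)
    with mp.Pool(processes=cpus) as pool:
        results = pool.map(busy_sum, work)  # chunks may run in parallel on separate CPUs
    print(cpus, "CPUs,", len(results), "chunks completed")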

Processor Coupling

Tightly-coupled multiprocessor systems contain multiple CPUs that are connected at the bus level. These CPUs may have access to a central shared memory (SMP), or may participate in a memory hierarchy with both local and shared memory (NUMA). The IBM p690 Regatta is an example of a high-end SMP system. Chip multiprocessors, also known as multi-core computing, involve more than one processor placed on a single chip and can be thought of as the most extreme form of tightly-coupled multiprocessing. Mainframe systems with multiple processors are often tightly-coupled.


Loosely-coupled multiprocessor systems, often referred to as clusters, are based on multiple standalone single- or dual-processor commodity computers interconnected via a high-speed communication system. A Linux Beowulf cluster is an example of a loosely-coupled system.

Tightly-coupled systems perform better and are physically smaller than loosely-coupled systems, but have historically required greater initial investments and may depreciate rapidly; nodes in a loosely-coupled system are usually inexpensive commodity computers and can be recycled as independent machines upon retirement from the cluster.

Power consumption is also a consideration. Tightly-coupled systems tend to be much more energy efficient than clusters. This is due to the fact that considerable economies can be realised by designing components to work together from the beginning in tightly-coupled systems, whereas loosely-coupled systems use components that were not necessarily intended specifically for use in such systems.

MULTIPROCESSOR INTERCONNECTIONS

As learnt above, a multiprocessor can architecturally speed up the computer system and improve its throughput. The whole architecture of a multiprocessor is based on the principle of parallel processing, in which processors need to synchronize after completing a stage of computation in order to exchange data. For this the multiprocessor system needs an efficient communication mechanism. This section outlines the different architectures of multiprocessor interconnection, including:

Bus-oriented System

Crossbar-connected System

Hypercubes

Multistage Switch-based System.

Bus-oriented System

Figure 1 illustrates the typical architecture of a bus-oriented system. As indicated, processors and memory are connected by a common bus. Communication between the processors (P1, P2, P3 and P4) and with the globally shared memory is possible over the shared bus. Other than the one illustrated, many different schemes of bus-oriented systems are also possible, such as:

1) Individual processors may or may not have private/cache memory.

2) Individual processors may or may not be attached to input/output devices.

3) Input/output devices may be attached to the shared bus.

4) Shared memory may be implemented in the form of multiple physical banks connected to the shared bus.


[Figure: processors P1, P2, P3 and P4 connected to shared memory over a common bus]


Figure 1. Bus-oriented multiprocessor interconnection
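Before turning to how contention is reduced, a rough calculation (a hypothetical sketch; the per-cycle request probability r is an assumed parameter, not a figure from the text) shows why the single shared bus of Figure 1 saturates as processors are added: if each of p processors needs the bus in a given cycle with probability r, the bus is busy with probability 1 - (1 - r)^p, which approaches 1 quickly.

# Hypothetical illustration of shared-bus saturation (assumed request rate r).
def bus_busy_probability(p, r):
    # Probability that at least one of the p processors requests the bus in a cycle.
    return 1.0 - (1.0 - r) ** p

for p in (1, 2, 4, 8, 16):
    print(p, "processors ->", round(bus_busy_probability(p, r=0.3), 3))
    # 1 -> 0.3, 2 -> 0.51, 4 -> 0.76, 8 -> 0.942, 16 -> 0.997

Once that probability is close to 1, adding processors mainly adds waiting time rather than throughput, which is the contention problem discussed next.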

The above architecture gives rise to a problem of contention at two points: the shared bus itself and the shared memory. The contention can be reduced by employing private/cache memory in either of the two ways explained below:

a cache associated with the shared memory; and

a cache associated with each individual processor.

1) With shared memory


[Figure: processors P1, P2, P3 and P4 on a common bus, with a cache placed in front of the shared memory]


Figure 2: With shared memory

2) Cache associated with each individual processor


[Figure: processors P1, P2, P3 and P4, each with its own cache, connected to shared memory over a common bus]



Figure 3: Individual processors with cache

The second approach, where a cache is associated with each individual processor, is the most popular because it reduces contention more effectively. A cache attached to a processor can capture many of the local memory references; for example, with a cache hit ratio of 90%, a processor on average needs to access the shared memory for only 1 out of 10 memory references, because the other 9 references are already captured by the processor's private cache. Where memory accesses are uniformly distributed, a 90% cache hit ratio therefore lets the shared bus carry roughly one tenth of the traffic, supporting about 10 times the processing it could without caches. The negative aspect of such an arrangement arises when shared writable data are held in multiple caches. In that case cache coherence must be maintained, that is, the multiple physical copies of a single logical datum must be kept consistent with each other in the presence of update activity. Cache coherence can be maintained by attaching additional hardware or by including specialised protocols designed for this purpose, but unfortunately this special arrangement increases the bus traffic and so reduces part of the benefit that processor caches are designed to provide.
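The hit-ratio arithmetic above can be checked with a small sketch (the 90% hit ratio and the 1-in-10 figure come from the paragraph; the reference count and function name are assumptions for illustration):

# Effective shared-bus traffic when each processor has a private cache.
def bus_references(references, hit_ratio):
    # Only cache misses travel over the shared bus to the shared memory.
    return references * (1.0 - hit_ratio)

refs = 1000                                                             # assumed reference count
print("no cache :", round(bus_references(refs, 0.0)), "bus accesses")   # 1000
print("90% hits :", round(bus_references(refs, 0.9)), "bus accesses")   # 100

Cutting the bus traffic to one tenth is what allows roughly ten times as much processing to share the same bus.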

Cache coherence refers to the integrity of data stored in local caches of a shared resource. Cache coherence is a special case of memory coherence.
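One common family of such protocols is write-invalidate snooping; the sketch below is a highly simplified software model (the class, its methods and the four-processor setup are assumptions for illustration, not a description of any particular hardware) of the basic idea: when one cache writes a location, the copies of that location held by the other caches are invalidated, so a later read cannot return a stale value.

# Simplified write-invalidate model: on a write, other caches drop their copy.
class Cache:
    def __init__(self, memory, peers):
        self.data = {}           # address -> cached value
        self.memory = memory     # shared memory, modelled as a dict
        self.peers = peers       # every cache snooping the shared bus

    def read(self, addr):
        if addr not in self.data:                 # miss: fetch from shared memory
            self.data[addr] = self.memory.get(addr, 0)
        return self.data[addr]

    def write(self, addr, value):
        for peer in self.peers:
            if peer is not self:
                peer.data.pop(addr, None)         # invalidate the other copies
        self.data[addr] = value
        self.memory[addr] = value                 # write-through, for simplicity

memory = {100: 7}
caches = []
for _ in range(4):                                # four processors, as in Figure 3
    caches.append(Cache(memory, caches))          # all caches share the same peer list

p1, p2 = caches[0], caches[1]
print(p2.read(100))   # 7  - P2 now holds a cached copy
p1.write(100, 8)      # P1 writes: P2's cached copy is invalidated over the bus
print(p2.read(100))   # 8  - P2 misses again and re-reads the up-to-date value

Real hardware performs the same invalidation by snooping bus transactions (or by consulting a directory), and those extra transactions are the additional bus traffic referred to in the previous paragraph.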







