
Swami Vivekananda Institute of Science & Technology
Approved by AICTE & Affiliated to MAKAUT & WBSCTE | NAAC Accredited |
ISO 9001:2008 Certified

Name of the examination: CA-2

Topic: Report on Flynn's classification


Subject Name: Computer Architecture
Subject Code: EC502

Name: HASANUR KHAN


University Roll No.: 24130322053

Department: ECE

Year (Semester): 3rd / 5th


INTRODUCTION:
Parallel computing is a form of computing in which a job is broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions, and instructions from different parts execute simultaneously on different CPUs. Dividing a task among multiple processors helps reduce the time needed to run a program. Parallel systems involve the simultaneous use of multiple computing resources: a single computer with multiple processors, a number of computers connected by a network to form a parallel processing cluster, or a combination of both. Parallel systems are more difficult to program than single-processor computers because parallel architectures vary widely and the work of multiple CPUs must be coordinated and synchronized. Another difficult problem in parallel processing is portability.
An instruction stream is the sequence of instructions read from memory. A data stream is the sequence of data (operands) on which those instructions operate.
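The idea of dividing a job into parts that run on different processors can be sketched with Python's standard multiprocessing module (a minimal illustration; the square function and pool size are arbitrary choices, not from the report):

```python
from multiprocessing import Pool

def square(n):
    # Each worker process runs this function on its own piece of the data.
    return n * n

if __name__ == "__main__":
    numbers = list(range(8))
    # Split the job among 4 worker processes that execute concurrently.
    with Pool(processes=4) as pool:
        results = pool.map(square, numbers)
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Here pool.map divides the input list into chunks, hands each chunk to a worker process, and gathers the results in order, mirroring the "break the task into parts and run them on different CPUs" idea.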
Flynn’s classification is a scheme for categorizing computer architectures proposed by Michael Flynn in 1966. It is based on the number of instruction streams and data streams that an architecture can process simultaneously.
FLYNN'S CLASSIFICATION:
Flynn’s classification is a useful tool for understanding
different types of computer architectures and their strengths
and weaknesses. The taxonomy highlights the importance of
parallelism in modern computing and shows how different
types of parallelism can be exploited to improve performance.

Based on the number of instruction streams and data streams, computer systems are classified into four major categories:


1. Single-instruction, single-data (SISD) systems –

A SISD computing system is a uniprocessor machine capable of executing a single instruction stream operating on a single data stream. In SISD, machine instructions are processed sequentially, and computers adopting this model are popularly called sequential computers. Most conventional computers have the SISD architecture. All the instructions and data to be processed have to be stored in primary memory. The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally. Dominant representative SISD systems are the IBM PC and workstations.
2. Single-instruction, multiple-data (SIMD) systems –

A SIMD system is a multiprocessor machine capable of executing the same instruction on all of its CPUs, with each operating on a different data stream. Machines based on the SIMD model are well suited to scientific computing, which involves many vector and matrix operations. The data elements of a vector can be divided into multiple sets (N sets for a system with N processing elements, or PEs) so that each PE can process one set. Dominant representative SIMD systems are Cray’s vector processing machines.
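The SIMD idea of one instruction applied to many data elements at once can be sketched with NumPy's vectorized operations (assuming NumPy is installed; real SIMD hardware applies the instruction to all elements in lockstep, which NumPy only approximates internally):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])
# One logical "add" instruction is applied to every pair of elements,
# as if N processing elements each handled one data set.
c = a + b
print(c)  # [11. 22. 33. 44.]
```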
3. Multiple-instruction, single-data (MISD) systems –

A MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, all of which operate on the same data set. For example, computing Z = sin(x) + cos(x) + tan(x) performs different operations on the same data. Machines built using the MISD model are not useful in most applications; a few have been built, but none are available commercially.
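The example Z = sin(x) + cos(x) + tan(x) above can be written out directly: three different operations (instruction streams) are applied to one and the same data item (a toy sketch in plain Python; the value of x is arbitrary):

```python
import math

x = 0.5  # the single shared data item
# Three different "instruction streams" operate on the same data:
operations = [math.sin, math.cos, math.tan]
Z = sum(op(x) for op in operations)
print(Z)  # approximately 1.9033
```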
4. Multiple-instruction, multiple-data (MIMD) systems –

An MIMD system is a multiprocessor machine capable of executing multiple instruction streams on multiple data sets. Each PE in the MIMD model has separate instruction and data streams; therefore, machines built using this model are suited to any kind of application. Unlike in SIMD and MISD machines, the PEs in MIMD machines work asynchronously. MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD based on the way PEs are coupled to main memory. In the shared-memory MIMD model (tightly coupled multiprocessor systems), all the PEs are connected to a single global memory and all have access to it. Communication between PEs in this model takes place through the shared memory; a modification of the data stored in global memory by one PE is visible to all other PEs. Dominant representative shared-memory MIMD systems are Silicon Graphics machines and Sun/IBM’s SMP (Symmetric Multi-Processing) systems.
In distributed-memory MIMD machines (loosely coupled multiprocessor systems), every PE has its own local memory. Communication between PEs in this model takes place through the interconnection network (the inter-process communication channel, or IPC). The shared-memory MIMD architecture is easier to program but is less tolerant of failures and harder to extend than the distributed-memory MIMD model.

CONCLUSION:
Flynn's Taxonomy provides a conceptual framework for
understanding the parallelism and concurrency
capabilities of different computer architectures. It is
particularly valuable for classifying and comparing the
performance characteristics of various systems in terms
of their ability to execute instructions and process data
simultaneously. While the boundaries between these
categories have blurred with advances in computer
architecture, Flynn's Taxonomy remains a foundational
concept in the field and is essential for discussing
parallelism and concurrency in computing systems.
