Computer Hardware Lecture - 4
RISC stands for Reduced Instruction Set Computer, and CISC stands for Complex Instruction Set Computer. Both are processor design approaches aimed at increasing CPU performance.
RISC reduces the cycles per instruction at the cost of a larger number of instructions per program.
CISC tries to minimize the number of instructions per program, but at the cost of more cycles per instruction.
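This trade-off can be made concrete with the usual execution-time relation: time = instruction count x cycles per instruction x clock period. The sketch below uses made-up instruction counts and CPI values purely for illustration; they are not measurements from any real processor.

```python
def execution_time(instruction_count, cpi, clock_period_ns):
    """Total execution time in nanoseconds: count x CPI x clock period."""
    return instruction_count * cpi * clock_period_ns

# Hypothetical program compiled both ways (illustrative numbers only).
risc = execution_time(instruction_count=1500, cpi=1.2, clock_period_ns=1.0)
cisc = execution_time(instruction_count=1000, cpi=3.0, clock_period_ns=1.0)

print(f"RISC version: {risc:.0f} ns")  # more instructions, fewer cycles each
print(f"CISC version: {cisc:.0f} ns")  # fewer instructions, more cycles each
```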
A few decades ago, when programming was done in assembly language, there was a need to make each instruction do more work, because programming in assembly was tedious and error-prone; this is how the CISC architecture evolved. With the rise of high-level languages, the dependency on assembly reduced and the RISC architecture prevailed.
Let’s understand both RISC and CISC concepts in-depth.
Reduced Instruction Set Computer (RISC)
RISC is a microprocessor architecture that uses a small set of simple, highly optimized instructions. It is designed to reduce the time instructions take to execute by simplifying the instruction set rather than making each instruction do more work. Ideally, each instruction completes in one clock cycle, and every instruction goes through three basic steps: fetch, decode, and execute. Complex operations are carried out as sequences of these simple instructions. RISC chips require fewer transistors, which makes them less expensive to develop and reduces instruction execution time.
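The fetch, decode, and execute steps can be illustrated with a toy interpreter. This is only a minimal sketch: the instruction names (LOADI, ADD, PRINT) and register names are invented for illustration and do not correspond to any real ISA.

```python
# Toy RISC-style machine: a program is a list of simple instructions, each
# of which is fetched, decoded, and executed in turn.
program = [
    ("LOADI", "R1", 5),           # R1 <- 5
    ("LOADI", "R2", 7),           # R2 <- 7
    ("ADD",   "R3", "R1", "R2"),  # R3 <- R1 + R2
    ("PRINT", "R3"),
]

registers = {"R1": 0, "R2": 0, "R3": 0}
pc = 0  # program counter

while pc < len(program):
    instruction = program[pc]        # fetch
    opcode, *operands = instruction  # decode
    if opcode == "LOADI":            # execute
        reg, value = operands
        registers[reg] = value
    elif opcode == "ADD":
        dst, src1, src2 = operands
        registers[dst] = registers[src1] + registers[src2]
    elif opcode == "PRINT":
        print(registers[operands[0]])  # prints 12
    pc += 1
```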
RISC Architecture:
In this section, we will discuss the types of pipelining, pipelining hazards, and the advantages of pipelining. So let us start.
Look at the figure below, where 5 instructions are pipelined. The first instruction completes in 5 clock cycles. After the first instruction completes, a new instruction finishes its execution in every subsequent clock cycle.
Observe that as soon as the instruction fetch of the first instruction is completed, the instruction fetch of the second instruction starts in the next clock cycle. This way the hardware never sits idle; it is always busy performing some operation. However, no two instructions can occupy the same stage in the same clock cycle.
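The overlap can be visualized by printing the space-time diagram of an ideal 5-stage pipeline running 5 instructions. The stage names IF, ID, EX, MEM, WB below follow the common textbook convention and are an assumption; the lecture itself only names fetch, decode, and execute.

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
NUM_INSTRUCTIONS = 5
TOTAL_CYCLES = len(STAGES) + NUM_INSTRUCTIONS - 1  # 5 + 5 - 1 = 9

print("cycle: " + " ".join(f"t{c:<2}" for c in range(1, TOTAL_CYCLES + 1)))

for i in range(NUM_INSTRUCTIONS):
    cells = []
    for cycle in range(1, TOTAL_CYCLES + 1):
        stage_index = cycle - 1 - i  # stage occupied by I(i+1) in this cycle
        if 0 <= stage_index < len(STAGES):
            cells.append(f"{STAGES[stage_index]:<3}")
        else:
            cells.append("   ")
    print(f"I{i + 1}:    " + " ".join(cells))

# I1 completes at t5; after that, one instruction completes in every cycle
# (I2 at t6, I3 at t7, ...), and no two instructions ever occupy the same
# stage in the same cycle.
```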
Types of Pipelining
In 1977, Handler and Ramamoorthy classified pipeline processors according to their functionality.
1. Arithmetic Pipelining
Here, the arithmetic and logic units are pipelined, so that an operation such as floating-point addition or multiplication is broken into stages and different operands occupy different stages at the same time.
2. Instruction Pipelining
Here, a stream of instructions is pipelined so that the execution of the current instruction is overlapped with the fetching and decoding of subsequent instructions. It is also called instruction lookahead.
3. Processor Pipelining
Here, the processors are pipelined to process the same data stream. The data stream is processed by the first processor and the result is stored in a memory block. The result in the memory block is accessed by the second processor. The second processor reprocesses the result obtained by the first processor and then passes the refined result to the third processor, and so on. A sketch of this chaining is given below.
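This is only a minimal sketch of the idea: plain Python functions stand in for the processors and a list stands in for the shared memory block; all names and operations are invented for illustration.

```python
def processor_1(data):
    # First processor: raw processing of the incoming data stream.
    return [x * 2 for x in data]

def processor_2(data):
    # Second processor: refines the intermediate result.
    return [x + 1 for x in data]

def processor_3(data):
    # Third processor: final refinement.
    return [x ** 2 for x in data]

memory_block = [1, 2, 3, 4]  # the incoming data stream
for stage in (processor_1, processor_2, processor_3):
    memory_block = stage(memory_block)  # result stored for the next processor

print(memory_block)  # [9, 25, 49, 81]
```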
A pipeline that performs the same specific function every time is a unifunctional pipeline. On the other hand, a pipeline that performs multiple functions at different times, or multiple functions at the same time, is a multifunctional pipeline.
A static pipeline performs a fixed function each time, so a static pipeline is unifunctional. A static pipeline executes the same type of instruction continuously; frequent changes in the type of instruction can degrade the performance of the pipeline.
A scalar pipeline processes instructions with scalar operands. A vector pipeline processes instructions with vector operands.
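As a rough illustration of the difference in operands, the sketch below adds two arrays in a scalar style (one add per pair of values) and in a vector style (conceptually a single instruction over whole vectors). Python lists stand in for vector registers, and the helper function is invented for illustration.

```python
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# Scalar style: one add per pair of scalar operands (many instructions).
scalar_result = []
for x, y in zip(a, b):
    scalar_result.append(x + y)

# Vector style: conceptually a single instruction over vector operands.
def vector_add(v1, v2):
    return [x + y for x, y in zip(v1, v2)]

vector_result = vector_add(a, b)

print(scalar_result)  # [11, 22, 33, 44]
print(vector_result)  # [11, 22, 33, 44]
```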
Pipelining Hazards
Whenever a pipeline has to stall for some reason, it is called a pipeline hazard. Below we discuss four pipelining hazards.
1. Data Dependency
Consider two back-to-back instructions in which the first produces the value of register R2 and the following Sub instruction consumes it. The Sub instruction needs the value of register R2 at cycle t3, but the first instruction has not written it back by then, so the Sub instruction has to stall for two clock cycles. If it does not stall, it will generate an incorrect result. Thus, the dependence of one instruction on another instruction for data is a data dependency.
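The required stall can be computed mechanically. Below is a minimal sketch, assuming the producing instruction writes R2 back at cycle t5 while the Sub instruction would otherwise read it at cycle t3; the function name and cycle numbers are illustrative, not from the lecture.

```python
def stall_cycles(producer_writes_at, consumer_reads_at):
    """Cycles the consumer must stall so it reads the value only after it is written."""
    return max(0, producer_writes_at - consumer_reads_at)

# The producing instruction writes R2 at cycle t5 (an assumed write-back
# cycle); the Sub instruction would otherwise read R2 at cycle t3.
print(stall_cycles(producer_writes_at=5, consumer_reads_at=3))  # 2
```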
2. Memory Delay
When an instruction or its operand has to be fetched from slow main memory (for example, on a cache miss), the pipeline has to stall until the data arrives. This stall caused by the memory access is called a memory delay.
3. Branch Delay
Suppose four instructions I1, I2, I3, I4 are pipelined in sequence. Instruction I1 is a branch instruction and its target instruction is Ik. Processing starts: instruction I1 is fetched and decoded, and the target address is computed at the 4th stage, in cycle t3.
But by then the instructions I2, I3, I4 have already been fetched in the next three cycles, before the target branch address is computed. Since I1 is found to be a branch instruction, the instructions I2, I3, I4 have to be discarded, because instruction Ik has to be processed immediately after I1. This delay of three cycles is the branch delay.
Identifying the target branch address earlier in the pipeline reduces the branch delay. For example, if the branch target is identified at the decode stage, the branch delay reduces to 1 clock cycle.
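The relationship between the stage at which the branch is resolved and the resulting delay can be sketched as follows; the stage numbering (fetch as stage 1) is an assumption.

```python
def branch_penalty(resolve_stage):
    """Wrongly fetched instructions discarded: one per stage before resolution."""
    return resolve_stage - 1

print(branch_penalty(resolve_stage=4))  # target computed at the 4th stage -> 3-cycle delay
print(branch_penalty(resolve_stage=2))  # target known at the decode stage -> 1-cycle delay
```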
4. Resource Limitation
If two instructions request access to the same resource in the same clock cycle, then one of the instructions has to stall and let the other instruction use the resource. This stalling is due to a resource limitation. However, it can be prevented by adding more hardware.
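A minimal sketch of such a conflict, assuming a single shared memory port that only one instruction can use per cycle (the resource name and cycle numbers are illustrative):

```python
requests = [
    ("I1", "MEM", 4),  # instruction, resource, cycle in which it wants the resource
    ("I2", "MEM", 4),
]

busy_until = {}  # resource -> first cycle in which it is free again
for name, resource, cycle in requests:
    granted = max(cycle, busy_until.get(resource, cycle))
    stall = granted - cycle
    busy_until[resource] = granted + 1  # resource occupied for one cycle
    print(f"{name}: wanted cycle {cycle}, granted cycle {granted}, stalled {stall} cycle(s)")

# Output:
# I1: wanted cycle 4, granted cycle 4, stalled 0 cycle(s)
# I2: wanted cycle 4, granted cycle 5, stalled 1 cycle(s)
```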
Advantages
1. Pipelining improves the throughput of the system.
2. In every clock cycle, a new instruction finishes its execution.
3. Multiple instructions can be executed concurrently.
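The throughput gain can be quantified with the standard textbook idealization (not a figure from the lecture): with k stages and n instructions, a non-pipelined machine needs n x k cycles, while an ideal pipeline needs k + (n - 1) cycles.

```python
def pipeline_speedup(k_stages, n_instructions):
    """Ideal speedup of a k-stage pipeline over a non-pipelined machine."""
    non_pipelined_cycles = n_instructions * k_stages
    pipelined_cycles = k_stages + (n_instructions - 1)
    return non_pipelined_cycles / pipelined_cycles

print(round(pipeline_speedup(k_stages=5, n_instructions=5), 2))     # 2.78
print(round(pipeline_speedup(k_stages=5, n_instructions=1000), 2))  # 4.98, approaching k = 5
```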