Notes CSO Unit 2
The control unit is a component of a computer's central processing unit (CPU) that directs the operation of the processor. It tells the computer's memory, arithmetic/logic unit and input and output devices how to respond to a program's instructions.
It directs the operation of the other units by providing timing and control signals. All computer resources are managed by the CU (Control Unit). It directs the flow of data between the Central Processing Unit (CPU) and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged.
The Control Unit is the circuitry that controls the flow of data through the processor, and coordinates the activities of the other units within it. In a way, it is the "brain within the brain", as it controls what happens inside the processor, which in turn controls the rest of the computer. Examples of devices that require a Control Unit are CPUs and graphics processing units (GPUs). The Control Unit receives external instructions or commands which it converts into a sequence of control signals that it applies to the data path to implement a sequence of register-transfer level operations.
The Control Unit (CU) is generally a sizable collection of complex digital circuitry interconnecting and controlling the many execution units contained within a CPU. The CU is normally the first CPU unit to accept, from an externally stored computer program, a single instruction based on the CPU's instruction set. It then decodes this individual instruction into several sequential steps (fetching addresses/data from registers/memory, managing execution [i.e. data sent to the ALU or I/O], and storing the resulting data back into registers/memory) that control and coordinate the CPU's inner workings. These detailed steps from the CU dictate how the CPU's interconnected units carry out each instruction.
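As a minimal sketch (not any real CPU's control unit), the Python fragment below walks through these steps for a made-up two-instruction program; the opcodes, register names and memory addresses are invented purely for illustration.

```python
# Sketch of the control-unit steps described above: fetch an instruction,
# decode it into fields, manage execution, and store the result back.
# The tiny LOAD/ADD "instruction set" is hypothetical.

memory = {0: ("LOAD", "R0", 100), 1: ("ADD", "R0", 101), 100: 7, 101: 5}
registers = {"PC": 0, "R0": 0}

def step():
    instr = memory[registers["PC"]]      # fetch the instruction at the PC
    registers["PC"] += 1
    op, reg, addr = instr                # decode it into sequential steps
    if op == "LOAD":                     # execute: move data from memory
        registers[reg] = memory[addr]
    elif op == "ADD":                    # execute: data sent to the "ALU"
        registers[reg] = registers[reg] + memory[addr]
    # the result has been stored back into the destination register

step(); step()
print(registers["R0"])   # 12
```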
Other, more advanced forms of Control Units manage the translation of instructions (but not the data-containing portion) into several micro-instructions, and the CU manages the scheduling of the micro-instructions between the selected execution units, to which the data is then channeled and changed according to the execution unit's function (i.e., the ALU contains several functions). On some processors, the Control Unit may be further broken down into additional units, such as an instruction unit or scheduling unit to handle scheduling, or a retirement unit to deal with results coming from the instruction pipeline. Again, the Control Unit orchestrates the main functions of the CPU: carrying out stored instructions in the software program, then directing the flow of data throughout the computer based upon these instructions (roughly likened to how traffic lights systematically control the flow of cars [containing data] to different locations within the traffic grid [CPU] until each parks at the desired parking spot [memory address/register]. The car occupants [data] then go into the building [execution unit], come back changed in some way, get back into the car and return to another location via the controlled traffic grid).
Hardwired control units are implemented through the use of sequential logic units, featuring a finite number of gates that can generate specific results based on the instructions that were used to invoke those responses. Hardwired control units are generally faster than microprogrammed designs. Their design uses a fixed architecture: it requires changes in the wiring if the instruction set is modified or changed. This architecture is preferred in reduced instruction set computers (RISC), as they use a simpler instruction set.
A controller that uses this approach can operate at high speed; however, it has little flexibility, and
the complexity of the instruction set it can implement is limited.
The hardwired approach has become less popular as computers have evolved. Previously, control
units for CPUs used ad-hoc logic, and they were difficult to design.
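A rough way to picture a hardwired control unit is a fixed table standing in for the gate network: the same (opcode, step) inputs always produce the same control signals, and supporting a new instruction means changing the "wiring". The sketch below uses invented opcode and signal names, not those of any real CPU.

```python
# Sketch of hardwired control: a fixed, purely combinational mapping from
# (opcode, step) to control signals, standing in for a network of gates.

CONTROL_LOGIC = {
    ("LOAD", 0): {"mem_read": 1, "reg_write": 0},
    ("LOAD", 1): {"mem_read": 0, "reg_write": 1},
    ("ADD",  0): {"alu_add": 1,  "reg_write": 0},
    ("ADD",  1): {"alu_add": 0,  "reg_write": 1},
}

def control_signals(opcode, step):
    # same inputs always give the same outputs; changing the instruction
    # set would mean rebuilding this table (i.e. rewiring the gates)
    return CONTROL_LOGIC[(opcode, step)]

print(control_signals("ADD", 0))   # {'alu_add': 1, 'reg_write': 0}
```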
The idea of microprogramming was introduced by Maurice Wilkes in 1951 as an intermediate level to execute computer program instructions. Microprograms were organized as a sequence of microinstructions and stored in special control memory. The algorithm for the microprogram control unit is therefore specified by the contents of this control memory rather than by fixed wiring, so it can be changed without altering the hardware.
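A minimal sketch of the idea follows; the micro-operations, the "next" address field and the routine layout are made up for illustration. The point is that the control unit's behaviour comes from the contents of the control memory array, not from fixed logic.

```python
# Sketch of microprogrammed control: each microinstruction holds a set of
# control signals plus the address of the next microinstruction.

control_memory = [
    {"signals": ["MAR<-PC"],             "next": 1},   # fetch routine
    {"signals": ["MDR<-M[MAR]"],         "next": 2},
    {"signals": ["IR<-MDR", "PC<-PC+1"], "next": 0},   # back to fetch
]

car = 0                        # control address register
for _ in range(6):             # run a few microcycles
    micro = control_memory[car]
    print(micro["signals"])    # these would drive the data path
    car = micro["next"]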
Control Memory
Address Sequencing
Microinstructions are usually stored in groups, where each group specifies a routine and each routine specifies how to carry out an instruction. Each routine must be able to branch to the next routine in the sequence. An initial address is loaded into the CAR (control address register) when power is turned on; this is usually the address of the first microinstruction in the instruction fetch routine. Next, the control unit must determine the effective address of the instruction.
When instruction execution is finished, control must be returned to the fetch routine. This is done using an unconditional branch. The address sequencing capabilities of control memory include the following (sketched in the example below):
• Incrementing the CAR
• Unconditional and conditional branching (depending on a status bit)
• Mapping instruction bits into control memory addresses
• Handling subroutine calls and returns
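The microinstruction fields (mode, cond, target) and the opcode-to-routine mapping in this sketch are assumptions made purely for illustration.

```python
# Sketch of the four address-sequencing choices listed above.

SBR = []   # subroutine return-address store

def next_address(car, micro, status_bit, opcode_map, opcode):
    mode, target = micro["mode"], micro.get("target")
    if mode == "inc":        # 1. increment the CAR
        return car + 1
    if mode == "branch":     # 2. unconditional / conditional branch
        return target if (micro.get("cond") is None or status_bit) else car + 1
    if mode == "map":        # 3. map instruction (opcode) bits to a routine address
        return opcode_map[opcode]
    if mode == "call":       # 4a. subroutine call: save the return address
        SBR.append(car + 1)
        return target
    if mode == "return":     # 4b. subroutine return
        return SBR.pop()

print(next_address(4, {"mode": "map"}, 0, {"ADD": 32}, "ADD"))   # 32
```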
Mapping
The next step is to generate the micro-operations that execute the instruction. This involves mapping the bits of the instruction's operation code into the control memory address where the routine for that instruction begins.
Microinstruction Formats:
Horizontal format: one bit for each possible signal that might need to be generated by any microinstruction. This leads to the fastest execution but the widest microinstructions: it requires d bits if there are d possible destinations, plus s bits if there are s possible sources.
Vertical format: mutually exclusive operations are grouped together and encoded in binary. This reduces the number of bits in the microinstruction, but each vertically encoded field needs a decoder. Suppose there were up to 15 possible sources and destinations (PC, MAR, MDR, IR, ...). Four bits are needed to specify which one.
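The following sketch only compares the resulting microinstruction widths for the figures quoted above; the split into one source field and one destination field is an assumption for the sake of the example.

```python
# Sketch comparing horizontal and vertical microinstruction field widths.
import math

sources, destinations = 15, 15

# horizontal: one bit per possible signal (fastest, no decoders)
horizontal_bits = sources + destinations

# vertical: each mutually exclusive group is binary-encoded (needs decoders)
vertical_bits = math.ceil(math.log2(sources)) + math.ceil(math.log2(destinations))

print(horizontal_bits)   # 30
print(vertical_bits)     # 8  (4 bits per field)
```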
In computer architecture and engineering, a sequencer or microsequencer generates the addresses used to step through the microprogram of a control store. It is used as a part of the control unit of a CPU or as a stand-alone generator for address ranges.
Usually the addresses are generated by some combination of a counter, a field from a microinstruction, and some subset of the instruction register. A counter is used for the typical case, in which the next microinstruction is the one to execute. A field from the microinstruction is used for jumps or other logic.
Since CPUs implement an instruction set, it is very useful to be able to decode the instruction's bits directly into the sequencer, to select a set of microinstructions to perform a CPU's instructions.
Most modern CPUs are considerably more complex than this description suggests. They tend to have multiple cooperating micro-machines with specialized logic to detect and handle interference between the micro-machines.
Or
A microprogram sequencer for a microprogrammed control unit generates consecutive microprogram addresses, branches to subroutines (saving the return address so that the microprogram can be resumed), and forces jumps to interrupting microprograms while saving the addresses of the interrupted microprograms.
In order to allow the double saving of microprogram and subroutine addresses in the case of concurrent interruptions and branches, the sequencer is provided with two address generation loops, each including a register. The two loops have a common portion which they access through a multiplexer (23).
The first loop (23, 25, 22, 21, 30, 31) is further coupled to a saving register stack (20).
While the first loop saves a microprogram address and latches a branch address received from the second loop, the second loop (23, 25, 24, 39, 17, 18, 42, 19, 27, 29) performs a first update and the related latching of the interrupting microprogram address. During the following cycle, by command of the first microinstruction of the interrupting microprogram, the first loop saves the branch address into the register stack (20) and the second loop performs a second update and the related latching of the interrupting microprogram address.
Microcode is a layer of hardware-level instructions that implement higher-level machine code instructions
or internal state machine sequencing in many digital processing elements. Microcode is used in general
central processing units, in more specialized processors such as microcontrollers, digital signal processors,
channel controllers, disk controllers, network interface controllers, network processors, graphics processing
units, and in other hardware.
Microcode typically resides in special high-speed memory and translates machine instructions, state
machine data or other input into sequences of detailed circuit-level operations. It separates the machine
instructions from the underlying electronics so that instructions can be designed and altered more freely. It
also facilitates the building of complex multi-step instructions, while reducing the complexity of computer
circuits. Writing microcode is often called microprogramming and the microcode in a particular processor
implementation is sometimes called a microprogram.
Engineers normally write the microcode during the design phase of a processor, storing it in a ROM (read-
only memory) or PLA (programmable logic array)[1] structure, or in a combination of both. However,
machines also exist that have some (or all) microcode stored in SRAM or flash memory. This is traditionally
denoted a "writeable control store" in the context of computers. Complex digital processors may also
employ more than one (possibly microcode-based) control unit in order to delegate sub-tasks that must be
performed (more or less) asynchronously in parallel. A high-level programmer, or even an assembly
programmer, does not normally see or change microcode. Unlike machine code, which often retains some compatibility among different processors in a family, microcode only runs on the exact electronic circuitry for which it is designed, as it constitutes an inherent part of the particular processor design itself.
More extensive microcoding allows small and simple microarchitectures to emulate more powerful
architectures with wider word length, more execution units and so on – a relatively simple way to achieve
software compatibility between different products in a processor family.
Some hardware vendors, especially IBM, use the term "microcode" as a synonym for "firmware". That way,
all code in a device is termed "microcode" regardless of it being microcode or machine code; for example,
hard disk drives are said to have their microcode updated, though they typically contain both microcode
and firmware.
Q.1 With a neat block diagram, explain the working principle of a microprogram sequencer. (Dec 2010, 5 marks)
Q.2 Draw the format of a microinstruction and explain how a microprogram sequencer works. (June 2010, 7 marks)
Q.3 Write a brief note on the microprogram sequencer. (June 2014, 2 marks)
The Arithmetic Logic Unit (ALU) is one of the many components within a computer processor. The ALU performs mathematical, logical, and decision operations in a computer and carries out the final processing performed by the processor. After the information has been processed by the ALU, it is sent to the computer memory.
In some computer processors, the ALU is divided into an AU and an LU. The AU performs the arithmetic operations and the LU performs the logical operations.
An arithmetic logic unit (ALU) is a digital circuit that performs integer arithmetic and logical operations. The
ALU is a fundamental building block of the central processing unit of a computer, and even the simplest
microprocessors contain one for purposes such as maintaining timers. The processors found inside modern
CPUs and graphics processing units (GPUs) accommodate very powerful and very complex ALUs; a single
component may contain a number of ALUs.
An ALU must process numbers using the same formats as the rest of the digital circuit. The format
of modern processors is almost always the two's complement binary number representation.
Early computers used a wide variety of number systems, including ones' complement, two's
complement, sign-magnitude format, and even true decimal systems, with various representation
of the digits.
An arithmetic logic unit (ALU) is a digital circuit used to perform arithmetic and logic operations. It
represents the fundamental building block of the central processing unit (CPU) of a computer.
Modern CPUs contain very powerful and complex ALUs. In addition to ALUs, modern CPUs contain
a control unit (CU). Most of the operations of a CPU are performed by one or more ALUs, which
load data from input registers. A register is a small amount of storage available as part of a CPU.
The control unit tells the ALU what operation to perform on that data and the ALU stores the
result in an output register. The control unit moves the data between these registers, the ALU and
memory.
ALU working
An ALU performs basic arithmetic and logic operations. Examples of arithmetic operations are addition, subtraction, multiplication, and division. Examples of logic operations are NOT, AND, OR, and comparisons of values.
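As a minimal sketch (the operation names and register values are illustrative only), the ALU can be modelled as a function that the control unit drives with an operation code and the contents of the input registers:

```python
# Sketch of the control-unit/ALU interaction: the control unit selects the
# operation, the ALU combines the input registers into an output register.

def alu(op, a, b):
    if op == "ADD": return a + b
    if op == "SUB": return a - b
    if op == "AND": return a & b
    if op == "OR":  return a | b
    if op == "NOT": return ~a
    raise ValueError("unknown operation")

input_reg_1, input_reg_2 = 6, 3
output_reg = alu("ADD", input_reg_1, input_reg_2)   # control unit chose "ADD"
print(output_reg)   # 9
```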
All information in a computer is stored and manipulated in the form of binary numbers, i.e. 0 and
1. Transistor switches are used to manipulate binary numbers, since there are only two possible
states of a switch: open or closed. An open transistor, through which there is no current,
represents a 0. A closed transistor, through which there is a current, represents a 1. Operations
can be accomplished by connecting multiple transistors. One transistor can be used to control a
second one, in effect turning the transistor switch on or off depending on the state of the second
transistor. This is referred to as a gate, because the arrangement can be used to allow or stop a
current. The simplest type of operation is a NOT gate. This uses only a single transistor. It uses a
single input and produces a single output, which is always the opposite of the input. The figure
below shows the logic of the NOT gate.
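Alongside the figure, the NOT gate's behaviour can be sketched as a one-input function:

```python
# Sketch of the NOT gate: a single input, and an output that is always
# the opposite of the input.

def not_gate(a):
    return 0 if a == 1 else 1

for a in (0, 1):
    print(a, "->", not_gate(a))   # 0 -> 1, 1 -> 0
```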
Addition is the most common arithmetic operation a processor performs. When two n-bit numbers are added together, it is possible to produce a result with n + 1 digits due to a carry out of the leftmost digit position. For two's complement addition of two numbers, there are three cases to consider:
If both numbers are positive and the result of their addition has a sign bit of 1, then overflow has
occurred; otherwise the result is correct.
If both numbers are negative and the sign of the result is 0, then overflow has occurred; otherwise
the result is correct.
If the numbers are of unlike sign, overflow cannot occur and the result is always correct.
For addition, use normal binary addition:
0 + 0 = sum 0, carry 0
0 + 1 = sum 1, carry 0
1 + 0 = sum 1, carry 0
1 + 1 = sum 0, carry 1
Overflow cannot occur when adding two operands with different signs. If the two operands have the same sign and the result has a different sign, overflow has occurred. Subtraction: take the 2's complement of the subtrahend and add it to the minuend, i.e. a - b = a + (-b), so we only need addition and complement circuits.
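A small sketch of these rules for 4-bit two's complement values; the helper names and the choice of 4 bits are assumptions made only for the example.

```python
# Sketch of 4-bit two's complement addition and subtraction with the
# sign-based overflow test described above.

BITS = 4
MASK = (1 << BITS) - 1          # 0b1111

def sign(x):
    return (x >> (BITS - 1)) & 1

def add(a, b):
    result = (a + b) & MASK
    # same input signs but a different result sign means overflow
    overflow = sign(a) == sign(b) and sign(result) != sign(a)
    return result, overflow

def subtract(a, b):
    # a - b = a + (two's complement of b)
    return add(a, (~b + 1) & MASK)

print(add(0b0101, 0b0100))       # (9, True):  1001 reads as -7, overflow
print(subtract(0b0101, 0b0001))  # (4, False): +5 - +1 = +4
```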
Q.2 Write down the algorithm for addition and subtraction with signed-magnitude data. Also draw the flowchart. (June 2011, 10 marks)
Multiplication
A complex operation compared with addition and subtraction. Many algorithms are used, especially for large numbers. The simplest algorithm is the same long multiplication taught in grade school: compute a partial product for each digit, then add the partial products.
Multiplication Example
• Multiplicand
• Multiplier
• Partial products
• Note: if the multiplier bit is 1, copy the multiplicand (shifted to its place value)
• otherwise the partial product is zero
• Product
• Note: need a double-length result
Fig 2.7
Multiplication Algorithm
Repeat n times:
If Q0 = 1, add M into A and store the carry in CF
Shift CF, A, Q one bit to the right
The double-length product is left in A and Q (a Python sketch of the loop follows).
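In the sketch, 4-bit unsigned operands are assumed; the register names follow the description above and the bit width is chosen only for the example.

```python
# Sketch of the add-and-shift multiplication loop using A, M, Q and a
# carry flag CF, for n-bit unsigned operands.

def multiply(multiplicand, multiplier, n=4):
    M, Q, A, CF = multiplicand, multiplier, 0, 0
    for _ in range(n):                      # repeat n times
        if Q & 1:                           # if Q0 = 1 ...
            A += M                          # ... add M into A
            CF = (A >> n) & 1               # store the carry in CF
            A &= (1 << n) - 1
        else:
            CF = 0
        # shift CF, A, Q right one place as a single (2n+1)-bit value
        combined = ((CF << 2 * n) | (A << n) | Q) >> 1
        CF = 0
        A = (combined >> n) & ((1 << n) - 1)
        Q = combined & ((1 << n) - 1)
    return (A << n) | Q                     # double-length product in A,Q

print(multiply(11, 13))   # 143
```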
Division
• More complex than multiplication to implement, for computers as well as humans. Some processors designed for embedded applications or digital signal processing lack a divide instruction.
• Basically the inverse of add and shift: shift and subtract.
Unsigned Division algorithm
• Using the same registers A, M, Q, count as multiplication
• Results of division are the quotient and remainder
Q will hold the quotient
A will hold the remainder
• Initial values (a sketch of the full loop follows)
A <- 0
Q <- Dividend
M <- Divisor
Count <- n
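Only the initial values are listed above, so the loop body in this sketch is the standard restoring (shift-and-subtract) formulation, assumed here for illustration with 4-bit operands.

```python
# Sketch of restoring unsigned division using the A, M, Q registers and
# count: shift A,Q left, try to subtract M, restore if the result goes
# negative; the quotient bits collect in Q, the remainder stays in A.

def divide(dividend, divisor, n=4):
    A, Q, M, count = 0, dividend, divisor, n
    while count > 0:
        combined = ((A << n) | Q) << 1          # shift A,Q left one place
        A = (combined >> n) & ((1 << (n + 1)) - 1)
        Q = combined & ((1 << n) - 1)
        A -= M                                  # try to subtract the divisor
        if A < 0:
            A += M                              # restore: quotient bit is 0
        else:
            Q |= 1                              # subtraction worked: bit is 1
        count -= 1
    return Q, A                                 # quotient in Q, remainder in A

print(divide(13, 4))   # (3, 1) because 13 = 4*3 + 1
```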
Arithmetic operations on floating point numbers consist of addition, subtraction, multiplication and division. The operations are done with algorithms similar to those used on sign-magnitude integers (because of the similarity of representation): for example, magnitudes are only added when the numbers have the same sign; if the numbers are of opposite sign, a subtraction must be done.
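The sign-magnitude rule referred to above can be sketched as follows; integer magnitudes are used here purely for illustration.

```python
# Sketch of the sign-magnitude rule: add magnitudes only when the signs
# agree; otherwise subtract the smaller magnitude from the larger and
# take the sign of the larger.

def sm_add(sign_a, mag_a, sign_b, mag_b):
    if sign_a == sign_b:
        return sign_a, mag_a + mag_b
    if mag_a >= mag_b:
        return sign_a, mag_a - mag_b
    return sign_b, mag_b - mag_a

print(sm_add('+', 5, '-', 3))   # ('+', 2)
print(sm_add('-', 7, '-', 2))   # ('-', 9)
```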
Arithmetic unit
The arithmetic unit, also called the arithmetic logic unit (ALU), is a component of the central processing unit (CPU). It is often referred to as the engine of the CPU because it allows the computer to perform mathematical calculations, such as addition, subtraction, and multiplication. The ALU also performs logic operations, like AND, OR, and NOT. The arithmetic unit works along with the register array, which holds data, when processing any of these operations. The arithmetic unit consists of many interconnected elements that are designed to perform specific tasks.
Some central processing units contain two such components: an arithmetic unit and a logic unit. Other processors may have an arithmetic unit for calculating fixed-point operations and another AU for calculating floating-point computations. Some PCs have a separate chip known as the numeric coprocessor. This coprocessor contains a floating-point unit for processing floating-point operands. The coprocessor increases the operating speed of the computer because of the coprocessor's ability to perform computations faster and more efficiently.
Q.1 Take an example and explain the design of an arithmetic and logic unit. (June 2014, 7 marks)
Table No.2.1
Two's complement representation is used for the numbers in our AU. This has a number of advantages over the sign and magnitude representation, such as easy addition or subtraction of mixed positive and negative numbers. Recall that the two's complement of an n-bit number N is
2^n - N = (2^n - 1 - N) + 1
The last representation gives us an easy way to find the two's complement: take the bitwise complement of the number and add 1 to it. As an example, to represent the number -5, we take
0 1 0 1 (+5)
1 0 1 0 (bitwise complement)
+     1
1 0 1 1 (two's complement)
Numbers represented in two's complement lie within the range -2^(n-1) to +(2^(n-1) - 1). For a 4-bit number this means that the number is in the range of -8 to +7. There is a potential problem we still need to be aware of when working with two's complement, namely overflow and underflow, as is illustrated in the following examples.
     0 1 0 0 (= carry Ci)
+5   0 1 0 1
+4 + 0 1 0 0
+9   0 1 0 0 1 = -7!
Also,
     1 0 0 0 (= carry Ci)
-7   1 0 0 1
-2 + 1 1 1 0
-9   1 0 1 1 1 = +7!
Both calculations give the wrong results (-7 instead of +9, or +7 instead of -9), which is caused by the fact that the result +9 or -9 is out of the allowable range for a 4-bit two's complement number. Whenever the result is larger than +7 or smaller than -8 there is an overflow or underflow and the result of the addition or subtraction is wrong. Overflow and underflow can be easily detected when the carry out of the most significant stage (i.e. C4) is different from the carry out of the previous stage (i.e. C3).
The inputs A and B have to be presented in two's complement form to the AU.
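A small sketch of this detection rule for a 4-bit AU follows: it computes the carry out of each stage and compares C4 with C3. The first example above (+5 + +4) is reproduced.

```python
# Sketch of overflow detection in a 4-bit adder: overflow occurs when the
# carry out of the most significant stage (C4) differs from C3.

def add_4bit(a_bits, b_bits):
    """a_bits, b_bits: lists of 4 bits, most significant first."""
    carries = [0] * 5              # carries[i] holds Ci (C1..C4)
    result = [0] * 4
    carry = 0
    for i in range(3, -1, -1):     # add from the least significant stage up
        total = a_bits[i] + b_bits[i] + carry
        result[i] = total & 1
        carry = total >> 1
        carries[4 - i] = carry     # stage 0 produces C1, ..., stage 3 produces C4
    overflow = carries[4] != carries[3]
    return result, overflow

print(add_4bit([0, 1, 0, 1], [0, 1, 0, 0]))   # ([1, 0, 0, 1], True): +5 + +4 overflows
```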
Q.1 Take an example and explain the design of an arithmetic and logic unit. (June 2014, 7 marks)