Basic Computer Model and Different Units of Computer
The model of a computer can be described by four basic units in a high-level abstraction, as shown in Figure 1.1. These basic units are:
Central Processor Unit
Input Unit
Output Unit
Memory Unit
A. Central Processor Unit :
The program control unit has a set of registers and a control circuit to generate control signals.
The execution unit, or data processing unit, contains a set of registers for storing data and an Arithmetic and Logic Unit (ALU) for executing arithmetic and logical operations.
In addition, the CPU may have some additional registers for temporary storage of data.
B. Input Unit :
With the help of the input unit, data from outside can be supplied to the computer. A program or data is read into main storage from an input device or from secondary storage under the control of a CPU input instruction.
Examples of input devices: keyboard, mouse, hard disk, floppy disk, CD-ROM drive, etc.
C. Output Unit :
With the help of the output unit, computer results can be provided to the user or stored permanently in a storage device for future use. Output data go from main storage to the output device under the control of CPU output instructions.
Examples of output devices: printer, monitor, plotter, hard disk, floppy disk, etc.
D. Memory Unit :
The memory unit is used to store data and programs. The CPU can work only with the information stored in the memory unit. This memory unit is termed primary memory or the main memory module. These are basically semiconductor memories.
There are two types of semiconductor memories:
Volatile Memory : RAM (Random Access Memory).
Non-Volatile Memory : ROM (Read Only Memory), PROM (Programmable ROM).
Secondary Memory :
There is another kind of storage device, apart from primary or main memory,
which is known as secondary memory. Secondary memories are non-volatile and are used for permanent storage of data and programs.
Examples of secondary memories:
Hard disk, floppy disk : magnetic devices
CD-ROM : optical device
Flash memory (pen drive) : semiconductor memory
A computer needs additional storage space for its proper functioning. Some of this storage is inside the CPU, in the form of registers. The other, larger chunk of storage space is known as primary memory or main memory. The CPU can work only with the information available in main memory.
To access data in memory, we need two special registers: the Memory Data Register (MDR) and the Memory Address Register (MAR).
Data and programs are stored in main memory. While executing a program, the CPU brings instructions and data from main memory and performs the tasks specified by the fetched instructions. After completing an operation, the CPU stores the result back into memory.
Main Memory Organization
The main memory unit is the storage unit. There are several locations for storing information in the main memory module.
The capacity of a memory module is specified by the number of memory locations and the amount of information stored in each location.
A memory module of capacity 16 x 4 indicates that there are 16 locations in the memory module and that each location can store 4 bits of information.
We have to know how to indicate or point to a specific memory location. This is done by the address of the memory location.
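As a quick check of the figures in the example above (this snippet is illustrative and not part of the original text), the number of address bits and the total storage of a 16 x 4 module can be computed directly:

    # Illustrative sketch: capacity of a 16 x 4 memory module.
    import math

    locations = 16          # number of memory locations
    bits_per_location = 4   # bits stored in each location

    address_bits = int(math.log2(locations))      # bits needed to point at any one location
    total_bits = locations * bits_per_location    # total storage in the module

    print(address_bits, total_bits)               # -> 4 64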
We need two operations to work with memory:
READ operation : retrieves data from a memory location and brings it to a CPU register.
WRITE operation : stores data from a CPU register into a memory location.
We need some mechanism to distinguish these two operations, READ and WRITE.
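A minimal, purely illustrative Python sketch of the two operations, with MAR and MDR modelled as simple attributes (the class, names and tiny memory are assumptions made for illustration, not part of the original text):

    # Illustrative sketch of MAR/MDR-mediated READ and WRITE operations.
    memory = [0] * 16              # a tiny 16-location main memory

    class CPU:
        def __init__(self):
            self.MAR = 0           # Memory Address Register: holds the address to access
            self.MDR = 0           # Memory Data Register: holds the data being transferred

        def read(self, address):
            self.MAR = address
            self.MDR = memory[self.MAR]    # data moves from memory into the MDR
            return self.MDR

        def write(self, address, value):
            self.MAR = address
            self.MDR = value
            memory[self.MAR] = self.MDR    # data moves from the MDR into memory

    cpu = CPU()
    cpu.write(5, 42)
    print(cpu.read(5))             # -> 42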
Arithmetic and Logic Unit (ALU)
The ALU is responsible for performing operations in the computer. The basic operations are implemented at the hardware level. The ALU supports two types of operations:
Arithmetic operations
Logical operations
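A small sketch, assuming a handful of representative operations, of how an ALU can be viewed as dispatching on an operation code (illustrative only; the operation names are assumptions):

    # Illustrative ALU sketch: two arithmetic and two logical operations.
    def alu(op, a, b):
        if op == "ADD":
            return a + b          # arithmetic
        if op == "SUB":
            return a - b          # arithmetic
        if op == "AND":
            return a & b          # logical (bitwise)
        if op == "OR":
            return a | b          # logical (bitwise)
        raise ValueError("unsupported operation: " + op)

    print(alu("ADD", 6, 3), alu("AND", 6, 3))     # -> 9 2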
MEMORY
We have already mentioned that the digital computer works on the stored-program concept introduced by Von Neumann. We use memory to store information, which includes both programs and data.
For several reasons, we have different kinds of memory, used at different levels.
The memory of a computer is broadly divided into two categories:
Internal and
external.
Internal memory is used by the CPU to perform its tasks, while external memory is used to store bulk information, including large software and data.
Memory is used to store information in digital form. The memory hierarchy is given by:
Register
Cache Memory
Main Memory
Magnetic Disk
Removable media (Magnetic tape)
Register:
Registers are part of the Central Processor Unit, so they reside inside the CPU. Information from main memory is brought to the CPU and kept in registers. Due to space and cost constraints, a CPU has only a limited number of registers. These are basically the fastest storage devices.
Cache Memory:
Cache memory is a storage device placed between the CPU and main memory. These are semiconductor memories, basically fast memory devices, faster than main memory.
We cannot have a large volume of cache memory due to its higher cost and some constraints of the CPU. Due to the higher cost, we cannot replace the whole main memory with faster memory. Generally, the most recently used information is kept in the cache memory; it is brought from main memory and placed in the cache. Nowadays, CPUs come with internal cache.
Main Memory:
Like cache memory, main memory is also semiconductor memory, but it is relatively slower. We have to first bring information (whether data or program) into main memory. The CPU can work only with the information available in main memory.
Magnetic Disk:
This is a bulk storage device. We have to deal with huge amounts of data in many applications, but we do not have enough semiconductor memory to keep all of this information in the computer. On the other hand, semiconductor memories are volatile in nature: they lose their contents once the computer is switched off. For permanent storage we use magnetic disks, whose storage capacity is very high.
Removable media:
For different applications, we use different data. It may not be possible to keep all of this information on a magnetic disk, so whichever data we are not currently using can be kept on removable media. Magnetic tape is one kind of removable medium; a CD is another, and it is an optical device. Registers, cache memory and main memory are internal memory; magnetic disk and removable media are external memory. Internal memories are semiconductor memories. Semiconductor memories are categorised as volatile memory and non-volatile memory.
RAM: Random Access Memories are volatile in nature. As soon as the computer is switched off, the contents of memory are lost.
ROM: Read Only Memories are non-volatile in nature. The storage is permanent, but it is read-only memory; we cannot store new information in ROM.
Several types of ROM are available:
PROM: Programmable Read Only Memory; it can be programmed once as per user
requirements.
EPROM: Erasable Programmable Read Only Memory; the contents of the memory can be erased and new data stored into the memory. In this case, the whole contents must be erased.
Main Memory
The main memory of a computer is semiconductor memory. The main memory unit of a computer basically consists of two kinds of memory:
RAM : Random Access Memory, which is volatile in nature.
ROM : Read Only Memory, which is non-volatile.
The permanent information is kept in ROM, and the user space is basically in RAM.
The smallest unit of information is known as a bit (binary digit), and in one memory cell we can store one bit of information. Eight bits together are termed a byte.
The maximum size of main memory that can be used in any computer is determined by the addressing scheme.
A computer that generates 16-bit addresses is capable of addressing up to 2^16 = 64K memory locations. Similarly, for 32-bit addresses, the total capacity will be 2^32 = 4G memory locations.
In some computers, the smallest addressable unit of information is a memory word, and such a machine is called word-addressable.
The data transfer between main memory and the CPU takes place through the two CPU registers MAR and MDR.
If the MAR is k bits long, then the total number of addressable memory locations will be 2^k.
If the MDR is n bits long, then n bits of data are transferred in one memory cycle.
The transfer of data takes place through the memory bus, which consists of the address bus and the data bus. In the above example, the size of the data bus is n bits and the size of the address bus is k bits.
The memory bus also includes control lines like Read, Write and Memory Function Complete (MFC) for coordinating data transfer. In the case of a byte-addressable computer, another control line is added to indicate a byte transfer instead of a whole-word transfer.
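The relation stated above between address width and addressable locations (2^16 = 64K, 2^32 = 4G, and 2^k for a k-bit MAR) can be verified with a short snippet (illustrative only):

    # Illustrative: number of addressable locations for a k-bit address or k-bit MAR.
    def addressable_locations(k):
        return 2 ** k

    print(addressable_locations(16))   # 65536        = 64K locations
    print(addressable_locations(32))   # 4294967296   = 4G locations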
Memory constructed with the help of transistors is known as semiconductor memory. Semiconductor memories are termed Random Access Memory (RAM), because it is possible to access any memory location at random.
Depending on the technology used to construct a RAM, there are two types of RAM:
SRAM: Static Random Access Memory.
DRAM: Dynamic Random Access Memory.
Due to the discharge of the capacitor during a read operation, the read operation of a DRAM is termed a destructive readout.
Static RAM (SRAM):
In an SRAM, binary values are stored using traditional flip-flops constructed with the help of transistors. A static RAM will hold its data as long as power is supplied to it.
SRAM Versus DRAM :
Both static and dynamic RAMs are volatile; that is, they retain information only as long as power is applied.
A dynamic memory cell is simpler and smaller than a static memory cell. Thus a DRAM is more dense, i.e., its packing density is higher (more cells per unit area), and a DRAM is less expensive than the corresponding SRAM.
DRAM requires supporting refresh circuitry. For larger memories, the fixed cost of the refresh circuitry is more than compensated for by the lower cost of DRAM cells.
SRAM cells are generally faster than DRAM cells. Therefore, to construct faster memory modules (like cache memory), SRAM is used.
Cache Memory
Analysis of a large number of programs has shown that a number of instructions are executed repeatedly. This may be in the form of simple loops, nested loops, or a few procedures that repeatedly call each other. It is observed that many instructions in each of a few localized areas of the program are repeatedly executed, while the remainder of the program is accessed relatively infrequently. This phenomenon is referred to as locality of reference.
Now, if it can be arranged to have the active segments of a program in a fast memory, then the total execution time can be significantly reduced. The CPU is a fast device while memory is relatively slower, and memory access is the main bottleneck for performance. If a faster memory device can be inserted between main memory and the CPU, efficiency can be increased. The faster memory that is inserted between the CPU and main memory is termed cache memory. To make this arrangement effective, the cache must be considerably faster than main memory; typically it is 5 to 10 times faster. This approach is more economical than using fast memory devices to implement the entire main memory. It is also feasible due to the locality of reference present in most programs, which reduces the frequent data transfer between main memory and cache memory. The inclusion of cache memory between the CPU and main memory is shown in Figure 3.13.
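One common way to quantify the benefit is the average access time t_avg = h * t_cache + (1 - h) * t_main, where h is the fraction of accesses found in the cache. This is a standard textbook formula used here purely as an illustration; the timing figures below are assumptions, not values from this text:

    # Illustrative: average access time with a cache, using assumed example figures.
    def average_access_time(hit_ratio, t_cache, t_main):
        return hit_ratio * t_cache + (1 - hit_ratio) * t_main

    # Assumed values: cache 10 ns, main memory 100 ns, 95% of accesses hit the cache.
    print(average_access_time(0.95, 10, 100))   # -> 14.5 (ns on average)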
Operation of Cache Memory
The memory control circuitry is designed to take advantage of the property of locality of reference. Some assumptions are made while designing the memory control circuitry:
1. The CPU does not need to know explicitly about the existence of the cache.
2. The CPU simply makes Read and Write requests. The nature of these two operations is the same whether a cache is present or not.
3. The addresses generated by the CPU always refer to locations of main memory.
4. The memory access control circuitry determines whether or not the requested word currently exists in the cache.
When a Read request is received from the CPU, the contents of a block of memory words containing the specified location are transferred into the cache. When any of the locations in this block is referenced by the program, its contents are read directly from the cache.
Consider the case where the addressed word is not in the cache and the operation is a read. First the block of words is brought to the cache, and then the requested word is forwarded to the CPU. However, the word can be forwarded to the CPU as soon as it is available in the cache, instead of waiting for the whole block to be loaded. This is called load-through, and it offers some scope to save time. The cache memory can store a number of such blocks at any given time.
The correspondence between the main memory blocks and those in the cache is specified by means of a mapping function.
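As a concrete but simplified illustration of a mapping function, the sketch below uses direct mapping, where main-memory block j is placed in cache line j mod (number of lines). Direct mapping is only one possible mapping function and is chosen here purely for illustration; all sizes and names are assumptions:

    # Illustrative direct-mapped cache: block j of main memory maps to cache line j % NUM_LINES.
    NUM_LINES = 4
    BLOCK_SIZE = 8                       # words per block

    cache = {}                           # line number -> (tag, block contents)

    def read_word(address, main_memory):
        block = address // BLOCK_SIZE    # which main-memory block holds this word
        line = block % NUM_LINES         # which cache line that block maps to
        tag = block // NUM_LINES         # distinguishes blocks that share the same line
        entry = cache.get(line)
        if entry is None or entry[0] != tag:                 # cache miss: load the whole block
            start = block * BLOCK_SIZE
            cache[line] = (tag, main_memory[start:start + BLOCK_SIZE])
        return cache[line][1][address % BLOCK_SIZE]          # word is read from the cache

    main_memory = list(range(64))
    print(read_word(13, main_memory))    # -> 13 (miss: block loaded, then word returned)
    print(read_word(14, main_memory))    # -> 14 (hit: same block is already in the cache)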
MEMORY MANAGEMENT
Main Memory
The main working principle of a digital computer is the Von Neumann stored-program principle. First of all, we have to keep all the information in some storage, mainly known as main memory, and the CPU interacts with the main memory only. Therefore, memory management is an important issue while designing a computer system.
On the other hand, everything cannot be implemented in hardware, otherwise the cost of the system would be very high. Therefore some tasks are performed by software programs. The collection of such software programs is basically known as the operating system, so the operating system is viewed as an extended machine. Many functions or instructions are implemented through software routines. The operating system is mainly memory resident, i.e., the operating system is loaded into main memory.
Due to that, the main memory of a computer is divided into two parts. One part is reserved for the operating system; the other part is for user programs. The program currently being executed by the CPU is loaded into the user part of the memory. The two parts of the main memory are shown in Figure 3.17.
In a uni-programming system, the program currently being executed is loaded into the user part of the memory.
In a multiprogramming system, the user part of memory is subdivided to accommodate multiple processes. The task of subdivision is carried out dynamically by the operating system and is known as memory management.
A process goes through the following states:
1. New : A program is admitted for execution, but is not yet ready to execute. The operating system will initialize the process by moving it to the ready state.
2. Ready : The process is ready to execute and is waiting to be assigned to the processor.
3. Running : The process is being executed by the processor. At any given time, only one process is in the running state.
4. Waiting : The process is suspended from execution, waiting for some system resource, such as I/O.
5. Exit : The process has terminated and will be destroyed by the operating system.
The processor alternates between executing operating system instructions and executing user processes. While the operating system is in control, it decides which process in the queue should be executed next.
A process being executed may be suspended for a variety of reasons. If it is suspended because the process requests I/O, then it is placed in the appropriate I/O queue. If it is suspended because of a timeout or because the operating system must attend to some of its own tasks, then it is placed in the ready state.
We know that the information of all the processes that are in execution must be placed in main memory. Since there is a fixed amount of memory, memory management is an important issue.
Memory Management
In a uniprogramming system, main memory is divided into two parts: one part for the operating system and the other part for the program currently being executed.
In a multiprogramming system, the user part of memory is subdivided to accommodate multiple processes. The task of subdivision is carried out dynamically by the operating system and is known as memory management.
In a uniprogramming system, only one program is in execution. After completion of one program, another program may start.
In general, most programs involve I/O operations. They must take input from some input device and place the result in some output device.
The partition of main memory for uniprogramming and multiprogramming is shown in Figure 3.19.
To utilize the idle time of the CPU, we shift the paradigm from a uniprogramming environment to a multiprogramming environment.
Since the size of main memory is fixed, it is possible to accommodate only a few processes in main memory. If all of them are waiting for I/O operations, then the CPU again remains idle.
To utilize the idle time of the CPU, some of the processes must be off-loaded from memory and new processes must be brought into the freed space. This is known as swapping.
What is swapping :
1. A process waiting for some I/O to complete must be stored back on disk.
2. A new ready process is swapped into main memory as space becomes available.
3. As a process completes, it is moved out of main memory.
4. If none of the processes in memory are ready:
swap out a blocked process to an intermediate queue of blocked processes;
swap in a ready process from the ready queue.
But swapping is itself an I/O process, so it also takes time. Instead of letting the CPU remain idle, it is sometimes advantageous to swap in a ready process and start executing it.
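A very rough sketch of the swapping decision described above (the process states, queue names and data structures are assumptions made for illustration):

    # Illustrative swapping sketch: when no in-memory process is ready,
    # swap out a blocked process and swap in a ready one from disk.
    in_memory = [{"name": "P1", "state": "blocked"}, {"name": "P2", "state": "blocked"}]
    ready_on_disk = [{"name": "P3", "state": "ready"}]
    blocked_on_disk = []                          # intermediate queue of swapped-out blocked processes

    def swap_if_needed():
        if any(p["state"] == "ready" for p in in_memory):
            return                                # something can run; no swap needed
        if in_memory and ready_on_disk:
            victim = in_memory.pop(0)             # swap out one blocked process
            blocked_on_disk.append(victim)
            in_memory.append(ready_on_disk.pop(0))   # swap in a ready process

    swap_if_needed()
    print([p["name"] for p in in_memory])         # -> ['P2', 'P3']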
The main question that arises is where to put a new process in main memory. It must be done in such a way that memory is utilized properly.
Partitioning
Splitting of memory into sections to allocate processes, including the operating system. There are two schemes for partitioning:
Fixed size partitions
Variable size partitions
Fixed size partitions:
The memory is partitioned into fixed-size partitions. Although the partitions are of fixed size, they need not be of equal size.
There is a problem of wastage of memory with fixed-size partitions, even with unequal sizes. When a process is brought into memory, it is placed in the smallest available partition that will hold it.
Equal-size and unequal-size fixed partitions of main memory are shown in Figure 3.20.
Even with the use of unequal-size partitions, there will be wastage of memory. In most cases, a process will not require exactly as much memory as the partition provides.
For example, a process that requires 5 MB of memory would be placed in the 6-MB partition, which is the smallest available partition that can hold it. In this partition only 5 MB is used; the remaining 1 MB cannot be used by any other process, so it is wasted. Like this, in every partition we may have some unused memory. The unused portion of memory in a partition is termed a hole.
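A small sketch of the placement rule mentioned above (place the process in the smallest available partition that will hold it); the partition sizes are assumptions chosen to match the 5-MB example:

    # Illustrative fixed-partition placement: smallest available partition that fits.
    partitions = [
        {"size": 2, "free": True},
        {"size": 4, "free": True},
        {"size": 6, "free": True},
        {"size": 8, "free": True},
    ]   # sizes in MB, assumed for the example

    def place(process_size):
        candidates = [p for p in partitions if p["free"] and p["size"] >= process_size]
        if not candidates:
            return None                          # nothing big enough is free
        best = min(candidates, key=lambda p: p["size"])
        best["free"] = False
        hole = best["size"] - process_size       # unused memory ("hole") left in this partition
        return best["size"], hole

    print(place(5))   # -> (6, 1): a 5-MB process goes into the 6-MB partition, wasting 1 MB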
Variable size partitions:
When a process is brought into memory, it is allocated exactly as much memory as it requires and no more. This leads to a hole at the end of memory that is too small to use, so it might seem that there will be only one hole at the end and the waste is small.
But this is not the only hole that will be present with variable-size partitions. When all processes are blocked, a process is swapped out and another process is brought in. The newly swapped-in process may be smaller than the swapped-out process; most likely we will not get two processes of exactly the same size. So another hole is created. If swap-out and swap-in occur many times, more and more holes are created, which leads to more wastage of memory.
There are two simple ways to partially relieve the problem of memory wastage:
Coalescing : Join adjacent holes into one large hole, so that some process can be accommodated in the hole.
Compaction : From time to time, go through memory and move all holes into one free block of memory.
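A minimal sketch of coalescing, assuming holes are represented as (start address, size) pairs (this representation is an assumption made for illustration):

    # Illustrative coalescing: merge adjacent holes into one larger hole.
    def coalesce(holes):
        holes = sorted(holes)                    # order holes by start address
        merged = []
        for start, size in holes:
            if merged and merged[-1][0] + merged[-1][1] == start:   # touches the previous hole
                merged[-1] = (merged[-1][0], merged[-1][1] + size)
            else:
                merged.append((start, size))
        return merged

    # Two adjacent holes, at addresses 100..119 and 120..149, become one hole of size 50.
    print(coalesce([(120, 30), (100, 20)]))      # -> [(100, 50)]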
During its execution, a process may be swapped in and swapped out many times. It is obvious that a process is not likely to be loaded into the same place in main memory each time it is swapped in. Furthermore, if compaction is used, a process may be shifted while it is in main memory.
Introduction to CPU
The operations or tasks that must be performed by the CPU include:
Fetch Data: The execution of an instruction may require reading data from memory or an I/O module.
Write Data: The result of an execution may require writing data to memory or an I/O module.
To do these tasks, it should be clear that the CPU needs to store some data temporarily. It must remember the location of the last instruction so that it knows where to get the next instruction. It needs to store instructions and data temporarily while an instruction is being executed. In other words, the CPU needs a small internal memory. These storage locations are generally referred to as registers.
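A toy sketch of why such internal storage is needed: a program counter (PC) remembers where the next instruction comes from, and an instruction register (IR) holds the instruction while it is handled. The register names and the two-instruction "program" are assumptions made purely for illustration:

    # Illustrative fetch loop: PC points at the next instruction, IR holds the current one.
    program = ["LOAD R1, 10", "ADD R1, 5", "HALT"]   # assumed toy program held in "memory"

    PC = 0                      # program counter
    while True:
        IR = program[PC]        # fetch: the instruction register receives the instruction at PC
        PC += 1                 # the PC now remembers where the next instruction is
        if IR == "HALT":
            break
        print("executing:", IR)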
The major components of the CPU are an arithmetic and logic unit (ALU) and a control unit (CU). The ALU does the actual computation or processing of data. The CU controls the movement of data and instructions into and out of the CPU and controls the operation of the ALU.
The CPU is connected to the rest of the system through the system bus. Through the system bus, data or information gets transferred between the CPU and the other components of the system. The system bus may have three components:
Data Bus:
The data bus is used to transfer data between main memory and the CPU.
Address Bus:
The address bus is used to access a particular memory location by putting the address of that memory location on it.
Control Bus:
The control bus is used to carry the different control signals generated by the CPU to different parts of the system. For example, memory read is a signal generated by the CPU to indicate that a memory read operation has to be performed. Through the control bus this signal is transferred to the memory module to indicate the required operation.
There are three basic components of the CPU: the register bank, the ALU and the control unit. There are several data movements between these units, and for that an internal CPU bus is used. The internal CPU bus is needed to transfer data between the various registers and the ALU.
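Purely as an illustration of how the three bus groups cooperate in a memory read, the sketch below models one transaction as a set of signal values. The signal names and the dictionary representation are assumptions, not a real bus standard:

    # Illustrative memory-read transaction over the data, address and control lines.
    main_memory = {0x10: 99}

    def memory_read(address):
        bus = {
            "address_bus": address,      # CPU places the address on the address bus
            "control_bus": "MEM_READ",   # CPU asserts the memory-read control signal
            "data_bus": None,
        }
        if bus["control_bus"] == "MEM_READ":
            bus["data_bus"] = main_memory[bus["address_bus"]]   # memory responds on the data bus
        return bus["data_bus"]

    print(memory_read(0x10))             # -> 99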
I/O modules
There are several reasons why an I/O device or peripheral device is not directly connected to the
system bus. Some of them are as follows:
There is a wide variety of peripherals with various methods of operation. It would be impractical to include the necessary logic within the processor to control several devices.
The data transfer rate of peripherals is often much slower than that of the memory or
processor. Thus, it is impractical to use the high-speed system bus to communicate
directly with a peripheral.
Peripherals often use different data formats and word lengths than the computer to which
they are attached.
Input/Output Modules
The major functions of an I/O module are categorized as follows:
Control and Timing
Processor Communication
Device Communication
Data Buffering
Error Detection
During any period of time, the processor may communicate with one or more external devices in an unpredictable manner, depending on the program's need for I/O.
The internal resources, such as main memory and the system bus, must be shared among a number of activities, including data I/O.
Buses
The processor, main memory, and I/O devices can be interconnected through common data communication lines, which are termed a common bus.
The primary function of a common bus is to provide a communication path between the devices for the transfer of data. The bus includes the control lines needed to support interrupts and arbitration.
The bus lines used for transferring data may be grouped into three categories:
data lines,
address lines, and
control lines.
Several schemes exist for handling the timing of data transfers over a bus. These can be broadly classified as:
Synchronous bus
Asynchronous bus
Synchronous Bus :
In a synchronous bus, all the devices are synchronised by a common clock, so all devices derive
timing information from a common clock line of the bus. A clock pulse on this common clock line
defines equal time intervals.
In the simplest form of a synchronous bus, each of these clock pulses constitutes a bus cycle during which one data transfer can take place.
The timing of an input transfer on a synchronous bus is shown in Figure 7.1.
The master places the address and command information on the bus. Then it indicates to all devices that it has done so by activating the master-ready signal.
The selected target device performs the required operation and informs the processor (or master) by activating the slave-ready line.
The master waits for slave-ready to become asserted before it removes its signals from the bus. In the case of a read operation, it also strobes the data into its input buffer.
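A schematic sketch of that sequence of events (the function and signal names are assumptions; real bus signals are electrical lines and timing relationships, not Python values):

    # Illustrative sketch of the master/slave signalling sequence described above.
    def bus_input_transfer(address, command, device):
        # Master places the address and command on the bus, then asserts master-ready.
        bus = {"address": address, "command": command}
        master_ready = True

        # The selected device performs the operation and asserts slave-ready.
        data = device(bus["address"]) if bus["command"] == "READ" else None
        slave_ready = True

        # Master sees slave-ready: it strobes the data (for a read) and removes its signals.
        if master_ready and slave_ready and command == "READ":
            return data

    memory_device = {5: 123}.get          # assumed target device: a tiny memory
    print(bus_input_transfer(5, "READ", memory_device))   # -> 123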
To reduce the gap between high-level languages (HLL) and the instruction set of the computer architecture, computer designers included large instruction sets, more addressing modes and various HLL statements implemented in hardware. As a result, the instruction set becomes complex, and the resulting system is termed a Complex Instruction Set Computer (CISC).