
MEMORY UNIT

INTRODUCTION

In this unit we shall discuss the various types of memory associated with a computer system, including main memory, cache and virtual memory, and the various technologies associated with these memory units. Finally, we conclude the unit by discussing the concept of secondary memory along with its types.


MR. SOLOMON ANAB 1
MEMORY HIERARCHY

The computer stores the programs and the data in its memory unit.
The CPU fetches the instructions out of the memory unit to execute
and process them.

Memory can be classified as primary (or main) memory and secondary (or auxiliary) memory. Main memory stores the programs and data currently executed by the CPU of a computer. Auxiliary memory provides backup storage of information. Data as well as instructions are transferred from the secondary memory to the main memory whenever they are needed by the CPU.



The capacity of the memory is typically expressed in terms of bytes
or words (1 byte = 8 bits). Word lengths are commonly 8 bits, 16
bits and 32 bits. The size of a word is generally the number of bits
that are transferred at a time between the main memory and the
CPU.
Memory has different locations, which are called its addresses, to
store the data. There are different methods for accessing those
address locations such as sequential access, direct access and
random access.



• In the sequential access method, the records or data are accessed in a linear fashion: starting from the current location in the memory, the access mechanism moves through each and every record until it reaches the desired location. Magnetic tapes, for example, use this method.

• In direct access, each record has a distinct address based on its physical location in the memory, and a shared Read/Write head moves directly to the desired record. This method is used in magnetic disks.

• In random access, each location can be selected and accessed directly. Main memory can be randomly accessed.



MEMORY OPERATION
Memory has two basic operations: Read and Write. In a Read operation, the processor reads data from a particular memory location and transmits that data to the requesting device via the bus. On the other hand, a memory Write operation causes the memory to accept data from the bus and to write that particular information into a memory location.
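The two operations above can be sketched with a toy memory model (the class name, word size and addresses here are illustrative, not from the slides):

```python
# A minimal sketch of a memory unit supporting Read and Write,
# assuming a word-addressable memory of `size` words.
class MemoryUnit:
    def __init__(self, size):
        self.cells = [0] * size  # each cell holds one word

    def read(self, address):
        # Read operation: fetch the word stored at the given location
        return self.cells[address]

    def write(self, address, data):
        # Write operation: store the word at the given location
        self.cells[address] = data

mem = MemoryUnit(1024)
mem.write(42, 0xBEEF)
print(mem.read(42))  # the word just written
```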



SPEED OF THE MEMORY
There are two useful measures of the speed of memory units: memory access time and memory cycle time.

Memory access time is the time between the initiation of a memory operation and the completion of that operation. Memory cycle time is the minimum time delay required between the initiation of two successive memory operations, for example between two successive memory read operations.



The computer system has a memory hierarchy consisting of the
storage devices in it. A typical memory hierarchy is illustrated in the figure below:

Figure: Memory Hierarchy


CHARACTERISTICS OF THE MEMORY

There are three key characteristics of memory: cost, capacity and access time. Moving down the memory hierarchy, the cost of the storage devices decreases, but their storage capacity as well as their access time increases. In other words, the smaller memories are more expensive and much faster; they are supplemented by the larger, cheaper and slower storage devices.



Thus, from the above figure it can be seen that the registers are at the top of the hierarchy, and so they provide the fastest, smallest and most expensive type of memory device for the CPU to access data. Registers are actually a small amount of storage available on the CPU, and their contents can be accessed more quickly than any other available storage. They may be 8-bit or 32-bit registers, according to the number of bits that they can hold.

Magnetic disks and magnetic tapes are the secondary storage media whose data-holding capacities are much larger than those of the processor registers and the semiconductor memories, which cannot hold all the data to be stored in the computer system.



MAIN MEMORY
The main memory is the central storage unit of the computer system. It refers to the physical memory that is internal to a computer; the word "memory", when used alone, usually refers to the main memory. The computer can process only the data that is inside the main memory, so for execution, programs and data must first be brought into the main memory from the storage device where they are kept. Computer memory plays a crucial role in the performance, reliability and stability of the system.



TYPES OF MAIN MEMORY
There are two types of main memory:

• RAM (Random Access Memory) and


• ROM (Read Only Memory).

RAM: In RAM, it is possible to both read and write data from and to the memory in a fixed amount of time, independent of the memory location or address.

ROM: ROM is a non-volatile semiconductor memory; that is, it does not lose its contents even when the power is switched off. ROM is not re-writable once it has been written or manufactured. ROM is used for programs such as the bootstrap program that starts a computer and loads its operating system.
SEMICONDUCTOR RAM
The basic building block of the semiconductor memories is the RAM chip. RAM is actually made up of a number of RAM chips. These contain a number of memory cells, which are electronic circuits having two stable states: 0 and 1. The binary information is stored in the memories in the form of arrays having rows and columns. With the advent and advances of VLSI (Very Large Scale Integration) circuits, thousands of memory cells can be placed on one chip.



Static and Dynamic RAM
There are two main types of semiconductor RAM memories, static RAM (SRAM) and dynamic RAM (DRAM), along with their variations.

Static RAM consists of internal flip-flops that store the binary information; each flip-flop uses four to six transistors. SRAM can hold its data as long as power is supplied to the circuit, and SRAM cells can keep the data intact without any external refresh circuitry. Static RAM is so called because it can retain its state as long as the power is applied. As SRAM never has to be refreshed, it is very fast. However, because each cell requires several transistors, it takes up more space on a RAM chip than a dynamic RAM cell.
Performance-wise, SRAM is superior to DRAM. But because of the size and cost of SRAM, DRAM is used for the system (main) memory instead, while SRAM is used for cache memory, as cache memory needs to be faster and smaller.



Dynamic RAM is so named because its cells do not retain their states indefinitely. DRAM stores the information in the form of a charge on capacitors: a capacitor holds a charge if the bit is a "1" and holds no charge if the bit is a "0". Unlike SRAM, DRAM uses only one transistor to read the contents of the capacitor. The capacitors are very tiny and can hold a charge only for a short period of time, after which it starts fading away. Therefore, refresh circuitry is required in DRAM to read the contents of the cells and refresh them with a fresh charge before the contents are lost.

This refreshing of the cells is done hundreds of times every second, irrespective of whether the computer is using the DRAM memory at that time or not.
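The need for refresh can be illustrated with a toy simulation (the decay model, the retention value and the class name are all invented for illustration; real DRAM retention is measured in milliseconds):

```python
# Toy model of a DRAM cell: a stored '1' leaks away after `retention`
# time steps unless the cell is refreshed (read and rewritten).
class DRAMCell:
    def __init__(self, bit, retention=5):
        self.charge = bit          # 1 = charged, 0 = no charge
        self.age = 0               # time steps since last refresh
        self.retention = retention

    def tick(self):
        # One time step without refresh: the charge eventually fades.
        self.age += 1
        if self.age >= self.retention:
            self.charge = 0        # the stored '1' is lost

    def refresh(self):
        # Read the cell and rewrite it with a fresh charge.
        if self.charge == 1:
            self.age = 0

cell = DRAMCell(1)
for _ in range(4):
    cell.tick()
cell.refresh()        # refreshed in time, so the bit survives
for _ in range(4):
    cell.tick()
print(cell.charge)    # still 1 thanks to the refresh
```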



DRAM can in turn be synchronous or asynchronous. The DRAM discussed above is asynchronous DRAM; that is, the memory is not synchronized to the system clock, and the memory signals are not coordinated with it at all. Synchronous DRAM (SDRAM) is synchronized with the system clock, that is, with the clock speed of the microprocessor. All signals follow the clock, so the timings are controlled and tight. The figure below shows the structure of SDRAM.



Internal Organization of Memory Chips
Internally, the memory in the computer system is organized in the form of an array of rows and columns.


Figure: Organization of cells in a memory chip


Each cell in the array can hold one bit of information, and each row in the array forms a memory word. The cells in a column are all connected to a Read/Write circuit. The figure above presents a possible organization of the memory cells.

Let us consider a simple example of the organization of a 64-bit memory. If the memory is organized into 16 groups or words, it can be arranged as a 16 x 4 memory; that is, it contains 16 memory words, each 4 bits long. There are also other ways of organizing the same memory: for example, it can be arranged as 64 x 1, 32 x 2 or 8 x 8.
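The possible arrangements of this 64-bit memory can be enumerated directly (a small sketch; only the 64-bit total from the example above is assumed):

```python
# Enumerate the ways a 64-bit memory can be organized as
# (number of words) x (bits per word), as in the example above.
TOTAL_BITS = 64

for words in [64, 32, 16, 8]:
    bits_per_word = TOTAL_BITS // words
    print(f"{words} x {bits_per_word}")  # e.g. 16 x 4: 16 words of 4 bits
```

Every arrangement stores the same 64 bits; only the word length seen by the Read/Write circuits changes.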



CHECK YOUR PROGRESS
1. Compare the characteristics of SRAM and DRAM.
2. Fill in the blanks:
(a) Data and _____ are transferred from the secondary memory to the _______ whenever it is needed by the CPU.
(b) The records or the data are accessed in a linear fashion, from its current location in the memory, in _________ access.
(c) Memory _______ time is the time between the initiation of a memory operation and the completion of that operation.
(d) Memory ________ time is the minimum time delay that is required between the initiation of two successive memory operations.
(e) Memory access time is _________ in magnetic disks than in magnetic tapes.
(f) Registers are small storage inside the ___________.
(g) RAM can access data in a fixed amount of time _______ of the memory location or address.
(h) RAM is a ______ memory and ROM is a _______ memory.
(i) Registers are measured by the number of _____ that they can hold.
(j) The magnetic tapes are more suited for the __________ of the large amounts of the computer data.
ROM
ROM (Read Only Memory) is another type of main memory that can only be read. Each memory cell in ROM is hardware-preprogrammed during the IC (Integrated Circuit) fabrication process; that is, the code or data in ROM is programmed into it at the time of manufacture. The data stored in ROM is not lost even if the power is switched off, and for this reason it is called non-volatile storage. It is used to store programs that are permanently kept and are not subject to change. The system BIOS program is stored in ROM so that the computer can use it to boot the system when it is switched on.



Types of ROM

There are five types of ROM:

1. ROM (Read Only Memory)
2. PROM (Programmable Read Only Memory)
3. EPROM (Erasable Programmable Read Only Memory)
4. EEPROM (Electrically Erasable Programmable Read Only Memory)
5. Flash EEPROM Memory



1. ROM (Read Only Memory)
The contents of ROM are permanent and are programmed at the factory. It is designed to perform a specific function, cannot be changed, and is reliable.

2. PROM (Programmable Read Only Memory)
As the name indicates, this type of ROM chip allows the user to code data into it. However, PROMs can be programmed only once, and they are more fragile than ROMs.

3. EPROM (Erasable Programmable Read Only Memory)
An EPROM is another type of ROM chip that allows data to be erased and reprogrammed; it can be rewritten many times.



4. EEPROM (Electrically Erasable Programmable Read Only Memory)
The drawbacks of EPROMs are that they must be physically removed to be rewritten, and that the entire chip has to be completely erased just to change a particular portion of it. EEPROM was introduced to remove these drawbacks: EEPROM chips can be both programmed and erased electrically.

5. Flash EEPROM Memory
As EEPROM is too slow to be used in products that have to make quick changes to the data on the chip, Flash EEPROM devices were developed. Their advantage is that they work faster and their power consumption is low.



CHECK YOUR PROGRESS
3. Write True or False :
(a) SRAM and DRAM are the two types of semiconductor
memories.
(b) SRAM stores the data in flip-flops and DRAM stores the data
in capacitors.
(c) Main memory is actually the Static RAM.
(d) DRAM is more expensive as compared to SRAM.
(e) The capacitors have their own tendency to leak their charge.
(f) Conventional DRAM is the Synchronous DRAM.
4. Fill in the blanks :
(a) A refresh circuitry is required in case of ___________.
(b) SRAM are used for _______________ memory and DRAM is
used for ______________ memory
(c) DRAM uses one transistor to read the contents of
the______________________.
(d) The type of RAM that can hold its data without external
refresh for as long as power is supplied to the circuit is called
_________________.
(e) SDRAM is synchronized to the ____________________.
(f) _________________ is rapidly becoming the new memory standard for modern PCs.
LOCALITY OF REFERENCE
During program execution, memory access time is of utmost importance. It has been observed that data and instructions which are executed repeatedly are located near each other. Many instructions in localized areas of the program are executed repeatedly during some time period, while the remaining instructions of the program are accessed relatively infrequently. This characteristic of a program is referred to as the "locality of reference". There are two types of locality of reference: temporal locality and spatial locality.



Temporal locality of reference means that a recently executed instruction is likely to be executed again very soon. This is because when a program loop is executed, the same set of instructions is referenced and fetched repeatedly; examples are loop indices and single data elements.

Spatial locality of reference means that data and instructions which are close to each other are likely to be accessed soon. This is because, in a program, the instructions are stored in consecutive memory locations; examples are sequential code, array processing and code within a loop.
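Both kinds of locality can be seen in a short loop (an illustrative sketch; the array and loop are invented for this example):

```python
# Summing an array exhibits both localities: the loop instructions and
# the index variable are reused on every iteration (temporal locality),
# while the array elements are read from consecutive memory locations
# (spatial locality).
data = list(range(100))     # elements stored at consecutive addresses

total = 0
for i in range(len(data)):  # 'i' and the loop body: temporal locality
    total += data[i]        # data[0], data[1], ...: spatial locality

print(total)  # 4950
```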



CACHE MEMORY
To make use of the locality of reference principle, a small high-speed memory can be used to hold just the active portions of code or data. This memory is termed cache memory (the word cache is pronounced "cash"). It stores the data and instructions which are to be executed immediately. Two types of caching are commonly found in computers: memory caching and disk caching.



Memory Caching
In memory caching, the cache is made up of high-speed static RAM (SRAM). Static RAM is made up of transistors that do not need to be constantly refreshed, so it is much faster than dynamic RAM; its access time is about 10 nanoseconds (ns). The main memory is usually made up of dynamic RAM (DRAM) chips; it is directly addressed by the CPU and its access time is about 50 ns. Data and instructions stored in cache memory are therefore transferred to the CPU many times faster than from main memory.

By using an intelligent algorithm, a cache holds the data that is accessed most often between a slower peripheral device and the faster processor.



Some memory caches are built into the architecture of microprocessors; these are called internal caches. For example, the Intel 80486 microprocessor contains an 8K memory cache and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches. Cache outside the microprocessor, i.e., on the motherboard, is called external cache or Level 2 (L2) cache. External caches are found in almost all modern personal computers. These caches are placed between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM, but they are much larger.



Disk Caching
Disk caching works under the same principle as memory caching, but instead of using high-speed static RAM, it uses dynamic RAM. The most recently accessed data from the disk is stored in the main memory, which is made up of DRAM. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can improve the performance of applications, because accessing a byte of data in DRAM can be thousands of times faster than accessing a byte on a hard disk.
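The check-the-cache-first pattern described above can be sketched as follows (the dictionaries and the `read_block` function are illustrative placeholders, not a real disk API):

```python
# Sketch of disk caching: before going to the (slow) disk, look in a
# DRAM-resident cache of recently accessed blocks.
disk = {0: b"block zero", 1: b"block one"}   # stand-in for the hard disk
disk_cache = {}                              # recently accessed blocks in DRAM

def read_block(block_no):
    if block_no in disk_cache:               # fast path: DRAM access
        return disk_cache[block_no]
    data = disk[block_no]                    # slow path: go to the disk
    disk_cache[block_no] = data              # remember it for next time
    return data

read_block(1)           # first access: fetched from the disk
print(1 in disk_cache)  # True: the block is now cached in DRAM
```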



Cache Operation - an overview
When the CPU sends the address of an instruction code or data item, the cache controller examines whether the content of the specified address is present in the cache memory. If the requested data or instruction code is found in the cache, the cache controller enables the cache memory to send the specified data/instruction to the CPU; this is known as a "hit". If it is not found in the cache memory, a "miss" is said to have occurred, and the cache controller enables the controller of the main memory to send the specified code or data from the main memory. The performance of cache memory is measured in terms of the hit ratio: the number of hits divided by the total number of requests made. The total number of requests is calculated by adding the number of hits and the number of misses.
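The hit-ratio definition translates directly into arithmetic (a sketch; the hit and miss counts are invented):

```python
# Hit ratio = hits / total requests, where total = hits + misses.
hits = 950
misses = 50

total_requests = hits + misses
hit_ratio = hits / total_requests

print(hit_ratio)  # 0.95
```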
It is not necessary for the processor to know about the existence of the cache; the processor simply issues READ and WRITE requests using addresses that refer to locations in the memory. Both the main memory and the cache are divided into equal-sized units called blocks. The term block is usually used to refer to a set of contiguous address locations of some size.

In a READ operation, if the requested word is already in the cache, the main memory is not involved. When a READ request from the processor misses in the cache, the contents of a block of memory words containing the specified location are transferred into the cache, one word at a time.

Usually, the cache memory can store a reasonable number of blocks at any given time. The correspondence between the main memory blocks and those in the cache is specified by a mapping function. When the cache is full, it becomes necessary to apply a replacement algorithm, which decides which block should be moved out of the cache to make room for the new block.



Normally, the block that is replaced is the one that will not be accessed or needed again for the longest time; the cache hardware typically uses counters or a timer to approximate this.

When the memory operation is a WRITE, there are two ways to proceed: the write-through method and the write-back method.

• Write-through method: the cache location and the main memory location are updated simultaneously.

• Write-back method: only the cache location is updated, and it is marked as updated with an associated flag bit, also known as the dirty bit. Here, the update of main memory occurs only when the block containing the marked word is removed from the cache to make room for a new block.
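The two write policies can be contrasted in a small sketch (the single-line "cache", addresses and values here are deliberate simplifications invented for illustration):

```python
# Sketch of the two write policies for a single cached word.
class CacheLine:
    def __init__(self, address, data):
        self.address = address
        self.data = data
        self.dirty = False     # dirty bit: cache updated, memory not yet

main_memory = {7: 10}
line = CacheLine(7, 10)

def write_through(line, value):
    # Update the cache location and main memory simultaneously.
    line.data = value
    main_memory[line.address] = value

def write_back(line, value):
    # Update only the cache and mark the line dirty; main memory is
    # written later, when the line is evicted.
    line.data = value
    line.dirty = True

def evict(line):
    # On eviction, a dirty line must be written back to main memory.
    if line.dirty:
        main_memory[line.address] = line.data
        line.dirty = False

write_back(line, 99)
print(main_memory[7])  # still 10: memory is stale until eviction
evict(line)
print(main_memory[7])  # 99 after the write-back
```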



MAPPING FUNCTIONS
To search for a word in the cache memory, the cache is mapped to the main memory. There are three different ways this mapping can generally be done:
• Direct Mapping
• Associative Mapping
• Set-Associative Mapping

Mapping functions are used to decide which main memory block occupies which line of the cache. As there are fewer lines (or blocks) of cache than main memory blocks, an algorithm is needed to decide this. Let us take as an example a system with a cache of 2048 (2K) words and 64K (65,536) words of main memory. Each block of the cache memory is 16 words in size, so there are 128 such blocks (16 * 128 = 2048). Let the main memory be addressable by a 16-bit address (2^16 = 65,536 = 64 * 1024).
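For this example system, a 16-bit address is commonly interpreted as a tag, a block number and a word-within-block offset; the 5/7/4 field split below follows the standard treatment of this example and is not stated explicitly on the slide:

```python
# Split a 16-bit address for the example system: 64K-word main memory,
# 2K-word cache, 16-word blocks (so 128 cache blocks). Under direct
# mapping the address divides into a 5-bit tag, a 7-bit block field
# and a 4-bit word field (5 + 7 + 4 = 16).
def split_address(addr):
    word = addr & 0xF            # low 4 bits: word within the block
    block = (addr >> 4) & 0x7F   # next 7 bits: cache block number
    tag = addr >> 11             # high 5 bits: tag
    return tag, block, word

tag, block, word = split_address(0b10101_0000011_1001)
print(tag, block, word)  # 21 3 9
```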
Direct Mapping
The simplest way of determining the cache location for placement of a main memory block is the direct mapping technique. Here, block i of the main memory maps onto block (i modulo 128) of the cache memory.

The advantage of direct mapping is that it is simple and inexpensive. The main disadvantage is that there is a fixed cache location for any given block of main memory: if a program repeatedly accesses two blocks that map to the same cache line, the cache miss rate becomes very high.
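The modulo rule, using the 128-block cache from the example above, can be sketched directly (the sample block numbers are invented):

```python
# Direct mapping for the example cache with 128 blocks: main memory
# block i always lands in cache block (i mod 128).
CACHE_BLOCKS = 128

def cache_block_for(main_block):
    return main_block % CACHE_BLOCKS

print(cache_block_for(0))    # 0
print(cache_block_for(129))  # 1
print(cache_block_for(300))  # 44

# Blocks 5 and 133 collide: a program alternating between them
# evicts the other block on every access, causing repeated misses.
print(cache_block_for(5) == cache_block_for(133))  # True
```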



Associative Mapping
This type of mapping overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache. To do so, the cache controller interprets a memory address as a tag and a word field; the tag uniquely identifies a block of main memory.

With this mapping, the space in the cache can be used more efficiently. The primary disadvantage of this method is that, to find out whether a particular block is in the cache, all cache lines have to be examined. With this method, replacement algorithms are required to maximize its potential.
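The examine-every-line lookup can be sketched as follows (the line contents and tag values are invented; real hardware compares all tags in parallel rather than in a loop):

```python
# Sketch of associative mapping: a block may sit in any cache line, so
# a lookup must compare the address tag against every line's tag.
cache_lines = [
    {"tag": 12, "data": "block 12"},
    {"tag": 7,  "data": "block 7"},
    {"tag": 31, "data": "block 31"},
]

def lookup(tag):
    for line in cache_lines:        # every line must be examined
        if line["tag"] == tag:
            return line["data"]     # hit
    return None                     # miss: fetch from main memory

print(lookup(7))   # hit
print(lookup(99))  # miss
```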

