Presentation 2
INTRODUCTION
In this unit we shall discuss the various types of memory associated with a
computer system, including main memory, cache memory and virtual memory.
The computer stores the programs and the data in its memory unit.
The CPU fetches the instructions out of the memory unit to execute
and process them.
Memory can be primary (or main) memory and secondary (or auxiliary)
memory. Main memory stores programs and data currently executed
by the CPU of a computer. Auxiliary memory provides backup storage
of information. Data as well as instructions are transferred from the
secondary memory to the main memory whenever they are needed by
the CPU.
• In Direct Access, each record has different addresses based on the physical location of the
memory and the shared Read/Write head moves directly to the desired record. This method
is used in magnetic disks.
• In Random Access, each memory location has a unique address. The memory retrieves
the data stored at the addressed location and transmits that data to the requesting
device via the bus, and the access time is the same for every memory location.
There are three key characteristics of the memory. They are cost,
capacity and access time. As we move down the memory hierarchy,
it is found that the cost of the storage devices decreases but their
access time increases. In other words, the smaller memories are more
expensive and much faster than the larger storage devices.
RAM : In RAM, it is possible to both read and write data from and
to the memory in a fixed amount of time independent of the memory
location or address.
ROM : ROM is a non-volatile semiconductor memory; that is, it doesn’t
lose its contents even when the power is switched off. ROM is not
re-writable once it has been written or manufactured. ROM is used
for programs such as the bootstrap program that starts a computer and
loads its operating system.
MR. SOLOMON ANAB 11
SEMICONDUCTOR RAM
The basic building block of the semiconductor memories is the memory cell,
which is capable of being in two stable states: 0 and 1. The binary
information is stored in the form of arrays having rows and columns in the
memories. In a dynamic RAM (DRAM), each cell stores a bit as electric charge
that leaks away over time, so the cells must be refreshed periodically.
This refreshing of the cells is done hundreds of times every second,
irrespective of whether the computer is using the DRAM memory
at that time or not.
The data stored in ROM is not lost even if the power is switched off.
ROM is therefore used for programs that are permanently stored and are not
subject to change, such as the bootstrap program, so that the computer
can use it to boot the system when the computer is switched on.
The drawbacks of EPROMs are that they must be physically removed to be rewritten
and also the entire chip has to be completely erased to just change a particular
portion of it. EEPROM was introduced to remove these drawbacks of EPROM.
EEPROM chips can be both programmed and the contents can be erased electrically.
As EEPROM is too slow to be used in products that have to make quick changes to the
data on the chip, the Flash EEPROM devices were developed.
The advantage of Flash devices is that they work faster and their power consumption is low.
It is observed that data and instructions which are executed repeatedly are located near to each
other. Many instructions in localized areas of the program are executed repeatedly
during some time period, and the other remaining instructions of the program are accessed
relatively infrequently. This property is called the “locality of
reference”. There are two types of locality of references. These are : temporal locality and
spatial locality.
Temporal locality of reference means that a recently executed instruction or recently
used data item is likely to be accessed again soon. Spatial locality of reference means
that data and instructions which are close to each other are likely to be executed soon.
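Both forms of locality can be seen in an ordinary loop. The following Python sketch is our own illustration (it is not from the slides); the comments mark which accesses exhibit which kind of locality.

```python
def sum_array(data):
    # 'total' is read and written on every iteration: the same
    # location is reused again and again (temporal locality).
    total = 0
    for i in range(len(data)):
        # data[0], data[1], data[2], ... are stored next to each
        # other, so consecutive iterations touch neighbouring
        # addresses (spatial locality).
        total += data[i]
    return total

print(sum_array([1, 2, 3, 4]))  # 10
```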
This property is exploited by using a small, fast memory to hold just the active
portions of code or data. This memory is termed as cache memory. The word cache
is pronounced as cash. By using an intelligent algorithm, a cache contains the
data that is accessed most often, sitting between a slower device and the faster processor.
Many processors include a memory cache built into the microprocessor chip itself, called
internal cache. For example, the Intel 80486 microprocessor contains an 8K memory cache
and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches.
Cache outside the microprocessor i.e., on the motherboard is called external cache or
Level 2(L2) cache. External caches are found in almost all modern personal computers.
These caches are placed between the CPU and the DRAM. Like L1 caches, L2 caches are
typically composed of fast SRAM. Whenever the processor issues a memory request, the
cache controller examines whether the content of the specified address is present in
the cache memory. If it is, the cache controller enables the cache memory to send the
specified data/instruction to the CPU. This is known as a ‘hit’. If it is not found
in the cache memory, then it is said that a ‘miss’ has occurred and the cache
controller enables the controller of the main memory to send the specified code or
data from the main memory. The performance of cache memory is measured in terms of
hit ratio. It is the ratio of the number of hits divided by the total number of
requests made. By adding the number of hits and the number of misses, the total
number of requests is calculated.
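The hit-ratio calculation is simple arithmetic; here is a minimal Python sketch (the function name is our own):

```python
def hit_ratio(hits, misses):
    # Total requests = number of hits + number of misses.
    total_requests = hits + misses
    # Hit ratio = number of hits divided by the total number of requests.
    return hits / total_requests

# For example, 950 hits and 50 misses out of 1000 requests:
print(hit_ratio(950, 50))  # 0.95
```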
It is not necessary for the processor to know about the existence of cache. Processor
simply issues READ and WRITE requests using addresses that refer to locations in the
memory. Both the main memory and the cache are divided into equal-sized
units called blocks. The term block is usually used to refer to a set of contiguous
address locations of some size.
In a READ operation, if the requested word is present in the cache, the main memory
is not involved. When a READ request from the processor misses in the cache, the
contents of a block of memory words containing the location specified are transferred
into the cache, one word at a time.
Usually, the cache memory can store a reasonable number of blocks at any given time.
The correspondence between the main memory blocks and those in the cache is
specified by a mapping function. When the cache is full, it becomes necessary to
implement a replacement algorithm. The replacement algorithm decides which block
should be moved out of the cache to make room for the new block.
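A common choice of replacement algorithm is least recently used (LRU), which moves out the block that has gone unreferenced the longest. The following Python sketch is our own illustration of the idea, not an algorithm named in the slides:

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache with LRU replacement: when the cache is full,
    the least recently used block is moved out to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block number -> contents

    def access(self, block, contents=None):
        if block in self.blocks:
            # Hit: mark this block as the most recently used.
            self.blocks.move_to_end(block)
            return self.blocks[block]
        # Miss: if the cache is full, evict the least recently used block.
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)
        self.blocks[block] = contents
        return contents

cache = LRUCache(2)
cache.access(0, "A")
cache.access(1, "B")
cache.access(0)            # block 0 is now the most recently used
cache.access(2, "C")       # cache full: block 1 is evicted
print(list(cache.blocks))  # [0, 2]
```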
When the memory operation is a WRITE, there are two ways to proceed: write-through
method and write-back method.
Write-through method: In this method, the cache
location and the main memory location are updated
simultaneously.
Write-back method: This method is to update only the cache location and to mark it as
updated with an associated flag bit, also known as the dirty bit. Here, the update of main
memory occurs only when the block containing this marked word is to be removed from the
cache to make room for a new block.
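The two write policies can be sketched in a few lines of Python; the class names and the dictionary standing in for main memory are our own simplification:

```python
class WriteThroughCache:
    """Write-through: the cache location and the main memory
    location are updated simultaneously."""

    def __init__(self, memory):
        self.memory = memory  # dict standing in for main memory
        self.cache = {}

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory[addr] = value  # memory updated on every write


class WriteBackCache:
    """Write-back: only the cache location is updated and its dirty
    bit is set; main memory is updated only on eviction."""

    def __init__(self, memory):
        self.memory = memory
        self.cache = {}  # addr -> (value, dirty bit)

    def write(self, addr, value):
        self.cache[addr] = (value, True)  # mark dirty; memory untouched

    def evict(self, addr):
        value, dirty = self.cache.pop(addr)
        if dirty:
            self.memory[addr] = value  # write back only now


memory = {}
wb = WriteBackCache(memory)
wb.write(7, "X")
print(7 in memory)  # False: main memory not yet updated
wb.evict(7)
print(memory[7])    # X
```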
Mapping functions are used as a way to decide which main memory block occupies which
line of cache. As there are fewer lines (or blocks) of cache than main memory blocks, an
algorithm is needed to decide this. Let us take an example: a system with a cache of 2048
(2K) words and 64K (65536) words of main memory. Each block of the cache memory is
of size 16 words. Thus, there will be 128 such blocks (i.e., 16*128 = 2048). Let the main
memory be addressable by a 16-bit address (i.e., 2^16 = 65536 = 64*1024).
Direct Mapping
The simplest way for determining the cache location for placement of a main
memory block is the direct mapping technique. Here, the block i of the main
memory maps onto block i modulo 128 of the cache memory.
The advantage of direct mapping is that it is simple and inexpensive. The main
disadvantage of this mapping is that there is a fixed cache location for any given
block in main memory. If a program accesses two blocks that map to the
same cache line repeatedly, then cache misses are very high.
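Using the numbers from the example above (a 2048-word cache, 16-word blocks, a 16-bit word address), the direct-mapped address split can be computed directly. This Python sketch is our own illustration:

```python
WORDS_PER_BLOCK = 16  # block size from the example
CACHE_BLOCKS = 128    # 2048 words / 16 words per block

def direct_map(address):
    """Split a 16-bit word address into (tag, cache block, word)."""
    word = address % WORDS_PER_BLOCK         # position within the block
    main_block = address // WORDS_PER_BLOCK  # block i of main memory
    cache_block = main_block % CACHE_BLOCKS  # block i modulo 128
    tag = main_block // CACHE_BLOCKS         # distinguishes the blocks
                                             # that share this cache line
    return tag, cache_block, word

print(direct_map(0))     # (0, 0, 0)
print(direct_map(2048))  # (1, 0, 0): collides with address 0's block
```

Addresses 0 and 2048 land in the same cache line with different tags, which is exactly the repeated-miss scenario described above.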
Associative Mapping
Associative mapping allows a main memory block to be loaded into any line of cache. To do
so, the cache controller interprets a memory address as a tag and a word field. The tag
uniquely identifies a block of main memory. With this mapping, the space in the cache can
be used more efficiently. The primary disadvantage of this method is that to find out
whether a particular block is in cache, all cache lines would have to be examined. Using
this method, replacement algorithms are needed to decide which of the cache lines to
replace when the cache is full.
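The full search that associative mapping requires can be sketched as follows; the list-of-dicts cache and the function name are our own illustration:

```python
def associative_lookup(cache_lines, tag):
    # Every cache line must be examined, comparing its stored tag
    # with the tag of the requested block.
    for line in cache_lines:
        if line["tag"] == tag:
            return line["data"]  # hit
    return None                  # miss: fetch from main memory

lines = [{"tag": 5, "data": "block 5"}, {"tag": 9, "data": "block 9"}]
print(associative_lookup(lines, 9))  # block 9
print(associative_lookup(lines, 7))  # None
```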