Memory Part-II


Part-II

By: Dr. Ajit Jain


Contents
 What is Cache Memory?
 Cache Memory Mapping
 Associative Mapping
 Direct Mapping
 Set Associative Mapping
Cache Memory
 A special very-high speed memory.
The speed of moving data between RAM and the CPU
is much slower than the processing capability of the
CPU, with the result that processing speed is limited
primarily by the speed of the main memory.
 A technique used to compensate for the mismatch in
these operating speeds is to employ an extremely fast,
small cache between the CPU and the main memory
whose access time is close to processing speed of the
CPU.
Cache Memory

Fig. 1 Cache Memory (CPU ↔ Cache Memory ↔ Main Memory)

 Cache is used for storing segments of programs currently


being executed in the CPU and temporary data frequently
needed in the present calculation.
• By making programs and data available at a rapid rate, it is
possible to increase the performance of the computer.
Cache Memory
How does cache memory work?
When a program is running and the CPU
needs to read a piece of data or program
instructions from RAM, the CPU checks first
to see whether the data is in cache memory. If
the data is not there, the CPU reads the data
from RAM into its registers, but it also loads a
copy of the data into cache memory. The next
time the CPU needs the data, it finds it in the
cache memory and saves the time needed to
load the data from RAM.
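The read-through flow described above can be sketched in a few lines of Python (a toy model, not any real hardware interface; the memory contents are made up for illustration):

```python
# Toy model of the cache read flow: check the cache first; on a miss,
# read from RAM and keep a copy in the cache for next time.
ram = {addr: addr * 10 for addr in range(8)}  # made-up main memory contents
cache = {}                                    # cache: address -> data

def read(addr):
    """Return (data, 'hit' or 'miss') for one memory read."""
    if addr in cache:            # word already in cache: a hit
        return cache[addr], "hit"
    data = ram[addr]             # miss: fetch from RAM ...
    cache[addr] = data           # ... and load a copy into the cache
    return data, "miss"

print(read(3))  # first access: a miss, loaded from RAM
print(read(3))  # second access: served from the cache
```

The second call avoids the (slow) RAM access entirely, which is exactly the time saving the paragraph describes.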
Auxiliary Versus Cache Memory

Auxiliary: holds those parts of the program and data that are
not presently used by the CPU.
Cache: holds those parts of the program and data that are most
heavily used by the CPU.
The CPU does not have direct access to auxiliary memory, but it
does have direct access to cache memory.

• Cache is typically 5 to 10 times faster than RAM.
• RAM is about 1000 times faster than auxiliary memory.
• Block size in auxiliary memory typically ranges from 256 to
2048 words, while cache block size is typically from 1 to 16
words.
Cache Memory…
 The performance of cache memory is frequently
measured in terms of a quantity called hit ratio.
 When the CPU refers to memory and finds the word in
cache, it is said to produce a hit .
 If the word is not found in cache, it is in main memory
and it counts as a miss.
• Hit Ratio = No. of Hits / Total No. of References to Memory
            = No. of Hits / (No. of Hits + No. of Misses)
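A quick worked example of the formula (the counts are made up):

```python
# Suppose 950 of 1000 memory references are found in cache.
hits, misses = 950, 50
hit_ratio = hits / (hits + misses)  # = hits / total references to memory
print(hit_ratio)  # 0.95
```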
Characteristics of Cache Memory

1. Fast access speed
2. Reduces the execution time of programs
3. Improves the performance of the system
Mapping
 The transformation of data from main memory to
cache memory is referred to as a mapping process.
 Three types of mapping procedures:
1. Associative Mapping
2. Direct Mapping
3. Set Associative Mapping
Example

Fig. 2 Cache Memory Example

For Main Memory:
• Size = 32K words of 12 bits each
• 32K = 32 x 1024 words = 2^15 words
• Size of Address Bus = 15 bits
• Size of Data Bus = 12 bits
For Cache Memory:
• Size = 512 words of 12 bits each

• The CPU communicates with both memories.
• It first sends a 15-bit address to cache.
• If there is a hit, the CPU accepts the 12-bit data from cache.
• If there is a miss, the CPU reads the word from main memory
and the word is then transferred to cache.
I Associative Mapping

• The fastest and most flexible cache organization uses an
associative memory.
• The associative memory stores both the address and the
content (data) of the memory word.
• This permits any location in cache to store any word from
main memory.

Fig. 3 Associative mapping cache (all numbers in octal).
I Associative Mapping

• A CPU address of 15 bits is placed in the argument register
and the associative memory is searched for a matching address.
• If the address is found: the 12-bit data is read and sent to
the CPU.
• If the address is not found: the main memory is accessed,
and the address-data pair is then transferred to the
associative cache memory.
• If the cache is full, room is made for the new pair using a
First-In First-Out (FIFO) replacement policy.
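The lookup and FIFO replacement described above can be sketched as follows (a toy model; the 4-entry capacity and memory contents are assumptions for illustration):

```python
from collections import OrderedDict

CACHE_SIZE = 4                         # assumed toy capacity
cache = OrderedDict()                  # address -> data; insertion order = FIFO
ram = {a: a + 100 for a in range(32)}  # made-up main memory contents

def read(addr):
    if addr in cache:                  # associative search over all stored addresses
        return cache[addr], "hit"
    if len(cache) >= CACHE_SIZE:       # cache full: evict the oldest pair (FIFO)
        cache.popitem(last=False)
    data = ram[addr]                   # miss: access main memory ...
    cache[addr] = data                 # ... then store the address-data pair
    return data, "miss"

for a in (1, 2, 3, 4, 5):              # the fifth access evicts address 1
    read(a)
print(read(1))  # address 1 was replaced, so this is a miss again
```

Because the full address is stored with each entry, any word may sit in any slot; this flexibility is what the tag/index schemes below trade away for cheaper hardware.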
Associative Mapping…
• Advantages
  • Flexibility: any main memory word can be loaded anywhere
in the cache.
  • Fastest cache organization.
• Limitation
  • Associative memories are expensive because of the added
logic associated with each cell.
II Direct Mapping

Fig. 4 Addressing relationships between main and cache memories

• The 15-bit CPU address is divided into a tag field and an
index field.
• The number of bits in the index field equals the number of
address bits required to access the cache memory.

• In general, there are 2^k words in cache memory and 2^n
words in main memory.
• The n-bit memory address is divided into k bits for the
index and (n - k) bits for the tag.
• Direct mapping uses the n-bit address to access the main
memory and the k-bit index to access the cache.
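For the 15-bit address and 512-word cache of the running example, k = 9 and n - k = 6. The split can be sketched with bit operations (Python; illustrative only):

```python
N_BITS, K_BITS = 15, 9                   # 2^15-word main memory, 2^9-word cache

def split_address(addr):
    """Split an n-bit main-memory address into (tag, index)."""
    index = addr & ((1 << K_BITS) - 1)   # low k bits select the cache word
    tag = addr >> K_BITS                 # remaining n - k bits form the tag
    return tag, index

tag, index = split_address(0o02000)      # octal 02000 = decimal 1024
print(tag, index)  # tag = 2 (octal 02), index = 0 (octal 000)
```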
• When the CPU generates a memory request:
  • The index field is used as the address to access the
cache.
  • The tag field of the CPU address is compared with the tag
in the word read from the cache.
  • If the two tags match, there is a hit and the desired data
word is in cache.
  • If there is no match, there is a miss and the required
word is read from main memory.

Fig. 5 Direct mapping cache organization

E.g. the CPU wants to access the word at address 02000.
How is it done?
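Continuing the example, a direct-mapped lookup with a tag comparison might be sketched like this (toy model; the main-memory data values are made up for illustration):

```python
K_BITS = 9
cache = [None] * (1 << K_BITS)            # 512 slots, each holding (tag, data)
ram = {0o02000: 0o5670, 0o00000: 0o1220}  # made-up main-memory contents (octal)

def read(addr):
    tag, index = addr >> K_BITS, addr & ((1 << K_BITS) - 1)
    entry = cache[index]
    if entry is not None and entry[0] == tag:  # stored tag matches: hit
        return entry[1], "hit"
    data = ram[addr]                           # miss: read from main memory
    cache[index] = (tag, data)                 # overwrite whatever shared this index
    return data, "miss"

print(read(0o02000))  # miss: word fetched into index 000 with tag 02
print(read(0o02000))  # hit: tags match
print(read(0o00000))  # miss: same index 000, different tag (a conflict)
```

The last access shows the weakness discussed next: two addresses sharing index 000 keep evicting each other.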
Direct Mapping…
 Advantages
 Simple mapping technique
 Easy to implement
 Limitation
 The hit ratio can drop considerably if two or more words
whose addresses have the same index but different tags
are accessed repeatedly.
Set Associative Mapping
• As we know, the major limitation of direct mapping is:
"two words with the same index in their address but with
different tag values cannot reside in cache memory at the
same time."
• This limitation of the direct cache organization can be
overcome by the set-associative organization of cache.
• In this organization, each word of cache (or memory
location) can store two or more words of memory under the
same index address.
Fig. 6 Two-way set-associative mapping cache

• Each index address refers to two data words and their
associated tags.
• Word Length: 2 x (6-bit tag + 12-bit data) = 36 bits.
• Thus the size of cache memory is 512 x 36 bits.
• It can accommodate 1024 words of main memory, since each
word of cache contains two data words.
• In general, a set-associative cache of set size k will
accommodate k words of main memory in each word of cache.
• When the CPU generates a memory request, the index
value of the address is used to access the cache.
• The tag field of the CPU address is then compared with
both tags in the cache to determine if a match occurs.
• The comparison logic is done by an associative search of
the tags in the set similar to an associative memory search:
thus the name "set-associative."
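A two-way set lookup can be sketched as follows (toy model; the data values and FIFO replacement within a set are assumptions for illustration):

```python
K_BITS, SET_SIZE = 9, 2
cache = [[] for _ in range(1 << K_BITS)]  # each index: up to 2 (tag, data) pairs
ram = {0o02000: 0o5670, 0o12000: 0o6710}  # made-up words sharing index 000

def read(addr):
    tag, index = addr >> K_BITS, addr & ((1 << K_BITS) - 1)
    for t, d in cache[index]:             # compare the tag with both stored tags
        if t == tag:
            return d, "hit"
    if len(cache[index]) >= SET_SIZE:     # set full: evict the oldest pair (FIFO)
        cache[index].pop(0)
    data = ram[addr]
    cache[index].append((tag, data))
    return data, "miss"

read(0o02000)
read(0o12000)                             # both words now share index 000
print(read(0o02000))  # hit: a direct-mapped cache would have evicted it
```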
Set Associative Mapping…
• Advantages
  • The hit ratio improves as the set size increases, because
more words with the same index but different tags can reside
in cache.
• Limitations
  • An increase in set size increases the number of bits in
the words of cache and requires more complex comparison
logic.
  • When a miss occurs in a set-associative cache and the set
is full, one of the tag-data items must be replaced with a
new value.
  • One of the most common replacement algorithms is
First-In First-Out (FIFO).
Thanks
