KT14203 Computer Architecture and Organization
Presented by: Dr. Mohd Hanafi Ahmad Hijazi, FKI, UMS
Slides, with minor modifications, taken from William Stallings, Computer Organization and Architecture, 9th Edition.
Chapter 4: Cache Memory
Key Characteristics of Computer Memory Systems
Capacity
Memory capacity is typically expressed in terms of bytes.
Unit of transfer
For internal memory, the unit of transfer is equal to the number of electrical lines into and out of the memory module.
Method of accessing units of data
Sequential access, direct access, random access, and associative access.
Locality of reference: when a block of data is fetched into the cache, it is likely that there will be future references to other words in the block.
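To see spatial locality in action, here is a standalone C sketch (illustrative, not from the slides) that traverses the same array two ways: the row-major loop uses every word of each fetched cache block, while the column-major loop touches one word per block before jumping away.

```c
#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N][N];
    long sum = 0;

    /* Row-major traversal: consecutive accesses hit adjacent words,
       so every word of a fetched cache block is used (good locality). */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Column-major traversal: successive accesses are N*sizeof(int)
       bytes apart, so most of each fetched block goes unused. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%ld\n", sum);
    return 0;
}
```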
Cache/Main Memory Structure
Cache Read Operation
Typical Cache Organization
Elements of Cache Design
Virtual memory
Facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available.
When virtual memory is used, the address fields of machine instructions contain virtual addresses.
For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory.
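A minimal sketch of what the MMU does on each access, assuming a hypothetical 32-bit machine with 4-KiB pages and a single-level page table (all sizes and names here are illustrative):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                        /* assumed 4-KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  (1u << (32 - PAGE_SHIFT))

/* Toy page table: virtual page number -> physical frame number. */
static uint32_t page_table[NUM_PAGES];

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within page  */
    uint32_t frame  = page_table[vpn];           /* table lookup        */
    return (frame << PAGE_SHIFT) | offset;       /* physical address    */
}

int main(void) {
    page_table[0x12345] = 0x00042;               /* map one page */
    printf("0x%08" PRIx32 "\n", translate(0x12345678u)); /* 0x00042678 */
    return 0;
}
```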
Logical and Physical Caches
Table 4.3: Cache Sizes of Some Processors
(a) Two values separated by a slash refer to instruction and data caches.
Direct Mapping
Direct Mapping Cache Organization
Direct Mapping Example
Direct Mapping Summary
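In Stallings' notation (a main-memory address of s + w bits, where w selects a word within a block and r selects a cache line), the direct-mapping parameters are usually summarized as below; main-memory block j maps to cache line i = j mod m.

```latex
\begin{align*}
\text{Address length} &= (s + w)\ \text{bits}\\
\text{Number of addressable units} &= 2^{s+w}\ \text{words or bytes}\\
\text{Block size} = \text{line size} &= 2^{w}\ \text{words or bytes}\\
\text{Number of blocks in main memory} &= 2^{s+w}/2^{w} = 2^{s}\\
\text{Number of lines in cache} &= m = 2^{r}\\
\text{Size of cache} &= 2^{r+w}\ \text{words or bytes}\\
\text{Size of tag} &= (s - r)\ \text{bits}
\end{align*}
```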
Associative Mapping Example
Associative Mapping Summary
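For comparison, with fully associative mapping a block may load into any line, so the entire s-bit block field serves as the tag; the usual summary, in the same notation as above, is:

```latex
\begin{align*}
\text{Address length} &= (s + w)\ \text{bits}\\
\text{Number of addressable units} &= 2^{s+w}\ \text{words or bytes}\\
\text{Block size} = \text{line size} &= 2^{w}\ \text{words or bytes}\\
\text{Number of blocks in main memory} &= 2^{s}\\
\text{Number of lines in cache} &= \text{undetermined by the address format}\\
\text{Size of tag} &= s\ \text{bits}
\end{align*}
```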
Mapping From Main Memory to Cache: k-Way Set Associative
k-Way Set Associative Cache Organization
Set Associative Mapping Summary
Number of sets = v = 2^d
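The remaining set-associative parameters follow the same pattern as the direct-mapping summary, with d set bits in place of the r line bits; main-memory block j maps to set i = j mod v.

```latex
\begin{align*}
\text{Number of lines in set} &= k\\
\text{Number of sets} &= v = 2^{d}\\
\text{Number of lines in cache} &= m = kv = k \times 2^{d}\\
\text{Size of cache} &= k \times 2^{d+w}\ \text{words or bytes}\\
\text{Size of tag} &= (s - d)\ \text{bits}
\end{align*}
```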
Replacement Algorithms
Once the cache has been filled, when a new block is brought into the cache one of the existing blocks must be replaced.
For direct mapping there is only one possible line for any particular block, so no choice is possible.
First-in-first-out (FIFO)
Replace the block in the set that has been in the cache longest.
Easily implemented as a round-robin or circular buffer technique; see the sketch below.
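A minimal C sketch of FIFO replacement inside one set, implemented as the round-robin pointer mentioned above (the 4-way geometry and all names are illustrative):

```c
#include <stdint.h>

#define WAYS 4              /* assumed 4-way set for illustration */

struct cache_set {
    uint32_t tag[WAYS];
    int      valid[WAYS];
    int      next_victim;   /* round-robin (FIFO) pointer */
};

/* Pick the line to replace when a new block arrives: fill an
   invalid line if one exists; otherwise evict the line the
   pointer names and advance the pointer, so the block that has
   been resident longest is replaced first. */
int fifo_pick_victim(struct cache_set *set) {
    for (int w = 0; w < WAYS; w++)
        if (!set->valid[w])
            return w;
    int victim = set->next_victim;
    set->next_victim = (set->next_victim + 1) % WAYS;
    return victim;
}
```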
Write Policy
If at least one write operation has been performed on a word in that line of the cache, then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block.
A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache: if a word is altered in one cache, it could conceivably invalidate a word in other caches.
Write Through and Write Back
Write through
Simplest technique
All write operations are made to main memory as well as to the cache
The main disadvantage of this technique is that it generates substantial
memory traffic and may create a bottleneck
Write back
Minimizes memory writes
Updates are made only in the cache
Portions of main memory are invalid and hence accesses by I/O
modules can be allowed only through the cache
This makes for complex circuitry and a potential bottleneck
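A C sketch contrasting the two policies on a single cache line (the line layout, the mem[] array standing in for main memory, and all names are illustrative):

```c
#include <stdint.h>
#include <string.h>

#define LINE_WORDS 8

static uint32_t mem[1 << 20];   /* stand-in for main memory (word-addressed) */

struct line {
    uint32_t data[LINE_WORDS];
    uint32_t base;              /* word address of the cached block */
    int      dirty;             /* used only by write-back */
};

/* Write-through: every store updates cache and memory together,
   so memory is always valid but every write generates bus traffic. */
void write_through(struct line *l, int word, uint32_t v) {
    l->data[word] = v;
    mem[l->base + word] = v;
}

/* Write-back: stores touch only the cache and set the dirty bit;
   memory is brought up to date once, when the line is evicted. */
void write_back(struct line *l, int word, uint32_t v) {
    l->data[word] = v;
    l->dirty = 1;
}

void evict(struct line *l) {
    if (l->dirty)
        memcpy(&mem[l->base], l->data, sizeof l->data);
    l->dirty = 0;
}
```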
Line Size
When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved.
As the block size increases, more useful data are brought into the cache.
Two specific effects come into play:
• Larger blocks reduce the number of blocks that fit into a cache
• As a block becomes larger, each additional word is farther from the requested word
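For example (illustrative numbers, not from the slides): in a fixed 16-KByte cache, raising the line size from 32 bytes to 128 bytes cuts the number of lines from 512 to 128, so far fewer distinct blocks can be resident at once.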
Multilevel Caches
The on-chip cache reduces the processor’s external bus activity, speeding up execution time and increasing overall system performance.
When the requested instruction or data is found in the on-chip cache, the bus
access is eliminated
On-chip cache accesses will complete appreciably faster than would even
zero-wait state bus cycles
During this period the bus is free to support other transfers
Two-level cache:
Internal cache designated as level 1 (L1)
External cache designated as level 2 (L2)
Potential savings due to the use of an L2 cache depends on the hit rates
in both the L1 and L2 caches
The use of multilevel caches complicates all of the design issues related
to caches, including size, replacement algorithm, and write policy
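As a rough illustration of how the two hit rates interact (all latencies and rates below are assumed for the example, not taken from the slides), the average access time of a two-level hierarchy can be estimated as:

```latex
T_{\text{avg}} = T_{L1} + m_1\,(T_{L2} + m_2\,T_{\text{mem}})
```

With an assumed T_L1 = 1 cycle, T_L2 = 10 cycles, T_mem = 100 cycles, an L1 miss rate m_1 = 0.05, and a local L2 miss rate m_2 = 0.1, this gives T_avg = 1 + 0.05 × (10 + 0.1 × 100) = 2 cycles, versus 1 + 0.05 × 100 = 6 cycles with no L2 at all.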
Figure: Hit ratio (L1 and L2) for 8-KByte and 16-KByte L1 caches.
Unified Versus Split Caches
Cache memory principles: how fast? how expensive?
Pentium 4 cache organization
ARM cache organization