
Computer Organization and Architecture

Designing for Performance


11th Edition

Chapter 4
The Memory Hierarchy:
Cache Memory

Copyright © 2019, 2016, 2013 Pearson Education, Inc. All Rights Reserved
Principle of Locality (1 of 2)
• Also referred to as the locality of reference
• Reflects the observation that during the course of
execution of a program, memory references by the
processor tend to cluster
• Locality is based on three assertions:
– During any interval of time, a program references memory locations non-uniformly
– As a function of time, the probability that a given unit of memory
is referenced tends to change slowly
– The correlation between immediate past and immediate future
memory reference patterns is high and tapers off as the time
interval increases
Principle of Locality (2 of 2)
• Two forms of locality:
  – Temporal locality
    ▪ Refers to the tendency of a program to reference in the near future those units of memory referenced in the recent past
    ▪ Constants, temporary variables, and working stacks are also constructs that lead to this principle
  – Spatial locality
    ▪ Refers to the tendency of a program to reference units of memory whose addresses are near one another
    ▪ Also reflects the tendency of a program to access data locations sequentially, such as when processing a table of data (see the sketch below)
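As a concrete illustration (a minimal sketch, not from the slides; the variable names are assumptions), the loop below exhibits both forms of locality:

    # Illustrative sketch of locality in a simple summation loop.
    # Spatial locality: table[i] steps through adjacent elements in order.
    # Temporal locality: total, i, and the loop's own instructions are
    # re-referenced on every iteration.
    table = list(range(1000))
    total = 0
    for i in range(len(table)):   # sequential access pattern
        total += table[i]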

Library Analogy Explained

Table 4.1
Key Characteristics of Computer Memory Systems
Location
  • Internal (e.g., processor registers, cache, main memory)
  • External (e.g., optical disks, magnetic disks, tapes)
Capacity
  • Number of words
  • Number of bytes
Unit of Transfer
  • Word
  • Block
Access Method
  • Sequential
  • Direct
  • Random
  • Associative
Performance
  • Access time
  • Cycle time
  • Transfer rate
Physical Type
  • Semiconductor
  • Magnetic
  • Optical
  • Magneto-optical
Physical Characteristics
  • Volatile/nonvolatile
  • Erasable/nonerasable
Organization
  • Memory modules

Characteristics of Memory Systems
• Location
– Refers to whether memory is internal or external to the computer
– Internal memory is often equated with main memory
– Processor requires its own local memory, in the form of registers
– Cache is another form of internal memory
– External memory consists of peripheral storage devices that are
accessible to the processor via I/O controllers

• Capacity
– Memory capacity is typically expressed in terms of bytes

• Unit of transfer
– For internal memory the unit of transfer is equal to the number of
electrical lines into and out of the memory module

Capacity and Performance:
The two most important characteristics of memory

Three performance parameters are used:

• Access time (latency)
  – For random-access memory it is the time it takes to perform a read or write operation
  – For non-random-access memory it is the time it takes to position the read-write mechanism at the desired location
• Memory cycle time
  – Access time plus any additional time required before a second access can commence
  – Additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively
  – Concerned with the system bus, not the processor
• Transfer rate
  – The rate at which data can be transferred into or out of a memory unit
  – For random-access memory it is equal to 1/(cycle time) (see the worked example below)
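As a worked illustration (all values assumed, not from the slides), the usual relationship for non-random-access memory is T_n = T_A + n/R, where T_A is the average access time, n the number of bits, and R the transfer rate:

    # Minimal sketch with assumed values.
    # Non-random-access memory: average time to read n bits.
    T_A = 0.1          # average access (positioning) time in seconds, assumed
    R = 1e6            # transfer rate in bits per second, assumed
    n = 512 * 8        # bits in one 512-byte sector
    T_n = T_A + n / R  # 0.1 + 4096/1e6 = ~0.104 s

    # Random-access memory: transfer rate is 1/(cycle time).
    cycle_time = 10e-9               # 10 ns, assumed
    transfer_rate = 1 / cycle_time   # 1e8 transfers per second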

Memory Hierarchy
• Design constraints on a computer’s memory can be summed up by three questions:
  – How much, how fast, how expensive
• There is a trade-off among capacity, access time, and cost:
  – Faster access time, greater cost per bit
  – Greater capacity, smaller cost per bit
  – Greater capacity, slower access time

Figure 4.6

Table 4.2
Characteristics of Memory Devices in a Memory
Architecture
• Registers
  – Typical technology: CMOS
  – Unit of transfer with next larger level (typical size): Word (32 bits)
  – Managed by: Compiler
• Cache
  – Typical technology: Static RAM (SRAM); embedded dynamic RAM (eDRAM)
  – Unit of transfer with next larger level (typical size): Cache block (32 bytes)
  – Managed by: Processor hardware
• Main memory
  – Typical technology: DRAM
  – Unit of transfer with next larger level (typical size): Virtual memory page (1 kB)
  – Managed by: Operating system (OS)
• Secondary memory
  – Typical technology: Magnetic disk
  – Unit of transfer with next larger level (typical size): Disk sector (512 bytes)
  – Managed by: OS/user
• Offline bulk memory
  – Typical technology: Magnetic tape
  – Managed by: OS/user

Why Cache?

Figure 5.1
Cache and Main Memory

Cache Memory Principles
• Block
– The minimum unit of transfer between cache and main memory

• Frame
– To distinguish between the data transferred and the chunk of physical memory,
the term frame, or block frame, is sometimes used with reference to caches

• Line
– A portion of cache memory capable of holding one block, so-called because it
is usually drawn as a horizontal object

• Tag
– A portion of a cache line that is used for addressing purposes

• Line size
– The number of data bytes, or block size, contained in a line

Figure 5.2
Cache/Main Memory Structure

Figure 5.3
Cache Read Operation

Figure 5.4
Typical Cache Organization

Cache Terminology

Cache Performance

Can Cache Increase the Performance of the System?
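One way to answer is a simple average-access-time model; the sketch below uses assumed example values (hit ratio and access times are not from the slides):

    # Average memory access time with a single cache level (assumed values).
    hit_ratio = 0.95   # fraction of references satisfied by the cache
    t_cache = 2        # cache access time, ns
    t_main = 60        # main memory access time, ns

    # On a miss, the cache is checked first, then main memory is accessed.
    t_avg = hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_main)
    print(t_avg)       # 5.0 ns on average, versus 60 ns with no cache at all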

Table 5.1
Elements of Cache Design
Cache Addresses
  • Logical
  • Physical
Cache Size
Mapping Function
  • Direct
  • Associative
  • Set associative
Replacement Algorithm
  • Least recently used (LRU)
  • First in first out (FIFO)
  • Least frequently used (LFU)
  • Random
Write Policy
  • Write through
  • Write back
Line Size
Number of Caches
  • Single or two level
  • Unified or split

Cache Size
• Preferable for the size of the cache to be:
  – Small enough so that the overall average cost per bit is close to that of main memory alone
  – Large enough so that the overall average access time is close to that of the cache alone
• Motivations for minimizing cache size:
  – The larger the cache, the larger the number of gates involved in addressing the cache, resulting in large caches being slightly slower than small ones
  – The available chip and board area also limits cache size
• Because the performance of the cache is very sensitive to the nature of the workload, it is impossible to arrive at a single “optimum” cache size
Table 5.3
Cache Access Methods

• Direct Mapped
  – Organization: Sequence of m lines
  – Mapping of main memory blocks to cache: Each block of main memory maps to one unique line of cache
  – Access using main memory address: Line portion of address used to access cache line; tag portion used to check for hit on that line
• Fully Associative
  – Organization: Sequence of m lines
  – Mapping of main memory blocks to cache: Each block of main memory can map to any line of cache
  – Access using main memory address: Tag portion of address used to check every line for hit on that line
• Set Associative
  – Organization: Sequence of m lines organized as v sets of k lines each (m = v × k)
  – Mapping of main memory blocks to cache: Each block of main memory maps to one unique cache set
  – Access using main memory address: Line portion of address used to access cache set; tag portion used to check every line in that set for hit on that line

Figure 5.6
Mapping from Main Memory to Cache: Direct and
Associative

Direct Mapping

Direct Mapping
Address Structure
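A minimal sketch of how the three address fields are extracted; the widths assumed here (2-bit word, 14-bit line, 8-bit tag, i.e., 24-bit addresses, 4-byte blocks, and 16K cache lines) match the textbook's running example, and other cache geometries simply change the field widths:

    # Split a 24-bit address into tag, line, and word fields (assumed widths).
    WORD_BITS = 2    # selects a byte within a 4-byte block
    LINE_BITS = 14   # selects one of 16K cache lines

    def split_address(addr):
        word = addr & ((1 << WORD_BITS) - 1)
        line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)
        tag = addr >> (WORD_BITS + LINE_BITS)
        return tag, line, word

    # A hit occurs when the tag stored in cache line `line` equals `tag`.
    tag, line, word = split_address(0x16339C)
    print(hex(tag), hex(line), hex(word))   # 0x16 0xce7 0x0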

Direct Mapping Summary

Figure 5.8
Direct Mapping Example

Associative Mapping

Associative Mapping
Address Structure
Tag: 22 bits | Word: 2 bits

• 22-bit tag stored with each 32-bit block of data
• Compare tag field with tag entry in cache to check for hit
• Least significant 2 bits of address identify which 8-bit word is required from the 32-bit data block
• e.g.
  – Address: FFFFFC → Tag: 3FFFFF, Data: 24682468, Cache line: 3FFF
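The example row above can be checked directly: with a 2-bit word field, the tag is simply the address shifted right by two bits.

    # Verifying the slide's example: tag = address >> 2.
    addr = 0xFFFFFC
    assert addr >> 2 == 0x3FFFFF   # the Tag value shown above
    assert addr & 0b11 == 0        # word field: byte 0 of the 32-bit block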

Associative Mapping Summary

Figure 5.11
Associative Mapping Example

Set Associative Mapping
• Compromise that exhibits the strengths of both the direct
and associative approaches while reducing their
disadvantages
• Cache consists of a number of sets
• Each set contains a number of lines
• A given block maps to any line in a given set
• e.g., 2 lines per set:
  – 2-way set-associative mapping
  – A given block can be in one of 2 lines in only one set (see the sketch below)
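A minimal sketch of the set-mapping rule (parameter values are assumed for illustration): with v sets, block j of main memory maps to set j mod v.

    # Which set does a main memory block map to? (assumed parameters)
    k = 2          # lines per set: 2-way set associative
    m = 16         # total cache lines, assumed
    v = m // k     # number of sets = 8

    def set_for_block(block_number):
        return block_number % v

    # Blocks 3, 11, and 19 all compete for the 2 lines of set 3.
    print([set_for_block(b) for b in (3, 11, 19)])   # [3, 3, 3]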

Figure 5.12
Mapping from Main Memory to Cache: k-Way Set
Associative

Set Associative Mapping
Address Structure

Set Associative Mapping Summary

Figure 5.14
Two-Way Set-Associative Mapping Example

Replacement Algorithms
• Once the cache has been filled, when a new block is
brought into the cache, one of the existing blocks must
be replaced
• For direct mapping there is only one possible line for any
particular block and no choice is possible
• For the associative and set-associative techniques a
replacement algorithm is needed
• To achieve high speed, an algorithm must be
implemented in hardware

The most common replacement algorithms are:
• Least recently used (LRU)
  – Most effective
  – Replace the block in the set that has been in the cache longest with no reference to it
  – Because of its simplicity of implementation, LRU is the most popular replacement algorithm (sketched below)
• First-in-first-out (FIFO)
  – Replace the block in the set that has been in the cache longest
  – Easily implemented as a round-robin or circular buffer technique
• Least frequently used (LFU)
  – Replace the block in the set that has experienced the fewest references
  – Could be implemented by associating a counter with each line
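A minimal LRU sketch for a single set (illustrative only; real caches implement this in hardware, typically with per-line use bits, and the class name here is an assumption):

    from collections import OrderedDict

    class LRUSet:
        """One k-way cache set with least-recently-used replacement."""
        def __init__(self, k):
            self.k = k                      # lines per set
            self.lines = OrderedDict()      # tag -> block data, oldest first

        def access(self, tag):
            if tag in self.lines:           # hit: mark as most recently used
                self.lines.move_to_end(tag)
                return "hit"
            if len(self.lines) == self.k:   # full set: evict the LRU line
                self.lines.popitem(last=False)
            self.lines[tag] = "block data"  # bring in the new block
            return "miss"

    s = LRUSet(k=2)
    print([s.access(t) for t in (1, 2, 1, 3, 2)])
    # ['miss', 'miss', 'hit', 'miss', 'miss']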

Write Policy
• When a block that is resident in the cache is to be replaced, there are two cases to consider:
  – If the old block in the cache has not been altered, then it may be overwritten with a new block without first writing out the old block
  – If at least one write operation has been performed on a word in that line of the cache, then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block
• There are two problems to contend with:
  – More than one device may have access to main memory
  – A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache; if a word is altered in one cache, it could conceivably invalidate a word in other caches

Write Through
and Write Back
• Write through
– Simplest technique
– All write operations are made to main memory as well as to the cache
– The main disadvantage of this technique is that it generates substantial
memory traffic and may create a bottleneck

• Write back
– Minimizes memory writes
– Updates are made only in the cache
– Portions of main memory are invalid and hence accesses by I/O modules
can be allowed only through the cache
– This makes for complex circuitry and a potential bottleneck
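A minimal sketch contrasting the two policies at line granularity (the function names and the dict/set model are illustrative assumptions, not a hardware description):

    # Write through: every write updates the cache and main memory together.
    def write_through(cache, memory, addr, value):
        cache[addr] = value
        memory[addr] = value      # generates memory traffic on every write

    # Write back: writes update only the cache and mark the line dirty,
    # so main memory is temporarily stale for that line.
    def write_back(cache, dirty, memory, addr, value):
        cache[addr] = value
        dirty.add(addr)

    # On replacement, a dirty line must be written out before being reused.
    def evict(cache, dirty, memory, addr):
        if addr in dirty:
            memory[addr] = cache[addr]
            dirty.discard(addr)
        del cache[addr]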

Write Miss Alternatives
• There are two alternatives in the event of a write miss at a cache level:
  – Write allocate: the block containing the word to be written is fetched from main memory (or the next-level cache) into the cache, and the processor proceeds with the write cycle
  – No write allocate: the block containing the word to be written is modified in main memory and not loaded into the cache
• Either of these policies can be used with either write through or write back
• No write allocate is most commonly used with write through
• Write allocate is most commonly used with write back

Cache Coherency
• A new problem is introduced in a bus organization in which more than one device has a
cache and main memory is shared
• If data in one cache are altered, this invalidates not only the corresponding word in main
memory, but also that same word in other caches
• Even if a write-through policy is used, the other caches may contain invalid data
• Possible approaches to cache coherency include:
  – Bus watching with write through
    ▪ Each cache controller monitors the address lines to detect write operations to memory by other bus masters
    ▪ If another master writes to a location in shared memory that also resides in the cache memory, the cache controller invalidates that cache entry
    ▪ This strategy depends on the use of a write-through policy by all cache controllers
  – Hardware transparency
    ▪ Additional hardware is used to ensure that all updates to main memory via cache are reflected in all caches
    ▪ If one processor modifies a word in its cache, this update is written to main memory
  – Noncacheable memory
    ▪ Only a portion of main memory is shared by more than one processor, and this is designated as noncacheable
    ▪ All accesses to shared memory are cache misses, because the shared memory is never copied into the cache
    ▪ The noncacheable memory can be identified using chip-select logic or high-address bits

Line Size
• When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved
• As the block size increases, more useful data are brought into the cache
• As the block size increases, the hit ratio will at first increase because of the principle of locality
• The hit ratio will begin to decrease as the block becomes bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced
• Two specific effects come into play:
  – Larger blocks reduce the number of blocks that fit into a cache
  – As a block becomes larger, each additional word is farther from the requested word

Multilevel Caches
• As logic density has increased it has become possible to have a cache on the same chip as the
processor

• The on-chip cache reduces the processor’s external bus activity, speeding up execution and increasing overall system performance
– When the requested instruction or data is found in the on-chip cache, the bus access is eliminated
– On-chip cache accesses will complete appreciably faster than would even zero-wait state bus
cycles
– During this period the bus is free to support other transfers

• Two-level cache:
– Internal cache designated as level 1 (L1)
– External cache designated as level 2 (L2)

• Potential savings due to the use of an L2 cache depends on the hit rates in both the L1 and L2
caches

• The use of multilevel caches complicates all of the design issues related to caches, including
size, replacement algorithm, and write policy
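As a worked illustration of the potential savings (all hit rates and access times below are assumed values):

    # Average access time with L1 and L2 caches (assumed values).
    h1, h2 = 0.90, 0.95          # L1 hit rate; L2 hit rate on L1 misses
    t1, t2, t_mem = 1, 5, 60     # access times in ns

    # An L1 miss costs an L2 lookup; a miss in both costs a memory access.
    t_avg = t1 + (1 - h1) * (t2 + (1 - h2) * t_mem)
    print(t_avg)                 # 1 + 0.1 * (5 + 0.05 * 60) = 1.8 ns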

Unified Versus Split Caches
• It has become common to split the cache:
  – One cache dedicated to instructions
  – One cache dedicated to data
  – Both exist at the same level, typically as two L1 caches
• Advantages of a unified cache:
  – Higher hit rate
    ▪ Balances the load between instruction and data fetches automatically
  – Only one cache needs to be designed and implemented
• Advantages of a split cache:
  – Eliminates cache contention between the instruction fetch/decode unit and the execution unit
    ▪ Important in pipelining
• The trend is toward split caches at L1 and unified caches for higher levels

Inclusion Policy
• Inclusive policy
– Dictates that a piece of data in one cache is guaranteed to be also found in all lower levels of caches
– Advantage is that it simplifies searching for data when there are multiple processors in the computing
system
– This property is useful in enforcing cache coherence

• Exclusive policy
– Dictates that a piece of data in one cache is guaranteed not to be found in any lower level of cache
– The advantage is that it does not waste cache capacity since it does not store multiple copies of the
same data in all of the caches
– The disadvantage is the need to search multiple cache levels when invalidating or updating a block
– To minimize the search time, the highest-level tag sets are typically duplicated at the lowest cache
level to centralize searching

• Noninclusive policy
– With the noninclusive policy a piece of data in one cache may or may not be found in lower levels of
caches
– As with the exclusive policy, this policy will generally maintain all higher-level cache sets at the lowest
cache level

Summary
Chapter 4: Cache Memory

• Cache memory principles
• Cache performance
  – Cache timing model
  – Design options for improving performance
• Elements of cache design
  – Cache addresses
  – Cache size
  – Logical cache organization
  – Replacement algorithms
  – Write policy
  – Line size
  – Number of caches
  – Inclusion policy

Copyright

This work is protected by United States copyright laws and is provided solely
for the use of instructors in teaching their courses and assessing student
learning. Dissemination or sale of any part of this work (including on the
World Wide Web) will destroy the integrity of the work and is not permitted.
The work and materials from it should never be made available to students
except by instructors using the accompanying text in their classes. All
recipients of this work are expected to abide by these restrictions and to
honor the intended pedagogical purposes and the needs of other instructors
who rely on these materials.

Copyright © 2019, 2016, 2013 Pearson Education, Inc. All Rights Reserved
