ECE4680 Computer Organization and Architecture Memory Hierarchy: Cache System



The Motivation for Caches


[Diagram: Processor <-> Cache <-> DRAM memory system.]

Motivation:
  Large memories (DRAM) are slow; small memories (SRAM) are fast.
  Make the average access time small by servicing most accesses from a small, fast memory.
  Reduce the bandwidth required of the large memory.


An Expanded View of the Memory System

[Diagram: the processor (control + datapath) backed by a chain of memories.]

Speed: fastest -> slowest.  Size: smallest -> biggest.  Cost: highest -> lowest.


Levels of the Memory Hierarchy


Level          Capacity    Access Time    Cost                      Staging Xfer Unit (managed by)
CPU Registers  100s bytes  <10s ns                                  instr. operands, 1-8 bytes (prog./compiler)
Cache          K bytes     10-100 ns      $.01-.001/bit             blocks, 8-128 bytes (cache cntl)
Main Memory    M bytes     100 ns - 1 us  $.01-.001/bit             pages, 512-4K bytes (OS)
Disk           G bytes     ms             10^-4 - 10^-3 cents/bit   files, Mbytes (user/operator)
Tape           infinite    sec-min        10^-6 cents/bit

Upper levels are faster; lower levels are larger.

The Principle of Locality


[Plot: probability of reference vs. address space; references cluster in a few small regions.]

The Principle of Locality:
  Programs access a relatively small portion of the address space at any instant of time.
  Example: 90% of execution time is spent in 10% of the code.
Two different types of locality:
  Temporal Locality (locality in time): if an item is referenced, it will tend to be referenced again soon.
  Spatial Locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon.

Memory Hierarchy: Principles of Operation


At any given time, data is copied between only 2 adjacent levels:
  Upper level (cache): the one closer to the processor; smaller, faster, and uses more expensive technology.
  Lower level (memory): the one further away from the processor; bigger, slower, and uses less expensive technology.

Block: The minimum unit of information that can either be present or not present in the two level hierarchy

[Diagram: the processor exchanges data with the upper level (cache); block Blk X sits in the cache, block Blk Y in the lower level (memory).]


Memory Hierarchy: Terminology


Hit: data appears in some block in the upper level (example: Block X).
  Hit Rate: the fraction of memory accesses found in the upper level.
  Hit Time: time to access the upper level, which consists of the RAM access time plus the time to determine hit/miss.
Miss: data needs to be retrieved from a block in the lower level (Block Y).
  Miss Rate = 1 - (Hit Rate).
  Miss Penalty = time to replace a block in the upper level + time to deliver the block to the processor.
Hit Time << Miss Penalty.

Basic Terminology: Typical Values

Typical values:
  Block (line) size:  4 - 128 bytes
  Hit time:           1 - 4 cycles
  Miss penalty:       8 - 32 cycles (and increasing)
    access time:      6 - 10 cycles
    transfer time:    2 - 22 cycles
  Miss rate:          1% - 20%
  Cache size:         1 KB - 256 KB


How Does Cache Work?


Temporal Locality (locality in time): if an item is referenced, it will tend to be referenced again soon.
  => Keep more recently accessed data items closer to the processor.
Spatial Locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon.
  => Move blocks consisting of contiguous words into the cache.

The Simplest Cache: Direct Mapped Cache


[Diagram: 16 memory locations (0 through F) mapping into a 4-byte direct-mapped cache with cache indices 0 through 3.]

Cache location 0 can be occupied by data from memory location 0, 4, 8, ... etc.
In general: any memory location whose 2 LSBs of the address are 0s maps to index 0.
  Address<1:0> => cache index
  cache index = (block address) modulo (# cache blocks)
Two questions:
  How can we tell which one is in the cache?
  Which one should we place in the cache?

Example: Consider an eight-word direct-mapped cache. Show the contents of the cache as it responds to a series of requests (decimal addresses): 22, 26, 22, 26, 16, 3, 16, 18.

Address  Binary  Result  Index
22       10110   miss    110
26       11010   miss    010
22       10110   hit     110
26       11010   hit     010
16       10000   miss    000
 3       00011   miss    011
16       10000   hit     000
18       10010   miss    010
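A minimal C sketch that replays this trace through an 8-entry direct-mapped cache (one-word blocks; index = address mod 8, tag = the remaining upper bits) and reproduces the hit/miss column above:

    #include <stdio.h>
    #include <stdbool.h>

    #define NBLOCKS 8                         /* eight one-word blocks */

    int main(void) {
        int  tag[NBLOCKS];
        bool valid[NBLOCKS] = { false };
        int  addrs[] = { 22, 26, 22, 26, 16, 3, 16, 18 };
        for (int i = 0; i < 8; i++) {
            int a = addrs[i];
            int index = a % NBLOCKS;          /* low 3 bits of the word address */
            int t     = a / NBLOCKS;          /* remaining upper bits = tag */
            if (valid[index] && tag[index] == t) {
                printf("%2d: hit  (index %d)\n", a, index);
            } else {
                printf("%2d: miss (index %d)\n", a, index);
                valid[index] = true;          /* allocate the block on a miss */
                tag[index]   = t;
            }
        }
        return 0;
    }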


Cache Organization: Cache Tag and Cache Index


Assume a 32-bit memory (byte) address and a 2^N-byte direct-mapped cache:
  Cache Index: the lower N bits of the memory address.
  Cache Tag: the upper (32 - N) bits of the memory address.
A Valid bit and the Cache Tag are stored as part of the cache state.

[Diagram: address split into Cache Tag <31:N> (example: 0x50) and Cache Index <N-1:0> (example: 0x03); a 2^N-byte direct-mapped cache holding Byte 0 through Byte 2^N - 1, each entry with its Valid bit and tag.]

Cache Access Example


Start up: all Valid bits are 0, so every entry is invalid.
  Access 000 01: miss. Miss handling: load the data, write the tag (000), and set the Valid bit. The cache now holds M[00001].
  Access 010 10: miss. Load the data, write the tag (010), and set the Valid bit. The cache now holds M[00001] and M[01010].
  Access 000 01: HIT.
  Access 010 10: HIT.
Sad fact of life: a lot of misses at start up. These are Compulsory Misses (cold-start misses).



Definition of a Cache Block


Cache Block: the unit of cache data that has its own cache tag.
Our previous extreme example (the 4-byte direct-mapped cache above): Block Size = 1 byte.
  It takes advantage of Temporal Locality: if a byte is referenced, it will tend to be referenced again soon.
  It does not take advantage of Spatial Locality: if a byte is referenced, its adjacent bytes will be referenced soon.
To take advantage of Spatial Locality: increase the block size.

[Diagram: one direct-mapped cache entry: Valid, Cache Tag, Cache Data (Byte 0 through Byte 3).]

Question: can we say that for block size, the larger the better?

Example: 1 KB Direct Mapped Cache with 32 B Blocks


For a 2^N-byte cache:
  The uppermost (32 - N) bits are always the Cache Tag.
  The lowest M bits are the Byte Select (Block Size = 2^M).
Here N = 10 and M = 5: Cache Tag <31:10> (example: 0x50), Cache Index <9:5> (example: 0x01), Byte Select <4:0> (example: 0x00).

[Diagram: 32 cache entries (0 through 31), each with a Valid bit (stored as part of the cache state), a Cache Tag (0x50), and a 32-byte block; the data array spans Byte 0 through Byte 1023.]
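A small C sketch that pulls the three fields out of an address for this 1 KB, 32 B-block cache. The field widths come from the slide; the assembled example address is made up from the slide's example field values:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 1 KB direct-mapped cache, 32 B blocks:
           byte select = 5 bits, cache index = 5 bits, tag = 22 bits */
        uint32_t addr  = (0x50u << 10) | (0x01u << 5) | 0x00u;
        uint32_t bsel  = addr & 0x1Fu;           /* bits <4:0>  */
        uint32_t index = (addr >> 5) & 0x1Fu;    /* bits <9:5>  */
        uint32_t tag   = addr >> 10;             /* bits <31:10> */
        printf("addr=0x%08x  tag=0x%x  index=0x%x  byte=0x%x\n",
               addr, tag, index, bsel);
        return 0;
    }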

Mapping an Address to a Multiword Cache Block


Consider a cache with 64 blocks and a block size of 16 bytes. What block number does byte address 1200 map to?
  Block address = floor(1200 / 16) = 75; cache index = 75 modulo 64 = 11.
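The same computation as a tiny C check (the variable names are ours; the numbers are the slide's):

    #include <stdio.h>

    int main(void) {
        int byte_addr = 1200, block_size = 16, nblocks = 64;
        int block_addr  = byte_addr / block_size;   /* 75 */
        int cache_index = block_addr % nblocks;     /* 11 */
        printf("byte %d -> block %d -> cache index %d\n",
               byte_addr, block_addr, cache_index);
        return 0;
    }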


Bits in a cache: How many total bits are required for a direct-mapped cache with 64 KB of data and one-word blocks, assuming a 32-bit address?

64 KB of data in one-word (4-byte) blocks means 2^14 blocks. Each block holds 32 data bits plus a (32 - 14 - 2)-bit tag and 1 valid bit:
  2^14 x (32 + (32 - 14 - 2) + 1) = 2^14 x 49 = 784 Kbits

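The same bookkeeping in C, in case you want to vary the parameters (variable names are ours):

    #include <stdio.h>

    int main(void) {
        int  index_bits  = 14;                  /* 64 KB / 4 B blocks = 2^14 blocks */
        int  offset_bits = 2;                   /* one 4-byte word per block */
        int  data_bits   = 32;
        int  tag_bits    = 32 - index_bits - offset_bits;      /* 16 */
        long per_entry   = data_bits + tag_bits + 1;           /* +1 valid = 49 */
        long total       = (1L << index_bits) * per_entry;     /* 802816 bits */
        printf("%ld bits = %ld Kbits\n", total, total / 1024); /* 784 Kbits */
        return 0;
    }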


Block Size Tradeoff


In general, a larger block size takes advantage of spatial locality, BUT:
  A larger block size means a larger miss penalty: it takes longer to fill the block.
  If the block size is too big relative to the cache size, the miss rate will go up: fewer blocks compromises temporal locality.
Average Access Time = Hit Time x (1 - Miss Rate) + Miss Penalty x Miss Rate

[Plots vs. block size: miss penalty grows steadily; miss rate first falls (exploiting spatial locality) and then rises (fewer blocks); average access time therefore dips and then climbs under the increased miss penalty and miss rate.]
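A short C sketch of the tradeoff using the slide's average-access-time formula. The miss rates and penalties are invented placeholders, only meant to show the dip-then-climb shape:

    #include <stdio.h>

    int main(void) {
        double hit_time = 1.0;                                /* cycles */
        int    sizes[]        = { 4,    8,    16,   32   };   /* bytes */
        double miss_rate[]    = { 0.10, 0.05, 0.04, 0.06 };   /* assumed */
        double miss_penalty[] = { 10,   14,   22,   38   };   /* assumed */
        for (int i = 0; i < 4; i++) {
            double amat = hit_time * (1 - miss_rate[i])
                        + miss_penalty[i] * miss_rate[i];
            printf("block %2d B: average access time = %.2f cycles\n",
                   sizes[i], amat);
        }
        return 0;
    }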


Another Extreme Example


[Diagram: a single cache entry: Valid bit, Cache Tag, Cache Data (Byte 3 down to Byte 0).]

Cache Size = 4 bytes, Block Size = 4 bytes: only ONE entry in the cache.
True: if an item is accessed, it is likely to be accessed again soon.
  But it is unlikely that it will be accessed again immediately!!!
  The next access will likely be a miss again: we continually load data into the cache but discard (force out) it before it is used again.
  Worst nightmare of a cache designer: the Ping-Pong Effect.
Conflict Misses are misses caused by different memory locations mapping to the same cache index.
  Solution 1: make the cache size bigger.
  Solution 2: allow multiple entries for the same cache index.



How Do you Design a Cache?


Set of operations that must be supported:
  read:  data <= Mem[Physical Address]
  write: Mem[Physical Address] <= Data

[Diagram: the cache as a black box with Physical Address, Read/Write, and Data ports; inside it has tag-data storage, muxes, comparators, ...]

Design steps:
  Determine the internal register transfers.
  Design the datapath (Address, Data In, Data Out).
  Design the cache controller (control points and signals: R/W, Active, wait).

Impact on Cycle Time


Cache Hit Time:
  directly tied to clock rate
  increases with cache size
  increases with associativity

[Diagram: pipeline registers PC -> IR -> IRex -> IRm -> IRwb, with I-cache misses stalling fetch and D-cache misses stalling the memory stage.]

Average Memory Access Time = Hit Time + Miss Rate x Miss Penalty
Execution Time = IC x CT x (ideal CPI + memory stalls per instruction)


Improving Cache Performance: 3 general options


1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

Memory stall cycles = read stall cycles + write stall cycles
  Read stall cycles  = #reads  x read miss rate  x read miss penalty
  Write stall cycles = #writes x write miss rate x write miss penalty
With a single miss rate and miss penalty for both:
  Memory stall cycles = #accesses x miss rate x miss penalty


Examples:
Q1: Assume an instruction cache miss rate for gcc of 2% and a data cache miss rate of 4%. If a machine has a CPI of 2 without any memory stalls and the miss penalty is 40 cycles for all misses, determine how much faster a machine would run with a perfect cache that never missed. (It is known that the frequency of all loads and stores in gcc is 36%.)
Answer:
  Instruction miss cycles = I x 2% x 40 = 0.80 I
  Data miss cycles = I x 36% x 4% x 40 = 0.576 I
  The CPI with memory stalls is 2 + 0.80 + 0.576 = 3.376
  The performance with the perfect cache is better by 3.376 / 2 = 1.69
Q2: Suppose we speed up the machine by reducing the CPI from 2 to 1; how does the speedup from a perfect cache change?
Q3: What if we double the clock rate (the miss penalty, measured in cycles, doubles)?
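A C check of the Q1 arithmetic, easy to re-run for Q2 and Q3 by changing base_cpi or miss_penalty:

    #include <stdio.h>

    int main(void) {
        double base_cpi     = 2.0;
        double miss_penalty = 40.0;   /* cycles; doubles if the clock rate doubles (Q3) */
        double i_miss_rate  = 0.02;
        double d_miss_rate  = 0.04;
        double mem_frac     = 0.36;   /* loads + stores per instruction */

        double i_stalls = i_miss_rate * miss_penalty;            /* 0.80  */
        double d_stalls = mem_frac * d_miss_rate * miss_penalty; /* 0.576 */
        double cpi      = base_cpi + i_stalls + d_stalls;        /* 3.376 */
        printf("CPI with stalls = %.3f, perfect cache is %.2fx faster\n",
               cpi, cpi / base_cpi);
        return 0;
    }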


And yet Another Extreme Example: Fully Associative


Fully Associative Cache:
  Forget about the Cache Index.
  Compare the Cache Tags of all cache entries in parallel.
  Example: with 32 B blocks, we need N 27-bit comparators for N entries.
By definition: Conflict Misses = 0 for a fully associative cache.

[Diagram: address = Cache Tag <31:5> (27 bits) + Byte Select <4:0> (example: 0x01); every entry's Valid bit and Cache Tag are matched (X) against the address tag simultaneously.]

A Two-way Set Associative Cache


N-way set associative: N entries for each cache index, i.e., N direct-mapped caches operating in parallel.
Example: two-way set associative cache:
  The Cache Index selects a set from the cache.
  The two tags in the set are compared in parallel.
  Data is selected based on the tag comparison result.

[Diagram: two banks of (Valid, Cache Tag, Cache Data / Cache Block 0); the address tag (Adr Tag) is compared against both stored tags in parallel; the compare outputs Sel1/Sel0 drive a mux that picks the cache block, and are ORed to produce Hit.]

Disadvantage of Set Associative Cache


N-way set associative cache versus direct-mapped cache:
  N comparators vs. 1.
  Extra MUX delay for the data.
  Data comes AFTER Hit/Miss is known.
In a direct-mapped cache, the cache block is available BEFORE Hit/Miss:
  It is possible to assume a hit and continue; recover later if it was a miss.

Example: There are three small caches, each consisting of four one-word blocks. One cache is fully associative, a second is two-way set associative, and the third is direct mapped. Find the number of misses for each cache organization given the following sequence of block addresses: 0, 8, 0, 6, 8
Block address  Binary  F.A.   2-way  D.M.
0              0000    miss   miss   miss
8              1000    miss   miss   miss
0              0000    hit    hit    miss
6              0110    miss   miss   miss
8              1000    hit    miss   miss
Total misses           3      4      5

(For the two-way cache with LRU replacement, the final access to 8 misses because 8 was evicted when 6 arrived.)
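A compact C simulator for the three organizations (four one-word blocks; LRU where there is a choice). It is a sketch written for this example, not a general tool; running it prints the miss counts in the table above:

    #include <stdio.h>

    #define NB 4                                   /* four blocks total */

    /* LRU lookup within one set of `ways` lines; age < 0 means invalid. */
    static int access_set(int *tags, int *age, int ways, int tag, int now) {
        for (int w = 0; w < ways; w++)
            if (age[w] >= 0 && tags[w] == tag) { age[w] = now; return 1; }
        int victim = 0;                            /* oldest (or invalid) way */
        for (int w = 1; w < ways; w++)
            if (age[w] < age[victim]) victim = w;
        tags[victim] = tag;
        age[victim]  = now;
        return 0;                                  /* miss */
    }

    static int run(int ways, const int *trace, int n) {
        int sets = NB / ways;
        int tags[NB], age[NB];
        for (int i = 0; i < NB; i++) age[i] = -1;  /* start empty */
        int misses = 0;
        for (int t = 0; t < n; t++) {
            int set = trace[t] % sets;             /* block address modulo #sets */
            int tag = trace[t] / sets;
            if (!access_set(&tags[set * ways], &age[set * ways], ways, tag, t))
                misses++;
        }
        return misses;
    }

    int main(void) {
        int trace[] = { 0, 8, 0, 6, 8 };
        printf("direct mapped    : %d misses\n", run(1, trace, 5)); /* 5 */
        printf("2-way set assoc. : %d misses\n", run(2, trace, 5)); /* 4 */
        printf("fully associative: %d misses\n", run(4, trace, 5)); /* 3 */
        return 0;
    }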


A Summary on Sources of Cache Misses


Compulsory (cold start, first reference): the first access to a block.
  Cold fact of life: not a whole lot you can do about it.
Conflict (collision): multiple memory locations mapped to the same cache location.
  Solution 1: increase the cache size.
  Solution 2: increase associativity.
Capacity: the cache cannot contain all the blocks accessed by the program.
  Solution: increase the cache size.
Invalidation: another process (e.g., I/O, other processors) updates memory.


Source of Cache Misses Quiz

                     Direct Mapped   N-way Set Associative   Fully Associative
Cache Size
Compulsory Miss
Conflict Miss
Capacity Miss
Invalidation Miss

Sources of Cache Misses Answer

                     Direct Mapped           N-way Set Associative   Fully Associative
Cache Size           Big                     Medium                  Small
Compulsory Miss      High (but who cares!)   Medium                  Low
Conflict Miss        High                    Medium                  Zero
Capacity Miss        Low                     Medium                  High
Invalidation Miss    Same                    Same                    Same

Note: If you are going to run billions of instructions, Compulsory Misses are insignificant.


The Need to Make a Decision!


Direct Mapped Cache:
  Each memory location can be mapped to only 1 cache location.
  No need to make any decision :-) The current item replaces the previous item in that cache location.
N-way Set Associative Cache:
  Each memory location has a choice of N cache locations.
Fully Associative Cache:
  Each memory location can be placed in ANY cache location.
On a cache miss in an N-way set associative or fully associative cache:
  Bring in the new block from memory.
  Throw out a cache block to make room for the new block.
  ====> We need to make a decision on which block to throw out!


Cache Block Replacement Policy


Random Replacement:
  Hardware randomly selects a cache entry and throws it out.
Least Recently Used:
  Hardware keeps track of the access history.
  Replace the entry that has not been used for the longest time.
Example of a simple pseudo least-recently-used implementation (sketched in code below):
  Assume 64 fully associative entries.
  A hardware replacement pointer points to one cache entry.
  Whenever an access is made to the entry the pointer points to: move the pointer to the next entry.
  Otherwise: do not move the pointer.

[Diagram: replacement pointer aimed into a column of entries, Entry 0 through Entry 63.]
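A C sketch of this pointer scheme. The slide does not say what happens to the pointer on a replacement; we assume it also advances past the entry it just replaced:

    #include <stdio.h>

    #define ENTRIES 64                       /* fully associative entries */

    static int ptr = 0;                      /* hardware replacement pointer */

    /* Called on every hit to entry e. */
    static void on_access(int e) {
        if (e == ptr)                        /* touched the current victim candidate */
            ptr = (ptr + 1) % ENTRIES;       /* it is clearly not LRU: move on */
    }

    /* Called on a miss: the victim is whatever the pointer names. */
    static int pick_victim(void) {
        int v = ptr;
        ptr = (ptr + 1) % ENTRIES;           /* assumed: step past the fresh entry */
        return v;
    }

    int main(void) {
        on_access(0);                        /* pointer moves off entry 0 */
        printf("victim on next miss: entry %d\n", pick_victim()); /* entry 1 */
        return 0;
    }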

Cache Write Policy: Write Through versus Write Back


Cache reads are much easier to handle than cache writes:
  An instruction cache is much easier to design than a data cache.
Cache write: how do we keep data in the cache and memory consistent?
Two options (decision time again :-):
  Write Back: write to the cache only. Write the cache block to memory when that cache block is being replaced on a cache miss.
    Need a dirty bit for each cache block.
    Greatly reduces the memory bandwidth requirement.
    Control can be complex.
  Write Through: write to the cache and memory at the same time.
    What!!! How can this be? Isn't memory too slow for this?


Write Buffer for Write Through


[Diagram: the processor writes into the cache and into a write buffer; the write buffer drains to DRAM.]

A write buffer is needed between the cache and memory:
  Processor: writes data into the cache and the write buffer.
  Memory controller: writes the contents of the buffer to memory.
The write buffer is just a FIFO:
  Typical number of entries: 4.
  Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle.
Memory system designer's nightmare:
  Store frequency (w.r.t. time) > 1 / DRAM write cycle.
  Write buffer saturation.
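A minimal C sketch of such a 4-entry FIFO write buffer, with the two sides named after the roles above (a real buffer would also coalesce writes and be snooped on reads):

    #include <stdio.h>
    #include <stdbool.h>

    #define DEPTH 4                                   /* typical number of entries */

    typedef struct { unsigned addr, data; } Entry;
    static Entry buf[DEPTH];
    static int head = 0, count = 0;

    /* Processor side: returns false when the buffer is saturated (CPU must stall). */
    static bool wb_push(unsigned addr, unsigned data) {
        if (count == DEPTH) return false;
        buf[(head + count) % DEPTH] = (Entry){ addr, data };
        count++;
        return true;
    }

    /* Memory-controller side: drain one entry per DRAM write cycle. */
    static bool wb_drain(Entry *out) {
        if (count == 0) return false;
        *out = buf[head];
        head = (head + 1) % DEPTH;
        count--;
        return true;
    }

    int main(void) {
        wb_push(0x100, 42);
        wb_push(0x104, 43);
        Entry e;
        while (wb_drain(&e))
            printf("memory write: 0x%x <= %u\n", e.addr, e.data);
        return 0;
    }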


Write Buffer Saturation


Store frequency (w.r.t. time) > 1 / DRAM write cycle:
  If this condition exists for a long period of time (CPU cycle time too short and/or too many store instructions in a row), the store buffer will overflow no matter how big you make it, because the CPU cycle time is <= the DRAM write cycle time.
Solutions for write buffer saturation:
  Use a write-back cache.
  Install a second-level (L2) cache between the write buffer and memory.

[Diagram: Processor -> Cache -> Write Buffer -> L2 Cache -> DRAM.]

Write Allocate versus Not Allocate


Assume a 16-bit write to memory location 0x0 that causes a cache miss.
Do we read in the rest of the block (bytes 2, 3, ..., 31)?
  Yes: Write Allocate.
  No: Write Not Allocate.

[Diagram: the 1 KB, 32 B-block cache again: Cache Tag <31:10> (example: 0x00), Cache Index <9:5> (example: 0x00), Byte Select <4:0> (example: 0x00); entry 0 holds tag 0x00 and Byte 31 ... Byte 0.]

Recap: Improving Cache Performance


1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

Memory stall cycles = read stall cycles + write stall cycles
  Read stall cycles  = #reads  x read miss rate  x read miss penalty
  Write stall cycles = #writes x write miss rate x write miss penalty
With a single miss rate and miss penalty for both:
  Memory stall cycles = #accesses x miss rate x miss penalty


Reducing Memory Transfer Time


[Diagram: three memory organizations: (1) CPU - cache - bus - memory; (2) CPU - cache - bus - wide memory behind a mux; (3) CPU - cache - bus - interleaved memory banks.]

Solution 1: high-bandwidth DRAM. Examples: Page Mode DRAM, SDRAM, CDRAM, RAMbus.
Solution 2: wide path between memory and cache.
Solution 3: memory interleaving.



Cycle Time versus Access Time


[Timing diagram: within each cycle, the access time is the first portion of the longer cycle time.]

DRAM (read/write) cycle time >> DRAM (read/write) access time, commonly about 2:1. Why?
  DRAM (read/write) cycle time: how frequently can you initiate an access?
    Analogy: a little kid can only ask his father for money on Saturday.
  DRAM (read/write) access time: how quickly will you get what you want once you initiate an access?
    Analogy: as soon as he asks, his father will give him the money.
  DRAM bandwidth limitation analogy: what happens if he runs out of money on Wednesday?


Increasing Bandwidth - Interleaving


Access pattern without interleaving:
  The CPU starts the access for D1, waits until D1 is available, and only then starts the access for D2.
Access pattern with 4-way interleaving:
  The CPU accesses Bank 0, then Bank 1, Bank 2, and Bank 3 on successive cycles; by the time Bank 3 has been started, Bank 0 is ready to be accessed again.

Main Memory Performance


Timing model: 1 cycle to send the address, 6 cycles of access time, 1 cycle to send a data word.
Cache block is 4 words.
  Simple memory:      miss penalty = 4 x (1 + 6 + 1) = 32 cycles
  Wide memory:        miss penalty = 1 + 6 + 1 = 8 cycles
  Interleaved memory: miss penalty = 1 + 6 + 4 x 1 = 11 cycles
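The three miss penalties as a C check (the interleaved case overlaps the four 6-cycle accesses and pays only the four 1-cycle transfers):

    #include <stdio.h>

    int main(void) {
        int send = 1, access = 6, xfer = 1, words = 4;
        printf("simple      : %2d cycles\n", words * (send + access + xfer)); /* 32 */
        printf("wide        : %2d cycles\n", send + access + xfer);           /*  8 */
        printf("interleaved : %2d cycles\n", send + access + words * xfer);   /* 11 */
        return 0;
    }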


Independent Memory Banks

How many banks?
  number of banks >= number of clocks to access a word in a bank
  (needed for sequential accesses; otherwise we return to the original bank before it has the next word ready)
Increasing DRAM density => fewer chips => harder to have many banks.
  Growth of bits/chip for DRAM: 50%-60%/yr.
  Nathan Myhrvold, Microsoft: mature software grows 33%/yr (for NT), while MB/$ of DRAM grows 25%-30%/yr.


SPARCstation 20s Memory System


[Diagram: a Memory Controller connects the 128-bit-wide Memory Bus (SIMM Bus), carrying Memory Modules 0 through 7, to the 64-bit-wide Processor Bus (Mbus); the Processor Module (Mbus Module) holds the SuperSPARC processor with its register file, instruction cache, data cache, and an external cache.]

SPARCstation 20s External Cache


SPARCstation 20's external cache:
  Size and organization: 1 MB, direct mapped
  Block size: 128 B
  Sub-block size: 32 B
  Write policy: write back, write allocate


SPARCstation 20s Internal Instruction Cache


SPARCstation 20's internal instruction cache:
  Size and organization: 20 KB, 5-way set associative
  Block size: 64 B
  Sub-block size: 32 B
  Write policy: does not apply
  Note: the sub-block size is the same as the external (L2) cache's.


SPARCstation 20s Internal Data Cache


SPARCstation 20's internal data cache:
  Size and organization: 16 KB, 4-way set associative
  Block size: 64 B
  Sub-block size: 32 B
  Write policy: write through, write not allocate (WT, WNA)
  Note: the sub-block size is the same as the external (L2) cache's.


Two Interesting Questions?


Why did they use an N-way set associative cache internally?
  Answer: an N-way set associative cache is like having N direct-mapped caches in parallel. They want each of those N direct-mapped caches to be 4 KB, the same as the virtual page size.
How many levels of cache does the SPARCstation 20 have?
  Answer: three levels. (1) Internal I & D caches, (2) external cache, and (3) ...


SPARCstation 20s Memory Module


Supports a wide range of sizes:
  Smallest, 4 MB: 16 2-Mbit DRAM chips, 8 KB of page-mode SRAM.
  Biggest, 64 MB: 32 16-Mbit chips, 16 KB of page-mode SRAM.

[Diagram: DRAM chips 0 through 15, each 512 rows x 512 columns, 256K x 8 = 2 MB; each chip feeds a 512 x 8 page-mode SRAM (bits<7:0> per chip, bits<127:0> in total) driving Memory Bus<127:0>.]


DRAM Performance
A 60 ns (tRAC) DRAM can:
  perform a row access only every 110 ns (tRC);
  perform a column access (tCAC) in 15 ns, but the time between column accesses is at least 35 ns (tPC);
    in practice, external address delays and turning around buses make it 40 to 50 ns.
These times do not include the time to drive the addresses off the microprocessor, nor the memory controller overhead.
  Counting the parallel DRAM drive, the external memory controller, bus turnaround, the SIMM module, and pins, 180 ns to 250 ns of latency from processor to memory is good for a 60 ns (tRAC) DRAM.


Summary:
The Principle of Locality: temporal locality vs. spatial locality.
Four questions for any cache:
  Where to place a block in the cache.
  How to locate a block in the cache.
  Replacement policy.
  Write policy: write through vs. write back; on a write miss: write allocate vs. write not allocate.
Three major categories of cache misses:
  Compulsory misses: sad facts of life. Example: cold-start misses.
  Conflict misses: increase cache size and/or associativity. Nightmare scenario: the ping-pong effect!
  Capacity misses: increase cache size.
