Cache Memories, Cache Complexity

Plan

Hierarchical memories and their impact on our programs

Cache Analysis in Practice

The Ideal-Cache Model

Cache Complexity of some Basic Operations

Matrix Transposition

A Cache-Oblivious Matrix Multiplication Algorithm


[Figure: the memory hierarchy, organized by capacity, access time, cost and staging transfer unit; disk storage costs roughly $1/GByte.]
CPU Cache (1/3)

► A CPU cache is an auxiliary memory that is smaller and faster than the main memory, and that stores copies of the main-memory locations that are expected to be frequently used.
► Most modern desktop and server CPUs have at least three
independent caches: the data cache, the instruction cache and
the translation look-aside buffer.
CPU Cache (2/3)

[Figure: the main memory and a cache; each cache location holds an index and a datum.]

► Each location in each memory (main or cache) has
► a datum (cache line), which ranges between 8 and 512 bytes in size, while a datum requested by a CPU instruction ranges between 1 and 16 bytes,
► a unique index (called address in the case of the main memory)
► In the cache, each location has also a tag (storing the address
of the corresponding cached datum).

CPU Cache (3/3)


► When the CPU needs to read or write a location, it checks the cache:
► if it finds it there, we have a cache hit
► if not, we have a cache miss and (in most cases) the processor
needs to create a new entry in the cache.
► Making room for a new entry requires a replacement policy: Least Recently Used (LRU) discards the least recently used items first; this requires the use of age bits.
CPU Cache (4/7)


► Read latency (the time to read a datum from the main memory) requires keeping the CPU busy with something else:
out-of-order execution: attempt to execute independent
instructions arising after the instruction that is
waiting due to the cache miss
hyper-threading (HT): allows an alternate thread to use the
CPU

CPU Cache (5/7)


► Modifying data in the cache requires a write policy for updating the main memory:
- write-through cache: writes are immediately mirrored to
main memory
- write-back cache: the main memory is mirrored when that
data is evicted from the cache
► The cache copy may become out-of-date or stale, if other
processors modify the original entry in the main memory.
CPU Cache (6/7)

[Figure: cache fills in a direct-mapped cache vs. a 2-way associative cache. Each location in main memory can be cached by just one cache location (direct mapped) or by one of two cache locations (2-way associative).]

► The replacement policy decides where in the cache a copy of a particular entry of main memory will go:
- fully associative: any entry in the cache can hold it
- direct mapped: only one possible entry in the cache can hold it
- N-way set associative: N possible entries can hold it

► Cache Performance for SPEC CPU2000 by J.F. Cantin and M.D. Hill.
► The SPEC CPU2000 suite is a collection of 26 compute-intensive,
non-trivial programs used to evaluate the performance of a
computer’s CPU, memory system, and compilers
(http://www.spec.org/osg/cpu2000).
Cache issues

► Cold miss: occurs on the first access to the data. Cure: prefetching may be able to reduce this type of cost.
► Capacity miss: The previous access has been evicted because
too much data touched in between, since the working data set
is too large. Cure: Reorganize the data access such that reuse
occurs before eviction.
► Conflict miss: Multiple data items mapped to the same
location with eviction before cache is full. Cure: Rearrange
data and/or pad arrays.
► True sharing miss: Occurs when a thread in another
processor wants the same data. Cure: Minimize sharing.
► False sharing miss: Occurs when another processor uses
different data in the same cache line. Cure: Pad data.
A typical matrix multiplication C code
#define IND(A, x, y, d) A[(x)*(d)+(y)]
uint64_t testMM(const int x, const int y, const int z)
{
  double *A; double *B; double *C;
  long started, ended;
  float timeTaken;
  int i, j, k;
  srand(getSeed());
  A = (double *)malloc(sizeof(double)*x*y);
  B = (double *)malloc(sizeof(double)*x*z);
  C = (double *)malloc(sizeof(double)*y*z);
  for (i = 0; i < x*z; i++) B[i] = (double) rand();
  for (i = 0; i < y*z; i++) C[i] = (double) rand();
  for (i = 0; i < x*y; i++) A[i] = 0;
  started = example_get_time();
  for (i = 0; i < x; i++)
    for (j = 0; j < y; j++)
      for (k = 0; k < z; k++)
        // A[i][j] += B[i][k] * C[k][j];
        IND(A,i,j,y) += IND(B,i,k,z) * IND(C,k,j,z);
  ended = example_get_time();
  timeTaken = (ended - started)/1.f;
  return timeTaken;
}

Issues with matrix representation

► Contiguous accesses are better:
► data is fetched by cache line (Core 2 Duo: 64 bytes per cache line),
► with contiguous data, a single cache fetch supports 8 reads of doubles.
► Transposing the matrix C should reduce L1 cache misses!
Transposing for optimizing spatial locality
float testMM(const int x, const int y, const int z)
{
  double *A; double *B; double *C; double *Cx;
  long started, ended; float timeTaken; int i, j, k;
  A = (double *)malloc(sizeof(double)*x*y);
  B = (double *)malloc(sizeof(double)*x*z);
  C = (double *)malloc(sizeof(double)*y*z);
  Cx = (double *)malloc(sizeof(double)*y*z);
  srand(getSeed());
  for (i = 0; i < x*z; i++) B[i] = (double) rand();
  for (i = 0; i < y*z; i++) C[i] = (double) rand();
  for (i = 0; i < x*y; i++) A[i] = 0;
  started = example_get_time();
  for (j = 0; j < y; j++)
    for (k = 0; k < z; k++)
      IND(Cx,j,k,z) = IND(C,k,j,y);
  for (i = 0; i < x; i++)
    for (j = 0; j < y; j++)
      for (k = 0; k < z; k++)
        IND(A,i,j,y) += IND(B,i,k,z) * IND(Cx,j,k,z);
  ended = example_get_time();
  timeTaken = (ended - started)/1.f;
  return timeTaken;
}

Issues with data reuse

[Figure: A (1024 x 1024) = B (1024 x 384) x C (384 x 1024).]

► Naive calculation of a row of A, so computing 1024 coefficients: 1024 accesses in A, 384 in B and 1024 x 384 = 393,216 in C. Total = 394,624.
► Computing a 32 x 32 block of A, so computing again 1024 coefficients: 1024 accesses in A, 384 x 32 in B and 32 x 384 in C. Total = 25,600.
► The iteration space is traversed so as to reduce memory
accesses.
Blocking for optimizing temporal locality

float testMM(const int x, const int y, const int z)
{
  double *A; double *B; double *C;
  long started, ended; float timeTaken; int i, j, k, i0, j0, k0;
  A = (double *)malloc(sizeof(double)*x*y);
  B = (double *)malloc(sizeof(double)*x*z);
  C = (double *)malloc(sizeof(double)*y*z);
  srand(getSeed());
  for (i = 0; i < x*z; i++) B[i] = (double) rand();
  for (i = 0; i < y*z; i++) C[i] = (double) rand();
  for (i = 0; i < x*y; i++) A[i] = 0;
  started = example_get_time();
  for (i = 0; i < x; i += BLOCK_X)
    for (j = 0; j < y; j += BLOCK_Y)
      for (k = 0; k < z; k += BLOCK_Z)
        for (i0 = i; i0 < min(i + BLOCK_X, x); i0++)
          for (j0 = j; j0 < min(j + BLOCK_Y, y); j0++)
            for (k0 = k; k0 < min(k + BLOCK_Z, z); k0++)
              IND(A,i0,j0,y) += IND(B,i0,k0,z) * IND(C,k0,j0,y);
  ended = example_get_time();
  timeTaken = (ended - started)/1.f;
  return timeTaken;
}

Transposing and blocking for optimizing data locality

float testMM(const int x, const int y, const int z)
{
  double *A; double *B; double *C;
  long started, ended; float timeTaken; int i, j, k, i0, j0, k0;
  A = (double *)malloc(sizeof(double)*x*y);
  B = (double *)malloc(sizeof(double)*x*z);
  C = (double *)malloc(sizeof(double)*y*z);
  srand(getSeed());
  for (i = 0; i < x*z; i++) B[i] = (double) rand();
  for (i = 0; i < y*z; i++) C[i] = (double) rand();
  for (i = 0; i < x*y; i++) A[i] = 0;
  started = example_get_time();
  for (i = 0; i < x; i += BLOCK_X)
    for (j = 0; j < y; j += BLOCK_Y)
      for (k = 0; k < z; k += BLOCK_Z)
        for (i0 = i; i0 < min(i + BLOCK_X, x); i0++)
          for (j0 = j; j0 < min(j + BLOCK_Y, y); j0++)
            for (k0 = k; k0 < min(k + BLOCK_Z, z); k0++)
              IND(A,i0,j0,y) += IND(B,i0,k0,z) * IND(C,j0,k0,z);
  ended = example_get_time();
  timeTaken = (ended - started)/1.f;
  return timeTaken;
}

Experimental results

Computing the product of two n x n matrices on my laptop (Core 2 Duo CPU P8600 @ 2.40GHz, L2 cache of 3072 KB, 4 GBytes of RAM):

n        naive    transposed  speedup  64x64-tiled  speedup  t. & t.  speedup
128          7            3                      7                  2
256         26           43                    155                 23
512       1805          265      6.81          1928    0.936      187     9.65
1024     24723         3730      6.62         14020     1.76     1490    16.59
2048    271446        29767      9.11        112298     2.41    11960    22.69
4096   2344594       238453      9.83       1009445     2.32   101264    23.15

Timings are in milliseconds.

The cache-oblivious multiplication (more on this later) runs in 12978 and 106758 milliseconds for n = 2048 and n = 4096, respectively.
Other performance counters
Hardware counter events:
► CPI, Clock cycles Per Instruction: the number of clock cycles taken while an instruction is being executed. With pipelining we can improve the CPI by exploiting instruction-level parallelism.
► L1 and L2 cache miss rates.
► Instructions Retired: In the event of a misprediction,
instructions that were scheduled to execute along the
mispredicted path must be canceled.

             CPI    L1 Miss Rate  L2 Miss Rate  Percent SSE Instructions  Instructions Retired
Naive        4.78   0.24          0.02          43%                       13,137,280,000
Transposed   1.13   0.15          0.02          50%                       13,001,486,336
Tiled        0.49   0.02          0             39%                       18,044,811,264

Analyzing cache misses in the naive and transposed
multiplication

► Let A, B and C have format (m, n), (m, p) and (p, n), respectively.
► A is scanned once, so mn/L cache misses, if L is the number of coefficients per cache line.
► B is scanned n times, so mnp/L cache misses, if the cache cannot hold a row.
► C is accessed "nearly randomly" (for m large enough), leading to mnp cache misses.
► Since 2mnp arithmetic operations are performed, this means roughly one cache miss per flop!
Analyzing cache misses in the tiled multiplication
[Figure: A (1024 x 1024) = B (1024 x 384) x C (384 x 1024).]

► Let A, B and C be all square of order n.
► Assume all tiles are square of order b and that three tiles fit in cache.
► If C is transposed, then loading three blocks into cache costs 3b²/L.
► This process happens n³/b³ times, leading to 3n³/(bL) cache misses.
► Three blocks fit in cache for 3b² ≤ Z, if Z is the cache size.
► So O(n³/(L√Z)) cache misses, if b is well chosen, which is optimal.
Counting sort: the algorithm
► Counting sort takes as input a collection of n items, each of which is identified by a key in the range 0 ... k.
► The algorithm computes a histogram of the number of times each key occurs.
► It then performs a prefix sum to compute positions in the output.

allocate an array Count[0..k]; initialize each array cell to zero
for each input item x:
    Count[key(x)] = Count[key(x)] + 1
total = 0
for i = 0, 1, ... k:
    c = Count[i]
    Count[i] = total
    total = total + c
allocate an output array Output[0..n-1]
for each input item x:
    store x in Output[Count[key(x)]]
    Count[key(x)] = Count[key(x)] + 1
return Output
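
For concreteness, here is a minimal runnable C version of the pseudocode above; it assumes the items are plain ints with key(x) = x, and the function name counting_sort is illustrative, not from the slides.

#include <stdlib.h>

/* Sorts input[0..n-1], whose values lie in [0, k]; returns a new array. */
int *counting_sort(const int *input, int n, int k)
{
  int *count = (int *)calloc(k + 1, sizeof(int)); /* Count[0..k], zeroed */
  int *output = (int *)malloc(n * sizeof(int));   /* Output[0..n-1] */
  int total = 0;
  for (int i = 0; i < n; i++)                     /* histogram */
    count[input[i]]++;
  for (int i = 0; i <= k; i++) {                  /* prefix sum: count[i] = #keys < i */
    int c = count[i];
    count[i] = total;
    total += c;
  }
  for (int i = 0; i < n; i++)                     /* place each item at its position */
    output[count[input[i]]++] = input[i];
  free(count);
  return output;
}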
Counting sort: cache complexity analysis
allocate an array Count[0..k]; initialize each array cell to zero
for each input item x:
    Count[key(x)] = Count[key(x)] + 1
total = 0
for i = 0, 1, ... k:
    c = Count[i]
    Count[i] = total
    total = total + c
allocate an output array Output[0..n-1]
for each input item x:
    store x in Output[Count[key(x)]]
    Count[key(x)] = Count[key(x)] + 1
return Output

1. n/L to compute k.
2. k/L cache misses to initialize Count.
3. n/L + n cache misses for the histogram (worst case).
4. k/L cache misses for the prefix sum.
5. n/L + n + n cache misses for building Output (worst case).
Total: 3n + 3n/L + 2k/L cache misses (worst case).
Counting sort: cache complexity analysis: explanations

1. n/L to compute k: this can be done by traversing the items linearly.
2. k/L cache misses to initialize Count: this can be done by traversing Count linearly (with a stride of 1).
3. n/L + n cache misses for the histogram (worst case): accesses
in items are linear but accesses in Count are potentially
random.
4. k/L cache misses for the prefix sum: accesses in Count are
linear.
5. n/L + n + n cache misses for building Output (worst case):
accesses in items are linear but accesses in Output and
Count are potentially random.
Total: 3n + 3n/L + 2k/L cache misses (worst case).
How to fix the poor data locality of counting sort?
allocate an array Count[0..k]; initialize each array cell to zero
for each input item x:
    Count[key(x)] = Count[key(x)] + 1
total = 0
for i = 0, 1, ... k:
    c = Count[i]
    Count[i] = total
    total = total + c
allocate an output array Output[0..n-1]
for each input item x:
    store x in Output[Count[key(x)]]
    Count[key(x)] = Count[key(x)] + 1
return Output

► Recall that our worst case is 3n + 3n/L + 2k/L cache misses.
► The trouble comes from the irregular accesses, which experience capacity misses and conflict misses.
► To solve this problem, we preprocess the input so that counting sort is applied in succession to several smaller input item sets with smaller value ranges.
► To put it simply: so that k and n are small enough for Output and Count to incur cold misses only.
Counting sort: bucketing the input
allocate an array bucketsize[0..m-1]; initialize each array cell to zero
for each input item x:
    bucketsize[floor(key(x) m/(k+1))] := bucketsize[floor(key(x) m/(k+1))] + 1
total = 0
for i = 0, 1, ... m-1:
    c = bucketsize[i]
    bucketsize[i] = total
    total = total + c
allocate an array bucketedinput[0..n-1]
for each input item x:
    q := floor(key(x) m/(k+1))
    bucketedinput[bucketsize[q]] := key(x)
    bucketsize[q] := bucketsize[q] + 1
return bucketedinput

► Intention: after preprocessing, the arrays Count and Output incur cold misses only.
► To this end we choose a parameter m (more on this later) such that, after preprocessing:
1. any key in the range [ih, (i+1)h - 1] is always less than any key in the range [(i+1)h, (i+2)h - 1], for i = 0 ... m-2, with h = k/m,
2. bucketsize and m cache lines from bucketedinput all fit in cache; that is, counting cache lines, m/L + m ≤ Z/L.
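
As a concrete illustration, here is a C sketch of the preprocessing pass above, again assuming key(x) = x; the name bucket_input and the long intermediate (to avoid overflow in key(x)*m) are choices of this sketch, not part of the slides.

#include <stdlib.h>

/* Permutes input[0..n-1] (keys in [0, k]) into m buckets of increasing key ranges. */
int *bucket_input(const int *input, int n, int k, int m)
{
  int *bucketsize = (int *)calloc(m, sizeof(int));
  int *bucketed = (int *)malloc(n * sizeof(int));
  int total = 0;
  for (int i = 0; i < n; i++)                     /* bucket histogram */
    bucketsize[(int)((long)input[i] * m / (k + 1))]++;
  for (int i = 0; i < m; i++) {                   /* prefix sum: bucket start offsets */
    int c = bucketsize[i];
    bucketsize[i] = total;
    total += c;
  }
  for (int i = 0; i < n; i++) {                   /* scatter items into their buckets */
    int q = (int)((long)input[i] * m / (k + 1));
    bucketed[bucketsize[q]++] = input[i];
  }
  free(bucketsize);
  return bucketed;  /* counting sort is then run on each bucket */
}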
Counting sort: cache complexity with bucketing
allocate an array bucketsize[0..m-1]; initialize each array cell to zero
for each input item x:
    bucketsize[floor(key(x) m/(k+1))] := bucketsize[floor(key(x) m/(k+1))] + 1
total = 0
for i = 0, 1, ... m-1:
    c = bucketsize[i]
    bucketsize[i] = total
    total = total + c
allocate an array bucketedinput[0..n-1]
for each input item x:
    q := floor(key(x) m/(k+1))
    bucketedinput[bucketsize[q]] := key(x)
    bucketsize[q] := bucketsize[q] + 1
return bucketedinput

1. 3m/L + n/L cache misses to compute bucketsize.
2. Key observation: bucketedinput is traversed regularly by segment.
3. Hence, 2n/L + m + m/L cache misses to compute bucketedinput.

Preprocessing total: 3n/L + 3m/L + m cache misses.
Counting sort: cache complexity with bucketing: explanations

1. 3m/L + n/L cache misses to compute bucketsize:
► m/L to set each cell of bucketsize to zero,
► m/L + n/L for the first for loop,
► m/L for the second for loop.
2. Key observation: bucketedinput is traversed regularly by segment:
► So writing bucketedinput means writing (in a linear traversal) m consecutive arrays, of possibly different sizes, but with total size n.
► Thus, because of possible misalignments between those arrays and their cache lines, this writing procedure can yield n/L + m cache misses (and not just n/L).
3. Hence, 2n/L + m + m/L cache misses to compute bucketedinput:
► n/L to read the items,
► n/L + m to write bucketedinput,
► m/L to load bucketsize.
Cache friendly counting sort: complete cache complexity
analysis
► Assumption: the preprocessing creates buckets of average size n/m.
► After preprocessing, counting sort is applied to each bucket, whose values are in a range [ih, (i+1)h - 1], for i = 0 ... m-1.
► To be cache-friendly, this requires, for i = 0 ... m-1, h + |{key in [ih, (i+1)h - 1]}| ≤ Z and m ≤ Z/(1+L). These two are very realistic assumptions considering today's cache sizes.
And the total complexity becomes:

Q_total = Q_preprocessing + Q_sorting
        = Q_preprocessing + m Q_sortingofonebucket
        = Q_preprocessing + m (6 (n/m)/L + 2 (k/m)/L)
        = Q_preprocessing + 6n/L + 2k/L
        = 3n/L + 3m/L + m + 6n/L + 2k/L
        = 9n/L + 3m/L + m + 2k/L

instead of 3n + 3n/L + 2k/L for the naive counting sort.
Cache friendly counting sort: experimental results

► Experimentation on an Intel(R) Core(TM) i7 CPU @ 2.93GHz, with an L2 cache of 8MB.
► CPU times in seconds for both the classical and the cache-friendly counting sort algorithms.
► The keys are random machine integers in the range [0, n].

n           classical       cache-friendly counting sort
            counting sort   (preprocessing + sorting)
100000000    13.74           4.66  (3.04 + 1.62)
200000000    30.20           9.93  (6.16 + 3.77)
300000000    50.19          16.02  (9.32 + 6.70)
400000000    71.55          22.13  (12.50 + 9.63)
500000000    94.32          28.37  (15.71 + 12.66)
600000000   116.74          34.61  (18.95 + 15.66)
Plan

Hierarchical memories and their impact on our programs

Cache Analysis in Practice

The Ideal-Cache Model

Cache Complexity of some Basic Operations

Matrix Transposition

A Cache-Oblivious Matrix Multiplication Algorithm

Basic idea of a cache memory (review)

► A cache is a smaller memory, faster to access.
► Using a smaller memory to cache the contents of a larger memory provides the illusion of a fast large memory.
► Key reasons why this works: temporal locality and spatial locality.
A simple cache example


► Byte-addressable memory.
► Cache of 32 Kbyte with direct mapping and 64-byte lines (thus 512 lines), so the cache can fit 2^9 x 2^4 = 2^13 ints.
► "Therefore" successive 32 Kbyte memory blocks line up in cache.
► A cache access costs 1 cycle while a memory access costs 100 = 99 + 1 cycles.
► How addresses map into the cache:
► the bottom 6 bits are used as the offset in a cache line,
► the next 9 bits determine the cache line.
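
This decomposition can be written down directly. A minimal C sketch, assuming byte addresses and the 32 Kbyte direct-mapped cache with 64-byte lines just described (the helper names are illustrative):

#include <stdint.h>

/* offset of a byte address within its 64-byte cache line (low 6 bits) */
static inline unsigned line_offset(uintptr_t addr) { return addr & 0x3F; }
/* index of the cache line it maps to (next 9 bits, 512 lines) */
static inline unsigned line_index(uintptr_t addr) { return (addr >> 6) & 0x1FF; }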
Exercise 1 (1/2)

// sizeof(int) = 4 and the array is laid out sequentially in memory
#define S ((1<<20)*sizeof(int))
int A[S];
// Thus the size of A is 2^20 x 4 bytes
for (i = 0; i < S; i++) {
  read A[i];
}

Total access time? What kind of locality? What kind of misses?
Exercise 1 (2/2)

#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[i];
}

► S reads to A.
► 16 elements of A per cache line.
► 15 of every 16 accesses hit in cache.
► Total access time: 15(S/16) + 100(S/16).
► Spatial locality, cold misses.
Exercise 2 (1/2)

#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[0];
}

Total access time? What kind of locality? What kind of misses?
Exercise 2 (2/2)

#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[0];
}

► S reads to A.
► All except the first one hit in cache.
► Total access time: 100 + (S - 1).
► Temporal locality.
► Cold misses.
Exercise 3 (1/2)

// Assume 4 <= N <= 13
#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[i % (1<<N)];
}

Total access time? What kind of locality? What kind of misses?
Exercise 3 (2/2)

// Assume 4 <= N <= 13
#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[i % (1<<N)];
}

► S reads to A.
► One miss for each accessed line; the rest hit in cache.
► Number of accessed lines: 2^(N-4).
► Total access time: 100 · 2^(N-4) + (S - 2^(N-4)).
► Temporal and spatial locality.
► Cold misses.
Exercise 4 (1/2)

// Assume 14 <= N
#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[i % (1<<N)];
}

Total access time? What kind of locality? What kind of misses?
Exercise 4 (2/2)

// Assume 14 <= N
#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[i % (1<<N)];
}

► S reads to A.
► The first access to each line misses.
► The remaining accesses to that line hit.
► Total access time: 15(S/16) + 100(S/16).
► Spatial locality.
► Cold and capacity misses.
Exercise 5 (1/2)

// Assume 14 <= N
#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[(i*16) % (1<<N)];
}

Total access time? What kind of locality? What kind of misses?
Exercise 5 (2/2)

// Assume 14 <= N
#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[(i*16) % (1<<N)];
}

► S reads to A.
► The first access to each line misses.
► One access per line.
► Total access time: 100S.
► No locality!
► Cold and conflict misses.
Exercise 6 (1/2)

#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[random() % S];
}

Total access time? What kind of locality? What kind of misses?
Exercise 6 (2/2)

#define S ((1<<20)*sizeof(int))
int A[S];
for (i = 0; i < S; i++) {
  read A[random() % S];
}

► S reads to A.
► After N iterations, for some N, the cache is full.
► Then the chance of hitting in cache is 2^9/2^18 = 1/512, that is, the number of lines in the cache divided by the total number of cache lines used by A.
► Estimated total access time: S(511/512) · 100 + S(1/512).
► Almost no locality!
► Cold, capacity and conflict misses.
Exercise 7 (1/2)

#define S ((1<<19)*sizeof(int))
int A[S];
int B[S];
for (i = 0; i < S; i++) {
  read A[i], B[i];
}

Total access time? What kind of locality? What kind of misses?
Exercise 7 (2/2)

#define S ((1<<19)*sizeof(int))
int A[S];
int B[S];
for (i = 0; i < S; i++) {
  read A[i], B[i];
}

► S reads to A and B.
► A and B interfere in cache: indeed, two memory lines whose line addresses differ by a multiple of 2^9 (the number of lines in the cache) map to the same cache location.
► Total access time: 200S.
► Spatial locality, but the cache cannot exploit it.
► Cold and conflict misses.
Exercise 8 (1/2)

#define S (((1<<19)+4)*sizeof(int))
int A[S];
int B[S];
for (i = 0; i < S; i++) {
  read A[i], B[i];
}

Total access time? What kind of locality? What kind of misses?
Exercise 8 (2/2)

#define S (((1<<19)+4)*sizeof(int))
int A[S];
int B[S];
for (i = 0; i < S; i++) {
  read A[i], B[i];
}

► S reads to A and B.
► A and B almost do not interfere in cache.
► Total access time: 2(15S/16 + 100S/16).
► Spatial locality.
► Cold misses.
Set Associative Caches

► Set associative caches have sets with multiple lines per set.
► Each line in a set is called a way.
► Each memory line maps to a specific set and can be put into any cache line in its set.
► In our example, we assume a 32 Kbyte cache, with 64-byte lines, 2-way associative. Hence we have:
► 256 sets,
► the bottom 6 bits determine the offset in a cache line,
► the next 8 bits determine the set.
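
The corresponding address decomposition can be sketched the same way as for the direct-mapped cache; for this 2-way associative example (helper names illustrative):

#include <stdint.h>

/* offset of a byte address within its 64-byte cache line (low 6 bits) */
static inline unsigned set_offset(uintptr_t addr) { return addr & 0x3F; }
/* index of the set it maps to (next 8 bits, 256 sets); either way of the set may hold it */
static inline unsigned set_index(uintptr_t addr) { return (addr >> 6) & 0xFF; }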
Exercise 9 (1/2)

#define S ((1<<19)*sizeof(int))
int A[S];
int B[S];
for (i = 0; i < S; i++) {
  read A[i], B[i];
}

Total access time? What kind of locality? What kind of misses?
Exercise 9 (2/2)

#define S ((1<<19)*sizeof(int))
int A[S];
int B[S];
for (i = 0; i < S; i++) {
  read A[i], B[i];
}

► S reads to A and B.
► The lines of A and B map to the same sets, but there are enough lines per set.
► Total access time: 2(15S/16 + 100S/16).
► Spatial locality.
► Cold misses.
Tuned cache-oblivious square matrix transposition
void DC_matrix_transpose(int *A, int lda, int i0, int i1,
                         int j0, int dj0, int j1 /*, int dj1 = 0 */)
{
  const int THRESHOLD = 16; // tuned for the target machine
tail:
  {
    int di = i1 - i0, dj = j1 - j0;
    if (dj >= 2 * di && dj > THRESHOLD) {
      int dj2 = dj / 2;
      cilk_spawn DC_matrix_transpose(A, lda, i0, i1, j0, dj0, j0 + dj2);
      j0 += dj2; dj0 = 0; goto tail;
    } else if (di > THRESHOLD) {
      int di2 = di / 2;
      cilk_spawn DC_matrix_transpose(A, lda, i0, i0 + di2, j0, dj0, j1);
      i0 += di2; j0 += dj0 * di2; goto tail;
    } else {
      for (int i = i0; i < i1; ++i) {
        for (int j = j0; j < j1; ++j) {
          int x = A[j * lda + i];
          A[j * lda + i] = A[i * lda + j];
          A[i * lda + j] = x;
        }
        j0 += dj0;
      }
    }
  }
}


Tuned cache-oblivious matrix transposition benchmarks

size          Naive   Cache-oblivious   ratio
5000x5000       126         79           1.59
10000x10000     627        311           2.02
20000x20000    4373       1244           3.52
30000x30000   23603       2734           8.63
40000x40000   62432       4963          12.58

► Intel(R) Xeon(R) CPU E7340 @ 2.40GHz
► L1 data cache 32 KB, L2 cache 4096 KB, cache line size 64 bytes
► Both codes run on 1 core
► The improvement comes simply from an optimal memory access pattern.
Tuned cache-oblivious matrix multiplication



Extra Exercise A

#define S ((1<<19)*sizeof(int))
int A[S];
int B[S];
int C[S];
for (i = 0; i < S; i++) {
  C[i] = A[i] + B[i];
}

For the above 2-way associative cache (of size 32 Kbyte and with 64-byte lines): Total access time? What kind of locality? What kind of misses?
Extra Exercise B
Let A be an n x n invertible lower triangular matrix. A simple divide-and-conquer strategy to invert A is described below. Let A be partitioned into (n/2) x (n/2) blocks as follows:

A = ( A1   0  )          (1)
    ( A2   A3 )

where n is assumed to be a power of 2. Clearly A1 and A3 are invertible lower triangular matrices. A^{-1} is given by

A^{-1} = (  A1^{-1}               0       )          (2)
         ( -A3^{-1} A2 A1^{-1}    A3^{-1} )

Write a Cilk-like program computing A^{-1}. Analyze the work and critical path of your parallel program. We can assume that we have a subroutine computing the product (resp. sum) of 2 square matrices of order n in work Θ(n³) (resp. Θ(n²)) and span Θ(log²(n)) (resp. Θ(log(n))).
Same question using a space-efficient (i.e. in-place) multiplication.
Plan

Hierarchical memories and their impact on our programs

Cache Analysis in Practice

The Ideal-Cache Model

Cache Complexity of some Basic Operations

Matrix Transposition

A Cache-Oblivious Matrix Multiplication Algorithm

The (Z, L) ideal cache model (1/4)

[Figure 1: The ideal-cache model]
The (Z, L) ideal cache model (2/4)

► Computer with a two-level memory hierarchy:
► an ideal (data) cache of Z words partitioned into Z/L cache lines, where L is the number of words per cache line,
► an arbitrarily large main memory.
► Data moved between cache and main memory are always cache lines.
The (Z, L) ideal cache model (3/4)

[Figure 1: The ideal-cache model]

► The processor can only reference words that reside in the cache.
► If the referenced word belongs to a line already in cache, a cache hit occurs, and the word is delivered to the processor.
► Otherwise, a cache miss occurs, and the line is fetched into the cache.
The (Z, L) ideal cache model (4/4)

[Figure 1: The ideal-cache model]

► The ideal cache is fully associative: cache lines can be stored anywhere in the cache.
► The ideal cache uses the optimal off-line strategy of replacing the cache line whose next access is furthest in the future, and thus it exploits temporal locality perfectly.
Cache complexity

► For an algorithm with an input of size n, the ideal-cache model uses two complexity measures:
► the work complexity W(n), which is its conventional running time in a RAM model,
► the cache complexity Q(n; Z, L), the number of cache misses it incurs (as a function of the size Z and line length L of the ideal cache).
► When Z and L are clear from the context, we simply write Q(n) instead of Q(n; Z, L).

► An algorithm is said to be cache-aware if its behavior (and thus its performance) can be tuned to (and thus depends on) the particular cache size and line length of the targeted machine.
► Otherwise the algorithm is cache-oblivious.
Cache complexity of the naive matrix multiplication
// A is stored in ROW-major and B in COLUMN-major
for (i = 0; i < n; i++)
  for (j = 0; j < n; j++)
    for (k = 0; k < n; k++)
      C[i][j] += A[i][k] * B[j][k];

► Assuming Z ≥ 3L, computing each C[i][j] incurs O(1 + n/L) cache misses.
► If Z is large enough, say Z in Ω(n), then row i of A will be remembered for its entire involvement in computing row i of C.
► For column j of B to be remembered when necessary, one needs Z in Ω(n²), in which case the whole computation fits in cache. Therefore, we have

Q(n; Z, L) = Θ(n + n³/L)   if 3L ≤ Z < 3n²,
             Θ(1 + n²/L)   if 3n² ≤ Z.
A cache-aware matrix multiplication algorithm (1/2)

// A, B and C are in row-major storage
for (i = 0; i < n/s; i++)
  for (j = 0; j < n/s; j++)
    for (k = 0; k < n/s; k++)
      blockMult(A, B, C, i, j, k, s);

► Each matrix M in {A, B, C} consists of (n/s) x (n/s) submatrices M_{ij} (the blocks), each of which has size s x s, where s is a tuning parameter.
► Assume s divides n to keep the analysis simple.
► blockMult(A, B, C, i, j, k, s) computes C_{ij} += A_{ik} x B_{kj} using the naive algorithm (see the sketch below).
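
The slides do not show blockMult itself; the following is a plausible C sketch, assuming all three matrices are row-major of order n. Passing n explicitly is an assumption of this sketch, since the loop nest above omits it.

/* Hypothetical sketch: multiplies block (i,k) of A by block (k,j) of B
   into block (i,j) of C; all matrices are row-major of order n. */
void blockMult(const double *A, const double *B, double *C,
               int i, int j, int k, int s, int n)
{
  for (int i0 = i*s; i0 < (i+1)*s; i0++)
    for (int j0 = j*s; j0 < (j+1)*s; j0++)
      for (int k0 = k*s; k0 < (k+1)*s; k0++)
        C[i0*n + j0] += A[i0*n + k0] * B[k0*n + j0];
}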
A cache-aware matrix multiplication algorithm (2/2)
// A, B and C are in row-major storage
for (i = 0; i < n/s; i++)
  for (j = 0; j < n/s; j++)
    for (k = 0; k < n/s; k++)
      blockMult(A, B, C, i, j, k, s);

► Choose s to be the largest value such that three s x s submatrices simultaneously fit in cache, that is, Z in Θ(s²), that is, s in Θ(√Z).
► An s x s submatrix is stored on Θ(s + s²/L) cache lines.
► Thus blockMult(A, B, C, i, j, k, s) runs within Θ(s + s²/L) cache misses.
► Initializing the n² elements of C amounts to Θ(1 + n²/L) cache misses. Therefore we have

Q(n; Z, L) in Θ(1 + n²/L + (n/√Z)³ (√Z + Z/L))
           ⊆ Θ(1 + n²/L + n³/Z + n³/(L√Z)).
Plan

Hierarchical memories and their impact on our programs

Cache Analysis in Practice

The Ideal-Cache Model

Cache Complexity of some Basic Operations

Matrix Transposition

A Cache-Oblivious Matrix Multiplication Algorithm

Scanning

[Figure 2: Scanning an array of N elements arbitrarily aligned with blocks may cost one more memory transfer than ⌈N/B⌉.]

Scanning n elements stored in a contiguous segment of memory (a sequence of cache lines) costs at most ⌈n/L⌉ + 1 cache misses.
► In the above, N = n and B = L. The main issue here is alignment.
► Let (q, r) be the quotient and remainder in the integer division of n by L. Let u (resp. w) be the number of words lying in fully (resp. not fully) used cache lines.
► If w = 0 then r = 0 and the conclusion is clear.
► If 0 < w < L then r = w and the conclusion is clear again.
► If L ≤ w < 2L (the w words spread over the two boundary lines), then the number of fully used lines is at most q - 1 and the conclusion follows.
Array reversal

[Figure 3: Bentley's reversal of an array.]

Reversing an array of n elements stored in a contiguous segment of memory costs at most ⌈n/L⌉ + 1 cache misses, provided that Z ≥ 2L holds. Exercise!
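
As a starting point for the exercise, here is a minimal C sketch of such a reversal: the two indices scan towards each other, so each cache line is read and written once, provided the cache can hold at least two lines (Z ≥ 2L).

/* Reverse A[0..n-1] in place with two converging scans. */
void reverse(int *A, int n)
{
  for (int i = 0, j = n - 1; i < j; i++, j--) {
    int t = A[i];
    A[i] = A[j];
    A[j] = t;
  }
}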

Median and selection (1/8)

► A selection algorithm is an algorithm for finding the k-th smallest number in a list. This includes the cases of finding the minimum, maximum, and median elements.

► A worst-case linear-time algorithm for the general case of selecting the k-th largest element was published by Blum, Floyd, Pratt, Rivest, and Tarjan in their 1973 paper Time bounds for selection, and is sometimes called BFPRT.

► The principle is the following:
► Find a pivot that allows splitting the list into two parts of nearly equal size such that
► the search can continue in one of them.
Median and selection (2/8)

select(L, k)

if (L has 10 or fewer elements)
    sort L
    return the element in the kth position

partition L into subsets S[i] of five elements each
    (there will be n/5 subsets total)

for (i = 1 to n/5) do
    x[i] = select(S[i], 3)

M = select({x[i]}, n/10)

partition L into L1 < M, L2 = M, L3 > M

if (k <= length(L1))
    return select(L1, k)
else if (k > length(L1) + length(L2))
    return select(L3, k - length(L1) - length(L2))
else return M
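
For reference, here is a runnable C translation of this pseudocode, offered as a sketch only: the names select_kth and cmp_int are illustrative, qsort stands in for the small sorts, and a scratch array is used for the three-way partition.

#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *a, const void *b) {
  int x = *(const int *)a, y = *(const int *)b;
  return (x > y) - (x < y);
}

/* returns the k-th smallest element of L[0..n-1], with 1 <= k <= n */
int select_kth(int *L, int n, int k)
{
  if (n <= 10) {                          /* base case: sort and pick */
    qsort(L, n, sizeof(int), cmp_int);
    return L[k - 1];
  }
  int nmed = (n + 4) / 5;                 /* number of quintuplets */
  int *med = (int *)malloc(nmed * sizeof(int));
  for (int i = 0; i < nmed; i++) {        /* median of each quintuplet */
    int len = (5*i + 5 <= n) ? 5 : n - 5*i;
    qsort(L + 5*i, len, sizeof(int), cmp_int);
    med[i] = L[5*i + (len - 1) / 2];
  }
  int M = select_kth(med, nmed, (nmed + 1) / 2);  /* median of medians */
  free(med);
  /* partition into L1 < M, L2 == M, L3 > M via a scratch array */
  int *tmp = (int *)malloc(n * sizeof(int));
  int n1 = 0, n2 = 0, n3 = 0;
  for (int i = 0; i < n; i++) if (L[i] <  M) tmp[n1++] = L[i];
  for (int i = 0; i < n; i++) if (L[i] == M) { tmp[n1 + n2] = M; n2++; }
  for (int i = 0; i < n; i++) if (L[i] >  M) { tmp[n1 + n2 + n3] = L[i]; n3++; }
  memcpy(L, tmp, n * sizeof(int));
  free(tmp);
  if (k <= n1)        return select_kth(L, n1, k);
  if (k > n1 + n2)    return select_kth(L + n1 + n2, n3, k - n1 - n2);
  return M;
}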

Median and selection (3/8)

For an input list of n elements, the number T(n) of comparisons satisfies

T(n) ≤ 12n/5 + T(n/5) + T(7n/10).

► We always throw away either L3 (the values greater than M) or L1 (the values less than M). Suppose we throw away L3.
► Among the n/5 values x[i], n/10 are larger than M, since M was defined to be the median of these values.
► For each i such that x[i] is larger than M, two other values in S[i] are at least x[i], hence larger than M.
► So L3 has at least 3n/10 elements. By a symmetric argument, L1 has at least 3n/10 elements.
► Therefore the final recursive call is on a list of at most 7n/10 elements and takes time at most T(7n/10).
Median and selection (4/8)
How to solve

T(n) ≤ 12n/5 + T(n/5) + T(7n/10)?

► We "try" T(n) ≤ c n by induction. The substitution gives T(n) ≤ n (12/5 + 9c/10). For n (12/5 + 9c/10) ≤ c n we need c ≥ 24.
► The tree-based method also brings T(n) ≤ 24n.
► The same tree-expansion method then shows that, more generally, if T(n) ≤ cn + T(an) + T(bn), where a + b < 1, the total time is c (1/(1 - a - b)) n.
► With a lot of work one can reduce the number of comparisons
to 2.95n [D. Dor and U. Zwick, Selecting the Median, 6th
SODA, 1995].
Median and selection (5/8)

In order to analyze its cache complexity, let us review the algorithm and consider an array instead of a list.
Step 1: Conceptually partition the array into n/5 quintuplets
of five adjacent elements each.
Step 2: Compute the median of each quintuplet using O(1) comparisons.
Step 3: Recursively compute the median of these medians
(which is not necessarily the median of the original
array).
Step 4: Partition the elements of the array into three groups, according to whether they are equal to, strictly less than, or strictly greater than this median of medians.
Step 5: Count the number of elements in each group, and
recurse into the group that contains the element of
the desired rank.
Median and selection (6/8)
To make this algorithm cache-oblivious, we specify how each step
works in terms of memory layout and scanning. We assume that
Z > 3L.
Step 1: Just conceptual; no work needs to be done.
Step 2: requires two parallel scans, one reading the quintuplets (5 elements at a time), and the other writing a new array of computed medians, incurring Θ(1 + n/L) cache misses.
Step 3: Just a recursive call on size n/5.
Step 4: Can be done with three parallel scans, one reading the array, and two others writing the partitioned arrays, incurring again Θ(1 + n/L) cache misses.
Step 5: Just a recursive call on size 7n/10.
This leads to

Q(n) ≤ Q(n/5) + Q(7n/10) + Θ(1 + n/L).
Median and selection (7/8)

How to solve

Q(n) ≤ Q(n/5) + Q(7n/10) + Θ(1 + n/L)?

The open question is the base case.

► Suppose the base case is Q(O(1)) in O(1).
► Following the proof of the Master Theorem, the number of leaves of the recursion tree satisfies N(n) = N(n/5) + N(7n/10), N(1) = 1, which brings N(n) = n^c where

(1/5)^c + (7/10)^c = 1,

leading to c ≈ 0.8397803.
► Since each leaf incurs a constant number of cache misses, we have Q(n) in Ω(n^c), which could be larger or smaller than Θ(1 + n/L).
Median and selection (8/8)
How to solve

Q(n) ≤ Q(n/5) + Q(7n/10) + Θ(1 + n/L)?

► Fortunately, we have a better base case: Q(O(L)) in O(1).
► Indeed, once the problem fits into O(1) cache lines, all five steps incur only a constant number of cache misses.
► Thus we have only (n/L)^c leaves in the recursion tree.
► In total, these leaves incur O((n/L)^c) = o(n/L) cache misses.
► In fact, the cost per level decreases geometrically from the root, so the total cost is dominated by the cost of the root. Finally, we have

Q(n) in Θ(1 + n/L).
Plan

Hierarchical memories and their impact on our programs

Cache Analysis in Practice

The Ideal-Cache Model

Cache Complexity of some Basic Operations

Matrix Transposition

A Cache-Oblivious Matrix Multiplication Algorithm

Matrix transposition: various algorithms

► Matrix transposition problem: Given an m x n matrix A stored in a row-major layout, compute and store A^T into an n x m matrix B, also stored in a row-major layout.

► We shall describe a recursive cache-oblivious algorithm which uses Θ(mn) work and incurs Θ(1 + mn/L) cache misses, which is optimal.

► The straightforward algorithm employing doubly nested loops incurs Θ(mn) cache misses on one of the matrices when m ≥ Z/L and n ≥ Z/L.

► We shall start with an apparently good algorithm and use complexity analysis to show that it is even worse than the straightforward algorithm.
Matrix transposition: a first divide-and-conquer (1/4)
► For simplicity, assume that our input matrix A is square of order n and that n is a power of 2, say n = 2^k.
► We divide A into four square quadrants of order n/2 and we have

A = ( A_{1,1}  A_{1,2} )      A^T = ( A_{1,1}^T  A_{2,1}^T )
    ( A_{2,1}  A_{2,2} )            ( A_{1,2}^T  A_{2,2}^T )

► This observation yields an "in-place" algorithm:
1. If n = 1 then return A.
2. If n > 1 then
2.1 recursively compute A_{1,1}^T, A_{2,1}^T, A_{1,2}^T, A_{2,2}^T in place as

( A_{1,1}^T  A_{1,2}^T )
( A_{2,1}^T  A_{2,2}^T )

2.2 exchange A_{1,2}^T and A_{2,1}^T.

► What is the number M(n) of memory accesses to A performed by this algorithm on an input matrix A of order n?
Matrix transposition: a first divide-and-conquer (2/4)
► M(n) satisfies the following recurrence relation:

M(n) = 0                      if n = 1,
       4 M(n/2) + 2 (n/2)²    if n > 1.

► Unfolding the tree of recursive calls or using the Master Theorem, one obtains:

M(n) = 2 (n/2)² log₂(n).

► This is worse than the straightforward algorithm (which employs doubly nested loops). Indeed, for the latter, we have M(n) = n² - n. Explain why!
► Despite this negative result, we shall analyze the cache complexity of this first divide-and-conquer algorithm. Indeed, it provides us with an easy training exercise.
► We shall study later a second, efficiency-optimal, divide-and-conquer algorithm, whose cache complexity analysis is more involved.
Matrix transposition: a first divide-and-conquer (3/4)
► We shall determine Q(n), the number of cache misses incurred by our first divide-and-conquer algorithm on a (Z, L)-ideal cache machine.
► For n small enough, the entire input matrix or the entire block (input of some recursive call) fits in cache and incurs only the cost of a scanning. Because of possible misalignment, that is, n(⌈n/L⌉ + 1).
► Important: For simplicity, some authors write n/L instead of ⌈n/L⌉. This can be dangerous.
► However: these simplifications are fine for asymptotic estimates, keeping in mind that n/L is a rational number satisfying

n/L - 1 < ⌊n/L⌋ ≤ n/L ≤ ⌈n/L⌉ < n/L + 1.

Thus, for a fixed L, the functions ⌊n/L⌋, n/L and ⌈n/L⌉ are asymptotically of the same order of magnitude.
► We need to translate "for n small enough" into a formula. We claim that there exists a real constant α > 0 s.t. for all n and Z we have

n² < αZ  ⟹  Q(n) ≤ n²/L + n.
Matrix transposition: a first divide-and-conquer (4/4)
► Q(n) satisfies the following recurrence relation:

Q(n) ≤  n²/L + n                       if n² < αZ (base case),
        4 Q(n/2) + 2 ((n/2)²/L + n/2)  if n² ≥ αZ (recurrence).

► Indeed, exchanging 2 blocks amounts to 2 ((n/2)²/L + n/2) accesses.
► Unfolding the recurrence relation k times (more details in class) yields

Q(n) = 4^k Q(n/2^k) + k n²/(2L) + (2^k - 1) n.

► The minimum k for reaching the base case satisfies (n/2^k)² = αZ, that is, 4^k = n²/(αZ), that is, k = log₄(n²/(αZ)). This implies 2^k = n/√(αZ) and thus

Q(n) ≤ (n²/(αZ)) (αZ/L + √(αZ)) + log₄(n²/(αZ)) n²/(2L) + (n/√(αZ)) n

     ≤ n²/L + 2 n²/√(αZ) + log₄(n²/(αZ)) n²/(2L).
A matrix transposition cache-oblivious algorithm (1/2)

► If n ≥ m, the Rec-Transpose algorithm partitions

A = (A1  A2),    B = ( B1 )
                     ( B2 )

and recursively executes Rec-Transpose(A1, B1) and Rec-Transpose(A2, B2).
► If m > n, the Rec-Transpose algorithm partitions

A = ( A1 ),    B = (B1  B2)
    ( A2 )

and recursively executes Rec-Transpose(A1, B1) and Rec-Transpose(A2, B2).
A matrix transposition cache-oblivious algorithm (2/2)

► Recall that the matrices are stored in row-major layout.

► Let α be a constant sufficiently small such that the following two conditions hold:
(i) two sub-matrices of size m x n and n x m, where max{m, n} ≤ αL, fit in cache,
(ii) even if each row starts at a different cache line.

► We distinguish three cases for the input matrix A:
► Case I: max{m, n} ≤ αL.
► Case II: m ≤ αL < n or n ≤ αL < m.
► Case III: m, n > αL.
Case I: max{m, n} ≤ αL.

► Both matrices fit in O(1) + 2mn/L lines.

► From the choice of α, the number of lines required for the entire computation is at most Z/L.

► Thus, no cache lines need to be evicted during the computation. Hence, it is as if we were simply scanning A and B.

► Therefore Q(m, n) in O(1 + mn/L).
Case II: m ≤ αL < n or n ≤ αL < m.

► Consider n ≤ αL < m. The Rec-Transpose algorithm divides the greater dimension m by 2 and recurses.
► At some point in the recursion, we have αL/2 ≤ m ≤ αL and the whole computation fits in cache. At this point:
► the input array resides in contiguous locations, requiring at most Θ(1 + nm/L) cache misses,
► the output array consists of nm elements in n rows, where in the worst case every row starts at a different cache line, leading to at most Θ(n + nm/L) cache misses.
► Since m/L in [α/2, α], the total cache complexity for this base case is Θ(1 + n), yielding the recurrence (where the resulting Q(m, n) is a worst-case estimate)

Q(m, n) = Θ(1 + n)             if m in [αL/2, αL],
          2 Q(m/2, n) + O(1)   otherwise;

whose solution satisfies Q(m, n) = Θ(1 + mn/L).

Case III: m, n > αL.

► As in Case II, at some point in the recursion both n and m fall into the range [αL/2, αL].
► The whole problem fits into cache and can be solved with at most Θ(m + n + mn/L) cache misses.
► The worst-case cache miss estimate satisfies the recurrence

Q(m, n) = Θ(m + n + mn/L)      if m, n in [αL/2, αL],
          2 Q(m/2, n) + O(1)   if m ≥ n,
          2 Q(m, n/2) + O(1)   otherwise;

whose solution is Q(m, n) = Θ(1 + mn/L).

► Therefore, the Rec-Transpose algorithm has optimal cache complexity.
► Indeed, for an m x n matrix, the algorithm must write to mn distinct elements, which occupy at least ⌈mn/L⌉ cache lines.
Plan

Hierarchical memories and their impact on our programs

Cache Analysis in Practice

The Ideal-Cache Model

Cache Complexity of some Basic Operations

Matrix Transposition

A Cache-Oblivious Matrix Multiplication Algorithm

A cache-oblivious matrix multiplication algorithm (1/3)
► We describe and analyze a cache-oblivious algorithm for multiplying an m x n matrix by an n x p matrix using
► Θ(mnp) work and incurring
► Θ(m + n + p + (mn + np + mp)/L + mnp/(L√Z)) cache misses.
► This straightforward divide-and-conquer algorithm contains no voodoo parameters (tuning parameters) and it uses cache optimally.
► Intuitively, this algorithm uses the cache effectively, because once a subproblem fits into the cache, its smaller subproblems can be solved in cache with no further cache misses.
► These results require the tall-cache assumption for matrices stored in row-major layout format.
► This assumption can be relaxed for certain other layouts, see (Frigo et al. 1999).
► The case of Strassen's algorithm is also treated in (Frigo et al. 1999).
A cache-oblivious matrix multiplication algorithm (2/3)
► To multiply an m x n matrix A and an n x p matrix B, the Rec-Mult algorithm halves the largest of the three dimensions and recurses according to one of the following three cases (sketched in C below):

( A1 ) B = ( A1 B )                          (3)
( A2 )     ( A2 B ),

(A1  A2) ( B1 ) = A1 B1 + A2 B2,             (4)
         ( B2 )

A (B1  B2) = (A B1  A B2).                   (5)

► In case (3), we have m ≥ max{n, p}. Matrix A is split horizontally, and both halves are multiplied by matrix B.
► In case (4), we have n ≥ max{m, p}. Both matrices are split, and the two halves are multiplied.
► In case (5), we have p ≥ max{m, n}. Matrix B is split vertically, and each half is multiplied by A.
► The base case occurs when m = n = p = 1.
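
A minimal C sketch of Rec-Mult, computing C += A·B; for simplicity it assumes all three row-major matrices share one row stride ld, and the tie-breaking among cases (3), (4) and (5) is a choice of this sketch:

/* C (m x p) += A (m x n) * B (n x p); all row-major with row stride ld. */
void rec_mult(const double *A, const double *B, double *C,
              int m, int n, int p, int ld)
{
  if (m == 1 && n == 1 && p == 1) {              /* base case */
    C[0] += A[0] * B[0];
  } else if (m >= n && m >= p) {                 /* case (3): split A horizontally */
    rec_mult(A,            B, C,            m/2,     n, p, ld);
    rec_mult(A + (m/2)*ld, B, C + (m/2)*ld, m - m/2, n, p, ld);
  } else if (n >= m && n >= p) {                 /* case (4): split the shared dimension */
    rec_mult(A,       B,            C, m, n/2,     p, ld);
    rec_mult(A + n/2, B + (n/2)*ld, C, m, n - n/2, p, ld);
  } else {                                       /* case (5): split B vertically */
    rec_mult(A, B,       C,       m, n, p/2,     ld);
    rec_mult(A, B + p/2, C + p/2, m, n, p - p/2, ld);
  }
}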
A cache-oblivious matrix multiplication algorithm (3/3)

► Let α > 0 be the largest constant sufficiently small that three submatrices of sizes m' x n', n' x p', and m' x p' all fit completely in the cache, whenever max{m', n', p'} ≤ α√Z holds.

► We distinguish four cases depending on the initial size of the matrices:
► Case I: m, n, p > α√Z.
► Case II: (m ≤ α√Z and n, p > α√Z) or (n ≤ α√Z and m, p > α√Z) or (p ≤ α√Z and m, n > α√Z).
► Case III: (n, p ≤ α√Z and m > α√Z) or (m, p ≤ α√Z and n > α√Z) or (m, n ≤ α√Z and p > α√Z).
► Case IV: m, n, p ≤ α√Z.
► Similarly to matrix transposition, Q(m, n, p) is a worst-case cache miss estimate.
Case I: m, n, p > α√Z. (1/2)

Q(m, n, p) =                                                       (6)
  Θ((mn + np + mp)/L)      if m, n, p in [α√Z/2, α√Z],
  2 Q(m/2, n, p) + O(1)    otherwise, if m ≥ n and m ≥ p,
  2 Q(m, n/2, p) + O(1)    otherwise, if n > m and n ≥ p,
  2 Q(m, n, p/2) + O(1)    otherwise.

► The base case arises as soon as all three submatrices fit in cache:
► The total number of cache lines used by the three submatrices is Θ((mn + np + mp)/L).
► The only cache misses that occur during the remainder of the recursion are the Θ((mn + np + mp)/L) cache misses required to bring the matrices into cache.
Case I: m, n, p > α√Z. (2/2)

Q(m, n, p) =
  Θ((mn + np + mp)/L)      if m, n, p in [α√Z/2, α√Z],
  2 Q(m/2, n, p) + O(1)    otherwise, if m ≥ n and m ≥ p,
  2 Q(m, n/2, p) + O(1)    otherwise, if n > m and n ≥ p,
  2 Q(m, n, p/2) + O(1)    otherwise.

► In the recursive cases, when the matrices do not fit in cache, we pay for the cache misses of the recursive calls, plus O(1) cache misses for the overhead of manipulating submatrices.
► The solution to this recurrence is

Q(m, n, p) = Θ(mnp/(L√Z)).

► Indeed, for the base case, m, n, p in Θ(α√Z).
Case II: m ≤ α√Z and n, p > α√Z.

► Here, we shall present the case where m ≤ α√Z and n, p > α√Z.
► The Rec-Mult algorithm always divides n or p by 2 according to cases (4) and (5).
► At some point in the recursion, both n and p are small enough that the whole problem fits into cache.
► The number of cache misses can be described by the recurrence

Q(m, n, p) =                                                       (7)
  Θ(1 + n + m + np/L)      if n, p in [α√Z/2, α√Z],
  2 Q(m, n/2, p) + O(1)    otherwise, if n ≥ p,
  2 Q(m, n, p/2) + O(1)    otherwise;

whose solution is Q(m, n, p) = Θ(np/L + mnp/(L√Z)).

► Indeed, in the base case: mnp/(L√Z) ≤ α np/L.
► The term Θ(1 + n + m) appears because of the row-major layout.
Case III: n, p ≤ α√Z and m > α√Z.

► In each of these cases, one of the matrices fits into cache, and the others do not.
► Here, we shall present the case where n, p ≤ α√Z and m > α√Z.
► The Rec-Mult algorithm always divides m by 2 according to case (3).
► At some point in the recursion, m falls into the range α√Z/2 ≤ m ≤ α√Z, and the whole problem fits in cache.
► The number of cache misses can be described by the recurrence

Q(m, n, p) =                                                       (8)
  Θ(1 + m)                 if m in [α√Z/2, α√Z],
  2 Q(m/2, n, p) + O(1)    otherwise;

whose solution is Q(m, n, p) = Θ(m + mnp/(L√Z)).

► Indeed, in the base case: mnp/(L√Z) ≤ α²√Z m/L; moreover Z in Ω(L²) (tall-cache assumption).
Case IV: m, n, p ≤ α√Z.

► From the choice of α, all three matrices fit into cache.

► The matrices are stored on Θ(1 + mn/L + np/L + mp/L) cache lines.

► Therefore, we have Q(m, n, p) = Θ(1 + (mn + np + mp)/L).
Typical memory layouts for matrices

[Figure 2: Layout of a 16 x 16 matrix in (a) row-major, (b) column-major, (c) 4 x 4-blocked, and (d) bit-interleaved layouts.]
Acknowledgements and references

Acknowledgements.
► Charles E. Leiserson (MIT) and Matteo Frigo (Intel) for
providing me with the sources of their article Cache-Oblivious
Algorithms.
► Charles E. Leiserson (MIT) and Saman P. Amarasinghe (MIT)
for sharing with me the sources of their course notes and
other documents.

References.
► Cache-Oblivious Algorithms by Matteo Frigo, Charles E.
Leiserson, Harald Prokop and Sridhar Ramachandran.

► Cache-Oblivious Algorithms and Data Structures by Erik D. Demaine.
