03 Dynamic Sched


Computer Architecture Instruction Level Parallelism

S. G. Shivaprasad Yadav

Review from Last Time


4 papers: all about where to draw the line between HW and SW.
IBM set the foundations for ISAs since the 1960s:
- 8-bit byte; byte-addressable memory (as opposed to word-addressable memory)
- 32-bit words; two's complement arithmetic (but not the first processor to use it)
- 32-bit (SP) / 64-bit (DP) floating-point format and registers
- commercial use of microcoded CPUs
- binary compatibility / computer family

B5000 was a very different model: HLL only, stack, segmented VM.
The IBM paper made the case for ISAs good for microcoded processors, leading to CISC.
The Berkeley paper made the case for ISAs for pipeline + cache microprocessors (VLSI), leading to RISC.
Who won RISC vs. CISC? VAX is dead; Intel 80x86 on the desktop, RISC in embedded, and both x86 and RISC in servers.
8/21/2011 2

Outline
- ILP
- Compiler techniques to increase ILP: loop unrolling
- Static branch prediction
- Dynamic branch prediction
- Overcoming data hazards with dynamic scheduling (start)
- Tomasulo algorithm
- Conclusion


Recall from Pipelining Review


Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls
- Ideal pipeline CPI: measure of the maximum performance attainable by the implementation
- Structural hazards: HW cannot support this combination of instructions
- Data hazards: instruction depends on the result of a prior instruction still in the pipeline
- Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps)


Instruction Level Parallelism


Instruction-Level Parallelism (ILP): overlap the execution of instructions to improve performance.
2 approaches to exploit ILP:
1) Rely on hardware to help discover and exploit the parallelism dynamically (e.g., Pentium 4, AMD Opteron, IBM Power)
2) Rely on software technology to find parallelism statically at compile time (e.g., Itanium 2)

Next 4 lectures on this topic


Instruction-Level Parallelism (ILP)


Basic block (BB) ILP is quite small.
- BB: a straight-line code sequence with no branches in except to the entry and no branches out except at the exit
- For typical MIPS programs, the average dynamic branch frequency is 15% to 25% => 4 to 7 instructions execute between a pair of branches
- Since these instructions in a BB are likely to depend on each other, the amount of overlap we can exploit within a basic block is likely to be less than the average BB size.

To obtain substantial performance enhancements, we must exploit ILP across multiple basic blocks.
Simplest: loop-level parallelism, exploiting parallelism among iterations of a loop. E.g.,
  for (i=1; i<=1000; i=i+1)
    x[i] = x[i] + y[i];
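A minimal C sketch of why this loop exposes loop-level parallelism: no iteration reads a value that another iteration writes, so the iterations can run in any order (the function names below are hypothetical, not from the slides).

```c
#include <assert.h>

#define N 1000

/* x[i] = x[i] + y[i]: each iteration touches only its own element. */
void add_forward(int *x, const int *y) {
    for (int i = 0; i < N; i++) x[i] = x[i] + y[i];
}

/* Running the iterations in the opposite order must give the same result. */
void add_reverse(int *x, const int *y) {
    for (int i = N - 1; i >= 0; i--) x[i] = x[i] + y[i];
}

int loop_iterations_independent(void) {
    int a[N], b[N], y[N];
    for (int i = 0; i < N; i++) { a[i] = b[i] = i; y[i] = 2 * i + 1; }
    add_forward(a, y);
    add_reverse(b, y);
    for (int i = 0; i < N; i++)
        if (a[i] != b[i]) return 0;
    return 1;
}
```

Because the orders agree, a compiler or hardware scheduler is free to overlap the iterations.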

Loop-Level Parallelism
Exploit loop-level parallelism by unrolling the loop, either:
1. dynamically, via branch prediction, or
2. statically, via loop unrolling by the compiler
(Another way is vectors, to be covered later.)
Determining instruction dependence is critical to loop-level parallelism. To exploit ILP we must determine which instructions can be executed in parallel. If 2 instructions are:
- parallel, they can execute simultaneously in a pipeline of arbitrary depth without causing any stalls (assuming no structural hazards)
- dependent, they are not parallel and must be executed in order, although they may often be partially overlapped

Data Dependence and Hazards


Three different types of dependences: data dependences, name dependences, and control dependences.
Instruction j is data dependent (a.k.a. true dependence) on instruction i if either of the following holds:
1. instruction i produces a result that may be used by instruction j, e.g.,
     I: add r1,r2,r3
     J: sub r4,r1,r3
2. instruction j is data dependent on instruction k, and instruction k is data dependent on instruction i (a chain of dependences)


Data Dependence and Hazards


Example: consider the following MIPS code sequence, which increments a vector of values in memory (starting at 0(R1), with the last element at 8(R2)) by a scalar in register F2.


Data Dependence and Hazards


Both of the dependent sequences in this example, as shown by the arrows in the original figure, have each instruction depending on the previous one. The arrows show the order that must be preserved for correct execution: each arrow points from an instruction that must precede to the instruction that the arrowhead points to.
If two instructions are data dependent, they cannot execute simultaneously or be completely overlapped. The dependence implies that there would be a chain of one or more data hazards between the two instructions. Executing the instructions simultaneously will cause a processor with pipeline interlocks (and a pipeline depth longer than the distance between the instructions in cycles) to detect a hazard and stall, thereby reducing or eliminating the overlap.
A data dependence in an instruction sequence reflects a data dependence in the source code, and the effect of the original data dependence must be preserved. If a data dependence causes a hazard in the pipeline, it is called a read-after-write (RAW) hazard.

ILP and Data Dependences, Hazards


HW/SW must preserve program order: the order in which instructions would execute if executed sequentially, as determined by the original source program.
- Dependences are a property of programs.
- The presence of a dependence indicates the potential for a hazard, but the actual hazard and the length of any stall are properties of the pipeline.
Importance of the data dependences:
1) indicates the possibility of a hazard
2) determines the order in which results must be calculated
3) sets an upper bound on how much parallelism can possibly be exploited
HW/SW goal: exploit parallelism by preserving program order only where it affects the outcome of the program.

Data Dependences, Hazards
A dependence can be overcome in two different ways:
maintaining the dependence but avoiding a hazard, and eliminating a dependence by transforming the code.

Scheduling the code is the primary method used to avoid a hazard without altering a dependence, and such scheduling can be done both by the compiler and by the hardware.


Name Dependence #1: Anti-dependence


Name dependence: when 2 instructions use the same register or memory location, called a name, but there is no flow of data between the instructions associated with that name; there are 2 versions of name dependence.
Anti-dependence: instruction J writes an operand that the earlier instruction I reads, so I must read it before J writes it:
  I: sub r4,r1,r3
  J: add r1,r2,r3
  K: mul r6,r1,r7
Called an anti-dependence by compiler writers; it results from reuse of the name r1. If an anti-dependence causes a hazard in the pipeline, it is called a write-after-read (WAR) hazard.
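A minimal C sketch of the idea (hypothetical values; the function names are illustrative): reusing the name r1 forces I before J, but renaming J's destination removes the conflict without changing any result.

```c
#include <assert.h>

/* I reads r1, then J reuses the name r1: a WAR ordering constraint. */
void with_reuse(int r1, int r2, int r3, int r7, int *r4, int *r6) {
    *r4 = r1 - r3;     /* I: sub r4,r1,r3  (must read old r1) */
    r1  = r2 + r3;     /* J: add r1,r2,r3  (reuses name r1)   */
    *r6 = r1 * r7;     /* K: mul r6,r1,r7                     */
}

/* Renaming J's destination (r1b) removes the anti-dependence:
   I and J no longer touch the same name, so they could run in
   either order (or simultaneously). */
void with_renaming(int r1, int r2, int r3, int r7, int *r4, int *r6) {
    int r1b = r2 + r3;  /* J renamed: writes r1b instead of r1 */
    *r4 = r1 - r3;      /* I: unaffected by J's write          */
    *r6 = r1b * r7;     /* K: reads the renamed value          */
}

int renaming_preserves_results(void) {
    int a4, a6, b4, b6;
    with_reuse(5, 2, 3, 7, &a4, &a6);
    with_renaming(5, 2, 3, 7, &b4, &b6);
    return a4 == b4 && a6 == b6;
}
```

This is exactly what register renaming does in hardware (Tomasulo, later in these slides) or in the compiler.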

Name Dependence #2: Output dependence


Output dependence: instructions I and J both write the same operand, so J's write must not happen before I's:
  I: sub r1,r4,r3
  J: add r1,r2,r3
  K: mul r6,r1,r7
Called an output dependence by compiler writers; this also results from the reuse of the name r1. If an output dependence causes a hazard in the pipeline, it is called a write-after-write (WAW) hazard.
Instructions involved in a name dependence can execute simultaneously if the name used in the instructions is changed so the instructions do not conflict.
Register renaming resolves name dependences for registers, either by the compiler or by HW.

Data Hazards
A hazard is created whenever there is a dependence between instructions, and they are close enough that the overlap during execution would change the order of access to the operand involved in the dependence. Because of the dependence, we must preserve program order, that is, the order that the instructions would execute in if executed sequentially one at a time as determined by the original source program. The goal of both software and hardware techniques is to exploit parallelism by preserving program order only where it affects the outcome of the program. Detecting and avoiding hazards ensures that necessary program order is preserved.
Data hazards may be classified as one of three types, depending on the order of read and write accesses in the instructions. Consider two instructions i and j, with i preceding j in program order. The possible data hazards are:
- RAW (read after write)
- WAW (write after write)
- WAR (write after read)

Data Hazards
RAW (read after write)
- j tries to read a source before i writes it, so j incorrectly gets the old value. This hazard is the most common type and corresponds to a true data dependence. Program order must be preserved to ensure that j receives the value from i.

WAW (write after write)
- j tries to write an operand before it is written by i. The writes end up being performed in the wrong order, leaving the value written by i rather than the value written by j in the destination. This hazard corresponds to an output dependence. WAW hazards are present only in pipelines that write in more than one pipe stage or allow an instruction to proceed even when a previous instruction is stalled.

WAR (write after read)
- j tries to write a destination before it is read by i, so i incorrectly gets the new value. This hazard arises from an anti-dependence. WAR hazards cannot occur in most static issue pipelines, even deeper pipelines or floating-point pipelines, because all reads are early (in ID) and all writes are late (in WB). A WAR hazard occurs either when there are some instructions that write results early in the instruction pipeline and other instructions that read a source late in the pipeline, or when instructions are reordered.

Control Dependencies
A control dependence determines the ordering of an instruction, i, with respect to a branch instruction so that instruction i is executed in correct program order and only when it should be. Every instruction is control dependent on some set of branches, and, in general, these control dependences must be preserved to preserve program order.
One of the simplest examples of a control dependence is the dependence of the statements in the "then" part of an if statement on the branch. For example, in the code segment
  if p1 { S1; };
  if p2 { S2; }
S1 is control dependent on p1, and S2 is control dependent on p2 but not on p1.

Control Dependencies
In general, there are two constraints imposed by control dependences:
1. An instruction that is control dependent on a branch cannot be moved before the branch, so that its execution is no longer controlled by the branch. For example, we cannot take an instruction from the then portion of an if statement and move it before the if statement.
2. An instruction that is not control dependent on a branch cannot be moved after the branch, so that its execution is controlled by the branch. For example, we cannot take a statement before the if statement and move it into the then portion.


Control Dependence Ignored


Control dependence need not always be preserved:
- we are willing to execute instructions that should not have been executed, thereby violating the control dependences, if we can do so without affecting the correctness of the program
Instead, 2 properties critical to program correctness are:
1) exception behavior, and
2) data flow


Exception Behavior
Preserving exception behavior: any changes in instruction execution order must not change how exceptions are raised in the program (=> no new exceptions).
Example:
      DADDU R2,R3,R4
      BEQZ  R2,L1
      LW    R1,0(R2)
  L1:
(Assume branches are not delayed.)
Problem with moving LW before BEQZ? If the branch is taken (R2 = 0), the hoisted load could raise a memory protection exception that the original program never raises.


Data Flow
Data flow: the actual flow of data values among instructions that produce results and those that consume them.
- branches make the flow dynamic; they determine which instruction is the supplier of data
Example:
     DADDU R1,R2,R3
     BEQZ  R4,L
     DSUBU R1,R5,R6
  L: OR    R7,R1,R8
Does OR depend on DADDU or DSUBU? It depends on the branch outcome; we must preserve data flow on execution.

Outline
- ILP
- Compiler techniques to increase ILP: loop unrolling
- Static branch prediction
- Dynamic branch prediction
- Overcoming data hazards with dynamic scheduling (start)
- Tomasulo algorithm
- Conclusion


Basic Pipeline Scheduling and Loop Unrolling


To keep a pipeline full, parallelism among instructions must be exploited by finding sequences of unrelated instructions that can be overlapped in the pipeline. To avoid a pipeline stall, a dependent instruction must be separated from the source instruction by a distance in clock cycles equal to the pipeline latency of that source instruction. A compiler's ability to perform this scheduling depends both on the amount of ILP available in the program and on the latencies of the functional units in the pipeline. We assume the standard five-stage integer pipeline, so that branches have a delay of 1 clock cycle. We assume that the functional units are fully pipelined or replicated (as many times as the pipeline depth), so that an operation of any type can be issued on every clock cycle and there are no structural hazards. Now we look at how the compiler can increase the amount of available ILP by transforming loops


Software Techniques - Example


Example: this code adds a scalar to a vector:
  for (i=1000; i>0; i=i-1)
    x[i] = x[i] + s;
Let's look at the performance of this loop, showing how we can use the parallelism to improve its performance for a MIPS pipeline with the latencies shown below. Ignore the delayed branch in these examples.

Instruction producing result   Instruction using result   Latency in cycles   Stalls between (cycles)
FP ALU op                      Another FP ALU op          4                   3
FP ALU op                      Store double               3                   2
Load double                    FP ALU op                  1                   1
Load double                    Store double               1                   0
Integer op                     Integer op                 1                   0


FP Loop: Where are the Hazards?


First translate into MIPS code (to simplify, assume 8 is the lowest address):

Loop: L.D    F0,0(R1)   ;F0=vector element
      ADD.D  F4,F0,F2   ;add scalar from F2
      S.D    0(R1),F4   ;store result
      DADDUI R1,R1,#-8  ;decrement pointer 8B (DW)
      BNEZ   R1,Loop    ;branch R1!=zero


FP Loop Showing Stalls


 1 Loop: L.D    F0,0(R1)   ;F0=vector element
 2       stall
 3       ADD.D  F4,F0,F2   ;add scalar in F2
 4       stall
 5       stall
 6       S.D    0(R1),F4   ;store result
 7       DADDUI R1,R1,#-8  ;decrement pointer 8B (DW)
 8       stall             ;assumes can't forward to branch
 9       BNEZ   R1,Loop    ;branch R1!=zero

Instruction producing result   Instruction using result   Latency in clock cycles
FP ALU op                      Another FP ALU op          3
FP ALU op                      Store double               2
Load double                    FP ALU op                  1

9 clock cycles: rewrite the code to minimize stalls?

Revised FP Loop Minimizing Stalls


 1 Loop: L.D    F0,0(R1)
 2       DADDUI R1,R1,#-8
 3       ADD.D  F4,F0,F2
 4       stall
 5       stall
 6       S.D    8(R1),F4   ;altered offset since DADDUI moved up
 7       BNEZ   R1,Loop

Swap DADDUI and S.D by changing the address of S.D.

Instruction producing result   Instruction using result   Latency in clock cycles
FP ALU op                      Another FP ALU op          3
FP ALU op                      Store double               2
Load double                    FP ALU op                  1

7 clock cycles, but just 3 for execution (L.D, ADD.D, S.D) and 4 for loop overhead; how to make it faster?

Loop Unrolling
A simple scheme for increasing the number of instructions relative to the branch and overhead instructions is loop unrolling. Unrolling simply replicates the loop body multiple times, adjusting the loop termination code. Loop unrolling can also be used to improve scheduling. Because it eliminates the branch, it allows instructions from different iterations to be scheduled together. In this case, we can eliminate the data use stalls by creating additional independent instructions within the loop body. If we simply replicated the instructions when we unrolled the loop, the resulting use of the same registers could prevent us from effectively scheduling the loop. Thus, we will want to use different registers for each iteration, increasing the required number of registers.
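The idea can be sketched in C (a minimal sketch, assuming the trip count is a multiple of 4; the temporaries t0..t3 stand in for using different registers per copy, as the slide recommends):

```c
#include <assert.h>

/* Unroll-by-4 version of: for each i, x[i] = x[i] + s.
   One branch now covers four element updates, and the four
   independent temporaries give the scheduler freedom to reorder. */
void add_scalar_unrolled4(double *x, double s, int n) {
    for (int i = 0; i < n; i += 4) {   /* n assumed a multiple of 4 */
        double t0 = x[i]     + s;
        double t1 = x[i + 1] + s;
        double t2 = x[i + 2] + s;
        double t3 = x[i + 3] + s;
        x[i]     = t0;
        x[i + 1] = t1;
        x[i + 2] = t2;
        x[i + 3] = t3;
    }
}
```

Reusing one temporary for all four copies would reintroduce name dependences, which is exactly the register-pressure point the slide makes.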

Unroll Loop Four Times (straightforward way)


 1 Loop: L.D    F0,0(R1)
 3       ADD.D  F4,F0,F2      ;1 cycle stall after L.D
 6       S.D    0(R1),F4      ;2 cycles stall after ADD.D; drop DADDUI & BNEZ
 7       L.D    F6,-8(R1)
 9       ADD.D  F8,F6,F2
12       S.D    -8(R1),F8     ;drop DADDUI & BNEZ
13       L.D    F10,-16(R1)
15       ADD.D  F12,F10,F2
18       S.D    -16(R1),F12   ;drop DADDUI & BNEZ
19       L.D    F14,-24(R1)
21       ADD.D  F16,F14,F2
24       S.D    -24(R1),F16
25       DADDUI R1,R1,#-32    ;alter to 4*8
26       BNEZ   R1,LOOP

27 clock cycles, or 6.75 per iteration (assumes the iteration count is a multiple of 4).

Rewrite the loop to minimize stalls?

Unrolled Loop Detail


We do not usually know the upper bound of the loop. Suppose it is n, and we would like to unroll the loop to make k copies of the body. Instead of a single unrolled loop, we generate a pair of consecutive loops:
- the 1st executes (n mod k) times and has a body that is the original loop
- the 2nd is the unrolled body surrounded by an outer loop that iterates (n/k) times

For large values of n, most of the execution time will be spent in the unrolled loop.
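The pair-of-loops transformation can be sketched in C (a minimal sketch with k = 4; the function name is hypothetical):

```c
#include <assert.h>

/* Unrolling when n is not known to be a multiple of k = 4:
   a cleanup loop runs n mod 4 times, then the unrolled loop
   handles the remaining n - (n mod 4) iterations. */
void add_scalar_any_n(double *x, double s, int n) {
    int i = 0;
    for (; i < n % 4; i++)        /* 1st loop: n mod k iterations */
        x[i] += s;
    for (; i < n; i += 4) {       /* 2nd loop: unrolled body, n/k times */
        x[i]     += s;
        x[i + 1] += s;
        x[i + 2] += s;
        x[i + 3] += s;
    }
}
```

With n = 7, the cleanup loop handles 3 elements and the unrolled loop handles the remaining 4; as n grows, the unrolled loop dominates, matching the slide's point.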


Unrolled Loop That Minimizes Stalls


 1 Loop: L.D    F0,0(R1)
 2       L.D    F6,-8(R1)
 3       L.D    F10,-16(R1)
 4       L.D    F14,-24(R1)
 5       ADD.D  F4,F0,F2
 6       ADD.D  F8,F6,F2
 7       ADD.D  F12,F10,F2
 8       ADD.D  F16,F14,F2
 9       S.D    0(R1),F4
10       S.D    -8(R1),F8
11       S.D    -16(R1),F12
12       DSUBUI R1,R1,#32
13       S.D    8(R1),F16   ;8-32 = -24
14       BNEZ   R1,LOOP

14 clock cycles, or 3.5 per iteration


5 Loop Unrolling Decisions


Requires understanding how one instruction depends on another and how the instructions can be changed or reordered given the dependences:
1. Determine that unrolling the loop would be useful by finding that the loop iterations are independent (except for loop maintenance code)
2. Use different registers to avoid unnecessary constraints forced by using the same registers for different computations
3. Eliminate the extra test and branch instructions and adjust the loop termination and iteration code
4. Determine that the loads and stores in the unrolled loop can be interchanged by observing that loads and stores from different iterations are independent
   - this transformation requires analyzing memory addresses and finding that they do not refer to the same address
5. Schedule the code, preserving any dependences needed to yield the same result as the original code

3 Limits to Loop Unrolling


1. Decrease in the amount of overhead amortized with each extra unrolling (Amdahl's Law)
2. Growth in code size
   - for larger loops, the concern is that it increases the instruction cache miss rate
3. Register pressure: potential shortfall in registers created by aggressive unrolling and scheduling
   - if it is not possible to allocate all live values to registers, the code may lose some or all of the advantage of unrolling

Loop unrolling reduces the impact of branches on the pipeline; another way is branch prediction.


Static Branch Prediction


To reorder code around branches, we need to predict branches statically at compile time. The simplest scheme is to predict a branch as taken.
- This scheme has an average misprediction rate equal to the untaken branch frequency, which for the SPEC programs is 34%. Unfortunately, the misprediction rate ranges from not very accurate (59%) to highly accurate (9%).
A more accurate scheme predicts branches using profile information collected from earlier runs, modifying the prediction based on the last run.

[Figure: misprediction rate of profile-based static prediction across SPEC92 benchmarks, roughly 4% to 22% per benchmark; the integer programs mispredict more often than the floating-point programs.]

Static Branch Prediction


The key observation that makes this worthwhile is that the behavior of branches is often bimodally distributed; that is, an individual branch is often highly biased toward taken or untaken. The same input data were used for runs and for collecting the profile; other studies have shown that changing the input so that the profile is for a different run leads to only a small change in the accuracy of profile-based prediction. The effectiveness of any branch prediction scheme depends both on the accuracy of the scheme and the frequency of conditional branches, which vary in SPEC from 3% to 24%. The fact that the misprediction rate for the integer programs is higher and that such programs typically have a higher branch frequency is a major limitation for static branch prediction.


Dynamic Branch Prediction


Why does prediction work?
Underlying algorithm has regularities Data that is being operated on has regularities Instruction sequence has redundancies that are artifacts of way that humans/compilers think about problems

Is dynamic branch prediction better than static branch prediction?
- It seems to be: there are a small number of important branches in programs which have dynamic behavior.


Dynamic Branch Prediction


Performance = f(accuracy, cost of misprediction).
The simplest dynamic branch-prediction scheme is a branch-prediction buffer or branch history table (BHT):
- a small memory indexed by the lower portion of the address of the branch instruction, containing a bit that says whether the branch was recently taken or not
- this is the simplest sort of buffer: it has no tags and is useful only to reduce the branch delay when it is longer than the time to compute the possible target PCs
Problem: for a loop branch, a 1-bit BHT causes two mispredictions per loop execution (the average loop runs about 9 iterations before exit):
- at the end of the loop, when it exits instead of looping as before
- the first time through the loop on the next execution, when it predicts exit instead of looping


Dynamic Branch Prediction


Solution: a 2-bit scheme where the prediction changes only after two consecutive mispredictions.

[State diagram: states 11 and 10 predict taken; states 01 and 00 predict not taken. A taken branch (T) moves the counter toward 11 ("green: go, taken"); a not-taken branch (NT) moves it toward 00 ("red: stop, not taken").]

This adds hysteresis to the decision-making process.
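A minimal C sketch of the 2-bit saturating counter described above (type and function names are hypothetical):

```c
#include <assert.h>

/* States 0,1 predict not taken; states 2,3 predict taken.
   The prediction flips only after two consecutive mispredictions. */
typedef struct { int state; } Bht2;   /* state in 0..3 */

int bht2_predict(const Bht2 *b) { return b->state >= 2; }  /* 1 = taken */

void bht2_update(Bht2 *b, int taken) {
    if (taken) { if (b->state < 3) b->state++; }
    else       { if (b->state > 0) b->state--; }
}

/* Simulate one execution of a loop branch: taken (iters-1) times,
   then not taken at the exit. Returns the misprediction count. */
int loop_mispredicts(Bht2 *b, int iters) {
    int miss = 0;
    for (int i = 0; i < iters; i++) {
        int taken = (i < iters - 1);
        if (bht2_predict(b) != taken) miss++;
        bht2_update(b, taken);
    }
    return miss;
}
```

Starting from the strongly-taken state, a 9-iteration loop costs one misprediction per execution (the exit), where a 1-bit predictor would cost two (the exit plus the first iteration of the next execution).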

BHT Accuracy
Mispredict because either:
- wrong guess for that branch, or
- got the branch history of the wrong branch when indexing the table (aliasing)

4096-entry table:


Correlated Branch Prediction


Branch predictors that use the behavior of other branches to make a prediction are called correlating predictors or two-level predictors. Existing correlating predictors add information about the behavior of the most recent branches to decide how to predict a given branch.
Idea: record the m most recently executed branches as taken or not taken, and use that pattern to select the proper n-bit branch history table.
In general, an (m,n) predictor records the last m branches to select between 2^m history tables, each with n-bit counters.
- Thus, the old 2-bit BHT is a (0,2) predictor.

Global branch history: an m-bit shift register keeping the T/NT status of the last m branches. Each entry in the table has 2^m n-bit predictors.


Correlating Branches
(2,2) predictor
- The behavior of the 2 most recent branches selects between four predictions for the next branch, updating just that prediction.

[Figure: the branch address selects a row of four 2-bit per-branch predictors; a 2-bit global branch history selects which of the four supplies the prediction.]
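A minimal C sketch of the (2,2) idea for a single branch entry (names are hypothetical): a 2-bit global history selects one of 2^2 = 4 independent 2-bit counters.

```c
#include <assert.h>

typedef struct {
    int history;      /* last 2 branch outcomes, packed into 0..3 */
    int counter[4];   /* one 2-bit saturating counter per pattern */
} Corr22;

int corr_predict(const Corr22 *p) { return p->counter[p->history] >= 2; }

void corr_update(Corr22 *p, int taken) {
    int *c = &p->counter[p->history];
    if (taken) { if (*c < 3) (*c)++; } else { if (*c > 0) (*c)--; }
    p->history = ((p->history << 1) | taken) & 3;  /* shift in outcome */
}

/* An alternating branch (N,T,N,T,...) defeats a plain 2-bit counter,
   but is perfectly predictable once each history pattern is learned. */
int alternating_misses_after_warmup(void) {
    Corr22 p = { 0, {0, 0, 0, 0} };
    int miss = 0;
    for (int i = 0; i < 100; i++) {
        int taken = i & 1;
        int guess = corr_predict(&p);
        if (i >= 20 && guess != taken) miss++;   /* count after warmup */
        corr_update(&p, taken);
    }
    return miss;
}
```

On the same alternating pattern, a history-less 2-bit counter keeps oscillating and mispredicts roughly half the time; the correlating version reaches zero misses after a short warmup.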


Accuracy of Different Schemes


[Figure: frequency of mispredictions on SPEC89 benchmarks (nasa7, matrix300, tomcatv, doducd, spice, fpppp, gcc, espresso, eqntott, li) for three schemes: a 4,096-entry 2-bit BHT, a 2-bit BHT with unlimited entries, and a 1,024-entry (2,2) BHT. Rates range from 0-1% on the FP benchmarks up to about 18% on eqntott for the 2-bit schemes; the 1,024-entry (2,2) predictor outperforms the 2-bit BHT even with an unlimited number of entries.]

Tournament Predictors
Multilevel branch predictor Use n-bit saturating counter to choose between predictors Usual choice between global and local predictors


Tournament Predictors
A tournament predictor using, say, 4K 2-bit counters indexed by the local branch address chooses between:
Global predictor
- 4K entries, indexed by the history of the last 12 branches (2^12 = 4K)
- each entry is a standard 2-bit predictor
Local predictor
- local history table: 1,024 10-bit entries recording the last 10 outcomes of each branch, indexed by branch address
- the pattern of the last 10 occurrences of that particular branch is used to index a table of 1K entries with 3-bit saturating counters


Comparing Predictors (Fig. 2.8)


Advantage of tournament predictor is ability to select the right predictor for a particular branch
Particularly crucial for integer benchmarks. A typical tournament predictor will select the global predictor almost 40% of the time for the SPEC integer benchmarks and less than 15% of the time for the SPEC FP benchmarks


Pentium 4 Misprediction Rate (per 1000 instructions, not per branch)


- ~6% misprediction rate per branch on SPECint (about 19% of INT instructions are branches)
- ~2% misprediction rate per branch on SPECfp (about 5% of FP instructions are branches)

[Figure: branch mispredictions per 1000 instructions on SPECint2000 (164.gzip, 175.vpr, 176.gcc, 181.mcf, 186.crafty) and SPECfp2000 (168.wupwise, 171.swim, 172.mgrid, 173.applu, 177.mesa); the integer benchmarks cluster around 9-13 mispredictions per 1000 instructions, the FP benchmarks far fewer.]

Branch Target Buffers (BTB)

Branch target calculation is costly and stalls instruction fetch. A BTB caches branch PCs and their targets, the same way a cache stores addresses:
- the PC of a branch is sent to the BTB
- when a match is found, the corresponding predicted PC is returned
- if the branch was predicted taken, instruction fetch continues at the returned predicted PC

Dynamic Branch Prediction Summary


Prediction is becoming an important part of execution.
- Branch history table: 2 bits for loop accuracy
- Correlation: recently executed branches are correlated with the next branch
  - either different branches (GA), or different executions of the same branch (PA)
- Tournament predictors take this insight to the next level by using multiple predictors
  - usually one based on global information and one based on local information, combined with a selector
  - in 2006, tournament predictors using about 30K bits were in processors like the Power5 and Pentium 4
- Branch target buffer: includes the branch address & prediction



Outline
- ILP
- Compiler techniques to increase ILP: loop unrolling
- Static branch prediction
- Dynamic branch prediction
- Overcoming data hazards with dynamic scheduling (start)
- Tomasulo algorithm
- Conclusion


Advantages of Dynamic Scheduling


Dynamic scheduling: hardware rearranges the instruction execution to reduce stalls while maintaining data flow and exception behavior.
- It handles cases where dependences are unknown at compile time
  - it allows the processor to tolerate unpredictable delays such as cache misses, by executing other code while waiting for the miss to resolve
- It allows code compiled for one pipeline to run efficiently on a different pipeline
- It simplifies the compiler
- Hardware speculation, a technique with significant performance advantages, builds on dynamic scheduling (next lecture)


HW Schemes: Instruction Parallelism


Key idea: allow instructions behind a stall to proceed:
  DIVD F0,F2,F4
  ADDD F10,F0,F8
  SUBD F12,F8,F14
This enables out-of-order execution and allows out-of-order completion (e.g., SUBD can complete while ADDD waits for the DIVD result).
In a dynamically scheduled pipeline, all instructions still pass through the issue stage in order (in-order issue).
We will distinguish when an instruction begins execution and when it completes execution; between those 2 times, the instruction is in execution.
Note: dynamic execution creates WAR and WAW hazards and makes exceptions harder.


Dynamic Scheduling Step 1


The simple pipeline had 1 stage to check both structural and data hazards: Instruction Decode (ID), also called Instruction Issue. Split the ID pipe stage of the simple 5-stage pipeline into 2 stages:
- Issue: decode instructions, check for structural hazards
- Read operands: wait until there are no data hazards, then read operands


A Dynamic Algorithm: Tomasulo's


Developed for the IBM 360/91 (before caches!), with its long memory latency.
Goal: high performance without special compilers.
The small number of floating-point registers (4 in the 360) prevented interesting compiler scheduling of operations.
- This led Tomasulo to figure out how to get more effective registers via renaming in hardware!
Why study a 1966 computer? The descendants of this design have flourished:
- Alpha 21264, Pentium 4, AMD Opteron, Power 5, ...


Tomasulo Algorithm
Control & buffers are distributed with the function units (FUs):
- the FU buffers are called reservation stations; they hold pending operands
Registers in instructions are replaced by values or by pointers to reservation stations (RS); this is called register renaming:
- renaming avoids WAR and WAW hazards
- there are more reservation stations than registers, so the hardware can do optimizations compilers can't
Results go to the FUs from the RSs, not through the registers, over a Common Data Bus that broadcasts results to all FUs:
- RAW hazards are avoided by executing an instruction only when its operands are available
Loads and stores are treated as FUs with RSs as well.
Integer instructions can go past branches (predict taken), allowing FP ops beyond the basic block in the FP queue.

Tomasulo Organization
[Figure: Tomasulo organization. The FP op queue issues instructions; load buffers (Load1-Load6) bring data from memory; the FP registers; reservation stations Add1-Add3 feed the FP adders and Mult1-Mult2 feed the FP multipliers; store buffers send results to memory; the Common Data Bus (CDB) broadcasts results to the registers and all reservation stations.]

Reservation Station Components


Op: the operation to perform in the unit (e.g., + or -)
Vj, Vk: the values of the source operands
- store buffers have a V field for the result to be stored
Qj, Qk: the reservation stations producing the source operands (the values to be written)
- note: Qj,Qk = 0 => ready
- store buffers have only Qi, for the RS producing the result
Busy: indicates that the reservation station or its FU is busy
Register result status: indicates which functional unit will write each register, if one exists; blank when there are no pending instructions that will write that register.
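The fields above can be sketched as a C struct (a minimal sketch; the slide's convention that Qj,Qk = 0 means "value ready" is kept, and the names are illustrative):

```c
#include <assert.h>

/* One reservation station entry, following the fields above. */
typedef struct {
    int    busy;      /* station (or its FU) is in use              */
    char   op;        /* operation to perform, e.g. '+' or '-'      */
    double vj, vk;    /* source operand values, once available      */
    int    qj, qk;    /* numbers of the producing RSs; 0 => ready   */
} ReservationStation;

/* An instruction may begin execution only when both operands are
   ready, i.e. no reservation station is still due to produce them. */
int rs_ready(const ReservationStation *rs) {
    return rs->busy && rs->qj == 0 && rs->qk == 0;
}
```

A station with a nonzero Qj or Qk instead watches the Common Data Bus for the matching result, which is the Execute-stage rule on the next slide.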

Three Stages of Tomasulo Algorithm


1. Issue: get an instruction from the FP op queue
- if a reservation station is free (no structural hazard), control issues the instruction & sends operands (renames registers)
2. Execute: operate on operands (EX)
- when both operands are ready, execute; if not ready, watch the Common Data Bus for the result
3. Write result: finish execution (WB)
- write the result on the Common Data Bus to all awaiting units; mark the reservation station available

Normal data bus: data + destination ("go to" bus).
Common data bus: data + source ("come from" bus):
- 64 bits of data + 4 bits of functional-unit source address
- a unit writes the value in if the source matches the functional unit it expects (the one producing the result)
- the bus does the broadcast
Example speeds: 3 clocks for FP +/-; 10 for *; 40 clocks for /.



Tomasulo Example

Instruction stream:              Issue  ExecComp  WriteResult
  LD    F6,34(R2)
  LD    F2,45(R3)
  MULTD F0,F2,F4
  SUBD  F8,F6,F2
  DIVD  F10,F0,F6
  ADDD  F6,F8,F2

Load buffers (3):  Busy  Address
  Load1            No
  Load2            No
  Load3            No

Reservation stations (3 FP adder RS, 2 FP mult RS; Time = FU count-down):
  Time  Name   Busy  Op  Vj  Vk  Qj  Qk
        Add1   No
        Add2   No
        Add3   No
        Mult1  No
        Mult2  No

Register result status (Clock = 0; Clock is the clock cycle counter):
  F0  F2  F4  F6  F8  F10  F12  ...  F30
  (all blank: no pending writes)

Tomasulo Example Cycle 1

Instruction status:              Issue  ExecComp  WriteResult
  LD    F6,34(R2)                  1
  LD    F2,45(R3)
  MULTD F0,F2,F4
  SUBD  F8,F6,F2
  DIVD  F10,F0,F6
  ADDD  F6,F8,F2

Load buffers:  Busy  Address
  Load1        Yes   34+R2
  Load2        No
  Load3        No

Reservation stations: Add1-Add3, Mult1-Mult2 all not busy.

Register result status (Clock = 1):
  F0  F2  F4  F6     F8  F10  F12  ...  F30
              Load1

Tomasulo Example Cycle 2

Instruction status:              Issue  ExecComp  WriteResult
  LD    F6,34(R2)                  1
  LD    F2,45(R3)                  2
  MULTD F0,F2,F4
  SUBD  F8,F6,F2
  DIVD  F10,F0,F6
  ADDD  F6,F8,F2

Load buffers:  Busy  Address
  Load1        Yes   34+R2
  Load2        Yes   45+R3
  Load3        No

Reservation stations: Add1-Add3, Mult1-Mult2 all not busy.

Register result status (Clock = 2):
  F0  F2     F4  F6     F8  F10  F12  ...  F30
      Load2      Load1

Note: we can have multiple loads outstanding.

Tomasulo Example Cycle 3

Instruction status:              Issue  ExecComp  WriteResult
  LD    F6,34(R2)                  1       3
  LD    F2,45(R3)                  2
  MULTD F0,F2,F4                   3
  SUBD  F8,F6,F2
  DIVD  F10,F0,F6
  ADDD  F6,F8,F2

Load buffers:  Busy  Address
  Load1        Yes   34+R2
  Load2        Yes   45+R3
  Load3        No

Reservation stations:
  Time  Name   Busy  Op     Vj  Vk     Qj     Qk
        Add1   No
        Add2   No
        Add3   No
        Mult1  Yes   MULTD      R(F4)  Load2
        Mult2  No

Register result status (Clock = 3):
  F0     F2     F4  F6     F8  F10  F12  ...  F30
  Mult1  Load2      Load1

Note: register names are removed ("renamed") in the reservation stations; MULTD has issued. Load1 is completing; what is waiting for Load1?

Tomasulo Example Cycle 4

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4
  DIVD  F10  F0   F6
  ADDD  F6   F8   F2

Load buffers:    Busy   Address
  Load1          No
  Load2          Yes    45+R3
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    Yes    SUBD    M(A1)                   Load2
         Add2    No
         Add3    No
         Mult1   Yes    MULTD           R(F4)   Load2
         Mult2   No

Register result status (Clock = 4):
  FU:   F0=Mult1   F2=Load2   F6=M(A1)   F8=Add1   (others empty)

Load2 completing; what is waiting for Load2?

Tomasulo Example Cycle 5

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
    2    Add1    Yes    SUBD    M(A1)   M(A2)
         Add2    No
         Add3    No
   10    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 5):
  FU:   F0=Mult1   F2=M(A2)   F6=M(A1)   F8=Add1   F10=Mult2

Timers start counting down for Add1, Mult1

Tomasulo Example Cycle 6

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
    1    Add1    Yes    SUBD    M(A1)   M(A2)
         Add2    Yes    ADDD            M(A2)   Add1
         Add3    No
    9    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 6):
  FU:   F0=Mult1   F2=M(A2)   F6=Add2   F8=Add1   F10=Mult2

Issue ADDD here despite name dependency on F6?

Tomasulo Example Cycle 7

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4        7
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
    0    Add1    Yes    SUBD    M(A1)   M(A2)
         Add2    Yes    ADDD            M(A2)   Add1
         Add3    No
    8    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 7):
  FU:   F0=Mult1   F2=M(A2)   F6=Add2   F8=Add1   F10=Mult2

Add1 (SUBD) completing; what is waiting for it?

Tomasulo Example Cycle 8

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
    2    Add2    Yes    ADDD    (M-M)   M(A2)
         Add3    No
    7    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 8):
  FU:   F0=Mult1   F2=M(A2)   F6=Add2   F8=(M-M)   F10=Mult2

Tomasulo Example Cycle 9

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
    1    Add2    Yes    ADDD    (M-M)   M(A2)
         Add3    No
    6    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 9):
  FU:   F0=Mult1   F2=M(A2)   F6=Add2   F8=(M-M)   F10=Mult2

Tomasulo Example Cycle 10

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6       10

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
    0    Add2    Yes    ADDD    (M-M)   M(A2)
         Add3    No
    5    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 10):
  FU:   F0=Mult1   F2=M(A2)   F6=Add2   F8=(M-M)   F10=Mult2

Add2 (ADDD) completing; what is waiting for it?

Tomasulo Example Cycle 11

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
    4    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 11):
  FU:   F0=Mult1   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Mult2

Write result of ADDD here? All quick instructions complete in this cycle!

Tomasulo Example Cycle 12

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
    3    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 12):
  FU:   F0=Mult1   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Mult2

Tomasulo Example Cycle 13

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
    2    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 13):
  FU:   F0=Mult1   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Mult2

Tomasulo Example Cycle 14

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
    1    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 14):
  FU:   F0=Mult1   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Mult2

Tomasulo Example Cycle 15

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3       15
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
    0    Mult1   Yes    MULTD   M(A2)   R(F4)
         Mult2   Yes    DIVD            M(A1)   Mult1

Register result status (Clock = 15):
  FU:   F0=Mult1   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Mult2

Mult1 (MULTD) completing; what is waiting for it?

Tomasulo Example Cycle 16

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3       15          16
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
         Mult1   No
   40    Mult2   Yes    DIVD    M*F4    M(A1)

Register result status (Clock = 16):
  FU:   F0=M*F4   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Mult2

Just waiting for Mult2 (DIVD) to complete

Faster than light computation (skip a couple of cycles)

Tomasulo Example Cycle 55

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3       15          16
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
         Mult1   No
    1    Mult2   Yes    DIVD    M*F4    M(A1)

Register result status (Clock = 55):
  FU:   F0=M*F4   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Mult2

Tomasulo Example Cycle 56

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3       15          16
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5       56
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
         Mult1   No
    0    Mult2   Yes    DIVD    M*F4    M(A1)

Register result status (Clock = 56):
  FU:   F0=M*F4   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Mult2

Mult2 (DIVD) is completing; what is waiting for it?

Tomasulo Example Cycle 57

Instruction status:                     Issue   ExecComp   WriteResult
  LD    F6   34+  R2                      1        3           4
  LD    F2   45+  R3                      2        4           5
  MULTD F0   F2   F4                      3       15          16
  SUBD  F8   F6   F2                      4        7           8
  DIVD  F10  F0   F6                      5       56          57
  ADDD  F6   F8   F2                      6       10          11

Load buffers:    Busy   Address
  Load1          No
  Load2          No
  Load3          No

Reservation stations:
  Time   Name    Busy   Op      Vj      Vk      Qj      Qk
         Add1    No
         Add2    No
         Add3    No
         Mult1   No
         Mult2   Yes    DIVD    M*F4    M(A1)

Register result status (Clock = 57):
  FU:   F0=M*F4   F2=M(A2)   F6=(M-M)+M   F8=(M-M)   F10=Result

Once again: in-order issue, out-of-order execution, and out-of-order completion.

Why can Tomasulo overlap iterations of loops?

Register renaming
- Multiple iterations use different physical destinations for their registers (dynamic loop unrolling).

Reservation stations
- Permit instruction issue to advance past integer control flow operations.
- Also buffer old values of registers, totally avoiding the WAR stall.

Other perspective: Tomasulo builds the data flow dependency graph on the fly.
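The renaming step can be made concrete with a short sketch of the issue logic: each source register is replaced either by a value (Vj/Vk) or by the tag of the station that will produce it (Qj/Qk), and the destination register is pointed at the issuing station. This is a hedged illustration, not the lecture's code; all names (`issue`, `register_status`, the dictionaries) are hypothetical.

```python
# Hypothetical sketch of Tomasulo's issue-time renaming.
register_status = {"F2": "Load2", "F4": None}   # F2 still being loaded by Load2
register_file = {"F2": 0.0, "F4": 4.0}          # R(F4) is available

def issue(rs, op, src1, src2, dest, register_status, register_file):
    rs["busy"], rs["op"] = True, op
    # Source 1: take the value if ready, else remember the producing station
    if register_status[src1] is None:
        rs["vj"], rs["qj"] = register_file[src1], None
    else:
        rs["vj"], rs["qj"] = None, register_status[src1]
    # Source 2: same rule
    if register_status[src2] is None:
        rs["vk"], rs["qk"] = register_file[src2], None
    else:
        rs["vk"], rs["qk"] = None, register_status[src2]
    # Destination renamed to this station: a later writer of `dest` simply
    # overwrites this tag, so WAW and WAR on the register name cannot stall us
    register_status[dest] = rs["name"]

mult1 = {"name": "Mult1"}
issue(mult1, "MULTD", "F2", "F4", "F0", register_status, register_file)
```

After this call the station holds Qj=Load2 and Vk=R(F4), and F0 points at Mult1, mirroring the Cycle 3 slide.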

Tomasulo's scheme offers 2 major advantages

1. Distribution of the hazard detection logic
- Distributed reservation stations and the CDB.
- If multiple instructions are waiting on a single result, and each already has its other operand, then the instructions can be released simultaneously by a broadcast on the CDB.
- If a centralized register file were used, the units would have to read their results from the registers when register buses are available.

2. Elimination of stalls for WAW and WAR hazards
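The first advantage can be sketched as a CDB broadcast: one (tag, value) pair reaches every waiting reservation station in the same cycle, releasing all of them at once. This is an illustrative sketch under assumed data structures (the `waiting` list and `broadcast` function are hypothetical, not from the lecture).

```python
# Hypothetical sketch of a Common Data Bus broadcast.
# Two stations both wait on the result tagged "Add1".
waiting = [
    {"name": "Add2",  "qj": "Add1", "vj": None, "qk": None,   "vk": 2.0},
    {"name": "Mult2", "qj": None,   "vj": 1.0,  "qk": "Add1", "vk": None},
]

def broadcast(tag, value, stations):
    """Deliver (tag, value) to every station; return those now ready."""
    released = []
    for rs in stations:
        if rs["qj"] == tag:
            rs["vj"], rs["qj"] = value, None
        if rs["qk"] == tag:
            rs["vk"], rs["qk"] = value, None
        if rs["qj"] is None and rs["qk"] is None:
            released.append(rs["name"])
    return released

ready = broadcast("Add1", 7.0, waiting)  # one broadcast frees both stations
```

With a centralized register file, each consumer would instead have to wait its turn for a register bus; the broadcast is what lets several dependents wake up in a single cycle.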

Tomasulo Drawbacks

Complexity
- Delays of the 360/91, MIPS 10000, Alpha 21264, IBM PPC 620 (in CA:AQA 2/e, but not in silicon!).
- Many associative stores (CDB) at high speed.

Performance limited by the Common Data Bus
- Each CDB must go to multiple functional units: high capacitance, high wiring density.
- Number of functional units that can complete per cycle is limited to one!
- Multiple CDBs: more FU logic for parallel associative stores.

Non-precise interrupts!
- We will address this later.

And In Conclusion #1
- Leverage implicit parallelism for performance: instruction level parallelism.
- Loop unrolling by the compiler to increase ILP.
- Branch prediction to increase ILP.
- Dynamic HW exploiting ILP:
  - Works when we can't know the dependences at compile time.
  - Can hide L1 cache misses.
  - Code for one machine runs well on another.

And In Conclusion #2
- Reservation stations: renaming to a larger set of registers + buffering of source operands.
  - Prevents registers from becoming the bottleneck.
  - Avoids WAR, WAW hazards.
  - Allows loop unrolling in HW.
- Not limited to basic blocks (the integer unit gets ahead, beyond branches).
- Helps with cache misses as well.
- Lasting contributions:
  - Dynamic scheduling
  - Register renaming
  - Load/store disambiguation
- 360/91 descendants include the Intel Pentium 4, IBM Power 5, and AMD Athlon/Opteron.
