
MODULE III

IMPLEMENTATION

In the first part of this chapter, we present the different possibilities for the
use of reconfigurable devices in a system. According to the way those devices
are used, the target applications and the systems in which they are integrated,
different terminologies will be defined for the systems. We follow up by
presenting the design flow, i.e. the steps required to implement an application on
those devices. Because the programming of coarse-grained reconfigurable
devices is very similar to that of processors, we will not focus on the variety of
tools that exist for this purpose. The implementation on FPGA devices is rather
unusual in two points. First, the programming is not aimed at generating a set of
instructions to be executed sequentially on a given processor. We seek the
generation of the hardware components that will be mapped at different times onto
the available resources. According to the application, the resources needed for
its computation will be built as components to be downloaded
to the device at run-time. The generation of such components is called
logic synthesis. It is an optimization process whose goal is to minimize some
cost functions, aimed at producing, for instance, the fastest hardware with the
smallest amount of resources and the smallest power consumption. The mapping
of the application to the FPGA resources is a step of the logic synthesis
called technology mapping. The second unusual point with FPGAs is that the
technology mapping targets look-up tables rather than NAND gates, as is the
case with many digital devices. In the last part of the chapter, we will therefore
briefly present the steps required in logic synthesis and focus in more detail on
technology mapping for FPGAs.

1. Integration
Reconfigurable devices are usually used in three different ways:

Rapid prototyping: In this case, the reconfigurable device is used as an
emulator for another digital device, usually an ASIC. The emulation process
allows the functional correctness of the ASIC device to be tested before
production, sometimes under real operating and environmental conditions.
The reconfigurable device is only reconfigured to emulate a new
implementation of the ASIC device.
Non-frequently reconfigurable systems: The reconfigurable device is
integrated in a running system where it is used as an application-specific
processor. These systems are usually stand-alone systems. The
reconfiguration is used for test and initialization at start-up, and for
upgrading purposes. The device configuration is usually stored in an
EEPROM or Flash memory from which it is downloaded at start-up to reconfigure
the device. No configuration happens during operation.
Frequently reconfigurable systems: This third category comprises systems
that are frequently reconfigured. Those systems are usually coupled
with a host processor, which is used to reconfigure the device and
control the complete system.
With the increasing size and speed of reconfigurable processors¹, it is possible to
implement many large modules on a reconfigurable device at the same time.
Moreover, for some reconfigurable devices, only a part of the device can be
configured while the rest continues to operate. This partial reconfiguration
capability enables many functions to be temporarily implemented on the device.
Depending on the time at which the reconfiguration sequences are defined, the
computation and configuration flow on a reconfigurable device can be classified
in two categories:

Compile-time reconfiguration: In this case, the computation and
configuration sequences, as well as the data exchange, are defined at
compile-time and never change during a computation. This approach is
more interesting for devices that can only be fully reconfigured.
However, it can also be applied to partially reconfigurable devices that are
logically or physically partitioned in a set of reconfigurable bins.
Run-time reconfiguration: The computation and configuration sequences
are not known at compile-time. Requests to implement a given task become known
at run-time and must be handled dynamically. The reconfiguration process
exchanges parts of the device to adapt the system to changing
operational and environmental conditions. Run-time reconfiguration is a
difficult process that must handle side effects like the defragmentation of

1 Today, leading-edge reconfigurable devices contain millions of gates running at a few hundred MHz.

the device and the communication between newly placed modules. The
management of the reconfigurable device is usually done by a scheduler and a
placer that can be implemented as part of an operating system running on
a processor (figure 3.1). The processor can reside either inside or outside
the reconfigurable chip.

Figure 3.1. Architecture of a run-time reconfigurable system

The scheduler manages the tasks and decides when a task should be
executed. The tasks, which are available as configuration data in a database,
are characterized by their bounding box and their run-time. The
bounding box defines the area that a task occupies on the device. The
management of task execution at run-time is therefore a temporal
placement problem that will be studied in detail in chapter 5. The
scheduler determines which task should be executed on the RPU and then
gives the task to the placer, which will try to place it on the device, i.e.
allocate a set of resources for the implementation of that task. If the
placer is not able to find a site for the new task, then the task is sent back
to the scheduler, which can then decide to send it later and to send another
task to the placer. In this case, we say that the task is rejected.

No matter whether we are dealing with a compile-time or run-time reconfigurable
system, the computation and reconfiguration flow is usually the one shown in
Figure 3.2. The host CPU is used for device configuration and data transfer.
Usually, the reconfigurable device and the host processor communicate via a

bus that is used for the data transfer between the processor and the
reconfigurable device. In supercomputing, systems are built from a high-speed
processor from Intel or AMD and an FPGA board attached to a bus like the PCI.
Several systems (Cray XD1, SGI, Nallatech, SRC Computers) in which FPGAs
cohabit with processors and communicate over a dedicated high-speed bus are
commercially available. In embedded systems, the processors are more and more
integrated in the reconfigurable devices and are heavily used for management
purposes rather than for computation. The RPU acts like a coprocessor with
varying instruction sets, accessible by the processor through function calls.
The computation flow can be summarized as shown in algorithm 2.

Figure 3.2. A CPU-RPU configuration and computation step

At the beginning of a computation, the host processor configures the
reconfigurable device². Then it downloads the segment of the data to be
processed by the RPU³ and gives the start signal to the RPU. The host and the
RPU can then process in parallel on their segments of data. At the end of its
computation, the host reads the finish signal of the RPU. At this point, the
data (computation result) can be collected from the RPU memory by the
processor. The RPU

2 It is already possible to trigger the reconfiguration of the Xilinx FPGAs from within the device using
their ICAP port. This allows a self-reconfiguration of a system from a processor running within the device.
3 The data might also be collected by the RPU itself from an external source.

Algorithm 2 CPU-RPU configuration and computation steps


1: Start
2: Initialize the RPU
3: while (1) do
4: Configure the RPU to implement a new task
5: Download Data for RPU computation into RPU-memory
6: Compute in parallel with the RPU if necessary
7: Upload the data computed by the RPU from the RPU-memory
8: end while
9: Stop

can also send the data directly to an external sink. In the computation flow
presented above, the RPU is configured only once. However, in frequently
reconfigured systems, several configurations might be done. If the RPU has
to be configured more than once, then the body of the while loop is simply
run once for each reconfiguration.

The design flow of a dynamically reconfigurable system is primarily a
hardware/software partitioning process in which:
The part of the code to be executed on the host processor is determined.
This part is usually control-dominated.
The parts of the code to be executed on the reconfigurable device are
identified. Those are usually data-dominated parts for which efficient
dataflow computation modules are required.
The interface between the processor and the reconfigurable device is
implemented.
The implementation of the control part on the CPU is done in software
using the common development languages and tools. It is usually a C
program with system calls to access the device for reconfiguration. We will
not focus on the software development details in this book, since enough
material covering software development exists.
No matter whether the device is partially reconfigurable or not, the development
of the modules to be downloaded later on the reconfigurable device follows
the same approach. The goal is to generate a dataflow computing block that
best matches the inherent parallelism of the part of the application to be
implemented as a module.
Almost all manufacturers of coarse-grained reconfigurable devices provide
proprietary languages and tools to implement such modules. It is usually a C-like
language with an appropriate scheduler that defines the functions of the

processing elements as well as their interconnections as a function of time.
Table 3.1 gives an overview of the languages and tools used by some
manufacturers.

Manufacturer   Language           Tool                 Description

PACT           NML (structural)   XPP-VC               C into NML and configuration
Quicksilver    SilverC            InSpire SDK          SilverC into configuration
NEC-DRP        C                  DRP Compiler         C into configuration
picoChip       C                  picoChip Toolchain   C into configuration
IpFlex         C/MATLAB           DAP/DNA FW           C/MATLAB into configuration

Table 3.1. Language and tool overview for coarse-grained RPUs

We will not further consider the programming of coarse-grained devices,
but rather refer to the corresponding manufacturers for more details on their
implementation languages and tools. In the next section, we briefly
present the standard design flow for implementing digital circuits like
ASICs. We will focus in more detail on technology mapping for FPGAs,
because it differs from that of other digital devices.

2. FPGA Design Flow


The standard implementation methodology for FPGA designs is borrowed
from the ASIC design flow. The usual steps are presented in Figure 3.3.
Note that those steps concern only the modules of the application that have
been identified by the hardware/software co-design process for execution
on the FPGA. The software part is implemented using standard software
development approaches, which are well covered in several textbooks.

2.1 Design Entry


The description of the function is made using either a schematic editor, a
hardware description language (HDL), or a finite state machine (FSM) editor. A
schematic description is made by selecting components from a given library and
connecting them together to build the function circuitry. This process has the
advantage of providing a visual environment that facilitates a direct mapping of
the design functions to selected computing blocks. The final circuit is built in a
structural way. However, designs with a very large number of functions are not
easy to manage graphically. Instead, a hardware description language (HDL)
may be used to capture the design either in a structural or in a behavioral way.
Besides VHDL and Verilog, which are the most established HDLs, several C-like
languages, mostly tied to a single compiler manufacturer, exist to describe
hardware. Here we can cite the languages Handel-C [125] and ImpulseC [175]
and, to some extent, SystemC [170].

Figure 3.3. The FPGA design flow

2.2 Functional Simulation


After the design entry step, the designer can simulate the design to check the
correctness of its functionality. This is done by providing test patterns to the
inputs of the design and observing the outputs. The simulation is done in
software by tools that emulate the behavior of the components used in the
design. During the simulation, the inputs and outputs of the design are usually
shown on a graphical interface which depicts the signal evolution over time.

2.3 Logic Synthesis


After the design description and the functional simulation, the design can
be compiled and optimized. It is first translated into a set of Boolean
equations. Technology mapping is then used to implement the functions with the
available modules in the function library of the target architecture. In the case
of FPGAs, this step is called LUT-based technology mapping, because LUTs are
the modules used in the FPGA to implement the Boolean operators. The
result of the logic synthesis is called a netlist. A netlist describes the
modules used to implement the functions as well as their interconnections.
There exist different netlist formats to ease the exchange of data between
different tools; the best known is the Electronic Design Interchange Format
(EDIF). Some FPGA manufacturers provide proprietary formats. This is the
case with the Xilinx Netlist Format (XNF) for the Xilinx FPGAs.

2.4 Place and Route


For the netlist generated in the logic synthesis process, operators (LUTs,
flip-flops, multiplexers, etc.) must be placed on the FPGA and
connected together via routing. Those two steps are normally achieved by
CAD tools provided by the FPGA vendors. After the placement and routing
of a netlist, the CAD tools generate a file called a bitstream. A bitstream
provides the description of all the bits used to configure the LUTs, the
interconnect matrices, the state of the multiplexers and the I/Os of the FPGA.
The full and partial bitstreams can now be stored in a database to be downloaded
later according to the paradigms described in section 1.

2.5 Design Tools


The design entry, the functional simulation and the logic synthesis are done
using CAD tools from Xilinx, Synopsys, Synplicity, Cadence, Altera and
Mentor Graphics. The place and route as well as the generation of the
configuration data are done by the corresponding vendor tools. Table 3.2
provides some information on the tool capabilities of some vendors.

Manufacturer   Tool            Description

Synopsys       FPGA Compiler   Synthesis
Mentor         FPGA            Synthesis, place and route
Synplicity     Synplify        Synthesis, place and route
Xilinx         ISE             Synthesis, place and route (only Xilinx products)
Altera         Quartus II      Synthesis, place and route (only Altera products)
Actel          Libero          Synthesis, place and route (only Actel products)
Atmel          Figaro          Synthesis, place and route (only Atmel products)

Table 3.2. Overview of FPGA manufacturers and tool providers

The logic synthesis step is an important part of FPGA design. It is done by
tools which solve a number of optimization problems on the path
from the design entry to the generation of the configuration data. In the next
section, we will take a look inside those tools in order to understand how
they work. Since the technology mapping process for FPGAs differs from that
for ASICs, we will pay more attention to this step and present in detail some
of the technology mapping algorithms developed for FPGA devices. Each of
the algorithms is best adapted for the optimization of a given cost function.

3. Logic Synthesis
A function, assigned to the hardware in the hardware/software co-design
process, can be described as a digital structured system. As shown in Figure
3.4, such a digital structured system consists of a set of combinatorial logic
modules (the nodes), memory (the registers), inputs and outputs.

Figure 3.4. A structured digital system

The inputs provide data to the system, while the outputs carry data out of
the system. Computation is performed in the combinatorial parts, and the
results might be temporarily stored in registers that are placed between the
combinatorial blocks. A clock is used to synchronize the transfer of data
from register to register via the combinatorial parts. The description of a design
at this level is usually called a register transfer description, due to the
register-to-register operation mode previously described.
For such a digital system, the goal of the logic synthesis is to produce an
optimal implementation of the system on a given hardware platform. In the
case of FPGAs, the goal is the generation of configuration data which satisfies
a set of given constraints, such as maximum speed, minimum area, or
minimum power consumption. In a structured system, each
combinatorial block is a node that can be represented as a two-level
or as a multi-level function. Depending on the node representation, the two
following synthesis approaches exist:

Two-Level Logic Synthesis: Two-level synthesis deals with the synthesis of
designs represented in two-level logic. Those are representations in which
the longest path from input to output, in terms of the number of gates crossed on
the path, is two. Two-level logic is the natural and straightforward approach
to implement a Boolean function, because each Boolean function can be
represented as a sum of product terms. In the first level, the products are
built using the AND primitives. The sums of the resulting products are built
in the second level with the OR primitives.

Multi-Level Logic Synthesis: In multi-level synthesis, functions are
represented using multi-level logic. Those are circuits in which the
longest path from input to output goes through more than two gates.

Most of the circuits used in practice are implemented using multi-level logic.
Multi-level circuits are smaller, faster in most cases, and consume less power
than two-level circuits. Two-level logic is most appropriate for PAL and PLA
implementations, while multi-level logic is used for standard-cell,
mask-programmable or field-programmable devices.
We formally represent a node of the structured system as a Boolean network,
i.e. a network of Boolean operators that reflects the structure and function of the
node. A Boolean network is defined as a directed acyclic graph (DAG) in
which a node represents an arbitrary Boolean function and an edge (i, j)
represents the data dependency between the two nodes i and j of the network.

Figure 3.5. Example of a Boolean network with: y1 = x1 + x2, y2 = x3 · x4, y3 = x5 · x6,
y4 = y1 + y2, z1 = y1 + y4, and z2 = y2 ⊕ y3

Figure 3.5 shows a Boolean network example for these functions.

3.1 Node representation


The representation of a node is of great importance in the synthesis process.
In two-level logic, the question of how to represent the node of a Boolean
network is not relevant, since the final representation is the same sum of products
as the initial representation. In multi-level logic, although any valid
representation is allowed, representations are sought that use memory
efficiently and that correlate with the final representation. Furthermore, the
representation should be easy to manipulate.
Logic synthesis can follow two different approaches: technology-dependent
synthesis and technology-independent synthesis. In the first case, only
valid gates chosen from the target library are used in the node representation.

The final implementation then matches the node representation. In the second case,
the representation is technology-independent, i.e. the design is not tied to any
library. A final mapping must be done to the target library in order to obtain an
implementation. The technology-independent method is the most used, due to the
large set of available optimization methods. With a technology-independent
representation, synthesis for FPGA devices is done in two steps. In
the first step, all the Boolean equations are minimized, independently of the
function generators used. In the second step, the technology mapping process
maps parts of the Boolean network to a set of LUTs.
In general, the following choices are made for the representation of a node:
Sum of products form: A sum of products (SOP) is the most trivial
form to represent a Boolean function. It consists of a sum of product
terms, and it is well adapted for two-level logic implementation on PALs
and PLAs. Example: f = x̄yz + xȳz + wxy.
This representation has the advantage that it is well understood and
easy to manipulate. Many optimization algorithms are available (AND,
OR, tautology, two-level minimizers). The main disadvantage is that it does not
reflect the logic complexity: for designs represented as
sums of products, the complexity decreases through manipulation, which makes
it hard to estimate the complexity of the final design. Therefore, estimating
the progress during logic minimization on SOPs is difficult.
Factored form: A factored form is defined recursively either as a single
literal or as a product or a sum of two factored forms: a product is either
a single literal or the product of two factored forms, and a sum is either a
single literal or the sum of two factored forms. For example, c(a + b(d + e)) is a
product of the factored forms c and a + b(d + e), and a + b(d + e) is a
sum of the factored forms a and b(d + e).
Factored forms are representative of the logic complexity. In many design
styles, the implementation of a function corresponds to its factored form.
Therefore, factored forms give a good estimate of the complexity of the logic
implementation. Their main disadvantage is the lack of manipulation
algorithms. They are usually converted to SOPs before manipulation.
Binary decision diagram (BDD): A binary decision diagram (BDD) is a
rooted directed acyclic graph used to represent a Boolean function. Two
kinds of nodes exist in BDDs: variable nodes and constant nodes.

– A variable node v is a non-terminal node having as attributes its argument
index⁴ index(v) ∈ {1, ..., n} and its two children low(v) and high(v).

4 An index i defines a variable xi.

– A constant node v is a terminal node with a value value(v) ∈ {0, 1}.

A BDD in which an ordering relation among the nodes exists is called an
ordered BDD (OBDD). The non-terminal nodes are ordered from the
root to the terminal nodes: formally, for each non-terminal node v, if
low(v) is non-terminal, then index(v) < index(low(v)); similarly, if
high(v) is non-terminal, then index(v) < index(high(v)).
The correspondence between a BDD and a Boolean function is defined as
follows: a BDD with root v denotes a function fv.
– If v is a terminal node, then fv = 1 if value(v) = 1, and fv = 0 if
value(v) = 0.
– If v is a non-terminal node with index(v) = i, the Shannon expansion
theorem is used to express the function fv as fv = x̄i · flow(v) + xi · fhigh(v),
where flow(v) and fhigh(v) denote the functions rooted at
low(v) and high(v) respectively. The value of fv for a given assignment is
obtained by traversing the graph from the root to a terminal node according to
the assignment values of the variables.
A BDD G is a reduced ordered BDD (ROBDD) if the following holds:
– low(v) ≠ high(v), ∀v in G
– ∀v, v′ ∈ G, the subtree rooted at v and the subtree rooted at v′ are not
isomorphic⁵
Figure 3.6 shows the optimal BDD representation of the function f = abc
+ bd + bcd.
Basically, BDDs are not canonical, i.e. there might exist several BDD
representations of the same Boolean function. Reduced ordered BDDs
are canonical and compact, thus they are good replacements for truth
tables. For a good ordering, reduced ordered BDDs remain reasonably
small for complicated functions. Manipulations of BDDs are well defined
and efficient.
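As an illustration, the Shannon-expansion rule above translates directly into a traversal procedure on the DAG. The following Python fragment is a minimal sketch of an OBDD node structure and its evaluation; it is not tied to any particular BDD package.

class Node:
    """OBDD node: terminals carry value 0/1, non-terminals test variable x_i."""
    def __init__(self, index=None, low=None, high=None, value=None):
        self.index = index   # variable index i (non-terminal nodes)
        self.low = low       # child followed when x_i = 0
        self.high = high     # child followed when x_i = 1
        self.value = value   # 0 or 1 (terminal nodes)

def evaluate(v, assignment):
    """Walk from the root to a terminal according to the assignment."""
    while v.value is None:
        v = v.high if assignment[v.index] else v.low
    return v.value

# Hand-built BDD for f = x1 * x2 under the ordering x1 < x2.
zero, one = Node(value=0), Node(value=1)
v2 = Node(index=2, low=zero, high=one)
root = Node(index=1, low=zero, high=v2)
print(evaluate(root, {1: 1, 2: 1}))   # -> 1
print(evaluate(root, {1: 1, 2: 0}))   # -> 0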

3.2 Node manipulation


Having represented the nodes in one of the formats previously presented,
optimization algorithms can be applied to the Boolean network. The goal is to
generate an optimized network with either a smaller number of gates or the lowest
5 Two BDDs G1 and G2 are isomorphic ⇐⇒ there exists a bijective function σ from G1 to G2 such that: 1)
for a terminal node v ∈ G1, σ(v) = w is a terminal node in G2 with value(v) = value(w); 2) for a non-terminal
node v ∈ G1, σ(v) = w is a non-terminal node of G2 with index(v) = index(w),
σ(low(v)) = low(w) and σ(high(v)) = high(w).

Figure 3.6. BDD-representation of the function f = abc + bd + bcd

depth of the gates. A number of transformations can be applied to the
network for this purpose. Among them, we can cite the following:
Decomposition: The decomposition takes a single Boolean function and
replaces it with a collection of new expressions. We say that a Boolean
function f(X) is decomposable if we can find a function g(X) such that
f(X) = f′(g(X), X).
The function f = abc + abd + āc̄d̄ + b̄c̄d̄, for example, operates on 12 literals.
Using decomposition, f can be written as: f = ab(c + d) + (ā + b̄)c̄d̄ =
XY + X̄Ȳ with X = ab and Y = c + d. This
decomposition reduces the expression of f to only 8 literals.
Extraction: The extraction is used to identify common intermediate
sub-functions of a set of given functions in order to avoid redundancy.
Example: f = (a + bc)d + e and g = (a + bc)e can be rewritten as
f = xd + e and g = xe with x = a + bc. The output x of the common
part of the circuit will be used as a common input for f and g.
Factoring: Factoring is the transformation of an SOP expression into a
factored form. For example, the function f = ac + ad + bc + bd + e can
be rewritten as f = (a + b)(c + d) + e.
Substitution: Substitution replaces an expression e within a function f with
the value of an equivalent function g(X) = e. For example, the function
f = (a + bc)(d + e) can be rewritten as f = g · (d + e), with g = a + bc.
Collapsing: Also called elimination, collapsing is the reverse operation of
the substitution. Collapsing is used to eliminate levels in order to meet
timing constraints. The function f = ga + ḡb, for example, will be replaced
by f = ac + ad + bc̄d̄, with g = c + d.
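Such rewritings preserve the function, which can be checked exhaustively for small examples. The following Python sketch verifies the decomposition example given above over all 16 input assignments:

from itertools import product

# f = abc + abd + a'c'd' + b'c'd' versus XY + X'Y' with X = ab, Y = c + d.
for a, b, c, d in product([0, 1], repeat=4):
    f = ((a & b & c) | (a & b & d)
         | ((1 - a) & (1 - c) & (1 - d))
         | ((1 - b) & (1 - c) & (1 - d)))
    X, Y = a & b, c | d
    g = (X & Y) | ((1 - X) & (1 - Y))
    assert f == g
print("decomposition verified on all 16 assignments")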

3.3 LUT-based Technology Mapping


The technology-independent optimization phase ends with a reduced
Boolean network in which the fan-in and fan-out of the gates vary. The next step
consists of allocating LUTs, which are the FPGA library elements, for the
implementation of the different nodes. Several goals might be pursued here.
If the goal is to minimize the chip area used by the circuit, then the mapping
will try to allocate the smallest possible number of LUTs. If the goal is to
minimize the delay of the final circuit, then the mapping will try to minimize
the depth of the LUTs used. Other goals like testability and low power might
also be pursued. Several LUT technology mapping algorithms have been
developed in the past. Depending on their optimization goals, those
algorithms can be classified in three categories:
The first category contains the algorithms whose goal is to minimize the
area. This category includes Chortle-crf [87][86], MIS-fpga [163]
and Xmap [133]. Due to its popularity and the proof of area
optimality for LUTs with up to five inputs, the Chortle algorithm will be
presented in this section.
The algorithms in the second category target delay minimization.
Algorithms in this category include FlowMap [53], Chortle-d [86],
DAG-map [46] and MIS-pga-delay [164]. The FlowMap algorithm
was a breakthrough in delay minimization, because the authors were able
to present an optimal algorithm for LUT technology mapping with delay
minimization as well as a proof of the polynomial-time complexity of
their algorithm. We present the FlowMap in detail in this section.
The third category contains algorithms which focus on maximizing the
routability. This category includes the work of Bhat and Hill [26] as well as
the approach of Schlag, Kong and Chang [54]. None of those methods
will be presented here.
Besides the algorithms presented in this section, a large variety of high-quality
LUT technology mapping algorithms have been developed in the last couple of
years, and research on LUT-based technology mapping keeps going on. All
those algorithms cannot be presented here. We therefore refer the interested
reader to the large number of available publications; a starting point is the
survey provided by Cong et al. [54]. We will first present some definitions
needed to better understand the algorithms. We start by providing a
formal definition of the problem that these algorithms intend to
solve. The LUT technology mapping problem can be stated as follows:

Definition 3.1 (LUT-based technology mapping problem) Given a Boolean
network representing a function f and an integer k, find an implementation of f
using only k-input LUTs, such that:
1 the number of LUTs used is minimal, or
2 the delay of the resulting circuit is minimal.

The LUT-based technology mapping is the problem of covering a given
graph (the Boolean network) with a set of k-input LUTs. Because the area of
the final implementation is defined by the number of LUTs used, the
first condition is equivalent to a covering with a minimal number of
LUTs. The delay in an FPGA is influenced by two main factors: the delay
in the LUTs and the interconnection delay. While the delay in a LUT is known,
the interconnection delay can only be accurately known after the place and
route phase. Delay estimation at this stage is done using the depth of LUTs
in the design, thus assuming the interconnection delay to be one for each
wire. The more LUTs are placed on a path from the inputs to the outputs,
the higher the delay of the circuit.

Definition 3.2 (Primary input, primary output, node level, node depth, fan-in,
fan-out) Given a Boolean network G, we define the following:
1 A primary input (PI) node is a node without any predecessor.
2 A primary output (PO) node is a node without any successor.
3 The level l(v) of a node v is the length of the longest path from the
primary inputs to v.
4 The depth of a network G is the largest level of a node in G.
5 The fan-in of a node v is the set of gates whose outputs are inputs of v.
6 The fan-out of v is the set of gates that use the output of v as input.
7 Given a node v ∈ G, input(v) is defined as the set of nodes of G which
are fan-ins of v, i.e. the set of predecessors of v.

Definition 3.3 (Tree, leaf-DAG)
1 A tree or fan-out-free circuit is one in which each node has a maximal
fan-out of one.

2 A leaf-DAG is a combinational circuit in which the only gates with a
fan-out greater than one are the primary inputs.

Definition 3.4 (K-bounded network) Given a Boolean network G and a
subgraph H of G:

With input(H), we denote the set of all nodes not included in H which
are predecessors of some nodes in H.

G is K-bounded if |input(v)| ≤ K for all nodes v of G.


A K-bounded Boolean network can be directly mapped to a set of K-input
LUTs by assigning a LUT to each node. However, this straightforward
approach may not produce the expected optimal results.
Definition 3.5 (Cone at a node) Given a Boolean network G:
1 A cone Cv at a node v is the tree with root v which spans from v to its
primary inputs.

2 The cone Cv is K-feasible if:

|input(Cv)| ≤ K, and
any path connecting two nodes in Cv to v lies entirely in Cv.
An illustration of a K-feasible cone at a node v is given in figure 3.7.

Figure 3.7. Example of a K-feasible cone Cv at a node v

With the previous definition of K-feasible cones, the LUT technology
mapping becomes the problem of covering the graph with a set of K-feasible
cones that are allowed to overlap. The technology mapping results
in a new DAG in which nodes are K-feasible cones and edges represent
communication among the cones. Figure 3.8 shows the covering of a graph
with 3-feasible cones and the resulting mapping to 3-input LUTs.
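As a small illustration, the K-feasibility of a cone reduces to counting its distinct external inputs. The Python sketch below assumes the Boolean network is given as a dictionary mapping each node to the list of its fan-in nodes, and reuses the nodes y1, y2 and y4 of figure 3.5:

def cone_inputs(network, cone):
    """input(C): the nodes outside the cone feeding some node inside it."""
    return {p for n in cone for p in network[n] if p not in cone}

def is_k_feasible(network, cone, k):
    return len(cone_inputs(network, cone)) <= k

# The cone {y1, y2, y4} of figure 3.5; x1..x4 are its primary inputs.
net = {"x1": [], "x2": [], "x3": [], "x4": [],
       "y1": ["x1", "x2"], "y2": ["x3", "x4"], "y4": ["y1", "y2"]}
print(is_k_feasible(net, {"y1", "y2", "y4"}, 4))   # True: 4 external inputs
print(is_k_feasible(net, {"y1", "y2", "y4"}, 3))   # False: 4 inputs > 3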
Next, we present some existing LUT technology mapping algorithms and
explain their advantages as well as their drawbacks.

3.3.1 The Chortle Algorithm


The Chortle algorithm was developed by Francis et al. [87][86] at the
University of Toronto in 1991 with the aim of minimizing the number of LUTs in

Figure 3.8. Example of a graph covering with K-feasible cones and the corresponding covering
with LUTs

the implementation of a given circuit. It operates in two steps: in the first
step, the original Boolean network is partitioned into a forest of trees that are
then separately mapped into circuits of K-input LUTs. The second step
assembles the circuits implementing the trees to produce the final circuit.
The transformation of the original network into a forest is done by partitioning
the network at each fan-out node v: the sub-network rooted at v is duplicated for
each edge driven by v. The resulting sub-networks are
either trees or leaf-DAGs. The leaf-DAGs are converted into trees by creating a
unique instance of a primary input for each of its fan-out edges.

Mapping the trees. The strategy used by Chortle to map a tree is a combination
of bin packing and dynamic programming. Each tree is traversed from the
primary inputs to the primary outputs. At each node v, a circuit referred to as the
best circuit, implementing the cone at v extending from the node to the primary
inputs of the network, is constructed. The best circuit is characterized by
two main factors: the tree rooted at v and represented by a cone must contain
the minimum number of LUTs, and the output LUT (the root LUT) implementing v
should contain the maximum number of unused input pins. For a primary input
p, the best circuit is a single LUT whose function is a buffer. Using dynamic
programming, the best circuit at a node v can be constructed from its fan-in
nodes, because each of them is already optimally implemented. The procedure
enforces the use of the minimum number of LUTs at a given node: the
best circuit is constructed from the minimum number of LUTs used to
implement its fan-in nodes. The secondary goal is to minimize the number of
unused inputs of the circuit rooted at node v.
The construction of the tree is done in two steps. First, a two-level
decomposition of the cone at v is constructed, and then this decomposition is
converted into a multi-level decomposition.

Two-level decomposition. The two-level decomposition consists of a single
first-level node and several second-level nodes (figure 3.9(b)). Each second-level
node implements the operation of the node being decomposed over a subset of
the fan-in LUTs. The first-level node is not implemented at this stage. This will
be done in the second phase of the algorithm, where the two-level representation
is converted into a multi-level implementation.

(a) Fan-in LUTs at a node (b) The two-level decomposition

Figure 3.9. Chortle two-level decomposition

The two-level decomposition is constructed using a bin packing
approach. In the traditional bin packing algorithm, the goal is to find the
minimum number of bins with a given capacity into which a set of boxes can be
packed. In the Chortle algorithm, the bins are the second-level LUTs and the
boxes are the fan-in LUTs. The capacity of each bin is the number k of LUT
inputs. The packing at this stage consists of combining two fan-in LUTs, LUT1
that realizes the function f1 and LUT2 that realizes the function f2, into a new
LUT LUTr that implements the function f1 ⊘ f2, where ⊘ is the operation
implemented by the fan-out node. Figure 3.9(a) shows an original graph, and its
decomposition is shown in figure 3.9(b). In this example, ⊘ is the OR function.
The pseudo code of the two-level decomposition is given in
algorithm 3. This algorithm uses a First-Fit-Decreasing (FFD) method, which packs a
fan-in LUT into the first LUT it fits within. However, a
Best-Fit (BF) approach, which packs a fan-in LUT into the LUT it best
fits in, can also be used.
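The packing itself can be sketched compactly if each fan-in LUT is abstracted to the number of inputs it uses and each bin may hold at most k inputs. The Python fragment below mirrors algorithm 3 under this simplification; a real implementation would also track the functions being merged and the inputs they share.

def two_level_decompose(fanin_sizes, k):
    """Pack fan-in LUT input counts into k-input second-level LUTs (FFD)."""
    bins = []
    for box in sorted(fanin_sizes, reverse=True):   # largest box first
        for b in bins:
            if sum(b) + box <= k:                   # first bin it fits within
                b.append(box)
                break
        else:
            bins.append([box])                      # open a new second-level LUT
    return bins

print(two_level_decompose([3, 2, 2, 1, 1], k=4))    # -> [[3, 1], [2, 2], [1]]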

Multi-level decomposition. In the second step, the first-level node is
implemented using a tree of LUTs. The number of LUTs used is minimized by
using second-level LUTs that have unused pins to implement a portion of the
first-level tree, as shown in figure 3.10. The detailed procedure for converting a
two-level decomposition into a multi-level decomposition is given in
algorithm 4, and figure 3.10 provides an illustration of this process.

Algorithm 3 Chortle's two-level decomposition


Start with an empty list of LUTs
while there are unpacked fan-in LUTs do
  if the largest unpacked fan-in LUT will not fit within any LUT in the list then
    create an empty LUT and add it to the end of the list
  end if
  pack the largest unpacked fan-in LUT into the first LUT it will fit within
end while

Algorithm 4 Chortle's multi-level decomposition


while there is more than one unconnected LUT do
  if there are no free inputs among the remaining unconnected LUTs then
    create an empty LUT and add it to the end of the LUT list
  end if
  connect the most filled unconnected LUT to the next unconnected LUT with a free input
end while

Because the most filled unconnected LUT is always selected, less filled
LUTs are pushed toward the root of the tree being built. Therefore, the LUTs
with the most unused inputs will be found near the root of the tree.
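The conversion can be sketched in the same simplified style, modeling each LUT as a pair of a used-input count and a name (the names are illustrative only):

def multi_level_connect(luts, k):
    """Chain second-level LUTs into a tree (Algorithm 4), most filled first."""
    unconnected = [list(l) for l in luts]           # [used_inputs, name] pairs
    connections = []
    while len(unconnected) > 1:
        unconnected.sort(key=lambda l: l[0], reverse=True)
        if all(l[0] >= k for l in unconnected[1:]): # no free input remains
            unconnected.append([0, "new%d" % len(connections)])
        src = unconnected.pop(0)                    # most filled unconnected LUT
        dst = next(l for l in unconnected if l[0] < k)
        dst[0] += 1                                 # src's output takes one input
        connections.append((src[1], dst[1]))
    return connections

print(multi_level_connect([[4, "A"], [4, "B"], [2, "C"]], k=4))
# -> [('A', 'C'), ('B', 'C')]: the least filled LUT C ends up at the root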

Improvement of the Chortle algorithm. Before applying the decomposition
to the trees in the forest, the Chortle algorithm performs a pre-processing step in
which De Morgan's theorem as well as the associativity rules are applied in
order to ensure that:
the only inverted edges in the trees are those originating from the leaf nodes;
no consecutive OR gates and no consecutive AND gates exist in the trees.
Under these conditions, Francis et al. [87] could prove that the Chortle algorithm
constructs an area-optimal mapping for LUTs with at most five inputs.

Exploiting the reconvergent paths. A further optimization of the Chortle
algorithm consists of exploiting the reconvergent paths to improve the
implementation at a given node. A reconvergent path is caused by a leaf node
with a fan-out greater than one. This produces two paths in the leaf-DAG that
terminate at

Figure 3.10. Example of multi-level decomposition

a given node. The goal of the optimization is to pack the reconvergent paths
caused by a given input into just one LUT.
This strategy is illustrated in figure 3.11, where the two paths going
through the blocks g and h are placed in the same LUT, thus reducing the
number of LUTs used from five to four.

Figure 3.11. Exploiting reconvergent paths to reduce the amount of LUTs used

If more than one pair of fan-in LUTs share inputs, there will be several
pairs of reconvergent paths. To determine which one should be packed into the
same LUT, two approaches exist in the Chortle algorithm. The first one is an
exhaustive approach that first finds all pairs of fan-in LUTs that share
inputs. Then every possible combination is constructed by first merging the
fan-in LUTs and then proceeding with the FFD bin packing algorithm. The
two-level decomposition that produces the fewest bins and the smallest least-filled
bins is retained as the solution.
A large number of pairs of fan-in LUTs sharing the same inputs makes this
approach impracticable. To overcome this limitation, a second heuristic,
called Maximum Share Decreasing (MSD), can be used. The goal is to
maximize the sharing of inputs when fan-in LUTs (boxes) are packed into bins.
The MSD iteratively chooses the next box to be packed into the bins according to
the following criteria: 1) the box has the greatest number of inputs, 2) the box
shares the greatest number of inputs with any existing bin, and 3) it shares the
greatest number of inputs with any of the remaining boxes.
The first criterion ensures that the MSD algorithm behaves like the FFD if no
reconvergent path exists. The second and third criteria help to place boxes that
share inputs into the same bins. The chosen box is then packed into the bin with
which it shares the most inputs without exceeding the bin capacity, i.e. the
number of inputs. If no such bin exists, then a new
bin is created and the chosen box is packed into the new bin.
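The MSD selection rule can be sketched as a maximization over the three criteria, with boxes and bins modeled as sets of input names (a simplification of the real data structures):

def msd_pick(boxes, bins):
    """Choose the next box by the three MSD criteria, in order."""
    def key(box):
        shared_bin = max((len(box & b) for b in bins), default=0)
        shared_box = max((len(box & o) for o in boxes if o is not box), default=0)
        return (len(box), shared_bin, shared_box)
    return max(boxes, key=key)

boxes = [{"a", "b"}, {"b", "c"}, {"d", "e"}]
bins = [{"a", "b", "f"}]
print(msd_pick(boxes, bins))   # -> {'a', 'b'}: size ties broken by bin sharing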

Logic replication at fan-out nodes. The replication of the logic at fan-out
nodes can also help to reduce the number of LUTs used. As stated earlier, the
Chortle technology mapping first decomposes the Boolean network into a forest,
then maps each tree separately, and finally assembles the final circuit from the
mapped trees. In the assembling phase, logic replication is used to reduce the
number of LUTs in the final circuit, as illustrated in figure 3.12.

Figure 3.12. Logic replication at fan-out nodes to reduce the number of LUTs used

The Chortle algorithm exists in two versions, one with replication of the logic
at fan-out nodes and one without. The solution that produces the smallest
number of LUTs is retained.
The Chortle algorithm presented in this section is known in the literature
[87] as Chortle-crf⁶.
As seen in figure 3.10, the paths generated by the Chortle algorithm can
become very long if no effort is spent on reducing the delay. This problem is
addressed in another version of the Chortle algorithm, Chortle-d⁷. Instead
of using the bin packing to minimize the number of LUTs used, Chortle-d
focuses, in the bin packing strategy, on the reduction of the number of levels
in the final design. Because we investigate a delay-optimal algorithm in the
next section, the Chortle-d algorithm will not be considered further. The
FlowMap algorithm that we present next focuses on delay minimization
using a network flow approach, a technique that was also used in the MIS-pga
algorithm of Murgai et al. [163].

3.3.2 The FlowMap Algorithm


Cong and Ding proposed a polynomial-time algorithm for LUT-based
technology mapping with the goal of delay minimization [53]. Their
algorithm, FlowMap, uses the notion of a cut in a network to construct an
optimal solution in polynomial time.
Because the delay of the final circuit is determined by the delays in the LUTs
as well as those of the interconnections, an algorithm seeking to minimize the
delay should consider those two factors. However, since the components are not
placed and routed on the chip before the technology mapping phase, no
accurate estimation of the interconnection delays can be done. The technology
mapping with delay minimization as goal can therefore be done only on the basis
of the LUT delays. Delay minimization is then equivalent to the minimization of
the depth of the resulting DAG, i.e. the minimization of the number of LUTs on
the longest path from the primary inputs to the primary outputs of the final circuit.
In contrast to most existing algorithms, which first partition the Boolean
network into a forest of trees and then map each tree separately, the
FlowMap algorithm can be directly applied to the whole Boolean network. The
precondition is that the network must be K-bounded. If this is not the case,
then a pre-processing step must be performed on the network to transform it into
a K-bounded one. This can be done, for example, with the same approach used
in Chortle to generate the trees.
The FlowMap algorithm is a two-step method: it first determines the labels
of the nodes, and then, in the second phase, assembles the nodes into the LUTs
according to their labels. We present the two steps in the next paragraphs.

6 c stands for constructive bin packing, r for reconvergent paths and f for logic replication.
7 d stands for delay.

First Phase: Node Labeling. The labeling is based on the notion of a cut in a
network. We therefore first recall some definitions related to the notions of
network and flow in a network.

Definition 3.6 (Network, cut, label, height, volume) Given
a network N = (V, E) with a source s and a sink t:

A cut is a partition (X, X̄) of the graph N such that s ∈ X and t ∈ X̄.

The cut-size n(X, X̄) of a cut (X, X̄) is the number of nodes in X adjacent
to some nodes in X̄.

A cut (X, X̄) is K-feasible if n(X, X̄) ≤ K.

The edge cut-size e(X, X̄) of (X, X̄) is the sum of the capacities of the
crossing edges.

The label l(t) of a node t is defined as the depth of the LUT which contains
t in an optimal mapping of the cone at t.

The height h(X, X̄) of a cut (X, X̄) is the maximum label in X:

h(X, X̄) = max {l(x) : x ∈ X}

The volume vol(X, X̄) of a cut (X, X̄) is the number of nodes in X̄:

vol(X, X̄) = |X̄|

The level of the K-LUT containing the node t in an optimal mapping of N
is at least l(t), and the maximum label of the primary outputs of N is the depth
of the optimal mapping of N.
The first phase of the algorithm computes the labels of the nodes in topological
order, starting with the primary inputs of the graph. The topological order
guarantees that each node is processed after all its predecessors have been
processed. The labeling is done as follows: each primary input u is assigned the
label l(u) = 0. For a given node t to be processed, the subgraph representing the
cone at node t is transformed into a network Nt by inserting a
source node s which is connected to all inputs of the cone, as shown in figure
3.13. For the sake of simplicity, the resulting network is still denoted Nt.
Assuming that t is implemented in the K-LUT LUT(t) in an optimal
mapping of Nt, the cut (X(t), X̄(t)), where X̄(t) is the set of nodes in LUT(t)
and X(t) = Nt − X̄(t), is a K-feasible cut between s and t, and the level of
LUT(t) is l(u) + 1, where u is the node of X(t) with the maximum label.

Figure 3.13. Construction of the network Nt from the cone Ct at node t

With the previous reflection, the K-LUT mapping with minimal delay is
reduced to the problem of finding a K-feasible cut with minimum height⁸ for
each node in the graph. The label l(t) of the node t is therefore given by the
following formula:

l(t) = min {h(X, X̄) + 1 : (X, X̄) is K-feasible}    (3.1)
The following lemma results from the previous discussion and is therefore
given without proof.

Lemma 3.7 The minimum depth of any mapping solution of Nt is given by:

l(t) = min {h(X, X̄) + 1 : (X, X̄) is K-feasible}

Figure 3.14 illustrates the FlowMap labeling method. Because there is a
minimum-height 3-feasible cut of height 1 in Nt, we have l(t) = 2, and the
optimal 3-LUT mapping solution for Nt is given in the figure.
The goal of minimizing the delay thus reduces to efficiently computing a
minimum-height K-feasible cut for each node visited in the graph. The
FlowMap algorithm constructs a mapping solution with minimum delay in

8 It is assumed here that no cut (X, X̄) is computed with primary input nodes in X̄.

Figure 3.14. Minimum height 3-feasible cut and node mapping

time O(Kmn), where n is the number of nodes and m the number of edges in the
network. Further transformations are required on the networks Nt in order to
reach this goal. The node labels defined by the FlowMap scheme satisfy the
following property.

Lemma 3.8 Let l(t) be the label of node t. Then l(t) = p or l(t) = p + 1, where p
is the maximum label of the nodes in input(t).

Proof Let t′ be any node in input(t). Then for any cut (X, X̄) in Nt, either

1 t′ ∈ X, or
2 (X, X̄) also determines a K-feasible cut (X′, X̄′) in Nt′ with
h(X′, X̄′) ≤ h(X, X̄), where X′ = X ∩ Nt′ and X̄′ = X̄ ∩ Nt′. Those two
cases are illustrated in figure 3.15.

In the first case, we have l(t) = h(X, X̄) + 1 ≥ l(t′) + 1 and therefore l(t) ≥ l(t′).

In the second case, we have l(t′) − 1 ≤ h(X′, X̄′) ≤ h(X, X̄) = l(t) − 1, which
implies l(t) ≥ l(t′). Therefore, l(t) ≥ p. This part of the proof shows that the
label of a node cannot be smaller than those of its predecessors.
Because the network Nt is K-bounded, |input(t)| ≤ K. Therefore, (Nt − {t}, {t})
is a K-feasible cut. Since each node in Nt − {t} is either in input(t) or is a
predecessor of some node in input(t), the maximum label of the nodes in
Nt − {t} is p. Therefore, h(Nt − {t}, {t}) = p, i.e. l(t) ≤ p + 1.

Figure 3.15. Illustration of the two cases in the proof of Lemma 3.8

According to Lemma 3.8, a delay-optimal mapping algorithm can first
check if there is a K-feasible cut (X, X̄) of height p − 1 in Nt. If such a cut
exists, then l(t) is assigned the value p, and the node t will be packed in the
second phase into a common LUT with the nodes in X̄. If such a cut does not
exist, then the minimum height of the K-feasible cuts in Nt is p, and
(Nt − {t}, {t}) is such a cut. The value of l(t) is set to p + 1 in this case, and a
new LUT will be used for t in the second phase of the algorithm. This is the way
the FlowMap algorithm works.
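This decision procedure is compact enough to sketch directly. In the Python fragment below, has_k_feasible_cut is a hypothetical oracle standing for the cut test built from the node collapsing, node splitting and max-flow steps described in the following paragraphs:

def label_node(t, inputs, labels, k, has_k_feasible_cut):
    # p is the maximum label among the fan-in nodes of t (Lemma 3.8)
    p = max(labels[u] for u in inputs[t])
    if has_k_feasible_cut(t, p - 1, k):  # K-feasible cut of height p - 1?
        labels[t] = p        # t joins the LUT formed by the cut's X-bar side
    else:
        labels[t] = p + 1    # no such cut: a new LUT level is needed for t
    return labels[t]

labels = {"a": 0, "b": 0}              # primary inputs carry label 0
inputs = {"t": ["a", "b"]}
print(label_node("t", inputs, labels, k=4,
                 has_k_feasible_cut=lambda t, h, k: h >= 0))
# -> 1: a node fed only by primary inputs lands in a depth-1 LUT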
We now face the next problem, namely finding out whether a network has a
K-feasible cut with a given height. In order to answer this question,
further transformations are required. The first one transforms Nt into N′t,
as described in Lemma 3.9.

Lemma 3.9 Let N′t be the graph obtained from Nt by applying a transformation
which collapses all the nodes of Nt with the maximum label p, together
with t, into a new node t′. Nt has a K-feasible cut of height p − 1 iff N′t has a
K-feasible cut.

Proof Let Ht denote the set of nodes in Nt that are collapsed into t′.
⇐ If N′t has a K-feasible cut (X′, X̄′), let X = X′ and X̄ = (X̄′ − {t′}) ∪ Ht.
Then (X, X̄) is a K-feasible cut of Nt. Because no node in X′ (= X) has a label
p or larger, we have h(X, X̄) ≤ p − 1. Furthermore, according to Lemma 3.8,
l(t) ≥ p, which implies h(X, X̄) ≥ p − 1. Therefore h(X, X̄) = p − 1.

⇒ If Nt has a cut (X, X̄) of height p − 1, then X cannot contain any node
of label p or higher. Therefore, Ht ⊆ X̄, meaning that (X, (X̄ − Ht) ∪ {t′})
forms a K-feasible cut of N′t.
Now that the problem of finding a minimum-height K-feasible cut in the
network Nt is reduced to the problem of finding a K-feasible cut in N′t, existing
network flow algorithms can be used to compute cuts in N′t and then check
whether they are feasible. Unfortunately, for those algorithms the cuts are defined

Figure 3.16. Transforming Nt into N′t by node collapsing

on crossing edges. Moreover, it is difficult to establish a relation between the
number of crossing edges and the number of adjacent nodes in the original
network. A second transformation, called node splitting, is therefore applied
to N′t. The goal of this second step is to reduce the node cut-size constraint
in N′t to an edge cut-size constraint in a new graph N″t, and then to use
well-known edge-cut computation algorithms.
The node splitting transforms the graph N′t into a graph N″t as follows:
For each node v in N′t other than s and t′, two new nodes v1 and v2 are
introduced and connected by a bridging edge (v1, v2).
The source s and the sink t′ are also introduced in N″t. For each edge (s, v),
respectively (v, t′), in N′t, an edge (s, v1), respectively (v2, t′), is
introduced in N″t.
For each edge (u, v) in N′t with u ≠ s and v ≠ t′, an edge (u2, v1) is
introduced in N″t. The capacity of each bridging edge is set to one, and
that of the non-bridging edges is set to infinity.
The construction of the graphs N′t and N″t is illustrated in figures 3.16
and 3.17.
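The splitting is a purely mechanical graph rewrite. The following Python sketch builds the capacity map of N″t from an edge list of N′t; the names s and t_prime, as well as the representation chosen here, are illustrative only:

def split_nodes(edges, s, t_prime):
    """Build the capacity map of N''_t from the edge list of N'_t."""
    internal = {v for e in edges for v in e} - {s, t_prime}
    cap = {}
    for v in internal:
        cap[(v + "_1", v + "_2")] = 1.0            # bridging edge, capacity 1
    for u, v in edges:
        uu = u if u == s else u + "_2"             # leave a node via its exit copy
        vv = v if v == t_prime else v + "_1"       # enter a node via its entry copy
        cap[(uu, vv)] = float("inf")               # non-bridging edges: infinite
    return cap

edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
print(split_nodes(edges, "s", "t"))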
The transformation of the graph N′t into N″t ensures that if a cut exists in N″t
with capacity less than K, then no edge with infinite capacity will be a crossing
one. The only edges crossing the cut will be the bridging ones.

Because each bridging edge represents a node of N′t and the capacity of

Figure 3.17. Transforming the node cut constraints into the edge cut ones

each bridging edge is one, the edge cut-size in N″t is equivalent to the node
cut-size in N′t. Based on this observation, the following lemma can be stated.

Lemma 3.10 N′t has a K-feasible cut iff N″t has a cut whose edge cut-size is no
more than K.

The Ford and Fulkerson method [84] [55] can be used to check whether a cut
with edge cut-size smaller than or equal to K exists. We first briefly provide
some background on flow networks, which is important for understanding the
testing procedure.
In a network N with source s and sink t, a flow can be seen as a streaming of
data in different directions along the edges of the network. A node might have
data coming in and data going out through its edges, each of which has a given
capacity. Data streaming in the s-direction (resp. in the t-direction) causes a
negative (resp. positive) value of the flow at a given node. The value of the flow
in the network is therefore the sum of the flows on the network edges.
Formally, a flow f in N can be defined as a function from N × N into ℝ.
A residual value exists on an edge if the flow on that edge has a lower value
than the edge capacity. The residual value is then the difference between the
capacity and the flow value; it can be added to the flow in order to saturate
the edge. The capacity of a cut in the network is defined as the sum of all
positive crossing edge capacities. Negative crossing edges do not influence the
capacity of a cut. An edge not saturated by the flow is called a residual edge.
The residual network of a network is made up of all the residual edges as well

as the nodes that they connect. An augmenting path is a path from the source s
to the sink t that contains only residual edges, i.e. a path in the residual network.
A very important relationship between a flow and a cut in a network is
given by the following corollary:

Corollary 3.11 The value of any flow in the network is bounded from above by
the capacity of any cut in the network.

This implies the following: the maximum flow value in the network is
bounded from above by the capacity of the minimum cut of the network.
Based on the above observations, the notion of cut and that of residual
network, the famous max-flow min-cut theorem [55] of Ford and Fulkerson
states that a flow is maximum in the network iff the residual network does not
contain any augmenting path. The value of the maximum flow is then equal to the
capacity of the minimum cut.
Applying the max-flow min-cut theorem to our problem, we can state the
following: if a cut with edge cut-size smaller than or equal to K exists in N″t,
then the maximum value of any flow between s and t′ in N″t will be at most K.
Because we are only interested in testing whether the value of the cut is smaller
than or equal to K, the augmenting path method of Ford and Fulkerson can be
applied to compute the maximum flow. The approach starts with a flow f whose
value is set to 0 and then iteratively finds an augmenting path P in the residual
network and increases the flow along P by the residual capacity cf(P), that is,
the value by which the flow on each edge of P can be increased. If no path from
s to t′ exists anymore, the computation stops and returns the maximum flow
value.
Because each bridging edge in N″t has a capacity of one, each augmenting
path in the residual graph of N″t from s to t′ increases the flow by one.
Therefore, the augmenting path method can be applied iteratively to check
whether the maximum flow value is at most K: if K + 1 augmenting paths can be
found, then the maximum flow in N″t has a value of more than K. Otherwise,
the residual graph will not contain a (K + 1)-th path.
Testing whether N″t has a cut whose value is no more than K can therefore
be done with at most K + 1 augmenting path computations; the cut itself is
obtained through a depth-first search starting at s in the residual network,
including in X″ all nodes reachable from s.

The run-time of the Ford and Fulkerson method is O(m|f|), where |f| is the
value of the maximal flow computed. In the FlowMap algorithm, this value is at
most K + 1, which corresponds to the number of iterations performed to find the
augmenting paths. Since finding an augmenting path takes O(m) (m being the
number of edges of N″t), testing whether a cut with edge cut-size less than or
equal to K exists can be done in time O(Km). The resulting cut (X″, X̄″) in N″t
induces a cut (X′, X̄′) in N′t, which in turn induces a K-feasible cut (X, X̄)
in Nt.
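Since each augmenting path increases the flow by exactly one (every s-t′ path in N″t crosses a capacity-1 bridging edge), the whole test amounts to at most K + 1 path searches. The following Python sketch performs this test on a capacity map such as the one produced by the splitting sketch above, using a breadth-first search to find augmenting paths:

from collections import defaultdict, deque

def cut_at_most_k(cap, s, t, k):
    """True iff the max flow from s to t (hence the min cut) is at most k."""
    residual = defaultdict(float, cap)
    adj = defaultdict(set)
    for u, v in cap:
        adj[u].add(v)
        adj[v].add(u)                              # allow reverse residual edges
    for _ in range(k + 1):                         # try to push k + 1 units
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:           # BFS for an augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return True                            # fewer than k + 1 paths exist
        v = t
        while parent[v] is not None:               # push one unit along the path
            u = parent[v]
            residual[(u, v)] -= 1
            residual[(v, u)] += 1
            v = u
    return False                                   # max flow exceeds k

# Toy network with one split node a_1 -> a_2, so the min cut has size 1.
cap = {("s", "a1"): float("inf"), ("a1", "a2"): 1.0, ("a2", "t"): float("inf")}
print(cut_at_most_k(cap, "s", "t", k=1))           # True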

Because the above procedure must be repeated for each node of the original
Boolean network, the labeling phase, i.e. computing the labels of all nodes in
the graph, can be done in time O(Kmn), where n is the number of nodes and m
the number of edges in N.

Node Mapping. In its second phase, the FlowMap algorithm maps the nodes into
K-LUTs. The algorithm works on a set L of node outputs to be implemented in
LUTs; each output will therefore be implemented as a LUT output.
Initially, L contains all primary outputs. For each node v in L, it is
assumed that the minimum K-feasible cut (X, X) in Nv has been computed in the
first phase. A K-LUT LUTv is then generated to implement the function of v as
well as that of all nodes in X. The inputs of LUTv are the edges crossing
from X to X, of which there are at most K, since the cut is K-feasible. L is
then updated to (L − {v}) ∪ input(X). Nodes w belonging to two different
cut-sets Xu and Xv will automatically be duplicated. Algorithm 5 provides the
pseudo-code that summarizes all the steps of the FlowMap presented here.
Improvements performed on the FlowMap algorithm have the goal of reducing the
number of LUTs used, while keeping the delay minimal. The first improvement
is applied during the mapping of the nodes into the LUTs: the algorithm tries
to find the K-feasible cut with maximum volume. This allows the final LUTs to
contain more nodes, thus reducing the area used. The second possibility is
the use of the so-called flow-pack method, which generalizes the predecessor
packing used in DAG-Map [46]. The idea of predecessor packing is to pack a
K-input LUT v into the same K-input LUT as u, if v is a fan-out-free fan-in
LUT of u and if the total number of inputs of u and v does not exceed K (a
sketch of this condition is given below). The flow-pack method generalizes
the predecessor packing method to the set of predecessors Pu of the node u
(including u), provided that Pu has a number of inputs less than or equal
to K.
Figure 3.18 illustrates the predecessor packing approach. The gate
decomposition presented in the Chortle algorithm can be used as well for the
reduction of the LUT count.

Figure 3.18. Improvement of the FlowMap algorithm through efficient predecessor packing
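To make the predecessor-packing condition concrete, the following C sketch
checks it for a hypothetical LUT representation: v must drive only u, and the
merged input set must not exceed K. The struct layout and all names are
illustrative assumptions, not taken from an actual mapper.

#define MAX_IN 8

struct lut {
    int inputs[MAX_IN];   /* ids of the input signals          */
    int n_inputs;
    int fanout;           /* number of LUTs driven by this LUT */
    int id;               /* id of this LUT's output signal    */
};

/* Does LUT u read the signal sig? */
static int reads(const struct lut *u, int sig)
{
    for (int i = 0; i < u->n_inputs; i++)
        if (u->inputs[i] == sig) return 1;
    return 0;
}

/* Predecessor packing: v may be merged into u if v is a fan-out-free
 * fan-in of u and |inputs(v) ∪ (inputs(u) \ {v})| <= K. */
int can_pack(const struct lut *u, const struct lut *v, int K)
{
    if (v->fanout != 1 || !reads(u, v->id))
        return 0;                       /* v must feed only u     */
    int merged = v->n_inputs;           /* start with v's inputs  */
    for (int i = 0; i < u->n_inputs; i++)
        if (u->inputs[i] != v->id && !reads(v, u->inputs[i]))
            merged++;                   /* u's inputs not already counted */
    return merged <= K;
}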

Algorithm 5 Pseudo code of the FlowMap algorithm


/* phase 1: node labeling */
for each PI node v do
   l(v) := 0
end for
T := list of non-PI nodes in topological order
while T is not empty do
   remove the first node t from T
   construct the network Nt
   let p = max(l(u)) : u ∈ input(t)
   transform Nt into Nt′ using collapsing
   transform Nt′ into Nt′′ using node splitting
   compute a cut (X′′, X′′) in Nt′′ with e(X′′, X′′) ≤ K
      using the augmenting path method
   if (X′′, X′′) with e(X′′, X′′) ≤ K is not found in Nt′′ then
      Xt := {t}
      l(t) := p + 1
   else
      induce a cut (X, X) in Nt from the cut (X′′, X′′) in Nt′′
      Xt := X
      l(t) := p
   end if
end while
/* phase 2: mapping nodes to LUTs */
L := list of PO nodes
while L contains non-PI nodes do
   take a non-PI node v from L
   generate a K-LUT LUT(v′) to implement the function of v
      such that input(v′) = input(X)
   L := (L − {v}) ∪ input(LUT(v′))
end while

4. Conclusion
In this chapter, the general use of FPGAs in different systems was explained
as well as the different steps needed to synthesize the hardware modules that
will be executed in the FPGAs at run-time. We have taken a brief tour of
logic synthesis, with the focus placed on LUT technology mapping, which is
the step by which FPGAs differ from other logic devices. Our goal was to
present two algorithms that best match the optimization requirements, namely
minimal delay and minimal area. Research in FPGA synthesis keeps going, and
several new LUT-based technology mapping algorithms are expected to be
developed in the future. The dynamic development of FPGAs introduces new
challenges for the synthesis tools, which have to deal with new elements like
embedded arithmetic modules, embedded memory, different clocking schemes,
etc. Therefore, good technology mappers are required to cope with the ongoing
innovations. Our goal in this chapter was to provide some background to
better understand the challenges and provide better solutions for the next
generations of FPGAs.
Chapter 8

SYSTEM ON A PROGRAMMABLE CHIP

Developments in the field of FPGAs have been impressive in the last two
decades. FPGAs have moved from tiny devices with a few thousand gates, only
able to implement some finite state machines and glue logic, to very complex
devices with millions of gates as well as coarse-grained cores. In 2003, a
growth rate of 200% over less than 10 years was observed in the capacity of
Xilinx FPGAs, while in the meantime a 50% reduction in power consumption was
reached, with prices decreasing at a similar rate. Other FPGA vendors have
faced similar developments, and this trend is likely to continue for a while.
This development, together with the progress in design tools, has boosted the
acceptance of FPGAs in different computation fields. With coarse-grained
elements like CPUs, memory and arithmetic units available in recent FPGAs, it
is now possible to build a complete system consisting of one or more
processors, embedded memory, peripherals and custom hardware blocks in a
single FPGA. This opportunity reduces the number of components that must be
soldered on a printed circuit board in order to get an FPGA system working.
In this chapter, we present the different possibilities that exist to build
those systems consisting of one or more processors, peripherals and custom
hardware components.
We start with an introduction to systems on programmable chips, and then we
present some of the various elements usually needed to build those systems.
At the end of the chapter, we present a design approach for adaptive
multiprocessor systems on chip.

1. Introduction to SoPC
A system on chip (SoC) is usually defined as a chip that integrates the major
functional elements of a complete end product. Figure 8.1 presents the
achievement of a system on chip, namely the replacement of a complete printed
circuit board by a single chip.

Figure 8.1. Integration of PCB modules into a single chip: from system on PCB to SoC

A system on chip usually consists of a processor, memory, peripheral modules
and custom hardware modules. Contrary to a system on chip, which is
manufactured as an integrated circuit and cannot be modified again, a system
on programmable chip (SoPC), or programmable system on chip, is built on
programmable logic. Its structure can be modified by the end user, either at
compile time or at run-time by means of full or partial reconfiguration.
In the next sections, we take a brief look at the main components of systems
on programmable chips and present some key implementations of those
components by manufacturers.

1.1 Processor
Processors are the central processing and coordination units in systems on
programmable chips. Besides coordinating the complete system, they are in
charge of collecting data from the different peripheral modules or from the
memory, processing those data, and storing them into the memory or sending
them to the peripheral modules. The processor also initializes the
peripherals as well as the dedicated hardware modules on the chip and manages
the memory. The most widely used processors are those used on Xilinx and
Altera devices, since those two companies control the largest part of the
programmable device market. Besides the processors offered by those two
companies, other platform-independent implementations exist. We list some of
the available processors next.

1.1.1 The MicroBlaze Processor Core


The MicroBlaze [127] is a soft 32-bit RISC processor core designed and
commercialized by Xilinx. The processor is not directly available on a chip;
it exists only as a reference design, optimized for the Xilinx FPGAs. With a
clock frequency of up to 200 MHz and the possibility to include a 32-bit
single precision floating point unit (FPU) in the IEEE-754 format into the
datapath, the MicroBlaze is one of the fastest available soft core
processors. It consists of a set of fixed components and a set of optional
features that can be configured at design time by the user. The main fixed
features of the MicroBlaze are:

an Arithmetic and Logic Unit (ALU) featuring a shifter

thirty-two 32-bit general purpose registers

Harvard architecture with separate 32-bit address bus and 32-bit data bus

a 5-stage pipeline in version 5.1a

seven Fast Simplex Links (FSL) that can be used to connect the processor to
custom hardware

an interface for connecting the processor to an On-chip Peripheral Bus (OPB)

Besides these fixed features, the MicroBlaze processor can be parameterized
by the user at compile time to include additional functionalities. The latest
version of the MicroBlaze, version v5.1a, can be configured to support the
following additional features:

a barrel shifter, a multiplier and a divider can be added to the ALU-Block

a 32-bit single precision Floating Point Unit (FPU) in the IEEE-754 format

a data cache and an instruction cache

an interface to the Local Memory Bus (LMB), a high-speed synchronous bus used
primarily to access on-chip block RAM

a CacheLink for accessing external memory

debug logic

The MicroBlaze soft processor core is available as part of the Xilinx
Embedded Development Kit (EDK), which includes a comprehensive set of system
tools to design an embedded application in a Xilinx FPGA.

1.1.2 The PowerPC 405 Processor


Contrary to the MicroBlaze, which exists only as a reference design compiled
to a netlist, the PowerPC 405 processor [126] is immersed into some Xilinx
Virtex FPGAs (the Virtex II Pro, the Virtex 4 and the Virtex 5). The
integration of the PowerPC directly into the chip at fabrication allows for a
great performance increase compared to configurable processor cores.
The PowerPC 405 processor core used in the Xilinx devices is a 32-bit
implementation of the PowerPC embedded-environment architecture that can be
clocked at up to 450 MHz. It provides a dual-mode operation, which allows
programs to run either in privileged mode or in user mode. The main
components of the PowerPC 405 are:

the Central Processing Unit (CPU) with the following features:


– a five-stage (fetch, decode, execute, write-back) pipelined datapath. A
fetch queue consisting of two prefetch buffers and a decode buffer is used to
queue instructions in case of stalls. Instructions flow directly to the
decode buffer if the prefetch buffers are empty.
– a general purpose register file (GPR) consisting of thirty-two 32-bit
registers accessible via three read and two write ports. The five ports on
the GPR allow the processor to execute load/store operations in parallel with
ALU or MAC operations.
– an execution unit consisting of an ALU and a single-cycle-throughput
Multiply Accumulate unit (MAC).

an exception handling module for servicing a total of 19 critical and
non-critical exceptions that can be caused by error conditions, the internal
timers, debug events, and the external interrupt controller (EIC) interface.

a memory management unit, which provides address translation, protection
functions, and storage-attribute control for a total of 4 GB of non-segmented
address space.

a 16 KB instruction cache and a 16 KB data cache, which can be accessed
through the instruction-cache and data-cache units respectively.
additional resources like timers and debug units.
a Processor Local Bus (PLB) interface providing a 32-bit address bus and
three 64-bit data buses attached to the instruction-cache and data-cache
units.

a Device Control Register (DCR) interface for the attachment of on-chip
device control; clock and power management interfaces for clock distribution
and power management.

a JTAG debugging port, an on-chip interrupt controller interface and an
on-chip memory controller interface for attaching additional memory.
an Auxiliary Processor Unit (APU) interface and an APU controller that are
used for extending the native PowerPC 405 instruction set with custom
instructions, implemented as coprocessors on the FPGA fabric. With the APU, a
tighter integration of application-specific functions into the processor is
possible. The APU can be used for processor interconnection as well. The
PowerPC 405 does not have a floating point unit; however, Xilinx provides in
its module library a dedicated floating point unit that can be attached to
the processor through the APU interface.

1.1.3 The Nios Processor Core


Like the MicroBlaze, Altera's Nios II [124] processor core is available as a
reference design to be synthesized and downloaded into FPGAs. The
architecture supports operation in user mode as well as in supervisor mode,
thus allowing for a protection of the control registers. The Nios II core can
be customized by the designer by adding optional functionalities to the basic
system. Its main features are:
a register file that consists of thirty-two 32-bit general purpose integer
registers and six 32-bit control registers. Floating point registers can be
added to the register file by customization of the processor.
an Arithmetic Logic Unit supporting the main arithmetic and logic operations
as well as additional shift and rotate operations.
a single precision floating point unit as specified by the IEEE Std 754-1985
that can be added to the core by customization.
optionally, a Nios core may include one instruction cache and/or one data
cache, depending on the user application. Caches can be avoided in order to
keep the design small.
one data-bus selector module and one instruction-bus selector module, each of
which can be used to provide memory and I/O access. Each controller provides
an Avalon master port for connecting the memory via the Avalon switch fabric,
and an interface to fast memory outside the Nios II core.
various interfaces for debugging and interrupt handling.
a custom instruction port that can be used to extend the native instruction
set. A set of instructions implemented in a hardware module can be made
accessible by connecting that hardware module to this port.

1.1.4 The LEON Processor Core


Developed by the company Gaisler Research, the LEON processor core [1] is an
open source 32-bit processor core based on the SPARC V8 architecture. Several
versions of the LEON exist; however, we present only the LEON3 in this
section. The LEON3 is available as a synthesizable VHDL model, which means
that it can be implemented on any FPGA platform with the corresponding vendor
tool. The complete source code is open and available under the GNU GPL
license. The main features of the LEON3 processor, which can be clocked at up
to 125 MHz, are:

Advanced 7-stage pipelined datapath

Hardware multiply, divide and MAC units

a fully pipelined floating point unit in the IEEE-754 format

separate instruction and data caches, whose size, associativity and
replacement policy can be configured

a memory management unit with configurable TLB

an interface to the AMBA-AHB bus

support for Symmetric Multiprocessing (SMP)

various interfaces for debugging, clocking and power-down mode

1.2 Memory
A system on programmable chip needs memory for storing instructions and
data. Memory elements are available in small amounts on almost all modern
FPGAs and are usually used to build the caches directly on the chip. For
applications that require more memory, external SRAM or DRAM must be used.
On-chip memory can be built from the dedicated memory elements, the block
RAMs, or from the LUTs. The use of LUTs to build memory has two drawbacks:
first, the amount of available computing resources decreases, thus reducing
the complexity of the logic that can be implemented on the chip; second,
memory built from LUTs is distributed across the chip, meaning that chip
interconnections must be used to connect the spread LUTs, which decreases the
performance of memory accesses. On-chip block RAMs are usually dual-ported.
They therefore provide a nice possibility to integrate custom hardware
modules, in particular those working in two different clock domains. The
memory is used in this case to synchronize the modules with each other.

1.3 Peripheral
Peripherals are used in the system for communication with the external world
and for debugging purposes. The peripheral components are usually provided by
the board manufacturers as ready-to-use modules that can be inserted in the
SoPC design. Because of their lower frequency, compared to that of the
processors, peripherals should be connected to a low-speed bus, which in turn
can be connected to the processor bus through a bridge. The communication
between the processor and the available peripherals can be done either
through the I/O-mapping paradigm or through the memory-mapping paradigm. In
the first case, the processor addresses each peripheral using special
instructions that directly address the peripheral on the bus. In the second
case, the peripheral is assigned an address space in the memory and the
communication happens in this space, with the processor writing into control
registers that can be read by the peripheral. The peripheral responds by
writing into status registers that can be read by the processor.
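As an illustration of the memory-map paradigm, the following C sketch shows
how a processor could drive a peripheral through two memory-mapped registers.
The base address, register offsets and bit masks are hypothetical values
chosen for the example, not those of a real core.

#include <stdint.h>

#define PERIPH_BASE  0x80000000u                                  /* assumed address  */
#define REG_CONTROL  (*(volatile uint32_t *)(PERIPH_BASE + 0x0))  /* control register */
#define REG_STATUS   (*(volatile uint32_t *)(PERIPH_BASE + 0x4))  /* status register  */

#define CTRL_START   0x1u
#define STAT_DONE    0x1u

void start_and_wait(void)
{
    REG_CONTROL = CTRL_START;              /* processor writes the control register */
    while ((REG_STATUS & STAT_DONE) == 0)
        ;                                  /* peripheral answers via the status register */
}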

1.4 Custom hardware


The final type of component in a system on programmable chip is the custom
hardware. Those are hardware components that implement special functions
whose computation one intends to speed up. Custom hardware modules are
usually streaming-based, meaning that the computation is performed on a
stream of data that flows through the hardware module. Care should be taken
while integrating custom hardware modules in a system on programmable chip:
connecting a custom hardware module to a peripheral bus may result in a
performance decrease if the data rate of the computation is much higher than
that of the bus, which is usually the case. Direct connection and
communication using dual-ported RAM can help in this case to allow the custom
module to work at its maximal frequency.

1.5 Interconnection
The interconnection provides the mechanism for integrating all the components
previously described into a workable system on programmable chip. The
interconnection mechanism provides the communication channels, the interface
specification, the communication protocols and the arbitration policy on the
channels. The design of the interconnection infrastructure depends on the
target application. In a traditional system on chip, where the infrastructure
is fixed at chip production, the communication infrastructure must be as
general as possible to serve all classes of applications that can be
implemented on the chip. In programmable devices, the flexibility can be used
to modify the interconnection infrastructure according to the type of
application to be implemented. In this case, each application can first be
analyzed in order to derive and implement the best communication
infrastructure for its computation. Despite the great interest in networks on
chip in the last couple of years, interconnection on chip is dominated by the
SoC communication paradigm, which is in most cases bus-based. Leading
existing solutions were previously developed for SoCs before being adapted to
SoPCs. The general connection mechanism is shown in figure 8.2.
It usually consists of two different buses: a high-performance bus that is
used by the processor to access the memory, and a slow bus used to connect
the peripherals. High-performance dedicated hardware modules are connected to
the high-performance bus, while low-performance custom hardware components
are connected to the low-performance bus. A bridge is used to allow
communication between two modules attached to the two different buses.
Besides those two buses, several possibilities exist to directly connect
components: dedicated lines or crossbar switches can be used, even in
cohabitation with the two previously described buses. The two
well-established bus systems in the world of programmable systems on chip are
the CoreConnect from IBM and the ARM AMBA.

1.5.1 The IBM CoreConnect


As shown in figure 8.2, the IBM CoreConnect communication infrastructure
consists of three different buses:

the Processor Local Bus (PLB), which is a high-performance bus used to
connect high-bandwidth devices such as high-performance processor cores,
external memory interfaces and DMA controllers.

the On-Chip Peripheral Bus (OPB), which is a secondary bus that can be used
to decouple the peripherals from the PLB in order to avoid a loss of system
performance. Peripherals like serial ports, parallel ports, UARTs, GPIO,
timers and other low-bandwidth devices should be attached to the OPB. Access
to the peripherals on the OPB by PLB masters is done through a bridge, which
acts as a slave device on the PLB and as a master on the OPB. The bridge
performs dynamic bus sizing, in order to allow devices with different data
widths to communicate. Figure 8.3 illustrates the implementation of the OPB.
Note that no tri-state buffers are required; the address and data buses are
instead implemented using a distributed multiplexer. This is a common
technique for implementing buses in programmable logic devices. All the
master inputs to the bus are ORed and the result is provided to all the
slaves. The arbitration module defines which master is granted the bus. All
other masters must then place a zero signal on their outputs. The ORed result
is therefore the value that the granted master drives onto the bus.
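This ORed-bus technique can be modelled in a few lines of C. The sketch below
mirrors the two-master arrangement of figure 8.3; it is a behavioural
illustration only, and all names are assumptions.

#include <stdint.h>

/* Value seen by all slaves: the non-granted master drives zero, so the
 * OR of all master outputs equals the output of the granted master. */
uint32_t opb_bus(uint32_t m0_out, uint32_t m1_out, int grant /* 0 or 1 */)
{
    uint32_t d0 = (grant == 0) ? m0_out : 0;
    uint32_t d1 = (grant == 1) ? m1_out : 0;
    return d0 | d1;   /* the distributed multiplexer */
}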

Figure 8.2. Example of system integration with CoreConnect buses

the Device Control Register (DCR) bus, which allows lower-performance status
and configuration registers to be read and written. It is a fully synchronous
bus that provides a maximum throughput of one read or write transfer every
two cycles. The DCR bus removes configuration registers from the memory
address map, reduces loading and improves the bandwidth of the processor
local bus.

1.5.2 The AMBA


The Advanced Microcontroller Bus Architecture (AMBA) from ARM shares many
similarities with the IBM CoreConnect. Both architectures support data bus
widths of 32 bits and higher, utilize separate read and write data channels,
allow multiple masters, and support split transactions and burst transfers.
The AMBA 2 interconnection infrastructure consists of two main buses:

Figure 8.3. Implementation of the OPB for two masters and two slaves

the Advanced High-performance Bus (AHB), which is used as the
high-performance system interconnect to allow communication between
high-performance modules and the processor. The AHB plays the same role in
the AMBA that the PLB plays in the CoreConnect. A bridge from the AHB or ASB
is required to interface to the APB.

the Advanced Peripheral Bus (APB), used to connect the slower peripherals. A
bridge is used to interface between the AHB and APB buses and allows slow
peripherals to be accessed from components hung on the AHB.

2. Adaptive Multiprocessing on Chip


The continuous performance improvement of the last decades in microprocessors
is not likely to hold in the future. This positive trend observed in the
past, also known as Moore's law, was due to two main factors: high-speed
clocks and improvements in instruction-level parallelism.
While the speed was increased by constantly growing clock frequencies, the
capacity has always been boosted by the reduction of transistor size.
Sequential programs could therefore be sped up through instruction-level
parallelism (ILP) in a transparent way for the user. The increase in speed is
getting more and more difficult to maintain. Enhancements in ILP design for
datapaths are not coupled anymore with a performance increase of the system
[169] [213]. Moreover, power is becoming one of the main limitations. In
order to solve this problem, the research community and the industry have
started using the large amount of available chip area to implement several
processors, thus creating the so-called chip multiprocessing systems.

Basically, two paradigms exist for parallel computation on multiprocessors.
The first one is the Message Passing Interface (MPI), usually applied where
communication among processors happens over a network. The second paradigm,
Shared Memory or Symmetrical Multi-Processing (SMP), is applied in bus-based
systems where communication happens through a shared memory. Multiprocessing
on chip has been studied according to the second paradigm. In most of the
existing works, the purpose is to have an efficient mapping of a set of
threads onto the processors. Two possibilities have been presented that
address this purpose: Simultaneous Multithreading (SMT) [75, 120, 103] and
Chip Multiprocessors (CMP) [169, 109, 139, 94, 16]. An SMT chip is based on a
superscalar processor with a set of units for a parallel execution of
instructions. Given a set of threads, the resources are dynamically allocated
by extracting several instructions to be executed in parallel. Threads that
need long memory accesses are preempted in order to avoid idle states of the
processors. Based on an Alpha 21164 processor with 10 functional units and 8
instructions per clock, an SMT system was shown to provide a speedup of up to
4 compared to common superscalar implementations [75]. In Hirata et al.
[120], a ray-tracing application on an SMT architecture could be simulated
with considerable performance improvement. Gulati and Bagherzadeh [103] could
also provide simulation results of SMT with performance improvements.

Instead of using only one superscalar architecture to execute instructions in
parallel, Chip Multiprocessors use many processor cores to compute threads in
parallel. Each processor has a small first-level local instruction and data
cache. A second-level cache is available for all the processors on the chip.
CMPs target applications consisting of a set of independent computations that
can easily be mapped to threads. This is usually the case in databases or web
servers, where transactions do not rely on each other. In CMPs, threads
having long memory accesses are preempted to allow others to use the
processor. Barroso et al. [16] have presented a scalable CMP architecture
called Piranha, in which the processors (in this case Alpha processors) use a
crossbar switch for communication. Despite the good simulation results, no
physical chip could be manufactured. A more concrete approach is the work of
Hammond and Olukotun that led to the design of the Hydra chip [109], whose
technology is used in the Niagara chip [139]. In this architecture, a
hardware support for speculation is available, besides the parallel execution
of threads. The communication among the processors is done using a two-level
on-chip cache. The simulations show an almost linear performance increase
with the number of processors. Other developments in multiprocessors on chip
include the IBM Cell chip, consisting of a 64-bit Power processor and eight
so-called synergistic processors. The Intel Pentium D and the AMD Dual-Core
Athlon 64 processor are other examples of multiprocessors on chip.
The work in [167] proposed an adaptive chip-multiprocessor architecture,
where the number of active processors is dynamically adjusted to the current
workload in order to save energy while preserving performance. The proposed
architecture is shared-memory based, with a fixed number of embedded
processor cores. The adaptivity consists in the possibility of switching the
single processors on and off.
Programmable logic devices in general and FPGAs in particular have
experienced a continuous growth in their capacity, a constant decrease in
their power consumption and a permanent reduction of their price. This trend,
which has increased interest in using FPGAs as flexible hardware
accelerators, is likely to continue for a while.
After an unsuccessful attempt in the 90s to use FPGAs as co-processors in
computing systems, the field of FPGAs in high-performance computing is
experiencing a renaissance, with manufacturers like Cray now producing
systems made of high-speed processors coupled to FPGAs that act as hardware
accelerators [58]. However, current FPGA solutions for high-performance
computing still use fast microprocessors, and the results are systems with a
very high power consumption and power dissipation.
A combination of the chip multiprocessor paradigm and flexible hardware
accelerators can be used to increase the computation speed of applications.
In this case, FPGAs can be used as target devices in which a set of
processors and a set of custom hardware accelerators are implemented and
communicate together in a network.
FPGA manufacturers like Xilinx have been very active in this field by
providing ready-to-use components (soft- or hard-core processors, bus-based
intercommunication facilities, and peripherals) on the basis of which
multiprocessors on chip can be built. The Virtex II Pro FPGA for example
provides up to four PowerPC processors and several features like memory and
DSP modules.
The adaptivity of the whole system can be reached by modifying the
computation infrastructure. This can be done at compile-time using full
reconfiguration, or at run-time by means of partial device reconfiguration.
This results in a multiprocessor on chip whose structure can be adapted at
run-time to the computation paradigm of a given application.
Unfortunately, the capabilities of those devices are often not exploited.
FPGA devices like the Xilinx Virtex II Pro are usually used only to implement
a small hardware component, while the two available PowerPC processors remain
unused most of the time. This is in part due to the lack of support for
multiprocessor design.
In this section, we investigate the use of Adaptive Multiprocessors on Chip
(AMoC) and present a design approach for such systems. We also present an
environment developed at the University of Kaiserslautern in Germany for a
seamless generation and configuration of the hardware infrastructure in order
to produce a complete system.
One of the main applications of such an infrastructure is high-performance
computing in embedded systems. We next present our understanding of the
system architecture that must first be generated onto the device. Thereafter,
we present a two-step design approach and the design automation for AMoCs.

2.1 Hardware Infrastructure


Many possibilities exist to build a multiprocessor hardware infrastructure on
a chip. In this section, we focus on the general computing infrastructure
shown in figure 8.4. For the sake of modularity, the infrastructure consists
of a set of processor blocks, each having a custom hardware accelerator and
memory directly attached to the processor, either through the available fast
processor bus or through a dedicated communication link. The processor blocks
are connected in a network and are accessible from off-chip components via
peripherals. Additional external RAM might be available.

Figure 8.4. General adaptive multiprocessor hardware infrastructure

One of the main tasks in the design and implementation of a system like that
of figure 8.4 is the interconnection network. Contrary to other modules
(processors, peripherals, memory controllers and some custom hardware) that
are available in the vendor libraries, communication facilities, in
particular network-on-chip modules, are not available at all. Also, most of
the existing tools provide only a very limited support for multiprocessing.
We therefore focus on the communication aspect next.

2.2 Communication Infrastructure


The communication between the processor blocks can basically be done in two
different ways: shared memory and message passing. In the first case, the
processor blocks are attached to a common bus to which a memory is also
attached; the exchange of messages is done via the commonly accessible
memory. Although such a system can be built with the current vendor tools,
the amount of handwork required to have a working system is quite high.
Furthermore, some modules needed to build a workable system, for instance a
module to handle the cache coherence among the processors, are not provided
and must be designed and implemented by the user. In the second case, where
message passing is used, the processor blocks are accessible via a network to
which they are attached. The network is in charge of routing messages from
source to destination. Although message passing can be implemented with
shared memory, we focus in this section on those implementations where a set
of independent processors communicate with messages that are sent through a
network.
The design of the communication network should be done carefully, in order to
avoid a high consumption of chip resources while maximizing the speed and the
throughput. A network on chip (NoC) normally consists of a set of routers,
each connected to a processor, and direct connections between the routing
elements. Each router provides a processor an interface to the network for
sending and receiving messages in the form of packets. Also, the routers are
in charge of placing incoming packets on the best tracks and ensuring a
smooth routing to destination.
While routers have the advantage of flexibility, thus allowing any topology
to be built, their resource consumption has been shown to be quite high in
FPGAs, due to the large amount of multiplexers used [32]. In order to avoid
wasting too many chip resources, a limited number of routers may be used.
This can be achieved by using a ring architecture. Like a bus, a ring
provides a great advantage in resource consumption. The disadvantages of the
bus, like the exclusive bus access by masters and the lower fault tolerance,
can be compensated by having several masters on different ring segments. In
[108], a ring architecture was designed for the interconnection of a set of
processor blocks on an FPGA.
As figure 8.5 illustrates, the processors are primarily connected using a
ring. However, because a ring allows only one kind of topology to be built,
routers are available for building cross connections over different rings.
The routers therefore provide a means to build other topologies than the
ring. A router is able to connect up to four rings. Building a mesh, for
example, is done by connecting two "perpendicular and disconnected rings" via
a router. In figure 8.5, two rings connected to a router are shown.

Figure 8.5. Structure of the on chip network

A ring has the advantage that components connected to it do not have to deal
with routing; therefore, the amount of multiplexers (one of the most
resource-consuming elements) needed to implement the complete infrastructure
decreases drastically. Components only need transceivers to access the ring
for sending and receiving messages (figure 8.5). Each transceiver consists of
a sender (S) and a receiver (R) that operate independently from each other.
The receiver behaves on the ring as a slave and on the local bus as a master,
while the sender behaves on the local bus as a slave and on the ring as a
master. The ring slave in the receiver collects data from the ring and passes
them to the local bus master in the same receiver. The message can then be
copied into the local memory via the local bus. The local bus slave in the
sender collects data from the local bus and sends them to the ring master in
the sender, which then places the message onto the ring.
Within a transceiver, each slave and each master is basically a finite state
machine (FSM), which is in charge of initiating and handling the required
transactions. This modularization helps to decouple the complex bus
operations from the cumbersome ring transactions.

Figure 8.6. Implementation of the transceiver
The transceivers operate in two different modes: active copying and DMA
(direct memory access). The active copy mode is used to send a small number
of packets, while DMA deals with the transmission of large data segments.
With the DMA capability on the transceivers, a remote DMA between two
processors is implemented, in order to quickly transfer large amounts of data
from one processor to another without loss of performance. The communication
protocol used in this case is explained next.

2.3 Data Transfer and Communication Protocol


In active copy mode, the processor passes each single byte to be sent to the
transceiver, while in DMA mode, the processor just specifies the destination
address and provides only the start address of the data block to be sent. The
transceiver can then access the memory and transmit the data over the
network, while the processor continues its computation.
A sending operation is initiated by a processor by copying the destination
address together with some control data to the control register of the FSM
acting as local bus slave. While this first FSM collects the data to be sent
from the local bus, the second FSM, acting as master on the ring, builds
packets and sends them via the ring. In the case of an active copy, the data
are placed on the bus by the processor, while in DMA, the slave FSM accesses
the memory itself to collect the data, thus freeing the processor. This
second possibility is used to implement a remote DMA (RDMA), which allows a
processor to directly copy a chunk of data into the memory of another
processor on the network. The destination processor must therefore
dynamically allocate enough space in its local memory for incoming data from
the sender processors. A processor Ps willing to send data to another
processor Pr first sends a request for an RDMA data transfer to processor Pr.
The request specifies the length (in bytes) of the data to transfer. Pr then
tries to allocate enough memory for the data and

allows Ps to transfer the data if the reservation is successful. The transfer
is done by the sender and receiver modules of the corresponding transceivers,
while the two processors might perform other tasks.
In order to implement the handshaking between the two processors, a
communication area is foreseen on each processor. This is a table with two
entries for each processor: the first one specifies the request/acknowledge
of a data transfer, and the second one specifies the number of bytes to
transfer. Each processor performs a request/acknowledge through a remote
write into the table of the destination/sender processor at the corresponding
location.

Figure 8.7. The communication protocol

A possible communication scenario is shown in figure 8.7. The network
consists of n + 1 processors, also called clients. client0 and clientn
consecutively request an RDMA transfer of data to client1 (steps 1 and 2).
According to a first-request-first-grant mechanism, client1 first grants RDMA
access to client0 by writing the granted-code at the right location in the
handshaking table of client0 (step 3). From this point on, the sender
transceiver and the receiver transceiver are used to realize the data
transfer, while the two processors keep computing (step 4). Upon completion,
client0 removes its request (step 5), and client1 removes its grant. clientn
is now granted the RDMA access (step 6) and the data transfer is done
according to the scheme previously described (steps 7 and 8).
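A minimal C sketch of the handshaking table and the first-request-first-grant
policy is given below. The table layout, the codes and the function names are
assumptions made for illustration, since the register-level interface is not
reproduced here.

enum { IDLE = 0, REQUEST = 1, GRANTED = 2 };

struct hs_entry {
    volatile unsigned code;    /* IDLE, REQUEST or GRANTED    */
    volatile unsigned nbytes;  /* number of bytes to transfer */
};

#define N_CLIENTS 4
struct hs_entry hs_table[N_CLIENTS];   /* one entry per remote client */

/* Sender side: in the real system this is a remote write into the
 * receiver's table at the sender's position. */
void request_rdma(int sender_id, unsigned len)
{
    hs_table[sender_id].nbytes = len;
    hs_table[sender_id].code   = REQUEST;
}

/* Receiver side: grant the first pending request, as client1 does in
 * the scenario of figure 8.7. */
int grant_next(void)
{
    for (int i = 0; i < N_CLIENTS; i++)
        if (hs_table[i].code == REQUEST) {
            hs_table[i].code = GRANTED;   /* transceivers now move the data */
            return i;
        }
    return -1;                            /* no request pending */
}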

2.4 Design Automation


The goal of design automation is to ease the implementation of an adaptive
multiprocessor system as described in figure 8.4 on an FPGA.
The generation of such a system is a complex task whose support through the
current vendor tools is very limited. A substantial amount of hand work must
be done in order to have many processors working in an FPGA. The only
paradigm supported by the current tools, albeit in a very limited way, is the
SMP-like computation paradigm. For the design of MPI-like systems, the user
must do everything from scratch: the network components must be provided, as
well as support for cache coherence in the case of SMP-like computation. This
requires a great amount of hand work and strong experience in hardware
design. In order to make multiprocessor implementation on chip easier, in
particular for software designers, but also for those hardware designers not
willing to spend too much time on the hardware part of the system, a seamless
approach must be available to generate the hardware infrastructure. The
designer can then focus only on parallelizing his code by defining the parts
to be executed by each processor and the communication points in the code.
The work in [130] [183] has contributed, besides the development of the
communication infrastructure, to the design and implementation of a
vendor-independent framework in which the user, no matter whether beginner or
experienced, can specify the complete requirements of his system and let the
tool produce the necessary configuration and start-up code.
The approach followed is a two-step one, consisting of first providing an
abstract specification of the system. The abstract specification is then
refined and mapped to a concrete description according to the component
library available in the second step.

2.5 Automatic Generation of the Hardware Infrastructure


In the first step, the hardware is automatically generated from an abstract
specification. This specification may be provided by the user, or it can be
the result of an analysis process that generates the best multiprocessor
infrastructure as well as the best topology to compute a given application.
As shown in figure 8.8, the description at the highest level is very abstract
and specifies only the number of processors, their interconnection paradigm
and the characteristics of the computing blocks used in the system.
The description is done in XML in conjunction with a document type definition
file (DTD). The document type definition describes the syntax as well as the
valid components and their attributes. At this level, no information about
specialized and platform-dependent system components has to be supplied.
Valid modules that can be described at this stage are for instance CPU,
Memory, CommMedium, Periphery and HWAccelerator. CommMedium is an identifier
for the whole class of communication media; it comprises buses, networks and
point-to-point connections. Periphery and HWAccelerator differ only in their
type of connections: while a HWAccelerator is used for internal components,
Periphery is used to describe modules with external connections.
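As an illustration, an abstract specification in this style could look like
the following XML fragment. The element and attribute names are assumptions
chosen for the example, since the actual DTD is not reproduced here.

<system name="example_amoc">
  <CPU id="cpu0" count="2"/>
  <Memory id="mem0" size="64k"/>
  <CommMedium id="ring0" type="ring"/>
  <HWAccelerator id="mac0" connect="cpu0"/>
  <Periphery id="uart0" connect="ring0"/>
</system>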
Having specified the system, the description may be functionally simulated to
observe the system behaviour, and afterwards be validated.

Figure 8.8. Automatic hardware generation flow

In the last step, the final system can be created by supplying a concrete
system description for the target platform.
When transforming an abstract system description into a concrete one,
traceability must be maintained. Therefore, in each of the two steps, the
respective XML input file is validated against its document type definition
to prevent invalid component specifications.
Subsequently, the given specification is processed by a Java application
consisting of three consecutive phases. In phase one, an XML parser extracts
the required information from the XML sources, and an internal representation
of the resulting XML tree is built up for further use. For each component, an
appropriate class is instantiated, which knows how to process its own
parameters. Obviously, the application can easily be extended to support new
user-designed IP cores by just adding a corresponding class.
Both parsers (the abstract one as well as the concrete one) share this
approach. Whereas the abstract parser has finished its work at this point,
the concrete XML parser can now go on processing the gathered information to
create the platform-specific hardware information files. Individual mappers
are created in the second phase, one for each component and each target
platform. In the third and final phase, a mapper is invoked to create the
desired platform-dependent hardware description files. Again, it has to be
emphasized that extending the application to support a new platform can
simply be accomplished by adding another mapper class.
Mapper classes for system components are derived from an abstract
platform-dependent super class that contains a generic algorithm which
enables derived classes to inherit mapper functions. The current system
component which has to be mapped is passed from the corresponding class
itself to its mapper class via the constructor. The resulting hardware
description files mentioned above are then passed to the vendor's tool chain,
for example to the Xilinx EDK, to generate the bitstream for the
configuration of the FPGA.
The system does not only enable the user to create hardware information files
from a given specification. Existing designs can also be loaded from the
files in which they were saved. Also, system components can be added to or
removed from the system in a very comfortable way, and the changes can be
written back to the XML file.
A tool called PinHat (Platform-independent hardware generation tool) was
developed with the goal of integrating all the functionalities and features
described above into one single, easy-to-use application. PinHat provides a
graphical user interface (figure 8.9) that allows the user to comfortably
edit the desired system settings and guides him through the process of
deriving the concrete system description from the abstract one.
It integrates seamlessly with the vendor-supplied toolchains and allows the
generation of a bitstream to configure any supported FPGA based on the
created concrete specification.
After having adapted the abstract specification to the end-user's needs,
PinHat offers a transformation wizard which supports the procedure of
refining abstract node elements to concrete ones. First of all, the root
element has to be refined by specifying the target toolchain and the target
board. This piece of information is used to identify correctly specified
components when refining all the other abstract nodes. Selecting for instance
Xilinx as target toolchain and the ML310 as target board would allow an
abstract CPU node to be refined to either a MicroBlaze or alternatively a
PowerPC 405 concrete node element. Subsequently, the refinement of all
lower-level nodes can be done.

Figure 8.9. The Platform-independent Hardware generation tool (PinHat)

PinHat has integrated parsers to read out component and board description
files where information about peripherals, FPGA pin connections, board
architecture, internal component parameters and much more can be stored, and
thus does not need to be entered manually. This increases the comfort by
avoiding that users have to enter hundreds of parameters for a single
component. Furthermore, this procedure guarantees that no non-existing
parameters can be specified. Parameters which are not specified are
initialized with standard values which are also part of the parsed
description files. The role of PinHat in the design flow is illustrated in
figure 8.10, where the abstract specification is later mapped to the Xilinx
platform.

2.6 Automatic Configuration of the Hardware Infrastructure
In contrast to the downloading of the bitstream into the FPGA, we understand
under automatic configuration the generation of the binary code that will be
run on each processor, the configuration of the operating system and the
generation of the files needed for system start-up.

Figure 8.10. The PinHaT framework
As in the automatic generation of the hardware infrastructure, we have a SW
specification, which may be (like the HW specification) provided by the user,
or it can be the result of an analysis and code generation process. The SW
specification describes whether a given processor is to be run with a
standalone application or if an operating system is foreseen for that
processor. For each standalone application, additional information is given
about the task it executes; the task allocation results from splitting the
application to be computed into a separated set of independent and
communicating tasks. The distribution of the jobs and the mapping to the
processors is stored in XML, complementing the abstract component input
described in section 2.5. We are not focussing on the partitioning process
here. We assume that this part has been done and that the segments of the
overall code that will be implemented on each processor are available.
As the abstract system description is transformed into a concrete one, the
mapping "task → abstract processor" is refined into a mapping "task →
concrete processor".
Thereafter, the parameters of the software for each processor must be
specified: this includes information about either the application (source
code, libraries, etc.) or about the operating system(s) (type, configuration,
required functionalities and modules, ...). Once this information is
provided, scripts and compilation files for building the standalone software
and the operating systems are generated. Building the standalone software is
straightforward, because it requires only compiling and linking the files,
using vendor tools, on the basis of the hardware generated in the previous
step. The building of the operating system is however a more complex issue.
Depending on the OS chosen, different steps are necessary: for some operating
systems, only the kernel parameters may be configured explicitly, while the
file system and therewith the available programs are pre-built. Other
operating systems allow for the complete configuration of the kernel, the
file system and the available programs. The configuration of the operating
systems is done by generated scripts.
The result of the configuration step is one executable file for each
processor in the system. This file can be the same for a pair of processors
in the system if the parameters provided to build the file are the same.
Using vendor tools, the FPGA configuration bitstream resulting from the
generation of the hardware infrastructure is complemented with the generated
executable files. This results in a configuration file containing the
hardware and software configuration of the whole system for the chosen FPGA.
According to the operation paradigm chosen, we may have the operating system
running on all the nodes or on just one node, as illustrated in figure 8.12.
The last approach makes more sense. In this case, the node attached to the
peripherals uses the operating system for a better control of those
peripherals. This node will then behave as a master in the system, collecting
the data and dispatching them to all the processors involved in the parallel
computation of a given application, collecting the partial results from the
single processors and sending them through the peripherals to off-chip
components. The operating system can also be made available on many
processors, if those processors manage some peripherals separately.
Figure 8.11. Software configuration flow

As a case study, several implementations were done on FPGAs using the PinHat
tool. A three-processor system consisting of a PowerPC and two MicroBlaze
processors was built on the Xilinx ML310 board, featuring a Virtex II Pro 30
FPGA. The primary goal was to test the bandwidth performance of a system with
multiple rings and a router for the ring interconnection. The design, in a
first non-optimized implementation, revealed an average real bandwidth of
36 Mbytes/s. This includes the complete transactions needed to code, send and
decode a message.
In order to have the system running with a real-life application, the
singular value decomposition (SVD) was implemented on the ML310 board, with
the number of processors varying from one to eight and with matrices varying
from (4 × 200) to (128 × 200). The performance increase due to the use of
multiple processors was shown to be almost linear in the number of
microprocessors.

Figure 8.12. 4-Processor infrastructure for the SVD on the ML310-Board

The bulk of the computation in the SVD is the computation of the dot products
of the column pairs, which is a good candidate for hardware implementation.
Custom multiply-accumulate (MAC) modules for computing those column-pair
multiplications were therefore designed and directly attached to the
MicroBlaze processors via the fast simplex link (FSL) ports (figure 8.12), in
order to increase the performance of the whole system.

2.7 Adaptivity
As stated at the beginning of the chapter, the adaptivity can be used for
several purposes. According to the application, a physical topology
consisting of a given arrangement of routers and processors may be built to
match the best computation paradigm of that application. The topology can be
modified, even at run-time, to better adapt to the changes observed in the
application.
On the other hand, the adaptivity of the system can be reached by using
reconfigurable hardware accelerators, provided that the target device
supports partial reconfiguration.
Designing partially reconfigurable systems with unlimited capabilities is a
complex engineering task that requires a lot of hand work. In order to avoid
this and keep things simple, a template for partial reconfiguration can first
be created for the target device. In this template, fixed locations must be
selected where the reconfiguration is allowed to take place. The template
acts as a place holder in order to accommodate reconfigurable modules at
run-time. Any hardware accelerator can be plugged into such a hardware block
at run-time by means of partial reconfiguration. This step can even be
performed from within the device using the appropriate port, like the ICAP in
the case of Xilinx devices.

3. Conclusion
In this chapter, we have addressed the system on programmable chip paradigm
and presented some advantages of using those architectures. The main
components needed to build such a system were presented, with a focus on the
offerings of the leading manufacturers. The goal was not a comparative study
of the different system on programmable chip infrastructures; we rather
wanted to provide the user a brief view of the existing solutions.
In the second part of the chapter, we have presented a design approach for
multiprocessor systems on FPGAs. Using reconfiguration, it is possible to
modify the hardware infrastructure to adapt it to the best computation
paradigm of the application being computed.
We have presented a framework to specify and implement a complete system
without knowledge of the vendor tools. Obviously, all that is feasible with
the PinHat tool is also feasible with the current vendor tools. However, the
complexity and the difficulty of using those tools do not allow a newcomer to
generate designs with them. The goal here was to hide from the user the
complexity of multiprocessor design, which is supported only in a very
limited way by the current tools, and to allow him to focus on the efficient
analysis and partitioning of his application.
We believe that multiprocessing on FPGAs has a great potential to speed up
computation in embedded systems. In order to facilitate its use, systems like
PinHat are welcome to hide the complexity of the hardware generation and let
the designer focus on the analysis of the application and the definition of
the best adapted structure for its computation.
The adaptivity of the whole system is provided through the use of partial
reconfiguration in order to exchange running hardware modules at run-time.
However, in order to overcome the difficulty of designing for partial
reconfiguration with the current tools, a template-based approach is
recommended, which foresees predefined locations for placing components at
run-time.
Chapter 7

PARTIAL RECONFIGURATION DESIGN ∗

Partial reconfiguration is one of the prerequisites in reconfigurable
computing, since it allows for swapping modules into and out of the device
without having to reset the complete device for a total reconfiguration. As
we will see later in the chapter on applications, the possibilities offered
by partially reconfigurable devices in the implementation of system
adaptivity are enormous. Unfortunately, very few devices on the market have
the capability to be partially reconfigured in order to replace some modules
while the rest of the design keeps working. Among the few devices falling in
the category of partially reconfigurable chips are the FPGAs from Xilinx, in
particular those from the Virtex class.
Designing for partial reconfiguration on Xilinx devices is a complex exercise
that requires one to be well prepared, technically, but also psychologically. This
chapter is much like a tutorial for designing partially reconfigurable applications
on the Xilinx Virtex FPGA devices. Our goal in this chapter is to guide the
designer through the design process for partial reconfiguration by providing the
essential steps and materials, in order to spare him long search and
introduction phases, which sometimes lead to discouragement.
We therefore explain the three design flows used for partial reconfiguration
on Xilinx devices: the JBits approach, the Modular Design Flow and the Early
Access Design Flow. While the JBits approach applies only to a few older
Virtex FPGAs, the modular design and the early access design flows support
newer devices. We therefore explain those two design flows on the basis of
working examples, whose sources can be downloaded from the book home page.


∗This chapter and the corresponding appendix were prepared by Christophe Bobda and Dominik Murr
The development of partially reconfigurable applications on the Xilinx
platform is done according to the Modular Design Flow provided by Xilinx in
[227]. The modular design flow provides guidance on how to generate the
constraints required by the placement and configuration tools for the generation
of the full and partial bitstreams representing modules to be downloaded into
the device at run-time, for a partial or for a full configuration.
Several tutorials for partial reconfiguration on the Xilinx devices can be
downloaded from the Internet. Despite their usefulness, almost all of them
target only VHDL and Verilog designs. With the increasing interest in
software-like hardware design languages such as SystemC and Handel-C, the
need for incorporating those languages into partial reconfiguration design
flows is high. We therefore describe in this chapter how to incorporate
Handel-C designs into a partial reconfiguration design flow.
Finally, in order to efficiently use the capabilities of partially reconfigurable
chips, the platform that hosts the chip, i.e. the board, must be designed in
such a way that the use of the flexibility provided by the reconfigurable chip
can be maximized. This issue is addressed at the end of the chapter, where we
provide some guidelines for the design of partially reconfigurable platforms.
As an example of such a platform, we present the Erlangen Slot Machine,
which was designed to allow the maximal use of partial reconfiguration.

1. Partial Reconfiguration on Virtex Devices


Virtex FPGAs, like other FPGAs from Xilinx, are organized as a two-dimensional
array of CLBs, each containing a certain amount of logic. They are configured
with configuration data called a bitstream, which can be downloaded into the
device via the configuration port.
The idea behind partial reconfiguration is to realize reconfiguration by making
only the changes needed to bring the device into the desired state. Fragments
of the complete bitstream, also called packets, are sent to the device in order
to reconfigure the needed part of the device. A copy of the last configuration
is maintained in a dedicated part of the processor memory, called the
configuration memory. Partial reconfiguration is done by synchronization
between the configuration memory and the device. Changes made between the
last configuration and the present one are marked as dirty packets, which are
then sent to the device for partial reconfiguration. A packet is a sequence of
(command, data) pairs of varying length which are used to read or write
internal registers and configuration state data.
Virtex devices are organized in frames or column slices, which are the smallest
units of reconfiguration. Partial reconfiguration can then be performed by
downloading only the configuration data needed to modify the part of the
device concerned. Up to the Virtex-II Pro, a frame used to span a complete
column, with the consequence that the reconfiguration of one component
affected all the components sharing a common column with that component.
From the Virtex-4 upwards, a frame no longer spans a complete column, but
only a complete tile. Therefore, the reconfiguration of a component affects
only those components in the same block that share a common column with
the reconfigurable module.
It is then the task of the partial reconfiguration design flow to produce the
partial bitstream, i.e. the set of data required to configure the needed frames,
and therefore to allow the replacement of a component on the chip. The
extracted partial bitstreams are then used to move the device from one
configuration state to the next without reconfiguring the whole device.
Two possibilities exist to extract the partial bitstream representing a given
module. The first is a constructive approach, which allows us to first implement
each single component separately using common development tools. The
components developed this way are then constrained to be placed at given
locations using bounding-box and position constraints. The complete bitstream
is finally built as the sum of all partial bitstreams. This approach is the one
followed by the modular design and the early access design flows. The second
possibility consists of first implementing the complete bitstreams separately.
The fixed parts as well as the reconfigurable parts are implemented with
components constrained to the same locations in all the bitstreams, which
differ from each other only in the reconfigurable parts. The difference between
two bitstreams is then computed to obtain the partial bitstream needed to
move from one configuration to the next.

Example 7.1 Consider for instance the two designs of figure 7.1, where the
full designs are named Top1 and Top2. The fixed part consists of the modules
F-Module 1, F-Module 2, and F-Module 3, which are placed at the same
locations in the two designs. In Top1, the reconfigurable module is R-Module 1,
which is placed at the same location, with the same bounding-box constraints,
as the reconfigurable module R-Module 2 in Top2.
The addition of R-Module 2 to the fixed part generates the full bitstream
Top2; the same is true for R-Module 1 and Top1. Also, as shown in the figure,
adding the two bitstreams Top1 and R-Module 2 will produce the bitstream
Top2, because R-Module 2 will replace R-Module 1 in Top1. The subtraction
of Top2 from Top1 produces the partial bitstream for R-Module 1.

2. Bitstream Manipulation with JBits


JBits [102] is an application interface developed by Xilinx to allow the end
user to set the content of the LUTs and make connections inside the FPGA
without the need for other CAD tools. JBits does not apply the changes
directly to the device, but to the configuration data, i.e. the bitstream.
Figure 7.1. Generation of bitstreams for partial reconfiguration

The JBits API is constructed from a set of Java classes, methods and tools
which can be used to set the LUT values as well as the interconnections. This
is all that is required to implement a function in an FPGA. JBits also provides
functions to read back the content of an FPGA currently in use.
The content of a LUT can be set in JBits using the set function:

set(row, col, SliceNumber_Type, value)

which means that the content of the LUT Type (Type is either the LUT F or
the LUT G of the corresponding CLB) in the slice SliceNumber (SliceNumber
can be 0 or 1) of the CLB at position row and col should be set to the value
of the array value. The Virtex LUTs have four inputs and 16 entries,
representing the 16 possible values a 4-input function can take.
A connection is defined using the function connect:

connect(outpin, inpin)

This function uses the JBits routing capabilities to connect the pin outpin to
the pin inpin anywhere inside the FPGA. outpin and inpin should be CLB
terminals. Connecting the output of a LUT to a CLB terminal can be done
with the following function:

set(row, col, terminal_control, LUT_output)


where terminal_control sets the correct terminal to be connected to the output
LUT_output of the corresponding LUT. JBits provides hundreds of predefined
cores like adders, subtractors, multipliers, CORDIC processors, encoders and
decoders, network modules, etc., which can directly be used in designs. They
can also be combined to generate more complex cores.
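As an illustration, the following fragment sketches how these primitives could
be combined to implement a 4-input AND gate in one LUT and to route its
output. This is a schematic sketch only: in the actual API, set and connect are
methods of a JBits object, LUT.SLICE0_F is the resource constant for LUT F of
slice 0, and outpin and inpin stand for previously created CLB terminal
descriptors; the exact packages and signatures should be taken from the JBits
documentation.

// Schematic JBits fragment (illustrative, not a verified program):
// program one LUT and route its output.
int row = 10, col = 20;       // CLB coordinates, arbitrary example

// Truth table of a 4-input AND gate: of the 16 LUT entries, only
// entry 15 (all four inputs high) is set to 1.
int[] value = new int[16];
value[15] = 1;

// Set LUT F of slice 0 in the CLB at (row, col) to the AND function.
set(row, col, LUT.SLICE0_F, value);

// Connect an output terminal of this CLB to an input terminal of
// another CLB (outpin and inpin are CLB terminal descriptors).
connect(outpin, inpin);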
Although any function can be implemented entirely in JBits using the basic
primitives previously described as well as the predefined library modules, this
is usually not the best approach, due to the high complexity of building
components from scratch. Instead, the bitstream subtraction described in the
previous section is used to extract the part of the bitstream needed at
run-time for partial reconfiguration.
The approach used to extract the partial bitstreams of the components that
will be replaced at run-time consists of implementing as many full bitstreams,
also called top-levels, as required and performing the subtraction among them
to extract the differences. Those are the parts that will be used to configure
the device later. Implementing the subtraction using JBits is straightforward:
the two bitstreams need to be scanned and compared element by element, and
only the differences are copied into the resulting bitstream.

3. The Modular Design Flow


One of the main drawbacks of JBits is the difficulty of providing fixed routing
channels for a direct connection between two modules. This is important
because one must ensure that signals will not be routed on the wrong paths
after the reconfiguration. Consider for instance the two full designs of figure
7.2, in which the fixed as well as the reconfigurable components are placed at
the same locations. The connection F-Module 1 ↔ R-Module 1 runs on a
different path than the connection F-Module 1 ↔ R-Module 2. On
reconfiguration, the design will probably not work correctly.
One of the main contributions of the modular design flow [128] is the use of
the so-called bus macro primitive to guarantee fixed communication channels
between the components that will be reconfigured at run-time and the fixed
part of the design.
The modular design flow, however, was not initially developed to support
partial reconfiguration. It is an approach that allows several engineers to work
cooperatively on the same project. The project leader identifies the components
of the whole project, estimates the amount of resources that will be consumed
by each component, defines the placement locations for the components on the
device, and lets the single parts be developed by the different engineers.
Finally, the parts developed separately are integrated into the whole design to
build a workable circuit. The modular design flow is a four-step method (design
entry and synthesis, initial budgeting, active implementation, and assembly)
for which a viable directory structure must be used.

Figure 7.2. Routing tools can route the same signals on different paths
We recommend using ISE version 6.3, which is, in our opinion, the most stable
version for implementing partial reconfiguration with the modular design flow.

3.1 Directory Structure


Xilinx recommends a structure of design directories as shown in figure 7.3:

The HDL design directory (HDL): This directory is used to store the HDL
descriptions of the modules and those of the top-level designs. The HDL
description can be in VHDL, Verilog or any other hardware description
language.

Synthesis directory (synth): In this directory, the team leader synthesizes the
top-level design, which includes the appropriate HDL file, the project file, and
the project directories. In the same way, each team member also has a local
synthesis directory for his or her synthesized module.

Modules directory (Modules): In this directory, the active implementation of
the individual modules is done, each in the respective module's subdirectory.

Implementation directory (Top): It includes two subdirectories, one for the
initial budgeting phase and one for the final assembly phase. In the initial
directory, the team leader performs the initial budgeting operations for the
corresponding top-level design. In the final assembly directory, the team
leader assembles the top-level design from the individual modules implemented
and published in the PIMs directory.
PIMs (Physically Implemented Modules) directory (Pims): The active
implementation of an individual module creates an appropriate module
directory in the PIMs directory, into which the implemented module files are
copied.

Figure 7.3. The recommended directory structure for the modular design flow

We next focus on the single steps of the modular design flow.

3.2 Design Entry and Synthesis


This step must be carried out before going to the modular design
implementation step. In this step, the members of the team develop modules
using a hardware description language (HDL) and synthesize them using
synthesis tools. The team leader completes top-level design entry and synthesis
before moving to the initial budgeting phase. Team members complete
individual module design and synthesis before moving to the active module
implementation phase for that module. However, the team members can work
on their modules while the team leader is working on the initial budgeting
phase of the modular design implementation. The restrictions that apply to the
top-level design are the following:
All global logic, including I/Os, must be included in the top-level
description.
All design modules are instantiated as black boxes in the top-level design,
with only ports and port directions specified.
Signals that connect modules to each other and to I/O pins must be
defined in the top-level design.

The top-level design and the individual modules are synthesized as described
in the documentation of the selected synthesis tool. In the synthesis
properties, the insert I/O pads setting must be enabled for the top-level design
and disabled for each submodule. Separate unique netlists will be created for
each of the modules and for the top-level design. For each instantiated module,
exactly the same module name must be used as its file name; otherwise,
synthesis tools like the Xilinx XST will not be able to match the module names
specified in the top-level netlist to the module netlists.

3.3 Initial Budgeting


In this stage, the team leader assigns the top-level constraints (pin locations,
area constraints for each module, timing constraints, etc.) to the top-level
design, which results in a topX.ucf file (user constraint file). The same
topX.ucf file is used for the active module implementation of the individual
modules.
Only the netlist for the top-level design file (topX.ngc, if compiled with
Xilinx XST) and the constraint file (topX.ucf) are copied to the initial
directory in the implementation directory of the corresponding top-level
design. The same top-level design name should be used for both files. The
following command is then executed first:

ngdbuild -modular initial design_name


In this case, design_name is topX.ngc. In this step, the compiler ngdbuild
produces two files, design_name.ngd and design_name.ngo. The
design_name.ngd file is used during subsequent modular design steps, while
design_name.ngo is not. The constraint editor, which is used to edit design
constraints such as clock periods in the top-level design, can be invoked with
the following command:

constraints_editor design_name.ngd


The following command invokes the Floorplanner:

floorplanner design_name.ngd


Using the Floorplanner, module sizes, module locations, and information on
module ports can be created. The modifications are copied into the top-level
constraint file topX.ucf. This file can also be edited directly with the
constraints, as specified in the user manual. The constraints that must be set
in the .ucf file, either by hand or by the Floorplanner, are the following:

1 Module area constraints:


each module must have a minimum width of four slices.
a module's width is always a multiple of four slices (e.g., 4, 8, 12, ...).
the module bounding box must span the full height of the device.
each module must be placed on an even four-slice boundary.

Besides those constraints, an area group must be defined for each component,
and specific properties for the partial reconfiguration design flow must be
added to the area group of each component. A hypothetical UCF fragment
illustrating these rules is given after this list.

2 IOB constraints: The input/output blocks (IOBs) must be constrained
within the columnar space of their associated reconfigurable modules. This
condition is important to prevent the connection between a module and a pin
from being destroyed during the reconfiguration of another component. Also,
all IOBs must be locked down to exact sites.

3 Global logic constraints: All global logic, like clocks, power and ground
lines, must be constrained in the top level. No global logic should be locked
in the module topX.ucf file.

4 Insertion of Bus macros


Partial reconfiguration is the process of transiting from one full configuration
to a new full configuration using a partial configuration bitstream. This
partial bitstream represents a component or a set of components present in
the final bitstream, but not in the first one. The two bitstreams have a
common fixed part, which does not change. The reconfigurable parts are
connected to the fixed part by defining signals and connecting them to the
ports. The paths followed by the signals in the two designs may differ due to
the routing tools. On partial reconfiguration, the resulting application may
not work anymore if the signals connecting the fixed and dynamic parts do
not use the same paths in the two designs. The role of the bus macros in the
Xilinx FPGAs is to provide fixed communication channels, which can be used
by reconfigurable modules. Those are tri-state lines which help to keep the
signal integrity of modules being reconfigured. Those channels are not
affected by the reconfiguration, and the signals keep their integrity, thus
ensuring a correct operation of the application.
The approach consists of connecting the two components at the two ends of a
dedicated long-line segment within the FPGA. The two components use
tri-state buffers to define the direction of the communication. A bus macro is
a construct that allows such a connection to be specified in the design tool.
Each bus macro occupies a 1-row by 8-column section of tri-buffer site space.
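The following hypothetical UCF fragment illustrates constraints 1 to 3 of this
list, for a device assumed to have slice rows Y0 to Y63; all instance names,
area-group names and coordinates are invented for the example (the actual
UCF syntax reappears in section 5):

# Module area constraint: four slices wide (X8..X11), starting on an
# even four-slice boundary and spanning the full height of the device.
INST "reconf_module" AREA_GROUP = "pblock_reconf";
AREA_GROUP "pblock_reconf" RANGE = SLICE_X8Y0:SLICE_X11Y63;
AREA_GROUP "pblock_reconf" MODE = RECONFIG;
# IOB constraint: lock a pin of the module to an exact site within
# its columnar space.
INST "video_data_in<0>" LOC = "P100";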
Besides the constraints specified above, global-level timing constraints can be
created for the overall design. The output of the initial budgeting phase is a
topX.ucf file containing all placement and timing constraints. Each module is
implemented using this topX.ucf file, extended with the module-specific
constraint requirements.
To finalize the initial budgeting phase, the following command is run in the
initial subdirectory of the corresponding top-level design, after copying the
topX.ucf and topX.ngc files to that directory:

ngdbuild -p device_type -modular initial topX.ngc

3.4 Implementing Active Modules


During this phase, the team members implement the top-level design with only
the active module expanded, using the top-level topX.ucf file. Team members
can implement their modules in parallel. Once each design has been
synthesized, floorplanned, and constrained, the implementation (mapping,
place and route) of all modules (both fixed and partially reconfigurable) for
the design can start. Each module is implemented separately, but always in
the context of the top-level logic and constraints. Bitstreams are generated
for all reconfigurable modules. This section gives an overview of how to
implement each module independently. The following instructions should be
performed for each of the modules in the design.

1 Copy the module implementation files to the active implementation
directory for each module (active/module_name) under the module directory
in which the active module will be implemented:

Synthesized module netlist file module_name.ngc


The file topX.ucf, which the team leader created in the initial
budgeting phase must be copied in the module local directory and
renamed to module name.ucf.

2 Build the module in active module mode as follows:

ngdbuild -uc module_name.ucf -modular module -active module_name \\
top_level_design_directory_path/topX.ngo

The output NGD file is named after the top-level design and contains
implementation information for both the top-level design and the individual
module. The -uc option ensures that the constraints from the local
module_name.ucf file are annotated.
3 Map the module to the library elements using the following command:

map topX.ngd

In this step, only the logic of the design with the active module expanded is
mapped.
4 The place and route of the library elements assigned to the module is done
by invoking the placer and router with:

par -w module_name.ncd top_routed.ncd

The -w option ensures that any previous version of the produced file
top_routed.ncd is overwritten. If the area specified for the module cannot
contain the physical logic for the module because it is sized incorrectly, then
resizing must be done and the .ucf file generated during the initial budgeting
phase must be regenerated with the changes made. The entire initial budgeting
phase should then be performed again. This implies that new .ucf files need
to be copied to each module and that each module needs to be reimplemented.
If everything went fine, the module is now placed and routed at the correct
location within the given bounding box.
5 Run trace on the implemented design to check the timing report, in order
to verify that the timing constraints are met:

trce top_routed.ncd

If the module implementation is unsatisfactory, for example if it does not
meet the timing constraints, the Floorplanner is used to reposition the logic.
In this case, the map, place and route steps must be rerun as described in the
previous steps.
6 The last step in the active implementation is the publishing of the
implemented module file to the centrally located PIMs directory, which was
set up by the team leader. This step is performed using the command:

pimcreate pim_directory_path -ncd top_routed.ncd

This command creates the appropriate module directory inside the PIMs
directory. It then copies the local, implemented module files, including the
.ngo, .ngm and .ncd files, to the module directory inside the PIMs directory
and renames the .ncd and .ngm files to module_name.ncd and
module_name.ngm. The -ncd option specifies the fully routed NCD file that
should be published.
3.5 Module Assembling


Module assembling is the final phase of the modular design. Here the team
leader assembles the previously implemented modules into one top-level design.
In this phase, the previously implemented modules, which were published to
the PIMs directory, and the top-level design file are used. The resulting full
design can be used to generate a bitstream. As before, we provide the
necessary steps.

1 In order to incorporate all the logic for each module into the top-level
design, run ngdbuild as follows:

ngdbuild -modular assemble -pimpath pim_directory_path topX.ngo

ngdbuild generates a topX.ngd file from the top-level topX.ucf file, the
top-level .ngo file, and each PIM's .ngo file.

2 The logic for the full design can then be mapped as follows:

map topX.ngd

MAP uses the .ncd and .ngm files from each of the module directories inside
the PIMs directory to accurately recreate the module logic.

3 The full design can then be placed and routed using the following command:

par -w topX.ncd topX_routed.ncd

The place and route command, par, uses the .ncd file from each of the module
directories inside the PIMs directory to accurately reimplement the module
logic.

4 In order to verify that the timing constraints were met, the trace command
must be invoked as follows:

trce topX_routed.ncd

Partial reconfiguration with the modular design flow can be performed as a
batch job, using one or more scripts that contain the large number of commands
that must be invoked. Examples of such scripts and designs are available for
download on the book's website. Also, tools like Part-Y [71] provide a
comfortable graphical user interface on top of the modular design flow.
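For instance, the active implementation of a single module could be scripted
as sketched below; the directory layout and the name module_name are
placeholders that must be adapted to the actual project, and the NCD name
passed to par must match the output actually produced by map:

#!/bin/sh
# Sketch: batch the active implementation steps of section 3.4.
cd modules/active/module_name
ngdbuild -uc module_name.ucf -modular module -active module_name \
    ../../../top/initial/topX.ngo
map topX.ngd
par -w module_name.ncd top_routed.ncd
trce top_routed.ncd
pimcreate ../../../pims -ncd top_routed.ncd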
4. The Early Access Design Flow
With the introduction of the Early Access Design Flow for creating partially
reconfigurable designs, Xilinx updated and enhanced the existing Modular
Design Flow [227]. The package consists of a number of scripts that modify
the user's ISE installation. Up to now, the update is only viable with the
exact ISE version 8.1i with service pack 1. It can be obtained by registered
users from [226].
The main changes compared to the Modular Design Flow concern usability and
the relaxation of some rigid constraints for partial reconfiguration designs.
Also, new bus macros improve constraint setting and allow more possibilities
for reconfiguration.
First of all, Virtex-4 platform support is added to the PR tools. Besides the
known Spartan-3, Virtex-II and Virtex-II Pro, this enables users to deploy the
current top product of Xilinx's product line.
Partially reconfigurable areas may now also be defined block-wise and do not
have to span the full chip height. This provides the possibility, for Virtex-4
devices and later, to reconfigure tiles rather than whole columns. The
Virtex-II and Spartan FPGAs, in contrast, physically lack this ability. When
a small region is reconfigured, the whole column is written, but the
configuration controller on the FPGA checks whether the reconfiguration would
actually change the content of a configurable logic block, and performs the
reconfiguration only where changes are necessary. The reconfiguration will
not touch blocks that would remain unchanged, as depicted in figure 7.4.
With this, applications may also be realized that require a bus placed across
the whole width of the chip which will not be destroyed when reconfiguration
occurs somewhere in between. Figure 7.5 shows an example design for a video
streaming application that uses several stages of filters and a bus that
transports the pictures from one module to the next. The single filters in the
modules are partially reconfigurable and may be swapped without interrupting
and stalling the bus. In this case, we do not want to alter the bus while
configuring a module on the chain.
Another relaxation is that signals crossing partially reconfigurable blocks
without interacting with them no longer have to be passed through bus macros.
The routing algorithm is now capable of detecting those signals and using
longer-range communication lines that will not be touched by a partial
reconfiguration. This greatly alleviates the timing constraints to be met and
simplifies the design process as a whole. Before, the developer had to build
up the design, have the signals routed and check for signals that had been
guided through the partially reconfigurable area. Also, resources that
happened to lie within this block but had to be used in another, fixed module
would have had to be fed signals through some bus macros. Both cases, given
that the signal does not exchange information with the PR area, are now
handled by the new design flow, and traversing signals no longer have to be
guided through a bus macro, which used to be a tedious piece of handwork.

Figure 7.4. Limitation of a PR area to a block (dark) and the actual dimensions (light)

Figure 7.5. Scheme of a PR application with a traversing bus that may not be interrupted
4.1 New Directory Structure
The Early Access tools support a modified, more sensible directory structure
for PR projects, as depicted in figure 7.6. There, the folders' contents are
presented in the order of usage in the design flow.

Figure 7.6. Improved directory structure for the Early Access Design Flow

The non_pr folder should contain a working version of every combination of
modules that will later be used for partial reconfiguration. Although not
required, it is strongly recommended to verify the functionality of the
different variants of the design before attempting any partial reconfiguration!
synth holds the HDL definitions of the modules. There should be at least one
subfolder for every partially reconfigurable module, a folder base for all the
fixed modules and top for the top-level design. The modules are then
synthesized herein. synth thus replaces the folders hdl and synth from the
Modular Design Flow.
The initial budgeting phase on the top-level module is done in top, using the
synthesized netlist from synth/top, the UCF and the bus macro definition
files.
Initial budgeting for the static modules is accomplished in static.

The activation phase takes place in the reconfigmodules' subdirectories, one
for each partially reconfigurable module.

Finally, the design is assembled in merges, and the full and partial
bitstreams are generated.

4.2 Design Entry and Synthesis


Except for the slightly changed locations due to the modified directory
structure, the general implementation work stays the same as with the Modular
Design Flow. Thus, top-level modules contain:

all global logic, including I/Os,

all design modules, instantiated as black boxes with only ports and port
directions specified, and

signals that connect modules to each other and to I/O pins.

Again, top-level modules include no logic except clock generation and
distribution circuits.

4.3 Bus Macros


Xilinx ships a new set of bus macros to go with the Early Access package.
They are now distinguished by device type, direction, synchronicity and width.
They are directed, i.e. no longer deployable in any direction, and either
clocked, to run synchronously with the data, or unclocked like the old bus
macros. The naming convention yields terms like
busmacro_device_direction_synchronicity_width.nmc, where
device can be
xc2vp - for a Virtex-II Pro,
xc2v - for a Virtex-II or
xc4v - for a Virtex-4 FPGA.
direction is either one of
r2l - right-to-left
l2r - left-to-right
b2t - bottom-to-top (Virtex-4 only)
t2b - top-to-bottom (Virtex-4 only).
synchronicity may be
sync - synchronous or
async - asynchronous
width is either
wide - spanning four CLBs or
narrow - reaching over two CLBs.
A sample name is busmacro_xc2vp_r2l_async_narrow.nmc.
The direction of the macro is to be seen geographically: right-to-left means
data flowing from the right side of a border to the left side. If the right
side is inside a partially reconfigurable area, the bus macro is an output bus
macro; otherwise it is an input bus macro. The same applies to the other cases
with the left-to-right, bottom-to-top and top-to-bottom bus macros. The usage
is shown in figure 7.7.

Figure 7.7. Usage of the new EA bus macros
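As an illustration, the following VHDL fragment sketches the instantiation of
one such bus macro. The component name follows the naming convention above,
but the pin names input0..input7 and output0..output7 as well as all signal
names are assumptions made for this sketch and must be checked against the
.nmc files shipped with the package.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch: one right-to-left bus macro carrying eight signals from a
-- reconfigurable module (right side) into the static design (left).
entity bm_example is
  port ( from_prm  : in  std_logic_vector(7 downto 0);
         to_static : out std_logic_vector(7 downto 0) );
end bm_example;

architecture structural of bm_example is
  -- Port names are assumed; check the shipped .nmc files.
  component busmacro_xc2vp_r2l_async_narrow
    port ( input0, input1, input2, input3,
           input4, input5, input6, input7 : in  std_logic;
           output0, output1, output2, output3,
           output4, output5, output6, output7 : out std_logic );
  end component;
begin
  bm0 : busmacro_xc2vp_r2l_async_narrow
    port map ( input0 => from_prm(0), input1 => from_prm(1),
               input2 => from_prm(2), input3 => from_prm(3),
               input4 => from_prm(4), input5 => from_prm(5),
               input6 => from_prm(6), input7 => from_prm(7),
               output0 => to_static(0), output1 => to_static(1),
               output2 => to_static(2), output3 => to_static(3),
               output4 => to_static(4), output5 => to_static(5),
               output6 => to_static(6), output7 => to_static(7) );
end structural;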

Figure 7.8 shows the difference between a wide and a narrow type bus macro.
The classification concerns only the range of CLBs that one of these bus
macros bridges. Both kinds offer eight signal lines to cross a boundary.
The unoccupied CLBs between wide bus macros can be used by user logic or
other bus macros. Bus macros can also be nested up to three times in the same
y-row, as depicted in figure 7.9. With the old bus macros, only four signals
could traverse the boundary at this position, whereas the new macros put
through up to 24 lines. The three staggered bus macros do not have to be of
the same type or direction.

Figure 7.8. Narrow (7.8(a)) and wide (7.8(b)) bus macros spanning two or four CLBs

Especially during reconfiguration, signals from partially reconfigurable
modules might toggle unpredictably. The new bus macros therefore provide
versions that can be disabled when appropriate, outputting a stable zero
instead of the fluctuating signals.

4.4 Initial Budgeting


For the initial budgeting as well as for the activation phase, similar steps
have to be taken as with the Modular Design Flow. The UCF generation will
have a different outcome, since area constraints may now be set more freely
and bus macros have to be deployed more carefully concerning their type and
direction. The placement rules are generally defined as follows:

Figure 7.9. Three nested bus macros

the x-coordinate of the CLB to which a bus macro is locked has to be
divisible by two
the y-coordinate also has to be divisible by two
Virtex-4 devices require coordinates divisible by four, due to the location
of block RAMs and DSPs
bus macros have to traverse a region boundary; each of the two endpoints
has to lie completely in a different area (a UCF sketch for such a
placement follows this list)
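In the UCF, locking a bus macro instance to a legal position then takes a
single LOC constraint. The instance name below is invented for the example,
and the coordinates merely obey the rules above:

# Lock a bus macro to the CLB at X20/Y32 (both divisible by two).
INST "bm_video_data" LOC = "SLICE_X20Y32";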
Further actions for the initial budgeting, like setting the timing constraints,
follow the same guidelines as the Modular Design Flow. One must keep in mind
that signals that do not interfere with a module can cross that module without
having to run through a bus macro.
To close the initial budgeting phase, the top-level design has to be
implemented. In the directory pr_project/top, the command

ngdbuild -uc system.ucf -modular initial system.ngc

has to be invoked. In pr_project/static, the static portion of the design is
now budgeted. The .nmc files for the used bus macros have to be placed in
this directory. The necessary commands are:

ngdbuild -uc ../top/system.ucf -modular initial ../top/system.ngc
map system.ngd
par -w system.ncd system_base_routed.ncd
The routed outputs are placed in pr_project/top and pr_project/static,
respectively. The process yields another file, static.used, that contains a
list of routes within the partially reconfigurable regions that are already
occupied by static design connections. This file is used as an input for the
placement and routing of the partially reconfigurable modules later on; it
allows signals to be guided through PRMs without using bus macros.
4.5 Activation
In the activation phase, the partially reconfigurable modules are implemented.
First, the previously generated static.used is copied to each of the modules'
directories (e.g. pr_project/reconfigmodules/prm_a1) and renamed to
arcs.exclude. Each module is then run through the known sequence of ngdbuild,
map and par, as shown below for prm_a1. In the directory
pr_project/reconfigmodules/prm_a1, do the following:

ngdbuild -uc ../../top/system.ucf -modular module -active \\
prm_component_name ../../top/system.ngc
map system.ngd
par -w system.ncd prm_a1_routed.ncd

Note that prm_component_name is the component name, not the instance name,
of the module. par automatically looks for and incorporates the arcs.exclude
file.

4.6 Final Assembly


The last step is to assemble the static and reconfigurable modules into the
top-level design. This work is greatly facilitated by two new scripts,
pr_verifydesign and pr_assemble. For this, a routed top-level design, a set of
the static modules and one PRM design have to be copied to a separate
subdirectory for each PRM under pr_project/merges. For every PRM (e.g. in
pr_project/merges/prm_a1), the following has to be executed:

pr_verifydesign system_base_routed.ncd prm_a1_routed.ncd
pr_assemble system_base_routed.ncd prm_a1_routed.ncd

It should be checked whether the final netlists (e.g.
pr_project/merges/prm_a1/static_routed.ncd) meet the timing constraints. The
timing analysis is performed with the .pcf file for the static design.
The partial and full bitstreams to reconfigure the targeted FPGA are then
found in merges. Before using them, the developer has to make sure that the
assembly process produced proper files by examining the generated .summary
files.

4.7 Merging Multiple Partially Reconfigurable Areas


To merge more than one partially reconfigurable region in one design, an
iterative invocation of pr_verifydesign and pr_assemble is needed. With each
iteration, a new partially reconfigurable module is merged into the existing
overall design.
The corresponding actions are:

1 Merge each PRM of one reconfigurable area with the static design.
pr_verifydesign and pr_assemble generate intermediate designs with just the
one PRM contained, and partial bitstreams that can be used to reconfigure
the FPGA at this particular region afterwards. The necessary commands are:

merges/prmA/pr_verifydesign system_base_routed.ncd \\
prm_a1_routed.ncd

merges/prmA/pr_assemble system_base_routed.ncd prm_a1_routed.ncd

merges/prmA/pr_verifydesign system_base_routed.ncd \\
prm_a2_routed.ncd

merges/prmA/pr_assemble system_base_routed.ncd prm_a2_routed.ncd

2 Merge additional modules for other partially reconfigurable areas.
For every additional region, the output of the previous step is taken as
input: the partially reconfigurable module b1 is to be merged with the
combined a1 and base design.

merges/prmB1withA1/pr_verifydesign \\
../../prmA/prm_a1/base_routed_full.ncd \\
../../reconfigmodules/prm_b1/prm_b1_routed.ncd

merges/prmB1withA1/pr_assemble \\
../../prmA/prm_a1/base_routed_full.ncd \\
../../reconfigmodules/prm_b1/prm_b1_routed.ncd

Every combination of partially reconfigurable modules has to be merged. For
two partially reconfigurable regions, this accordingly requires:

merges/prmB1withA2/pr_verifydesign \\
../../prmA/prm_a2/base_routed_full.ncd \\
../../reconfigmodules/prm_b1/prm_b1_routed.ncd

merges/prmB1withA2/pr_assemble \\
../../prmA/prm_a2/base_routed_full.ncd \\
../../reconfigmodules/prm_b1/prm_b1_routed.ncd
merges/prmB2withA1/pr_verifydesign \\
../../prmA/prm_a1/base_routed_full.ncd \\
../../reconfigmodules/prm_b2/prm_b2_routed.ncd

merges/prmB2withA1/pr_assemble \\
../../prmA/prm_a1/base_routed_full.ncd \\
../../reconfigmodules/prm_b2/prm_b2_routed.ncd

merges/prmB2withA2/pr_verifydesign \\
../../prmA/prm_a2/base_routed_full.ncd \\
../../reconfigmodules/prm_b2/prm_b2_routed.ncd

merges/prmB2withA2/pr_assemble \\
../../prmA/prm_a2/base_routed_full.ncd \\
../../reconfigmodules/prm_b2/prm_b2_routed.ncd

Over the iterations, several of the files that are created are equivalent to
each other. In this case, for partially reconfigurable area B,

prmB1withA1/prm_b1_blank.bit,
prmB1withA2/prm_b1_blank.bit,
prmB2withA1/prm_b1_blank.bit and
prmB2withA2/prm_b1_blank.bit
are equal. Equivalent bitstreams for module b1 are

prmB1withA1/prm_b1_routed_partial.bit and
prmB1withA2/prm_b1_routed_partial.bit

and for module b2

prmB2withA1/prm_b2_routed_partial.bit and
prmB2withA2/prm_b2_routed_partial.bit.

Four variations of the full bitstream are generated, representing the four
combinations of the two partially reconfigurable modules in this example:

prmB1withA1/base_routed_full.bit,
prmB1withA2/base_routed_full.bit,
prmB2withA1/base_routed_full.bit and
prmB2withA2/base_routed_full.bit
In practice, only one full bitstream is needed to load the FPGA. The
reconfiguration can then be conducted with the partial bitstreams. Which full
bitstream to take is up to the user.

5. Creating Partially Reconfigurable Designs


This section gives a detailed description of the steps to take when recreating
a design to be partially reconfigurable. The next part summarizes two sample
designs used to explain the single tasks to accomplish. The understanding of
this guide requires a profound knowledge of VHDL, ISE and EDK. For a quick
glance at VHDL see [208]; for an in-depth introduction consult, e.g., [131].
Introductions to ISE and EDK are provided in [229] and [230]. For the Early
Access Reconfiguration Flow software to work, it is crucial to deploy the
correct version of ISE. The current issue of the update package requires ISE
8.1.01i (Service Pack 1) and updates it to ISE 8.1.01i PR8 resp. PR12. The
EDK system used has version 8.1.02i.

5.1 Animated Patterns


The sample design Animated Patterns is a partially reconfigurable pattern
generator for the XUP development board [228]. It is based on a project taken
from a previously published tutorial [40]. The design creates a simple moving
pattern and directs it to a VGA-compatible screen. Depending on the current
pixel position drawn on the monitor, the reconfigurable module Produce_Color
evaluates the color to be set for the next pixel. The system with its two
areas, reconfigurable and fixed, is depicted in figure 7.10.

Figure 7.10. Scheme of Animated Patterns



Partially reconfiguring the device at run-time here means exchanging
Produce_Color while the output to the monitor continues uninterruptedly.
Control is provided to the user through a simple C program accessible over a
serial terminal. If a new partial bitstream is chosen, the program loads the
necessary partial bitstream from a CompactFlash card connected to the
development board and transmits the data to the ICAP, the Internal
Configuration Access Port, to reconfigure the device. The ICAP allows certain
families of FPGAs (currently the Xilinx Virtex-II, Virtex-II Pro, Virtex-4
and Virtex-5) to be reconfigured from within the FPGA by a processor, in this
case the PowerPC sitting in the Virtex-II Pro on the XUP development board
itself. The design can be found on the book's web page.

Figure 7.11. Two patterns that can be created with modules of Animated Patterns. Left: the
middle beam is moving. Right: the diagonal stripes are moving.

5.2 Video8
The second sample design is called Video8 and is used for the code and
command execution examples in the following guide. It deploys the Digilent
VDEC1 extension card [70] for the Virtex-II Pro XUP development board [228]
to digitize an analog video stream from a camera. The extension card is
attached to the XUP via a proprietary slot and is technically connected
through an I2C bus.
As shown in figure 7.12, the data stream is taken into the system by video_in.
The module converts the stream to the RGB color space and hands it over to a
filter, employing e.g. a Sobel implementation for edge detection. This filter
module will be reconfigurable at the end of this tutorial. The filtered data
is then processed in vga_out to be sent to a VGA-compliant screen.
The embedded hard-core processor on the FPGA is used to configure the video
digitizer card and the ICAP on start-up. It is then used to run a simple
program to choose which partial bitstream to load next. The input and output
are brought to the user's console on a remote PC through the serial port.


Figure 7.12. Scheme of Video8

5.3 Measures to Take for PRM Creation


Initially, a completely working project should mark the starting point for the
creation of a partially reconfigurable design. The initial project
Video8_non_pr marks the starting point for the instructions that follow.
1. Move partially reconfigurable parts to the top-level module
Partially reconfigurable modules may not be submodules of any module other
than the top-level module. To move their instantiations to the top-level,
generally two approaches are possible:

reconstruct the initial design to place the reconfigurable parts, and also
the module instantiations connected to them, in the top-level. On the one
hand, the resulting code is much easier to understand, and modules can be
developed straightforwardly by different engineers according to the modular
design flow. On the other hand, the facilitation that comes, for example,
from using EDK for the basic design might be outweighed by the amount of
work involved in extracting and reconnecting all the necessary modules.
This weighs all the more regarding the synthesis process: the placement
tool places modules according to their connectivity with each other and
the constraints given in the UCF, not according to their actual position
in the hierarchy. Thus, reconstructing the design might be useless, since
the placement is done independently.
In the Video8_non_pr example, video_in and vga_out are directly affiliated
with the partially reconfigurable module-to-be. As suggested in figure
7.13, at least these two modules would have to be instantiated in the
top-level module. Additional difficulties arise from the use of the OPB to
connect the filter module. Either the developer has to adapt the filter
module and provide other means of connection to the rest of the system, or
the OPB has to be brought up to the top-level as well. This again requires
work and other modules to be changed accordingly.

Figure 7.13. Reconstructing example Video8 to place the partially reconfigurable part and
connected modules in the top-level.

The other solution is to only replace the old instantiations of the
partially reconfigurable modules with instantiations at the top-level and
to amend the entities that formerly hosted the modules: ports have to be
added to redirect the input data for the partially reconfigurable modules
up the module hierarchy to their new location at the top-level module, and
the output data back down to be fed into the system again. Figure 7.14
depicts the new structures. The big advantage of this approach is the
minimal adaptation work to be done. An initial EDK design may be used as
is and can easily be changed.

The module aimed at for partial reconfiguration, rgbfilter, is initially
instantiated in the entity system, the top-level module of the initial EDK
design. To place rgbfilter in the top-level module according to the second
approach, video_in has to be expanded as follows:

entity video_in is
  port (
    ...
    -- ports going out to the rgbfilter
    LLC_CLOCK_to_filter : out std_logic;
    R_out_to_filter : out std_logic_vector(0 to 9);
    G_out_to_filter : out std_logic_vector(0 to 9);
    B_out_to_filter : out std_logic_vector(0 to 9);
    h_counter_out_to_filter : out std_logic_vector(0 to 9);
    v_counter_out_to_filter : out std_logic_vector(0 to 9);
    valid_out_to_filter : out std_logic;

    -- ports coming back from the rgbfilter
    R_in_from_filter : in std_logic_vector(0 to 9);
    G_in_from_filter : in std_logic_vector(0 to 9);
    B_in_from_filter : in std_logic_vector(0 to 9);
    h_counter_in_from_filter : in std_logic_vector(0 to 9);
    v_counter_in_from_filter : in std_logic_vector(0 to 9);
    valid_in_from_filter : in std_logic
    ...
  );
end video_in;

Figure 7.14. Moving a partially reconfigurable module to the top-level design on the example of Video8

Since the ports of a submodule cannot be connected directly to those of its
hosting module, intermediate signals have to be introduced in the hosting
module. The entity video_in from the sample design has to be appended
accordingly, after defining the signals themselves:

...
-- signals out to the filter:
R_out_to_filter <= R1;
G_out_to_filter <= G1;
B_out_to_filter <= B1;
h_counter_out_to_filter <= h_counter1;
v_counter_out_to_filter <= v_counter1;
valid_out_to_filter <= valid1;

-- processed signals in from the filter:
R2 <= R_in_from_filter;
G2 <= G_in_from_filter;
B2 <= B_in_from_filter;
h_counter2 <= h_counter_in_from_filter;
v_counter2 <= v_counter_in_from_filter;
valid2 <= valid_in_from_filter;
...

The top-level entity must contain a similar construction.

2. Base design rebuilding

Many designs for the Xilinx FPGAs take advantage of EDK and its advanced
wizards. Complete functioning systems on chip, including the controlling
software, can be created with a simple sequence of point-and-clicks. In many
cases, the static base system, including the microprocessor, the communication
over serial lines, a RAM controller and so forth, is created with EDK. The
controlling software that runs on the microprocessor may also be part of the
system, leading to the creation of a file in the .elf format. This file,
supported by the EDK, describes the hardware configuration and the memory
mapping together with the software that will be executed on the processor.
Up to now, EDK does not support partial reconfiguration directly. The
necessary phases of the design flow still have to be done manually. Systems
implemented with EDK have to be exported to an ISE-format project, synthesized
as required and then plugged into the top-level module as a submodule.
In the case of Video8_non_pr, the basic system consists of the infrastructure
for the PowerPC on the Virtex-II Pro, an RS-232 controller, a video-in and a
VGA-out module. The design is created such that the partial bitstreams to
reconfigure the FPGA are loaded from a CompactFlash card attached to the
FPGA.
Since the reconfigurable module in Video8_non_pr has been included as a core
(Pcore) in the EDK project, the additional port definitions previously
described have to be applied to the system entity generated by the EDK as
well. This can easily be accomplished by adding external ports to the project.
These changes will be written to the project's MHS file, similar to the
following sample lines:

PORT LLC_CLOCK_OUT = VIDEO_IN_PR_0_LLC_CLOCK_to_filter, DIR = O
PORT R_in = VIDEO_IN_PR_0_R_out_to_filter, DIR = O, VEC = [0:9]
PORT G_in = VIDEO_IN_PR_0_G_out_to_filter, DIR = O, VEC = [0:9]
PORT B_in = VIDEO_IN_PR_0_B_out_to_filter, DIR = O, VEC = [0:9]
PORT h_counter_in = VIDEO_IN_PR_0_h_counter_out_to_filter, \\
DIR = O, VEC = [0:9]
PORT v_counter_in = VIDEO_IN_PR_0_v_counter_out_to_filter, \\
DIR = O, VEC = [0:9]
PORT valid_in = VIDEO_IN_PR_0_valid_out_to_filter, DIR = O
PORT R_out = RGBFILTER_0_R_out, DIR = I, VEC = [0:9]
PORT G_out = RGBFILTER_0_G_out, DIR = I, VEC = [0:9]
PORT B_out = RGBFILTER_0_B_out, DIR = I, VEC = [0:9]
PORT h_counter_out = RGBFILTER_0_h_counter_out, \\
DIR = I, VEC = [0:9]
PORT v_counter_out = RGBFILTER_0_v_counter_out, \\
DIR = I, VEC = [0:9]
PORT valid_out = RGBFILTER_0_valid_out, DIR = I

2.1. Additional Modules

To make the best use of the EDK, some of the needed components can be added
and connected in the EDK. A JTAG-PPC controller can help to debug components
connected to the PowerPC core. The opb_sysace module is the necessary
interface to the SystemACE device that is deployed for the management of the
CompactFlash card attached to the FPGA. opb_timer is used here to measure the
time the reconfiguration takes. Finally, the opb_hwicap functions as an
interface to the ICAP for self-reconfiguration.
2.2. Bus Connections
The needed bus connections for the additional modules can also be set up with
EDK using the graphical user interface. This prevents a lot of errors
originating from the misspelling of variables and names, and it also eases the
task for less experienced users. opb_hwicap and opb_timer have to be connected
to the same OPB to which the SystemACE module is connected.
2.3. Setting Address Ranges and Software Locations
To ensure correct functioning with the parameters set in the controlling
software, some of the project's modules have to be set to certain addresses,
as stated in table 7.1.

Module Name    Start Address    Size
opb_sysace     0x41800000       64KB
opb_timer      0x41c00000       64KB
opb_hwicap     0x40200000       64KB
docm_ctlr      0xe8800000       32KB
iocm_ctrl      0xffff8000       32KB

Table 7.1. New address ranges to be set in EDK

docm_ctlr and iocm_ctrl are the controllers for data and instructions to be
stored in on-chip block RAMs. The program that controls the reconfiguration
of the FPGA is in this case too large to fit in a smaller RAM area. It could
generally be put into the DDR RAM as well, but in this particular design the
framebuffer for the VGA output is placed in the DDR RAM, consuming a great
share of the bandwidth of the bus it is connected to. Thus, placing the
program in the DDR RAM as well generates unwanted distortions in the VGA
output.
2.4. Rewriting the Control Software
The program deployed to control the non-PR design has to be extended by the
initialization of the ICAP module and a means to choose which partial
bitstream to load next.
For the sample design, the control program main.c contains just the
initialization of the video decoder card. The initialization routines for the
ICAP and the SystemACE have to be added, as well as a menu that prompts the
user for the next partial bitstream.
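The following C sketch outlines the extended control loop. The helper
functions (init_video_decoder, icap_init, sysace_load_bitstream, icap_write)
and the bitstream file names are placeholders for the corresponding driver
calls of the EDK board support package, not the actual API names:

#include <stdio.h>

/* Placeholder wrappers around the EDK drivers for the video decoder,
 * the opb_hwicap and the opb_sysace cores; not the real API names. */
extern void init_video_decoder(void);
extern void icap_init(void);
extern int  sysace_load_bitstream(const char *name,
                                  unsigned char *buf, int max);
extern void icap_write(const unsigned char *buf, int len);

static unsigned char bitstream[256 * 1024]; /* one partial bitstream */

int main(void)
{
    int choice, len;

    init_video_decoder();  /* already present in the non-PR design */
    icap_init();           /* new: set up the ICAP core            */

    while (1) {
        /* menu prompting the user for the next partial bitstream */
        printf("1: sobel filter  2: pass-through\n");
        if (scanf("%d", &choice) != 1)
            break;
        /* load the chosen bitstream from the CompactFlash card ... */
        len = sysace_load_bitstream(choice == 1 ? "sobel.bit"
                                                : "pass.bit",
                                    bitstream, (int)sizeof(bitstream));
        /* ... and send it to the ICAP to reconfigure the device */
        if (len > 0)
            icap_write(bitstream, len);
    }
    return 0;
}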
2.5. Export to ISE
The generated design has to be synthesized to become a submodule of the
top-level design. For this, EDK supports the export of a project to ISE,
where detailed options for the synthesis can be set. Alternatively, the
module can be synthesized using xst. To export the project, the following
settings have to be made in Project|Project-Options|Hierarchy and Flow:
Design is a sub-module with name system_i
use project navigator implementation flow (ISE)
don't add modules to an existing ISE-project

3. Creating a Top-Level Module

In order to integrate the different modules, a top-level module has to be
created. It should contain just module instantiations and clocks. Here, I/O
buffers are put up and bus macros are instantiated. Input and output ports
are simply directed through the appropriate buffer instance. Bidirectional
inout signals are split up into in-, out- and tristate control signals, using
three corresponding buffers.
Instances have to be named, especially those that will be static and
reconfigurable, since their definition is placed in the UCF and the connection
to the design is established via the instance name.
The partially reconfigurable module rgbfilter requires bus macros for the
signals R, G and B, v_counter and h_counter, and valid from and to the
rgbfilter. Tools like PlanAhead or FPGA Designer can facilitate this work.
For detailed information on how and why to use bus macros, see sections 3.3
and 4.3. An example of the resulting UCF can be found on the book web page.

4. Synthesis
The synthesis of the partially reconfigurable project takes place in the
subfolder synth of the project base folder. Here, corresponding subfolders for
the static and reconfigurable modules and the top-level module can be found.
It should be mentioned again that any submodules, including the partially
reconfigurable ones, have to be synthesized without automatically adding I/O
buffers; thus the option -iobuf has to be set to no. I/O buffers only have to
be included in the top-level module. The exported project contains the system
as a submodule system_i of type system and should already include the needed
ports for the data flow to and from the reconfigurable module. system_i is
ordered under a generated super module system_stub and can therefore be
included directly in a top-level module. These ports should be created by EDK
when entering and wiring external ports as described above. If not, the
generated HDL files describing and instantiating entity system resp. system_i
(system.vhd resp. system_stub.vhd) have to be extended with the necessary
ports.
For the sample design, folders for the reconfigurable modules (mod_*) as well
as a top and an edk directory must be created. There is no separate static
module except the EDK system; therefore, no static folder is required. The
ports to add to the entity system are the same as when changing the former
super module of the module that is to be partially reconfigurable.
5. Appending the UCF
Taking a UCF built, for example, by EDK, several amendments have to be made
to make partial reconfiguration work. As stated before, the definition of the
areas in which to place the partial and static blocks and the locations of
the bus macros have to be declared in the UCF.
To set a region to be partially reconfigurable, the UCF is extended by lines
like:

INST "reconfig_module" AREA_GROUP = "pblock_recon_module";
AREA_GROUP "pblock_recon_module" RANGE=SLICE_X30Y64:SLICE_X45Y95;
AREA_GROUP "pblock_recon_module" MODE=RECONFIG;

With this, the reconfigurable module reconfig_module is associated with the
area group pblock_recon_module, whose bounding box is defined by the rectangle
between slices X30Y64 and X45Y95. It is then marked reconfigurable. A fixed
part, like a basic system created with EDK, can be set with:

INST "system_i" AREA_GROUP = "pblock_fixed_area";
AREA_GROUP "pblock_fixed_area" RANGE=SLICE_X46Y16:SLICE_X87Y149;

Omitting MODE=RECONFIG tells the parser that the block will be fixed.
Although the UCF is a simple text file and can be edited manually, tools
like PlanAhead can be a big help with their architecture-dependent
graphical user interfaces.
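
The bus macros themselves are placed with LOC constraints on their instance names, straddling the boundary between the static and the reconfigurable region. A sketch with hypothetical instance names and coordinates:

INST "bm_in"  LOC = "SLICE_X28Y64";
INST "bm_out" LOC = "SLICE_X28Y68";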
6. The Build Process
The build process starts with the synthesis of all the involved modules.
The submodules should not include I/O buffers; these must all be
included only in the top-level module. The initial budgeting is already
partly done and must be completed now. All other phases, from the
activation to the final assembly, follow the steps described in section 4.

6. Partial Reconfiguration using Handel-C Designs


The structure of the directory previously described remains the same if another
HDL like Handel-C or even SystemC is to be used. Using Handel-C, the goal
is to provide the implementation of the top-level designs and, separately, the
implementation of each module in a comfortable way. The Handel-C code
for the entire system must be divided into code for the modules, each with its
own interface for connecting to other modules. The integration is then done by
connecting the modules together using signals in the top-level design as shown
in figure 7.1. For each module to be reconfigured later, a separate top-level
must be produced. The transition from one design to the next one is then
done later using either full reconfiguration or partial reconfiguration, with
modules representing the difference from one top-level to the next one.
We explain this modular design for Handel-C by means of the example
provided below.
We consider a system containing an adder in its first configuration. The
system can be partially reconfigured to implement a subtractor in place
of the adder. Both modules (adder and subtractor) have to be implemented
separately. The implementation of the adder is provided in the following
Handel-C code segment.

void main() {
    unsigned 32 res;

    // Interface definition
    interface port_in(unsigned 32 var_1
        with {busformat="B<I>"}) Invar_a();

    interface port_in(unsigned 32 var_2
        with {busformat="B<I>"}) Invar_b();

    interface port_out() Outvar(unsigned 32 Res =
        res with {busformat="B<I>"});

    // Addition
    res = Invar_a.var_1 + Invar_b.var_2;
}

The first part of the code is the definition of the interfaces for communication
with other modules; the second part realizes the addition of the input values
coming from the input interfaces, and the output value is sent to the output
interface. We need not provide the implementation of the subtractor, since it
is the same as that of the adder, except that instead of adding we subtract. Having
implemented the two modules separately, each module is inserted in a
separate top-level. Apart from the use of the adder or subtractor, the two top-levels are
the same. The design can then be reconfigured later to exchange the adder
for the subtractor. Connecting the module in a top-level design is done as shown in
the following code segment.

unsigned 32 operand1, operand2;
unsigned 32 result;

interface adder(unsigned 32 Res)
    my_adder(unsigned 32 var_1 = operand1,
             unsigned 32 var_2 = operand2)
    with {busformat="B<I>"};

void main() {
    operand1 = produceoperand(0);
    operand2 = produceoperand(3);
    result = my_adder.Res;
    dosomethingwithresult(result);
}
According to the top-level design being implemented, the adder will be
replaced by the subtractor. The next question is how to keep the signal integrity of
the interfaces. Since there is no bus macro facility available in Handel-C, the bus
macros provided by Xilinx must be used as VHDL components in a Handel-C
design. For instructions on integrating VHDL code in Handel-C, consult the
Celoxica Handel-C reference manuals [125]. Bus macros are provided with the
Xilinx application notes on partial reconfiguration [128] [227]. Before setting
constraints on a Handel-C design, the design has to be compiled first.
Afterwards, the resulting EDIF files must be opened with a text editor, and
the longest pattern that contains the name of the module as declared in the
top-level module must be selected. This is necessary because the Handel-C
compiler generates EDIF code and automatically adds some characters to the
module name used in the original design. If the original module name is used,
it will not be recognized in the following steps of the modular design flow.
With the EDIF files for the modules and those of the top-level designs, we now
have all we need to run the modular design flow explained in section 3.

7. Platform design
One of the main factors that has hindered the implementation of the on-line
placement algorithms developed in the past and presented in chapter 5 is the
development process of reconfigurable modules for the Xilinx platforms. As
we have seen in the previous section, partial reconfiguration design is done
with many restrictions, which make a systematic development process for
partial reconfiguration very difficult. Each module placed at a given
location on the device is implicitly assigned all the resources in that area.
This includes the device pins, clock managers and other hard macro
components like the embedded multipliers and the embedded BlockRAMs.

Figure 7.15. Modules using resources (pins) not available in their placement area (the complete
column) must use feed-through signals to access those resources

As illustrated in figure 7.15, a module using resources outside its placement
area must use connections running through other modules in order to access
those resources. We call those signals, used by a given module and crossing
other modules, feed-through signals. Using feed-through lines to access
resources has two negative consequences:

Cumbersome design automation: Each module must be implemented
with all possible feed-through channels needed by other modules to reach
their resources. This problem was already identified in section 1, where
dedicated channels are preconfigured on the device to be used by
modules at run-time. Because we only know at run-time which module
needs to feed a signal through, many channels reserved for a possible
feed-through become redundant.

Non-relocatability of modules: Modules accessing resources, like pins, in a given
area of the chip are no longer relocatable, because they are compiled for fixed
locations where a direct access to their pins must be available.

Most of the FPGA platforms available on the market do not provide
solutions to the problems previously mentioned. Many systems on the market
offer various interfaces for audio and video capturing and rendering, for
communication and so forth. However, each interface is connected to the
FPGA using dedicated pins at a fixed location. Modules wishing to access a
given interface like the VGA must be placed in the area of the chip where
the VGA signals are available, such that they can be assigned those signals
among other resources. This makes a relocation of modules at run-time
impossible. Relocation provides the possibility to have only one bitstream
for a module, representing the implementation of that module for a given
location. At run-time, the coordinates of the module are modified to match the
location assigned to the module by the on-line placer. Without relocation,
each module must be compiled for each possible position on the device where
it can later be placed. A copy of the component bitstream must therefore be
available for each location on the chip. The amount of storage needed for
such a solution does not make this approach attractive.
A research platform, the Erlangen Slot Machine (ESM), which we present
next, was developed at the University of Erlangen to overcome the
drawbacks of existing platforms and therefore allow unrestricted on-line
placement on an FPGA platform. Through the presentation of the ESM, we
also hope to highlight some of the requirements in platform design for
reconfigurable computing.
The goals that the ESM designers had in mind while developing a new
platform were to overcome the deficiencies of existing FPGA platforms by
providing:

A new, highly flexible FPGA platform in which components need not be
fixed at given chip locations all the time.

A suitable tool support whose goal is to ease the development process of
modules for run-time reconfiguration and communication, and to make
efficient use of the architecture.

While the tooling aspect is important, it is not part of the basic
requirements for designing the platform itself. We therefore focus only on the
architectural aspects of the platform design in the next sections.

7.0.1 Drawbacks of existing systems


The practical realization and use of partial and dynamic reconfiguration
on the currently existing FPGA-based reconfigurable platforms is highly
restricted, due to the following factors:

1. Limitation of reconfiguration on actual FPGAs: Very few FPGAs
allowing partial reconfiguration exist on the market. Those few FPGAs,
like the Virtex series, allow only a restricted partial reconfiguration. As stated
in section 1, up to the Virtex II Pro, the reconfiguration had to be done
column-wise, meaning that loading a module in a given area affects
all the modules above and below that module. From the Virtex 4 and
upwards, the reconfiguration of a module affects only those components
in the same block which share a common column with the
reconfigurable module. The block does not span the complete device
column, but only a limited number of rows.

2. I/O pin problematic: The design of existing FPGA platforms makes it
more difficult to design for partial reconfiguration. Most of the existing
platforms have peripheral components like video, RAM, audio, ADC
and DAC connected at fixed locations on the device. This has the
consequence that a module is stuck in the given region where it has
access to its pins, which makes a relocation very difficult. Another
problem related to the pin connection is that the pins belonging
to a given logical group, like video or audio, are not physically
grouped together. On many platforms they are spread around the device.
A module wishing to use a given peripheral will have to feed many lines
through many different components.
This situation is illustrated in figure 7.16 for the Celoxica RC200 platform.
Two modules are shown, among them a VGA module, implemented
on the FPGA. The VGA module uses a large number of pins at the bottom
part of the device and also on the right side. It is possible to implement a
module without feed-through only in the first two columns on the left side.
Most reconfigurable modules will need more than the two available
columns of resources to be implemented. Feed-through lines that run
through the reconfigurable module must therefore be used, in order for the
VGA module to access the pins covered by the reconfigurable module.

Figure 7.16. Illustration of the pin problematic on the RC200 board
Similar situations are present not only on the Celoxica boards. The XESS
boards [56], the Nallatech boards [165] and the Alpha boards [8] face the
same limitations.
On the XF-Board [177] [211] from the ETH in Zurich, the connections to
the peripherals are done on the sides of the device. Each module accesses its
peripherals through an operating system (OS) layer implemented on the
left and right parts of the device and between the components swapped in
and out. This approach provides only a restricted solution to the problem,
since all modules must be implemented with a given number of feed-through
lines and interfaces to access the OS layer for communication
with the peripherals. The intermodule communication as well as the
communication between a module and its peripherals is done via buffers
provided by the OS. This indirect communication affects the
performance of the system. Many other existing platforms like the
RAPTOR board [132] and the Celoxica RC1000 and RC2000 [43] are PCI
systems, which require a workstation for operation. Their use in
stand-alone systems, as is the case in embedded applications, is not possible.

3. Intermodule communication: One of the biggest problems not considered in
the design of reconfigurable platforms is the intermodule communication.
Modules placed at run-time on the device may need to exchange data
with each other. This communication requirement is dynamic, due to
the dynamism of on-line placement. For modules placed far from each other, it
will not be possible to feed communication lines through several modules in
order to establish a connection. New paths are then necessary in order to
allow a dynamic communication to take place at run-time.
The previous limitations were the primary motivation for the design of the
Erlangen Slot Machine (ESM), whose concept we present next.

7.0.2 The Erlangen Slot Machine


The Erlangen Slot Machine (ESM) is built of a BabyBoard mounted on a
MotherBoard. The separation of the system into two boards allows the
BabyBoard, which contains the reconfigurable device, to be used in a wide variety of
systems. For the integration of the ESM into a new platform, no redesign of the
complete system is necessary. Only a new MotherBoard must be provided,
according to the computational requirements of the new environment. A
multimedia system, for example, will provide a MotherBoard with multimedia
devices like video and audio. In an automotive system, the MotherBoard will
mostly contain sensors and actuators. The BabyBoard can be mounted on a PCI
MotherBoard in a computer system or on a stand-alone MotherBoard in an
embedded system. Currently, only one variant of the MotherBoard is available,
which is mainly targeted at applications in multimedia and signal processing. In the
following, we provide a detailed description of the boards' functionality.

7.0.3 The BabyBoard: Computation and Reconfigurable Engine


The reconfigurable engine of the ESM platform is a BabyBoard, which
features a Xilinx Virtex II-6000 FPGA, several SRAMs and a
configuration circuit. Due to the restrictions in the reconfiguration of the
Virtex FPGAs, which can be done only column-wise, the architecture was
adapted to match the following:
Free relocation of modules: On-line placement of modules on a
reconfigurable device, in this case the FPGA, is done by downloading a partial
bitstream implementing a given task into the FPGA. The full and partial
bitstreams represent circuits implemented on a given region of the device at
compile time. In order to place a module at a location other than the
one for which it was compiled, a relocation of the module must be done.
The relocation of a given module therefore replaces the coordinates of all
the resources used by the module in its previous location (the location in
which the module was constrained at compile time) with the coordinates of
the module in the new location. Relocation can be done only if all the
resources are available and structured in the same way in the previous
location and in the new location. This includes the device pins used by the
module. If the connection of a module with its peripheral is done via pins
fixed at a given chip location, the module must be constrained to that
location. A relocation will still be possible, but the module must then feed
signals through all the other modules lying between the pins and the new location.


This problem is solved on the ESM by avoiding a direct connection of the
peripherals to the FPGA. As shown in figure 7.17, all the bottom pins of the
FPGA are connected to an interface controller (crossbar). The interface
devices are connected to the interface controller as well. This makes it
possible to establish any connection from one module to its peripheral via the
crossbar. No matter where the module resides on the device, the crossbar can
be set at run-time to connect the module to its peripheral.

Figure 7.17. Architecture of the ESM-Baby board

Uniform repartition of resources: The ESM, like any other reconfigurable


platform, is primarily a parallel computing engine in which each module
carries out its own computation and communicates with other modules.
Therefore, a certain amount of resources must be allocated to each module for its
operation, at least for a given period of time. Memory is very important in
applications like video streaming, in which a given module must exclusively
access a picture at a time for computation. However, the capacity of
the BlockRAMs available in the FPGA is limited. External memory (SRAM or
DRAM) must therefore be added to allow the storage of large amounts of data
by each module. To allow a module to exclusively access its memory, the
SRAMs are connected at the top part of the FPGA. In this way, a module will
have the connection to its peripherals at the bottom part, while the top part
is used to temporarily store the computation data. No module will need
to feed its lines through the other modules to access its resources.
According to the number of memory banks which can be connected at
the top, the device is then divided into a set of exchangeable
slots, each of which has access to an SRAM at the top and to the
crossbar at the bottom.
The name Erlangen Slot Machine was chosen because the platform was developed
at the University of Erlangen-Nuremberg in Germany, but mainly because of
this organization in slots. This modular organization of the device simplifies
relocation, a primary condition for a viable reconfigurable
computing system. Each module moved from one slot to another will
find the same resources there. The architecture is illustrated in figure
7.17. The FPGA is logically divided into columns of 2 CLBs called
micro slots. The micro slots are the smallest allocatable units in the
system. Each module must therefore be implemented in a given number
of micro slots. Due to the number of pins needed to access one of the
external SRAMs, each module wishing to use an SRAM must be
implemented in a multiple of 3 micro slots.

7.0.4 The configuration manager


Apart from the main FPGA, the BabyBoard also contains the configuration
circuitry. This consists of a CPLD, a configuration FPGA (a small
Spartan II FPGA), and a Flash memory.

The CPLD is used to download the Spartan II's configuration from the
Flash on power-up. It also contains board initialization routines for the
on-board PLL and the Flash.

The configuration program for the main FPGA is implemented in the
Spartan II. Due to its small size, this reconfiguration program could have been
directly implemented in the CPLD, which could then configure the main
FPGA at power-up. However, a much larger device was chosen to increase the
degree of freedom in the reconfiguration management. As stated before,
relocation is an important operation of the ESM. It can be done in
different ways: The first approach is to keep a bitstream for each possible
module and each possible position in memory. However, the size of the
Flash does not allow us to implement this solution. The second approach is
the on-line modification of the coordinates of the resources used in the
module's bitstream to match the new position. This modification can be
done, for example, through a manipulation of the bitstream file with the
Java program JBits [102], previously presented. However, file
manipulation is a time-consuming operation which would increase the
reconfiguration time. The last and most efficient solution is to compute
the module's new position while the bitstream is being downloaded,
using dedicated circuitry. This requires much more hardware
than simple reconfiguration, which is why the Spartan II was chosen.
The Flash provides a capacity of 64 MByte, thus enabling the storage of
up to 32 full configurations and a few hundred partial bitstreams for the
Virtex II 6000 FPGA used.

7.0.5 Memory
Six SRAM banks of 2 MByte each are vertically attached on the top side of
the device, thus providing enough memory space in six different slots for
temporary data storage. The SRAMs can also be used for shared-memory
communication between neighboring modules, e.g. for streaming applications.
They are connected to the FPGA in such a way that the reconfiguration of a
given module will not affect the access of other modules.

7.0.6 Debug lines


Debugging capabilities are offered through general-purpose I/Os provided
at regular distances between the basic slots. A JTAG port provides debug
capabilities for the main FPGA, the CPLD and the Spartan II.

7.0.7 The MotherBoard


The MotherBoard provides programmable links from the FPGA to all
peripherals. The physical connections are established at run-time through a
programmable crossbar implemented in a Spartan FPGA on the MotherBoard.
This crossbar functionality essentially solves the I/O pin dilemma of many
existing FPGA platforms, thus allowing free relocation of modules requiring I/O
pin connectivity. Besides the run-time programmable crossbar, many peripherals
for multimedia and communication are available on the board (figure 7.18).
Video capture and rendering interfaces as well as high-speed
communication links are available on the MotherBoard. These interfaces
allow the platform to be used, for instance, in autonomous systems, where a
robot may collect pictures from the environment and send them, after a
preprocessing step, to a central station for evaluation.

7.1 Intermodule communication


One of the central points in dynamic reconfiguration is the communication.
Each module that is placed on the device must collect its data from a given
source and send its results to a given destination. This problem has been given
importance only in a few systems, like the XF-Board of Zurich [177] [211].
However, the XF-Board implements only a communication through a third
party, which is slow due to the long path that messages have to travel.
In the ESM, the communication among different modules (figure 7.19) can be
realized in three different ways: The first one is a direct communication using
bus macros between adjacent modules. The second one is shared
memory, using the external SRAMs or the internal BlockRAMs. However, only
neighboring modules can use those two communication modes. For modules placed
in non-adjacent slots, we provide a dynamic signal-switching communication,
the reconfigurable multiple bus (RMB), presented in section 4.1. Also,
the crossbar switch can be used for communication between modules.

7.1.1 Adjacent Communication


Adjacent communication is done between two modules which abut each
other. Bus macros must be used for a direct communication between
neighboring modules. The number of bus macros needed is determined by the
number of signals to be connected and by the number of signals that can go
through one bus macro.

7.1.2 Communication via shared memory


The communication between two neighboring modules can be done in
two different ways using shared memory:

Communication using BlockRAM: The BlockRAMs are dual-ported RAMs
used in the FPGA mostly to implement FIFOs and other memories. They offer
a nice possibility to implement communication between two neighboring
modules working in two different clock domains: the sender will just write on
one side with its own frequency, while the receiver will read the data at
the other end with its own frequency. A VHDL sketch of such a dual-clock
memory is given at the end of this subsection.

Figure 7.19. Intermodule communication possibilities on the ESM

Communication using external RAM: The communication through the


external RAM is particularly useful in applications in which each module
must process a large amount of data and then send the processed data to
the next module. This is mostly the case in video streaming applications, in
which a first module captures a stream image by image. The images are
alternately stored in two different memories, which are accessed by the next
module for reading. On the ESM, each SRAM can be accessed by three
modules, as shown in figure 7.20.
Since an SRAM can only be accessed by one module at a time, a
controller is used to manage the concurrent SRAM accesses of
the three modules. Depending on the application, the user may set the
priority of accessing the SRAM for the three modules.

Figure 7.20. SRAM-based intermodule communication on the ESM
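
The dual-clock BlockRAM communication described above can be sketched in VHDL as follows. This is a minimal template, not taken from the ESM distribution; synthesis tools map it to a BlockRAM, and any handshaking between sender and receiver, e.g. a FIFO protocol, is omitted.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity dp_ram is
    generic (AW : integer := 9;    -- address width
             DW : integer := 8);   -- data width
    port (wclk  : in  std_logic;   -- sender clock domain
          we    : in  std_logic;
          waddr : in  unsigned(AW-1 downto 0);
          wdata : in  std_logic_vector(DW-1 downto 0);
          rclk  : in  std_logic;   -- receiver clock domain
          raddr : in  unsigned(AW-1 downto 0);
          rdata : out std_logic_vector(DW-1 downto 0));
end dp_ram;

architecture rtl of dp_ram is
    type ram_t is array (0 to 2**AW - 1) of std_logic_vector(DW-1 downto 0);
    shared variable ram : ram_t;
begin
    process (wclk)                 -- write port, runs at the sender frequency
    begin
        if rising_edge(wclk) then
            if we = '1' then
                ram(to_integer(waddr)) := wdata;
            end if;
        end if;
    end process;

    process (rclk)                 -- read port, runs at the receiver frequency
    begin
        if rising_edge(rclk) then
            rdata <= ram(to_integer(raddr));
        end if;
    end process;
end rtl;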

7.1.3 Communication via RMB


The communication between non-adjacent modules happens through the
1-D circuit-switching paradigm presented in section 4.1.

7.1.4 Communication via the crossbar


The last possibility to establish a communication among modules is to use
the crossbar switch available on the MotherBoard. Because all modules are
connected to the crossbar via the pins at the bottom of the FPGA, the
communication among the modules can be set up in the crossbar as well. This
possibility should be used only as an emergency solution, due to the high
latency caused by the off-chip routing.

8. Enhancement in the Platform Design


The design of the Erlangen Slot Machine was strongly influenced by the
architectural limitations of the Xilinx Virtex II FPGA, in particular its
column-wise reconfiguration. This limitation was the primary motivation for the
implementation of the crossbar switch on a separate FPGA. All the
connections between running modules and their peripherals could be
implemented in the crossbar switch and be safe from the reconfiguration of
the modules. Substantial effort was therefore placed in the design of the
two boards previously described.
With the Virtex 4, Xilinx has introduced a new paradigm in which complete
columns need no longer be replaced on reconfiguration. This new
development can lead to great enhancements in the platform design in
the two following aspects:

First, the crossbar switch can now be implemented in the FPGA rather than
on a separate device. In this case, we need to attach all the peripherals on
one side of the block in which the crossbar is implemented. Modules can
then be connected on the other side of the block. We can even think of a
system in which all components are attached around the chip. The crossbar
can then be implemented as a ring of modules distributed around the chip.

Second, the distribution of resources like the external RAM need not be done in
a column-wise manner anymore. Resources should now be spread
homogeneously around the device, in order to allow the different modules, which are
placed on different blocks, to access their own resources.

Figure 7.21 illustrates these enhancements of the Erlangen Slot Machine, using
the Virtex 4 and Virtex 5 FPGAs, as previously explained.

Figure 7.21. Possible enhancement of the Erlangen Slot Machine on the Xilinx Virtex 4 and
Virtex 5 FPGAs

Despite the advantages provided by the new class of Virtex devices, a great
disadvantage that results from this architectural enhancement is the
communication: for the column-wise reconfigurable devices, the 1-D circuit
switching was a simple and efficient possibility to establish communication
among the different modules running on the FPGA. In a two-dimensional
reconfigurable device, as is the case with the Virtex 4 and Virtex 5, the
difficulty of implementing the communication increases. In 2-D we have a
quadratic growth in the amount of resources needed to implement
the communication infrastructure; i.e., the amount of resources
needed to implement a 2-D communication infrastructure is not
only twice the amount needed to implement a 1-D communication
infrastructure, but four times as much.

9. Conclusion
We have presented in this chapter the different possibilities to
design for partial reconfiguration on Xilinx devices. Our goal was not
to rewrite a complete manual on partial reconfiguration, since several
descriptions of this exist [128] [227]. The Xilinx manuals as well as the work
of Sedcole [190] provide very good descriptions of using the modular design
flow and the early access tools. Our primary motivation was to provide a kind of
tutorial, based on our experience, for which a workable design,
not too complex but also not trivial, exists. The designs as well as all
the scripts needed for compiling are available for download from the
book's web page. The difficulty of designing for partial
reconfiguration can be reduced if the target platform is well designed.
One such platform is the Erlangen Slot Machine, which was presented
with the goal of emphasizing the challenges in designing such a
platform. The ESM is, however, strongly influenced by the column-wise
reconfigurable Virtex, and the price paid for this flexibility is very
high. With the advent of the new Virtex 4 and Virtex 5, enhancements can
be made in the design in order to increase the flexibility while
reducing the costs.
One of the main problems remains the communication between the
modules at run-time. While this problem is somewhat better solved in a 1-D
reconfigurable device through the use of circuit switching and dedicated
channels on the modules, its extension to a 2-D reconfigurable device is
not feasible due to the amount of resources needed. Viable communication
approaches like the DyNoC were presented in chapter 6; however, with the
amount of resources needed by those approaches, their feasibility is only
given if manufacturers provide coarse-grained communication elements in
their devices.
MODULE IV

CHAPTER 21

IMPLEMENTING APPLICATIONS
WITH FPGAS
Brad L. Hutchings, Brent E. Nelson
Department of Electrical and Computer Engineering
Brigham Young University

Developers can choose various devices when implementing electronic systems:
field-programmable gate arrays (FPGAs), microprocessors, and other standard
products such as ASSPs, and custom chips or application-specific integrated
circuits (ASICs). This chapter discusses how FPGAs compare to other digital
devices, outlines the considerations that will help designers to determine when
FPGAs are appropriate for a specific application, and presents implementation
strategies that exploit features specific to FPGAs.
The chapter is divided into four major sections. Section 21.1 discusses the
strengths and weaknesses of FPGAs, relative to other available devices. Section
21.2 suggests when FPGA devices are suitable choices for specific applications/
algorithms, based upon their I/O and computation requirements. Section 21.3
discusses general implementation strategies appropriate for FPGA devices. Then
Section 21.4 discusses FPGA-specific arithmetic design techniques.

21.1 STRENGTHS AND WEAKNESSES OF FPGAs

Developers can choose from three general classes of devices when implementing
an algorithm or application: microprocessor, FPGA, or ASIC (for simplicity, ASSPs
are not considered here). This section provides a brief summary of the advantages
and disadvantages of these devices in terms of time to market, cost, development
time, power consumption, and debug and verification.

21.1.1 Time to Market


Time to market is often touted as one of the FPGA’s biggest strengths, at least
relative to ASICs. With an ASIC, going from specification to product requires (at least):
(1) design, (2) verification, (3) fabrication, (4) packaging, and (5) device test. In
addition, software development requires access to the ASIC device (or an emulation
of such) before it can be verified and completed. As immediately available
standard devices, FPGAs have already been fabricated, packaged, and tested by
the vendor, thereby eliminating at least four months from time to market.

More difficult to quantify but perhaps more important are: (1) refabrications
(respins) caused by either errors in the design or late changes to the specification,
due to a change in an evolving standard, for example, and (2) software
development schedules that depend on access to the ASIC. Both of these items
impact product production schedules; a respin can easily consume an additional
four months, and early access to hardware can greatly accelerate software
development and debug, particularly for the embedded software that communicates
directly with the device.
In light of these considerations, a conservative estimate of the time-to-market
advantage of FPGAs relative to ASICs is 6 to 12 months. Such a reduction is
significant; in consumer electronics markets, many products have only a 24-month
lifecycle.

21.1.2 Cost
Per device, FPGAs can be much less expensive than ASICs, especially in lower
volumes, because the nonrecurring costs of FPGA fabrication are borne by many
users. However, because of their reprogrammability, FPGAs require much more
silicon area to implement equivalent functionality. Thus, at the highest volumes
possible in consumer electronics, FPGA device cost will eventually exceed ASIC
device cost.

21.1.3 Development Time


FPGA application development is most often approached as hardware design:
applications are described in Verilog or VHDL, simulated to determine correctness,
and synthesized using commercial logic synthesis tools. Commercial tools are
available that synthesize behavioral programs written in sequential languages such
as C to FPGAs. However, in most cases, much better performance and higher
densities are achieved using HDLs, because they allow the user to directly describe
and exploit the intrinsic parallelism available in an application. Exploiting application
parallelism is the single best way to achieve high FPGA performance. However,
designing highly parallel implementations of applications in HDLs requires
significantly more development effort than software development with conventional
sequential programming languages such as Java or C++.

21.1.4 Power Consumption


FPGAs consume more power than ASICs simply because programmability requires
many more transistors, relative to a customized integrated circuit (IC). FPGAs may
consume more or less power than a microprocessor or digital signal processor
(DSP), depending on the application.

21.1.5 Debug and Verification


FPGAs are developed with standard hardware design techniques and tools. Coded
in VHDL or Verilog and synthesized, FPGA designs can be debugged
in simulators just as typical ASIC designs are. However, many designers verify their
designs directly, by downloading them into an FPGA and testing them in a system.
With this approach the application can be tested at speed (a million times faster
than simulation) in the actual operating environment, where it is exposed to
real-world conditions. If thorough, this testing provides a stronger form of functional
verification than simulation. However, debugging applications in an FPGA can be
difficult because vendor tools provide much less observability and controllability
than, for example, a hardware description language (HDL) simulator.

21.1.6 FPGAs and Microprocessors


As discussed previously, FPGAs are most often contrasted with custom ASICs.
However, if a programmable solution is dictated because of changing application
requirements or other factors, it is important to study the application carefully to
determine if it is possible to meet performance requirements with a programmable
processor—microprocessor or DSP. Code development for programmable
processors requires much less effort than that required for FPGAs or ASICs,
because developing software with sequential languages such as C or Java is much
less taxing than writing parallel descriptions with Verilog or VHDL. Moreover, the
coding and debugging environments for programmable processors are far richer
than their HDL counterparts. Microprocessors are also generally much less
expensive than FPGAs. If the microprocessor can meet application requirements
(performance, power, etc.), it is almost always the best choice.
In general, FPGAs are well suited to applications that demand extremely high
performance and reprogrammability, for interfacing components that communicate
with many other devices (so-called glue-logic) and for implementing hardware
systems at volumes that make their economies of scale feasible. They are less well
suited to products that will be produced at the highest possible volumes or for
systems that must run at the lowest possible power.

21.2 APPLICATION CHARACTERISTICS AND PERFORMANCE

Application performance is largely determined by the computational and I/O
requirements of the system. Computational requirements dictate how much
hardware parallelism can be used to increase performance. I/O system limitations
and requirements determine how much performance can actually be exploited from
the parallel hardware.

21.2.1 Computational Characteristics and Performance


FPGAs can outperform today’s processors only by exploiting massive amounts of
parallelism. Their technology has always suffered from a significant clock-rate
disadvantage; FPGA clock rates have always been slower than CPU clock rates by
about a factor of 10. This remains true today, with clock rates for FPGAs

limited to about 300 to 350 MHz and CPUs operating at approximately 3 GHz. As a
result, FPGAs must perform at least 10 times the computational work per cycle to
perform on par with processors. To be a compelling alternative, an FPGA-based
solution should exceed the performance of a processor-based solution by 5 to 10
times and hence must actually perform 50 to 100 times the computational work per
clock cycle. This kind of performance is feasible only if the target application
exhibits a corresponding amount of exploitable parallelism.

The guideline of 5 to 10 times is suggested for two main reasons. First of all,
prior to actual implementation, it is difficult or impossible to foresee the impact of
various system and I/O issues on eventual performance. In our experience, 5 times
can quickly become 2 times or less as various system and algorithmic issues arise
during implementation. Second, application development for FPGAs is much more
difficult than conventional software development. For that reason, the additional
development effort must be carefully weighed against the potential performance
advantages. A guideline of 5 to 10 times provides some insurance that any FPGA-
specific performance advantages will not completely vanish during the
implementation phase.
Ultimately, the intrinsic characteristics of the application place an upper bound on
FPGA performance. They determine how much raw parallelism exists, how
exploitable it is, and how fast the clock can operate. A review of the literature [3–6,
11, 16, 19–21, 23, 26, 28] shows that the application characteristics that have the
most impact on application performance are: data parallelism, amenability to
pipelining, data element size and arithmetic complexity, and simple control
requirements.

Data parallelism
Large datasets with few or no data dependencies are ideal for FPGA
implementation for two reasons: (1) They enable high performance because many
computations can occur concurrently, and (2) they allow operations to be
extensively rescheduled. As previously mentioned, concurrency is extremely
important because FPGA applications must be able to achieve 50 to 100 times the
operations per clock cycle of a microprocessor to be competitive. The ability to
reschedule computations is also important because it makes it feasible to tailor the
circuit design to FPGA hardware and achieve higher performance. For example,
computations can be scheduled to maximize data reuse to increase performance
and reduce memory bandwidth requirements. Image-processing algorithms with
their attendant data parallelism have been among the highest-performing
algorithms mapped to FPGA devices.

Data element size and arithmetic complexity


Data element size and arithmetic complexity are important because they strongly
influence circuit size and speed. For applications with large amounts of exploitable
parallelism, the upper limit on this parallelism is often determined by how many
operations can be performed concurrently on the FPGA device. Larger data
elements and greater arithmetic complexity lead to larger and fewer computational
elements and less parallelism. Moreover, larger and more complex circuits exhibit
more delay that slows clock rate and impacts performance.
Not surprisingly, representing data with the fewest possible bits and performing
computation with the simplest operators generally lead to the highest performance.
Designing high-performance applications in FPGAs almost always involves a
precision/performance trade-off.

Pipelining
Pipelining is essential to achieving high performance in FPGAs. Because FPGA
performance is limited primarily by interconnect delay, pipelining (inserting reg-
isters on long circuit pathways) is an essential way to improve clock rate (and
therefore throughput) at the cost of latency. In addition, pipelining allows com-
putational operations to be overlapped in time and leads to more parallelism in the
implementation. Generally speaking, because pipelining is used extensively
throughout FPGA-based designs, applications must be able to tolerate some
latency (via pipelining) to be suitable candidates for FPGA implementation.
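
As a small illustration, not taken from this chapter, the following VHDL sketch shows a multiply–accumulate circuit with registers on the input, multiply and accumulate stages; a new operand pair can enter every cycle, and its contribution appears in the accumulator three cycles later:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity pipelined_mac is
    port (clk : in  std_logic;
          a   : in  signed(15 downto 0);
          b   : in  signed(15 downto 0);
          acc : out signed(39 downto 0));
end pipelined_mac;

architecture rtl of pipelined_mac is
    signal a_r, b_r : signed(15 downto 0);
    signal p_r      : signed(31 downto 0);
    signal acc_r    : signed(39 downto 0) := (others => '0');
begin
    process (clk)
    begin
        if rising_edge(clk) then
            a_r   <= a;                        -- stage 1: register inputs
            b_r   <= b;
            p_r   <= a_r * b_r;                -- stage 2: multiply
            acc_r <= acc_r + resize(p_r, 40);  -- stage 3: accumulate
        end if;
    end process;
    acc <= acc_r;
end rtl;

Without the registers on the inputs and on p_r, the multiply and add would form one long combinational path and the achievable clock rate would drop accordingly.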

Simple control requirements


FPGAs achieve the highest performance if all operations can be statically
scheduled as much as possible (this is true of many technologies). Put simply, it takes
time to make decisions and decision-making circuitry is often on the critical path
for many algorithms. Replacing runtime decision circuitry with static control
eliminates circuitry and speeds up execution. It makes it much easier to
construct circuit pipelines that are heavily utilized with few or no pipeline
bubbles. In addition, statically scheduled controllers require less circuitry, making
room for more datapath operators, for example. In general, datasets with few or
no dependencies often have simple control requirements.

21.2.2 I/O and Performance


As mentioned previously, FPGA clock rates are at least one order of magnitude
slower than those of CPUs. Thus, significant parallelism (either data parallelism or
pipelining) is required for an FPGA to be an attractive alternative to a CPU.
However, I/O performance is just as important: Data must be transmitted at rates
that can keep all of the parallel hardware busy.
Algorithms can be loosely grouped into two categories: I/O bound and compute
bound [17, 18]. At the simplest level, if the number of I/O operations is equal to or
greater than the number of calculations in the computation, the computation is said
to be I/O bound. To increase its performance requires an increase in memory
bandwidth—doing more computation in parallel will have no effect. Conversely, if
the number of computations is greater than the number of I/O operations,
computational parallelism may provide a speedup.
A simple example of this, provided by Kung [18], is matrix–matrix multiplication.
The total number of I/Os in the computation, for n-by-n matrices, is 3n²: each
matrix must be read and the product written back. The total number of
computations to be done, however, is n³. Thus, this computation is
compute bound. In contrast, matrix–matrix addition requires 3n² I/Os but only n²
calculations and is thus I/O bound. Another way to see this is to note that each
source element read from memory in a matrix–matrix multiplication is used n times
and each result is produced using n multiply–accumulate operations. In matrix–
matrix addition, each element fetched from memory is used only once and each
result is produced from only a single addition.
Carefully coordinating data transfer, I/O movement, and computation order is
crucial to achieving enough parallelism to provide effective speedup. The entire
field of systolic array design is based on the concepts of (1) arranging the I/O and
computation in a compute-bound application so that each data element fetched
from memory is reused multiple times, and (2) keeping many processing elements
busy operating in parallel on that data.
FPGAs offer a wide variety of memory elements that can be used to coordinate
I/O and computation: flip-flops to provide single-bit storage (10,000s of bits);
LUT-based RAM to provide many small blocks of randomly distributed memory
(100,000s of bits); and larger RAM or ROM memories (1,000,000s of bits). Some
vendors’ FPGAs contain multiple sizes of random access memories, and these
memories are often easily configured into special-purpose structures such as
dynamic-length shift registers, content-addressable memories (CAMs), and so forth.
In addition to these types of on-chip memory, most FPGA platforms provide
off-chip memory as well.
Increasing the I/O bandwidth to memory is usually critical in harnessing the
parallelism inherent in a computation. That is, after some point, further multiplying
the number of processing elements (PEs) in a design (to increase parallelism)
usually requires a corresponding increase in I/O. This additional I/O can often be
provided by the many on-chip memories in a typical modern FPGA. The work of
Graham and Nelson [8] describes a series of early experiments to map time-delay
SONAR beam forming to an FPGA platform where memory bandwidth was the
limiting factor in design speedup. While the data to be processed were an infinite
stream of large data blocks, many of the other data structures in the computation
were not large (e.g., coefficients, delay values). In this computation, it was not the
total amount of memory that limited the speedup but rather the number of memory
ports available. Thus, the use of multiple small memories in parallel was able to
provide the needed bandwidth.
The availability of many small memories in today’s FPGAs further supports the
idea of trading off computation for table lookup. Conventional FPGA fabrics are
based on a foundation of 4-input LUTs; in addition, larger on-chip memories can be
used to support larger lookup structures. Because the memories already exist on
chip, unlike in ASIC technology, using them adds no additional cost to the system.
A common approach in FPGA-based design, therefore, is to evaluate which parts of
the system’s computations might lend themselves to table lookup and use the
available RAM blocks for these lookups.
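
As an illustration of this computation-for-lookup trade, the following sketch, not taken from this chapter, replaces an 8-bit squaring circuit with a 256-entry table; the registered read allows synthesis tools to map the table to an on-chip RAM block:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity square_lut is
    port (clk : in  std_logic;
          x   : in  unsigned(7 downto 0);
          xsq : out unsigned(15 downto 0));
end square_lut;

architecture rtl of square_lut is
    type rom_t is array (0 to 255) of unsigned(15 downto 0);
    function init_rom return rom_t is
        variable r : rom_t;
    begin
        for i in 0 to 255 loop
            r(i) := to_unsigned(i * i, 16);  -- precompute all squares
        end loop;
        return r;
    end function;
    constant ROM : rom_t := init_rom;
begin
    process (clk)  -- synchronous read maps to a BlockRAM port
    begin
        if rising_edge(clk) then
            xsq <= ROM(to_integer(x));
        end if;
    end process;
end rtl;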
In summary, the performance of FPGA-based applications is largely determined
by how much exploitable parallelism is available, and by the ability of the system to
provide data to keep the parallel hardware operational.

21.3 GENERAL IMPLEMENTATION STRATEGIES

FOR FPGA-BASED SYSTEMS


In contrast with other programmable technologies such as microprocessors or
DSPs, FPGAs provide an extremely rich and complex set of implementation
alternatives. Designers have complete control over arithmetic schemes and number
representation and can, for example, trade precision for performance. In addition,
reprogrammable, SRAM-based FPGAs can be configured any number of times to
provide additional implementation flexibility for further tailoring the implementation
to lower cost and make better use of the device.
There are two general configuration strategies for FPGAs: configure-once,
where the application consists of a single configuration that is downloaded for the
duration of the application’s operation, and runtime reconfiguration (RTR),
where the application consists of multiple configurations that are “swapped” in
and out as the application operates [14].

21.3.1 Configure-once
Configure-once (during operation) is the simplest and most common way to
implement applications with reconfigurable logic. The distinctive feature of
configure-once applications is that they consist of a single system-wide
configuration. Prior to operation, the FPGAs comprising the reconfigurable resource
are loaded with their respective configurations. Once operation commences, they
remain in this configuration until the application completes. This approach is very
similar to using an ASIC for application acceleration. From the application point of
view, it matters little whether the hardware used to accelerate the application is an
FPGA or a custom ASIC because it remains constant throughout its operation.

The configure-once approach can also be applied to reconfigurable applications
to achieve significant acceleration. There are classes of applications, for example,
where the input data varies but remains constant for hours, days, or longer. In some
cases, data-specific optimizations can be applied to the application circuitry and
lead to dramatic speedup. Of course, when the data changes, the circuit-specific
optimizations need to be reapplied and the bitstream regenerated. Applications of
this sort consist of two elements: (1) the FPGA and system hardware, and (2) an
application-specific compiler that regenerates the bitstream whenever the
application-specific data changes. This approach has been used, for example, to
accelerate SNORT, a popular packet filter used to improve network security [13].
SNORT data consists of regular expressions that detect malicious packets by their
content. It is relatively static, and new regular expressions are occasionally added
as new attacks are detected. The application-specific compiler translates these
regular expressions into FPGA hardware that matches packets many times faster
than software SNORT. When new regular expressions are added to the SNORT
database, the compiler is rerun and a new configuration is created and downloaded
to the FPGA.

21.3.2 Runtime Reconfiguration


Whereas configure-once applications statically allocate logic for the duration of
an application, RTR applications use a dynamic allocation scheme that
re-allocates hardware at runtime. Each application consists of multiple
configurations per FPGA, with each one implementing some fraction of it. Whereas
a configure-once application configures the FPGA once before execution, an RTR
application typically reconfigures it many times during normal operation.
There are two basic approaches that can be used to implement RTR
applications: global and local (sometimes referred to as partial configuration in the
literature). Both techniques use multiple configurations for a single application, and
both reconfigure the FPGA during application execution. The principal difference
between the two is the way the dynamic hardware is allocated.

Global RTR
Global RTR allocates all (FPGA) hardware resources in each configuration step.
More specifically, global RTR applications are divided into distinct temporal phases,
with each phase implemented as a single system-wide configuration that occupies
all system FPGA resources. At runtime, the application steps through each phase
by loading all of the system FPGAs with the appropriate configuration data
associated with a given phase.

Local RTR
Local RTR takes an even more flexible approach to reconfiguration than does
global RTR. As the name implies, these applications locally (or selectively)
reconfigure subsets of the logic as they execute. Local RTR applications may
configure any percentage of the reconfigurable resources at any time, individual
FPGAs may be configured, or even single FPGA devices may themselves be
partially reconfigured on demand. This flexibility allows hardware resources to be
tailored to the runtime profile of the application with finer granularity than that
possible with global RTR. Whereas global RTR approaches implement the
execution process by loading relatively large, global application partitions, local RTR
applications need load only the necessary functionality at each point in time. This
can reduce the amount of time spent downloading configurations and can lead to a
more efficient runtime hardware allocation.
The organization of local RTR applications is based more on a functional division
of labor than the phased partitioning used by global RTR applications. Typically,
local RTR applications are implemented by functionally partitioning an application
into a set of fine-grained operations. These operations need not be temporally
exclusive—many of them may be active at one time. This is in direct contrast to
global RTR, where only one configuration (per FPGA) may be active at any given
time. Still, with local RTR it is important to organize the operations such that idle
circuitry is eliminated or greatly reduced. Each operation is implemented as a
distinct circuit module, and these circuit modules are then downloaded to the
FPGAs as necessary during operation. Note that, unlike global RTR, several of
these operations may be loaded simultaneously, and each may consume any
portion of the system FPGA resources.

RTR applications
Runtime Reconfigured Artificial Neural Network (RRANN) is an early example of
a global RTR application [7]. RRANN divided the back-propagation algorithm (used
to train neural networks) into three temporally exclusive configurations that were
loaded into the FPGA in rapid succession during operation. It demonstrated a 500
percent increase in density by eliminating idle circuitry in individual algorithm
phases.
RRANN was followed up with RRANN-2 [9], an application using local RTR. Like
RRANN, the algorithm was still divided into three distinct phases. However, unlike
the earlier version, the phases were carefully designed so that they shared common
circuitry, which was placed and routed into identical physical locations for each
phase. Initially, only the first configuration was loaded; thereafter, the common
circuitry remained resident and only circuit differences were loaded during
operation. This reduced configuration overhead by 25 percent over the global RTR
approach.
The Dynamic Instruction Set Computer (DISC) [29] used local RTR to create a
sequential control processor with a very small fixed core that remained resident at
all times. This resident core was augmented by circuit modules that were
dynamically loaded as required by the application. DISC was used to implement an
image-processing application that consisted of various filtering operations. At
runtime, the circuit modules were loaded as necessary. Although the application
used all of the filtering circuit modules, it did not require all of them to be loaded
simultaneously. Thus, DISC loaded circuit modules on demand as required. Only a
few active circuit modules were ever resident at any time, allowing the application
to fit in a much smaller device than possible with global RTR.

21.3.3 Summary of Implementation Issues


Of the two general implementation techniques, configure-once is the simplest
and is best supported by commercially available tool flows. This is not surprising,
as all FPGA CAD tools are derivations of conventional ASIC CAD flows. While the
two RTR implementation approaches (local and global) can provide significant
performance and capacity advantages, they are much more challenging to employ,
primarily because of a lack of specific tool support.
The designer’s primary task when implementing global RTR applications is
to temporally divide the application into roughly equal-size partitions to efficiently
use reconfigurable resources. This is largely a manual process—although the
academic community has produced some partitioning tools, no commercial
offerings are currently available. The main disadvantage of global RTR is the need
for equal-size partitions. If it is not possible to evenly partition the application,
inefficient use of FPGA resources will result.
The main advantage of local RTR over global RTR is that it uses fine-grained
functional operators that may make more efficient use of FPGA resources. This is
important for applications that are not easily divided into equal-size temporally
exclusive circuit partitions. However, partitioning a local RTR design may require an
inordinate amount of designer effort. For example, unlike global

RTR, where circuit interfaces typically remain fixed between configurations, local RTR allows these interfaces to change with each configuration. When circuit configurations become small enough for multiple configurations to fit into a single device, the designer needs to ensure that all configurations will interface correctly with one another. Moreover, the designer may have to ensure not only structural compliance but physical compliance as well. That is, when the designer creates circuit configurations that do not occupy an entire FPGA, he or she will have to ensure that the physical footprint of each is compatible with those of the others that may be loaded concurrently.

21.4 IMPLEMENTING ARITHMETIC IN FPGAs

Almost since their invention, FPGAs have employed dedicated circuitry to accelerate arithmetic computation. In earlier devices, dedicated circuitry sped up the propagation of carry signals for ripple-carry, full-adder blocks. Later devices added dedicated multipliers, DSP function blocks, and more complex fixed-function circuitry. The presence of such dedicated circuitry can dramatically improve arithmetic performance, but it also restricts designers to a very small subset of choices when implementing arithmetic.
Well-known approaches such as carry-lookahead, carry-save, signed-digit, and so on generally do not apply to FPGAs. Though these techniques are commonly used to create very high-performance arithmetic blocks in custom ICs, they are not competitive when applied to FPGAs, simply because they cannot access the faster dedicated circuitry and must be constructed from slower, general-purpose user logic. Instead, FPGA designers accelerate arithmetic in one of two ways: (1) using the dedicated blocks if they fit the needs of the application, and (2) avoiding the computation entirely, if possible. Designers apply the second option by, for example, replacing full-blown floating-point computation with simpler, though not equivalent, fixed-point or block floating-point computations. In some cases, they can eliminate multiplication entirely with constant propagation. Of course, the feasibility of replacing slower, complex functions with simpler, faster ones is application dependent.

21.4.1 Fixed-point Number Representation and Arithmetic

A fixed-point number representation is simply an integer representation with an implied binary point, usually in 2's complement format to enable the representation of both positive and negative values. A common way of describing the structure of a fixed-point number is the tuple n.m, where n is the number of bits to the left of the binary point and m the number of bits to the right. A 16.0 format would thus be a standard 16-bit integer; a 3.2 format fixed-point number would have a total of 5 bits, with 3 to the left of the implied binary point and 2 to the right. A range of numbers from +1 to −1 is common in digital signal-processing applications. Such a representation might be of the form 1.9, where the largest number is 0.111111111 (≈ +0.998 in decimal) and the smallest is 1.000000000 (−1 in decimal). As can be seen, fixed-point arithmetic exactly follows the rules learned in grade school, where lining up the implied binary point is required for performing addition or subtraction.
When designing with fixed-point values, one must keep track of the number
format on each wire; such bookkeeping is one of the design costs associated with
fixed-point design. At any point in a computation, either truncation or rounding can
be used to reduce the number of bits to the right of the binary point, the effect being
to simply reduce the precision with which the number is represented.
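
The bookkeeping described above is easy to see in software. The following minimal Python sketch (illustrative only; the helper names are ours, not from any library) models n.m values as scaled integers:

# Minimal sketch: signed n.m fixed-point values modeled as scaled integers.
def to_fixed(x, m):
    """Quantize real x to an integer holding x * 2^m (truncation)."""
    return int(x * (1 << m))

def fixed_to_real(v, m):
    return v / (1 << m)

# A 3.2 value: 3 integer bits, 2 fractional bits (5 bits total).
a = to_fixed(1.75, 2)   # 0b111 -> 7, represents 1.75
b = to_fixed(0.25, 2)   # 0b001 -> 1, represents 0.25

# Addition works directly once the binary points are aligned:
s = a + b               # 8 -> 2.00

# Multiplying two n.m values yields a 2n.2m result; truncating the lowest
# m bits realigns the binary point and simply reduces the precision:
p = (a * b) >> 2

print(fixed_to_real(s, 2), fixed_to_real(p, 2))  # 2.0 0.25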

21.4.2 Floating-point Arithmetic

Floating-point arithmetic overcomes many of the challenges of fixed-point arithmetic, but at increased circuit cost and possibly reduced precision. The most common format for a floating-point number is of the form seeeeeffffff, where s is a sign bit, eeeee is an exponent, and ffffff is the mantissa. In the IEEE standard for single-precision floating point, the number of exponent bits is 8 and the number of mantissa bits is 23, but nonstandard sizes and formats have also been used in FPGA work [2, 24].
IEEE reserves various combinations of exponent and mantissa to represent special values: zero, not-a-number (NaN), infinity (+∞ and −∞), and so on. It supports denormalized numbers (no leading implied 1 in the mantissa) and flags them using a special exponent value. Finally, the IEEE specification describes four rounding modes. Because supporting all special-case number representations and rounding modes in hardware can be very expensive, FPGA-based floating-point support often omits some of them in the interest of reducing complexity and increasing performance.
For a given number of bits, floating point provides extended range to a computation at the expense of accuracy. An IEEE single-precision floating-point number allocates 23 bits to the mantissa, giving an effective mantissa of only 24 bits when the implied 1 is considered. The advantage of floating point is that its exponent allows for the representation of numbers across a broad range (IEEE normalized single-precision values range from ≈ ±3 × 10^38 down to ≈ ±1 × 10^−38). Conversely, while a 32-bit fixed-point representation (1.31 format) has a range of only −1 to ≈ +1, it can represent some values within that range much more accurately than a floating-point format can (for example, numbers close to +1 such as 0.11111111111111111111111111111111). However, for numbers very close to +0, the fixed-point representation would have many leading zeroes, and thus would have less precision than the competing floating-point representation.
An important characteristic of floating point is its auto-scaling behavior. After
every floating-point operation, the result is normalized and the exponent adjusted
accordingly. No work on the part of the designer is required in this respect (although
significant hardware resources are used). Thus, it is useful in cases where the
range of intermediate values cannot be bounded by the designer and therefore
where fixed point is unsuitable.
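
The field layout and the auto-scaling behavior can be inspected with a few lines of Python. This sketch uses only the standard struct module and is purely illustrative:

# Unpack the sign/exponent/mantissa fields of an IEEE 754 single-precision
# value. Auto-scaling: the exponent tracks the magnitude automatically.
import struct

def fields(x):
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exponent = ((bits >> 23) & 0xFF) - 127   # remove the bias
    mantissa = bits & 0x7FFFFF               # 23 stored bits, implied leading 1
    return sign, exponent, mantissa

print(fields(1.5))    # (0, 0, 4194304): 1.5 = 1.1 (binary) * 2^0
print(fields(1.5e8))  # the exponent has grown to 27, with no designer effort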
The use of floating point in FPGA-based design has been the topic of much research over the past decade. Early papers, such as Ligon and colleagues [15] and Shirazi et al. [24], focused on the cost of floating point and demonstrated that small floating-point formats, as well as single-precision formats, could eventually be implemented using FPGA technology. Later work, such as that by Bellows and Hutchings [1] and Roesler and Nelson [22], demonstrated novel ways of leveraging FPGA-specific features to more efficiently implement floating-point modules. Finally, Underwood [27] argued that the capabilities of FPGA-based platforms for performing floating point would eventually surpass those of standard computing systems.
All of the research just mentioned contains size and performance estimates for floating-point modules on FPGAs at the time it was published. Clever design techniques and growing FPGA densities and clock rates continually combine to produce smaller, faster floating-point circuits on FPGAs. At the time of this writing, floating-point module libraries are available from a number of sources, both commercial and academic.

21.4.3 Block Floating Point

Block floating point (BFP) is an alternative to fixed-point and floating-point
arithmetic that allows entire blocks of data to share a single exponent. Fixed-point
arithmetic is then performed on a block of data with periodic rescaling of its data
values. A typical use of block floating point is as follows:
1. The largest value in a block of data is located, a corresponding exponent is chosen, and that value's fractional part is normalized to that exponent.
2. The mantissas of all other values in the block are adjusted to use the same exponent as that largest value.
3. The exponent is dropped, and fixed-point arithmetic proceeds on the resulting values in the data block.
4. As the computation proceeds, renormalization of the entire block of data occurs: after every individual computation, only when a value overflows, or after a succession of computations.

The key is that BFP allows for growth in the range of values in the data block
while retaining the low cost of fixed-point computations. Block floating point has
found extensive use in fast Fourier transform (FFT) computations where an input
block (such as from an A/D converter) may have a limited range of values, the data
is processed in stages, and stage boundaries provide natural renormalization
locations.
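
A minimal Python sketch of the four steps above (illustrative only; the 15-bit mantissa width and the helper names are our assumptions):

# Block floating point: one shared exponent per block, fixed-point
# mantissas, explicit renormalization.
import math

def normalize(block, mantissa_bits=15):
    """Pick one exponent for the whole block; return int mantissas + exponent."""
    largest = max(abs(v) for v in block)
    exp = math.frexp(largest)[1]                  # 2^exp bounds the block
    scale = 1 << mantissa_bits
    mantissas = [int(v / 2**exp * scale) for v in block]
    return mantissas, exp

def to_reals(mantissas, exp, mantissa_bits=15):
    return [m / (1 << mantissa_bits) * 2**exp for m in mantissas]

block = [0.02, -0.5, 3.75, 0.25]
m, e = normalize(block)
# Cheap fixed-point work happens on `m`; e.g., scale the whole block by 3:
m = [3 * v for v in m]
m, e = normalize(to_reals(m, e))                  # renormalize after growth
print(to_reals(m, e))                             # ≈ [0.06, -1.5, 11.25, 0.75]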

21.4.4 Constant Folding and Data-oriented Specialization

As mentioned in Section 21.3.2, when the data for a computation changes, an FPGA can be readily reconfigured to take advantage of that change. As a simple example of data folding, consider the operation a =? b, where a and b are 4-bit numbers.
FIGURE 21.1 ■ Two comparator implementations: (a) without and (b) with constant folding.

Figure 21.1 shows two implementations of a comparator. On the left (a) is a conventional comparator; on the right (b) is a comparator that may be used when b is known (b = 1011). Implementation (a) requires three 4-LUTs to implement, while implementation (b) requires just one. Such logic-level constant folding is usually performed by synthesis tools.
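
To see why the folded comparator fits in a single 4-LUT, consider this small, illustrative Python sketch, which enumerates the truth table a synthesis tool would place in that LUT:

# A general 4-bit comparator has 8 inputs (a3..a0, b3..b0), too wide for
# one 4-LUT. With b fixed at 0b1011 only 4 inputs remain, so the whole
# truth table (16 rows) fits in one 4-LUT.
def folded_comparator(a):
    return int(a == 0b1011)

lut = [folded_comparator(a) for a in range(16)]
print(lut)   # a single '1' at index 11, zeroes elsewhere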
A more complex example is given by Wirthlin [30], who proposed a method for creating constant-coefficient multipliers. When one constant input to a multiplier is known, a custom multiplier consuming far fewer resources than a general multiplier can usually be created. Wirthlin's manipulations [30], going far beyond what ordinary logic optimization performs, created a custom structure for a given multiplier instance based on specific characteristics of the constant.
Hemmert et al. [10] offer an even more complex example in which a pipeline of image morphology processing stages was created, each of which could perform one image morphology step (e.g., one iteration in an erosion operation). The LUT contents in each pipeline stage controlled the stage's operation; thus, reconfiguring a stage required modifying only LUT programming. A compiler was then created to convert programs, written in a special image morphology language, into the data required to customize each pipeline stage's operation. When a new image morphology program was compiled, a new bitstream for the FPGA could be created in a second or two (by directly modifying the original bitstream) and reconfigured onto the platform. This provided a way to create a custom computing solution on a per-program basis with turnarounds on the order of a few seconds. In each case, the original morphology program that was compiled provided the constant data that was folded into the design.
Additional examples in the literature show the power of constant folding. However, its use typically requires specialized CAD support. Slade and Nelson [25] argue that a fundamentally different approach to CAD for FPGAs is the solution to providing generalized support for such data-specific specialization. They advocate the use of JHDL [1, 12] to provide deployment-time support for data-specific modifications to an operating FPGA-based system.
In summary, FPGAs provide architectural features that can accelerate simple arithmetic operations such as fixed-point addition and multiplication.
Floating-point operations can be accelerated using block floating point or by reducing the number of bits used to represent floating-point values. Finally, constants can be propagated into arithmetic circuits to reduce circuit area and accelerate arithmetic performance.

21.5 SUMMARY
FPGAs provide a flexible, high-performance, and reprogrammable means for implementing a variety of electronic applications. Because of their reprogrammability, they are well suited to applications that require some form of direct reprogrammability and to situations where reprogrammability can be used indirectly to increase reuse and thereby reduce device cost or count. FPGAs achieve the highest performance when the application can be implemented as many hardware units operating in parallel and where the aggregate I/O requirements for these parallel units can be reasonably met by the overall system. Most FPGA applications are described using HDLs because HDL tools and synthesis software are mature and well developed, and because, for now, they provide the best means for describing applications in a highly parallel manner.
Once FPGAs are determined to be a suitable choice, there are several ways to tailor the system design to exploit their reprogrammability by reconfiguring them at runtime or by compiling specific, temporary application-specific data into the FPGA circuitry. Performance can be further enhanced by crafting arithmetic circuitry to work around FPGA limitations and to exploit the FPGA's special arithmetic features. Finally, FPGAs provide additional debug and verification methods that are not available in ASICs and that enable debug and verification to occur in a system and at speed.
In summary, FPGAs combine the advantages and disadvantages of microprocessors and ASICs. On the positive side, they can provide high performance that is achievable only with custom hardware, they are reprogrammable, and they can be purchased in volume as a fully tested, standard product. On the negative side, they remain largely inaccessible to the software community; moreover, high-performance application development requires hardware design and the use of standard synthesis tools and Verilog or VHDL.
References
[1] P. Bellows, B. L. Hutchings. JHDL—An HDL for reconfigurable systems. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1998.
[2] B. Catanzaro, B. Nelson. Higher radix floating-point representations for FPGA-based arithmetic. Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, April 2005.
[3] W. Culbertson, R. Amerson, R. Carter, P. Kuekes, G. Snider. Exploring architectures for volume visualization on the Teramac custom computer. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1996.
[4] A. Dandalis, V. K. Prasanna. Fast parallel implementation of DFT using configurable devices. Field-Programmable Logic: Smart Applications, New Paradigms, and Compilers, Proceedings of the 6th International Workshop on Field-Programmable Logic and Applications, Springer-Verlag, 1997.
[5] C. H. Dick, F. Harris. FIR filtering with FPGAs using quadrature sigma-delta modulation encoding. Field-Programmable Logic: Smart Applications, New Paradigms, and Compilers, Proceedings of the 6th International Workshop on Field-Programmable Logic and Applications, Springer-Verlag, 1996.
[6] C. Dick. Computing the discrete Fourier transform on FPGA-based systolic arrays. ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, February 1996.
[7] J. G. Eldredge, B. L. Hutchings. Density enhancement of a neural network using FPGAs and runtime reconfiguration. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1994.
[8] P. Graham, B. Nelson. FPGA-based sonar processing. ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, February 1998.
[9] J. D. Hadley, B. L. Hutchings. Design methodologies for partially reconfigured systems. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1995.
[10] S. Hemmert, B. Hutchings, A. Malvi. An application-specific compiler for high-speed binary image morphology. Proceedings of the 9th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 2001.
[11] R. Hudson, D. Lehn, P. Athanas. A runtime reconfigurable engine for image interpolation. Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, April 1998.
[12] B. L. Hutchings, P. Bellows, J. Hawkins, S. Hemmert, B. Nelson, M. Rytting. A CAD suite for high-performance FPGA design. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1999.
[13] B. L. Hutchings, R. Franklin, D. Carver. Assisting network intrusion detection with reconfigurable hardware. Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, April 2002.
[14] B. L. Hutchings, M. J. Wirthlin. Implementation approaches for reconfigurable logic applications. Field-Programmable Logic and Applications, August 1995.
[15] W. B. Ligon III, S. McMillan, G. Monn, K. Schoonover, F. Stivers, K. D. Underwood. A re-evaluation of the practicality of floating-point operations on FPGAs. Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, 1998.
[16] W. E. King, T. H. Drayer, R. W. Conners, P. Araman. Using MORPH in an industrial machine vision system. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1996.
[17] H. T. Kung. Why systolic architectures? IEEE Computer 15(1), 1982.
[18] S. Y. Kung. VLSI Array Processors. Prentice-Hall, 1988.
[19] T. Moeller, D. R. Martinez. Field-programmable gate array based radar front-end digital signal processing. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1999.
[20] G. Panneerselvam, P. J. W. Graumann, L. E. Turner. Implementation of fast Fourier transforms and discrete cosine transforms in FPGAs. Fifth International Workshop on Field-Programmable Logic and Applications, September 1995.
[21] R. J. Petersen. An Assessment of the Suitability of Reconfigurable Systems for Digital Signal Processing. Master's thesis, Brigham Young University, 1995.
[22] E. Roesler, B. Nelson. Novel optimizations for hardware floating-point units in a modern FPGA architecture. Proceedings of the 12th International Workshop on Field-Programmable Logic and Applications, August 2002.
[23] N. Shirazi, P. M. Athanas, A. L. Abbott. Implementation of a 2D fast Fourier transform on an FPGA-based custom computing machine. Fifth International Workshop on Field-Programmable Logic and Applications, September 1995.
[24] N. Shirazi, A. Walters, P. Athanas. Quantitative analysis of floating point arithmetic on FPGA-based custom computing machines. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1995.
[25] A. Slade, B. Nelson. Reconfigurable computing application frameworks. Proceedings of the IEEE Symposium on Field-Programmable Custom Computing Machines, April 2003.
[26] L. E. Turner, P. J. W. Graumann, S. G. Gibb. Bit-serial FIR filters with CSD coefficients for FPGAs. Fifth International Workshop on Field-Programmable Logic and Applications, September 1995.
[27] K. Underwood. FPGAs vs. CPUs: Trends in peak floating-point performance. Proceedings of the ACM/SIGDA 12th International Symposium on Field-Programmable Gate Arrays, 2004.
[28] J. E. Vuillemin. On computing power. Programming Languages and System Architectures, Lecture Notes in Computer Science, vol. 781, Springer-Verlag, 1994.
[29] M. J. Wirthlin, B. L. Hutchings. A dynamic instruction set computer. Proceedings of the IEEE Workshop on FPGAs for Custom Computing Machines, April 1995.
[30] M. J. Wirthlin. Constant coefficient multiplication using look-up tables. Journal of VLSI Signal Processing 36, 2004.

APPLICATIONS
Up to now, we have focused on the technology behind reconfigurable devices, their integration in different systems, and their programmability at different levels of abstraction. The natural question which arises is: in which areas can reconfigurable computing be helpful? If such areas of application exist, then we would also like to know what the gain of using reconfigurable technology is, compared to the processing devices used before. The gain of using one solution rather than another is usually expressed in terms of computation speed, improvement in power consumption, price, or even ease of programming. In [57], Craven and Athanas provided a performance/price comparative study between FPGA-based high-performance computing machines and traditional supercomputers. Based on the gain in accelerating floating-point computations, they came to the conclusion that the price to pay was too high compared to the marginal gain obtained when using FPGAs in supercomputers. Other researchers have reported impressive gains with FPGA-based computing machines in several fields of computation, and several companies that were offering FPGA-based computing solutions a decade ago are still operating and even growing. Our goal in this chapter is to present some applications that benefit from an implementation on reconfigurable devices. Almost all the experiments reported were done on FPGA platforms. We will not report all the experiments in full detail. We rather focus on the coarse-grained structure of the applications and point out where reconfiguration can be helpful. We are primarily concerned with the speed improvement and the use of the flexibility of reconfigurable computing systems.

Despite the very large number of applications that could benefit from an implementation in FPGAs, we have selected only a handful of application domains to present here: pattern matching, video streaming, signal processing using distributed arithmetic, control, and supercomputing.

1. Pattern Matching
Pattern matching can be defined as the process of checking if a character string is part of a longer sequence of characters. Pattern matching is used in a large variety of fields in computer science. In text-processing programs like Microsoft Word, pattern matching is used in the search function. The purpose is to match the keyword being searched against a sequence of characters that builds the complete text. In database information retrieval, the contents of different fields of a given database entry are matched against the sequence of characters that builds the user request. Searching in genetic databases also uses pattern matching to match character sequences from database entries against the sequence of characters that builds a given query; in this case the alphabet is built upon a set of genomic characters. Speech recognition and other pattern recognition tools also use pattern matching as a basic operation on top of which complex intelligent functions may be built, in order to better classify audio sequences. Other applications using pattern matching are: dictionary implementation, spam avoidance, network intrusion detection, and content surveillance.
Because text mining, whose primary role is the categorization of documents, makes heavy use of pattern matching, we choose in this section to present the use of pattern matching in text mining and to point out the possible use of reconfiguration.
Document categorization is the process of assigning a given set of documents from a database to a given set of categories. The categories can either be manually defined by a user, or they can be computed automatically by a software tool. In the first case the categorization is supervised; in the second case we have an unsupervised categorization. In categorization, the first step usually consists of indexing the collection of available documents. This is usually done through the so-called vector space representation. In this model a document is represented as a vector of keywords or terms present in the document. A complete collection of n documents over a list of m keywords is then represented by a term-by-document matrix $A \in \mathbb{R}^{m \times n}$. An entry $a_{ij}$ in A represents the frequency of word i in document j. The term-by-document matrix is then used for indexing purposes or statistical analysis like LSI (Latent Semantic Indexing) [67]. Building a term-by-document matrix is done by scanning the documents of the given collection in order to find the appearance of keywords and return the corresponding entry in the matrix for each document. Pattern matching is used for this purpose.
The first advantage of using a reconfigurable device here is the inherent fine-grained parallelism that characterizes the search. Many different words can be searched for in parallel by matching the input text against a set of words on different paths. The second advantage is the possibility of quickly exchanging the list of searched words by means of reconfiguration.
Pattern matching has been investigated in different works using different approaches on FPGAs [52] [85] [104] [151] [101] [180] [34], each with a different overhead in compiling the set of words down to the hardware and different capacities. We define the capacity of a search engine as the number of words that can be searched for in parallel. A large capacity also means a high complexity of the function to be implemented in hardware, which in turn means a large amount of resources. The goal in implementing a search engine in hardware is to achieve maximal hardware utilization, i.e., as many words as possible that can be searched for in parallel. We present in this section various hardware implementations of pattern matching, each with its advantages and drawbacks.

1.1 The Sliding Windows Approach

One approach for text searching is the so-called sliding window (SL) [85] [151]. In the 1-keyword version, the target word is stored in one register, each character being stored in one register field consisting of a byte. The length of the register is equal to the length of the word it contains. The text is streamed through a separate shift register whose length is the same as that of the keyword. For each character of the given keyword stored as a byte in one register field, an 8-bit comparator is used to compare this character with the corresponding character of the text, which streams through the shift register. A hit occurs when all the comparators return the value true.
The sliding window can be extended to check for a match of multiple patterns in parallel. Each target word is then stored in one register and has as many comparators as required. The length of the window (the number of characters in the shift register) is defined by the segment of the text to be processed. Words longer than the window cannot be handled. To overcome this problem, one can define the length of the window as the maximum length over all the target words. In this case, all the words with a length smaller than the maximum length should be filled with don't-care characters to be handled correctly. The example of figure 9.1 shows the structure of a sliding window with three keywords.
The main advantage of this approach resides in the reconfiguration. Because each keyword is stored in an independent register, the reconfiguration can happen without affecting the other words, thus providing the possibility to gradually and quickly modify the dictionary.1

1 With dictionary, we mean here the set of keywords compiled into the reconfigurable device.
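
In software terms, the matcher can be modeled as follows. This minimal Python sketch is illustrative only: the hardware uses one 8-bit comparator per register field, while here a deque stands in for the shift register and None marks don't-care padding:

# Sliding-window multi-keyword search: one padded target per keyword,
# one comparator per character position.
from collections import deque

def sliding_window_search(text, keywords):
    width = max(len(k) for k in keywords)
    # Shorter keywords are left-padded with don't-care slots (None).
    targets = [[None] * (width - len(k)) + list(k) for k in keywords]
    window = deque([None] * width, maxlen=width)
    hits = []
    for pos, ch in enumerate(text):          # text streams one char per cycle
        window.append(ch)
        for k, t in zip(keywords, targets):
            # All per-character comparators must agree for a hit.
            if all(c is None or c == w for c, w in zip(t, window)):
                hits.append((pos - len(k) + 1, k))
    return hits

print(sliding_window_search("the contest", ["conte", "test"]))
# [(4, 'conte'), (7, 'test')]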

Figure 9.1. Sliding windows for the search of three words in parallel

This method, however, has a main drawback: the redundancy in the number of register fields used to store the words, as well as in the number of comparators. Redundancy in the sliding-window approach reduces the capacity of such an implementation, thus making its use not competitive. In the case of the Xilinx Virtex FPGA XCV300 with a maximum of 6144 flip-flops, a sliding window of length 10 needs 10 × 8 = 80 flip-flops to store one target word in the device. Assume that device placement and routing permit a utilization of 80%. The number of words which can be folded into such an FPGA will theoretically be in the range of 60, which is small compared to the number of words which can be folded into the same device with a more efficient implementation. A better approach would be to perform the comparison only on a given alphabet consisting of the set of common characters and to use the results in the evaluation of each single word. This would lead to a reduced number of register fields and a reduced number of comparators.

1.2 Hashed Table-Based Text Searching

The hashed table-based text searching approach was implemented on the SPLASH II platform [180]. The text to be processed is streamed in as pairs of consecutive characters called superbytes. An incoming superbyte is mapped onto one character of the target alphabet. Each non-alphabet character is mapped to a delimiter. To determine whether or not a word is in the target list, a hashing technique is used. A hash table in memory stores the value of a presence bit for a particular target word. An entry of zero in this table means that the word is not present, and a value of one means that the word is probably present. A hash register (with a length of 22 bits in the SPLASH implementation), which is initially set to zero, is incrementally modified by the incoming superbytes. At the end of a word, marked by a delimiter, the hash register is reset to zero to allow the next word to be processed. The hash register is used to address the hash table of 2^22 pseudo-random mappings of words. The modification of the hash register happens as follows: when a non-delimiter superbyte is encountered, the contents of the 22-bit hash register are updated by first XOR-ing the upper 16 bits of the register with the incoming superbyte and the value of a hash function. The modified value of the hash register is then circularly shifted by a fixed number of bits. Upon reaching the end of a word, the content of the register is used to address the hash table and determine the value of the presence bit. To reduce the likelihood of false hits, a number of independent hash functions is calculated for each word of the text, with the stipulation that each hash function lookup of a word must result in a hit for that word to be counted as a target word.
Estimations of the performance of the SPLASH text searching have been done mainly on the basis of the critical path length returned by the place and route tool. Communication between the host computer and the boards, like memory latency, has not been taken into consideration.
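
A minimal Python model of the hash-register update may clarify the scheme. The text does not give the exact hash function, shift amount, or bit placement, so the constants below are illustrative assumptions:

# SPLASH-style 22-bit hash register: XOR the upper 16 bits with the
# superbyte and a hash value, then rotate. Constants are toy choices.
MASK22 = (1 << 22) - 1

def rotl22(v, n):
    return ((v << n) | (v >> (22 - n))) & MASK22

def update(reg, superbyte, h):
    reg ^= ((superbyte ^ h) & 0xFFFF) << 6   # bits 6..21 = upper 16 bits
    return rotl22(reg, 3)                    # fixed circular shift (assumed 3)

def hash_word(word):
    reg = 0
    for a, b in zip(word[::2], word[1::2]):  # stream superbytes (pairs);
        sb = (ord(a) << 8) | ord(b)          # odd trailing char ignored here
        reg = update(reg, sb, h=0x1234)      # fixed toy hash-function value
    return reg                               # indexes a 2^22 presence table

presence = bytearray(1 << 22)                # the hash table in memory
presence[hash_word("conte")] = 1             # mark a target word
print(presence[hash_word("conte")])          # 1 -> probably present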

1.3 Automaton-Based Text Searching

It is well known that any regular grammar can be recognized by a deterministic finite state machine (FSM). In an automaton-based search algorithm, a finite state machine is built on the basis of the target words. The target words define a regular grammar which is compiled into an automaton acting as a recognizer for that grammar. When scanning a text, the automaton changes its state with the appearance of characters. Upon reaching an end state, a hit occurs and the corresponding word is marked as found. One advantage of the FSM-based search machine is the elimination of the preprocessing step done in many other methods to remove stop words (like 'the', 'to', 'for', etc., which do not affect the meaning of statements) from documents. In [52], the implementation of the hardware-based retrieval machine is done using a memory to store the transition table. Two registers hold the current state and the incoming character. The logic is only used to decode the address of the next state in memory and to set the correct values in the registers.
The performance (speed) of such an implementation is governed by the path (RAM output → state register → RAM address → RAM output), which can considerably slow down the search process because of the multiple memory accesses. To increase the speed of an FSM implementation, memory accesses can be suppressed by compiling the transition table into hardware and providing a mechanism to execute the state machine. Folding big transition tables into FPGAs can be difficult because of the limited capacity of those devices. Transition tables for word recognizers are often full of backtracking edges and crossing edges (edges which connect two nodes on different paths in the FSM tree). With the sequential structure of the memory-based recognizer, it is almost impossible to eliminate the backtracking edges, which constitute in most cases the biggest part of the transition table.
The SPACE (Scalable Parallel Architecture for Concurrency Experiments) machine [104] [105] makes use of the parallel structure of FPGAs to eliminate backtracking edges from the FSM. This search machine is an FSM-like implementation for text searching capable of handling a search over a list of almost 100 target words. The search is performed simultaneously on a board with 16 CAL1024 FPGAs, without using a memory to store the transition table. Each keyword is compiled into a separate FSM which is mapped to the hardware device. The text is then streamed into the device, character by character, in ASCII form. For case insensitivity, the 8-bit characters are mapped by the preprocessor to 5-bit characters. With the incoming characters, which clock the flip-flops, all the FSMs move simultaneously to their next states. Upon reaching an end state, the corresponding accumulator is incremented and the result is returned. The backtracking edges appearing in the formal representation of the FSM become redundant and are not taken into consideration. Figure 9.2a) shows the transition graph of the FSM for the word "conte". The transition table requires 5 × 6 memory locations to be stored (figure 9.2b)). For the same FSM, the hardware representation needs only 4 flip-flops, 4 AND gates, and 5 comparators (figure 9.2c)).
While having the advantage that a word already folded into the FPGA can be replaced with a new target word of the same length, the SPACE implementation presents two major inconveniences. First, it does not take into consideration words which share a common prefix, thus increasing the amount of resources used to store the character states. Second, it uses one comparator for each character of a target word, which increases the resources needed to implement the comparators. Those two factors lead to redundancy in flip-flop and look-up table utilization, as is the case in the sliding-window recognizer. On the SPACE board, with a total of 8192 flip-flops, it has been possible to implement a list of only 96 target words with an average length of 12.
When taking into consideration the common prefixes of words, it is possible to save a considerable number of flip-flops. For this purpose, the search machine can be implemented as one automaton recognizer common to all the target words. This approach was first presented in [34]. As shown in figure 9.3, the resulting structure is a tree in which a path from the root to a leaf determines the appearance of a corresponding keyword in the streamed text. Words which

Figure 9.2. FSM recognizer for the word "conte": a) state diagram, b) transition table, c) basic structure of the hardware implementation: 4 flip-flops are needed instead of the 5 × 6 transition table

share a common prefix use a common path from the root corresponding to the length of their common prefix. A split occurs at the node where the common prefix ends. In the hardware implementation of a group of words with a common prefix, common flip-flops are used to implement the common path (figure 9.3a)).

Figure 9.3. a) Use of the common prefix to reduce the number of flip-flops of the common word detector for "partir", "paris", "avale", "avant". b) Implementation without use of common prefix and common comparator set

For each character in the target alphabet only one comparator is needed. The comparison occurs in this case once, at a particular location in the device. Incoming characters can be directly sent to the set of all possible comparators. Upon matching a particular one, a corresponding signal is sent to all the flip-flops which need the result of that comparison. This method reduces the number of comparators needed by almost the sum of the lengths of all target words.
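
The prefix sharing can be modeled with a small trie in Python. This illustrative sketch is a software stand-in for the FSM tree: in hardware all word paths advance simultaneously with each character, whereas this model simply restarts at every text offset:

# Shared-prefix (trie) word recognizer: common prefixes share states.
def build_trie(words):
    trie = [{}]                               # state 0 is the root
    accepting = {}
    for w in words:
        s = 0
        for ch in w:
            if ch not in trie[s]:
                trie.append({})
                trie[s][ch] = len(trie) - 1
            s = trie[s][ch]
        accepting[s] = w
    return trie, accepting

def scan(text, trie, accepting):
    hits = []
    for i in range(len(text)):                # naive restart at each offset
        s = 0
        for ch in text[i:]:
            nxt = trie[s].get(ch)
            if nxt is None:
                break
            s = nxt
            if s in accepting:
                hits.append((i, accepting[s]))
    return hits

trie, acc = build_trie(["partir", "paris", "avale", "avant"])
print(len(trie) - 1)                  # 15 states vs. 21 without prefix sharing
print(scan("il va partir", trie, acc))   # [(6, 'partir')]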
The overall structure of the search engine previously explained is given in figure 9.4. It consists of an array of comparators to decode the characters of the FSM alphabet, a state decoder which moves the state machine into its next states, and a preprocessor to map incoming 8-bit characters to 5-bit characters, thus mapping all characters to lower case. Since case insensitivity is assumed in most applications of information retrieval, the preprocessor is designed to map upper- and lower-case characters to the binary codes 1 to 26 and all remaining characters to 0 [104]. Characters are streamed to the device in ASCII notation. An incoming 8-bit character triggers the clock and is mapped to a 5-bit character which is sent to the comparator array. The 5-bit signals are distributed to all the comparators, which operate in parallel to check whether the incoming character matches an alphabet character. If a match occurs for comparator k, the output signal char_k is set to one and all the others are set to zero. If no match occurs, all the output signals are set to zero. Figure 9.5 illustrates this concept on the basis of the word "tictic".

Figure 9.4. Basic structure of an FSM-based word recognizer that exploits the common prefix and a common set of characters

The one-hot approach (figure 9.2c)) is probably the best implementation for the kind of automaton realized in the state decoder. First, it takes advantage of the regular structure of the FPGA and is the fastest way to implement an FSM in hardware [142]. Moreover, there is no need to incorporate backtracking edges in a suitable hardware representation. To implement a set of different words, the resulting tree structure of the common FSM is mapped into hardware. All the leaves corresponding to target words are connected to the hit port. The width of the hit port is determined by the number of bits needed to code the number of target words. Upon matching a particular word, one leaf of the automaton is reached and the hit output is set to the index of the matched target word.

Figure 9.5. Processing steps of the FSM for the word "tictic"

1.4 Automaton Generation

A tool was provided in [34] to allow for the automatic generation of a term-by-document matrix from a document collection with a given list of target words. This tool automatically generates a VHDL description of the search machine, and the synthesis can be done with available vendor or third-party tools. The tool is based on the method described in [14] to generate the corresponding tree structure, which is then mapped into hardware. The resulting state machine is optimized (in terms of number of states) and is able to solve the Mississippi problem2 for a set of target words. With the union, concatenation, and Kleene star (Kleene closure) operations, all the basic FSMs (each related to one word) are used to build one FSM capable of recognizing all the target words. A set of final states is linked to the appearance of target words in a streaming text. The resulting transition table is then translated into VHDL code in such a way that
no backtracking and crossing edges appear in the configuration. The resulting VHDL code is then synthesized, mapped, placed, routed, and configured to be downloaded into the FPGA with the help of the generated scripts. The document collection is then scanned, and for each document a column vector representing the weight of the target words in that document is returned. The performance of this approach was evaluated using the Spyder Virtex board from the FZI Karlsruhe [214], a PCI board featuring a Xilinx Virtex XCV300. A list of 330 keywords with an average of 10 characters each could be compiled into the FPGA, leading to an automaton with 2370 states. The utilization of the FPGA was 59% (1817 CLB slices out of 3072 slices), including the bus interface. The stream rate of the characters through the FPGA was 3 million characters per second.3

2 Can the state machine recognize the word 'issip' in a file containing the word Mississippi? [52]
3 4 million if fewer than 255 words are compiled into the word recognizer.

2. Video Streaming
Video streaming is the process of performing computations on video data which are streamed through a given system, picture after picture. Implementations of video streaming on FPGAs have attracted several researchers in the past, resulting in the building of various platforms [39] [115] [198] [72] [207] [140] [201] for various computations. Two main reasons are always stated as motivation for implementing video streaming in FPGAs: the performance, which results from the exploitation of the inherent parallelism in the target applications in order to build a tailored hardware architecture, and the adaptivity, which can be used to adjust the overall system by exchanging parts of the computing blocks with better-adapted ones.
In [193], Richard G.S. discusses the idea of parameterized program generation of convolution filters in an FPGA. A 2-D filter is assembled from a set of multipliers and adders, which are in turn generated from a canonical serial-parallel multiplier stage. Atmel application notes [13] discuss 3x3 convolution implementations with run-time reconfigurable vector multipliers in Atmel FPGAs. To overcome the difficulties of programming devices with classic hardware description languages (HDLs) like VHDL and Verilog, Celoxica has developed the Handel-C language. Handel-C is essentially an extended subset of the standard ANSI-C language, specifically designed for use in a hardware environment. The Celoxica Handel-C compiler's DK1 development environment includes a tool, PixelStream, for the easy generation of video-processing functions for FPGA implementations. PixelStream offers a set of basic modules and functions that can be (graphically) combined to build a more complex datapath in an FPGA. The resulting design can be mapped to one of the Celoxica FPGA development boards.


We will not focus in this section on the details of video or image processing. A comprehensive description of video-processing algorithms and their hardware implementation on FPGAs is provided in [153].
The Sonic architecture [115] is a configurable computing platform for acceleration and real-time video image processing. The platform consists of plug-in processing elements (PIPEs), FPGA daughter cards that can be mounted on a PCI board plugged into a PCI slot of a workstation. A PIPEflow bus connects adjacent PIPEs, while the PIPE bus provides a global connection to the PIPEs. Sonic's architecture exploits the reconfiguration capabilities of the PIPEs to adapt part of the computation flow at run-time.
The Splash [97] architecture was also used in video processing. Its
systolic array structure makes it well suited to image processing.
The ESM platform introduced in section 7.0.2 presents an optimal pipelined architecture for the modular implementation of video streams. Its organization in exchangeable slots, each of which can be reconfigured at run-time to perform a different computation, makes it a viable platform for video streaming. Moreover, the communication infrastructure available on the platform provides unlimited access of modules to their data, no matter on which slot they are placed, thus increasing the degree of flexibility in the system.
The computation on video frames is usually performed in different steps, while the pictures stream through the datapath. Video processing therefore presents a nice structure for a pipelined computation. This has led to a chained architecture on the basis of which most video streaming systems are built. The chain consists of a set of modules, each of which is specialized for a given computation. The first module in the chain is in charge of capturing the video frames, while the last module outputs the processed frames. Output can be rendered on a monitor or sent as compressed data over a network. Between the first and the last modules, several computational modules can be used according to the algorithm implemented. A module can be implemented in hardware or in software according to the goals. While software provides a great degree of flexibility, it is usually not fast enough to carry out the challenging computations required in video applications. ASICs can be used to implement the computationally demanding parts; however, ASICs do not provide the flexibility needed in many systems. On a reconfigurable platform, the flexibility of a processor can be combined with the efficiency of ASICs in order to build a high-performance, flexible system. The partial reconfiguration capability of reconfigurable devices provides the possibility of replacing a given module in the chain at run-time.
Most video capture modules provide the frames to the system in a pixel-by-pixel manner, leading to a serial computation on the incoming pixels. Since many algorithms need the neighborhood of a pixel to compute its new value, a complete block must be stored and processed for each pixel. Capturing the neighborhood of a pixel is often done using a sliding-window data structure of varying size. The size of the sliding window corresponds to that of the neighborhood region needed for the computation of the new pixel value. As shown in figure 9.6, a sliding window is a data structure used to sequentially capture the neighborhood of pixels in a given image.

FIFO1   W11 W12 W13 W14 W15   (disposed)
FIFO2   W21 W22 W23 W24 W25
FIFO3   W31 W32 W33 W34 W35
FIFO4   W41 W42 W43 W44 W45
RAM     W51 W52 W53 W54 W55

Figure 9.6. Implementation of a 5 × 5 sliding window

A given set of buffers (FIFOs) is used to update the window, and the number of FIFOs varies according to the size of the window. In each step, a pixel is read from memory and placed in the lower-left cell of the window; the upper-right pixel is disposed of, i.e., output, and all the pixels in the right part of the window are placed at the tail of the FIFOs one level higher (a software model of this update is sketched after the following list). The processing part of the video is a normal image-processing algorithm combining some of the basic operators like:

Median filtering
Basic Morphological operations
Convolution
Edge detection
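
As referenced above, here is a minimal Python model of the FIFO-based window update of figure 9.6 (illustrative only; the class name, orientation conventions, and FIFO sizing are our assumptions):

# 5x5 sliding window fed by four FIFOs plus the RAM; one pixel per step.
from collections import deque

class SlidingWindow:
    def __init__(self, image_width, size=5):
        self.size = size
        # One FIFO between consecutive window rows; each holds the pixels
        # that lie between the end of one window row and the start of the next.
        self.fifos = [deque([0] * (image_width - size))
                      for _ in range(size - 1)]
        self.win = [[0] * size for _ in range(size)]   # index 0 = newest pixel

    def push(self, pixel):
        disposed = self.win[0][-1]                 # oldest pixel leaves the top row
        for r in range(self.size - 1):
            # The pixel leaving row r+1 travels through FIFO r into row r.
            self.fifos[r].append(self.win[r + 1][-1])
            self.win[r] = [self.fifos[r].popleft()] + self.win[r][:-1]
        # The new pixel (read from RAM) enters the bottom row.
        self.win[-1] = [pixel] + self.win[-1][:-1]
        return disposed

w = SlidingWindow(image_width=8)
for p in range(64):                                # stream an 8x8 image
    w.push(p)
print(w.win)                                       # the most recent 5x5 neighborhood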

In the field of video compression, the processing is usually done in a block-by-block manner, different from the sliding window. However, the overall structure is almost the same: the neighborhood must always be saved in order to provide the necessary data for the computation of a new pixel value.
As explained earlier in this section and as figure 9.7 shows, the architecture of a video streaming system is usually built on a modular and chained basis. The first module deals with the image captured from an image source. This can be a camera, a network module which collects the pictures through a network channel, or any other source. The frames are alternately written to RAM1 and RAM2 by the capture module. The second module collects the pictures from RAM1 or RAM2, whichever is not in use by the first module, builds

Figure 9.7. A modular architecture for video streaming on the ESM

the sliding window and passes it to the third module, which processes the pixel and saves it in its own memory or directly passes it to the next module. This architecture presents a pipelined computation in which the computational blocks are the modules that process the data frames. RAMs are used to temporarily store frames between two modules, thus allowing a frame to stream from RAM to RAM and the processed pictures to the output.

2.1 Enabling Adaptivity through Reconfiguration

An adaptive video streaming system is characterized by its ability to optimize the computation according to changing environmental conditions. In most cases, only one module in the computation chain must be changed while the rest keeps running. The video capture module, for example, can be changed if we want to optimize the conversion of pixels to match the current brightness or the current landscape. It is also possible to change the video source from one camera to a new one with different characteristics, or the source can be, for instance, a network channel collecting the frames in compressed form. In an adaptive system, the functionality of a module on the computation path should be changed very quickly, and without affecting the rest of the system. Two possibilities exist for this purpose. The first one consists of providing some parameters to the module to instruct it to switch from one algorithm to the next. However, the structures of the basic algorithms are not always the same. A Sobel filter cannot be changed into a Laplace filter by just changing parameters.

This is also true for a median operator which cannot be replaced by a Gauss
operator by just changing the parameters. The network capture module and the
camera capture module require two different algorithms for collecting the pixels
and bringing them in the system. The second possibility consists of re-placing
the complete module at run-time with a module of the same size, but different in
its structure, while the rest of the system keeps running. Recon-figurable devices
in general and FPGAs in particular fulfill this requiremen ts. Many available
FPGAs can be partly configured, while the rest of the syste m is running.
Moreover many FPGAs provide small size on-chip memories, able to hold part
of a frame (the so called region of interest). It is therefore possible to perform
many computations in parallel on different regions of interest, thus increasing
the performance and flexibility of video applications.

3. Distributed Arithmetic
Distributed arithmetic (DA) is certainly one of the most powerful tools for the computation of the dot product of two vectors, one of which is constant, i.e., consists of constant values. DA exploits the LUT-based computation natural to FPGAs by storing in a LUT all the possible results for a set of variable combinations. Computation at run-time then only consists of retrieving the results from the LUT where they were previously stored. The elements of the variable vector are used to address the look-up table and retrieve partial sums in a bit-serial (1-BAAT, 1 Bit At A Time) manner. Distributed arithmetic has been investigated in [215] [158] [223] [224] [33] [95] [63] [62] [31].
One of the notable DA contributions is the work of White [215], who proposed the use of ROMs to store the precomputed values. The surrounding logic to access the ROM and retrieve the partial sums had to be implemented on a separate chip. Because of this unwieldy two-chip architecture, distributed arithmetic was initially not successful. With the appearance of SRAM-based FPGAs, DA became an interesting alternative for implementing signal-processing applications in FPGAs [158] [223] [224]. Because of the availability of SRAM in those FPGAs, the precomputed values could now be stored on the same chip as the surrounding logic.
In [33], the design automation of DA architectures was investigated, and a tool was designed to help the user in the code generation of a complete DA design and to perform investigations of the various tradeoffs. Also, an initial attempt to implement DA for floating-point numbers in order to increase the accuracy was presented. However, the amount of memory required to implement such a solution makes it applicable only to small examples.
The idea behind distributed arithmetic is to distribute the bits of one operand across the equation to be computed. This operation results in a new equation, which can then be computed in a more efficient way. The dot product of two vectors X and A is given by the following equation:

$$Z = X \times A = \sum_{i=0}^{n} (X_i \times A_i) \qquad (3.1)$$

We assume that $A = (A_1, \dots, A_n)$ is a constant vector of dimension n and $X = (X_1, \dots, X_n)$ is a variable vector of the same dimension. With the binary representation $X_i = \sum_{j=0}^{w-1} X_{ij} 2^j$ of $X_i$, where w is the width of the variables and $X_{ij} \in \{0, 1\}$ is the j-th bit of $X_i$, equation (3.1) can be rewritten as:

$$Z = \sum_{i=0}^{n} A_i \times \sum_{j=0}^{w-1} X_{ij} 2^j = \sum_{j=0}^{w-1} 2^j \sum_{i=0}^{n} X_{ij} A_i \qquad (3.2)$$

Expanding equation (3.2) for better understanding leads to equation (3.3):

$$Z = (X_{00} A_0 + X_{10} A_1 + \dots + X_{n0} A_n)\,2^0 + (X_{01} A_0 + X_{11} A_1 + \dots + X_{n1} A_n)\,2^1 + \dots + (X_{0(w-1)} A_0 + X_{1(w-1)} A_1 + \dots + X_{n(w-1)} A_n)\,2^{w-1} \qquad (3.3)$$
Equation (3.2) represents the general form of distributed arithmetic. It shows how the variable operands are distributed in the equation. Each bit of each variable operand contributes only once to the sum $\sum_{i=0}^{n} X_{ij} A_i$. Because $X_{ij} \in \{0, 1\}$, the number of possible values $\sum_{i=0}^{n} X_{ij} A_i$ can take is limited to $2^n$. Therefore, they can be precomputed and stored in a look-up table, provided that enough space is available in the LUT. Let us call such a look-up table a distributed arithmetic look-up table (DALUT). The DALUT has a size of $w \times 2^n$ bits. At run-time, the n-tuple $(X_{1j}, X_{2j}, \dots, X_{nj})$ is used to address the DALUT and retrieve the partial sums $\sum_{i=0}^{n} X_{ij} A_i$: the n-tuple $(0, 0, \dots, 0)$ addresses location 0, $(0, 0, \dots, 1)$ addresses location 1, and $(1, 1, \dots, 1)$ addresses location $2^n - 1$.

The complete dot product Z requires w steps to be computed in a 1-BAAT manner. At step j, the j-th bit of each variable is used to address the DALUT and retrieve the value $\sum_{i=0}^{n} X_{ij} A_i$. This value is then left-shifted by j positions (which corresponds to a multiplication by $2^j$) and accumulated. After w accumulations, the value of the dot product can be collected. The DA datapath for this computation is straightforward, as figure 9.8 shows.
Figure 9.8. Architecture of a distributed arithmetic datapath

The DALUT address consists of a set of bits, each of which belongs to one component of the input vector. A barrel shifter is used to shift the number retrieved from the DALUT at the j-th step by j positions. The shifted value is then added to the value in the accumulator. After w steps, the result is collected from the accumulator.
Many enhancements can be made to a DA implementation. The size of the DALUT, for example, can be halved if only positive values are stored. In this case the sign bit, which is the first bit of a number, is used to decide whether the retrieved value should be added to or subtracted from the accumulated sum. On the other hand, it is obvious that all the bit operations, i.e., the retrievals of values from the DALUT, are independent of each other. The computation can therefore be performed in parallel. The degree of parallelism in a given implementation depends on the memory available to implement the DALUTs. In the case where w DALUTs and datapaths can be instantiated in parallel, the retrieval of all partial sums can be done in a single step, meaning that the complete computation can be done in only one step instead of w as in the serial case.
In general, if k DALUTs are instantiated in parallel, i.e., for a computation performed on a k-BAAT basis, then w/k steps are required to retrieve all the partial sums, which can be directly accumulated, leading to a run-time of w/k steps. Figure 9.9 shows a datapath for this computation. The input values are segmented into different fields, each of which is assigned to a datapath for the retrieval of the partial sums from the corresponding DALUT. The values retrieved from the k DALUTs are shifted by the corresponding amounts and added to the accumulated sum. After w/k steps, the result can be collected from the accumulator.

Figure 9.9. k-parallel distributed arithmetic datapath

3.1 Distributed Arithmetic in High Dimension

In many computational areas like in mechanical control, computations are
not only limited to dot-product. Matrix operations are usually required,
whose general form can be defined by equation 3.4.
a
z2= 11 ... a1r x2 (3.4)
z1 ... x 1
zs a1s asr xr
...

... ...

This equation can be implemented using s DALUTs. The i-th DALUT ($i \in \{1, \ldots, s\}$)
is used to compute the dot-product $z_i = \sum_{j=1}^{r} x_j \times a_{ij}$ with the
constants $a_{i1}$ to $a_{ir}$. If there is enough space on the chip to hold all the
DALUTs, then equation 3.4 can be computed in a few clock cycles. As we will see
later, partial reconfiguration can also be used to implement many dot-products
if there is not enough memory on the chip to hold all the DALUTs.
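A minimal sketch of this scheme, following the conventions of the earlier DA sketch: one DALUT per output row, each row computed as an independent DA equation (in hardware the rows could run in parallel; all dimensions are illustrative):

#define S 3          /* output dimension */
#define R N          /* input dimension, reusing N from the earlier sketch */

static int32_t dalut_row[S][1 << R];   /* one DALUT per row of the matrix */

void da_matrix_vector(const uint16_t x[R], int64_t z[S]) {
    for (unsigned i = 0; i < S; i++) {        /* s independent DA equations */
        int64_t acc = 0;
        for (unsigned j = 0; j < W; j++) {
            unsigned addr = 0;
            for (unsigned v = 0; v < R; v++)
                addr |= ((x[v] >> j) & 1u) << v;
            acc += (int64_t)dalut_row[i][addr] << j;
        }
        z[i] = acc;
    }
}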

3.2 Distributed Arithmetic and Real Numbers


The straightforward approach to handle real numbers in distributed arithmetic
is the use of fixed-point. The representation of a real number using the
fixed-point approach is done by defining an integer part and a fractional part
for each number. The integer part is separated from the fractional part by a
point. Because the point is only imaginary and not physically available in the
number representation, operations on fixed-point numbers are not different from
those on normal integer numbers. The datapath previously presented can be used
without modification for real numbers represented as fixed-point.
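As a small illustration of this point, the following C sketch adopts a hypothetical Q8.8 fixed-point format (8 integer bits, 8 fractional bits; the text does not fix a particular split). Only the conversion helpers and the post-multiplication shift distinguish it from plain integer arithmetic:

#include <stdint.h>

typedef int32_t q8_8;                    /* value = raw / 256.0 */

static inline q8_8 to_q8_8(double v)   { return (q8_8)(v * 256.0); }
static inline double from_q8_8(q8_8 v) { return v / 256.0; }

/* A product of two Q8.8 numbers carries 16 fractional bits and must be
 * shifted back by 8 to stay in Q8.8; addition needs no adjustment. */
static inline q8_8 q_mul(q8_8 a, q8_8 b) {
    return (q8_8)(((int64_t)a * b) >> 8);
}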
Representing real numbers as fixed-point is advantageous in terms of computation
speed and the amount of resources required to implement the datapath.
However, the range of a fixed-point representation as well as its precision is
small compared to that of a floating-point representation. Therefore we would
like to handle real numbers as floating-point.
In this section we present a concept first investigated in [33] for handling
real numbers, represented as floating-point in the IEEE 754 format, using
distributed arithmetic. In IEEE 754 format, a number X is represented as follows:

$$X = (-1)^{S_X} \, 2^{e_X} \times 1.m_X \qquad (3.5)$$


In this equation, $e_X$ is the exponent (we consider that the subtraction of the
bias is already done and the result is $e_X$), $m_X$ is the mantissa and $S_X$ is the
sign of X. Without loss of generality, we will consider the sign to be part of
the mantissa, thus the new representation is $X = 2^{e_X} \times m_X$. With A, X and
Z all being floating-point numbers, the floating-point counterpart of equation
(3.1) is given by:

$$Z = X \times A = \sum_{i=1}^{n} (X_i \times A_i) = \sum_{i=1}^{n} (2^{e_{A_i}} \times m_{A_i}) \times (2^{e_{X_i}} \times m_{X_i}) = \sum_{i=1}^{n} 2^{e_{A_i} + e_{X_i}} \times (m_{A_i} \times m_{X_i}) \qquad (3.6)$$
The goal here is to compute and provide Z as a floating-point number, using
the same principle of partial sums previously described. At each step of the
computation, a floating-point value $F_i$ must be read from the floating-point
DALUT and added to an accumulated sum, which is also a floating-point
number. Since the adder used at this stage is a floating-point adder, we assume
that issues like rounding and normalization have been integrated in its
implementation. From the last part of equation 3.6, we see that the value
$2^{e_{A_i} + e_{X_i}} \times (m_{A_i} \times m_{X_i})$ represents a floating-point number with exponent
$e_{A_i} + e_{X_i}$ and mantissa $m_{A_i} \times m_{X_i}$. By setting $e_{F_i} = e_{A_i} + e_{X_i}$ and
$m_{F_i} = m_{A_i} \times m_{X_i}$, we have the requested values at each computation step.
Instead of computing the exponent part ($2^{e_{A_i} + e_{X_i}}$) of $F_i$ as well as its
mantissa ($m_{A_i} \times m_{X_i}$) on-line, the idea here consists of using two floating-point
DALUTs for each constant $A_i$. The values $e_{A_i} + e_{X_i}$ will then be precomputed
and saved in the first DALUT and $m_{A_i} \times m_{X_i}$ in the second one. Let us

call the first DALUT which stores the exponents the EDALUT and the sec-
ond DALUT which stores the mantissas the MDALUT. The size of
EDALUT, size(EDALU T ) as well as that of MDALUT, size(M DALU T )
are given in equations (3.7).

size(EDALU T ) = n × 2|E| × |E| bits (3.7)

|M |
size(M DALU T ) = n × 2 × |M | bits (3.8)
|E| is the exponent width and |M | is the mantissa width of the floating-
point representation.
The main argument against this approach is the size of the DALUTs used
for this floating-point DA implementation, which can be so large that an
implementation may not be possible. However, if we consider a DA implementation
involving five variables and five coefficients represented in the IEEE 754
floating-point format with an 8-bit exponent and a 10-bit mantissa, the total
memory requirement for the EDALUTs and the MDALUTs is:
$((5 \times 2^8 \times 8) + (5 \times 2^{10} \times 10))/1024 = 60$ Kbits. Therefore the EDALUTs and the
MDALUTs will easily fit into a very small low-cost FPGA like the Xilinx
Spartan III 50 with a total of 72 Kbits of Block RAM [225].
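This size estimate is easy to reproduce; the following C fragment evaluates equations (3.7) and (3.8) and prints the 60 Kbit figure of the example above (function names are illustrative):

#include <stdio.h>

/* Total DALUT memory for a floating-point DA design with n coefficients,
 * |E|-bit exponents and |M|-bit mantissas (equations 3.7 and 3.8). */
unsigned long da_lut_bits(unsigned n, unsigned e_bits, unsigned m_bits) {
    unsigned long edalut = (unsigned long)n * (1ul << e_bits) * e_bits;
    unsigned long mdalut = (unsigned long)n * (1ul << m_bits) * m_bits;
    return edalut + mdalut;
}

int main(void) {
    /* five coefficients, 8-bit exponent, 10-bit mantissa */
    printf("%lu Kbits\n", da_lut_bits(5, 8, 10) / 1024);   /* prints: 60 */
    return 0;
}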
For large vectors with numbers represented on large mantissas and exponents,
this approach will require a large amount of memory, certainly not directly
available on the chip. An external memory must therefore be used.
Having the EDALUTs and the MDALUTs for each coefficient, the datapath
will not be implemented as in the fixed-point or integer case. The variables are
no longer input in a bit-serial way. At step i, the variable $X_i$ is used to address
the i-th EDALUT and MDALUT. The bits of $e_{X_i}$ are used to access the
EDALUT, while the bits of $m_{X_i}$ are used to access the MDALUT in parallel.
The values collected from the EDALUT and the MDALUT are used to build the
floating-point number $F_i$. After n steps the floating-point dot-product is
computed. Figure 9.10 shows the datapath for the floating-point DA. Since all
the DALUTs for the n coefficients are available on the device, they can be
accessed in parallel to compute the dot-product in only one step.
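A software model of this step might look as follows, assuming per-coefficient tables edalut and mdalut that hold the precomputed sums $e_{A_i} + e_{X_i}$ and products $m_{A_i} \times m_{X_i}$; the floating-point accumulation, with its rounding and normalization, is delegated here to the host's double arithmetic, and all names and table sizes are illustrative (2^8 exponent entries, 2^10 mantissa entries, as in the example above):

#include <math.h>

double fp_da_dot(const unsigned ex[], const unsigned mx[],
                 int edalut[][256], double mdalut[][1024], unsigned n) {
    double z = 0.0;
    for (unsigned i = 0; i < n; i++) {
        int    e_fi = edalut[i][ex[i]];     /* e_Ai + e_Xi           */
        double m_fi = mdalut[i][mx[i]];     /* m_Ai * m_Xi           */
        z += ldexp(m_fi, e_fi);             /* F_i = m_fi * 2^e_fi   */
    }
    return z;
}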
Signal processing applications are usually based on the computation of the
dot-product of one vector of variables with a vector of constants. They are ideal
candidates for a DA implementation in reconfigurable hardware. As stated earlier
in this section, in order to facilitate the design of DA applications, a tool was
implemented in [33] for the design automation. Also, the possibility is given to
investigate the different tradeoffs for a given application. For a given device and
a given set of constants, different DA implementations can be generated, from
the full parallel implementation (w-BAAT), which can be computed in only one
step, to the full sequential DA (1-BAAT). For each possibility, the user is
provided with the area and speed of the design. Real numbers can be handled
either as fixed-point or as floating-point in the IEEE 754 format with the
technique previously defined. The width of the mantissa as well as that of the
exponent has to be provided. For the final design, the tool generates a
description in the Handel-C language.
Figure 9.10. Datapath of the distributed arithmetic computation for floating-point numbers

3.3 Use of Reconfiguration


The modification of designs implemented as distributed arithmetic can be
done at run-time through the replacement of the DALUTs. The remaining
part of the design need not be changed, because it is the same for all designs.
In this case, the designer does not have to deal with all the cumbersome
aspects of partial reconfiguration design. However, if the datawidth of the
operators changes, then the computing structure (shifter, adder,
accumulator) must be replaced as well. In this case the partial
reconfiguration of the device cannot be avoided. In section 4, we present the
use of reconfiguration in DA designs, using the adaptive controller concept,
which makes use of DA and partial reconfiguration.

3.4 Applications
In this section, we present one application, the implementation of a recursive
convolution algorithm for time domain simulation of optical multimode
intrasystem interconnects, that was substantially speeded up through the use
of distributed arithmetic. Another application that has benefited from an
implementation as DA is the adaptive controller. Because section 4 is
devoted to the adaptive controller, we explain only the first application here.

3.4.1 Recursive Convolution Algorithm for Time Domain Simulation of Optical Multimode Intrasystem Interconnects

In general, an optical intrasystem interconnect contains several receivers with
optical inputs driven by transmitters with optical outputs. The interconnections
of transmitters and receivers are made by a passive optical waveguide, which is
represented as a multiport (figure 9.11) using a ray tracing approach.
Figure 9.11. An optical multimode waveguide is represented by a multiport with several transfer paths.

The transfer of an optical signal along the waveguide can be computed by
a multiple convolution process. Frequency domain simulation methods are
not applicable owing to the high number of frequencies. Pure time domain
simulation methods are more efficient if the pulse responses can be represented
by exponential functions. The application of a recursive method for the
convolution of the optical stimulus signals at the input ports with the
corresponding pulse responses enables a time-efficient computation of the
optical response signals at the corresponding output ports. The recursive
formula to be implemented in three different intervals is given by equation 3.9.

$$y(t_n) = f_0 \cdot y(t_{n-1}) + f_4 \cdot x_0 - f_5 \cdot x_1 + f_{24} \cdot x_2 + f_{53} \cdot x_3 \qquad (3.9)$$
Because the values $f_0$, $f_4$, $f_5$, $f_{24}$ and $f_{53}$ are constants, while $y(t_{n-1})$, $x_0$, $x_1$, $x_2$,
$x_3$ are variables, equation 3.9 is well suited for a DA implementation.

For this equation, different tradeoffs were investigated in the framework
presented in [33] for the generation and evaluation of DA trade-off
implementations. Handel-C code was generated and the complete design
was implemented on a system built around the Celoxica RC1000-PP board,
equipped with a Xilinx Virtex 2000E FPGA and plugged into a workstation.

Workstation                  1 interval      3 intervals
Sun Ultra 10                 73.8 ms         354.2 ms
Athlon (1.53 GHz)            558 ms          1967.4 ms

FPGA (time)                  1 interval      3 intervals
Custom dot-product design    25.6 ms         76.8 ms
Sequential DA                19.4 ms         19.4 ms
3-parallel DA                6.4 ms          6.4 ms

FPGA (area)                  1 interval      3 intervals
Custom dot-product design    could not fit   could not fit
Sequential DA                yes (7 %)       yes (14 %)
3-parallel DA                yes (14 %)      yes (42 %)

Table 9.1. Results of the recursive convolution equation on different platforms

Table 9.1 provides the implementation results on different platforms. The
workstation is used for sending the variables to the FPGA and collecting the
result of the computation. The implementation of equation 3.9 in three intervals
occupies about 14 % of the FPGA area while running at 65 MHz. In order to
obtain maximal parallelism in the computation, 6 DALUTs could be implemented
in parallel, leading to a 6-fold speed-up. As can be seen in figure 9.12,
the 6-parallel DALUTs and the corresponding adder tree occupy about 76 % of
the FPGA area. Enough space is left for the routing.
The same design was implemented without the use of DA. It could not fit
into the same FPGA and the run-time was much larger than that of the DA
implementation. Both the performance and the area consumption of the DA
implementation are better than those of all the other architectures.

4. Adaptive Controller
In this section, we investigate the next field of application of
reconfigurable devices, namely the implementation of adaptive controllers,
also identified here as multi-controller, using partial reconfiguration.
We will first introduce the multi-controller concept. Thereafter, we inves-
tigate its implementation using the distributed arithmetic. Finally, the imple-
mentation of the whole design using partial reconfiguration is shown.

Figure 9.12. Screenshot of the 6-parallel DA implementation of the recursive convolution
equation on the Celoxica RC1000-PP platform

4.1 Adaptive Control


The task of a controller is to influence the dynamic behavior of a system
referred to as the plant. Control feedback is available if the input values for
the plant are calculated on the basis of the plant's outputs.
Adaptive controllers deal with the control of a plant, for instance a complex
mechatronic system, in a changing environment. In this case, the control is
modeled as a physical process that operates in a limited set of operating regimes.
Depending on the environmental conditions, the operating regime of the plant
must be changed to adapt to new conditions. With conventional methods it
might be possible to design one robust controller that optimally controls the
plant in all operating regimes. According to the conditions, the control regime
may be changed using a set of parameters. Besides the possibly slow response of
such a controller [166], its size can be too big due to the large number of possible
cases to be considered. To overcome this handicap, the controller modules can
be implemented as separate modules and stored in a database. They will be used
at run-time to replace the active controller, whenever a new control algorithm
must be implemented. The price to pay for this solution is the amount of
memory to store all possible module implementations as well as the time needed
to switch from one module to the next one.
The general architecture of an adaptive controller as described in [161] is
shown in figure 9.13. It consists of a set of controller modules (CM), each of
which is optimized for an operating regime of the plant. A supervisor
component is used to switch from module to module and set the active one.4 The
decision to switch from one CM to the next one is made on the basis of
measurements of physical values of the plant. The strategy of the supervisor can
vary from simple boolean combinations of the input values to very complex
reasoning techniques [206].

4 At a given time, the active controller is the one which controls the plant.
Figure 9.13. Adaptive controller architecture

The computation of the plant inputs is normally done in a time-discrete
manner, using a set of discrete sampling points $t_k, t_{k+1}, \ldots$. The time between two
sampling points is constant and defines the sampling period T. The controller
application can therefore be seen as a periodic real-time task, which must be
performed at every sampling point and must be completed within one period T.
While in a Von Neumann processor based implementation of the system
each module is implemented as a sequence of instructions, in a reconfigurable
device the tasks are implemented as hardware modules to be downloaded
onto the device at run-time. Let us take a look at the behavior of the single
modules and their implementations.

4.2 Controller-Module Specification


A common approach is to model the plant as a linear time-invariant system.
Based on this model and the requirements of the desired system behavior, a
linear controller is systematically derived using formal design methods.
Let us assume that every controller module CMi of figure 9.13 is designed
as a linear time-invariant system, optimized for a given operating regime of
the plant. After a time discretization has been performed, the behavior can be
captured in equation 4.1.
The input vector of the controller module, representing the measurements
from the sensors of the plant, is represented by u; y is the output vector of the
controller, which is used to drive the actuators of the plant; and x is the inner
state vector of the controller. The matrices A, B, C and D are used for the
calculation of the outputs based on the inputs.

$$x(k+1) = A\,x(k) + B\,u(k)$$
$$y(k) = C\,x(k) + D\,u(k) \qquad (4.1)$$
$$p = \dim(u), \quad s = \dim(x), \quad q = \dim(y)$$

The task of the digital controller is to calculate equation 4.1 during one
sampling interval. The new state $x(k+1)$ and the output $y(k)$ must be computed
before the next sampling point $k+1$.
The state space equations of a digital linear time-invariant controller
(equation 4.1) can be written as a product of a matrix of constant
coefficients and a vector of variables (equations 4.2 and 4.3).

$$z_{(s+q,1)} = M_{(s+q,\,s+p)} \; v_{(s+p,1)} \qquad (4.2)$$

$$\underbrace{\begin{pmatrix} x_1(k+1) \\ \vdots \\ x_s(k+1) \\ y_1(k) \\ \vdots \\ y_q(k) \end{pmatrix}}_{z} = \underbrace{\begin{pmatrix} a_{11} & \cdots & a_{1s} & b_{11} & \cdots & b_{1p} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{s1} & \cdots & a_{ss} & b_{s1} & \cdots & b_{sp} \\ c_{11} & \cdots & c_{1s} & d_{11} & \cdots & d_{1p} \\ \vdots & & \vdots & \vdots & & \vdots \\ c_{q1} & \cdots & c_{qs} & d_{q1} & \cdots & d_{qp} \end{pmatrix}}_{M} \underbrace{\begin{pmatrix} x_1(k) \\ \vdots \\ x_s(k) \\ u_1(k) \\ \vdots \\ u_p(k) \end{pmatrix}}_{v} \qquad (4.3)$$

Equations 4.2 and 4.3 define a classical distributed arithmetic equation in
higher dimension. Each of the s + q entries of the resulting vector z defines
a separate DA equation. We therefore need to compute s + q different DA
equations, all independent from each other. The computation of the s + q
equations can then be performed in parallel, provided that enough resources
are available on the chip to store all the DALUTs. If not enough memory is
available for storing all the DALUTs as well as the corresponding logic for
the datapath, then the different equations must be implemented sequentially.
In general, compromises must be found between the amount of parallelism
that one needs to have and the speed of the design.
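The following C sketch models one sampling step in this formulation, computing $z = Mv$ directly; in a DA implementation each of the s + q rows of M would be served by its own DALUT, exactly as in the earlier matrix-vector sketch. Dimensions and names are illustrative:

#define SS 2    /* number of states  s */
#define PP 1    /* number of inputs  p */
#define QQ 1    /* number of outputs q */

void controller_step(const double M[SS+QQ][SS+PP],
                     double x[SS], const double u[PP], double y[QQ]) {
    double v[SS+PP], z[SS+QQ];
    for (int i = 0; i < SS; i++) v[i] = x[i];          /* v = [x; u]       */
    for (int i = 0; i < PP; i++) v[SS+i] = u[i];

    for (int r = 0; r < SS+QQ; r++) {                  /* z = M v: one DA  */
        z[r] = 0.0;                                    /* equation per row */
        for (int c = 0; c < SS+PP; c++)
            z[r] += M[r][c] * v[c];
    }
    for (int i = 0; i < SS; i++) x[i] = z[i];          /* x(k+1)           */
    for (int i = 0; i < QQ; i++) y[i] = z[SS+i];       /* y(k)             */
}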

4.3 Use of Reconfiguration


As we explained earlier, the controller modules can be implemented as
hardware modules and stored in a database from which they will be
downloaded at run-time by the supervisor module.
The straightforward approach to realize this is to build a structure consisting
of a fixed part in which the supervisor resides. A reconfigurable slot is then
foreseen as a place holder for the controller module that will be downloaded at
run-time (figure 9.14, left). Each controller module is downloaded at run-time
into the reconfigurable slot, which has predefined interfaces to the supervisor
and to the plant.

Figure 9.14. Adaptive controller architecture. Left: the one-slot implementation; right: the two-slot implementation
The problem with this approach is the vacuum which arises on switching
from one module to the next one. Because only one slot is available, the
reconfiguration of this slot will place the plant in a "floating state" in which it is
not really controlled. In order to avoid this situation, the approach developed
in [64] was the use of two slots: one active slot and a passive one (figure 9.14,
right). The active slot is in charge of controlling the plant, while the
reconfiguration takes place in the passive slot. Whenever the supervisor decides
to replace a controller module, the reconfiguration will first take place on the
passive slot. Thereafter, the control of the plant will be given to the newly
configured module, which becomes the active one, while the former active one
becomes passive.

5. Adaptive Cryptographic Systems


Sensitive data exchange is always coupled with security issues. Cryptography
provides a means to allow two communicating entities to exchange data in
such a way that a third party that intercepts the data will not be able to
understand its meaning. A protocol is therefore required, which allows the
sender to encrypt the data in such a way that only the receiver will be able to
decrypt it. Considerable advances have been made in communication in the
past and this trend is likely to continue. The development of the internet has
also pushed development in several branches where networked processing could
not be thought of a few years ago. Applications like e-commerce, e-government,
virtual private networks, and on-line banking must provide a high degree of
security.
Over the years, a large variety of standards like Triple-DES, Advanced
Encryption Standard (AES), Data Encryption Standard (DES), RSA, OpenPGP,
CipherSaber, IPsec, Blowfish, ANSI X9.59, CAST, RC4 and RC6 have been
developed to provide high security. With this large variety of standards and the
customized implementation possibilities for each standard, cryptography can be
seen as one of the most versatile application domains of computer science.
Depending on criteria like speed, degree of flexibility, and degree of security,
implementations of cryptographic applications were developed either as
software or as intellectual property components.
In [221] the advantages of using reconfigurable hardware in cryptography
are listed. The author focuses on the traditional advantages of flexibility and
performance. The flexibility is important because it offers the possibility to
use the same hardware to switch from one algorithm to the next one at
run-time, according to factors like the degree of security, the computational
speed, and the power consumption. Also, a given algorithm can be tuned
according to some parameters. Moreover, algorithms that have been broken
and whose security is no longer ensured can be replaced by means of
reconfiguration. The system can easily be upgraded to include new standards
developed while the system was already deployed. The corresponding
algorithm can then be compiled and included in the library of bitstreams for
the device configuration. The second advantage provided by reconfigurable
hardware, namely the performance, can be used to efficiently implement the
components, by using the inherent parallelism and building efficient operators
for computing boolean operations on very large amounts of data. This
results in a large throughput and a cost efficiency. Experiments reported a
throughput of 12 Gbit/s for an implementation of the block cipher AES on
an FPGA Virtex 1000, using 12,600 slices and 80 RAMs [90], while an
ASIC implementation, the Amphion CS5240TK [150] clocked at 200 MHz,
could provide twice the throughput of the FPGA solution. The same
algorithm, implemented on a DSP TI TMS320C6201, provided a throughput
of 112.3 Mbit/s [222], while a throughput of 718.4 Mbit/s could be reached
on a counterpart implementation on a Pentium III [10].
The general architecture of an adaptive cryptographic engine proposed by
Prasanna and Dandalis [61] [178] basically consists of a database to hold the
different configurations that can be downloaded at run-time onto the device,
like an FPGA for instance, to perform the computation, and a configuration
controller to perform the reconfiguration, i.e. downloading the corresponding
bitstream from the database into the FPGA. Each bitstream represents a
given algorithm implementing a given standard and tuned with some
parameters according to the current user's needs.

Figure 9.15. Architecture of an adaptive cryptographic system

With the recent developments in FPGAs, it is possible to have a complete
system on programmable chip (processor, peripherals, custom hardware
components, interconnection) implemented on an FPGA. The configuration
controller therefore no longer needs to reside off chip. It can be implemented as
a custom on-chip hardware module or as software running on an embedded
processor. Also, the possibility to reconfigure only part of the chip opens new
possibilities. In the architecture presented in [61] [178], the whole device must
be reset on reconfiguration, thus increasing the power consumption because of
the amount of data that must be downloaded onto the chip. Power consumption
is usually a big problem in mobile environments and the implementation must
consider such issues. Besides this power saving, partial reconfiguration also
provides the possibility of keeping a skeleton structure in the device and
performing only few modifications at run-time on the basic structure to move
from one algorithm to the next one. In [42] a cryptographic application is
implemented as an exchangeable module of a partially reconfigurable platform.
The system, which is based on the AES algorithm, consumes only 954 Virtex
slices and 3 block RAMs. The cryptographic algorithm is used in this case just
as a block to test the partially reconfigurable platform, instead of using the
partial reconfiguration to enhance the flexibility of the cryptographic algorithm.
Figure 9.15 presents a possible architecture of an adaptive cryptographic
system, using the previously mentioned advantages of partially reconfigurable
devices.
The main difference with the architecture presented in [61] is the use of
partial reconfiguration, which allows for an integration of all components on a
single chip. Also, in contrast to the adaptive architecture for a control system
presented in figure 9.14, the loader module resides inside the device. Depending
on the implementation chosen, the loader can reside inside or outside the
device. If a Xilinx Virtex chip is used and the configuration takes place via
the ICAP port, then the loader, which is the ICAP module in this case, is
automatically available inside the device. However, if the configuration happens
through the normal SelectMap port, then we need an external loader
module for collecting configuration data from the database and copying them
to the configuration port.
In figure 9.15, the architecture is logically divided into two main blocks: a
fixed one, which remains continuously on the chip, and a reconfigurable one.
The fixed part consists of the parts which are common to all cryptographic
algorithms in general or common to the algorithms in a given class only. In the
figure, we show only one reconfigurable slot; however, it can be implemented
as a set of configurable blocks, each of which can be changed by means of
reconfiguration to realize a given customized standard.
The last point concerns the development of building blocks that will be
compiled into bitstreams to be downloaded into the device at run-time. A
designer is no longer required to focus on the hardware implementation of the
cryptographic algorithm. A lot of work was done in this direction and the
results are available. We mostly need to focus on the architecture of the
overall system and find out how a viable partitioning can be done, according
to the reconfiguration scheme.
Most of the work has focused on various implementations of a given
approach like the RSA [185] in [155] [88] [154] [232] [50], or the
implementations described in [78] [19] [20] [47] [145] [152] [15], mostly
parameterizable and based on the Elliptic Curve approach [138] [157].
Generators for producing a customized description in a hardware
description language have been developed, for example in [47]. These can be
used to generate the variety of configurations that will be used at run-time to
move from one implementation to the next one.

6. Software Defined Radio


Software defined radio (SDR) was defined in 1991 by Joe Mitola [129] as
follows: "A software radio is a radio whose channel modulation waveforms are
defined in software. That is, waveforms are generated as sampled digital
signals, converted from digital to analog via a wideband DAC and then
possibly upconverted from IF to RF. The receiver, similarly, employs a wideband
Analog to Digital Converter (ADC) that captures all of the channels of the
software radio node. The receiver then extracts, downconverts and demodulates
the channel waveform using software on a general purpose processor."

As is the case with many new technologies, software defined radio (SDR)
was initiated by the military in an attempt to alleviate numerous problems
associated with traditional radio systems [162]. In the project SPEAKeasy [162],
which can be seen as a precursor of the SDR technology, several military entities
(DARPA, AIR FORCE/AFRL, ARMY/CECOM, NAVY/NRaD/SPAWAR,
NSA) joined efforts in the development of "programmable" radios in order to
demonstrate the feasibility of a Multiband, Multimode Radio (MBMMR) and
to develop technologies that facilitate programmability and implementation of
MBMMR. The achievement was a system with two programmable channels,
the Texas Instruments quad-TMS320C40 multi-chip module for digital signal
processing, and a SUN Sparc 10 workstation for interfacing with humans.
The main goal in software defined radio is to move the analog-digital signal
conversion as close as possible to the antenna and to process the digital
signals in software rather than using a set of dedicated hardware components.
The expected benefits of using software defined radio have drawn great
attention and interest from governmental entities, from academia and from
industry.
SDR is the next evolutionary stage of wireless technology. Many believe
that it is just a question of time until the adoption of SDR in the broad
range of communication facilities, and several initiatives like [121] [189]
were born with the goal of accelerating the proliferation of SDR.
The advantages of SDR can be categorized as follows:
Interoperability in order to provide support of multiple standards through multimode, multiband radio capabilities

Flexibility for an efficient shift of technology and resources

Adaptivity to provide a faster migration towards new standards and technologies through programmability and reconfiguration

Sustainability for increasing resource utilization through generic hardware platforms

Reduced ownership costs because of the reduced infrastructure, the lower maintenance, and the easier deployment

The general architecture of an SDR is presented in figure 9.16.
At least one digital-to-analog converter (DAC) and one analog-to-digital
converter (ADC) must be available in an SDR system. While the ADC performs
the conversion of the incoming analog signals to digital signals, the DAC
performs the inverse operation.
The signal processing part collects digital signals from the ADC or from
another source like a base station input, processes them and sends them to the
DAC.

Figure 9.16. Architecture of a software defined radio system

The ideal realization of a software defined radio would push the ADC and
DAC as close as possible to the antenna. The practical realization is
however, as explained in [217], very challenging.

6.1 Use of Reconfiguration


Reconfigurable devices provide the best prerequisites to fulfill the expectations
(interoperability, flexibility, adaptivity, sustainability, and reduced ownership
costs) placed on software defined radio systems. As explained in the
previous section, it is possible to realize a complete system consisting of a
processor, peripherals and reconfigurable custom hardware modules as a system
on programmable chip. In such a system, the maximal flexibility will be ensured
by the processor, while lower degrees of flexibility can be realized by means of
reconfiguration. For many algorithms in signal processing, software alone will
not be able to provide the required performance. A migration of functions into
hardware is then required. The simple approach as realized in [9] [122]
[48] is to provide a single coprocessor for computation-intensive tasks, like the
CORDIC [209] implementation, directly attached to the processor. This solution
provides the possibility to configure the system as a whole, either to implement
new functions at run-time, or even to upgrade the system structure according to
the availability of new standards. The use of partial reconfiguration [69] is
welcome here, to allow the upgrading to take place without interrupting the
system, while saving a considerable amount of power.

7. High Performance Computing


One of the first and most investigated fields of application of FPGAs in the
90s was high-performance computing. The goal of many platforms built at that
time was to provide acceleration of applications like searching in genomic
databases, signal and image processing, and matrix factorization. Several
architectures were developed, mostly consisting of a workstation to which one or
more FPGAs were attached. By reconfiguration of the FPGAs, custom instructions
could be added to the computing system, sometimes even at run-time. While
remarkable speed-ups were reported for some applications, the resulting custom
computing machines (CCMs) could not be widely adopted as viable computing
platforms. This was mainly due to architectural limitations and lack of

high-level programming tools. The capacity of FPGAs was too small to
allow a real migration of software components into hardware. Furthermore, the
performance bottleneck posed by the slow communication between host
processor and FPGAs often nullified the speed-ups. Finally, there were no useful
software-like description languages that could have encouraged software
designers to write programs for such machines and implement efficient
libraries for reusable data stream management. Programming CCMs meant
tedious low-level hardware design, which was not attractive at all.
The last couple of years brought a pivotal change. FPGA logic densities
increased dramatically, now allowing for massive bit-level parallelism.
Available high-level languages like Handel-C [125] and ImpulseC [175] as
well as progress in compilation technology make the logic resources
available to high-level application programmers.
Boosted by the recent engagement of companies such as Cray [58], SRC
Computers [59], ClearSpeed [51], Nallatech [165], and SGI [191],
CCMs are experiencing a renaissance. Despite the use of new high-speed
interfaces and efficient datastream management protocols between the
components in the new systems, the new architectures are built on the same
models as the old ones from the 90s. They usually consist of a set of FPGAs on
the same board with one or more processors. The processors control the
whole system and configure the FPGAs at run-time with dedicated
functions to be accelerated in hardware.
While considerable progress has been made, the profitability of the machines
provided by the firms previously mentioned still has to be proven. The
profitability cannot be evaluated anymore only on the basis of speed-ups
observed in some classes of applications. It is not sure that a comparison made
in terms of performance/$, performance/J, and performance/sqm will favor
further deployment of those systems. Craven and Athanas recently provided in
[57] a study on the viability of FPGAs in supercomputers. Their study is
based on the cost of realizing floating-point arithmetic in FPGAs. The cost
includes not only the performance gain, but also the price to pay. The authors
came to the conclusion that the price to pay for an FPGA-based supercomputer
is too high for the marginal speed-ups.
High-precision floating-point arithmetic is usually the bulk of computation
in high-performance computing. Despite the progress made in developing
floating-point units in hardware, floating-point computation on FPGAs
is still a difficult union. The advantage provided by FPGAs in customized
implementations of floating-point, as described by Shirazi [192], is unlikely to
help here, because the datapath must provide customizations that consume the
largest amount of resources. Coarse-grained elements like the embedded
multipliers in FPGAs or coarse-grained reconfigurable devices provide the
best prerequisites for the use of reconfigurable devices in supercomputers.

8. Conclusion
We have presented in this chapter a couple of applications that can take
advantage of the flexibility as well as the performance of reconfigurable
devices. For each application targeted, we placed the focus mostly on the
architectural organization, because the goal was not necessarily to present the
detailed implementation of an application. However, we have presented a
comprehensive description of the pattern matching application, while keeping
the other presentations short. This is due to the low attention paid to pattern
matching in hardware in available textbooks. With the large amount of
literature on other presented topics like image processing and control, we
chose not to focus on the details of those applications.
A lot of work has been done in the last two decades and a large number of
applications have been implemented in FPGAs, the main representative of
reconfigurable devices. We could not present nor cite all available
implementations here. We have rather focused on a few where we could show
the usefulness of reconfigurability. Finally, we would like to emphasize that
despite two decades of research, the existence of "killer applications" could
not be shown for reconfigurable devices, thus limiting the acceptance of
reconfiguration in the industry. The existence of a killer application will
certainly boost the development of reconfigurable devices, leading to new
classes of devices, tools and programmers. Despite this missing step, research
keeps going and the support of the industry is needed more than ever before.
AUTOMATIC TARGET RECOGNITION SYSTEMS
ON RECONFIGURABLE DEVICES
Young H. Cho
Open Acceleration Systems Research

An Automatic Target Recognition (ATR) system analyzes a digital image or video


sequence to locate and identify all objects of a certain class. There are several
ways to implement ATR systems, and the right one is dependent, in large part, on
the operating environment and the signal source. In this chapter we focus on the
implementations of reconfigurable ATR designs based on the algorithms from
Sandia National Laboratories (SNL) for the U.S. Department of Defense Joint
STARS airborne radar imaging platform. STARS is similar to an aircraft AWACS
system, but detects ground targets.
ATR in Synthetic Aperture Radar (SAR) imagery requires tremendous process-
ing throughput. In this application, data come from high-bandwidth sensors, and the
processing is time critical. On the other hand, there is limited space and power for
processing the data in the sensor platforms. One way to meet the high compu-
tational requirement is to build custom circuits as an ASIC. However, very high
nonrecurring engineering (NRE) costs for low-volume ASICs, and often evolving
algorithms, limit the feasibility of using custom hardware. Therefore,
reconfigurable devices can play a prominent role in meeting the challenges with
greater flexibility and lower costs.
This chapter is organized as follows: Section 28.1 describes a highly
parallelizable Automatic Target Recognition (ATR) algorithm. The system based on it
is implemented using a mix of software and hardware processing, where the most
computationally demanding tasks are accelerated using field-programmable gate
arrays (FPGAs). We present two high-performance implementations that exercise
the FPGA’s benefits. Section 28.2 describes the system that automatically builds
algorithm-specific and resource-efficient “hardwired” accelerators. It relies on the
dynamic reconfiguration feature of FPGAs to obtain high performance using
limited logic resources.
The system in Section 28.3 is based on an architecture that does not require
frequent reconfiguration. The architecture is modular, easily scalable, and highly
tuned for the ATR application. These application-specific processors are
automatically generated based on application and environment parameters. In
Section 28.4 we compare the implementations to discuss the benefits and the
trade-offs of designing ATR systems using FPGAs. In Section 28.5, we draw our
conclusions on FPGA-based ATR system design.

28.1 AUTOMATIC TARGET RECOGNITION ALGORITHMS

Sandia real-time SAR ATR systems use a hierarchy of algorithms to reduce the
processing demands for SAR images in order to yield a high probability of
detection (PD) and a low false alarm rate (FAR).

28.1.1 Focus of Attention


As shown in Figure 28.1, the first step in the SNL algorithm is a Focus of Attention
(FOA) algorithm that runs over a downsampled version of the entire image to find
regions of interest that are of approximately the right size and brightness. These
regions are then extracted and processed by an indexing stage to further reduce
the datastream, which includes target hypotheses, orientation estimations, and
target center locations. The surviving hypotheses have the full resolution data sent
to an identification executive that schedules multiple identification algorithms and
then fuses their results.
The FOA stage identifies interesting image areas called “chips.” Then it com-
poses a list of targets suspected to be in a chip. Having access to range and
altitude information, the FOA algorithm also determines the elevation for the chip,
without having to identify the target first. It then tasks the next stage with evaluating
the likelihood that the suspected targets are actually in the given image chip and
exactly where.

28.1.2 Second-level Detection


The next stage of the algorithm, called Second Level Detection (SLD), takes the
extracted imagery (an image chip), matches it against a list of provided target
hypotheses, and returns the hit information for each image chip consisting of the
best two orientation matches and other relevant information.

FIGURE 28.1 ■ The Sandia Automatic Target Recognition algorithm.
The system has a database of target models. For each target, and for each of its
three different elevations, 72 templates are defined corresponding to its all-around
views. The orientations of adjacent views are separated by 5 degrees.
SLD is a binary silhouette matcher that has a bright mask and a surround mask
that are mutually exclusive. Each template is composed of several parameters
along with a “bright mask” and a “surround mask,” where the former defines the
image pixels that should be bright for a match, and the latter defines the ones that
should not. The bright and surround masks are 32×32 bitmaps, each with about
100 asserted bits. “Bright” is defined relative to a dynamic threshold.

On receiving tasks from the FOA, the SLD unit compares all of the stored
templates for this target and elevation and the applicable orientations with the
image chip, and computes the level of matching (the “hit quality”). The two hits
with the highest quality are reported to the SLD driver as the most likely
candidates to include targets. For each hit, the template index number, the exact
position of the hit in the search area, and the hit quality are provided. After
receiving this information, the SLD driver reports it to the ATR system.

The purpose of the first step in the SLD algorithm, called the shape sum, is to
distinguish the target from its surrounding background. This consists of
adaptively estimating the illumination for each position in the search area,
assuming that the target is at that orientation and location. If the energy is too
little or too much, no further processing for that position for that template match
is required. Hence, for each mask position in the search area, a specific threshold
value is computed as in equation 28.1.

$$SM_{x,y} = \sum_{u=0}^{31} \sum_{v=0}^{31} B_{u,v}\, M_{x+u,y+v} \qquad (28.1)$$

$$TH_{x,y} = \frac{SM_{x,y}}{BC} - \text{Bias} \qquad (28.2)$$

The next step in the algorithm distinguishes the target from the background by
thresholding each image pixel with respect to the threshold of the current mask
position, as computed before. The same pixel may be above the threshold for
some mask positions but below it for others. This threshold calculation determines
the actual bright and surround pixels for each position. As shown in equation 28.2,
it consists of dividing the shape sum by the number of pixels in the bright mask
and subtracting a template-specific Bias constant.

As shown in equation 28.3, the pixel values under the bright mask that are
greater than or equal to the threshold are counted; if this count exceeds the
minimal bright sum, the processing continues. On the other hand, the pixel
values under the surround mask that are less than the threshold are counted to
calculate the surround sum as shown in equation 28.4. If this count exceeds the
minimal surround sum, it is declared a hit.

$$BS_{x,y} = \sum_{u=0}^{31} \sum_{v=0}^{31} B_{u,v}\, [\,M_{x+u,y+v} \ge TH_{x,y}\,] \qquad (28.3)$$

$$SS_{x,y} = \sum_{u=0}^{31} \sum_{v=0}^{31} S_{u,v}\, [\,M_{x+u,y+v} < TH_{x,y}\,] \qquad (28.4)$$

Once the position of the hit is determined, we can calculate its quality by taking
the average of the fractions of bright and surround pixels that were correct, as
shown in equation 28.5. This quality value is sent back to the driver with the
position to determine the two best targets.

$$Q_{x,y} = \frac{1}{2}\left(\frac{BS_{x,y}}{BC} + \frac{SS_{x,y}}{SC}\right) \qquad (28.5)$$
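The remaining three equations can be modeled the same way, reusing the definitions of the previous sketch; the minimal bright/surround sum tests mentioned above are omitted here for brevity:

/* Equations 28.3-28.5: count bright pixels at or above the threshold and
 * surround pixels below it, then average the two hit fractions into a
 * quality score. B and S are the mutually exclusive bright/surround
 * masks; th comes from threshold() above. */
double hit_quality(const uint8_t B[TDIM][TDIM], const uint8_t S[TDIM][TDIM],
                   const uint8_t M[CDIM][CDIM], int x, int y, int th,
                   int bright_count, int surround_count) {
    int bs = 0, ss = 0;
    for (int u = 0; u < TDIM; u++)
        for (int v = 0; v < TDIM; v++) {
            if (B[u][v] && M[x + u][y + v] >= th) bs++;   /* eq. 28.3 */
            if (S[u][v] && M[x + u][y + v] <  th) ss++;   /* eq. 28.4 */
        }
    return 0.5 * ((double)bs / bright_count +             /* eq. 28.5 */
                  (double)ss / surround_count);
}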

28.2 DYNAMICALLY RECONFIGURABLE DESIGNS

FPGAs can be reconfigured to perform multiple functions with the same logic
resources by providing a number of corresponding configuration bit files. This ability
allows us to develop dynamically reconfigurable designs. In this section, we present
an ATR system implementation of UCLA’s Mojave project that uses an FPGA’s
dynamic reconfigurability.

28.2.1 Algorithm Modifications


As described previously, the current Sandia system uses 64×64 pixel chips and
32×32 pixel templates. However, the Mojave system uses chip sizes of 128×128
pixels and template sizes of 8×8 pixels. It uses different chip and template sizes
in order to map into existing FPGA devices that are relatively small. A single
template moves through a single chip to yield 14,641 (121×121) image
correlation results. Assuming that each output can be represented with 6 bits,
87,846 bits are produced by the system.

There is also a divide step in the Sandia algorithm that follows the shape sum
operation and guides the selection of the threshold bin for the chip. This system
does not implement the divide, mainly because it is expensive relative to
available FPGA resources for the design platform.

28.2.2 Image Correlation Circuit


FPGAs offer an extremely attractive solution to the correlation problem. First of
all, the operations being performed occur directly at the bit level and are
dominated by shifts and adds, making them easy to map into the hardware provided
by the FPGA. This contrasts, for example, with multiply-intensive algorithms
that would make relatively poor utilization of FPGA resources. More important,
the sparse nature of the templates can be utilized to achieve a far more efficient
implementation in the FPGA than could be realized in a general-purpose
correlation device. This can be illustrated using the example of the simple template
shown in Figure 28.2.
In the example template shown in the figure, only 5 of the 20 pixels are asserted.
At any given relative offset between the template and the chip, the correlation
output is the sum of the 5 binary pixels in the chip that match the asserted bits in
the template. The template can therefore be implemented in the FPGA as a simple
multiple-port adder. The chip pixel values can be stored in flip-flops and are
shifted to the right by one flip-flop with each clock cycle. Though correlation of a
large image with a small mask is often understood conceptually in terms of the
mask being scanned across the image, in this case the opposite is occurring—the
template is hardwired into the FPGA while the image pixels are clocked past it.

Another important opportunity for increased efficiency lies in the potential to
combine multiple templates on a single FPGA. The simplest way to do this is to
spatially partition the FPGA into several smaller blocks, each of which handles
the logic for a single template. Alternatively, we can try to identify templates that
have some topological commonality and can therefore share parts of their adder
trees. This is illustrated in Figure 28.3, which shows two templates sharing several
pixels that can be mapped using a set of adder trees to leverage this overlap.
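A toy illustration of this sharing in C, with pixel names loosely following Figure 28.3; the shared term is computed once and feeds both adder trees (the pixel selection is an illustrative assumption, not the exact figure):

/* Pixel variables stand for the image bits currently under the
 * hardwired templates. */
typedef struct { int d00, d10, d20, d01, d21, d31; } pixels_t;

void correlate_pair(pixels_t p, int *resultA, int *resultB) {
    int shared = p.d10 + p.d20;          /* term used by both templates */
    *resultA = p.d00 + p.d01 + shared;   /* template A's adder tree     */
    *resultB = p.d21 + p.d31 + shared;   /* template B's adder tree     */
}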

A potential advantage FPGAs have over ASICs is that they can be dynamically
optimized at the gate level to exploit template characteristics. For our
application, a programmable ASIC design would need to provide large general-purpose
adder trees to handle the worst-case condition of summing all possible template
bits, as shown in Figure 28.4. In contrast, an FPGA exploits the sparse nature of
the templates and constructs only the small adder trees required. Additionally,
FPGAs can optimize the design based on other application-specific
characteristics.

FIGURE 28.2 ■ An example template and a corresponding register chain with an adder tree.

FIGURE 28.3 ■ Common hardware shared between two templates.

FIGURE 28.4 ■ The ASIC version of the equivalent function.

28.2.3 Performance Analysis


Using a template-specific adder tree achieves a significant reduction in routing
complexity over a general correlation device, which must include logic to
support arbitrary templates. The extent of this reduction is inversely proportional
to the fraction of asserted pixels in the template. While this complexity
reduction is important, alone it is not sufficient to lead to efficient implementations
on FPGAs. The number of D-flip-flops required for storing the data points can
cause inefficiencies in the design. Implementing these on the FPGA using the
usual flip-flop–based shift registers is inefficient.
This problem can be resolved by collapsing the long strings of image pixels—
those not being actively correlated against a template—into shift registers, which
can be implemented very efficiently on some lookup table (LUT)–based FPGAs.
For example, LUTs in the Xilinx XC4000 library can be used as shift registers
that delay data by some predetermined number of clock cycles. Each 16×1-bit
LUT can implement an element that is effectively a 16-bit shift register in which
the internal bits cannot be accessed. A flip-flop is also needed at the output of
each RAM to act as a buffer and synchronizer. A single control circuit is used to
control the stepping of the address lines and the timely assertion of the
write-enable and output-enable signals for all RAM-based shift register elements. This
is a small price to pay for the savings in configurable logic block (CLB) usage
relative to a brute-force implementation using flip-flops.
In contrast, the 256-pixel template images, like those shown in Figure 28.5, can
be stored easily using flip-flop–based registers. This is because sufficient flip-flops
are available to do this, and the adder tree structures do not consume them. Also,
using standard flip-flop–based shift registers for image pixels in the template
simplifies the mapping process by allowing every pixel to be accessed. New
templates can be implemented by simply connecting the template pixels of concern
to the inputs of the adder tree structures. This leads to significant simplification of
automated template-mapping tools.
The resources used by the two components of target correlation—namely,
storage of active pixels on the FPGA and implementation of the adder tree
corresponding to the templates—are independent of each other. The resources used
by the pixel storage are determined by the template size and are independent of
the number of templates being implemented. Adding templates involves adding
new adder tree structures and hence increases the number of function generators
being used. The total number of templates on an FPGA is bounded by the
number of usable function generators.
The experimental results suggest that in practice we can expect to fit 6 to 10
surround templates having a higher number of overlapping pixels onto a 13,000-
gate FPGA. However, intelligent grouping of compatible templates is important.
Because the bright templates are less populated than the surround templates, we
estimate that 15 to 20 of them can be mapped onto the same FPGA.

FIGURE 28.5 ■ Example of eight rotation templates of a SAR 16×16 bitmap image.

28.2.4 Template Partitioning


To minimize the number of FPGA reconfigurations necessary to correlate a given
target image against the entire set of templates, it is necessary to maximize the
number of templates in every configuration of the FPGA. To accomplish this
optimization goal, we want to partition the set of templates into groups that can
share adder trees so that fewer resources are used per template. The set of
templates may number in the thousands, and the goal may be to place 10 to 20 of
them per configuration; thus, exhaustive enumeration of all of the possible
groupings is not an option. Instead, we use a heuristic method that furnishes a
good, although perhaps suboptimal, solution.
Correlation between two templates can establish the number of pixels in com-
mon, and it is a good starting point for comparing and selecting templates. How-
ever, some extra analysis, beyond iterative correlations on the template set, is
necessary. For example, a template with many pixels correlates well with several
smaller templates, perhaps even completely subsuming them, but the smaller
templates may not correlate with each other and involve no redundant compu-
tations. There are two possible solutions to this. The first is to ensure that any
template added to an existing group is approximately the same size as the tem-
plates already in it. The second is to compute the number of additions required
each time a new template is brought in—effectively recomputing the adder tree
each time.
Recomputing the entire adder tree is computationally expensive and not a good
method of partitioning a set of templates into subsets. However, one of the
heuristics used in deciding whether or not to include a template in a newly formed
partition is to determine the number of new terms that its inclusion would create in
the partition’s adder tree. The assumption is that more terms would result in a
significant number of new additions, resulting in a wider and deeper adder tree.
Thus, by keeping to a minimum the number of new terms created, newly added
templates do not increase the number of additions by a significant amount.

Using C++, we have created a design tool to implement the partitioning
process that uses an iterative approach to partitioning templates. Templates that
compare well to a chosen “base” template (usually selected by largest area) are
removed from the main template set and placed in a separate partition. This
process is repeated until all templates are partitioned. After the partitions have
been selected, the tool computes the adder tree for each partition.
Figure 28.6 shows the creation of an adder tree from the templates in a par-
tition. Within each partition, the templates are searched for shared subsets of
pixels. Called terms, these subsets can be automatically added together, leading to
a template description that uses terms instead of pixels.
The most common addition of two terms is chosen to be grouped together, to
form a new term that can be used by the templates. In this way, each template is
rebuilt by combining terms in such a way that the most redundant additions are
shared between templates; the final result is terms that compute entire templates.
For the sample templates shown in Figure 28.6, 39 additions would be required
to compute the correlations for all 5 in a naive approach. However, after
combining the templates through the process just described, only 17 additions
are required.
FIGURE 28.6 ■ Example of template grouping, rewritten as sums of terms.
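A sketch of the greedy term-combining idea in C, under the assumption that each template is represented as a bit set of term indices; a real tool would also emit the resulting adder tree, which is omitted here:

#include <stdint.h>

#define NTPL 5    /* number of templates in the partition */

/* Count templates containing both term a and term b. */
static int pair_count(const uint32_t tpl[NTPL], int a, int b) {
    int n = 0;
    for (int t = 0; t < NTPL; t++)
        if ((tpl[t] >> a & 1) && (tpl[t] >> b & 1)) n++;
    return n;
}

/* One combining pass: find the pair of terms shared by the most
 * templates and replace it with a new term `fresh`, so the shared
 * addition is computed only once. Returns 0 when no pair is shared
 * by at least two templates. */
int combine_best_pair(uint32_t tpl[NTPL], int nterms, int fresh) {
    int best = 0, ba = -1, bb = -1;
    for (int a = 0; a < nterms; a++)
        for (int b = a + 1; b < nterms; b++) {
            int c = pair_count(tpl, a, b);
            if (c > best) { best = c; ba = a; bb = b; }
        }
    if (best < 2) return 0;
    for (int t = 0; t < NTPL; t++)
        if ((tpl[t] >> ba & 1) && (tpl[t] >> bb & 1)) {
            tpl[t] &= ~((1u << ba) | (1u << bb));  /* drop the pair    */
            tpl[t] |= 1u << fresh;                 /* use the new term */
        }
    return 1;
}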

28.2.5 Implementation Method


For a configurable computing system, the problem of dividing hardware and
software is particularly interesting because it is both a hardware and a software
issue. Consider the two methods for performing addition shown in Figure 28.7.
Method A, a straightforward parallel implementation requiring several FPGAs,
has several drawbacks. First, the outputs from several FPGAs converge at the
addition operation, which may create a severe I/O bottleneck. Second, the
system is not scalable—if it requires more precision, and therefore more bit planes,
more FPGAs must be added.
Method B in Figure 28.7 illustrates our approach. Each bit plane is correlated
individually and then added to the previous results in temporary storage. It is
completely scalable to any image or template precision, and it can implement all
correlation, normalization, and peak detection routines required for ATR. One
drawback of method B is the cost and power required for the resulting wide
temporary SRAM. Another possible drawback is the extra execution time required
to run ATR correlations in serial. The ratio of performance to number of FPGAs is
roughly equivalent for the two methods, and the performance gap can be closed
simply by using more of the smaller method B boards.
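In software, method B's bit-serial arithmetic looks like the sketch below (ours, with illustrative sizes, not the system's actual code): each 1-bit plane is correlated against the binary template, and the partial sums are accumulated with weight 2^b in a temporary array, the role played in hardware by the wide SRAM.

#include <cstdint>
#include <vector>

// Correlate an 8-bit image with a 0/1 template, one bit plane at a time.
std::vector<uint32_t> correlateBitSerial(const std::vector<uint8_t>& image,
                                         int w, int h,
                                         const std::vector<uint8_t>& tmpl,
                                         int tw, int th) {
    const int ow = w - tw + 1, oh = h - th + 1;
    std::vector<uint32_t> acc(ow * oh, 0);        // temporary "SRAM" of partial sums
    for (int b = 0; b < 8; ++b)                   // one pass per bit plane
        for (int y = 0; y < oh; ++y)
            for (int x = 0; x < ow; ++x) {
                uint32_t corr = 0;                // 1-bit correlation at (x, y)
                for (int v = 0; v < th; ++v)
                    for (int u = 0; u < tw; ++u)
                        corr += tmpl[v * tw + u] &
                                ((image[(y + v) * w + (x + u)] >> b) & 1u);
                acc[y * ow + x] += corr << b;     // add the 2^b-weighted plane
            }
    return acc;
}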
The approach of a reconfigurable FPGA connected to an intermediate memory
allows us a fairly complicated flow of control. For example, the sum calculation in
ATR tends to be more difficult than the image–template correlation. Thus, we may
want a program that performs two sum operations and forwards the results to a
single correlation.
Reconfigurations for 10K-gate FPGAs are typically around 20 kB in length.
Reconfiguring every 20 milliseconds gives a reconfiguration bandwidth of
approximately 1 MB per FPGA per second. Coupled with the complexity of the
flow control, this reconfiguration bandwidth can be handled by placing a small
microcontroller and configuration RAM next to every FPGA. The microcontroller
permits a complicated flow of control and, since it addresses the configuration
RAM, frees up valuable I/O on the FPGA. The microcontroller is also important
for debugging, which is a major issue in configurable systems because the many
different hardware configurations can make it difficult to isolate problems.

FIGURE 28.7 ■ Each of eight FPGAs correlating each bit plane of the template (a). A single FPGA correlating bit planes and adding the partial sums serially (b).
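As a quick check of the bandwidth figure quoted above (our arithmetic, not the book's): one configuration of roughly 20 kB loaded every 20 ms amounts to

20 kB / 20 ms = 1 MB per FPGA per second.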
The principal components include a “dynamic” FPGA, which is reconfigured on
the fly and performs most of the computing functions, and a “static” FPGA, which is
configured only once and performs control and some computational functions. The
EPROM holds configuration bitstreams, and the SRAM holds the input image data
(e.g., the chip). Because the correlation operation involves the application of a small
target template to a large chip, a first in, first out (FIFO) is needed to hold the pixels
being wrapped around to the next row of the template mask. The templates used in
this implementation are of size 8 × 8, whereas the correlation image is 128 × 128.
Each configuration of the dynamic FPGA implements a total of four template pairs
(four bright templates and four surround templates).

The large number of sum operations in the algorithm can be performed in parallel.
This requires a total of D clock cycles, where D is the depth of each pixel's
representation. Once the sum results are obtained, the correlation outputs are
produced at the rate of one per clock cycle. Parallelism cannot be exploited as
directly in this step because different pixels are asserted for different
templates. However, in the limit of very large FPGAs, the number of clock cycles
to compute the correlation is upper-bounded by the number of possible thresholds,
as opposed to the number of templates.

28.3 RECONFIGURABLE STATIC DESIGN

Although the idea of reusing reconfigurable hardware to dynamically perform
different functions is unique to FPGAs, the main weaknesses of dynamic FPGA
reconfiguration are the lengthy time and additional resources required for FPGA
reconfiguration and design compilation. Although reconfiguration time has
improved dramatically over the years, any time spent on reconfiguration is time
that could be used to process more data.
Unlike the dynamic reconfigurable architecture described in the previous section,
here we describe another efficient FPGA design, one that does not require
complete design reconfiguration. Like the previous system, however, it uses a
number of parameters to produce a highly pipelined custom design that maximizes
utilization of the design space and exploits the parallelism in the algorithm.

28.3.1 Design-specific Parameters


To verify our understanding of the algorithm, we first implemented a software
simulator and ran it on a sample dataset. Our simulations reproduced the expected
results. Over time this algorithm simulator became a full hardware simulator and
verifier. It also allowed us to investigate various design options before implementing
them in hardware.
The dataset includes 2 targets, each with 72 templates for 5-degree orientation
intervals. In total, then, we have 144 bright masks and 144 surround masks, each a
32 × 32 bitmap. The dataset also includes 16 image chips, each with 64 × 64 pixels
at 1 byte per pixel. Given a template and an image, there are 441 matrix
correlations that must take place for each mask. This corresponds to 21 search
rows, each 21 positions wide. The total number of search row correlations for the
sample data and templates is thus 48,384. The behavior of the simulator on the
sample dataset revealed a number of algorithm-specific characteristics. Because
the design architecture was developed for reconfigurable devices, these
characteristics are incorporated to tune the hardware engine for the best cost and
performance.
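These counts are mutually consistent; the decomposition below is our reading, since the text gives only the totals. Each mask visits 21 × 21 = 441 positions per chip, and counting whole search rows rather than individual positions gives

16 chips × 144 templates × 21 search rows = 48,384 search row correlations.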

28.3.2 Order of Correlation Tasks


Correlation tasks for threshold calculation (equation 28.2), bright sum (equation
28.3), and surround sum (equation 28.4) are very closely related. Valid results for
all three must exist in order to calculate the quality of the hit, so invalid results from
any one of them make other calculations unnecessary.
For the data samples, about 60 percent of the surround sums and 40 percent of
the threshold results were invalid, while all of the bright sum results were valid. The
low rejection rate by bright sum is the result of the threshold being computed using
only the bright mask, regardless of the surround mask. The threshold is computed
by the same pixels used for computing bright sum, so we find that, for a typical
dataset, checking for invalid surround sums before the other calculations drastically
reduces the total number of calculations needed.
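In code, the observation amounts to ordering the three validity tests by rejection power. The sketch below is ours; the Sums struct and the validity predicates are hypothetical stand-ins for the real computations, with placeholder bodies.

// The three tests, ordered by how many candidates they reject for the
// sample data: surround sum ~60 percent, threshold ~40 percent, bright
// sum essentially none. Predicate bodies are illustrative placeholders.
struct Sums { int surround, threshold, bright; };

static bool surroundValid(const Sums& s)  { return s.surround >= 1;  }
static bool thresholdValid(const Sums& s) { return s.threshold >= 1; }
static bool brightValid(const Sums& s)    { return s.bright >= 1;    }

// Returns true only for candidates worth the full hit-quality calculation.
bool worthScoring(const Sums& s) {
    if (!surroundValid(s))  return false;  // strongest filter runs first
    if (!thresholdValid(s)) return false;  // then the next strongest
    return brightValid(s);                 // weakest test last
}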

Zero mask rows


Each mask has 32 rows. However, many have all-zero rows that can be skipped.
By storing with each template a pointer to its first nonzero row we can skip directly
to that row “for free.” Embedded all-zero rows are also skipped.
The simulation tools showed that, for our template set, this optimization
significantly reduces the total computational requirements. For the sample
template set, there is a total of 4608 bitmap rows to use in the correlation tasks. Out
of 4608 bright rows, only 2206 are nonzero, and out of 4608 surround rows, 2815
are nonzero. Since the bright mask is used for both threshold and bright sum
calculations, and the surround mask is used once, skipping the zero rows reduces
the number of row operations from 13,824 to 7227, a savings of about 48 percent.
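The arithmetic behind that figure: the bright rows are processed twice (threshold and bright sum) and the surround rows once, so the work drops to

2 × 2206 + 2815 = 7227 row operations

out of the original 3 × 4608 = 13,824, and 1 − 7227/13,824 ≈ 0.48.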
It is also possible to reduce the computation by skipping zero columns. However,
as will be described in the following section, the FPGA implementation works on
an entire search row concurrently. Hence, skipping rows reduces time but skipping
columns reduces the number of active elements that work in parallel, yielding no
savings.

28.3.3 Reconfigurable Image Correlator


Although it is possible to reconfigure FPGAs dynamically, the time spent on context
switching and reconfiguration could be used instead to process data on a register-
based static design. For this reason, minimizing reconfiguration time during
computation is essential in effective FPGA use. Nevertheless, when we use FPGAs
as compute engines, reconfiguration allows the hardware to take on a large range
of task parameters.
The SLD tasks represented in equations 28.1, 28.3, and 28.4 are image correlation
calculations on sliding template masks with radar images. To explain our design
strategies, we examine each equation by applying the algorithm to a small dataset
consisting of a 6 × 6 pixel image, a 3 × 3 mask bitmap, and a 4 × 4 result matrix.
For this dataset, the shape sum calculation for a mask requires multiplying all 9
mask bits with the corresponding image pixels and summing them to find 1 of 16
results. To build an efficient circuit for the sum equations 28.3 and 28.4, we write
out the subset of both equations as shown in Table 28.1. By expanding the
summation equations, we expose opportunities for hardware to optimize the
calculations. First, the same Bu,v is used to calculate the nth term of all of
the shape sum results. Thus, when the summation calculations are done in
parallel, the Bu,v coefficient can be broadcast to all of the units that
calculate each result. Second, the image data in the nth term of SMx,y is in the
(n + 1)th term of SMx,y−1, except when v returns to 0, in which case the image
pixel is located in the subsequent row. This is useful in implementing the
pipeline datapath for the image pixels through the parallel summation units.

TABLE 28.1 ■ Expanded sum equations 28.3 and 28.4

Term   1    2    3    4    5    6    7    8    9
u      0    0    0    1    1    1    2    2    2
v      0    1    2    0    1    2    0    1    2

SM00 = B00M00 + B01M01 + B02M02 + B10M10 + B11M11 + B12M12 + B20M20 + B21M21 + B22M22
SM01 = B00M01 + B01M02 + B02M03 + B10M11 + B11M12 + B12M13 + B20M21 + B21M22 + B22M23
SM02 = B00M02 + B01M03 + B02M04 + B10M12 + B11M13 + B12M14 + B20M22 + B21M23 + B22M24
SM03 = B00M03 + B01M04 + B02M05 + B10M13 + B11M14 + B12M15 + B20M23 + B21M24 + B22M25
FIGURE 28.8 ■ A systolic image array pipeline. A word-wide (4-byte) image pipeline (M10..M17) feeds a 1-byte pixel pipeline (M00..M05) through the summation units U0..U3, while the 1-bit mask values B00, B01, B02, B10, B11, ... are broadcast to all units.

Based on the characteristics of the expanded equations, we can build a systolic
computation unit as in Figure 28.8. To save time while changing the rows of pixels,
the pixel pipeline can either operate as a pipeline or be directly loaded from another
set of registers. At every clock cycle, each Uy unit performs one operation, v is
incremented modulo 3, and the pixel pipeline shifts by one stage (U1 to U0, U2 to
U1, . . . ). When v returns to 0, u is incremented modulo 3, and the pixel pipeline is
loaded with the entire (u + x)th row of the image. When u returns to 0, the results
are offloaded from the Uy stage, their accumulators are cleared, and x is
incremented modulo 4. When x returns to 0, this computing task is completed.
The initial loading of the image pixel pipeline is from the image word pipeline,
which is word wide and so four times faster than the image pixel pipeline. This
speed advantage guarantees that the pipeline will be ready with the next image row
data when u returns to 0.
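A behavioral C++ model of this schedule (our sketch of the Figure 28.8 dataflow for the small 6 × 6 image, 3 × 3 mask, 4 × 4 result dataset; the demo data is arbitrary) makes the counter discipline concrete:

#include <cstdio>

int main() {
    int M[6][6], B[3][3];  // image pixels and mask bits
    for (int i = 0; i < 6; ++i)
        for (int j = 0; j < 6; ++j) M[i][j] = i * 6 + j;    // arbitrary demo image
    for (int u = 0; u < 3; ++u)
        for (int v = 0; v < 3; ++v) B[u][v] = (u + v) & 1;  // arbitrary demo mask

    long acc[4] = {};                  // accumulators of the units U0..U3
    for (int x = 0; x < 4; ++x) {      // x increments modulo 4 (result row)
        for (int u = 0; u < 3; ++u) {  // u increments modulo 3 when v wraps;
            // at that point the pipeline is loaded with image row (u + x)
            for (int v = 0; v < 3; ++v)        // one clock cycle per step
                for (int y = 0; y < 4; ++y)    // all units work in parallel
                    acc[y] += (long)B[u][v] * M[u + x][v + y];
        }
        for (int y = 0; y < 4; ++y) {  // u wrapped to 0: offload and clear
            std::printf("SM%d%d = %ld\n", x, y, acc[y]);
            acc[y] = 0;
        }
    }
    return 0;
}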

28.3.4 Application-specific Computation Unit


Developing a different FPGA mapping for each of equations 28.1, 28.3, and 28.4 as
a parallel processing unit is one way to implement the design. At the end of each
stage, the FPGA device is reconfigured with the optimal structure for the next
task. As appealing as this may sound, current FPGA devices have typical
reconfiguration times of tens of milliseconds, during which the reconfiguring
logic cannot be used for computation.
As presented in Section 28.3, each set of template configurations also has to be
designed and compiled before any computation can take place. This can be a time-
consuming procedure that does not allow dynamic template sets to be immediately
used in the system.
Fortunately, we can rely on the fact that FPGAs can be tuned to target-specific
applications. From the equations, we derived one compact structure, shown in
Figure 28.9, that can efficiently perform all ATR tasks. Since the target ATR
system can be seen as "embarrassingly parallel," the performance of the FPGA
design is linearly scalable to the number of application-specific units.

FIGURE 28.9 ■ Computation logic for equations 28.1, 28.3, and 28.4: an 8-bit image pixel Mx+u,y+v and the mask bits Su,v and Bu,v feed accumulators that produce the surround sum (SSx,y), the shape sum (SMx,y), and the bright sum (BSx,y).

28.4 ATR IMPLEMENTATIONS

In this section we present the implementation results of two reconfigurable
Sandia ATR systems, researched and developed on different reconfigurable platforms.
Both designs leverage the unique characteristics of reconfigurable devices to
accelerate ATR algorithms while making efficient use of available resources.
Therefore, they both outperformed existing software as well as custom ASIC
solutions. By analyzing the results of the reconfigurable solutions, we examine
design trade-offs in cost and performance.

28.4.1 A Dynamically Reconfigurable System


All of the component technologies described in this chapter have been designed,
implemented, tested, and debugged using the Mojave board shown in Figure 28.10.
This section discusses various performance aspects of the complete system, from
abstract template sets through application-specific CAD tools and finally down to
the embedded processor and dynamic FPGA. The current hardware is connected
to a video camera rather than a SAR data source, though this is only necessary for
testing and early evaluation.
The results presented here are based on routing circuits to two devices: the
Xilinx 4013PG223-4 FPGA and the Xilinx 4036. Xilinx rates the capacity of these
parts as 13K and 36K equivalent gates.
Table 28.2 presents data on the effectiveness of the template-partitioning phase.
Twelve templates were considered for this comparison: in one case they were
randomly divided into three partitions; in the other, the CAD tool was used to guide
the process. The randomly selected partitions required 33 percent more CLBs than
those produced by the intelligent partitioning tool. These numbers account for
the hardware requirements of the entire design, including the control hardware
that is common to all designs as well as the template-specific adder trees.
Relative savings in the adder trees alone are higher.

FIGURE 28.10 ■ Photograph of the second-generation Mojave ATR system.

TABLE 28.2 ■ Comparison of scored and random partitioning on a Xilinx 4036

Random grouping CLB count    Initial partitioning CLB count
1961                         1491
1959                         1449
1958                         1487

TABLE 28.3 ■ Comparison of resources used for the dynamic and static FPGAs

               Flip-flops    Function generators    I/O pins
Dynamic FPGA   532           939                    54
Support FPGA   196           217                    96
Available      1536          1536                   192
Table 28.3 lists the overall resources used for both FPGAs in the system: the
dynamic device used for correlation and the static support device used to
implement system control features. Because the image source is a standard video
camera rather than a SAR sensor, the surround template is the complement of the
bright template, resulting in more hardware than would be required for true SAR
templates. The majority of the flip-flops in the dynamic FPGA are assigned to
holding the 8-bit chip data in a set of shift registers. This load
increases as a linear function of the template size.
Each configuration of the dynamic FPGA requires 16 milliseconds to complete
an evaluation of the entire chip for four template pairs. The Xilinx 4013PG223-4
requires 30 milliseconds for reconfiguration. Thus, a total of four template
pairs can be evaluated every 46 milliseconds, or about 87 template pairs per
second. This timing will
increase logarithmically with the template size.
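The timing arithmetic: 16 ms of evaluation plus 30 ms of reconfiguration is 46 ms per configuration, and each configuration covers four template pairs, so the rate is

4 / 0.046 s ≈ 87 template pairs per second.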
Comparing configurable machines with traditional ASIC solutions is necessary but
complicated. Clearly, for almost any application, a bank of ASICs could be
designed that used the same techniques as the multiple configurations of the FPGA
and would likely achieve higher performance and consume less power. The
principal advantage of configurable computing is that a single FPGA may act as
many ASICs without the cost of manufacturing each device. If the comparison is
restricted to a single IC (e.g., a single FPGA against a single ASIC of similar size),
relative performance becomes a function of the hardware savings enabled by data
specificity. For example, in the ATR application the templates used are quite
sparse—only 5 to 10 percent of the pixels are important in the computation—which
translates directly into a hardware savings that is much more difficult to realize in an
ASIC. Further savings in the ATR application are possible by leveraging topological
similarities across templates. Again, this is an advantage that ASICs cannot easily
exploit.
If the power and speed advantages of ASICs over FPGAs are estimated at a factor
of 10, the configurable computing approach achieves a factor of improvement
anywhere from 2 to 10 (depending on sparseness and topological properties) for
the ATR application.

28.4.2 A Statically Reconfigurable System


The FPGA nodes developed by Myricom integrate reconfigurable computing with a
2-level multicomputer to promote the flexibility of programmable computational
components in a highly scalable network architecture. The Myricom FPGA nodes and
their motherboard are shown in Figure 28.11. The daughter nodes are 2-level
multicomputers whose first level provides the general-purpose infrastructure of
the Myrinet network using the LANai RISC microprocessor. The FPGA functions as a
second-level processor responsible for application-specific tasks.

The host is a SparcStation IPX running SunOS 4.1.3 with a Myrinet interface
board having 512K of memory. The FPGA node, consisting of Lucent Technologies'
ORCA FPGA 40K and Myricom's LANai 4.1 running at 3.3 V and 40 MHz, communicates
with the host through an 8-port Myrinet switch.
Without additional optimization, a static implementation of the complete ATR
algorithm on one FPGA node processes more than 900 templates per second. Each
template requires about 450,000 iterations of 1-bit conditional accumulate for
the complete shape sum calculation. The threshold calculation requires one
division followed by a subtraction. The bright and surround sums compare all the
image pixels against the threshold results. Next, a 1-bit conditional accumulate is
executed for each sum, and then the quality values are calculated using two
divides, an add, and a multiply.

FIGURE 28.11 ■ A Myrinet 8-port switch motherboard with Myricom ORCA FPGA daughter nodes. Four FPGA nodes can be plugged into a single motherboard.
Given that 1-bit conditional accumulate, subtract, divide, multiply, and 8-bit
compare are one operation each, the total number of 8-bit operations to process
one 32 × 32 template over a 64 × 64 image is approximately 3.1 million. Each
FPGA node executes over 2.8 billion 8-bit operations per second (GOPS).
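The two figures agree: 900 templates/s × 3.1 × 10^6 operations per template ≈ 2.8 × 10^9 8-bit operations per second.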
After the simulations, we found that the sparseness of the actual templates
reduced the average number of valid rows to approximately one-half the total
template rows. Implementing this optimization increased the throughput by 40
percent. Further simulations revealed more room for improvements, such as
dividing the shape sum in the FPGA, transposing narrow template masks, and
skipping invalid threshold lines. Although these optimizations were not
implemented in the FPGA, the simulation results indicated an additional 94 percent
increase in throughput. Implementing all optimizations would yield a result
equivalent to about a 7.75 GOPS correlator.

28.4.3 Reconfigurable Computing Models


The increased performance of configurable systems comes with several costs.
These include the time and bandwidth required for reconfiguration, the memory and
I/O required for intermediate results, and the additional hardware required for
efficient implementation and debugging. Minimizing these costs requires innovative
approaches to system design.
Figure 28.12 illustrates the fundamental difference between a traditional
computing model and the two reconfigurable computing architectures discussed in
this chapter. The traditional processor receives simple operands from data
memory, performs a simple operation in the program, and returns the result to
data memory. Similarly, dynamic computing uses a small number of rapidly
reconfiguring FPGAs tightly coupled to an intermediate result memory, data
memory, and configuration memory. A reconfigurable custom computer is similar to
a fixed ASIC device in that, usually, only one highly tuned design is configured
on the FPGA; there is no need to reconfigure to perform a needed function.

FIGURE 28.12 ■ A comparison of a traditional computing model (a) with a dynamically reconfigurable model (b) and a statically reconfigurable custom model (c).
In most cases, a custom ASIC performs far better than a traditional processor.
However, traditional processors continue to be used for their programmability.
FPGAs attempt to bridge the gap between custom ASICs and software by allowing
designers to build custom hardware using programmable firmware. Therefore,
unlike in pure ASIC designs, configuration memory is used to program the
reconfigurable hardware, much as instructions dictate the functionality of a
program in a traditional processor. Unlike software, once the FPGA is configured, it can
function just like a custom device.
As shown in previous sections, an ATR was implemented in an FPGA using two
different methods. The first implementation uses the dynamic computer model,
where parts of the entire algorithm are dynamically configured to produce the final
results. The second design uses simulation results to produce a highly tuned fixed
design in the FPGA that does not require more than a single reconfiguration.
Because of algorithm modifications made to the first design, there is no clear way to
compare the two designs. However, looking deeper, we find that there is not a
drastic difference in the subcomponents or the algorithm; in fact, the number of
required operations for the algorithm in either design should be the same.

The adders make up the critical path of both designs. Because both designs are
reconfigurable, we expect the adders used to have approximately the same
performance as long as pipelining is done properly. Clever use of adders in the
static design allows it to execute more than one calculation simultaneously.
However, it is possible to make similar use of the hardware to increase
performance in the dynamic design.
The first design optimizes the use of adders to skip all unnecessary
calculations, also making each configuration completely custom. The second
design has to be more general to allow some programmability; therefore,
depending on the template data, not all of the adders may be in use at all
times. If all of the templates for the first design can be mapped onto a single
FPGA, the first method is more resource efficient than the second. The
detrimental effect of idle adders in the static design becomes increasingly
prominent as template bitmap rows grow more sparse.
On the other hand, if the templates do not all fit in a single FPGA, the first
method adds a relatively large overhead because of reconfiguration latency.
Unfortunately, the customized method of the second design works against making
the design smaller. Every bit in the template maps to a port of the adder
engine, so the total size of the design is proportional to the total number of
bits in all of the templates. Therefore, as the number of templates increases, the total design size
must also increase. Ultimately, the design must be divided into several smaller
configurations that are dynamically reconfigured to share a single device.

From these results, we observe the strengths and weaknesses of dynamic
reconfiguration in such applications. Dynamic reconfiguration allows a large custom
design to successfully run in a smaller FPGA device. The trade-off is significant
time overhead in the system.

28.5 SUMMARY
Like many streaming image correlation algorithms, the Sandia ATR system
discussed in this chapter can be efficiently implemented on an FPGA. Because of
the high degree of parallelism in the algorithm, designers can take full advantage of
parallel processing in hardware while linearly scaling total throughput with available
hardware resources. In this chapter we presented two different ways of
implementing such a system.
The first system employs a dynamic computing model to effectively implement a
large custom design using a smaller reconfigurable device. To fit, high-performance
custom designs can be divided into subcomponents, which can then share a single
FPGA to execute parts of the algorithm at a high speed. For the ATR algorithm, this
process produced a resource-efficient design that exceeded the performance of
previous custom ASIC-based systems.
The second system is based on a more generic architecture highly tuned for a
given set of templates. Through extensive simulations, many parameters of the
algorithm are tuned to efficiently process the incoming data. With algorithm-specific
optimizations, the throughput of the system increased threefold from an initial naive
implementation. Because of the highly pipelined structure of the design, the
design, the maximum clock frequency is more than three times that of the dynamic
computer design. Furthermore, a larger FPGA on the platform allowed the generic
processing architecture to duplicate the specifications of the original
algorithm. Therefore, the raw performance of the static design was faster than the
dynamically reconfigurable system.
Although the second system is a static design, it is best suited for
reconfigurable platforms because of its highly tuned parameters. Since this
system is reconfigurable, it is conceivable that the dynamic computational model
can be applied on top of it. Thus, the highly tuned design may be implemented efficiently, even
even on a device with enough resources for only a fraction of the entire design.

Acknowledgments I would like to acknowledge Professor William H.
Mangione-Smith for permission to integrate publications on the Mojave project
into this chapter.
