Coverage Analysis Techniques for HDL Design Validation

Jing-Yang Jou

and

Chien-Nan Jimmy Liu

Department of Electronics Engineering
National Chiao Tung University, Taiwan, R.O.C.
jyjou@bestmap.ee.nctu.edu.tw, jimmy@EDA.ee.nctu.edu.tw

Abstract
Functional verification at the register-transfer level (RTL) is still mostly done by simulating HDL designs with a massive amount of test patterns. In a typical design environment, the quality of the test mainly depends on the designers' understanding of the design and is not measurable. Therefore, more objective methods, which use well-defined functional coverage metrics to perform a quantitative analysis of simulation completeness, have been proposed and are rapidly gaining popularity. For this purpose, many functional coverage metrics have been proposed to verify designs written in HDL. We first present a survey of several popular functional coverage metrics in this paper. In order to monitor the coverage during simulation, a dedicated tool is required besides the simulator, and several commercial tools for Verilog and VHDL code coverage are now available. We then introduce three popular approaches to coverage measurement.

1. Introduction
Due to the increasing complexity of modern circuit
design, verification has become the major bottleneck of
the entire design process [1]. Most verification efforts are
put on verifying the correctness of the initial
register-transfer level (RTL) descriptions written in
hardware description languages (HDLs). Functional verification is still mostly done by simulation with a massive amount of test patterns, even though some formal verification techniques claim to be able to verify the equivalence of a design across several different design levels. Until the high computational complexity of formal verification is dramatically reduced, simulation will continue to play an important role in the verification process.
During simulation, an important question is often asked by the designers and the verification engineers: are we done yet? In a typical design environment, the verification task is considered complete when the engineers think they have done a thorough simulation. The quality of the test mainly depends on the designers' understanding of the design and is not measurable. Therefore, more objective methods, which use well-defined functional coverage metrics to perform a quantitative analysis of simulation completeness [2, 3], have been proposed and are rapidly gaining popularity. By monitoring the execution of the HDL code during simulation, the verification engineers can determine which parts of the code have not been tested yet, so that they can focus their efforts on those areas to achieve 100% coverage. Although 100% coverage still cannot guarantee a 100% error-free design, it provides a more systematic way to measure the completeness of the verification process.
For this purpose, many functional coverage metrics [4, 5] have been proposed to verify designs written in HDL. Among these metrics, code coverage is the most popular category; its basic idea is to traverse the language structure completely. FSM coverage is another popular category, which applies input patterns to traverse the whole state transition graph (STG) during the simulation process. Beyond these, many other metrics have been proposed for verifying HDL designs. Although so many different functional coverage metrics exist, no single metric is popularly accepted as the only complete and reliable one. A trade-off often exists between speed and accuracy in these coverage metrics, so designers often use multiple metrics to evaluate the completeness of a simulation. We give a survey of several popular functional coverage metrics in Section 2.
In order to measure the coverage during simulation, a dedicated tool is required besides the simulator. Several commercial tools [7, 8, 9] for Verilog and VHDL code coverage are now available. These tools use special PLI (Programming Language Interface) routines [10, 11] of the HDL simulator to watch the execution status of the source code during simulation. However, these extra monitors always incur some overhead in the simulation. To reduce the overhead, another approach [12] has been proposed that traces, after the normal simulation process, the execution flow recorded in the value change dump (VCD) files [10, 11] produced by simulation. The detailed explanation of these measuring techniques is given in Section 3.

2. Functional Coverage Metrics


Among the various functional coverage metrics, code coverage is the most popular category. Some metrics in this category are the code coverage metrics used in software testing [6], such as statement coverage and decision coverage, while others are proposed for HDL only, such as block coverage and event coverage. Generally speaking, these metrics are simpler to use, but some design errors may not be detected by them. FSM coverage is another popular category of functional coverage metrics. It is intended to find all bugs in a finite state machine (FSM) design, but its computational complexity is much higher. Besides these, many other metrics have been proposed for verifying HDL designs. The detailed explanation of these functional coverage metrics is given in the following sections.

1.  module example ( reset, clk, in, out, en ) ;
2.  input reset, clk ;
3.  input [7:0] in ;
4.  output en ;
5.  output [7:0] out ;
6.  reg en ; reg [7:0] out ;
7.  always @ ( posedge clk ) begin
8.      out = in ;
9.      if ( reset ) out = 0 ;
10.     en = 1 ;
11. end
12. endmodule

Figure 1 : An example written in Verilog HDL.

2.1 Code Coverage

In this section, we introduce six different coverage metrics that are often classified as code coverage. Different names are sometimes used for similar types of coverage metrics; we will try to list those alternate names in the following descriptions.

A. Statement Coverage

Statement coverage is the measurement of statements that have been exercised in the simulation. It is also called line coverage or statement execution coverage. Among all functional coverage metrics, this is one of the simplest, and it can give a lot of useful information quickly at very little cost. Therefore, it is often considered the basic requirement in the functional verification of HDL designs.

This coverage analysis counts only the executable statements. A conditional statement can be treated as one single statement or as two separate statements, depending on the user's choice; however, it is better to consider it as two separate statements. For example, consider the small example shown in Figure 1. There are twelve lines in it, but only line 8, line 9, and line 10 are countable. The statements in line 8 and line 10 are clearly two countable statements. The if statement in line 9 can be viewed as either one or two countable statements. If this line is treated as a single statement, it will be considered covered even if the "out = 0" segment is never executed. It is more complete to treat this line as two statements and report the coverage of the "out = 0" segment independently. In this scenario, there are four independent statements in this example.

B. Block Coverage

Block coverage, which is also called basic block coverage or segment coverage, is similar to statement coverage, except that the measured unit of code is a sequence of non-branching code that is executed together at the same simulation time tick. For example, line 8 and line 10 in the example shown in Figure 1 can be merged into a basic block because line 10 is sure to be executed as long as line 8 is executed. As a result, there are only two basic blocks in the example: line 9, and line 8 together with line 10. Because the number of recording units can be reduced in this way, most coverage systems use block coverage in the recording process to reduce the complexity, but display the results in terms of statements.

C. Decision Coverage

Decision coverage, which is also called branch coverage, measures the coverage of each branch in the if and case statements. In other words, it focuses on the decision points that affect the control flow of the HDL execution. For an if statement, decision coverage will report whether the if statement is evaluated in both the true and false cases during simulation, no matter whether the corresponding else statement exists. For a case statement, decision coverage will verify that each branch of the case statement is taken, including the default and others branches.

If all the possible branches in the if and case statements are explicitly specified in the HDL code, the results of decision coverage and line coverage are the same. However, if some branches are not explicitly mentioned in the HDL code, such as the so-called implied else statement, where the corresponding else of an if statement does not exist as in line 9 of Figure 1, the reports will be quite different. In the case shown in Figure 1, the line coverage analysis will report 100% even if the reset is always true, but the decision coverage analysis will report only 50% to remind the designer that there are still some untested cases in the design. Therefore, decision coverage is considered more complete than line coverage.

D. Path Coverage

Path coverage measures the coverage of all possible paths through the HDL code. A path is defined as a unique sequence of branches or decisions from the beginning of a code section to the end, which corresponds to a path in the control flow of the HDL code. Path coverage looks similar to decision coverage; however, it handles multiple sequential decisions. For example, consider the Verilog code fragment shown in Figure 2. There are four paths through the code fragment, corresponding to all possible combinations of a and b. Executing this sequence of code with "a = 0, b = 0" and then with "a = 1, b = 1" will obtain 100% coverage on both statement coverage and decision coverage. However, path coverage will report only 50% coverage because the two cases with a not equal to b have not been covered.
if ( a ) begin

end
if ( b ) begin

end
Figure 2 : A Verilog code fragment.
Path coverage is considered more complete than decision coverage because it can detect errors related to the sequence of decisions. However, it is considered too cumbersome in practice: there are often so many possible paths through a design that achieving 100% path coverage becomes impractical. This implies that a trade-off between speed and accuracy is often needed in practice for these coverage metrics.
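The difference between decision and path coverage for the Figure 2 fragment can be sketched as follows. This is a minimal illustrative model in Python (not part of the original paper): each test is a pair of values for a and b, decision coverage counts individual branch outcomes, and path coverage counts combinations of the two decisions.

```python
from itertools import product

def decision_coverage(tests):
    """Fraction of the four branch outcomes (a true/false, b true/false) seen."""
    outcomes = set()
    for a, b in tests:
        outcomes.add(("a", bool(a)))
        outcomes.add(("b", bool(b)))
    return len(outcomes) / 4

def path_coverage(tests):
    """Fraction of the four paths (joint outcomes of both decisions) seen."""
    paths = {(bool(a), bool(b)) for a, b in tests}
    return len(paths) / len(list(product([False, True], repeat=2)))

tests = [(0, 0), (1, 1)]
# These two patterns hit every branch outcome but only half of the paths.
print(decision_coverage(tests))  # 1.0
print(path_coverage(tests))      # 0.5
```

The two patterns from the text thus report 100% decision coverage yet only 50% path coverage, since the mixed cases (a != b) are never exercised.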

E. Expression Coverage
Expression coverage, which is also called condition coverage, reveals how the variables or sub-expressions in conditional statements are evaluated during simulation. It can find errors in conditional statements that cannot be found by other coverage analyses with insufficient tests. For example, consider the two conditions shown in Figure 3. If the test pattern is "a = 1, b = 1", the results are identical for both conditions. However, if the actual expression in the code is "a && b" but your intention was "a || b", you will not discover the error unless you test with "a = 1, b = 0" or "a = 0, b = 1".

if ( a && b ) begin

end

if ( a || b ) begin

end

Figure 3 : The errors found by expression coverage.


Typically, expression coverage only examines the variables or sub-expressions combined by logical operators. In other words, it can be viewed as examining the decisions that occur within the expression. For example, given the continuous assignment:

assign e = ( a == b ) & ( ~c ) | ( d != 2 ) ;

expression coverage will check for the cases "a == b", "a != b", "c == 0", "c != 0", "d == 2", and "d != 2". Sometimes it will also check combinations of those cases to make the analysis more complete.
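The check described above can be sketched in Python. This is an illustrative model, not a real tool: each "simulation vector" is a tuple of values for a, b, c, d, and a sub-expression counts as covered once it has been observed both true and false.

```python
def condition_coverage(vectors):
    """Record which truth values of each sub-expression of
    'e = (a == b) & (~c) | (d != 2)' were seen across the vectors."""
    seen = {"a==b": set(), "c==0": set(), "d!=2": set()}
    for a, b, c, d in vectors:
        seen["a==b"].add(a == b)
        seen["c==0"].add(c == 0)
        seen["d!=2"].add(d != 2)
    # a sub-expression is covered only if both truth values were observed
    covered = sum(len(values) == 2 for values in seen.values())
    return covered, len(seen)

vectors = [(1, 1, 0, 2), (1, 0, 1, 3)]
print(condition_coverage(vectors))  # (3, 3): all six cases were hit
```

With only the first vector, none of the three sub-expressions would be fully covered, which is exactly the situation expression coverage is designed to expose.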

F. Event Coverage
Most HDL simulators are event-driven. Therefore, it is natural for designers to care about the possible events that may occur in a design. Events are typically associated with the change of a signal. For example, as shown in line 7 of Figure 1, the event control "always @ ( posedge clk )" waits for the clk signal to change from low to high. Event coverage is the measurement of the events that have been fired for each of the variables to which a block is sensitized. This coverage is not commonly used, but it is sometimes useful for special designs.

2.2 FSM Coverage


Using language-based code coverage, one can quickly get an overview of the exercised code. Generally speaking, these metrics are simpler to use, but some design errors may not be detected by them. Therefore, FSM coverage has been proposed; it is useful for verifying FSM designs because it can traverse the whole design space. In other words, it corresponds more closely to the behavior of a design, so it can find most of the bugs. The detailed explanation of this coverage is given in the following sections.

A. Conventional FSM Coverage


Typically, FSM coverage measures the state visitations and the state transitions that have been exercised in the FSM model of a design. Sometimes paired state visitations, i.e., the relationship between two state machines, are also measured to further understand the behavior of the design. Because this is basically a hardware design concept, it is difficult to build upon software development techniques. Therefore, some techniques [14, 15] have been proposed to extract the FSMs in an HDL design for further analysis, such as measuring their FSM coverage.

FSM coverage can be very complex because the state spaces of modern designs are often too large to be traversed completely. Therefore, in typical usage, some states or transitions may be manually specified as unneeded to reduce the complexity. There are also some efforts trying to reduce the high complexity of testing FSMs; they are introduced in the next section.

B. SFSM Coverage
It is impractical to use conventional FSM coverage to verify the behavior of an FSM because the size of the STG is often too large to be traversed completely. Some techniques [16, 17] have been proposed to reduce the huge sizes of STGs by separating the datapath and verifying the control part only. However, the exponential relationship between the number of states and the number of registers still requires a long simulation time to verify the resultant STGs completely.

In order to cope with the state explosion problem in verifying FSM designs, another approach [18] has been proposed that models FSM designs at a higher level of abstraction. This model is named the Semantic Finite State Machine (SFSM) in the paper. The STGs of FSMs can be dramatically reduced in this model by using the content of the HDL descriptions to merge similar tests. For example, consider an 8-bit counter with synchronous load and reset functions as shown in Figure 4. The conventional STG has 256 states and 66047 transitions. However, because there are only three different behaviors in the HDL code, the STG can be greatly reduced to only 3 states and 11 transitions in the SFSM model, as shown in Figure 5.
module counter ( clk, reset, load, in, count ) ;
input clk, reset, load ;
input [7:0] in ;
output [7:0] count ;
reg [7:0] count ;
always @( posedge clk ) begin
    if ( reset ) count = 0 ;
    else if ( load ) count = in ;
    else if ( count == 255 ) count = 0 ;
    else count = count + 1 ;
end
endmodule

Figure 4 : A counter example written in Verilog HDL.
[Figure: the reduced STG, with three states (count = 0, count = in, count = count + 1) connected by transitions labeled with the following conditions:
C1 : reset
C2 : (!reset) load
C3 : (!reset) (!load) (count == 255)
C4 : (!reset) (!load) (count != 255)]

Figure 5 : The reduced STG of the counter example in Figure 4.


As a result, if we replace the FSM model in conventional FSM coverage with the proposed SFSM model, the complexity of the FSM coverage test could be acceptable even for large designs. It seems to be a practical solution to combine both the controller extraction and the high-level modeling techniques for verifying FSM designs.
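The state reduction in the counter example can be sketched concretely. The following Python model (an illustration, not the algorithm from [18]) classifies one clock edge of the Figure 4 counter into its SFSM condition; all 256 concrete count values collapse into just a few behavioral cases.

```python
def sfsm_condition(reset, load, count):
    """Classify one clock edge of the Figure 4 counter into its SFSM
    condition; the conditions mirror the priority of the nested ifs."""
    if reset:
        return "C1"   # count = 0
    if load:
        return "C2"   # count = in
    if count == 255:
        return "C3"   # count = 0 (wrap-around)
    return "C4"       # count = count + 1

# With reset and load both low, the 256 concrete states of the
# conventional STG fall into only two SFSM conditions.
conds = {sfsm_condition(0, 0, c) for c in range(256)}
print(sorted(conds))  # ['C3', 'C4']
```

Covering the SFSM thus means exercising each behavioral condition rather than each of the 256 concrete count values, which is what makes the reduced model tractable to traverse.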

2.3 Other Coverage Metrics


Besides the metrics above, many other metrics have been proposed for verifying HDL designs. We introduce three of them in this section.

A. Observability-Based Code Coverage


In manufacturing test [19], a fault can be detected only if two main requirements, controllability (activation) and observability (propagation), are both satisfied. From this point of view, most functional coverage metrics only activate the statements, branches, or sequences of statements but do not address the observability requirement. Therefore, Devadas et al. propose a new coverage metric with observability consideration for functional verification [20]. They also propose a series of techniques to do fault simulation [21] and pattern generation [22] based on this coverage metric.

The basic idea of this coverage metric is to view a circuit as computing a function and to view the computation as a series of assignments to variables. Errors in computation are therefore modeled as errors in the assignments. The possibility of an error is represented by tagging the variable (on the left-hand side of the assignment) with a symbol Δ, which signifies a possible change in the value of the variable due to an error. Both positive tags (Δ) and negative tags (-Δ) are considered, to represent the sign of the change. We can then watch those tags at the primary outputs to see whether all possible errors can be observed during simulation.

If we change the label D in the manufacturing test to Δ, the overall operations of this coverage analysis are almost the same as those in the manufacturing test, except that the operations are done at the RT level instead of the gate level. At the gate level, the rules to propagate a fault are very clear because only simple Boolean operators exist in the design. At the RT level, however, the rules to propagate a tag can be very complex because there are complex operators such as "a + b > 5". How to set the other variables so that the tags can be propagated out becomes the major bottleneck of this technique.
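The tag idea can be sketched with a toy calculus in Python. This is a simplified illustration of the concept, not the actual rules from [20]: each variable carries a (value, tag) pair, addition always lets a tag through, and a comparison lets a tag through only if perturbing the operand could flip the result.

```python
# Each variable is (value, tag), where tag is +1 (a Δ tag), -1 (a -Δ tag),
# or 0 (no tag).
def tagged_add(x, y):
    """Addition propagates the first nonzero tag; a real tag calculus
    also handles two tags cancelling each other."""
    (xv, xt), (yv, yt) = x, y
    return (xv + yv, xt or yt)

def tagged_gt(x, const):
    """For 'x > const', propagate the tag only if the perturbation could
    flip the comparison result; otherwise the tag is blocked."""
    xv, xt = x
    flips = (xv > const) != (xv + xt > const)
    return (xv > const, xt if flips else 0)

a = (3, +1)           # a carries a positive tag
b = (2, 0)
s = tagged_add(a, b)  # (5, +1): the tag reaches the sum
print(tagged_gt(s, 5))  # (False, 1): a +Δ on s can flip 's > 5'
print(tagged_gt(s, 9))  # (False, 0): here the tag is blocked
```

The second comparison shows the RT-level difficulty the text describes: whether a tag propagates through an operator like ">" depends on the values of the other operands, which is what makes tag propagation much harder than gate-level fault propagation.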

B. Toggle Coverage
Toggle coverage, one of the oldest coverage measurements in hardware design, measures the bits of logic that have toggled during simulation. It can be used as a crude form of functional coverage because a bit may not be properly tested if it never toggles from 0 to 1 or from 1 to 0. Since the measured unit is the bit, it is well suited to gate-level designs. More often, it is used as the foundation for power analysis tools [23].
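A minimal sketch of this measurement in Python (an illustration, not a real tool): a signal trace is a list of successive integer values, and a bit counts as covered only after it has toggled in both directions.

```python
def toggle_coverage(trace, width):
    """trace: successive values of a 'width'-bit signal.
    A bit is covered once it has toggled both 0->1 and 1->0."""
    rise, fall = set(), set()
    for prev, curr in zip(trace, trace[1:]):
        for bit in range(width):
            p, c = (prev >> bit) & 1, (curr >> bit) & 1
            if p == 0 and c == 1:
                rise.add(bit)
            elif p == 1 and c == 0:
                fall.add(bit)
    return len(rise & fall) / width

# Bit 0 toggles both ways; bit 1 never moves, so coverage is 50%.
print(toggle_coverage([0b00, 0b01, 0b00], 2))  # 0.5
```

Requiring both directions is what distinguishes toggle coverage from merely observing activity: a stuck-after-rising bit would still be reported as uncovered.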

C. Variable Coverage
The extension of toggle coverage to the RTL is called variable coverage. In other words, variable coverage and toggle coverage are the same for a single-bit variable. Toggle coverage is typically used at the gate level, where there is no knowledge of how bits are grouped into a larger entity. For a bus, we often use variable coverage to verify that particular values have been visited. For example, it may verify that each bit of a bus has toggled, and that the bus has had its minimum value (typically 0), its maximum value (all 1s), and alternating patterns of 1s and 0s.

3. Coverage Analysis Techniques

Most existing coverage measurement tools use special PLI (Programming Language Interface) routines [10, 11] of the HDL simulator to measure the execution statistics of the source code during simulation. Because the simulator-supported PLI routines have two versions, PLI 1.0 and PLI 2.0 (also called VPI), there are several ways to collect the coverage data with different PLI routines. We introduce two popular PLI-based approaches in the following sections. Moreover, because some extra overhead is always incurred in the simulation by the PLI-based approaches, another approach [12] has been proposed that traces the execution flow, after the normal simulation process, by using the value change dump (VCD) files [10, 11] that are a by-product of a simulation. The detailed explanation of this measuring technique is also given in the following sections.

3.1 Simulator Supports

Because the job of coverage measurement is to monitor the actions in a simulation, support from the simulator is necessary. The analysis capabilities of these measuring techniques mainly depend on two simulator supports: PLI and VCD files. Before discussing the techniques themselves, we first introduce these two simulator supports.

3.1.1 Programming Language Interface

PLI is an interface mechanism that provides a means for users to link their applications to the simulators. Through this interface, users can dynamically access and modify the data in an instantiated HDL data structure with their own C functions. For Verilog HDL, there are three primary categories of PLI routines: TF (Task/Function) routines, ACC (Access) routines, and VPI (Verilog Procedural Interface) routines. These routines are clearly defined in the IEEE Standard 1364 [11]. For VHDL, there is also a committee working on PLI standards, but since those standards are not yet settled, we focus on Verilog HDL in the following discussions.

Almost all commercial Verilog simulators support the TF and ACC routines, which are often classified as PLI 1.0. However, the ACC routines can access only a small part of the entire language structure, and the callback capability they provide is limited to value change callbacks. It is difficult to perform more complex analysis with those routines only. Therefore, Cadence proposed the VPI routines to extend the access and callback capabilities of the original ACC routines; these VPI routines are often classified as PLI 2.0. However, for performance reasons, some simulators, such as Synopsys VCS, do not support the VPI routines, which limits the usage of VPI routines in third-party tools.

3.1.2 Value Change Dump Files

VCD files, which can be generated in the simulation process, record every value change of selected variables during simulation. With the VCD feature, we can save the value changes of selected variables for any portion of the design hierarchy during any specified time interval. The VCD format is a very simple ASCII format. Besides some simple commands that define the parameters used in the file, the format is basically a table of the values of variables with timing information, as shown in Figure 6. For each time at which variables change, the current time and the new values of the changed variables are recorded. Because the VCD files keep all the value information during simulation and are supported by most Verilog simulators, they are widely used in commercial tools for post-processing, such as in waveform viewers.

# time1
<new_value><variable1>
<new_value><variable2>
.....
# time2
<new_value><variable1>
<new_value><variable2>
.....

Figure 6 : An illustration of the VCD format.
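Reading back such a value-change table is straightforward. The following Python sketch parses only the simplified format illustrated in Figure 6; it assumes single-character identifier codes and skips the header commands ($var, $timescale, etc.) that a real VCD file contains.

```python
def parse_vcd_changes(text):
    """Parse the simplified value-change section of Figure 6:
    '#<time>' lines start a new time step, and '<value><id>' lines
    record one change. Returns {time: {id: value}}."""
    changes, time = {}, None
    for line in text.split("\n"):
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            time = int(line[1:])
            changes[time] = {}
        else:
            # scalar changes are the new value followed by the id code
            value, ident = line[:-1], line[-1]
            changes[time][ident] = value
    return changes

dump = '#10\n1!\n0"\n#20\n0!\n'
print(parse_vcd_changes(dump))  # {10: {'!': '1', '"': '0'}, 20: {'!': '0'}}
```

A post-processing tool replays such a table in time order, which is exactly the access pattern the dumpfile-based coverage analysis of Section 3.4 relies on.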

3.2 Instrumentation-Based Coverage Analysis


The Verilog PLI allows users to create their own system tasks and insert them into the Verilog programs to be simulated. When the program is simulated and such a task is executed, we can retrieve the current status of the Verilog program at that moment. In other words, we can put our own breakpoints at any place in the Verilog program and do whatever jobs we want at those stops during simulation. Because this mechanism is supported in both PLI 1.0 and PLI 2.0, i.e., by almost all commercial Verilog simulators, most coverage measurement tools analyze coverage in this way.
Before running the simulation to analyze coverage, a pre-processing tool is required in this approach to instrument the measuring tasks at suitable places in the original source code. As described above, there are many different coverage metrics, monitoring different objects; therefore, the way the measuring tasks are inserted has to be adapted for each kind of metric. Because the places where code is inserted and the way the statistics are recorded have a deep impact on simulation performance, those tasks have to be set very carefully.

In Figure 7, we show one possible approach to code instrumentation. For statement coverage and block coverage, as shown in Figure 7(a), we can instrument the measuring task at the end of each code block with suitable parameters that indicate the current location. Then the execution count of this code block can be incremented each time this task is executed in the simulation. Decision coverage and path coverage can be measured in a similar way, by putting measuring tasks on each possible branch or path. For expression coverage, as shown in Figure 7(b), we can first decompose the sub-expressions of an expression into several temporary variables; then we obtain the values of those sub-expressions whenever the following sampling task is executed.
(a) Before instrumentation:
a = b + c ;
d = a + 2 ;

After instrumentation:
a = b + c ;
d = a + 2 ;
Top.id = InstanceID ;
Top.index = 2 ;
$count_stmt ;

(b) Before instrumentation:
if ( a == 3 | b == 5 )

After instrumentation:
Tmp1 = a == 3 ;
Tmp2 = b == 5 ;
Tmp3 = Tmp1 | Tmp2 ;
Top.id = InstanceID ;
Top.index = 4 ;
$sample_values(Tmp1,Tmp2,Tmp3) ;
if (Tmp3)

Figure 7 : Two examples of code instrumentation.


Since all the difficult jobs are done by the pre-processing tool, only a few extra jobs are left for the simulation: incrementing the corresponding counts when the instrumented tasks are executed, and outputting the statistics at the end of the simulation. However, due to the large amount of inserted code, the extra overhead on the simulation time is high.
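A pre-processing tool in the spirit of Figure 7(a) can be sketched as a simple source rewriter. This Python sketch is a deliberately naive illustration: it matches only simple "x = expr ;" lines (a real tool needs a full Verilog parser), and the Top.id/Top.index/$count_stmt names are taken from the figure.

```python
def instrument(lines, instance_id):
    """Append a counting task after each simple procedural assignment,
    mimicking the statement-coverage instrumentation of Figure 7(a)."""
    out, index = [], 0
    for line in lines:
        out.append(line)
        is_assign = ("=" in line and line.rstrip().endswith(";")
                     and not line.lstrip().startswith(("if", "else")))
        if is_assign:
            index += 1
            out.append(f"Top.id = {instance_id} ; "
                       f"Top.index = {index} ; $count_stmt ;")
    return out

src = ["a = b + c ;", "d = a + 2 ;"]
for line in instrument(src, 7):
    print(line)
```

Each inserted $count_stmt call is what makes the simulation slower: the simulator must cross the PLI boundary once per executed statement, which is the overhead the text refers to.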

3.3 Callback-Based Coverage Analysis


Due to the inconvenience incurred by the instrumented code, another approach has been proposed that analyzes coverage through the capability of the simulator itself. PLI 2.0 supports many kinds of callback mechanisms that can execute user-provided functions when specific events occur in the simulation. For example, we can ask the simulator to execute our recording routine before executing a specific behavioral statement. In this way, if we set up the callback events properly, we can still obtain the coverage statistics without the extra tasks of the previous approach. Furthermore, because the capability of the access routines is greatly improved in PLI 2.0, we can analyze the design and set up those callback events without an extra language parser. As a result, the pre-processing tool is totally unnecessary; everything can be done in one command.

In this approach, the only code that has to be inserted into the source is the system task that starts the coverage analysis. This task can automatically set up the callback events at the beginning of the simulation, keep statistics during the simulation, and output the coverage results at the end of the simulation. Therefore, the overhead on the code size is very small, and the source code can be kept in its original form. However, the supports required by this approach are only available in PLI 2.0; it cannot be applied to simulators that support PLI 1.0 only.
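The callback idea can be modeled without a real simulator. The following Python sketch is only an analogy for PLI 2.0 statement callbacks (TinySimulator and its method names are invented for illustration): registered routines run before each statement executes, so an execution-count table is built without touching the source statements themselves.

```python
class TinySimulator:
    """A toy 'simulator' that, like a PLI 2.0 statement callback,
    invokes user-registered routines before each statement executes."""
    def __init__(self):
        self.callbacks = []

    def register_statement_callback(self, fn):
        self.callbacks.append(fn)

    def run(self, statements):
        env = {}
        for i, stmt in enumerate(statements):
            for fn in self.callbacks:
                fn(i)                  # recording routine runs first
            exec(stmt, {}, env)        # then the statement itself
        return env

counts = {}
sim = TinySimulator()
sim.register_statement_callback(
    lambda i: counts.__setitem__(i, counts.get(i, 0) + 1))
env = sim.run(["a = 1", "b = a + 1"])
print(counts)  # {0: 1, 1: 1}
```

The point of the analogy is that the source program ("a = 1", "b = a + 1") is unmodified; all measurement lives in the registered routine, which is exactly why this approach keeps the source code in its original form.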

3.4 Dumpfile-Based Coverage Analysis


In the two PLI-based approaches, the coverage analysis capability depends on the simulator PLI. Because the extra PLI routines have to be compiled with the simulator, running the coverage analysis also requires a simulator run. If the users want to see another coverage report for different metrics after one analytic simulation run, they may have to re-run the long simulation because the measuring tasks have to be arranged in another way. Furthermore, if only partial coverage data is required, we still have to simulate the whole design in those PLI-based approaches to gather the execution information, so no complexity reduction can be achieved.

In order to solve those problems, another approach [12] has been proposed that analyzes coverage from the by-product of a simulation: the VCD files. The dumpfiles record every value change of selected variables during simulation and are widely used in commercial tools for post-processing, such as in waveform viewers. Generally speaking, given the value of each variable in an HDL program at any specific time, we know exactly which part of the code will be executed next. Therefore, we can easily calculate the desired code coverage by traversing the dumpfiles with properly selected variables.

Because this DUCA (DUmpfile-based Coverage Analysis) algorithm does not use PLI supports, the coverage analysis runs can be independent of the simulation runs. In addition, since all information is kept in the dumpfiles, we can easily generate coverage reports for any kind of coverage metric from the same dumpfiles, which provides the capability to switch between different coverage reports very quickly without re-running the long simulation again and again. If only a partial coverage report is required, we can retrieve only partial data from the dumpfiles so that the complexity can be reduced. Furthermore, because the VCD feature is supported by most Verilog simulators, no adoption problem exists as in the callback-based approach.
In order to explain the operations in DUCA more clearly, an example is shown in Figure 8. Given a value change of a signal in the dumpfiles, DUCA first finds the fanouts of this signal, which are the pieces of code affected by this change, and then puts marks on the associated positions in the event graph and statement trees. If the affected code is the triggering condition of a vertex in the event graph, it marks the vertex as Triggered; if the affected code is the entering condition of a vertex in the statement trees, it marks the vertex as Modified. These marks avoid duplicate computations in the following steps.
[Figure: a value change of clk or enb in the dumpfile is traced to its fanouts in the source code below; the affected triggering conditions in the event graph are marked T (Triggered), and the affected entering conditions in the statement trees are marked M (Modified).]

always @ ( posedge clk ) begin
    if ( reset ) out1 = 0 ;
    else if ( enb ) out1 = in ;
    else out1 = data ;
end

always @ ( negedge clk ) begin
    if ( enb ) out2 = 1 ;
    else out2 = 0 ;
end

Figure 8 : An illustration of the signal change tracing.


After all the signal changes at the same time have been processed, DUCA can traverse the statement trees of the triggered events to decide which code is executed. An example is shown in Figure 9 with the marks from Figure 8. While traversing down the statement trees, if the current vertex is marked as Modified, the result of its entering condition may have changed and has to be evaluated again to decide which path to take; if not, the previously evaluated result can be used directly to eliminate unnecessary computation. The visited marks are then cleared for use in the next marking process. The condition evaluation of vertices that are marked as Modified but are not on the traversed paths is also skipped because it is unnecessary at this point; the Modified labels of these vertices are kept until they finally appear on the traversed paths.
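The traversal with cached condition results can be sketched as follows. This Python model is an illustration of the idea, not the DUCA implementation from [12]: a statement tree is a nest of branch vertices, a condition is re-evaluated only when its vertex is marked Modified, and leaves record the executed statements.

```python
class Node:
    """One vertex of a statement tree: a branch condition with two subtrees."""
    def __init__(self, cond, true_branch, false_branch):
        self.cond = cond
        self.true_branch, self.false_branch = true_branch, false_branch
        self.modified = True       # force evaluation on the first traversal
        self.last_result = None

def traverse(node, env, executed):
    """Walk a statement tree; re-evaluate a condition only when it is
    marked Modified, otherwise reuse the cached previous result."""
    if isinstance(node, str):      # leaf: an executed statement
        executed.append(node)
        return
    if node.modified:
        node.last_result = node.cond(env)
        node.modified = False      # clear the mark for the next step
    branch = node.true_branch if node.last_result else node.false_branch
    traverse(branch, env, executed)

# The posedge statement tree of Figure 8: if (reset) ... else if (enb) ...
tree = Node(lambda e: e["reset"], "out1 = 0",
            Node(lambda e: e["enb"], "out1 = in", "out1 = data"))
run = []
traverse(tree, {"reset": False, "enb": True}, run)
print(run)  # ['out1 = in']
```

On a later time step, any vertex whose condition inputs did not change keeps modified == False and its cached last_result is reused, which is the duplicate-computation saving the marking scheme is designed for.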
After traversing the dumpfiles following the steps
described above, we can obtain statistics on which code is
executed and what the execution counts are. With those
statistics, the coverage reports for the user-specified
coverage metrics can be easily generated by a
post-processing step.

[Figure: in the statement trees of Figure 8, unmodified conditions reuse their previous results, Modified conditions on the traversing paths are re-evaluated, Modified conditions off those paths keep their marks, and the leaves reached are the executed code.]

Figure 9: An illustration of deciding executed code.

Almost all coverage metrics can be
easily calculated from those statistics with little
computation overhead, which makes it easy to switch the
report between different coverage metrics.
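As an example of such a post-processing step, the sketch below computes statement coverage from per-statement execution counts. The counts and the metric definition shown here are illustrative assumptions; the paper's statistics would come from the dumpfile tracing described above.

```python
# Minimal post-processing sketch: derive a coverage metric from
# execution-count statistics. The counts below are made-up examples.
execution_counts = {
    "out1 = 0":    0,
    "out1 = in":   41,
    "out1 = data": 7,
    "out2 = 1":    48,
    "out2 = 0":    0,
}

def statement_coverage(counts):
    """Fraction of statements executed at least once."""
    hit = sum(1 for c in counts.values() if c > 0)
    return hit / len(counts)

print(f"statement coverage: {statement_coverage(execution_counts):.0%}")
# prints: statement coverage: 60%
```

Other metrics (branch, condition, toggle) can be derived from the same statistics in the same way, which is why switching between coverage reports costs little extra computation.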

4. Conclusions
Coverage-driven functional verification is rapidly
gaining popularity. Many functional coverage metrics have
been proposed to verify designs written in HDL, and a
survey of several popular functional coverage metrics has
been presented in this paper. However, although so many
different functional coverage metrics have been proposed,
no single metric, analogous to the stuck-at fault model in
manufacturing testing, has yet been widely accepted as
complete and reliable. Considerable effort is still required
to develop better functional coverage tests.
In order to monitor coverage during simulation, a
dedicated tool is required in addition to the simulator.
Several commercial tools for Verilog and VHDL code
coverage are now available, and we have introduced three
popular approaches to coverage measurement in this paper.
However, due to the limited simulator support in PLI 1.0,
the extra overhead incurred by coverage measurement
is still high. Using the extended PLI 2.0, which provides
more support, seems to be a good solution, but it is not
yet available in some Verilog simulators. Many
engineers and researchers are still searching for a
solution that offers more convenience and better
performance in coverage measurement.
Besides those standalone coverage measurement tools,
some simulator vendors, such as Cadence, Synopsys, and
Model Technology [13], are trying to integrate the
coverage measurement feature into their simulation
environments. Because coverage measurement essentially
monitors the actions of a simulator, it would be more
convenient to integrate those monitors into the simulation
environment. However, existing simulators have been
heavily optimized for speed, and some information may be
lost in those optimizations so that the simulators cannot
analyze the coverage. Therefore, integrating the coverage
measurement feature into their simulators without speed
degradation remains a big challenge for those vendors.

5. Acknowledgments
This work was supported in part by NOVAS Software
Incorporation and the R.O.C. National Science Council
under Grant NSC89-2215-E-009-009. Their support is
greatly appreciated.

References
[1] Adrian Evans, Allan Silburt, Gary Vrckovnik, Thane Brown, Mario Dufresne, Geoffrey Hall, Tung Ho, and Ying Liu, "Functional Verification of Large ASICs," 35th Design Automation Conference, Jun. 1998.
[2] Aarti Gupta, Sharad Malik, and Pranav Ashar, "Toward Formalizing a Validation Methodology Using Simulation Coverage," 34th Design Automation Conference, Jun. 1997.
[3] Mike Benjamin, Daniel Geist, Alan Hartman, Yaron Wolfsthal, Gerard Mas, and Ralph Smeets, "A Study in Coverage-Driven Test Generation," 36th Design Automation Conference, Jun. 1999.
[4] Tsu-Hwa Wang and Chong Guan Tan, "Practical Code Coverage for Verilog," Int'l Verilog HDL Conference, Mar. 1995.
[5] D. Drako and P. Cohen, "HDL Verification Coverage," Integrated Systems Design Magazine, Jun. 1998. (http://www.isdmag.com/Editorial/1998/CodeCoverage9806.html)
[6] B. Beizer, Software Testing Techniques, Van Nostrand Reinhold, New York, 1990.
[7] CoverMeter, Advanced Technology Center. (http://www.covermeter.com)
[8] CoverScan, Design Acceleration Incorporation. (http://www.designacc.com/products/coverscan/index.html)
[9] HDLScore, Summit Design Incorporation. (http://www.summit-design.com/products/hdlscore.html)
[10] Cadence Reference Manuals.
[11] IEEE Standard Hardware Description Language Based on the Verilog Hardware Description Language (IEEE Standard 1364), 1995.
[12] Chen-Yi Chang, "On Functional Coverage Analysis for Circuit Description in HDL," Master Thesis, Department of Electronics Engineering, National Chiao Tung University, Taiwan, R.O.C., Jun. 1999.
[13] ModelSim, Model Technology. (http://www.model.com)
[14] Tsu-Hwa Wang and Thomas Edsall, "Practical FSM Analysis for Verilog," Int'l Verilog HDL Conference, Mar. 1998.
[15] Chien-Nan Liu and Jing-Yang Jou, "A FSM Extractor for HDL Description at RTL Level," Fifth Asia-Pacific Conference on Hardware Description Languages (APCHDL), Seoul, Korea, Jul. 1998.
[16] Richard C. Ho and Mark A. Horowitz, "Validation Coverage Analysis for Complex Digital Designs," Int'l Conference on Computer Aided Design, Nov. 1996.
[17] Dinos Moundanos, Jacob A. Abraham, and Yatin V. Hoskote, "Abstraction Techniques for Validation Coverage Analysis and Test Generation," IEEE Trans. on Computers, Vol. 47, No. 1, pp. 2-14, Jan. 1998.
[18] Chien-Nan Jimmy Liu and Jing-Yang Jou, "An Efficient Functional Coverage Test for HDL Descriptions at RTL," Int'l Conference on Computer Design, Oct. 1999.
[19] Miron Abramovici, Melvin A. Breuer, and Arthur D. Friedman, Digital Systems Testing and Testable Design, Computer Science Press, New York, 1990.
[20] Srinivas Devadas, Abhijit Ghosh, and Kurt Keutzer, "An Observability-Based Code Coverage Metric for Functional Simulation," Int'l Conference on Computer Aided Design, Nov. 1996.
[21] Farzan Fallah, Srinivas Devadas, and Kurt Keutzer, "OCCOM: Efficient Computation of Observability-Based Code Coverage Metric for Functional Simulation," 35th Design Automation Conference, Jun. 1998.
[22] Farzan Fallah, Pranav Ashar, and Srinivas Devadas, "Simulation Vector Generation from HDL Descriptions for Observability-Enhanced Statement Coverage," 36th Design Automation Conference, Jun. 1999.
[23] Farid N. Najm, "A Survey of Power Estimation Techniques in VLSI Circuits," IEEE Trans. on VLSI Systems, Vol. 2, No. 4, pp. 446-455, Dec. 1994.
