Introduction
The complexity of modern SoC designs has created a verification crisis. Engineers cannot imagine all
of the possible corner-case behaviors, let alone write tests to exercise them. The only way to address
the increased complexity is to supplement traditional functional verification methods by combining
assertions, simulation, and formal techniques in a process called assertion-based verification (ABV).
Overall, ABV addresses two high-level verification challenges:
1. Improving the thoroughness of the coverage-driven verification methodology.
2. Reducing the number of functional bugs missed by traditional verification methodologies.
Various papers have discussed and addressed the first challenge[1,2,3,4]. They concur that adding ABV to
a coverage-driven methodology augments the base coverage metric with the coverage attained by
assertions. As assertions monitor the functionality of the design, they measure the thoroughness of the
directed or pseudo-random test environment.
In this white paper, we address the second challenge: How can you use assertions and formal
verification to find bugs missed by traditional verification methodologies?
Based on our experience helping many design teams deploy assertions and formal verification [6,16,17],
we recommend deploying ABV (including formal model checking) on the most salient verification hot
spots in a design[19], following a seven-step formal verification planning process[15]. By focusing ABV
on verification hot spots, a design team can adopt ABV incrementally as they continue to use their
simulation-based methodology. This has the added benefit of minimizing the risks involved with
adopting a new methodology while maximizing the return-on-investment.
Assertion-Based Verification
Assertions increase the observability of the design. When used in a simulation-based verification
environment, embedded assertions catch design issues locally, bringing them to the attention of the
users immediately as they happen. Assertions allow hard-to-reach, hard-to-verify logic and critical
functionality to be identified as coverage goals.
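For example, such a condition can be expressed both as a check and as a coverage goal. A minimal SVA sketch, with hypothetical req/gnt signals and an active-high rst:
// Check: no grant may be issued without a pending request.
a_gnt_implies_req: assert property
  (@(posedge clk) disable iff (rst) |gnt |-> |req);
// Coverage goal: all channels requesting simultaneously.
c_all_req: cover property
  (@(posedge clk) disable iff (rst) &req);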
Formal verification with model checking technology[14] increases the controllability of the design.
Once a design is instrumented with assertions, formal verification can verify areas of concern, known
as hot spots. Model checking analyzes the RTL structure of a design and characterizes its internal
nature, and it targets corner-case behaviors directly. Each assertion violation discovered by model
checking is reported along with its counter-example. This uncovers functional errors that would have
been missed using traditional verification methodologies.
The benefits of ABV have become obvious and essential to many design teams. Standardized assertion
languages (such as PSL[9] and SVA [10]) and assertion libraries (such as OVL[11], QVL [12] and 0-In Checkers [13])
have lowered the barrier for adopting the ABV methodology. In fact, we have found that the majority of
assertions used are simple and that simple assertions work just as well as complex ones.
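For illustration, the same simple check, written once in each standard language. A hedged sketch; gnt and req are hypothetical single-bit signals:
SVA:
a_gnt_req: assert property (@(posedge clk) gnt |-> req);
PSL (Verilog flavor, embedded as a pragma comment):
// psl assert always (gnt -> req) @(posedge clk);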
Table 1 summarizes the verification hot spots of an SDRAM controller and a cache controller. In the
following sections, we discuss these hot spots in detail, identifying their verification
requirements, the properties used to express them, and the appropriate coverage strategies. We have
captured the properties with both assertion languages and assertion libraries; many verification
engineers prefer to keep the two as distinct verification elements. The merits of the
different approaches should be evident from the examples. Finally, we present real-world bug
examples to highlight the types of bugs that can be found.
// Round-robin check
// Default highest priority = 0 (channel 1, req[4])
integer p;
always @(posedge clk)
  if (gnt != 0)
    p <= gnt[4] ? 0 : (gnt[3] ? 4 : (gnt[2] ? 3 :
         (gnt[1] ? 2 : (gnt[0] ? 1 : 0))));
  else p <= 0;
SVA:
// reqrb/gntrb are req/gnt rotated by the priority pointer p
// (rotation logic not shown; one possible sketch follows below)
property p_round_robin;
  @(posedge clk) disable iff (rst)
  (gnt != 'b0) |-> (gntrb < (reqrb & ~gntrb));
endproperty
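The property compares rotated copies of the request and grant vectors; the rotation logic itself is not shown here. One possible realization, assuming the vectors are rotated right by the priority pointer p so that the highest-priority channel lands at bit 0:
// Hypothetical rotation logic assumed by p_round_robin.
// Assignment to a 5-bit net truncates the left shift, which
// completes the rotate.
wire [4:0] reqrb = (req >> p) | (req << (5 - p));
wire [4:0] gntrb = (gnt >> p) | (gnt << (5 - p));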
OVL (Verilog):
The arbitration checker from the OVL 2.0 library can be used. It ensures that grants are given only to
requesting channels and that at most one grant is given at a time. The supported arbitration schemes
are fair/round_robin (1), FIFO (2), and least_recently_used (3). Setting the arbitration_rule parameter
to 1 selects the round-robin scheme.
ovl_arbiter
  #(.width(5), .arbitration_rule(1))
  ovl_round_robin_check
  (.clock(clk), .reset(rst), .enable(1'b1),
   .reqs(req), .gnts(gnt));
QVL (Verilog):
This arbitration checker supports more functionality than the OVL arbiter. The supported arbitration
schemes include fixed-priority (1), basic fairness (2), queue (3), round-robin (4), and least-recently-
used (5). In addition, it supports different request styles, as well as parking for immediate grant of the
high-priority channel. Coverage information is collected automatically with SystemVerilog
covergroups and coverpoints.
qvl_arbiter
  #(.width(5), .req_type(2), .gnt_type(4))
  qvl_round_robin_check
  (.clk(clk), .reset_n(rst_n), .active(1'b1),
   .req(req), .gnt(gnt));
0-In Checkers:
This 0-In checker provides functionality equivalent to the QVL arbiter checker. The directive
approach makes specification easier: the bit-widths of the arguments and the clock and reset signals
are inferred automatically.
// 0in arbiter -round_robin -req_until_gnt -req req -gnt gnt
An arbiter checker from the QVL assertion library was used to instrument a channel arbiter in a
multimedia processor core. The goal was to ensure that requests from the channels were granted with a
round-robin scheme. During simulation, the arbiter was never heavily loaded; formal verification was
able to stress the design more thoroughly. When multiple channels were requesting, the processor
went from BUSY to AVAILABLE, and an interrupt arrived at the same time, the arbiter
granted the data path to the wrong channel. This was found statically, without any simulation. During
the bug review, the project team admitted that they would not have found this bug because it required
multiple events to happen simultaneously. Even at 100 percent coverage on their coverage
model, they would not have hit this scenario.
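Such a scenario can also be recorded as a cover property, so that simulation reports whether it is ever exercised. A hedged sketch; proc_busy and irq are hypothetical signal names:
// The corner case requires several events in the same cycle:
// multiple pending requests, a BUSY-to-AVAILABLE transition,
// and an incoming interrupt.
c_arb_corner: cover property (@(posedge clk) disable iff (rst)
  ($countones(req) > 1) && $fell(proc_busy) && irq);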
This bug occurred in an ARM-based SoC design. The zero_one_hot checker from the OVL assertion
library, the multiplexer checker from the QVL library, and multiple SVA properties were used to
instrument the design. Due to a coding error and incorrect address masking, the logic in the address
decoder did not enforce selection of a single slave. As a result, with certain addresses, multiple slaves
could be selected. Static formal verification found this problem at the block level, before any
simulation test was written to regress the block. After fixing this bug, the project team formally proved
that, under any possible circumstance, one and only one slave would be selected.
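That proof obligation can be captured with a single assertion. A minimal sketch, assuming a decode_valid qualifier and a slave_sel select vector (both hypothetical names):
// Exactly one slave select active whenever an address is decoded.
a_one_slave: assert property (@(posedge clk) disable iff (rst)
  decode_valid |-> $onehot(slave_sel));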
In addition, we should capture the handshake scheme between blocks. For instance, the following
handshake scheme is used. We want to:
• Ensure valid is asserted for 2 to 4 cycles.
• Ensure data is stable and known when valid is asserted.
• Ensure ack is asserted at the end of the data transfer.
SVA:
property valid_data_stable;
  @(posedge clk) disable iff (rst)
  valid |-> $stable(data) && !$isunknown(data);
endproperty
property valid_acknowledge;
  @(posedge clk) disable iff (rst)
  $rose(valid) |-> ( valid && !ack)[*1:3] ##1
                   ( valid &&  ack)       ##1
                   (!valid && !ack);
endproperty
OVL (Verilog):
assert_width
  #(.min_cks(2), .max_cks(4))
  valid_asserted
  (.clk(clk), .reset_n(rst_n), .test_expr(valid));
assert_never_unknown
  #(.width(32))
  valid_data_known
  (.clk(clk), .reset_n(rst_n),
   .qualifier(valid), .test_expr(data));
assert_win_unchange
  #(.width(32))
  valid_data_stable
  (.clk(clk), .reset_n(rst_n), .start_event(valid),
   .test_expr(data), .end_event(ack));
assert_handshake
  #(.min_ack_cycle(1), .max_ack_cycle(3),
    .req_drop(1), .max_ack_length(1), .deassert_count(1))
  valid_ack_handshake
  (.clk(clk), .reset_n(rst_n), .req(valid), .ack(ack));
0-In Checkers:
// 0in assert_timer -var valid -min 2 -max 4
// 0in known -var data -active valid
// 0in constant -var data -active valid
/* 0in req_ack -req valid -ack ack -req_until_ack
   -min 1 -max 3 -max_ack 1
   -ack_assert_to_req_deassert_max 1 */
A bus bridge in a multi-core design converted transactions from the processor bus (MBUS) to the
AHB bus and vice-versa. The AHB interface of the bus bridge was instrumented with the AHB master
assertion monitor, and the processor interface was instrumented with a set of PSL properties. Constrained
random simulation regression was done first, and simulation coverage was close to complete.
Dynamic formal verification was applied to boost confidence and to close the gap; it leveraged the
simulation stimulus and exercised corner-case behaviors. It was found that when the bus bridge
received a retry response from the AHB slave while the processor was handling an interrupt request,
the retry was not communicated: the bus bridge neither retried the transaction until it completed nor
issued an error to the processor. The processor would assume the transaction had completed
successfully when, in fact, it had been lost by the bus bridge. The project team agreed that this
scenario would never have been exercised by the simulation environment.
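An assertion along the following lines would have flagged the lost retry directly. A hedged sketch; the signal names and the 16-cycle response window are assumptions, not the project's actual code:
// A RETRY response from the AHB slave must be followed, within a
// bounded window, by either a retried transfer or an error
// response on the processor (MBUS) side.
a_retry_forwarded: assert property (@(posedge clk) disable iff (rst)
  ahb_retry_resp |-> ##[1:16] (mbus_retry || mbus_error));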
This was a low-power design in which a subsystem ran at a lower frequency. Data was
transferred from the slow clock (subsystem) domain to the fast clock (processor) domain. The clocks
were ratio-synchronous and dynamically controlled by a configuration register. OVL and SVA assertions
were used to instrument the design, ensuring that the data valid signal was generated at the right
time and that the data was stable when sampled. In simulation, the data valid signal and the data were
synchronized, and the data was sampled by the fast clock domain at exactly the right time. However,
formal verification identified a few corner cases in which the data valid signal was asserted at the
wrong time. As a result, unstable data was sampled. These problematic scenarios occurred only rarely,
and even when they did, the corrupted data would not have been detected for a
long time, making the targeted products unstable in the field.
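The stability requirement can be asserted in the receiving (fast) domain. A minimal sketch, assuming hypothetical fast_clk, data_valid, and bus_data names:
// While data_valid is held, the data bus must be known and must
// not change between consecutive fast-clock samples.
a_cdc_stable: assert property (@(posedge fast_clk) disable iff (rst)
  data_valid |-> !$isunknown(bus_data) &&
                 (!$past(data_valid) || $stable(bus_data)));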
SVA:
property seq_header;
  @(posedge clk) disable iff (rst)
  $rose(load_enable) ##[1:1] $rose(load_done) |->
    ##[1:$] $rose(hdr_ready);
endproperty
property seq_decode;
  @(posedge clk) disable iff (rst)
  $rose(load_enable) ##[1:1] $rose(load_done)
    ##[1:$] $rose(hdr_ready) |->
    ##[1:$] $rose(ctrl_enable || data_ld);
endproperty
property seq_store;
  @(posedge clk) disable iff (rst)
  $rose(data_ld) |-> ##[1:$] $rose(mem_cs)
    ##[1:$] $rose(return);
endproperty
OVL (Verilog):
assert_frame
  #(.min_cks(1), .max_cks(1),
    .action_on_new_start(`OVL_ERROR_ON_NEW_START))
  seq_load
  (.clk(clk), .reset_n(rst_n),
   .start_event(load_enable), .test_expr(load_done));
assert_frame
  #(.min_cks(1),
    .action_on_new_start(`OVL_ERROR_ON_NEW_START))
  seq_header
  (.clk(clk), .reset_n(rst_n),
   .start_event(load_done), .test_expr(hdr_ready));
assert_frame
  #(.min_cks(1),
    .action_on_new_start(`OVL_ERROR_ON_NEW_START))
  seq_decode
  (.clk(clk), .reset_n(rst_n),
   .start_event(hdr_ready), .test_expr(ctrl_enable || data_ld));
assert_frame
  #(.min_cks(1),
    .action_on_new_start(`OVL_ERROR_ON_NEW_START))
  seq_store
  (.clk(clk), .reset_n(rst_n),
   .start_event(mem_cs), .test_expr(return));
0-In Checkers:
/* 0in assert_sequence -min 1 -max 1 -var
   $0in_rising_edge(load_enable)
   $0in_rising_edge(load_done) */
/* 0in assert_sequence -min 1 -var
   $0in_rising_edge(load_done)
   $0in_rising_edge(hdr_ready)
   $0in_rising_edge(ctrl_enable || data_ld) */
/* 0in assert_sequence -min 1 -var
   $0in_rising_edge(data_ld)
   $0in_rising_edge(mem_cs)
   $0in_rising_edge(return) */
Coverage Strategy
During simulation, assertions monitor activity inside the design and provide information showing
how thoroughly the test environment covers the design's functionality. By capturing the properties of
an FSM with assertions, we can identify its behaviors easily, exercise them completely, and collect cross-
product coverage information during the simulation process. In addition, the implementation style of
the FSM has a direct impact on the effectiveness of verification [5].
When several FSMs are interacting with each other, it is important to ensure that they do not get
“trapped” in a corner-case behavior. Formal verification can be applied to check these situations.
However, formal model checking is not efficient with complex properties that are unbounded. Hence,
the high-level timing requirements of an FSM may have to be decomposed and captured within
several assertions.
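Cross-product coverage of interacting FSMs can be collected with a covergroup. A minimal sketch, assuming two hypothetical state registers tx_state and rx_state of enumerated types:
// Cross coverage of two interacting FSMs; a hole in the cross
// indicates a state combination the tests never reached.
covergroup cg_fsm_cross @(posedge clk);
  cp_tx:   coverpoint tx_state;
  cp_rx:   coverpoint rx_state;
  x_tx_rx: cross cp_tx, cp_rx;
endgroup
cg_fsm_cross cov = new();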
This was a DMA controller in a multi-layer AMBA SoC design. It is not easy to verify a DMA
controller exhaustively. DMA channels are allocated/de-allocated dynamically, and data transfers are
interleaved among channels. In this case, formal verification found a rare situation where data in
memory was corrupted. The problem occurred when more than one channel finished its transfer at
the same time. During the de-allocation process, one channel’s address pointer was de-allocated twice
and the other channel’s pointer was not de-allocated at all. As a result, when the same channel was
allocated again, corrupted data was transferred. DMA channels should never be allocated together
because their access is controlled by an arbiter. However, multiple channels can finish data transfers at
the same time.
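A per-channel assertion would have caught the double de-allocation at its source. A hedged sketch, assuming NUM_CH channels with hypothetical ch_dealloc and ch_allocated flags:
// A channel may only be de-allocated while it is allocated.
generate
  for (genvar i = 0; i < NUM_CH; i++) begin : g_chk
    a_no_double_dealloc: assert property
      (@(posedge clk) disable iff (rst)
       ch_dealloc[i] |-> ch_allocated[i]);
  end
endgenerate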
BUG: Unexpected reset of memory discard counter.
property addr_value_check;
  @(posedge clk) disable iff (rst)
  (STRANS[1] == 1'b1) |-> (addr_memory[addr_ptr] == SADDR);
endproperty
property data_value_check;
  @(posedge clk) disable iff (rst)
  SREADY |-> (data_memory[data_ptr] == SWDATA);
endproperty
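These properties assume auxiliary modeling code that records master-side transfers for later comparison; it is not shown here. A sketch of the address side, with names taken from the 0-In directives below (the data side is analogous):
// Record each master-side (I*) address; compare against the
// slave-side (S*) address in FIFO order.
logic [31:0] addr_memory [0:3];
logic [1:0]  addr_wr, addr_ptr;
always_ff @(posedge clk)
  if (rst) begin
    addr_wr  <= '0;
    addr_ptr <= '0;
  end else begin
    if (ITRANS[1]) begin          // master issues a transfer
      addr_memory[addr_wr] <= IADDR;
      addr_wr <= addr_wr + 2'd1;
    end
    if (STRANS[1])                // slave consumes the oldest entry
      addr_ptr <= addr_ptr + 2'd1;
  end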
OVL (Verilog):
ovl_fifo
  #(.depth(4), .width(32),
    .full_check(1), .empty_check(1), .value_check(1))
  data_transfer_check
  (.clk(clk), .reset(rst), .enable(1'b1),
   .enq(IREADY), .enq_data(IRDATA),
   .deq(SREADY), .deq_data(SWDATA));
0-In Checkers:
/* 0in fifo -depth 4
   -enq (ITRANS[1] == 1'b1) -enq_data IADDR
   -deq (STRANS[1] == 1'b1) -deq_data SADDR */
/* 0in fifo -depth 4
   -enq (IREADY) -enq_data IRDATA
   -deq (SREADY) -deq_data SWDATA */
Coverage Strategy
End-to-end, simulation-based verification methodologies transfer many data packets, so many of
the data integrity assertions are thoroughly exercised. However, simulation typically fails to stress-test
storage elements by filling up the FIFOs and memories, so pseudo-random simulation environments should
be guided by coverage information from the assertions. This can be done manually (by fine-tuning the
random parameters) or automatically (using a reactive testbench that monitors the assertions).
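Cover directives make the fill level visible so the random parameters can be tuned. A minimal sketch, assuming hypothetical fifo_full and fifo_empty flags:
// Did the environment ever fill, or completely drain, the FIFO?
c_fifo_full:  cover property (@(posedge clk) disable iff (rst) fifo_full);
c_fifo_empty: cover property (@(posedge clk) disable iff (rst) fifo_empty);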
To compound the issue, multiple queues and data streams might transfer and retrieve data at
unpredictable intervals. Simulation alone cannot verify all of the corner-case scenarios—there are
simply too many of them. So formal model checking is used to target the remaining data integrity
assertions. It can explore all legal combinations of transfer events to ensure the data is never corrupted.
When we examine the bugs found with this methodology, they are all related to complex combinations
of events initiated by different data transfers. An example is described in the next section.
This was a networking design with channels between the PHY and the LINK interfaces. Checkers
from the OVL and QVL assertion libraries (ovl_fifo, qvl_multi_clock_fifo, and
qvl_multi_clock_multi_enq_deq_fifo) were used to instrument all the FIFOs in the design. The FIFOs
were deep enough to hold data and control responses. Besides checking for underflow and
overflow conditions, the FIFO checkers were used to ensure the integrity of the data passing through
the channels. Using formal verification, a FIFO overflow issue was found. The run was started from a
simulation state in which one particular FIFO was close to full. After a complex series of STOP and
SEND commands generated by formal verification, the FIFO became full; with another ingress packet,
the FIFO would overflow. When this issue was reviewed with the project team, they realized that it
would not be uncommon with live traffic. They felt lucky that it was found before tape out.
BUG: Packet lost when data streams were multiplexed.
Verification Results
We have been able to help dozens of project teams deploy ABV successfully by identifying and
focusing on the verification hot spots in their designs. The designs represent a wide range of
applications, including processor chip-sets, I/O hubs, gigabit Ethernet controllers, networking switches,
storage network controllers, and several ARM-based SoC platforms for mobile devices and consumer
electronics. The challenges presented by each of the four verification hot spots were experienced by all
the design teams. Table 2 summarizes the verification hot spots from two of these designs.
Table 2. Verification hot spots: HyperTransport I/O Hub [6] and ARM-based SoC Platform [20].
ABV is a relatively new verification methodology, so by the time most project teams deployed
it, they had already performed a significant amount of functional simulation. Hence, they did not
experience one direct benefit of the ABV method: finding bugs early in the design cycle,
concurrent with functional simulation. Even so, by concentrating on the hot spots of their designs, the teams
found difficult bugs. These results boosted the teams' confidence that tape outs would be successful.
Table 3 gives a summary of bugs found at the verification hot spots of the two designs.
It is not surprising that most of the bugs were found at the design interfaces. For the bus interfaces,
standard bus and memory interface monitors are already available. Their protocol rules have been
captured as assertions, so the effort required to use the monitors is minimal. All of the teams employed protocol
monitors to verify the standard interfaces and incorporated them into their simulation environments.
A few design teams also leveraged the monitors as interface constraints for formal verification. In
these cases, model checking analyzes only the legal transactions.
We also analyzed how project teams added assertions. Typically, verification engineers added
assertions capturing the test plan criteria. Design engineers added assertions for verifying the
implementation structures.
Besides using assertions with formal verification, we have also seen "bug triage" time improve
significantly, especially with constrained random simulations. In a constrained random simulation,
once a bug has been triggered, many cycles can pass before it propagates and the simulation fails,
if indeed the simulation fails at all. Assertions simplify identification of the root causes of problems
because they catch bugs at their sources.
Most of the design teams spent significant effort on the resource control logic and data integrity hot
spots. Assertions for resource control logic were easier to capture, and checkers from the assertion libraries
were used extensively. Since most simulation environments did not stress-test the resource control
logic sufficiently, formal verification was ideal for this task.
On the other hand, the assertions required to capture data integrity properties were complex, for example,
when data was repackaged (such as when data goes from a 32-bit external interface to a 128-bit internal
interface) or when packets were dropped intentionally. However, once these complex assertions were in
place, they could be verified with various methodologies.
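As an illustration of the repackaging case, a hedged sketch of a 32-to-128-bit upsizer check. All names (in_valid, in_data, out_valid, out_data) are hypothetical, and the wide side is assumed to present its beat only after four input words have been collected:
// Accumulate four 32-bit input words into the expected 128-bit
// beat, then compare when the wide side presents its data.
logic [127:0] expected;
logic [1:0]   widx;
always_ff @(posedge clk)
  if (rst) widx <= '0;
  else if (in_valid) begin
    expected[32*widx +: 32] <= in_data;
    widx <= widx + 2'd1;
  end
a_repack: assert property (@(posedge clk) disable iff (rst)
  out_valid |-> (out_data == expected));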
Although the resource control logic and data integrity hot spots can be difficult to verify, they represented an
essential part of the test plans, so the effort was well spent. Importantly, the bugs found at these hot spots
were both critical and obscure; none would have been found with traditional verification methodologies.
Conclusions
In this paper, we discussed four prominent verification hot spots that can be effectively addressed
using assertion-based verification methods, including a seven-step formal verification planning
process. They represent hard-to-verify structures for which traditional simulation-based verification is
not effective. By concentrating on these hot spots, project teams can benefit greatly from the ABV
methodology. The objective is to find bugs missed by traditional verification methodologies.
We emphasize that these are not the only hot spots for verification in a design. Teams should conduct
internal design audits and identify the relevant hot spots in their designs. Design teams can leverage
the knowledge developed on the four hot spots described herein to tackle the design specific cases of
their projects.
As we have experienced, strategic deployment of the ABV methodology on hot spots has proven
effective. This approach applies scarce verification resources to areas that need them most. It improves
verification efficiency and boosts the overall quality of the design.
References
1. Scott Taylor, Michael Quinn, Darren Brown, Nathan Dohm, Scot Hildebrandt, James Huggins, Carl Ramey,
“Functional Verification of a Multiple-issue, Out-of-Order, Superscalar Alpha Processor”, DEC, Design
Automation Conference 1998.
2. Michael Kantrowitz, Lisa M. Noack, “I’m Done Simulating; Now What? Verification Coverage Analysis
and Correctness Checking of the DECchip 21164 Alpha Microprocessor”, DEC, Design Automation
Conference 1996.
3. Carey Kloss, Dean Chao, “Coverage based DV from Testplan to Tape out Using Random Generation and
RTL Assertions”, Cisco Systems, DVCon 2004.
4. Namdo Kim, Byeong Min, “An Efficient Reactive Testbench with Bug Identification Structure”, Samsung
Electronics, DVCon 2004.
5. Dan Joyce, Ray Harlan, Ramon Enriquez, “Audit Your Design to Assess and Even Reduce the Amount of
Random Testing Needed”, HP, DVCon 2003.
6. Frank Dresig, Alexander Krebs, Falk Tischer, “Assertions Enter the Verification Arena”, AMD, Chip
Design Magazine, Dec 2004.
7. Richard Ho, “Maximizing Synergies of Assertions and Coverage Points within a Coverage-Driven
Verification Methodology”, DesignCon 2005.
8. Harry Foster, Adam Krolnik, David Lacey, “Assertion-based Design”, Kluwer Academic Publishers, 2003.
9. IEEE Standard for Property Specification Language (PSL), IEEE Std 1850-2005.