Fredrik Wickberg
LiTH-ISY-EX--07/3924--SE
Linköping 2007
Abstract
The complex work of designing new ASICs and the increasing cost of
time-to-market (TTM) delays place high responsibility on the research
and development teams to produce fault-free designs. The main purpose of
introducing a static rule checking tool in the design flow is to find
errors and bugs in the hardware definition language (HDL) code as early as
possible. The sooner a bug is found in the design, the shorter the
turnaround time becomes, and thereby both time and money are saved.
There are a couple of tools on the market that perform static HDL analysis,
and they vary in both price and functionality. In this project mainly Atrenta
Spyglass was evaluated, but similar tools were also evaluated for comparison
purposes.
The purpose of this master thesis was to evaluate the need for introducing a
rule checking tool in the design flow at the Digital ASIC department, PDU Base
Station development, in Kista, which also was the commissioner of this project.
Based on the findings in this project it is recommended that a static rule
checking tool be introduced in the design flow at the ASIC department.
However, in order to determine which of the different tools to choose, the
following points should be considered:
If the tool is only going to be used for lint checks (elementary structure
and code checks) on RTL, then the implementation of Mentor's Design
Checker is advised.
The areas of checks believed to be of interest for Ericsson are
regular lint checks for RTL (naming, code and basic structure),
clock/reset tree propagation (netlist and RTL), constraints checks and functional DFT
checks (netlist and RTL).
Acknowledgements
I am very grateful to Ericsson AB for the opportunity to carry out
this master thesis at the Digital ASIC department, PDU Base Station
development, Kista. I would like to give a special thank you to the following
people.
Björn Fjellborg (Ericsson) - for great guidance during the entire project, which
helped me a lot in many areas and kept me from slipping out of focus.
Joakim Eriksson (Ericsson) - for guidance and teaching of project
procedures, design structures/methodologies, important issues, discussions
regarding results, possible improvements and so on.
Mark Vesterbacka (Linköpings Universitet) - examiner of this thesis.
Hans-Olov Eriksson (Ericsson) - as the section manager making this thesis
possible.
Jens Andersson (Ericsson) - for UNIX related issues and information
regarding test procedures, constraints usage, improvements etc.
Tume Römer (Ericsson) - for information regarding synthesis procedures and
improvements.
Jakob Brundin (Ericsson) - regarding Phoenix related information and design
structures/methodologies.
Andreas Magnusson (Ericsson) - for information regarding SystemVerilog.
Victor Folkebrant (Ericsson) - for information regarding design issues.
The ASIC department (Ericsson, Kista) - everybody at the department for
being very helpful and making this thesis possible.
Ericsson Mobile Platform in Lund (EMP) - everybody I have had contact with
for being very helpful.
Tom Carlstedt-Duke (Atrenta) - for the Spyglass workshop.
Shawn Honess (Synopsys) - for the Leda workshop.
Jonas Norlander (Mentor) - for the HDL-Designer workshop.
Magnus Jansson (Cadence) - for the HAL workshop.
Table of Figures
Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Figure 6
Figure 7
Figure 8
Figure 9
Figure 10
Figure 11
Figure 12
Figure 13
Figure 14
Figure 15
Figure 16
Figure 17
Figure 18
Figure 19
Figure 20
Figure 21
Figure 22
Figure 23
Table of Tables
Table 2 ...................................................................................... 8
Table 3 ...................................................................................... 14
Table 4 ...................................................................................... 35
Table of Equations
Equation 1
Equation 2
Table of Contents
1 INTRODUCTION ........................................................................................................................................................................1
1.1 BACKGROUND .........................................................................................................................................................................1
1.2 METHOD ..................................................................................................................................................................................2
1.3 PROJECT GOALS .......................................................................................................................................................................3
1.4 INTRODUCTION TO SPYGLASS .................................................................................................................................................4
4.5.1.1 Case 1.......................................................................................................................................................................... 37
4.5.1.2 Case 2.......................................................................................................................................................................... 37
4.5.1.3 Case 3.......................................................................................................................................................................... 37
4.5.1.4 Case 4.......................................................................................................................................................................... 37
4.5.1.5 Summary..................................................................................................................................................................... 37
4.5.2 Leda ...............................................................................................................................................................................39
4.5.2.1 Case 1.......................................................................................................................................................................... 39
4.5.2.2 Case 2.......................................................................................................................................................................... 39
4.5.2.3 Case 3.......................................................................................................................................................................... 40
4.5.2.4 Case 4.......................................................................................................................................................................... 40
4.5.2.5 Case 5.......................................................................................................................................................................... 40
4.5.2.6 Case 6.......................................................................................................................................................................... 40
4.5.2.7 Summary..................................................................................................................................................................... 40
5 RESULTS ....................................................................................................................................................................43
5.1 PART I ....................................................................................................................................................................43
5.1.1 Phoenix netlist version 8 compared to version 9 ........................................................................................................43
5.1.2 Hwacc netlist comparison ............................................................................................................................................45
5.1.2.1
5.1.2.2
5.2.2 DFT ................................................................................................................................................................53
5.2.3 CDC ...............................................................................................................................................................55
5.2.4 Synthesis readiness .......................................................................................................................................58
6 CONCLUSIONS.........................................................................................................................................................61
REFERENCES ...........................................................................................................................................................65
Abbreviations
BIST - Built-In Self-Test.
CDC - Clock Domain Crossing.
CPU - Central Processing Unit.
DDR - Double Data Rate.
DFT - Design For Test.
EMP - Ericsson Mobile Platform.
FF - Flip Flop.
FIFO - First In First Out.
FSM - Finite State Machine.
GUI - Graphical User Interface.
HW - Hardware.
HDL - Hardware Definition Language.
IP - Intellectual Property.
KI/UMB
LSSD - Level Sensitive Scan Design.
Lint - Elementary structure and code checks.
MUX - Multiplexer.
Policy - A set of similar rules in Spyglass.
RTL - Register Transfer Level.
SDC - Synopsys Design Constraints.
SGDC - Spyglass Design Constraints.
Anna - Pseudonym for the design used as test platform in this project.
Template - A set of rules in Spyglass, possibly taken from several policies.
TTM - Time To Market.
Table 1
Abbreviations.
Introduction
1.1
Background
Design complexity increases with the shrinking of the transistor geometry.
Now that technology reaches 65 nm and below, error correction and
debugging of complex designs can result in great losses of both time and
money. The time it takes to complete a design project is also a major issue.
One way of preventing code errors and speeding up the process is to use a
static HDL analysis tool that statically checks the source code for suspicious
behaviour. Static code analysis tools have been on the market for
programming languages like C and C++ for decades, but during the past few
years a number of tools in this area have been introduced for HDL languages
like VHDL and Verilog. One of these tools is Spyglass from Atrenta. This tool
was the main object of evaluation in this project, performed at the Digital
ASIC department, PDU Base Station development, in Kista (from now on
referred to as the ASIC department), but other similar tools were evaluated for
comparison purposes.
The netlist delivery process between the design teams at Ericsson and the
chip vendor is iterative. The process consists of a number of scheduled
deliveries. In each of these deliveries the netlist must have reached an agreed
degree of maturity in terms of implemented functionality and structure; such a
scheduled delivery is referred to as a drop. The problem is that the number of
undesired iterations has increased. These iterations (revisions of a drop) are
caused by errors that prevent the netlist from reaching the agreed maturity degree.
A consequence is that the engineers have to put more work and time into
correcting errors than before. Nowadays, when time to market and design
cost have to be reduced, the extra work generated by these extra
iterations is greatly undesired. One of the main objectives of the evaluation
was therefore to check whether the use of Spyglass on gate-level netlists can
reduce or eliminate the number of undesired iterations between Ericsson and
the chip vendor. An illustration of this is shown in Figure 1 (the dotted arrows
on the left symbolise the undesired iterations and would preferably be
replaced by the dotted arrows on the right).
Figure 1    Design flow between Ericsson and the chip vendor. The figure shows the Ericsson side (specification, RTL coding in VHDL/SystemVerilog, RTL verification against testbenches, synthesis with the cell library, gate-level netlist verification, floorplanning, rule checks, simulation, equivalence check and STA) leading to the netlist hand-off, and the chip vendor side (place & route, timing closure, prototype manufacturing).
Another objective was to see if the errors found at gate level could have been
prevented if Spyglass had been used on RTL, and hence cut off the iterations
completely. The types of rules that could be of interest in an eventual future
purchase of the tool were also investigated.
1.2
Method
Anna was the design used as test platform on which the evaluated tools were
applied (Anna is a pseudonym, since the actual name is classified).
Anna is a design in 65 nm technology and contains, among other things, several cores
(Phoenix), a Common Memory Controller (CMC), several hardware
accelerators (Hwacc), a block containing test/clock/reset/timer interfaces
(MISC) and external IO interfaces (eIO). The project was divided
into three parts.
In part I an evaluation of the use of Spyglass on netlists has been performed.
Complementary information regarding the used method is described in
chapter 2.1. In part II an evaluation of different packages of Spyglass
(containing different sets of rules) with the purpose of use on RTL code has
been performed. Complementary information regarding the used method is
described in the beginning of each chapter in part II. In part III a comparison
between Spyglass and similar tools has been performed. The results from the
evaluations in part I and part II are presented in chapter 5.
The cost and savings calculations are found in [Wickberg, 2007].
1.3
Project goals
In this thesis an evaluation of static HDL-code analysis tools has been
performed. The primary goal was to produce information extensive enough
for the decision makers to base their choice of tool upon. Problem aspects
both on gate level and on RTL have been considered. The primary goal was
formulated into the seven questions listed
below. The questions are discussed and answered in chapter 6.
1. Should Ericsson introduce a static HDL-code analysis tool in the design
flow? What resources and quality gains are to be expected from usage on
RTL? What resources and quality gains are to be expected from usage on
gate level?
2. Which rule packages are the most interesting for the ASIC department at
Ericsson?
3. How well would a static HDL-code analysis tool fit into the current design
flow?
4. How user-friendly is such a tool in terms of debugging and navigation?
How well is the result presented?
5. How easy is it to create your own rules?
6. How many licenses are expected to be required for a typical ASIC
project?
7. What gains are to be expected from the use of a static rule checking tool
on designs in 65 nm technology?
1.4
Introduction to Spyglass
Spyglass is a tool that performs a static and/or dynamic analysis of RTL code
or gate-level netlists with the purpose of finding structural, coding, consistency
and/or functional problems with the help of rules. A common
misunderstanding is that Spyglass is a regular static lint checking tool. Even
though this is one of the tool's main fields of application, much more can be
performed than just checking that e.g. a _n suffix has been used in the name of
an active-low reset signal. One of the more advanced
checks Spyglass can perform is analysing whether the data signal in a clock
domain crossing protocol is stable during a request. Spyglass has a number of
rules in different areas; a short summary is given below.
Code structure - for example, whether flip flops have resets, whether a reset is used both
synchronously and asynchronously, shift overflow etc.
Structural checks - for example, whether a flip flop has the specified reset,
combinational loops, long logic paths, redundant logic such as
y <= not (not X) etc.
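As a minimal illustration (a hypothetical VHDL fragment, not taken from the Anna design), the code below contains two constructs of the kinds mentioned above that lint and structural rules would typically flag: a redundant double inversion and a reset that is used both asynchronously and synchronously for the same register.

library ieee;
use ieee.std_logic_1164.all;

entity lint_example is
  port (clk, rst_n, x, d : in  std_logic;
        y, q             : out std_logic);
end entity;

architecture rtl of lint_example is
begin
  -- Redundant logic: the double inversion is equivalent to y <= x
  y <= not (not x);

  process (clk, rst_n)
  begin
    if rst_n = '0' then              -- asynchronous use of the reset
      q <= '0';
    elsif rising_edge(clk) then
      if rst_n = '0' then            -- synchronous use of the same reset
        q <= '0';
      else
        q <= d;
      end if;
    end if;
  end process;
end architecture;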
The rules in the first two points above are often referred to as lint rules. To
analyse these types of rules Spyglass only investigates the source code and
no synthesis is needed. This is also the case for checking if a flip flop has a
reset. But if a rule checks whether this reset is the one specified in the
Spyglass Design Constraints (SGDC) file (explained later), then Spyglass
must synthesise the design. The rules that are run depend on the templates
that are selected. A template in Spyglass is a set of rules. The user can use
the templates that are shipped with the tool or put together a new one
containing already existing or custom made rules. Putting together a custom
template is done by browsing and selecting rules in the policies. A policy is a
set of similar rules and differs from a template in that a
template can contain rules from many different policies. A visualisation of this
is shown in Figure 2.
Figure 2
Spyglass can either run in batch mode or interactively using the GUI (shown
in Figure 3). A typical run is done in batch mode. The reports Spyglass
generates are then used to correct the errors. If there are problems that
require more information than the reports can offer, the GUI is brought up. In
the GUI the user can open the schematic view (the AND gate symbol
button) or the incremental schematic (the IS button). The difference between
these two is that the incremental schematic shows the user structures
correlated to the selected violation, while the schematic view shows the entire
block. When structural and functional rules are run Spyglass must be supplied
with an SGDC (SpyGlass Design Constraints) file. This file contains
information regarding clocks, resets, test mode signals, additional Synopsys
Design Constraints (SDC) files etc. An example is the specification of the
master clock. In order to check clock tree propagation Spyglass needs
information regarding where to start the checking. Below is a typical clock
specification in the SGDC file:
clock -name I_REF_CLK -domain domain1 -value rtz -sysclock
The line provides Spyglass with information regarding clock source, domain (if
there are several clock domains), if the clock returns to zero or one and if the
specified clock is a system clock or a test clock. The information the SGDC
file must contain depends on the types of checks that are to be run, but
typically clock and reset information are mandatory for most checks. For more
information regarding Spyglass in general or how to define signals in the
SGDC file, the reader is referred to [Atrenta 2, 2006].
Figure 3
Spyglass GUI.
Even though Spyglass's main contribution is when run on RTL, the tool has
functionality to check netlists as well. Different types of rules are often run at
different stages in the design flow. Many of the DFT checks are meant to be
run at gate level, because the scan chain is typically implemented after
synthesis. There is therefore no point in checking all the DFT rules at an early
RTL stage; instead a subset containing checks for clock and reset propagation
can be used. There is also no point in checking rules regarding internal
company naming conventions and code structure at the gate level. Here it is
more useful to check the naming conventions required by the chip vendor, to
verify that the synthesis tool has not named IPs and similar elements
according to a forbidden style. An example of how Spyglass can be used in
evaluated are 3.8.0.3 for the netlist level checks in part I and 3.8.1 for the rest
of the evaluation.
Figure 4    Example of how the usage of Spyglass in the design flow could be: fast RTL runs with naming checks, sensitivity list checks and FSM structure checks early on, followed by fast RTL runs with CDC checks, light DFT checks and pre-synthesis checks.
Starting Spyglass is done with the spyglass command. There are a number
of switches to be used depending on the type of evaluation the user desires.
Table 2 shows the switches and commands that were used during this
evaluation. Below is a typical start up command.
spyglass -verilog -policy=stdc -template stdc/RTL-Signoff
-f <PATH>sglib_list <PATH>phoenix.v -sgdc
constraints.sgdc
Command / switch    Description
spyglass    Invokes Spyglass.
-f <file name>
-64bit
-batch
-stop <file/entity/module name>
-policy=<policy name> or -f <custom template path>
-mtresh <number>
-lvpr <number>
Table 2
2.1
Method
The method of analysing possible causes of undesired netlist iterations (from
now on referred to as undesired iterations) is explained in the following
section. The designs used for this part of the evaluation were Hwacc, Phoenix
and Anna. It was not possible to run Spyglass on all netlist versions and then
examine all the violations from all the rules. Therefore a heuristic evaluation
method was formed.
A set of desired rules was defined. As previously mentioned, a set of rules in
the Spyglass world is called a template. The template used during this part of
the evaluation originated from the ASIC vendor's add-on for Spyglass. The
violations found during the evaluation had to reflect the errors the ASIC
vendor found when they ran Spyglass or a similar tool in their flow. This made
it possible to estimate the engineering hours saved if one or more violations
were recognised as the cause of the undesired iteration. Spyglass was used
on different revisions of the netlist in a common drop. A typical netlist flow
between Ericsson and the chip vendor is shown in Figure 5. How an
undesired iteration was recognised and selected to be evaluated is described
in Appendix A.
Figure 5    A typical netlist flow between Ericsson and the chip vendor over time, from project start through netlist versions and drops (e.g. 2nd drop, 1st revision), with undesired iterations causing TTM loss.
The method used to identify causes of the undesired iterations was more or
less heuristic in the sense that it did not guarantee an optimal result.
The problem was that there were too many violations to examine from a run
and the work of exhaustively going through them all would have been
impossible. The method model that was developed and used in this part of
the evaluation is described in the following section.
When running checks, Spyglass generates a couple of reports. Which reports
are generated varies depending on which rules were selected for that
particular run. In order to make the violation comparison, the default report
named Count was viewed and considered. This report lists all the rules that
generated errors, warnings and information during the run. For each listed
rule the report also presents the number of violations it had. The two reports
from the consecutive runs were copied together into an Excel spreadsheet
and compared. If the number of violations a rule had generated differed, it
was taken into account for further examination. Exceptions to this procedure
were made for rules considered to be extra important. They were examined
even if they had generated the same number of violations. In that case
the violations were traced further through the netlist versions. Figure 6 shows a
visualisation of the selection process.
Figure 6
The Spyglass GUI was used to examine the selected violations. Violations
obviously not being the cause of the undesired iteration were disregarded.
The remaining violations were taken into further account and discussed with a
designer. If a violation, in agreement with a designer, had a high probability of
being the cause of an undesired iteration, unnecessary design time had been
found. The time was then added to the final equation, used to compare the cost
of the tool against its earnings (explained further later). The procedure described
was applied to a selected set of different drops on both block and chip level.
When all the selected drops had been evaluated (see Appendix A for the drop
selection procedure), the cost of the extra work produced by the undesired
iterations was calculated by multiplying the estimated time it took to
correct the identified bug (Eh) with the cost of one engineering hour (Ec).
To this, the lead time delay (Ch) multiplied by the lead time cost (Cc) was
added. The Ch x Cc factor was very hard to estimate and varies a lot
between projects. The result of the addition forms the left side of Equation
1. On the right side the cost of introducing Spyglass in the design flow was
calculated. The number of required licenses (Li) was multiplied by the yearly
cost of a license (Lc). The initial cost of introducing Spyglass in the design
flow (Ic) was then added; it contains factors like the estimated cost of
changing the flow and the time cost of educating R&D people in the use of
Spyglass. The sum was divided by the number of projects during a year (P).
The use of Spyglass in the flow will also consume extra engineering hours.
However, since one of the main reasons for using Spyglass in the flow is to
save development time rather than lengthen it (finding errors quicker, easier
debugging etc.), these two factors are assumed in this part to cancel each
other out.
\[
\Big(\sum_{\mathrm{drops}} E_h\Big)\cdot E_c \;+\; \Big(\sum_{\mathrm{drops}} C_h\Big)\cdot C_c \;>\; \frac{L_i \cdot L_c + I_c}{P}
\]
Equation 1
2.2
Figure 7
The libraries are usually compiled into a common directory in the working
area. For this evaluation a directory called sglibs/ was created, into which all
the necessary libraries were compiled. The compilation was performed by the
command spyglass_lc -gateslib <library file path>. When
Spyglass was run, pointers to the compiled libraries had to be
provided in the start-up command; the switch -sglib <spyglass
library path> did this. Typically, when there were a lot of libraries, all the
pointers could be collected in a file. The -f switch was then used to
include the file at start-up.
When correcting errors using the Spyglass GUI, the user browses through the
violations in the Msg Tree tab. By default, the messages were sorted by the
severity levels INFO, WARNING, ERROR and FATAL. A more
preferred order was to sort by policy. This was done by selecting Window ->
Preferences -> Misc and then selecting Group by policy under Tree
Selection. The user is also able to specify a preferred editor. This was done
in the same tab as for the sort order mentioned above but under Specify
Editor program. To bring up the editor when correcting and analysing the
results, the user simply highlights a row in the code window and presses the
e button. This starts the editor, and a cross reference makes the editor
automatically show the part of the file containing the selected row.
2.3
Set one, various rules for poorly implemented design (contains the RTL-Signoff template).
2.4
The greatest need for a large quantity of licenses will be when RTL block
delivery takes place before a scheduled netlist delivery. Ericsson Mobile
Platform (EMP) made the same assumption.
Number of designers    Licenses needed
10    1.25
15    1.875
20    2.5
25    3.125
Table 3
Number of licenses needed, based on Spyglass runtime,
calculated for design teams containing 10, 15, 20 and 25 designers.
2.5
Rules
As mentioned in chapter 2.1 the template provided from the ASIC vendor was
selected to be used in this part of the evaluation. This template contains all
the rules in the stdc-policy. These are naming conventions, RTL coding rules
and design rules. The motive behind the decision to use the RTL-Signoff
template is explained in the last section of chapter 2.3.
2.5.1
BB_DES042 Avoid constant values on the data input pin on flip flops.
There is no point in setting a data pin of a D-flip flop to a constant value.
The only thing this will result in is extra silicon and power consumption.
The possibility of an unintended construct is high.
BB_DES063 Preferably use clock signals active on the rising edge only.
The most common practice is to have the entire design trigger on the same
active edge of the clock signal. This makes both verification and meeting
the timing constraints easier. High risk of unintended construct.
There are design exceptions, such as a Double Data Rate (DDR)
mechanism.
2.6
BB_DES070 Do not use D flip flops nor latches with both set and
reset pins connected.
A D flip flop has either a reset or a set, not both like a Set/Reset (SR) flip
flop. If both set and reset are connected, different tools in the flow might have
different priority levels for the signals and hence make the design work in
an unintended way.
Spyglass has not been used in the way it should be. The optimal way of
use is throughout the entire design flow, with a set of rules defined in
advance. In this way faults and errors are not allowed to propagate
through the design. Otherwise a single error at a low level might generate
thousands of violations at a higher level. A simple example is a signal naming
rule. If this rule is not applied until chip signoff and is violated, all the signals
will be reported and the work of renaming them all would be massive. If it had
been applied from the beginning only a few violations would have been
reported and the work of correcting these would be effortless.
The memory usage was an issue as well, and the reason for this behaviour
was strongly connected to the reasons why Spyglass had such long run
times. At some points the memory on a regular 32-bit processor was not
enough and the run crashed with the message:
ERROR [100]
It was mainly Anna that suffered from the large memory usage and it was run
on a 64-bit CPU. A strange behaviour was that a 32-bit CPU could be used
when running the actual evaluation of the Phoenix block in batch mode, but
when bringing up the GUI for violation navigation and examination the 64-bit
machine had to be used. As mentioned above, the reasons for the immense
memory usage were the same as the reasons for the long run times.
The number of violations and the type of rules selected for the run had a great
impact on how large the spyglass.vdb file (the database file) became. But
even though only one rule was selected for the evaluation on Anna, which
only generated 320 violations, the 64-bit CPU had to be used. It was therefore
legitimate to assume that Spyglass demands more than 2^32 bytes = 4.29 GB of
memory for a synthesis of a 3 million gate design. This might sound alarming,
but one must keep in mind that most of the runs are to be executed on block-level
RTL code (RTL code checks run faster). When checking the netlist,
theoretically only one evaluation at a time will be run.
3.1
Constraints checks
Constraints are essential when developing a new design. The constraints setup
defines the environmental, clock, test and power boundaries the designers
must keep within when designing an ASIC. The constraints setup
is often captured in a Synopsys Design Constraints (SDC) file. For
more information regarding SDC, the reader is referred to [Synopsys 2, 2006].
Without constraints the design can go haywire, and when implemented on a
chip it will not have the desired functionality nor behave as expected. In order
to implement the functionality when a new ASIC is under development, the
designers review the design specification. In order to develop a design that
will make sense in reality, it is at least as important to have a complete
constraints setup and to verify the design against it during the development
phase. This process is shown in Figure 8.
Figure 8    A new design is developed from both the design specification and the design constraints.
Figure 9
Method
Spyglass's ability to check constraint coverage and whether the
constraints are followed in the design was evaluated. The evaluation was
done on the RTL code of the CMC block. The reason for this choice was that
the block in question had a large number of associated constraints in the
Anna SDC file. The constraints belonging to the CMC block were extracted
from the original AnnaChip.ref.sdc and interpreted to map to RTL instead of
netlist. The path to the new SDC file was added in the SGDC file with the
following command:
mapped_pin_map -clock CP -enable EN E -data D -out Q QN
sdcschema -file cmc_tmp3.sdc -level rtl -mode fn1
Figure 10 If the output delay constraint from block B plus the input delay constraint
from block C is longer than the clock period it could lead to a severe error.
This error would then cause unwanted iterations late in the design phase
costing time and resources.
In order to check this functionality a smaller example design was used. The
design is a bubble sort accelerator.
The bubble sort design was constructed of three blocks. An SDC
file was written for each of the blocks, and these were linked together in the SGDC
file with the following commands.
current_design ARRAY_BB_SORT_STRUCTURAL
mapped_pin_map -clock CLK -enable EN E -data D -out Q QN
sdcschema -file bb_sort_structural.sdc -level rtl -mode fn1
block -name REG_STD_LOGIC_VECTOR FSM
current_design REG_STD_LOGIC_VECTOR
mapped_pin_map -clock CLK -enable EN E -data D -out Q QN
sdcschema -file bb_sort_reg.sdc -level rtl -mode
current_design FSM
mapped_pin_map -clock CLK -enable EN E -data D -out Q QN
sdcschema -file bb_sort_fsm.sdc -level rtl -mode fn1
The templates used when running Spyglass were the same as for the CMC
block evaluation.
3.2
DFT checks
The size and complexity of new architectures are growing fast, which in turn
makes the testing procedures harder. It is therefore important for the
designers to follow the Design For Test (DFT) technique. DFT can be
described in short as making the validation process easier and simpler by
taking testing into consideration from the beginning of the design process. To
achieve a design that is easy to test, the two terms controllability and
observability are used. Controllability refers to how easily internal nodes can
be driven to desired values from the inputs, and observability to how easily
the values of internal nodes can be observed at the outputs.
One of the approaches under the DFT technique is scan-based testing. This
means that flip flops are directly connected with each other in a long chain,
from chip input to output. The test personnel are then able to scan desired
values into the design using predefined test vectors. The vectors put the
design into desired states, which are excited, whereupon the new states are
scanned out and read at the outputs. The work of implementing an effective
scan chain puts great demands on the design regarding the observability and
controllability properties previously mentioned. Spyglass's capability of
checking these properties early in the design phase was interesting and
therefore evaluated. For more information regarding DFT the reader is
referred to [Rabaey, Chandrakasan and Nikolic, 2003] page 721.
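As a minimal sketch (hypothetical VHDL, not taken from the Anna design), a scan-insertable register can be pictured as an ordinary flip flop with a multiplexer on its data input: when the scan enable signal is asserted the flip flop takes its value from the scan input instead of the functional data, which is what allows the flip flops to be stitched together into a chain.

library ieee;
use ieee.std_logic_1164.all;

entity scan_ff is
  port (clk, rst_n  : in  std_logic;
        d, scan_in  : in  std_logic;
        scan_enable : in  std_logic;
        q           : out std_logic);
end entity;

architecture rtl of scan_ff is
begin
  process (clk, rst_n)
  begin
    if rst_n = '0' then
      q <= '0';
    elsif rising_edge(clk) then
      if scan_enable = '1' then
        q <= scan_in;   -- shift mode: value comes from the previous flip flop in the chain
      else
        q <= d;         -- functional mode
      end if;
    end if;
  end process;
end architecture;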
3.2.1
Method
The Anna Core was chosen for the DFT evaluation. See Figure 11 for a
hierarchy overview. The reason for not including the entire design was that
the ASIC vendor had not yet implemented parts of the test logic in the Anna Chip.
Therefore, some test signals were connected to ground at the Anna Chip level
awaiting further implementation. The following signals were, referring to the
Anna test documentation [Test, 2006], defined in the SGDC file in order to
satisfy the scan ready checks.
tst_scanenable and ScTest were set to 1, which puts the design in scan
mode.
In order to check the observability and controllability of the BIST logic the
following signals were also stimulated:
BIST_IN [1] (RBACT) and BIST_IN [5] (BYPASS) were set to 1. This
enables ram bypass and test mode.
BIST_IN [10] (CLK_BMP) defined a test clock to verify that the BITMAP
logic had a controllable clock as input.
The template Scan_ready was used, which contains rules to check for how
well prepared the design was for Scan chain insertion. The three properties
Spyglass checks for are given below.
All resets and sets should be disabled in test mode. To check this property
the Scan_ready template uses the rule Async_07 All internally
generated asynchronous sources should be inactive in scan mode.
tmc
3.3
CDC checks
As new hardware designs are getting larger, clock path length becomes
critical. In order to shorten the clock paths, and thus avoid clock skews that are
too large to be handled, a commonly used design method is Globally
Asynchronous Locally Synchronous (GALS). The main idea of this method
is to design small blocks with individual clocks. The blocks are then
connected with each other through an asynchronous handshaking
mechanism or a FIFO (among others). On each of the communication paths
between the individual synchronous blocks there will be one or more Clock
Domain Crossings (CDC). A CDC can be tricky to handle and the
consequences of a badly constructed CDC can be severe. For more
information about GALS the reader is referred to [Myers, 2001]. An
alternative reason for the use of several clock domains is differing clock
frequency demands from the CPU subsystem and the various I/Os (as there are in
Anna).
The structures used for synchronising data at a CDC are many and vary a
lot in complexity. A simple example of a poorly implemented data
synchronisation structure is the absence of one or several metastability flip
flops. The structural idea of using metastability flip flops can be viewed in
Figure 12. The metastable behaviour is a question of probability. The
probability of sampling metastable data once is small but not negligible,
while the probability of sampling metastable data twice in a row is orders of
magnitude smaller and hence considered to be insignificant. More about CDC
problems can be found in [Cadence 1, 2006].
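One common way to quantify this (a standard synchroniser reliability approximation, not taken from the thesis sources) is the mean time between failures of the synchroniser:

\[
\mathrm{MTBF} \approx \frac{e^{t_r/\tau}}{T_0 \cdot f_{clk} \cdot f_{data}}
\]

where \(t_r\) is the time allowed for a metastable value to resolve (roughly one extra receiving-clock period per added flip flop), \(\tau\) and \(T_0\) are technology constants of the flip flop, \(f_{clk}\) is the receiving clock frequency and \(f_{data}\) is the toggle rate of the crossing signal. The exponential dependence on \(t_r\) is why adding a second flip flop reduces the failure probability so dramatically.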
Figure 12    Synchronisation of a signal crossing from clock domain 1 (clk_1) to clock domain 2 (clk_2) using two metastability flip flops (signals D1, D12 and D2 in the figure).
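A minimal sketch of the double flip flop structure (hypothetical VHDL, assuming a single-bit signal generated in the clk_1 domain and sampled in the clk_2 domain) is given below; the first register may go metastable when sampling the asynchronous input, and the second register gives it a full clock period to settle before the value is used.

library ieee;
use ieee.std_logic_1164.all;

entity sync_2ff is
  port (clk_2   : in  std_logic;   -- receiving clock domain
        d_async : in  std_logic;   -- signal generated in the clk_1 domain
        d_sync  : out std_logic);  -- synchronised signal, safe to use in the clk_2 domain
end entity;

architecture rtl of sync_2ff is
  signal meta : std_logic;         -- first stage, may go metastable
begin
  process (clk_2)
  begin
    if rising_edge(clk_2) then
      meta   <= d_async;           -- sample the asynchronous input
      d_sync <= meta;              -- second stage lets metastability resolve
    end if;
  end process;
end architecture;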
Method
The Anna Core was used with the template Clock-reset/Sync_checks. The
template contains rules from the basic CDC package mentioned in the
previous section. The following clocks and their relation were defined in the
SGDC file.
I_REF_CLK, domain1
I_TCK, domain1
I_EIO0_FP_156_CLK, domain2
I_EIO1_FP_156_CLK, domain3
I_EIO2_FP_156_CLK, domain4
O_DDR_CLK_P, domain5
O_DDR_CLK_N, domain5
I_POW_ON_RESET_N
ddr2_rst_n
The reason why the FIFO rules were not evaluated was problems with the
DesignWare (DW) component in the FIFO implementation. This problem is
discussed more thoroughly in chapter 3.5.
3.4
3.4.1
Method
Warnings from a synthesis log were extracted and counted. The warnings
were then discussed with an experienced designer regarding their severity and
the need to check for them. After this, the warnings were compared with the
result from a Spyglass run on Anna to see if they could have been caught. A
dilemma was that the synthesis log file originates from a different design than the
one used for the Spyglass test run. This can be a problem in terms of
evaluation accuracy, but it should reflect reality sufficiently well. If time could
be saved, the cost of this time was added to the calculations in [Wickberg,
2007]. The cost of the saved time was calculated as the engineering hours
saved multiplied by 850 SEK (the cost of one engineering hour).
3.5
3.6
4.1
4.1.1
4.1.2
Running LEDA
Setting up the environment took a lot of time and LEDA did not elaborate in the
beginning. The main reason for this behaviour was that there were instances
in the design whose functionality was lacking in the libraries. The compilation of the
libraries was difficult and the impression was that Synopsys has to work on this.
Once the environment had been set up, in terms of finding and compiling all the
libraries etc., the application was uncomplicated to handle: the checks ran
quite fast and the explanations of the violated rules were decent, although
more details were desirable. It was easy to dig deeper into the code from
where the violation occurred, either by tracing the error
in a schematic view or directly in the code with the built-in editor. The
schematic view visualised only the logic involved with the chosen violation,
but the user could navigate further in the design if required. The
LEDA main window with the schematic view is visualised below in Figure 13.
There were cross-reference capabilities between the schematic view and the
editor. This made it possible to select a desired structure in the schematic
and view it in the editor.
When browsing through the violations after a run it was hard to get a good
overview of them. It would have been nice with filter capabilities, such as
not showing violations that exceeded a thousand in number, or that were decided
to be of little importance during that particular run, without having to turn the
rule off before the run. Filter functions like showing only violations regarding
clocks and resets, or only FSM-related violations in the design, could be
of help. Another nice but absent functionality would have been the ability to
filter out a particular violation of a rule in the design.
4.1.3
Reports
A very nice feature was the ability to save an HTML document that further
explained the rules that had failed. In this document the violations were
described in more detail than in the GUI, and it also presented the number of
gates and latches that the design contained etc. This feature would make it
possible to correct uncomplicated bugs and errors without running the
application and thus not occupy a licence someone else needs. Below is an
example of one of the reports.
DESIGN REPORT
-- Name of the top: top
-- /home/efrewic/leda/demo/demo-
35
1
1
The report could be approached from several angles: as mentioned, through
an HTML document, a text document, or the GUI, depending on the desired
way of working.
4.1.4
Concerning rules
There were a lot of different rules in different categories, about 1500. When
choosing the desired rules they can be sorted either by topic (e.g. CLOCKS,
FSM, DFT etc.) or by policy (e.g. IEEE, DC, LEDA etc.). There were rules for
e.g. how a state machine should be constructed, whether the outputs from a block
are registered, whether there is a reset for all the flip flops etc. There were also rules
concerning the power efficiency of the RTL code. The desired checks concerning
identification and verification of multi-cycle paths were however not available.
LEDA was capable of checking whether a signal was synchronised with a
double flip flop when crossing clock domains, but was, for now, not capable of
checking the synchronisation of buses.
It was relatively hard to configure your own set of rules in terms of selecting
and identifying the desired rules. At this point the application was not as user
friendly as the HTML report. When putting together a set there were a lot of
different rules to choose from and it was hard to get a good overview. A
desired feature was a search function where you could search for e.g.
asynchronous loop and then get a list of rules concerning that subject.
Instead you had to search with the UNIX functions find and grep in a
terminal.
4.1.5
4.2
4.2.1
4.2.2
Running HAL
Navigating through the violations was easy in HAL and the explanations of the
violations were good. The schematic view was also quite nice and easy to
use. Using the HAL Definition File Editor the user could either decide which
predefined set of rules to use or configure new customised sets. All the
rules could be parameterised. The overview of the rules in the HAL Definition
File Editor was poor, and the user has to use a combination of the grep
command in a terminal and the GUI to be able to define a set of rules. The
main reason for the low grade given was that there were no explanations of
the rules in the GUI, but instead a six-letter name that did not tell you
anything about the rule's purpose. The switching between the GUI and the
terminal made it so complicated that simply editing the .def file with a text editor
was easier.
4.2.3
Reports
The report HAL generates fulfilled its purpose in terms of making it possible to
correct errors without occupying a license. The report was short, descriptive
and concise, though on some occasions more explanation of the violations
was desired.
4.2.4
Concerning rules
HAL's main use was for regular lint checks and simpler CDC and DFT
checks. More advanced rule checking, such as complex CDC checks, constraints
checks and power checks, was not supported.
4.2.5
4.3
4.3.1
4.3.2
4.3.3
Reports
The report features in Design Checker have not been tested and therefore a
grade has not been set. But according to the Design Checker tutorial the tool
is capable of generating CSV, TSV and HTML documents. The CSV (comma
separated values) report is written to a .csv file, the TSV (tab separated
values) report to a .txt file and the HTML (HyperText Markup Language) report
to a .htm file.
4.3.4
Concerning rules
Design Checker's purpose, as mentioned earlier, is pure lint checking. This
has an impact on the number of rules the tool provides. There are rules for
typical lint checking, such as naming conventions, FSM structures, registered
outputs, flip flops having resets etc. But there are no rules for DFT, CDC,
constraints, timing or power.
4.3.5
Is the tool meant for plain lint checking, or shall it be used to debug
clock/reset tree propagation, constraint files, DFT problems (test signal
propagation, observability/controllability), power use etc.?
If the answer is that the chosen tool will be used solely for lint checking,
then Design Checker is the preferred choice; otherwise Spyglass or LEDA
should be selected.
Both tools give a very solid impression and it feels like a lot of thought lies
behind the development in either case. The filter function in Design
Checker was somewhat more sophisticated than its counterpart in Spyglass,
even though the Spyglass equivalent was good.
4.4
General
Overall stability
Knowledge about design functionality
Cadence HAL
to a certain degree
no
yes
no
Setting up the environment
4
limited, full in
Systemverilog support
limited
Elaboration error message
Runtime
10 min - 1 h
10 min - 1 h
~february 2007
Gives a warning of
BB detection
10 min - >24 h, depending of rules
no
Black box handling
limited
Elaboration error message
Rule search engine
Parameterized rules
No (done with grep in term.)
yes
No (done with grep in term.)
yes
4
3
4
4
Creation of new rules
yes, using VeRSL, VRSL, C/C++ or TCL
yes, in CPI (Cadence specific API for coding structural checks)
no, new rules can be ordered after charge/contract.
Report
HTML document generation
.txt document generation
4
4
no
yes
no
4
yes
yes
3
built-in
3
2 when defining a rule set, 5 when browsing violations
built-in
3
5
user defined
5
Block connection spreadsheet
Navigating through design
no
3
no
3
no
5
4
built-in
4
5, nice feature where block connectivity is shown.
4
no
no
yes
no
3
yes
yes
3
yes
soon with pragmas/exclusions
5
yes
light synthesis
yes
yes
partly
template based/light synthesis
yes
no
no
light synthesis
yes
yes
yes
template based
yes
no, done in Precision
no
yes
no
no
no
no
no
yes
07-mar
07-mar
no
no, done in CDC 0-In
no, done in CDC 0-In
yes
yes
yes
no, done in CDC 0-In
no
yes
no
yes, ~10 rules
yes
yes
no, done in CDC 0-In
yes
yes
yes
yes
yes, ~40 rules
yes
yes
yes
no
Highlights BB
10 min - 1 h
Rule selection
Table 4
4.5
4.5.1
Spyglass
The chapter will start with a short result presentation for the different test
cases and end with a summary of the work procedure and conclusions.
4.5.1.1
Case 1
Possible rules to take as templates for customization are: STX_287,
STX_296.
Solution: No rules were written.
Test code runs through Spyglass parser: YES
4.5.1.2
Case 2
Possible rules to take as templates for customization are: SYNTH_137 and
SYNTH_5335.
Solution: No rules were written.
Test code runs through Spyglass parser: YES
4.5.1.3
Case 3
Possible rule to take as a template for customization is: SYNTH_5399.
Solution: No rules were written.
Test code runs through Spyglass parser: YES, Spyglass reported
unsynthesizable block, unsupported SystemVerilog construct and pointed to
the alias statement.
4.5.1.4
Case 4
Possible rules to take as templates for customization are: DisallowMult-ML,
STX_1199.
Solution: No rules written.
Test code runs through Spyglass parser: YES, Spyglass reported
unsynthesizable block, unsupported SystemVerilog construct if the double
operator (e.g. ++) was used.
Spyglass was able to parse the code in Case 5 and Case 6 but there was no
functionality present for writing new rules.
4.5.1.5
Summary
Referring to [Atrenta 3, 2006], there are four ways of customizing rules.
These are listed below in order of increasing complexity.
The easiest way is to identify a similar existing rule and check whether it
has some parameters that can be changed to fit your need.
If there are no rules to parameterise the user can see if the Built-In
Checks can be used. This method does not actually end up with new
check functionality, but more messages to existing built-in rules.
The third way is to write new rules using the rule-primitives (c-primitives)
that already exist in the tool.
If there are no c-primitives matching the desired functionality, the user can
define new ones.
After some browsing and searching among the existing rules it was
established that there were none that could be parameterised to obtain the
desired abilities. This was expected from the beginning. The second way,
described above, was of no interest because new checking functionality rather
than only new messages was desired. Using rule-primitives
could be profitable because it would make the development of new rules
much easier. Spyglass's set of rules and templates was searched for rules with
behaviour similar to the cases previously defined. As an example, for case 1
the rule STX_287 (improper assignment to a mixed array) was found. If the
functionality for identifying a mixed array could have been united with the
functionality for identifying a logic declaration, it could have been the
solution. The problem was that this rule (STX_287) and other similar rules
used a primitive called veParser. This primitive could not be found in the
c-primitive directories. After contact with Atrenta they confirmed that this
primitive was not documented and could not be used by external users. They
also eventually confirmed what was already suspected, that there are no
primitives to handle the sought behaviour. This left the creation of new
rule-primitives, which in turn could have been used to make the rules for the
test cases. This method was, compared to the other three, considerably more
complicated and required a lot of knowledge of how the Spyglass database
is constructed. According to Atrenta it would be difficult for customers to
write such new rules themselves. Creating new rule-primitives was
also only possible if the user possessed a Spyglass Rule Builder license,
which gives full access to the database.
An interesting observation was that the test code went through the Spyglass
parser without complaints or errors. This indicated that the tool could handle
SystemVerilog code, which had also been a question mark. The test code used is
shown in Figure 16.
It was decided to round off the investigation here and leave the further
rule-primitive definition to an external party, if the company decides to go
further in this area.
4.5.2
Leda
The chapter will start with a short result presentation for the different test
cases and end with a summary of the work procedure and conclusions.
4.5.2.1
Case 1
Solution: A new custom rule was not written. The current build of Leda was
not able to catch this structure. It will however work with the next build
according to Synopsys.
Test code runs through Leda parser: YES
4.5.2.2
Case 2
Solution: A rule to catch cast violations in Systemverilog already existed in
Leda and worked with the test case in Figure 17.
Test code runs through Leda parser: YES
4.5.2.3
Case 3
Solution: A new custom rule was not written. Alias assignments in
SystemVerilog were not recognised in VeRSL.
Test code runs through Leda parser: NO. The code regarding the alias test
was taken from [Accellera, 2006] with some modification, but was
commented out because of the build failure.
4.5.2.4
Case 4
Solution: A new custom rule was written. For this occasion a new rule was
created which prohibited all assignment operators. The function used was not
documented in [Synopsys 4, 2006] but came directly from the Synopsys R&D
division.
Test code runs through Leda parser: YES
4.5.2.5
Case 5
Solution: A new custom rule was not written. Problems with identifying both
the array structure { .., .. , .. } and the use of unpacked arrays in the
assignment were encountered.
Test code runs through Leda parser: YES
4.5.2.6
Case 6
Solution: According to Synopsys there was no way of writing a rule to catch
this syntax.
Test code runs through Leda parser: Not tried.
4.5.2.7
Summary
The language used when creating new rules is called VeRSL (Verilog Rule
Specification Language), or VRSL for VHDL. For more information regarding
this area the reader is referred to [Synopsys 1, 2006] and [Synopsys 4, 2006].
This language is macro-based and is constructed of a number of
templates and adherent attributes on which a set of commands is used.
Example: the definition of a rule checking that only addition and
subtraction are used in the design looked like this:
Only_add_n_sub:
limit operator_symbol in binary_operation to +, -
message "Only + and - operators are allowed in the design"
severity WARNING
limit is the command, operator_symbol the attribute and
binary_operation the template.
40
In Case 2 there was already a rule checking the use of casts, but that rule
prevented all use of casts. It was later also confirmed by Synopsys that it
would not be possible to customise the rule. The absence of documentation
regarding the cast attribute used in the existing rule made it impossible to
know how it should be used. In all other cases Leda lacked the functionality to
identify the SystemVerilog syntax. However, all the test cases passed the
parser except Case 3, where the use of the alias statement caused a build
failure.
Apart from using VeRSL, rules could also be made using C or Tcl; however,
rules checking RTL code could not be made in this way. The C and Tcl
languages were only meant to be used for rules catching design behaviour.
Therefore, if there were no templates and attributes available that could catch
the undesired language syntax, no rules could be made.
Figure 17
Results
5.1
Part I
In this chapter the results from the evaluations of Phoenix, Hwacc and Anna
are presented. For each block the deviating rules are presented along with an
explanation of why they were or were not considered to be the cause of the
undesired iteration. Rules that did not cause a deviating number of violations
between two versions, but were nevertheless considered to be important, are
also presented. Rules that were violated in more than one comparison but not
considered to be the cause of the undesired iteration are not presented
extensively more than once, but only mentioned.
Rule violations concerning naming and DFT were not considered in this part of
the evaluation. The reasons were that a naming violation would not have
caused an undesired iteration, and that the chip vendor inserted most of the test
functionality, so the test setup was to a great part unknown. In
part II of this evaluation, on the other hand, where checks on RTL were studied,
an entire chapter is spent on Spyglass's capacity to make RTL code DFT
ready. The reason why these rules were run at all was to simulate the run time,
in case such rules were to be used in the future.
5.1.1
5.1.2
The error was of such a kind that Spyglass was unable to find it.
Remember that the ASIC vendor did not evaluate Anna using Spyglass.
It might not have been an error that generated the iteration. New
functionality might have been added.
Spyglass did recognise the error but it slipped past in the filtering before
the violation examination. As mentioned in chapter 2.1, the method used
for the evaluation is not flawless but only a heuristic model.
5.1.2.1
The netlist versions were checked in increasing order starting from the
current one.
The trace stopped when a version with a lower number of violations from rule
BB_DES040 occurred and an explanation for all violations had been found.
The result from the trace was the following. In version 5 only 8 violations
occurred. Therefore a possible cause of the undesired iteration was found
between version 4 and version 5. After consultation with a designer the
conclusion was made that an actual error had been found. The remaining 8
violations turned out to be flip flops that should not have resets according to
their functionality. The errors found were added to Equation 1.
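For illustration (a hypothetical VHDL fragment, not taken from the Hwacc or Anna netlists, and assuming that BB_DES040 concerns flip flops lacking resets), a register like the one below has no reset at all and would be reported by a rule of this kind, even though some flip flops, for example pure data path registers, are intentionally left without reset:

library ieee;
use ieee.std_logic_1164.all;

entity noreset_reg is
  port (clk : in  std_logic;
        d   : in  std_logic;
        q   : out std_logic);
end entity;

architecture rtl of noreset_reg is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      q <= d;   -- no asynchronous or synchronous reset: flagged by flip-flop reset rules
    end if;
  end process;
end architecture;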
5.1.3
Anna
The evaluation of the Anna block suffered from extremely long run times (this
issue is discussed further in chapter 2.6), and after a few tryouts with
different sets of rules only rule BB_DES040 was selected and run on versions
3 and 5.
5.2
Part II
5.2.1
Constraints
In this section the results from the evaluation regarding constraint coverage and
correctness are presented. The rules that were evaluated were chosen
with regard to the SDC commands used in the Anna project and to rules that were
especially desired by the designers in the project. Besides the chosen rules,
the associated parts of the SDC file are presented.
Most of the constraints rules were not violated in the unmodified version of the
CMC. In order to check the functionality of the rules, some changes were made
in the SDC file to force a violation.
5.2.1.1
(SDC excerpt, partly lost in extraction: four constraint lines with the value 2, referencing [get_clocks {iReset_n}] and [get_clocks {clk}].)
(SDC excerpts, partly lost in extraction: rise/fall and min/max constraints with the value 0.15 applied via [get_clocks ...]; in the first excerpt one value deviates (0.14), which caused the violation, while the second excerpt uses 0.15 throughout.)
Inp_Del09, Not all pins on the same bus have the same input delay.
The rule was violated if the ports of a bus were constrained with deviating
values by the set_input_delay command. If the lines below were defined in
the SDC file, a violation occurred due to the input delay mismatch.
(SDC excerpt, partly lost in extraction: pairs of set_input_delay -max/-min commands on the ports of a bus, one referencing {iReset_n} and one with the value 4 on [get_ports {iCmDataAck_*}]; the bus ports were given deviating delay values.)
IO_Consis04, Sum of input delay and output delay should not exceed
(clock period * parameter).
The rule generated a violation if the sum of the output delay from one
block and the input delay from a connected block was larger than the
specified clock period multiplied by a customizable factor. When the
lines below were used, the rule was violated: the factor used was 1.1,
and 3.2 x 1.1 = 3.52 < 2.5 + 1.6 = 4.1.
From bb_sort_structural.sdc:
create_clock -name CLK -period 3.2 -waveform {0 1.6} [get_ports {CLK}]
From bb_sort_fsm.sdc:
set_output_delay 2.5 -clock [get_clocks {CLKVirtual}] -max [get_ports {REG_ENB_0}]
From bb_sort_reg.sdc:
set_input_delay 1.6 -clock [get_clocks {CLKVirtual}] -max [get_ports {ENB}]
5.2.2 DFT
Clock_11 Internally generated clocks must be test clock controlled in shift
mode.
There were a large number of violations of this rule before the BIST signals
were defined in the SGDC file. The BIST[7] signal (TST_GATED_CLOCK) had a
great impact on the number of violations, which was reasonable given the
purpose and functionality of that signal. After the definition of the BIST
signals there were, however, 25 violations left, which altogether affected
around 2000 flip-flops. Most of these violations had to do with the
ddr2TstPadlogicIn bus. The bus contains both a clock signal and a select
signal for propagating that clock instead of an internally generated one.
The reason the bus was not defined in the SGDC file was that it was not
found in the test documentation. When tracing the signal further, outside
the Anna core, the following line was found in AnnaChip.vhdl:
ddr2TstPadlogicIn
The assumption was made that the bus was cleared awaiting further test logic
implementation by the ASIC vendor (as was the case with the BIST signals).
Three violations were caused by the absence of a test clock definition for each
of the clocks into the EIO. As can be seen in Figure 19, there was no select
logic between the clock input and the first flip-flop (a sketch of the kind of
test clock multiplexer the rule expects is shown after the figure). A reasonable
assumption was that the system clock/test clock selection was done in AnnaChip,
higher up in the hierarchy. The last violations concerned the clock propagation
of the ddr2_fifo_test_in bus and the tst_pll_bypass signal, which are
both cleared in AnnaChip.vhdl. Judging by their functionality and naming, these
signals were assumed to be awaiting further test logic implementation by the
ASIC vendor.
Figure 19 The propagation of the 156 MHz clock into the EIO jitter buffer. Notice the
absence of a system clock / test clock multiplexer. The probable reason is
that this selection is done in the Test Mode Controller (see Figure 11).
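As an illustration of the structure rule Clock_11 asks for, the sketch below shows an internally generated (divided) clock where a multiplexer forces the test clock onto the downstream flip-flops in shift mode. It is a minimal sketch; the module and signal names (scan_enable, test_clk and so on) are assumptions and not taken from the Anna design.

module div_clk_gen (
  input  wire sys_clk,
  input  wire rst_n,
  input  wire scan_enable, // shift-mode control, cf. TST_GATED_CLOCK
  input  wire test_clk,
  output wire clk_out      // clock seen by the downstream flip-flops
);
  reg div2;

  // Internally generated divide-by-two clock.
  always @(posedge sys_clk or negedge rst_n)
    if (!rst_n) div2 <= 1'b0;
    else        div2 <= ~div2;

  // Test clock control: without this multiplexer the internally generated
  // clock is not controllable in shift mode and the rule is violated.
  // (A real implementation would use a glitch-free clock multiplexer.)
  assign clk_out = scan_enable ? test_clk : div2;
endmodule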
Figure 20 The reset selection structure that was flagged by rule Async_07. The X in the zoom-in
symbolises the unknown value of the reset input. The ddr2_rst_n input signal is connected to rst_n.
5.2.3 CDC
Clock_sync01 Signals crossing clock domains must be correctly
synchronized in the destination domain.
The rule generated nine violations related to the EIO jitter buffer crossing.
The structure used was verified to be working and was therefore not
considered to be a problem.
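A minimal sketch of the kind of destination-domain synchroniser the rule looks for on a single-bit crossing is shown below; the module and signal names are illustrative and not taken from the Anna design.

module sync_2ff (
  input  wire dst_clk,  // destination-domain clock
  input  wire async_in, // signal arriving from another clock domain
  output reg  sync_out  // version that is safe to use in the dst_clk domain
);
  reg meta; // first stage, allowed to go metastable

  always @(posedge dst_clk) begin
    meta     <= async_in;
    sync_out <= meta;
  end
endmodule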
Clock_sync05 Primary inputs should not be multi-sampled.
The rule generated two violations, concerning Reset and ScTest (which
controls test mode) going into the EIO jitter buffer. The reason not to
multi-sample a signal is that a transition might be shifted one clock pulse
between the two sampling points. The functionality of Reset and ScTest was,
however, not dependent on single pulses.
Figure: de-asserter structure, where an asynchronous input signal is clocked by Clk and output as a de-asserted signal.
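Assuming the de-asserter in the figure is the common structure that asserts asynchronously and releases synchronously to Clk, a minimal sketch could look as follows; the names are illustrative and not taken from the design.

module deasserter (
  input  wire clk,
  input  wire async_sig_n,     // active-low asynchronous signal, e.g. a reset
  output wire deasserted_sig_n // asserts asynchronously, de-asserts on clk
);
  reg sync_ff1, sync_ff2;

  always @(posedge clk or negedge async_sig_n)
    if (!async_sig_n) begin
      sync_ff1 <= 1'b0; // assertion propagates immediately
      sync_ff2 <= 1'b0;
    end else begin
      sync_ff1 <= 1'b1; // de-assertion is shifted through two flip-flops
      sync_ff2 <= sync_ff1;
    end

  assign deasserted_sig_n = sync_ff2;
endmodule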
PsFifo01 Detects FIFOs and looks for underflow and overflow situations.
The rule checks for underflow and overflow of FIFO queues. The code in the
demo looked standard, which does not say much about how well the rule suits
Ericsson's designs. The interesting part was, however, that the tool generated
a waveform diagram to aid the debugging procedure. It was impressive that the
tool could generate and point to suspicious structures automatically. The
waveform diagram (see Figure 23 for an example) could, though, have been a
little more specific in pointing out where the suspected behaviour appeared.
In the demo example it was easy to find, but with more complex structures it
might get a lot trickier.
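As a purely illustrative example (not the code from the Atrenta demo), the sketch below shows the kind of unguarded write-pointer update that can overflow a FIFO and that a rule such as PsFifo01 tries to detect.

module fifo_wr_ptr (
  input  wire       clk,
  input  wire       rst_n,
  input  wire       wr_en,
  output reg  [2:0] wr_ptr // write pointer of an eight-entry FIFO
);
  // The pointer is incremented on every write request without checking a
  // 'full' flag, so a long enough burst of writes overflows the FIFO.
  always @(posedge clk or negedge rst_n)
    if (!rst_n)     wr_ptr <= 3'd0;
    else if (wr_en) wr_ptr <= wr_ptr + 3'd1;
endmodule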
5.2.4 Synthesis readiness
In this section the warnings extracted from the synthesis log are presented.
For each warning, the number of occasions on which it appeared in the
synthesis log is given, together with an importance grading decided in
consultation with a designer. Warnings with a low importance grade were not
considered further in the evaluation. The rule that could have prevented the
warning from reaching the synthesis phase is also presented, along with a
comment regarding severity and whether the rule was tested in practice.
Warning: Signal assignment delays are not supported for synthesis. They are
ignored.
Number of occasions: 705
Importance: Low
Rule: W257 Lint, Synthesis tools ignore delays.
Comment: The rule generated violations on Anna.
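For illustration, a minimal example of the kind of assignment the warning and the rule refer to; the names are made up.

module delay_example (
  input  wire clk,
  input  wire data_in,
  output reg  data_out
);
  // The #2 delay is honoured in simulation but ignored by the synthesis
  // tool, so RTL and gate-level timing behaviour can differ.
  always @(posedge clk)
    data_out <= #2 data_in;
endmodule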
Warning: Potential simulation-synthesis mismatch if index exceeds size of
array <name>.
Number of occasions: 227
Importance: High
Rule: SYNTH_5130 Spyglass. For proper correspondence between
simulation and synthesis, the index and the array size should match.
Comment: If it could be assured that all warnings of this kind were
intended, a lot of resources could be saved: the synthesis engineer would
then not have to examine every one of these warnings to decide whether it
is an actual error or not. The rule generated violations on Anna. A large
resource-saving potential is expected.
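A minimal illustration of the kind of construct behind this warning (the names are made up): the address can take values outside the declared array range, so simulation returns an unknown value while the synthesised hardware may behave differently.

module index_example (
  input  wire       clk,
  input  wire [3:0] addr, // can address 0-15 ...
  input  wire       din,
  output reg        dout
);
  reg mem [0:9];          // ... but the array only has 10 entries

  always @(posedge clk) begin
    mem[addr] <= din;     // addr >= 10 exceeds the array size
    dout      <= mem[addr];
  end
endmodule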
Warning: Symbol <name>, declared as an enum, may be assigned non-enum values.
Number of occasions: 24
Importance: Medium
Rule: SYNTH_1028 Spyglass, ArrayEnumIndex Lint, Array defined using
enumeration type as index is not supported (in synthesis).
Comment: The rule has not been confirmed to work on Anna.
Warning: DEFAULT branch of CASE statement cannot be reached.
Number of occasions: 22
Importance: Medium
Rule: LPFSM18 lowpower, Unreachable states exist in FSMs.
Comment: Has not been tested.
Warning: Floating pin <name> of cell <name> connected to ground.
Number of occasions: 67
Importance: Medium
Rule: W287a LINT, Some inputs to an instance are not driven or are unconnected.
Comment: If the pin is unconnected by mistake and the synthesis tool
connects the pin to ground the design might malfunction. The rule generated
violations on Anna.
Conclusions
This chapter aims to discuss and answer the questions in chapter 1.3 based
on the results gained from the evaluation of Spyglass, LEDA, HAL and Design
Checker.
It is advised to implement a static rule checking tool in the design flow. Which
tool to select depends on the intended usage. Spyglass is the most advanced
tool and is considered to be the most stable one tested during this project,
but it has issues, such as rules generating false violations, an inability to
detect certain structures, limited SystemVerilog support, and a number of
rules that ought to be customizable with regard to design choices. One of the
strongest arguments for choosing Spyglass is probably the extensive use of the
tool by the chip manufacturers. It is believed that there is a lot to be gained
in terms of netlist deliveries if both parties collaborate around the use of
Spyglass. Another reason to choose Spyglass is Atrenta's helpfulness and
co-operation when issues regarding the tool's functionality occur.
One reason for not choosing Spyglass is the packaging of the tool. There are
a number of licenses for different packaging set-ups that can be purchased,
compared to the other tools where only one license is required, which covers
all their functionality. The packaging can be an issue when cost savings are
measured against performance and efficiency.
The implementation and use of Spyglass is in the range of 700 kSEK to 1 500
kSEK per project (depending on the number of licenses required and the number
of project starts per year), including educational costs and support. This
corresponds to 20-40 weeks of work, or 15 engineers waiting out a delay of
two to three weeks.
The gain of using Spyglass is mainly on RTL. This is based on the fact that
the errors found in the netlist during Part I were also found in the RTL code
during Part II. The cost-versus-saving calculation for Part I in [Wickberg,
2007] therefore also applies to Part II. The objective is to find errors as
early as possible in the design and thus save time. There are also gains in
the form of fewer synthesis runs, since Spyglass can catch a number of errors
that otherwise appear during synthesis. Even though not all of the extra
synthesis runs may be possible to prevent with the use of Spyglass, it is
expected that a significant number will be. As mentioned in chapter 3.5, the
saving potential could be one week up to several months, which corresponds to
delay costs in the range of tens of thousands to a million SEK.
On netlist level the direct savings, had Spyglass been used in the Anna
project, were far from covering the tool cost. But then it should be
remembered that the ASIC vendor did not use the tool in their verification
process, and that the lead time has not been taken into account (for which
the costs could be very large). It is believed that the gain seen during this
evaluation would have been much greater if Spyglass had been used from
project start and if the set of rules used on both RTL and netlist had
reflected the rules the chip vendor used.
A scenario could also be that the delayed project's purpose was to reduce
cost by replacing an expensive card with a new, cheaper one. The new ASIC
would only have been one part among many other circuits on that cheaper card.
A delay of the ASIC would hence have delayed the new, cheaper card, and the
old one would have had to be manufactured in the meantime. If the
manufacturing cost had been reduced by 1 000 SEK per card with the new card,
10 000 cards were produced per month, and the delay was one month, the cost
of the delay would be 10 000 x 1 000 SEK = 10 MSEK.
The scenario above, together with what has previously been discussed in this
chapter regarding use on RTL and netlist level, leads to the conclusion that
the saving potential of the tool is large compared to the implementation cost.
After discussions with designers and specialists at the ASIC department, it is
concluded that the greatest value is gained on RTL with rule checks for naming
conventions and basic structure, clock/reset tree propagation, constraint
cover and consistency, DFT scan readiness, and basic CDC crossing detection.
It is also advisable to purchase rule sets for netlist checks regarding
structure, constraints and DFT, defined in collaboration with the chip
manufacturer of choice. The package that comes to mind is the Design
Analysis package.
The changes to the design flow are not expected to be extensive, but they are
nevertheless noticeable. Spyglass is meant to be used during the entire
process, from the start of RTL coding until netlist delivery. This means that
the designers might have to run the tool up to four times a day during intense
periods, which might cause some inconvenience at first. In the constraints
area there might also be some change in terms of how large a constraint set
to use, in order to gain as much as possible from the tool. At each project
start there should also be a person assigned as a collaborator with the chip
manufacturer. The reason is to make the rules that are used mirror the chip
vendor's hand-off procedure from the beginning, and thus save resources.
The error correction work in Spyglass is very sophisticated when the GUI is
used. The violation explanations are comprehensive and the schematic views
are state of the art. The reports are acceptable and give enough information
to enable correction of regular violations.
Attempting to create entirely new rules for Spyglass is not advised, but if
C primitives that suit the purpose exist it is manageable. Creating new rules
for Synopsys LEDA (using VeRSL or VRSL) is considered to be easier than using
C primitives, although perhaps not as powerful.
References
[Atrenta 1, 2006] - SpyGlass DC Rules Reference, Version 3.8.0.3, July 2006, Author: Atrenta, Inc.
[Atrenta 2, 2006] - SpyGlass Predictive Analyzer User Guide, Version 3.8.1, August 2006, Author: Atrenta, Inc.
[Atrenta 3, 2006] - SpyGlass Policy Customization Guide, Version 3.8.1, 2006, Author: Atrenta, Inc.
[Rabaey, Chandrakasan and Nikolic, 2003] - Digital Integrated Circuits, second edition, 2003, Authors: Jan M. Rabaey, Anantha Chandrakasan, Borivoje Nikolic.
[Test, 2006] - Test Architecture Specification.
[Cadence 1, 2006] - Clock Domain Crossings, Technical Paper, Author: Cadence. URL: http://www.cadence.com/whitepapers/
[Cadence 2, 2004] - The Cadence Newsletter, July 2004, Author: Cadence.
[Cadence 3, 2006] - Incisive HDL Analysis (HAL) User Guide, Version 5.82, August 2006, Author: Cadence.
[Myers, 2001] - Asynchronous Circuit Design, 2001, Author: Chris J. Myers.
[Accellera, 2001] - SystemVerilog 3.1a Language Reference Manual: Accellera's Extensions to Verilog, Author: Accellera.
[Synopsys 1, 2006] - Leda Rule Specifier Tutorial, Author: Synopsys.
[Synopsys 2, 2006] - Synopsys homepage, URL: http://www.synopsys.com/partners/tapin/sdc.html
[Synopsys 3, 2006] - Leda User Guide, Version 2006.06, June 2006, Author: Synopsys.
[Synopsys 4, 2006] - Leda VeRSL Reference Guide, Version 2006.06, June 2006, Author: Synopsys.
[Mentor, 2006] - DesignChecker User Guide, Version 2006.1, 10 May 2006, Author: Mentor.
[Wickberg, 2007] - HDL code analysis of ASIC:s in mobile systems, Complementary report, version 1.0, January 2007, Author: Fredrik Wickberg.
18-May 12:55
PseudoChip.v@@/main/5 - 22-May 09:33
Phoenix
phoenix.v@@/main/8 - 29-May 14:21
phoenix.v@@/main/9 - 02-Jun 10:39
Hwacc
hw acc.v@@/main/2 - 10-Apr 17:55
hw acc.v@@/main/3 - 12-Apr 07:39
hw acc.v@@/main/4 - 20-Apr 17:49
Appendix B
The code below was used to evaluate the handshake rules. The original
version is taken from a demo made by Atrenta. The modified version was
made by the author of this thesis. The modification was used to check
whether the violations from the original version disappeared, and thus to
validate the correctness of the handshake rules.
In the modified version, data is applied one state in advance of the
activation of the request signal, which ensures data signal stability. The
other change is that a new state (WAIT_ACK2) has been added, which waits for
the completion of the acknowledge. This makes sure that a full handshake has
taken place before a new one can be initiated.
Original version (from the Atrenta demo):

always @(posedge clk or posedge reset) begin // sensitivity list assumed; it is truncated in the source listing
  if ( reset ) begin
    state1 <= IDLE;
    req <= 1'b0;
  end
  else
    case (state1)
      IDLE: begin
        req <= 1'b0;
        if ( in1 ) state1 <= SEND_REQ;
      end
      SEND_REQ: begin
        bus_data <= in2; //violation
        req <= 1'b1;     //violation
        state1 <= WAIT_ACK;
      end
      WAIT_ACK: begin
        if ( core_ack_sync ) state1 <= IDLE;
      end
    endcase
end

Modified version:

always @(posedge clk or posedge reset) begin // sensitivity list assumed; it is truncated in the source listing
  if ( reset ) begin
    state1 <= IDLE;
    req <= 1'b0;
  end
  else
    case (state1)
      IDLE: begin
        //req <= 1'b0;
        if ( in1 ) begin
          state1 <= SEND_REQ;
          bus_data <= in2; // data applied one state before req is raised
        end
      end
      SEND_REQ: begin
        //bus_data <= in2;
        req <= 1'b1;
        state1 <= WAIT_ACK1;
      end
      WAIT_ACK1: begin
        if ( core_ack_sync ) begin
          req <= 1'b0;
          state1 <= WAIT_ACK2;
        end
      end
      WAIT_ACK2: begin // wait for the acknowledge to complete before a new handshake
        if ( !core_ack_sync ) state1 <= IDLE;
      end
    endcase
end
På svenska
Detta dokument hålls tillgängligt på Internet - eller dess framtida ersättare -
under en längre tid från publiceringsdatum under förutsättning att inga
extraordinära omständigheter uppstår.
Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner,
skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för
ickekommersiell forskning och för undervisning. Överföring av upphovsrätten
vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning
av dokumentet kräver upphovsmannens medgivande. För att garantera
äktheten, säkerheten och tillgängligheten finns det lösningar av teknisk och
administrativ art.
Upphovsmannens ideella rätt innefattar rätt att bli nämnd som
upphovsman i den omfattning som god sed kräver vid användning av
dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras
eller presenteras i sådan form eller i sådant sammanhang som är kränkande
för upphovsmannens litterära eller konstnärliga anseende eller egenart.
För ytterligare information om Linköping University Electronic Press se
förlagets hemsida http://www.ep.liu.se/
In English
The publishers will keep this document online on the Internet - or its possible
replacement - for a considerable time from the date of publication barring
exceptional circumstances.
The online availability of the document implies a permanent permission
for anyone to read, to download, to print out single copies for your own use
and to use it unchanged for any non-commercial research and educational
purpose. Subsequent transfers of copyright cannot revoke this permission. All
other uses of the document are conditional on the consent of the copyright
owner. The publisher has taken technical and administrative measures to
assure authenticity, security and accessibility.
According to intellectual property law the author has the right to be
mentioned when his/her work is accessed as described above and to be
protected against infringement.
For additional information about the Linköping University Electronic Press
and its procedures for publication and for assurance of document integrity,
please refer to its WWW home page: http://www.ep.liu.se/
Fredrik Wickberg