HFNS (High-Fanout Net Synthesis):
During placement and optimization, ICC does not buffer clock nets (as defined by
the create_clock command), but by default it does buffer other high-fanout nets,
such as resets or scan enables, using a built-in high-fanout synthesis engine.
Before running high-fanout net synthesis during the place_opt step, define the
buffering options by using the set_ahfs_options command.
During high-fanout net synthesis, ICC by default automatically analyzes the buffer
trees to determine the fanout thresholds, and then removes the existing buffer
trees and builds new ones.
The high-fanout synthesis engine does not buffer nets that are set as ideal nets
or nets that are disabled with respect to design rule constraints.
As an alternative to performing high-fanout net synthesis during place_opt, you
can use the create_buffer_tree command to run standalone high-fanout net synthesis.
NOTE:
To get information about the buffer trees in your design, use the
report_buffer_tree command.
To remove buffer trees from your design, use the remove_buffer_tree command.
SCAN CHAIN:
https://anysilicon.com/overview-and-dynamics-of-scan-testing/
Shipping a defective part to a customer not only results in a loss of goodwill for
the design company but, even worse, could prove catastrophic for the end users,
especially if the chip is meant for automotive or medical applications.
Scan Chain Testing:
Scan chain testing is a method to detect various manufacturing faults in the silicon
or IC. Although many types of manufacturing faults may exist in the silicon, this
post discusses the method to detect faults such as shorts and opens.
Scan Flip-Flop:
Figure 1 shows the structure of a scan flip-flop. A multiplexer is added at the
input of the flip-flop, with one input of the multiplexer acting as the functional
input D and the other as Scan-In (SI). The selection between D and SI is governed
by the Scan Enable (SE) signal.
Scan Chain:
Using this basic scan flip-flop as the building block, all the flops are connected
in the form of a chain, which effectively acts as a shift register. The first flop
of the scan chain is connected to the scan-in port and the last flop is connected
to the scan-out port.
Figure 2 depicts one such scan chain, with the clock signal shown in red, the
scan chain in blue, and the functional path in black. Scan testing is done to detect
any manufacturing fault in the combinatorial logic block. To do so, the ATPG tool
tries to excite each and every node within the combinatorial logic block by applying
input vectors at the flops of the scan chain.
Figure 2: A Typical Scan Chain
Scan-in:
Scan is enabled and an ATPG (Automatic Test Pattern Generation) pattern is
shifted in, loading all the flip-flops with an input vector. During scan-in, the
data flows from the output of one flop to the scan-input of the next flop, like a
shift register.
Scan Capture:
Once the sequence is loaded, scan is disabled and a single clock pulse (also called
the capture pulse) excites the combinatorial logic block; the outputs of the
combinatorial logic are captured at the flops.
Scan-out:
Scan is enabled again and the captured data is shifted out. The output pattern is
then observed and compared against the expected pattern.
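The three phases above can be sketched as a toy Python model. Everything here (the three-flop chain, the tiny combinational function, the helper names) is illustrative, not taken from any real tool or design:

```python
# Toy model of a 3-flop scan chain wrapped around a small combinational block.
# The combinational function (an arbitrary mix of AND/XOR/OR) is illustrative;
# a real design has the actual logic under test here.

def combo_logic(q):
    # Next-state function fed by the flop outputs.
    return [q[0] & q[1], q[1] ^ q[2], q[0] | q[2]]

def scan_in(chain, vector):
    # Shift the test vector in, one bit per shift clock (SE = 1).
    for bit in vector:
        chain.insert(0, bit)   # new bit enters the first flop
        chain.pop()            # last flop's value falls off scan-out
    return chain

def capture(chain):
    # One capture pulse (SE = 0): flops load the combinational outputs.
    return combo_logic(chain)

def scan_out(chain):
    # Shift the captured response out (SE = 1 again), zeros shifted in behind.
    out = []
    for _ in range(len(chain)):
        out.append(chain.pop())
        chain.insert(0, 0)
    return out

chain = [0, 0, 0]
chain = scan_in(chain, [1, 1, 0])   # scan-in phase
chain = capture(chain)              # capture phase
response = scan_out(chain)          # scan-out phase
print(response)
```

In a real flow the ATPG tool compares `response` against the expected pattern; any mismatch flags a defect.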
NOTE: Modern ATPG tools can use the captured sequence as the input vector for the
next shift-in cycle. Moreover, in case of any mismatch, they can point to the
nodes where a manufacturing fault is likely to be found. Figure 3 shows the
sequence of events that take place during scan-shift and scan-capture.
[not imp: From a timing point of view, a higher shift frequency should not be an
issue, because the shift path is essentially a direct connection from the output
of the preceding flop to the scan-input of the succeeding flop, so the setup
timing check is always relaxed. Even though a higher shift frequency would mean
lower tester time and hence lower cost, the shift frequency is typically low
(on the order of tens of MHz). The reason for shifting at a slow frequency lies in
dynamic power dissipation.]
Note that during shift mode, the outputs of all flops in the scan chain toggle,
and so do nodes within the combinatorial logic block, even though nothing is being
captured there. The resulting switching activity can exceed that of functional
mode.
Higher shift frequency could lead to two scenarios:
Voltage Drop: A higher rate of toggling within the chip draws more current from
the voltage supply, causing a voltage droop due to IR drop. This IR drop could
pull the voltage below the safe margin, and the devices might fail to operate
properly.
Increased Die Temperature: High switching activity might create local hot-spots
within the die and thereby increase the temperature above the worst-case
temperature for which timing was closed. This could again result in failure of
operation, or in the worst case, it might cause thermal damage to the chip.
Therefore, there exists a trade-off. The scan shift should run at a frequency low
enough to respect the maximum permissible power dissipation within the chip.
At the same time, the shift-frequency should not be too low, otherwise, it would risk
increasing the tester time and hence the cost of the chip!
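A rough back-of-envelope of that trade-off (all numbers below are made-up assumptions, purely for illustration):

```python
# Rough tester-time estimate: each pattern needs roughly chain_length shift
# cycles plus one capture cycle. All figures here are invented for illustration.
num_patterns = 10_000
chain_length = 50_000          # flops in the (single) scan chain
shift_freq_hz = 25e6           # a typical "tens of MHz" shift clock

cycles = num_patterns * (chain_length + 1)
test_time_s = cycles / shift_freq_hz
print(f"{test_time_s:.1f} s of shift time per die")
```

Halving the shift frequency doubles this time per die, which is why the frequency is pushed as high as the power budget allows, and no higher.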
=> http://physicaldesignvlsi.blogspot.com/2015/01/scan-chains-stitching-reordering.html
Wiki:
Partial scan: Only some of the flip-flops are connected into chains.
Multiple scan chains: Two or more scan chains are built in parallel, to
reduce the time to load and observe.
Test compression: The input to the scan chain is provided by on-board
logic.
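The benefit of multiple scan chains is easy to quantify: if N flops are split across k balanced chains, each pattern needs only about ceil(N/k) shift cycles instead of N. A quick sketch (the flop count is a made-up example):

```python
import math

# Shift cycles per pattern when N flops are split across k balanced chains.
total_flops = 120_000          # illustrative design size
for num_chains in (1, 8, 64):
    shift_cycles = math.ceil(total_flops / num_chains)
    print(f"{num_chains} chains -> {shift_cycles} shift cycles per pattern")
```

This is why designs commonly use many parallel chains, limited mainly by the number of available scan-in/scan-out pins (which test compression then works around).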
=>ATPG (acronym for both Automatic Test Pattern Generation and Automatic
Test Pattern Generator) is an electronic design automation method/technology
used to find an input (or test) sequence that, when applied to a digital circuit,
enables automatic test equipment to distinguish between the correct circuit
behavior and the faulty circuit behavior caused by defects.
=>Fault coverage refers to the percentage of some type of fault that can be
detected during the test of any engineered system. High fault coverage is
particularly valuable during manufacturing test, and techniques such as Design For
Test (DFT) and automatic test pattern generation are used to increase it.
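Fault coverage is just a ratio of detected to modeled faults; for the single stuck-at model, each signal line contributes an SA0 and an SA1 fault. The counts below are invented for illustration:

```python
# Fault coverage = detected faults / total modeled faults.
# Single stuck-at model: 2 faults per signal line (stuck-at-0 and stuck-at-1).
signal_lines = 1500              # illustrative circuit size
total_faults = 2 * signal_lines
detected = 2964                  # faults the pattern set detects (made up)

coverage = 100.0 * detected / total_faults
print(f"stuck-at fault coverage: {coverage:.1f}%")
```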
Faults:
Stuck at faults occur when a line is permanently stuck to Vdd or ground
giving a faulty output. This line may be an input or output to any gate. Also
this fault can be single or multiple stuck at faults.
When a signal, or gate output, is stuck at a 0 or 1 value, independent of the
inputs to the circuit, the signal is said to be “stuck at” and the fault model
used to describe this type error is called a “stuck at fault model”.
A fault model is an engineering model of something that could go wrong in
the construction or operation of a piece of equipment. From the model, the
designer or user can then predict the consequences of this particular fault.
Basic fault models in digital circuits include the stuck-at fault model, the
bridging fault model, the transistor faults, the open fault model, the delay
fault model, etc. In the past several decades, the most popular fault model
used in practice is the single stuck-at fault model.
To use this fault model, each input pin on each gate, in turn, is assumed to
be grounded, and a test vector is developed to indicate that the circuit is faulty.
Since each signal line can be stuck at either 0 or 1, a circuit with n signal
lines has potentially 2n stuck-at faults defined on it.
The test vector is a collection of bits to apply to the circuit's inputs, and a
collection of bits expected at the circuit's output. If the gate pin under
consideration is grounded, and this test vector is applied to the circuit, at
least one of the output bits will not agree with the corresponding output bit
in the test vector.
After obtaining the test vectors for grounded pins, each pin is connected in
turn to logic one and another set of test vectors is used to find faults
occurring under these conditions. These faults are called single stuck-at-0
and single stuck-at-1 faults, respectively.
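The detection idea can be sketched in Python on a toy circuit, y = (a AND b) OR c. The `circuit`/`detects` helpers and the fault naming are my own illustration, not from any ATPG tool:

```python
# Single stuck-at fault detection on a tiny circuit: y = (a AND b) OR c.
# A test vector detects a fault if the faulty circuit's output differs
# from the fault-free output for that vector.

def circuit(a, b, c, stuck=None):
    # 'stuck' optionally forces one named node to a fixed 0 or 1.
    def node(name, value):
        if stuck and stuck[0] == name:
            return stuck[1]
        return value
    a, b, c = node("a", a), node("b", b), node("c", c)
    w = node("w", a & b)        # internal AND-gate output
    return node("y", w | c)

def detects(vector, fault):
    a, b, c = vector
    return circuit(a, b, c) != circuit(a, b, c, stuck=fault)

# (1, 1, 0) drives the AND output to 1 and propagates it through the OR,
# so it detects "w stuck-at-0":
print(detects((1, 1, 0), ("w", 0)))   # True
# (0, 0, 0) cannot detect it, since the good output is already 0:
print(detects((0, 0, 0), ("w", 0)))   # False
```

An ATPG tool does exactly this search at scale: for each modeled fault, find a vector that both excites the fault site and propagates its effect to an observable flop or output.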
The stuck-at fault model is a logical fault model because no delay
information is associated with the fault definition.
It is also called a permanent fault model because the faulty effect is assumed
to be permanent, in contrast to intermittent faults, which occur (seemingly) at
random, and transient faults, which occur sporadically, perhaps depending on
operating conditions like temperature or power supply voltage, or on the data
values (high or low voltage states) on surrounding signal lines. The single
stuck-at fault model is structural because it is defined based on a structural
gate-level circuit model.
A pattern set with 100% stuck-at fault coverage consists of tests to detect
every possible stuck-at fault in a circuit. 100% stuck-at fault coverage does
not necessarily guarantee high quality, since faults of many other kinds—
such as bridging faults, opens faults, and transition or delay faults—often
occur.
Why DFT?
A simple answer: DFT is a technique that makes a design testable after production.
It is the extra logic we put into the normal design, during the design process,
to help with post-production testing. Post-production testing is necessary because
the manufacturing process is not 100% error free. Defects in the silicon introduce
errors into the physical device, and of course a chip will not work per its
specifications if such errors are introduced during production. The question is
how to detect that. Since running the full set of functional tests on each of,
say, a million manufactured devices would be very time consuming, some method was
needed to convince us, without exhaustive testing, that each device has been
manufactured correctly. DFT is the answer. It only detects whether a physical
device is faulty or not. After the post-production test, if a device is found
faulty, trash it and don't ship it to customers; if it is found good, ship it.
Since it is a production fault, there is assumed to be no cure, so DFT performs
only detection, not even localization of the fault. That is the intended purpose
of DFT.
=>https://www.slideshare.net/agyemanh/fa-31428491
=>Default Voltage Area
The IC Compiler tool automatically derives a default voltage area after you
create the first voltage area. Conversely, when you delete the last voltage
area of the design, the tool automatically removes the default voltage area.
The default voltage area is used for the placement of cells that are not
specifically assigned to any voltage area. The default voltage area can contain
top-level leaf cells and buffer-type level shifters, as well as blocks.
You can prevent the insertion of new cells in the default voltage area by setting
the mv_no_cells_at_default_va variable to true (the default is false). Setting
this variable to true eliminates the requirement for setting the dont_touch
attribute on the affected top-level nets in the default voltage area. It also
controls the placement of new buffers in the default voltage area. This variable
is effective for designs with both abutted and nonabutted voltage areas at the
top level.