DSDV (18EC644) Module 5

DESIGN METHODOLOGY
A design methodology codifies the process of design, verification and preparation for
manufacture of a product. It involves development of virtual prototypes to support design
analysis and refinement.
This module elaborates design and verification ideas, and also considers the larger
context in which digital systems are designed.

1. Design flow
 Design methodology is not standardized across the industry
o Each organization defines its own methodology based on the kinds of design
projects it undertakes, and evolves the methodology from project to project.
 A prototypical design methodology divides the design flow into a number of stages:
functional design, synthesis, and physical design.
 Each stage includes verification steps to ensure that the design meets its requirements and
satisfies constraints.
 Figure 1 shows the elements of the design flow, including hierarchical hardware/software
codesign, integrated into a single diagram.
 The product of the design process is a set of data files used in manufacturing the product.
 Each manufactured unit is then tested and delivered to the end customer or market.
 The design flow for embedded systems is refined to include design and verification of the
embedded software.
 For many complex systems, the hardware is simply the platform upon which to deliver
the software.
o Developing the software is a major proportion of the system development effort.
 A key part of a design methodology is the set of electronic design automation (EDA)
tools used to support it.
o EDA tools can analyze and refine virtual prototypes


Figure 1 A prototypical design flow, including hardware/software codesign.

Architecture Exploration
 Architecture exploration is the process of modeling and evaluating candidate designs at a
high level of abstraction.
 A system is partitioned for subsequent refinement.
 Logical partitioning identifies functional components, whereas physical partitioning
identifies physical hardware components.


 Logical functions are mapped onto physical partitions.


 The physical partitions can include processor cores, accelerators, memories and I/O
controllers.
 Hardware/software partitioning -
o A given logical component may be mapped to a specialized hardware component
whose only task is to implement that logical component.
o Another logical component may be mapped to a software task run on a processor
core under control of a real-time operating system.
Eg. System partitioning – A transport monitoring system
 Consider a road transport monitoring system that checks whether freight trucks drive
from one part of the country to another in too short an interval.
 Stations on freeways each have a video camera on a gantry over the road.
 The video images are analyzed to identify the license plate of each truck passing
underneath, and the time and license number are logged.
 The information is transmitted to a central facility for recording and comparison with
information from other stations.
 A hypothetical functional decomposition (logical partitioning) of the monitoring station is
shown at the top of Figure 2.
o It includes logical components for input of video from a camera, filtering to
remove noise, edge-detection, shape detection, license plate detection, character
recognition to identify the license number, logging, network interface, system
control, and diagnostic and maintenance tasks.
 This logical structure can be mapped onto the physical structure (physical partitioning)
shown at the bottom of Figure 2.
o In this case, the physical components comprise an embedded system with
accelerators for video processing up to the shape-detection stage.
o License plate detection and recognition, logging, system control, and diagnostics
and maintenance tasks are mapped onto software tasks running on the processor
core.


Figure 2 Logical partitioning (top) and physical partitioning (bottom) of a transport
monitoring system.
 Architecture exploration and partitioning are often done by expert system designers
o It is very difficult to automate these tasks using EDA tools
 The result of architecture exploration is a high-level specification of the system.
 For each of the components in the system, the specification describes the function,
connections to other components, and constraints.
 The specification might be expressed in a language that can be executed or simulated,
such as certain forms of the Unified Modeling Language (UML).

Functional Design
 Functional design refines partitions to a level from which implementations can be
synthesized.
 A behavioral model of the component is developed, expressing its functionality at an
intermediate level of abstraction between system level and register transfer level.
o The purpose of the behavioral model is to allow function verification of the
component before proceeding to detailed implementation.
 Components may be implemented through IP reuse or by core generators.
o Intellectual property, or IP, refers to reusable components from a previous
system, from a library of components, or from a component vendor.


o A core generator is an EDA tool that generates a model of a component based on
parameters that describe its function.
 Core generators are available for memories, arithmetic units, bus
interfaces, digital signal processing, and finite-state machines.
 Need for revision management (source code control)
o Required in hardware model development and software development
o Revision management software helps coordinate designers’ work by maintaining
a repository of versions of the code.
 Such tools are included in EDA tool suites or available as open-source tools

Functional Verification
 Functional verification ensures that the refined design meets functional requirements,
and can be performed using simulation and formal verification.
 Functional coverage is the proportion of functionality verified.
 Successful verification of a system requires a verification plan that identifies –
1. what parts of the design to verify
2. what functionality to verify
3. how to verify
o Hierarchical decomposition of the system identifies what parts to verify
o The specification for each component defines what functionality to verify
 At higher levels of the design hierarchy, it is much harder to verify
functional requirements under all circumstances
 Use functional coverage
o Use techniques like directed testing and constrained random testing to find out
how to verify
 Directed testing involves identifying particular test cases to apply to the
DUV (design under verification) and checking the output for each test case.
o Effective for simpler components
 Constrained random testing involves a test case generator randomly
generating input data, subject to constraints on the ranges of values
allowed for the inputs
o Verification languages – Vera, e, SystemVerilog
 Both techniques require checkers that ensure that the DUV produces the
correct outputs for each applied test case.


 A comparison testbench, illustrated in Figure 3, verifies that the implementation
has the same functionality as the behavioral model (a code sketch follows below).

Figure 3 A comparison testbench for comparing outputs of a behavioral model and its
RTL refinement.
o Directed and constrained random testing are both simulation-based verification
techniques.
 It is not feasible to attain 100% coverage with these techniques
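
The following is a minimal Verilog sketch of the comparison testbench of Figure 3,
combining constrained random stimulus with a checker. The module names (alu_behav,
alu_rtl), the registered-adder function, and the operand range are illustrative
assumptions, not taken from the text.

// Behavioral model and its "RTL refinement" (trivially identical here so
// the sketch is self-contained; in practice the RTL model would be the
// synthesizable refinement under verification).
module alu_behav (input clk, input [7:0] a, b, output reg [7:0] y);
  always @(posedge clk) y <= a + b;
endmodule

module alu_rtl (input clk, input [7:0] a, b, output reg [7:0] y);
  always @(posedge clk) y <= a + b;
endmodule

// Comparison testbench: apply the same constrained random stimulus to
// both models and check that their outputs agree on every cycle.
module compare_tb;
  reg        clk = 0;
  reg  [7:0] a, b;
  wire [7:0] y_behav, y_rtl;

  alu_behav duv_behav (.clk(clk), .a(a), .b(b), .y(y_behav));
  alu_rtl   duv_rtl   (.clk(clk), .a(a), .b(b), .y(y_rtl));

  always #5 clk = ~clk;

  integer i;
  initial begin
    for (i = 0; i < 1000; i = i + 1) begin
      // Constrained random stimulus: operands restricted to 0..99.
      a = {$random} % 100;
      b = {$random} % 100;
      @(posedge clk); #1;
      // Checker: report any divergence between the two models.
      if (y_rtl !== y_behav)
        $display("Mismatch at %0t: a=%h b=%h rtl=%h behav=%h",
                 $time, a, b, y_rtl, y_behav);
    end
    $finish;
  end
endmodule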

 Formal verification allows complete verification that a component meets a specification.


 The specification is embodied in one or more asserted properties, expressed in a property
specification language, such as PSL
o A property can be as simple as a Boolean expression relating the values of signals
in the design.
o A formal verification tool performs state-space exploration to verify the asserted
properties.
 Writing properties that completely and accurately capture the intent of a specification is
very difficult.
 Properties can also be used in simulation-based verification of a system to generate a
checker
 Properties can be re-used for formal verification after an initial phase of simulation-based
verification.
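
For example, a simple handshake property might assert that a request is always
acknowledged on the following cycle; in PSL this intent could be written as
assert always (req -> next ack). A hand-written Verilog checker embodying the same
property for simulation-based verification (the signal names are illustrative
assumptions):

// Checker for the property "whenever req is asserted, ack must be
// asserted on the next cycle". Signal names are illustrative.
module req_ack_checker (input clk, input req, input ack);
  reg req_d;                      // req delayed by one cycle
  initial req_d = 0;
  always @(posedge clk) begin
    if (req_d && !ack)
      $display("Property violated at %0t: req not followed by ack", $time);
    req_d <= req;
  end
endmodule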

Hardware/Software Co-Verification
 Hardware/software co-verification uses instruction-set simulators and hardware emulation
to test software before hardware models are available.
 Software and hardware can be tested together using co-simulation.


Synthesis
 Synthesis is the refinement of the functional design to a gate-level netlist.
 Synthesis can be performed automatically using an RTL synthesis tool.
o Automatic RTL synthesis is used for FPGA-based designs.
o Custom design is required if a design is complex, has very high performance
requirements, and is implemented as an ASIC
 RTL synthesis starts with models of the design refined to the register-transfer level.
o Many VHDL or Verilog features cannot be synthesized into equivalent gate-level
circuits.
o RTL synthesis tools require that RTL models be written using a subset of
language features
 Eg. Sequential hardware should be expressed using always blocks
o Early synthesis tools performed relatively simple pattern recognition on the HDL
source code to determine which hardware circuits were implied.
o Subsequent developments in synthesis tools focused more on improving the
quality and optimization of the synthesized hardware
 RTL models may not be portable across a range of tools
o To help designers write interoperable models, the IEEE has defined two
standard coding styles for synthesizable models, one for VHDL (IEEE Standard
1076.6) and the other for Verilog (IEEE Standard 1364.1).
 A synthesis tool starts by analyzing the model, checking to make sure the code conforms
to its style requirements.
o It also performs some design rule checks, such as checking for unconnected
outputs, undriven inputs, and multiple drivers on nonresolved signals.
o The tool then infers hardware constructs for the model (illustrated in the sketch
at the end of this section). This involves tasks such as:
 Analyzing wire and variable declarations to determine the encoding and
the number of bits required to represent the data.
 Analyzing expressions and assignments to identify combinational circuit
elements, such as adders and multiplexers, and to identify the input, output
and intermediate signal connections.
 Analyzing always blocks to identify the clock and control signals, and to
select the appropriate kinds of flip-flops and registers to use.
o The tool determines an implementation of each of the inferred hardware elements
using primitive circuit elements selected from a technology library.


 A technology library is a collection of components that are available within
the implementation fabric selected for the design
 Provided by ASIC or FPGA vendor
 Typical components in a library - inverting and noninverting gates,
small multiplexers, carry chain components, and flip-flops.
o The process of translating the design into a circuit of library components is guided
by synthesis constraints
 Such constraints include bounds on clock periods and propagation delays.
 Synthesis tool uses the constraints to choose among alternative
implementations.
 Specific predetermined implementations created using the core generators can be
directly instantiated.
 Verify whether the implementation produced by the synthesis tool meets timing
constraints.
 Simulate (gate-level simulation) the implemented design to ensure that it meets
functional requirements.
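
The following minimal synthesizable Verilog sketch illustrates the inference step
described above: from the conditional expression the tool infers a multiplexer and an
adder, and from the clocked always block an 8-bit register with synchronous reset. The
accumulator function itself is an illustrative assumption.

// A minimal synthesizable RTL sketch illustrating inference.
module accum (
  input            clk,
  input            rst,
  input            load,     // 1: load d, 0: accumulate
  input      [7:0] d,
  output reg [7:0] q
);
  wire [7:0] next_q = load ? d : q + d;   // mux and adder inferred

  always @(posedge clk) begin             // flip-flops inferred
    if (rst)
      q <= 8'b0;
    else
      q <= next_q;
  end
endmodule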

Physical Design
 Physical design refines the gate-level design into an arrangement of circuit elements in
an ASIC, or builds the programming file that configures each element in an FPGA.

Physical design for ASICs - consists of floorplanning, placement, and routing.

Floorplanning involves deciding where each of the blocks in the partitioned design is to be
located on the chip.
 A number of factors influence the floor plan -
o Blocks that have a large number of connections between them should be placed
near each other
o Blocks that are connected to external pins should be placed near the edge of the
chip.
o The blocks should be arranged to make the chip as close to square as possible,
since that influences the size of the package that can be used.
 Square chips are easier to package than rectangular chips.
 Floorplanning involves –
o Arrangement of power supply and ground pins and internal connections


o The connection and distribution of clock signals across the chip


o Provision of channels for laying out interconnections between blocks.
 EDA tools assist by visualizing floorplans and allowing blocks to be rearranged,
ensuring all the while that the floorplan remains feasible, and by analyzing alternative
floorplans to determine figures of merit.

Placement and routing - involves positioning each cell in a synthesized design (placement)
and finding a path for each connection (routing).
 The main goals are to position all cells and route all connections (not always achievable!),
while minimizing area and delay of critical signals.
 The result of placement and routing is a suite of files to send to the chip foundry for
fabrication.
 Detailed timing information is generated, based on the actual positions of components
and wires.
 Detailed timing simulation is a final check to ensure the design meets its timing constraints.
 Placement and routing are automated by EDA tools

Physical design for FPGAs - involves deciding how to implement the synthesized design
using the programmable resources of a prefabricated chip.
 Involves – Floorplanning, mapping, and placement and routing

Floorplanning
 A good arrangement of blocks in the FPGA fabric-
o reduces the number of long-distance interconnects
o Simplifies connections to I/O blocks and their associated pins
 For smaller FPGA-based applications, the floorplan generated automatically by the
vendor’s EDA tools is sufficient.
 For larger designs, if there is difficulty in fitting a design into a given FPGA, either
attempt to improve the floorplan, or use a larger FPGA.

Mapping - involves identifying the FPGA-specific resources to be used for each of the
library components instantiated in the synthesized design.
 The result is an implementation of the design using logic blocks, I/O blocks and FPGA-
specific resources, as opposed to the library cells used by the synthesis tool.


Placement and routing - involves identifying specific blocks and routing wires in the FPGA
to use for the mapped blocks, while minimizing area and delay of critical signals.
 Performed by automatic tools (EDA tools) provided by the FPGA vendor
o Constraints on placement and timing can be specified to improve placement or
routing
 The final result is a bit file specifying how the FPGA is to be configured.
 Detailed timing information for the design can be generated based on the internal timing
parameters of the logic blocks and interconnect in the FPGA
 Final timing simulations are performed to verify that the implemented design meets
timing constraints.

2. Design Optimization
 Design optimization usually involves making trade-offs of one property against another.
 A design can be optimized at various stages in the design flow.
 The main parameters to optimize are area, timing, and power consumption.

Area Optimization
 The area of a circuit determines cost.
 Cost can be affected by managing the area of the design

Area can be optimized at various stages in the design flow.


 Preliminary floorplanning can be done as part of the partitioning step of
architecture exploration.
o This may exclude some candidate architectures as infeasible, and favor others that
have less area
 The number of pins that will be required for the chip can be estimated
o If the pin count is large, the area required for the pad ring may constrain the
overall area of the chip; in that case, alternative architectures with reduced pin
counts can be considered
 An early floorplan can avoid wasted time at later stages of the design flow
 In the functional design stage of the design flow, circuit area can be influenced through
choice of components, whether explicitly instantiated or implied by RTL model code.


 In the synthesis stage, the circuit area can be influenced by specifying constraints to the
synthesis tool.
o the tool can be directed to use a synthesis strategy that favors minimizing area
instead of delay, or to use additional effort to optimize the design instead of
reducing turnaround
 In the physical design stage, circuit area can be influenced through intervention in the
floorplanning, placement and routing of the circuit.
o fine tuning
o cannot readily change the number or kind of components used or the amount or
connectivity of the wiring between them
o Hence, decisions made earlier in the flow have more significant impact.

Timing Optimization
 The aim of timing optimization is to ensure that a design meets performance constraints.
o Performance and timing are essentially the inverses of each other.
 Goal is to maximize the number of operations per second, or, conversely, to minimize the
time per operation.
 In the architecture exploration stage of the design flow, the greatest impact on
performance comes from application of parallelism, limited by the data
dependencies involved.
o Increasing parallelism is in conflict with minimizing area and power, since the
extra resources required to realize the parallelism take up area and consume
power.
 need to make trade-offs
 In the functional design stage of the design flow, timing can be influenced through
choice of components.
o The same approach is followed in area optimization, but the area objectives may
conflict with timing optimization
 In the synthesis stage, directives and hints can be specified to a synthesis tool to optimize
timing of the detailed design, and then analyze the resulting synthesized circuit to verify
that timing constraints are met.
o If they are not, revise the directives and hints and resynthesize.
o If unable to meet constraints through this iterative process, need to revisit earlier
stages of the design flow


 A static timing analysis tool analyzes the synthesized design
o It uses timing estimates for each of the components in the technology library
 In the physical design stage, timing can be fine tuned by choice of placement of
components and wires.
o EDA tools automate this task
 Accurate delay values for components and wiring can be extracted after physical design.
o Repeat the static timing analysis using these values to verify whether timing
constraints are met
o If they are not, earlier stages of the design flow must be revisited to improve the
timing of the circuit.

Power Optimization
 Power consumption has become a more significant constraint in the design of digital
systems
 Electrical power consumed by a circuit is turned into heat, which must be dissipated
through the chip and system packaging.
 Dealing with additional heat dissipation adds cost to a system, so keeping power
consumption to a minimum is part of keeping cost down.
 Approaches to minimize circuit area also help reduce power consumption

Approaches to reduce power consumption –


1. Identify blocks of a system that remain idle for substantial periods during the system’s
operation, and to remove power from those blocks during idle periods.
 Eg. (i) Powering down a network card when the computer is not connected to a
network cable
(ii) An embedded microcontroller need only be active for small periods of time to
sample data inputs and determine control settings. The rest of the time it is in
standby mode
(iii) Recent processor cores also include power management features
 Disadvantage –
o Not simple to implement – a powered-down block may cause spurious effects on
other parts of the system connected to it
o Operation of a powered-down block resumes after a significant delay


2. Implement power management through clock frequency control within the real-time
operating system of an embedded computer.
 The clock generator in such a system would need to be adjustable under program
control.
 Can be used in CMOS circuits to reduce dynamic power consumption
 If the performance requirements of a system are not constant, that is, if there are
periods where high performance is required and other periods where lower
performance is acceptable, dynamic power consumption can be reduced by
reducing the clock frequency.

3. Clock gating - involves turning off the clock to parts of a circuit whose stored values do
not need to change.
 Another common way of reducing power in CMOS systems
 With clock gating, the components see no clock transitions when the clock is turned
off, as shown in Figure 4. Here, the clock is gated off for two cycles. During that
interval, the component consumes no dynamic power.

Figure 4 Timing diagram for a flip-flop with a gated clock.


 Gating a clock is not simple
o the resulting clock edges would be skewed from those of the ungated clock
o since the gating control signal is typically generated by a clocked control
section, a naive approach can lead to glitches on the gated clock signal, as
shown in Figure 5.
o The glitch may cause unreliable triggering of the components to which the
gated clock is connected (a glitch-free gating circuit is sketched below).

Figure 5 Glitch on a gated clock due to poor design.
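
A common remedy, not detailed in the text above, is a latch-based clock gating cell: a
level-sensitive latch holds the enable stable while the clock is high, so the AND gate
output cannot glitch. A minimal Verilog sketch follows; real designs would normally use
a vendor-provided integrated clock-gating cell.

// Latch-based clock-gating cell: the latch is transparent only while
// clk is low, so the enable seen by the AND gate is stable whenever
// clk is high, preventing glitches on the gated clock.
module clock_gate (
  input  clk,
  input  en,       // gating control from clocked control logic
  output gclk      // gated clock
);
  reg en_latched;
  always @(clk or en)
    if (!clk)
      en_latched <= en;   // latch open while clk is low
  assign gclk = clk & en_latched;
endmodule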


Analysis of a circuit design must be performed to determine whether power constraints are
met in the final circuit.


3. Design for Test (DFT)


 Design for test enhances testability of a product, thus reducing test cost.
 Testing involves applying test patterns to a circuit’s inputs and verifying that the
expected outputs are produced.
 The intention is to verify that the manufactured chip performs as designed.
 Additional circuitry is included to improve the system’s testability.
o Such circuitry includes elements that make internal nodes observable, or that
perform testing automatically as a special mode of system operation.

Fault Models and Fault Simulation


 Fault models represent the effects of defects (faults) in a circuit, and are used by a fault
simulator to determine fault coverage of a set of test vectors.
 The simulator applies test vectors until an incorrect output results, indicating that the fault
has been detected.
 If no incorrect output is produced for all of the test vectors, the fault remains undetected
by that set of vectors.
 The simulator repeats the simulation for other faults and other locations in the circuit.
 Once all of the faults have been simulated, the fault coverage of the test vectors, that is,
the proportion of faults detected, can be determined.
 Ideally, the fault coverage should be 100%, but for a large design, this may not be
feasible.
 An automatic test pattern generator (ATPG) can be used to choose test vectors
o ATPG – EDA tool that analyzes a circuit and seeks to create a minimal set of test
vectors with as close to full coverage as possible.

Fault models:
1. Stuck-at model – A fault model in which an input or output of a gate in a circuit can be
stuck at 0 or stuck at 1, rather than being able to change between 0 and 1.
 Such a fault might be caused by a short circuit to the ground or power supply.
 This is illustrated in Figure 6, in which an input to the AND gate is stuck at 1.
o For some input combinations to the circuit (b = 1 or c = 1), the value at the stuck
node would normally be the same as the stuck-at value; the circuit would produce
the correct output, and the fault would not be detected.


o For other input combinations (b = 0 and c = 0), the value at the stuck node would
be the opposite of the stuck-at value.
o In this circuit, if a = 0, the output value is independent of the value at the stuck
node, so the fault is masked. However, if a = 1, the value of the stuck node is
propagated to the output, allowing the fault to be detected.

Figure 6 A circuit with a stuck-at-1 fault.


 Detecting the fault involves applying a combination of input values that sensitizes the
path from the fault to the output and that drives the stuck node to the opposite of its stuck-
at value.
 A node in a circuit is observable if a fault at the node can be made to result in an incorrect
output value.
 The node is controllable if there are input combinations that cause the node to take on a
given value.
 Observability and controllability of nodes in a circuit determine the testability of the
circuit.
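
The following Verilog fault-injection sketch illustrates this. It assumes, based on the
description of Figure 6, that the circuit computes y = a AND (b OR c) with the (b OR c)
input of the AND gate stuck at 1; the exact circuit in the figure may differ.

// Fault-injection sketch for a stuck-at-1 fault. The good circuit and
// a faulty copy (with the injected fault) are simulated side by side.
module stuck_at_tb;
  reg a, b, c;
  wire node_good   = b | c;
  wire node_faulty = 1'b1;            // stuck-at-1 fault injected here
  wire y_good   = a & node_good;
  wire y_faulty = a & node_faulty;

  integer v;
  initial begin
    // Exhaustive test: only the vector a=1, b=0, c=0 both sensitizes
    // the path (a=1) and drives the node opposite its stuck-at value.
    for (v = 0; v < 8; v = v + 1) begin
      {a, b, c} = v[2:0];
      #1;
      if (y_good !== y_faulty)
        $display("Fault detected by a=%b b=%b c=%b", a, b, c);
    end
  end
endmodule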

2. Stuck-on or stuck-off model – considers the transistor-level circuits for gates, and
involves transistors being stuck on or stuck off.
 This detects faults that are not adequately represented by the stuck-at fault model.

Figure 7 Output driver circuit of a gate.


 For example, given the output driving circuit of a gate, shown in Figure 7, a fault might
cause the upper transistor to be stuck on.
o When the gate should be driving a high logic level, its output is correct. However,
when it should be driving a low logic level, both transistors are on.


o This creates a voltage divider, and the output logic level is an invalid level
between the valid high and low levels.
 A testing approach used for such faults is to measure the steady-state current drawn from
the power supply (IDDQ) to detect the increase when both transistors are on.

Other fault models –


Bridging faults – represent short-circuit connections between signal wires
Delay faults – the propagation delay of a circuit is longer than normal
Faults in storage elements

Scan Design and Boundary Scan


 Faults in registers and other storage elements are significantly more difficult to control
and observe. Scan design techniques address this problem.

Scan Design
 Scan design techniques involve modifying the registers to allow them to be chained into a
long shift register, called a scan chain, as shown in Figure 8.
 Test vectors can be shifted into the registers in the chain, under control of the test mode
input, thus making them controllable.
 Stored values can also be shifted out of the registers, thus making them observable.

Figure 8 Connection of modified registers in a scan chain.


 The chain of registers allows the combinational blocks between registers to be
controlled and observed.
 Each combinational block can be tested separately.
o Shift test values into the register chain until the test vector for each block reaches
the input registers for that block
o Run the system in its normal operational mode for one clock cycle, clocking the
output of each block into the block’s output registers.


 Apply test vectors to the external inputs of the system and observe the external outputs of
the system, in order to test any combinational input and output circuits.
 Finally, shift the result values out through the register chain.
 The test equipment controlling the process compares the output values with the expected
results to detect any faults.
 This sequence is repeated until all of the test vectors have been applied to all of the
combinational blocks, or until a fault is detected.
 Advantage - increased controllability and observability
o High fault coverage is feasible, especially for large circuits
o Test generation for testing combinational circuits can be automated by ATPG
tools to achieve 100% fault coverage.
 Modification of the registers to allow them to function as shift registers can also be
automated.

Figure 9 Modified flip-flop for use in a scan chain.

o One approach is to design and synthesize the circuit normally, generating a gate-
level circuit with flip-flops implementing the registers.
o Then as part of physical design, a tool can substitute modified flip-flops that have
a shift mode, as shown in Figure 9.
o The circuit is placed normally.
o Finally, during the routing step, connections are made between adjacent flip-flops
to form the shift-register chain.
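
A minimal Verilog sketch of such a modified flip-flop, in the spirit of Figure 9 (port
names are illustrative):

// Scan flip-flop: an input multiplexer selects normal data or scan
// data under test-mode control, forming one stage of the scan chain.
module scan_ff (
  input      clk,
  input      test_mode,   // 1: shift scan data, 0: normal operation
  input      d,           // normal data input
  input      scan_in,     // from previous flip-flop in the scan chain
  output reg q            // also feeds scan_in of the next flip-flop
);
  always @(posedge clk)
    q <= test_mode ? scan_in : d;
endmodule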
 Disadvantage of scan design
1. Overhead in circuit area and delay
o modified flip-flops have additional circuitry, including an input multiplexer
o input multiplexer imposes additional delay in the combinational path leading to
the flip-flop input
 If the path is a critical timing path, performance of the whole system is
affected.


2. The scan chain is very long


o Shifting test vectors in and result vectors out takes a large fraction of test time

 Issue - Possibility of faults within the test hardware (scan chain)


o First test the scan chain
o Then, proceed to test the internal circuits of the system

Boundary scan
 Boundary scan (extension of scan design concept) is a technique for testing the
connections between chips on a PCB.
o The idea is to include scan-chain flip-flops on the external pins of each chip.
o To test the PCB, the test equipment shifts a test vector into the scan chain.
o When the chain is loaded, the vector is driven onto the external outputs of the
chips.
o The scan-chain flip-flops then sample the external inputs, and the sampled values
are shifted out to the test equipment.
o The test equipment can then verify that all of the connections between the chips,
including the chip bonding wires, package pins and PCB traces, are intact.
o Various test vectors can be used to detect different kinds of faults, including
broken connections, shorts to power or ground planes, and bridges between
connections.
 The success of boundary scan techniques led to the formation of the Joint Test Action
Group (JTAG) in the 1980s for standardizing boundary scan components and protocols.
 The term JTAG has now become synonymous with boundary scan in its basic and
extended forms
o JTAG (Boundary scan) supports automatic testing of individual chips and
PCBs containing multiple chips.
 Standardization has been managed for some time by the IEEE as IEEE Standard 1149.1.
 The JTAG standard specifies that each component have a test access port (TAP),
consisting of the following connections:
o Test Clock (TCK): provides the clock signal for the test logic.
o Test Mode Select Input (TMS): controls test operation.
o Test Data Input (TDI): serial input for test data and instructions.
o Test Data Output (TDO): serial output for test data and instructions.


o Test Reset Input (TRST): optional input that asynchronously resets the test logic.


 Figure 10 shows a typical connection of automatic test equipment (ATE) to the TAPs
of components on a PCB. Figure 11 shows the test logic within each component.

Figure 10 Connection of ATE to a system with multiple JTAG TAPs.

Figure 11 Architecture of a component with JTAG boundary scan.


 The TAP controller governs operation of the test logic.
 There are a number of registers for test data and instructions, and a chain of boundary
scan cells inserted between external pins and the component core.
 A typical boundary scan cell is shown in Figure 12.


Figure 12 A JTAG boundary scan cell for an input or output pin.


 Depending on the control inputs to the cell, data can flow straight through, input data can
be captured, output data can be driven, and test data can be shifted through.
 Input and output pins of the component each require just one cell.
 Tristate output pins require two cells: one to control and observe the data, and the other to
control and observe the output enable.
 Bidirectional pins require three cells, as they are a combination of a tristate output and an
input.
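
A minimal Verilog sketch of such a cell, in the spirit of Figure 12. In the actual
standard the capture and update stages are clocked by gated clocks from the TAP
controller; modeling them here as enables on TCK, with illustrative control names, is a
simplifying assumption.

// Boundary scan cell: data can flow straight through, the pin value
// can be captured, test data can be shifted through the chain, and
// test data can be driven from the update register.
module bscan_cell (
  input  tck,
  input  pin_in,      // from the pin (input cell) or core (output cell)
  input  scan_in,     // from previous cell in the boundary scan chain
  input  shift_dr,    // 1: shift the chain, 0: capture pin_in
  input  clock_dr,    // capture/shift enable (from TAP controller)
  input  update_dr,   // load the update register
  input  mode,        // 1: drive test data, 0: pass pin data through
  output scan_out,    // to next cell in the chain
  output cell_out     // to the core (input cell) or pin (output cell)
);
  reg capture_ff, update_ff;
  always @(posedge tck)
    if (clock_dr)
      capture_ff <= shift_dr ? scan_in : pin_in;
  always @(posedge tck)
    if (update_dr)
      update_ff <= capture_ff;
  assign scan_out = capture_ff;
  assign cell_out = mode ? update_ff : pin_in;
endmodule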
 The TAP Controller operates as a simple FSM, changing between states depending on the
value of the TMS input (a sketch of a subset appears after this list).
 Different states govern shifting of data into the Instruction Register or one of the data
registers (including the scan chain).
 The JTAG standard defines a number of instructions for operations that select
among data registers, control the mode of the scan chain, and so on.
 There are also instructions for component-specific extensions like built-in self test
modes.
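
As a sketch of the FSM just described, the following Verilog models only the
data-register column of TAP controller states, with transitions governed solely by TMS
on each rising edge of TCK. The full IEEE 1149.1 controller has sixteen states,
including a mirrored instruction-register column, which is folded back to
Test-Logic-Reset in this simplified subset.

// Subset of the TAP controller FSM (data-register column only).
module tap_fsm_subset (
  input            tck,
  input            trst_n,  // optional asynchronous test reset
  input            tms,
  output reg [3:0] state
);
  localparam TEST_LOGIC_RESET = 4'd0,
             RUN_TEST_IDLE    = 4'd1,
             SELECT_DR_SCAN   = 4'd2,
             CAPTURE_DR       = 4'd3,
             SHIFT_DR         = 4'd4,
             EXIT1_DR         = 4'd5,
             PAUSE_DR         = 4'd6,
             EXIT2_DR         = 4'd7,
             UPDATE_DR        = 4'd8;

  always @(posedge tck or negedge trst_n)
    if (!trst_n)
      state <= TEST_LOGIC_RESET;
    else
      case (state)
        TEST_LOGIC_RESET: state <= tms ? TEST_LOGIC_RESET : RUN_TEST_IDLE;
        RUN_TEST_IDLE:    state <= tms ? SELECT_DR_SCAN   : RUN_TEST_IDLE;
        // tms=1 here enters the instruction-register column of the
        // full FSM, folded back to TEST_LOGIC_RESET in this subset.
        SELECT_DR_SCAN:   state <= tms ? TEST_LOGIC_RESET : CAPTURE_DR;
        CAPTURE_DR:       state <= tms ? EXIT1_DR         : SHIFT_DR;
        SHIFT_DR:         state <= tms ? EXIT1_DR         : SHIFT_DR;
        EXIT1_DR:         state <= tms ? UPDATE_DR        : PAUSE_DR;
        PAUSE_DR:         state <= tms ? EXIT2_DR         : PAUSE_DR;
        EXIT2_DR:         state <= tms ? UPDATE_DR        : SHIFT_DR;
        UPDATE_DR:        state <= tms ? SELECT_DR_SCAN   : RUN_TEST_IDLE;
        default:          state <= TEST_LOGIC_RESET;
      endcase
endmodule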

 The JTAG standard also defines the Boundary Scan Description Language (BSDL),
which is a subset of VHDL used to specify the pins, registers, and instructions
implemented in the test logic of a component.
 We can use a BSDL description of a component, together with a set of test vectors, as
input to ATE for testing the component and the board in which it is embedded.
 JTAG architecture solves two problems: in-circuit testing of components in a system,
and in-circuit testing of the connections between the components

Built-In Self Test (BIST)


 Built-in self test (BIST) techniques involve adding test circuits that generate test
patterns and analyze output responses.


 Role of the ATE is reduced to initiating test operations, verifying successful completion,
or, if a test fails, storing any diagnostic information produced by the BIST circuits
 Advantage of BIST:
o Being embedded in a system, it can generate test vectors at full system speed.
 can also generate multi-cycle test sequences
o BIST hardware remains available during the operational lifetime of the system
 Disadvantage: larger area overhead
 A system with BIST and redundant components is capable of reporting faults to a
service center over a network connection

Generating test patterns:


 The most common means of generating test patterns is a pseudorandom test pattern
generator.
 Pseudorandom sequences can be repeated from a given starting point, called the seed.
 Pseudo-random sequences can be readily generated with a simple hardware structure
called a linear-feedback shift register (LFSR).
 Figure 13 shows an LFSR for generating sequences of 4-bit values.
 The sequence is initiated by presetting the flip-flops, generating the test value 1111 as the
seed.
 On successive clock cycles, the LFSR generates values in the sequence shown in Figure
13.
 The sequence contains all possible 4-bit values except 0000.

Figure 13 A 4-bit LFSR for generating pseudo-random test vectors.


 The LFSR can be modified to form a complete feedback shift register (CFSR), as shown in
Figure 14, which generates all possible values.


Figure 14 A 4-bit CFSR for generating pseudorandom test vectors.


 Similar circuits can be designed to generate pseudo-random test vectors of other lengths.
 Placement of the XOR gates within the LFSR is determined by the characteristic
polynomial of the LFSR
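
A minimal Verilog sketch of such an LFSR, assuming the maximal-length characteristic
polynomial x^4 + x^3 + 1 (the tap positions in Figure 13 may differ). Seeded with 1111,
it cycles through all fifteen nonzero 4-bit values.

// 4-bit LFSR for pseudo-random test pattern generation.
module lfsr4 (
  input            clk,
  input            preset,   // load the seed
  output reg [3:0] q
);
  always @(posedge clk)
    if (preset)
      q <= 4'b1111;                     // seed
    else
      q <= {q[2:0], q[3] ^ q[2]};       // shift with XOR feedback
endmodule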

Analyzing output responses


 It is infeasible to store the entire correct output response for comparison with the
circuit’s output response
 A commonly used technique for output response compaction is signature analysis
 This technique is closely related to use of LFSRs for test pattern generation, and the same
mathematical theory underlies operation of both.
 A signature register forms a summary, called a signature, of a sequence of output
responses.
 Two sequences that differ slightly are likely to have different signatures.
 Figure 15 shows an example of a multiple-input signature register (MISR), with four
inputs from a circuit under test and a 4-bit signature.

Figure 15 A 4-bit MISR with four inputs from a circuit under test.


 First, a logic simulation of the circuit is performed.
o The sequence generated by the LFSR is used as input values for the simulation
 Output values from the simulation are used to compute the expected signature
 This signature is saved for use during test
 When BIST of a circuit is initiated, either by ATE during manufacturing test or by an
in-system test controller during system operation, the LFSR generates test patterns
and the MISR computes the signature of the actual circuit’s outputs.
 The ATE or test controller then shifts the computed signature out of the MISR and
compares it with the expected signature.
 If they are the same, no fault is detected. If they differ, there is definitely a fault.
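
A minimal Verilog sketch of a MISR in the spirit of Figure 15: an LFSR whose stages also
XOR in one output bit each from the circuit under test, compacting the response sequence
into a signature. The feedback taps are assumed to match the LFSR sketch earlier; the
taps in Figure 15 may differ.

// 4-bit multiple-input signature register (MISR).
module misr4 (
  input            clk,
  input            reset,    // start a new signature
  input      [3:0] cut_out,  // outputs from the circuit under test
  output reg [3:0] sig       // accumulated signature
);
  always @(posedge clk)
    if (reset)
      sig <= 4'b0000;
    else
      sig <= {sig[2:0], sig[3] ^ sig[2]} ^ cut_out;
endmodule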

4. Nontechnical Issues
 Electronics products go through life cycles. Stages of life cycle -
o Market research and financial modeling
o Product design
o Establishment of manufacturing facilities and supply channels, and sales and
distribution channels
o Maintenance and repair or customer service.
o Redesign to meet changing needs, or reuse of the design in other products.
o Product becomes obsolete and is retired from production and support.
 Financial models can be applied to estimate revenue from a product over its life
cycle.
o Revenue from a product typically peaks early in the product’s life cycle, and tails
off until obsolescence.
o The non-recurring engineering (NRE) costs of developing the product, along with
other up-front costs, must be met from the revenue stream.
 Time-to-market:
o Entering the market early has a critical impact on revenue.
o Late entry allows competitors to gain market share, reducing the revenue available
for the late product, and possibly making it unprofitable.
o Time-to-market pressures for the design of products with short life cycles are
very intense. Eg. Cell phones


 For products with very long life cycles, attributes such as reliability and maintainability
are important. Eg. Military systems and telecommunications infrastructure
o Such long-lived products must typically be supported throughout their lifetime.
o The design phase will also involve development of design documentation, and liaison
with support service providers to develop support plans, procedures and
documents.
 Implementation technology continues to evolve rapidly.
o Each generation of chip technology allows more transistors to be packed into a
given chip area, more bits of storage per memory chip, and higher clock
frequencies.
o If the design process for a complex system spans an 18-month period, a new
technology generation is likely to be available when the product reaches the
manufacturing stage.
o Designing using the previous generation may well lead to a product with lower
performance or capacity than competitors’ products.
o At the start of a design project, designers should be aware of technology trends
 and make projections to determine the appropriate technology for the future
manufacture of the product
 Project management - Design of a digital system is a complex undertaking.
o For smaller systems, a small team of engineers can feasibly deal with product
definition and specification, detailed design, verification, and manufacture.
o For larger systems, a larger development team is typically needed.
 Larger teams are often structured with subteams being responsible for
different aspects of the design methodology, such as architectural
definition, detailed design, verification, test development, and liaison with
the manufacturing facility.
 It is important for individual team members to understand the structure of
the overall project and the context in which they are working.
o Maintaining good communication and information flow within the project is
critical.
o Good project management is essential to a successful outcome.
