
Design Methodologies for IIR Filters: Impulse Invariance and Bilinear Transformation

The Impulse Invariance and Bilinear Transformation methods are two common
techniques for designing Infinite Impulse Response (IIR) filters. They transform analog
filter designs into digital filters while preserving certain properties of the original system.

1. Impulse Invariance Method

Objective:

The Impulse Invariance Method maps the analog filter’s impulse response into the digital
domain. It preserves the time-domain characteristics of the analog filter.

Design Steps:

- Start with the Analog Transfer Function: Given the analog transfer function H(s):

H(s) = N(s) / D(s)

- Expand Using Partial Fraction Expansion: Decompose H(s) into partial fractions:
H(s) = Σ_{k=1}^{N} A_k / (s − p_k)

where A_k are the residues and p_k are the poles of H(s).

- Obtain Z-Transform Using Impulse Invariance Transformation

1/(s − p_k)  →  1/(1 − e^{p_k T_s} z^{-1})
- Transfer Function of Digital Filter:

H(z) = Σ_{k=1}^{N} A_k / (1 − e^{p_k T_s} z^{-1})
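The pole mapping above can be sketched in a few lines of Python. This is a minimal sketch, not a full design routine: the function name and the (residue, digital pole) section representation are illustrative choices.

```python
import math

def impulse_invariance(residues, poles, Ts):
    """Map analog first-order sections A_k/(s - p_k) to digital
    sections A_k/(1 - e^{p_k*Ts} z^-1): each analog pole p_k
    becomes the digital pole e^{p_k*Ts}."""
    return [(A, math.exp(p * Ts)) for A, p in zip(residues, poles)]

# Example: H(s) = 1/(s + 1) has one pole at s = -1 with residue 1.
sections = impulse_invariance([1.0], [-1.0], Ts=0.1)
A, zp = sections[0]   # digital section: 1/(1 - e^{-0.1} z^-1)
```

Note that a stable analog pole (Re(p_k) < 0) maps to a digital pole with |e^{p_k T_s}| < 1, which is why stability is preserved for stable prototypes.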
Key Features:

- Advantages: Preserves the time-domain impulse response and is simple to implement.

- Disadvantages: Aliasing may occur, making it unsuitable for precise frequency-domain designs.

2. Bilinear Transformation Method

Objective:

The Bilinear Transformation Method maps the analog filter's frequency response to the
digital domain while avoiding aliasing. It preserves stability and monotonicity but
introduces frequency warping.

Design Steps:

- Start with the Analog Transfer Function: Begin with the analog transfer function H(s).
H(s) = 1/s

- Apply the Bilinear Transformation: Replace s with the bilinear transformation:

s = (2/T_s) · (z − 1)/(z + 1)

- Frequency Pre-Warping: Pre-warp critical frequencies to correct for distortion caused by the nonlinear mapping.

- Digital Filter Transfer Function: Substitute the bilinear transformation into H(s) to
obtain the digital filter transfer function.
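The steps above can be sketched for a concrete prototype. The code below discretizes a first-order lowpass H(s) = ω_c/(s + ω_c) with the bilinear transformation, including pre-warping of the cutoff; the function name and coefficient layout are assumptions for illustration.

```python
import math

def bilinear_lowpass1(wc, Ts):
    """Discretize H(s) = wc/(s + wc) via s -> (2/Ts)(z - 1)/(z + 1),
    pre-warping the cutoff so the digital response is exact at wc."""
    wa = (2 / Ts) * math.tan(wc * Ts / 2)   # pre-warped analog cutoff
    K = 2 / Ts
    b0 = wa / (K + wa)                      # numerator: wa * (1 + z^-1)
    a1 = (wa - K) / (K + wa)                # denominator: 1 + a1 * z^-1
    return [b0, b0], [1.0, a1]

# 100 Hz cutoff at an 8 kHz sampling rate
b, a = bilinear_lowpass1(wc=2 * math.pi * 100, Ts=1 / 8000)
```

Two properties worth checking: the DC gain (b0 + b1)/(1 + a1) is exactly 1, and |a1| < 1 for any positive cutoff, confirming that stability is always preserved.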
Key Features:

- Advantages: Eliminates aliasing, preserves stability, and ensures monotonicity of the frequency response.

- Disadvantages: Frequency warping requires correction, and calculations are more complex.

Comparison of Methods

Feature                 | Impulse Invariance                     | Bilinear Transformation
------------------------|----------------------------------------|-----------------------------------
Aliasing                | Possible                               | Avoided
Frequency Warping       | None                                   | Present
Time-Domain Accuracy    | Preserves impulse response             | Alters impulse response
Stability Preservation  | Preserved if original filter is stable | Always preserved
Applications            | Best for low-frequency designs         | Suitable for wide frequency ranges

Conclusion:

The Impulse Invariance Method is ideal for lowpass filters but may suffer from aliasing.
The Bilinear Transformation Method avoids aliasing and ensures stability, making it
preferred for most practical IIR filter designs.
Fixed-Point vs Binary Floating-Point Number Representation

Fixed-Point Representation

Definition:

Fixed-point representation stores numbers with a fixed number of digits for the integer
and fractional parts. It uses a predefined scaling factor to determine the position of the
radix point.

Key Characteristics:

- Range: Limited range since the number of bits is fixed, and the scaling factor is static.

- Precision: Precision depends on the scaling factor and bit allocation between the integer
and fractional parts.

- Hardware Requirements: Simple implementation, often used in low-cost or resource-constrained systems.

- Arithmetic Operations: Faster and requires less computational effort since operations
are straightforward.

Example:

Consider a 16-bit fixed-point number with 8 bits for the integer part (including the sign) and 8 bits for the fractional part (Q8.8 format). Its range is limited to −128 to 127.99609375, in steps of 1/256.
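The Q8.8 example can be modeled directly. A minimal sketch, assuming saturating quantization (the helper name and the choice to saturate rather than wrap are illustrative):

```python
def to_q8_8(x):
    """Quantize x to signed Q8.8 fixed point: 16 bits total,
    8 fractional. The raw integer is saturated to the signed
    16-bit range [-32768, 32767] before rescaling."""
    raw = int(round(x * 256))            # scale by 2^8
    raw = max(-32768, min(32767, raw))   # saturate on overflow
    return raw / 256

# Largest representable value:  32767/256 = 127.99609375
# Smallest representable value: -32768/256 = -128.0
```

Values outside the range saturate to the endpoints, which is exactly the limited-range behavior described above.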

Binary Floating-Point Representation

Definition:

Binary floating-point representation stores numbers in the form (−1)^s · M · 2^E, where s is the sign bit, M is the mantissa (or significand), and E is the exponent.

Key Characteristics:

- Range: Wider range as the exponent allows representation of very large and very small
numbers.

- Precision: Wide dynamic range, but precision is limited by rounding errors, which become noticeable for very large or very small numbers.

- Hardware Requirements: Complex implementation requiring floating-point units (FPUs).
- Arithmetic Operations: Slower than fixed-point due to the need for normalization and
rounding.

Example:

A 32-bit floating-point number uses 1 bit for the sign, 8 bits for the exponent, and 23 bits
for the mantissa. It can represent values ranging from approximately 10^-38 to 10^38.
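The 1/8/23 bit split can be inspected with Python's struct module. A small field-extraction sketch (the helper name is an illustrative choice):

```python
import struct

def float32_fields(x):
    """Return (sign, biased exponent, fraction) of the IEEE 754
    single-precision encoding of x."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31                 # 1 bit
    exponent = (bits >> 23) & 0xFF    # 8 bits, bias 127
    fraction = bits & 0x7FFFFF        # 23 bits
    return sign, exponent, fraction

# 1.0 encodes as sign 0, biased exponent 127 (i.e. 2^0), fraction 0
```

For example, −2.0 = (−1)^1 · 1.0 · 2^1 gives sign 1, biased exponent 128, fraction 0.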

Performance Comparison

Aspect              | Fixed-Point Representation                                | Binary Floating-Point Representation
--------------------|-----------------------------------------------------------|-----------------------------------------------
Range               | Limited due to static scaling factor                      | Very wide due to the dynamic exponent
Precision           | High within the limited range                             | Decreases for very large or very small numbers
Hardware Complexity | Simple; requires less hardware                            | Complex; needs dedicated floating-point units
Speed               | Faster; operations are straightforward                    | Slower due to normalization and rounding
Memory Usage        | Efficient; fewer bits are sufficient                      | Requires more bits for similar range/precision
Applications        | Embedded systems, DSPs, resource-constrained environments | Scientific computing, graphics, applications needing wide dynamic range

Conclusion:

Fixed-point representation is ideal for applications requiring high performance and low
power consumption, such as digital signal processing. Binary floating-point
representation is preferred for applications requiring wide dynamic range and flexibility,
such as scientific calculations and graphics rendering.
Architectural Features of ADSP-21xx Processors

The ADSP-21xx processors are high-performance digital signal processors designed for
real-time signal processing. They feature a Harvard architecture with separate memory
buses for parallel data and instruction access. These processors are optimized for tasks
like audio processing and telecommunications.

1. Data Address Generators (DAGs):

• Two independent Data Address Generators (DAG #1 and DAG #2) provide
efficient addressing modes for accessing data memory and registers.
• They support addressing modes such as circular buffering and bit-reversed
addressing, which are particularly useful in digital signal processing (DSP)
applications.
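Circular buffering and bit-reversed addressing, which the DAGs perform in hardware at no cycle cost, can be modeled in software. An illustrative sketch (function names are assumptions):

```python
def circular_next(ptr, step, base, length):
    """Advance a pointer through a circular buffer starting at 'base'
    with 'length' elements, wrapping as a DAG does for FIR delay
    lines."""
    return base + (ptr - base + step) % length

def bit_reverse(index, bits):
    """Reverse the low 'bits' bits of 'index' -- the access order a
    radix-2 FFT needs for its input or output samples."""
    out = 0
    for _ in range(bits):
        out = (out << 1) | (index & 1)
        index >>= 1
    return out
```

For an 8-element buffer at base 0, advancing pointer 7 by 2 wraps to 1; for a 3-bit index, 6 (110b) bit-reverses to 3 (011b).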

2. Program Sequencer:

• The program sequencer controls the flow of instructions in the processor.


• It manages instruction fetching, branching, and looping efficiently, enabling the
execution of complex algorithms.

3. Instruction Register and Cache Memory:

• Instructions are fetched into the instruction register from the on-chip cache
memory, reducing memory access delays.
• Cache memory allows fast access to frequently used instructions, improving
overall performance.

4. Arithmetic Logic Unit (ALU):

• Performs arithmetic and logical operations required for DSP computations.


• Operates on data fetched from input registers and writes results back to output
registers.

5. Multiplier-Accumulator (MAC) Unit:

• Dedicated hardware for performing multiplication and accumulation in a single cycle.
• Crucial for DSP tasks like filtering, convolution, and Fourier transforms.
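The MAC's role in filtering can be illustrated with a direct-form FIR inner product, where each tap is one multiply-accumulate. A sketch only: on the ADSP-21xx, each loop iteration would be a single-cycle MAC instruction rather than Python code.

```python
def fir_output(coeffs, samples):
    """Compute one FIR output sample as a chain of
    multiply-accumulate operations, one per filter tap."""
    acc = 0.0
    for h, x in zip(coeffs, samples):
        acc += h * x    # one MAC: multiply, then add to accumulator
    return acc
```

With coefficients [1, 2, 3] and samples [4, 5, 6], the accumulated result is 1·4 + 2·5 + 3·6 = 32.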

6. Shifter:

• A dedicated shifter unit supports various shift and scaling operations.


• Useful for adjusting the dynamic range of signals in fixed-point arithmetic.
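A software model of the shifter's block-scaling job, assuming integer samples scaled by powers of two via arithmetic shifts (the function name is illustrative):

```python
def scale_block(samples, shift):
    """Scale fixed-point integer samples by 2**shift using arithmetic
    shifts, as a shifter unit does to manage dynamic range. Negative
    'shift' scales down (right shift), positive scales up."""
    if shift >= 0:
        return [x << shift for x in samples]
    return [x >> -shift for x in samples]
```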

7. Registers:

• Input and output registers store intermediate data for processing within the ALU,
MAC, and Shifter units.
• Registers help reduce data access latency.

8. Buses:

• Multiple buses (PMA, DMA, PMD, and DMD) enable parallel access to program
and data memory, enhancing the throughput.
• The R bus connects various computational units, ensuring seamless data
exchange.

9. Bus Exchange Unit:

• This unit manages data transfer between the different buses, facilitating efficient
memory and register access.

Key Advantages:

• High-speed computations enabled by parallel processing of arithmetic and memory access operations.
• Efficient support for DSP algorithms through specialized units like the MAC and Shifter.
• Low-latency operation facilitated by dedicated buses and registers.
• Compact and power-efficient design, making ADSP-21xx processors ideal for embedded DSP applications.

Architecture of a TMS Processor
The TMS processor architecture features a streamlined design optimized for digital signal
processing applications. Below is an explanation of its components based on the block
diagram:

1. Central Processing Unit (CPU):


- The CPU consists of an integer/floating-point multiplier and an integer/floating-point
ALU for efficient mathematical operations.
- It includes extended-precision registers for handling large intermediate results during
computations.
- Two address generators and auxiliary registers allow flexible memory addressing,
useful for DSP applications like circular addressing.

2. Memory Blocks:
- The architecture includes program cache, RAM blocks, and ROM blocks connected via
data buses for fast and parallel data access.
- The Harvard architecture separates the program memory and data memory for
concurrent data and instruction fetching.

3. Direct Memory Access (DMA):


- The DMA module features address generators and control registers, enabling high-speed data transfers between memory and peripherals without CPU intervention.
4. Controller:
- The controller handles system inputs like reset signals, interrupts, and clock signals. It
ensures proper coordination of processor activities.

5. Peripheral Interfaces:
- Peripheral components such as serial ports (Serial Port 0 and Serial Port 1) and timers
(Timer 0 and Timer 1) facilitate external device communication and timing operations.

6. Buses:
- The architecture utilizes a primary bus and expansion bus to interconnect internal
modules and facilitate external memory and peripheral access.

The TMS processor’s modular and efficient design ensures high performance in real-time
signal processing tasks.
Use of DSP Techniques in Image Processing and Wireless Communication
Applications

1. Image Processing:
DSP (Digital Signal Processing) techniques are essential in digital image processing for
various tasks, such as:

• Filtering: Removing noise, enhancing edges, and improving image clarity through spatial and frequency-domain filters.
• Compression: Implementing algorithms like JPEG and MPEG to compress
images and videos for efficient storage and transmission.
• Feature Extraction: Detecting objects, patterns, or edges for applications like
facial recognition and object detection.
• Image Transformation: Using Fourier or Wavelet transforms for frequency
analysis, image enhancement, and reconstruction.
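As a small example of spatial-domain filtering from the list above, a 3×3 averaging (mean) filter that smooths noise. A minimal sketch: borders are simply dropped rather than padded, and the nested-list image format is an illustrative choice.

```python
def mean_filter_3x3(img):
    """Smooth a 2D grayscale image (list of rows) with a 3x3
    averaging kernel; border pixels are omitted for simplicity,
    so the output is 2 rows and 2 columns smaller."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(1, h - 1):
        row = []
        for c in range(1, w - 1):
            s = sum(img[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            row.append(s / 9)   # average of the 3x3 neighborhood
        out.append(row)
    return out
```

A single bright pixel of value 9 surrounded by zeros averages down to 1, showing how impulsive noise is attenuated.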

2. Wireless Communication:
In wireless communication, DSP techniques are fundamental for ensuring efficient and
reliable data transmission. Common applications include:

• Modulation and Demodulation: Encoding and decoding digital data using schemes like QAM and PSK.
• Error Detection and Correction: Ensuring accurate data delivery using error-correction codes like Reed-Solomon and convolutional codes.
• Channel Equalization: Mitigating signal distortions caused by interference or
multipath propagation.
• Speech and Data Compression: Reducing bandwidth requirements with
compression algorithms like MP3 and AMR.
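As a tiny illustration of the modulation step listed above, a Gray-coded QPSK mapper that turns bit pairs into unit-energy complex symbols. The mapping table is one common convention, not taken from any specific standard.

```python
import math

# Gray-coded quadrant assignment: adjacent symbols differ in one bit
QPSK = {(0, 0): (1, 1), (0, 1): (-1, 1),
        (1, 1): (-1, -1), (1, 0): (1, -1)}

def qpsk_modulate(bits):
    """Map an even-length list of bits to QPSK symbols scaled to
    unit energy (each I/Q component is +/- 1/sqrt(2))."""
    s = 1 / math.sqrt(2)
    return [complex(i * s, q * s)
            for i, q in (QPSK[(bits[k], bits[k + 1])]
                         for k in range(0, len(bits), 2))]
```

Every output symbol has magnitude 1, so the transmitted power is constant regardless of the bit pattern.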

DSP Applications in Speech Processing

1. Speech Enhancement:
DSP techniques like noise suppression and echo cancellation are used to improve speech
clarity in telecommunications and hearing aids.

2. Speech Compression:
Compression methods such as Linear Predictive Coding (LPC) and Code-Excited Linear
Prediction (CELP) enable efficient speech storage and transmission.
3. Speech Recognition:
Features like Mel-frequency Cepstral Coefficients (MFCCs) are extracted using DSP and
used in AI-based models for speech recognition.

4. Text-to-Speech (TTS) Conversion:
DSP synthesizes speech signals from text, powering virtual assistants, audiobooks, and accessibility tools.

5. Voice Authentication:
DSP-based algorithms analyze voice characteristics for biometric authentication in secure
systems, such as banking and smart devices.
