
Past Papers:

1. ISR

Assume an AVR ATMEGA328p Micro-controller is used for this design and that both switches
cannot be turned on at the same time.
(a) Analyse the diagram shown in Figure No 1 and pseudo-code program and determine the
basic logic function that is being implemented with this system.
The system implements an XOR logic between the two inputs (PIN1 and PIN2). The LED will be
turned on if the two inputs are different (one is high and the other is low), and it will be turned off
if both inputs are the same.
(b) Describe the advantages of Interrupts compared to Polling in Micro-controllers.
Efficient CPU Utilization: Interrupts allow the CPU to perform other tasks and only handle events
when necessary, leading to better multitasking.
Low Power Consumption: The CPU can remain idle or in a low-power mode until an interrupt
occurs.
Faster Response: Interrupts enable the CPU to respond to events as soon as they happen, unlike
polling, which might cause delays due to continuous checking.
(c) Specify the type of interrupt in AVR micro controller that can be used for this design.
External Interrupts can be used for this design, such as INT0 or INT1 on the AVR ATMEGA328p.
These interrupts can be triggered by a change in the state of the input pins (PIN1 and PIN2).
(d) Write the AVR assembly code for the first two lines of Pseudo code. You cannot use two
different ports for inputs and outputs.
; Declare PIN7 as output
SBI DDRB, 7 ; Set the 7th bit of DDRB to 1 (output)
; Declare PIN1 and PIN2 as inputs
CBI DDRB, 1 ; Clear the 1st bit of DDRB to 0 (input)
CBI DDRB, 2 ; Clear the 2nd bit of DDRB to 0 (input)

(e) Identify the registers to enable the local interrupt and write the hexadecimal values that
need to be updated in the registers.
EIMSK (External Interrupt Mask Register) is used to enable external interrupts. For INT0 and
INT1:
EIMSK = 0x03 (Enables both INT0 and INT1).
LDI R16, 0x03 ; Load 0x03 into register R16
OUT EIMSK, R16 ; Enable INT0 and INT1
(f) Write the assembly command that can enable the Global Interrupt.
SEI ; Enable global interrupts
(g) Briefly describe the IVT (Interrupt Vector Table) in micro-controllers.
IVT is a table of memory addresses that store the starting addresses (vectors) of ISRs. When an
interrupt occurs, the microcontroller uses the interrupt vector corresponding to the interrupt to
jump to the correct ISR. Each interrupt source has a unique vector, which the microcontroller uses
to execute the appropriate ISR.
(h) According to the logic, write the AVR C code for the ISR (Interrupt Service Routine) in this
design.
ISR(INT0_vect) {
    if (PINB & (1 << PB1)) {          // If PIN1 is HIGH
        if (PINB & (1 << PB2)) {      // If PIN2 is HIGH
            PORTB &= ~(1 << PB7);     // Turn off LED
        } else {
            PORTB |= (1 << PB7);      // Turn on LED
        }
    } else {                          // If PIN1 is LOW
        if (PINB & (1 << PB2)) {      // If PIN2 is HIGH
            PORTB |= (1 << PB7);      // Turn on LED
        } else {
            PORTB &= ~(1 << PB7);     // Turn off LED
        }
    }
}
(i) What is the difference between RET and RETI used with Interrupt Service Routine (ISR)
in assembly?
RET: Returns from a function or subroutine. It does not restore the global interrupt flag, which
remains unchanged.
RETI: Returns from an Interrupt Service Routine (ISR) and restores the global interrupt flag, re-
enabling interrupts that were disabled when the ISR was triggered.

2. Serial Port Programming


Serial vs Parallel Data Transfer:
Parallel: Multiple data bits transmitted simultaneously over multiple channels.
Serial: Data bits transmitted one after the other over a single channel.

Synchronous vs Asynchronous Communication:


Synchronous: Sender and receiver share a common clock signal.
Asynchronous: Synchronization signal sent before each message, no shared clock.

Data Transmission Methods


Between AVR and Peripheral Devices: Direct connection to AVR ports for reading data.
Between AVRs or AVR and PC: Requires a communication protocol to manage data transfer.

Asynchronous Serial Communication


Protocol: Defines how data is packed, the bit rate, start/stop bits, and parity.
Data Framing: Wrapping data with start and stop bits (e.g., sending ASCII character “A” as
01000001).

Baud Rate and Bit Rate


Bit Rate: Number of bits transmitted per second.
Baud Rate: Number of symbols transmitted per second (for AVR, bit rate ≈ baud rate).
RS232 Standard
Interface Standard: Used for data communication, requiring RXD, TXD, and GND pins.
Voltage Levels: High (1) = -3 to -25V, Low (0) = 3 to 25V, not TTL compatible.
TTL/CMOS Logic: Differences in logic levels and noise margins between TTL and CMOS.

Serial Port Programming with AVR


USART (Universal Synchronous/Asynchronous Receiver/Transmitter):
Registers involved: UDR, UCSRA, UCSRB, UCSRC, UBRR.
UDR: For transmitting/receiving data.
UCSRA/B/C: Control and status registers for handling Rx/Tx signals, interrupts, data format.
UBRR: Baud rate generator.

USART Transmission and Reception


Transmission: Load data into UDR → Monitor UCSRA for transmission completion.
Reception:
Monitor UCSRA for data receipt.
Read data from UDR.
Baud Rate Setting: Calculations for setting the baud rate using UBRR.

Debugging Tools
Tools for Debugging:
HyperTerminal, Arduino IDE, RealTerm, Atmel Studio.
Using Interrupts:
Define ISRs for handling serial communication interrupts.
What is the baud rate in UART communication, and how is it set in AVR microcontrollers?
1. What is the baud rate in UART communication?
Baud rate in UART (Universal Asynchronous Receiver/Transmitter) communication refers to the
number of signal changes or symbols transmitted per second. In simple UART systems, the baud
rate is equivalent to the bit rate, meaning it defines the number of bits transmitted per second (bps).
For instance, a baud rate of 9600 means 9600 bits are transmitted each second. The baud rate
determines how fast data is transmitted over the serial connection, and both the transmitter and
receiver must agree on this rate to ensure proper communication.

2. How is the baud rate set in AVR microcontrollers?


In AVR microcontrollers, the baud rate for UART communication is set using the UBRR (USART
Baud Rate Register). This register determines the frequency of the baud rate generator, which, in
turn, sets the baud rate for serial communication. The UBRR register is split into two parts:
UBRRH (high byte) and UBRRL (low byte).

3. Interfacing
Interfacing refers to the communication between a processor and other components like memory
and peripherals.
Components Involved:
Processing: Data transformation, implemented using processors.
Storage: Data retention, implemented using memory.
Communication: Data transfer between processors and memories, implemented using buses.

Bus Structures
Wires: Uni-directional or bi-directional.
Bus: Set of wires with a single function, like address or data bus.
Protocols: Rules governing communication over the bus.

Ports: Conducting devices that connect a bus to the processor or memory.


Types:
Address Port: Used for sending addresses.
Data Port: Used for data transfer.
Control Port: Used to control the communication.
Basic Protocol Concepts
Time Multiplexing: Sharing wires for multiple data transfers to save space but increase time.
Control Methods:
Strobe Protocol: Simple, fixed timing.
Handshake Protocol: More flexible, uses acknowledgment signals.

I/O Addressing
Types:
Port-based I/O: Direct access to ports like registers.
Bus-based I/O: Uses address, data, and control lines to communicate.
Extensions:
Parallel I/O: Needed when the processor does not support bus-based I/O.
Extended parallel I/O: When the processor supports port-based I/O but more ports are needed.

Types of bus-based I/O:


Memory-Mapped I/O:
No special instructions required.
Peripheral registers occupy addresses in same address space as memory.
Standard I/O:
Requires special instructions (e.g., IN, OUT) to move data between peripheral registers and
memory.
Additional pin (M/IO) on the bus indicates whether the access is to memory or to a peripheral.

Direct Memory Access (DMA)


Allows peripherals to communicate with memory without involving the microprocessor, freeing
up processing power.
Buffering: Temporarily storing data in memory before processing
DMA Controller: Manages data transfer between peripheral and memory

Peripheral to Memory Transfer without DMA, Using Vectored Interrupt


In this approach, the microprocessor is directly involved in the data transfer between the peripheral
device and memory. The peripheral device uses an interrupt to notify the microprocessor when it
has data ready to be transferred. The microprocessor then executes an Interrupt Service Routine
(ISR) to perform the transfer.
1. A peripheral device, such as an I/O device, receives data that needs to be transferred to memory.
This data is usually stored in a register within the peripheral.
2. The peripheral asserts an interrupt request (Int) to signal the microprocessor that it has data
ready for transfer.
3. The microprocessor finishes its current instruction, saves its state (such as the program counter),
and acknowledges the interrupt by asserting the interrupt acknowledge (Inta) signal.
4. The peripheral or an interrupt controller provides a vector address on the data bus. This vector
address points to the specific Interrupt Service Routine (ISR) associated with the interrupting
device.
5. The microprocessor uses the vector address to jump to the ISR. The ISR is a special block of
code designed to handle the data transfer.
6. The ISR executes the data transfer. Typically, it reads the data from the peripheral's register (e.g.,
using an instruction like MOV R0, [Peripheral_Address]) and writes it to the desired memory
location (e.g., MOV [Memory_Address], R0).
7. After transferring the data, the ISR usually ends with a RETI (Return from Interrupt) instruction.
This restores the microprocessor’s saved state and resumes the execution of the main program.
8. After the ISR has executed and the data has been transferred, the peripheral de-asserts the
interrupt signal, indicating that the data transfer is complete.

• Interrupts: Used to notify the microprocessor that a peripheral needs servicing.


• ISR: A special routine that the microprocessor executes to handle the interrupt, during
which the actual data transfer occurs.
• Vectored Interrupts: Use an interrupt vector to jump to the correct ISR, allowing multiple
devices to have their dedicated ISRs.

Arbitration
Purpose: Manage multiple peripherals requesting service from a single resource.
Methods:
Priority Arbiter: Determines which request gets serviced first.
Types of priority
• Fixed priority
– each peripheral has unique rank
– highest rank chosen first with simultaneous requests
– preferred when clear difference in rank between peripherals
• Rotating priority (round-robin)
– priority changed based on history of servicing
– better distribution of servicing especially among peripherals with
similar priority demands

Daisy-Chain Arbitration: A sequence where peripherals pass requests downstream.


-Peripherals connected to each other in daisy-chain manner
-One peripheral connected to resource, all others connected “upstream”
-Closest peripheral has highest priority
Pros/cons
– Easy to add/remove peripheral - no system redesign needed
– Does not support rotating priority
– One broken peripheral can cause loss of access to other peripherals

Network-oriented arbitration: Used when multiple microprocessors share a bus (sometimes
called a network).

Serial and Parallel Communication


Parallel Communication:
Multiple wires, higher throughput over short distances.
Used on the same IC or circuit board.
Serial Communication:
Single data wire, used for longer distances.
More complex but less bulky and cheaper.
Wireless Communication:
Types:
Infrared (IR): Electromagnetic wave frequencies just below the visible light spectrum. Limited
range, requires line of sight.
Radio Frequency (RF): Electromagnetic wave frequencies in radio spectrum, Greater range, does
not require line of sight

Serial Protocols
I2C (Inter-IC):
Two-wire protocol developed by Philips.
Used for communication between ICs.
Capable of handling multiple devices with different addresses.
USB (Universal Serial Bus):
Common for connecting peripherals to a PC.
Different standards (USB 1.X, 2, 3.X, 4).
Tiered star topology allowing up to 127 devices.

Advanced Communication Principles


Layering: Breaking down protocols into layers for easier design and understanding.
Physical Layer: The lowest layer, responsible for transmitting bits across the medium.

Multilevel Bus Architectures


Processor-local Bus: High speed, connects processors, memory controllers, etc.
Peripheral Bus: Lower speed, connects peripherals, uses industry-standard protocols like ISA or
PCI.

Question
a) Difference between Port-based I/O and Bus-based I/O
Port-based I/O:
Definition: In port-based I/O, the processor communicates with peripherals directly through
dedicated I/O ports.
How It Works: The processor has specific ports that are either input or output. The software directly
reads or writes to these ports as if they were registers within the processor. Each port corresponds
to a peripheral device.

Bus-based I/O:
Definition: In bus-based I/O, the processor communicates with peripherals over a common bus
using address, data, and control lines.
How It Works: The processor, memory, and peripherals are connected to a system bus. The
processor sends an address on the bus to select a particular peripheral, followed by data and control
signals to perform the I/O operation. This method often uses a more complex communication
protocol.

(b) Draw a block diagram of processor, memory, peripheral and DMA controller connected
with a system bus, and explain the steps showing what happens during the data transfer from
peripheral to data memory using DMA.

Explanation of DMA Data Transfer:


Step 1: Request Initiation
The peripheral device that needs to transfer data asserts a request (REQ) signal to the DMA
controller.
Step 2: Bus Request
The DMA controller sends a request (DREQ) to the processor to gain control of the system bus.
Step 3: Processor Acknowledgment
The processor completes its current instruction and acknowledges the DMA controller by asserting
the DMA Acknowledge (DACK) signal, granting the bus control to the DMA controller.
Step 4: Data Transfer
The DMA controller initiates the data transfer directly between the peripheral and memory,
bypassing the processor. Data from the peripheral is transferred to the memory via the system bus.
Step 5: Completion
Once the data transfer is complete, the DMA controller releases the system bus, and the processor
resumes its operation.

(c) Advantages and Disadvantages of Daisy Chain Arbitration


Advantages:
Simple Implementation: Daisy chain arbitration is easy to implement, especially in systems with
a small number of devices.
Cost-effective: It requires minimal additional hardware, making it a low-cost solution for priority
management.
Flexible: Peripherals can be easily added or removed without redesigning the entire system.
Disadvantages:
Fixed Priority: The device closest to the processor has the highest priority, which can lead to lower
priority devices being starved of resources.
Single Point of Failure: If one peripheral in the chain fails, it can block access to all other
peripherals downstream.
No Support for Rotating Priority: This method does not support changing priorities dynamically
based on system needs.

4. Analog to Digital Converters
Analog Signals: Continuous signals representing physical quantities like temperature or sound.
Digital Signals: Discrete signals with binary states (0 or 1). (Eg: Light Switch --> On or off, Door
--> Open or closed.)

Basic Concepts in ADC


Quantization: Process of mapping a continuous range of values into a finite range of discrete states.
(Mapping a 0-10V signal into 1.25V increments)

Encoding: Assigning digital values to quantized states.


(Binary representation of discrete states)

Resolution: The smallest change in analog input that results in a change in the output digital value.
Higher resolution improves accuracy.
(A 10-bit ADC has 1024 levels of resolution)

Sampling Rate: Frequency at which an ADC samples the analog signal. Higher sampling rates
allow more accurate representation of the signal.
Nyquist Rule: Sampling frequency should be at least twice the highest frequency in the
signal to avoid aliasing (misrepresentation of the signal).
Aliasing: Occurs when the input signal is changing much faster than the sample rate.
Quantization Error: The difference between the actual analog value and the quantized value.
Average quantization error is half the step size.
Types of ADCs
1. Flash ADC (Parallel ADC)
Fastest type; uses multiple comparators for high-speed conversion.
Advantages: Speed.
Disadvantages: High cost and low resolution.

2. Sigma-Delta ADC
Uses oversampling and digital filtering to achieve high resolution.
Advantages: High resolution, no need for precision external components.
Disadvantages: Slower conversion speed.

3. Dual Slope (Integrating) ADC


Measures the time taken to charge and discharge a capacitor to determine the input signal.
Advantages: High accuracy and noise immunity.
Disadvantages: Slow conversion time.

4. Successive Approximation ADC


Uses a binary search algorithm to approximate the input signal.
Advantages: Good speed and accuracy tradeoff.
Disadvantages: Slower than Flash ADC, but faster than Sigma-Delta and Dual Slope ADCs.

Reference Voltage (Vref): Dictates the step size for conversion.


Conversion Time: Time required to convert an analog signal to digital. Depends on clock speed
and ADC architecture.
ATmega32 ADC Features
Resolution:
ATmega32 features a 10-bit ADC with a resolution of 1024 steps.
Input Channels:
8 single-ended analog input channels, 7 differential channels, and 2 differential
channels with optional gain (10x, 200x).
Output Registers:
ADCL (Low byte) and ADCH (High byte) hold the converted binary data.
Vref Options:
AVCC, Internal 2.56V, External AREF.
Conversion Time:
Determined by crystal frequency (Fosc) and prescaler bits (ADPS0:2).
ADC Interfacing and Programming
Interfacing: Involves connecting sensors (e.g., LM34 or LM35 temperature sensors) to the ADC
pins of a microcontroller.

FIVE major registers


ADCH/ADCL: Hold the ADC conversion result.
ADCSRA: ADC Control and Status Register, used for enabling ADC, setting prescaler, and
starting conversions.
ADMUX: ADC multiplexer selection register, used for selecting the ADC input channel and
reference voltage.
SFIOR: Special Function I/O Register.

Programming Steps:
(1) Make the pin for the selected ADC channel an input pin.
(2) Turn on the ADC module of the AVR because it is disabled upon power-on reset to save power.
(3) Select the conversion speed. We use registers ADPS2:0 to select the conversion speed.
(4) Select voltage reference and ADC input channels. We use the REFS0 and REFS1 bits in the
ADMUX register to select voltage reference and the MUX4:0 bits in ADMUX to select the ADC
input channel.
(5) Activate the start conversion bit by writing a one to the ADSC bit of ADCSRA.
(6) Wait for the conversion to be completed by polling the ADIF bit in the ADCSRA register.
(7) After the ADIF bit has gone HIGH, read the ADCL and ADCH; otherwise, the result will not
be valid.
(8) If you want to read the selected channel again, go back to step 5.
(9) If you want to select another Vref source or input channel, go back to step 4.

Interrupt-Driven ADC:
The ADC can be programmed to trigger interrupts upon conversion completion, allowing efficient
data handling without constant polling.

Sensor Interfacing with ADC (Temperature Sensors)


Analog Sensors: These sensors produce an analog output, typically a voltage, proportional to the
physical quantity they measure (e.g., temperature, light, pressure).
LM34 and LM35 Temperature Sensors:
LM34: Outputs a voltage proportional to the temperature in Fahrenheit. It gives 10 mV for
every degree Fahrenheit.
LM35: Outputs a voltage proportional to the temperature in Celsius. It gives 10 mV for
every degree Celsius.

Interfacing the LM34/LM35 with an ADC


Connecting the Sensor:
The output pin of the LM34/LM35 sensor is connected to one of the ADC input pins on the
microcontroller.
Ensure that the sensor’s power supply is stable and within the operating range specified in the
sensor’s datasheet.

Reading the Sensor Output:


The analog voltage from the sensor is fed into the ADC input.
The ADC converts this voltage into a digital value based on its resolution and reference voltage.

Example Calculation:

Signal Conditioning
Importance: Sensors might output signals in forms that are not directly suitable for ADC input
(e.g., too small, too noisy, or in a non-voltage form like current or resistance).
Techniques:
Amplification: If the sensor output is too low, it may need to be amplified.
Filtering: Noise can be reduced using low-pass filters, which allow the desired signal frequency to
pass while attenuating higher frequencies.
Level Shifting: Adjusting the signal to ensure it falls within the ADC’s input range.
Linearization: Some sensors produce non-linear outputs, requiring linearization to accurately
interpret the data.

Practical Considerations
Calibration: Sensors may need calibration to ensure accurate readings. This involves adjusting the
sensor output to match known reference values.
Temperature Compensation: If the sensor's performance varies with temperature, compensation
techniques may be necessary to maintain accuracy.
Power Supply Stability: The ADC’s accuracy is highly dependent on the stability of the Vref and
the sensor’s power supply. Using regulated power supplies and decoupling capacitors can improve
stability.

Simple Temperature Measurement System


5. PWM

Fast PWM

Phase Correct PWM


Question
An ATMEGA328P micro-controller is to be used as the basis of a greenhouse climate controller.
This system has the following temperature sensor and water sprinkler.
Temperature sensor: It outputs a signal of 10mV/°C.
The water sprinkler controller system: It controls two different zones by changing the rotating
speed of the sprinklers. The controller needs to receive a 976.589 Hz PWM signal with 75% duty
cycle from your micro-controller to rotate the sprinklers fast. For slow rotation of the sprinklers,
the controller needs to receive the same PWM signal with a 25% duty cycle.

(a) What are the blocks inside the Micro controller that can be used for this design?
ADC (Analog to Digital Converter): Converts the analog signal from the temperature sensor
(which outputs 10mV/°C) to a digital signal for processing.
PWM (Pulse Width Modulation) Module: Used to generate the PWM signal required to control
the speed of the water sprinklers.
Timer/Counter Module: Works with the PWM module to generate the specific frequency and duty
cycle needed (976.589 Hz with 25% and 75% duty cycles).
IO Pins: Used for interfacing with the external devices like the temperature sensor and the water
sprinkler system.

(b) Briefly explain about signal conditioning?


Signal conditioning refers to the process of manipulating an analog signal in a way that prepares
it for the next stage of processing. In the context of this design:
Amplification: The output of the temperature sensor may need to be amplified to match the input
range of the ADC.
Filtering: To remove noise from the sensor output before feeding it into the ADC.
Level Shifting: If the sensor's output does not match the expected range of the ADC, a level shifter
might be used to adjust the signal.

(c) In this design ADC block used external 2.56V reference voltage with 10bit output
resolution. Find the equation to convert the real temperature value from the binary output
of the ADC block.
(d) Find the threshold binary output value of the ADC to change the PWM operation.

(e) How to configure ADMUX register to the specific condition? (You have to choose left or
right justified ADC output that can be adapted to this situation.)
Reference Voltage: Set to the external 2.56V reference applied at the AREF pin (REFS1 = 0,
REFS0 = 0 on the ATMEGA328P; setting both bits selects the internal 1.1V reference instead).
Input Channel: Select the channel connected to the temperature sensor.
Left or Right Justified: For this design, right justification is typically preferred as it allows for more
straightforward manipulation of the 10-bit output without needing to shift the data.

(f) Briefly explain that why PWM is used in motor controller applications.
PWM is used in motor control applications because it efficiently controls the power delivered to
the motor, allowing for:
Speed Control: Varying the duty cycle of the PWM signal changes the average voltage, thus
controlling the motor speed.
Reduced Power Loss: PWM reduces power loss compared to analog control methods by switching
the power on and off rather than dissipating excess energy as heat.
Precision: PWM allows for precise control of motor speed and torque.

(g) Fast PWM mode is set in the Timer0 Block and the AVR Micro-controller is connected to a
16 MHz oscillator. Find the prescaler value that generates the given PWM signal.
(h) Fast PWM mode is Clear OC0 on compare match, set OC0 at TOP. Find the OCR0
values that can generate 25% and 75% duty cycles in this operation respectively.

6. Design Methodologies & Embedded Systems


6.1 Design Methodologies
A structured process for creating a system that improves quality and reduces costs.
Essential for managing complex systems with large specifications and multiple designers.

Main Techniques
Waterfall Model
Steps: Requirements → Architecture → Coding → Testing → Maintenance
Critiques: Limited feedback, assumes hardware is fixed.

Spiral Model:
Emphasizes iterative refinement and risk management.
Iterative stages: Requirements → Design → Test → Prototype.

Successive Refinement Model:


Involves repeated refinement of system specifications and designs.

Co-design Methodology:
Integrates hardware and software design processes to avoid bottlenecks.

Quality Assurance
Importance: Ensures that the final product meets requirements and functions correctly.
Verification Techniques: Prototyping, usage scenarios, formal techniques.

Good Requirements
Characteristics: Correct, unambiguous, complete, verifiable, consistent, modifiable, and traceable.

Design Reviews
Purpose: To catch design flaws through team meetings.
Roles: Designers, review leader, scribe, and audience.

6.2 Embedded Systems Design


Definition: Design of systems that integrate hardware and software.
Challenges: Complex functionality, productivity gaps, and tightly constrained metrics.

Design Process
Synthesis: Automating the conversion of system functionality to physical implementations.
Verification: Ensures designs are correct (accurate implementation) and complete (covers all
scenarios).
Design Evolution
Abstraction Levels:
Higher abstraction simplifies capturing design intent but may complicate implementation.

Gajski’s Y-chart:
Represents different types of descriptions (Behavioral, Structural, Physical) and their transitions.

Synthesis Techniques
Logic Synthesis: Converts logical descriptions into gate-level implementations.
Register-Transfer Synthesis: Converts finite state machines to processors.
Behavioral Synthesis: High-level synthesis for creating single-purpose processors.

Verification Techniques
Formal Verification: Proves properties of designs, such as correctness and completeness.
Simulation: Models design behavior and checks outputs against expected results.

Hardware/Software Co-simulation
Integration: Merges software and hardware simulations to create a comprehensive model.
Challenges: Speed and complexity in maintaining consistent data between models.

Emulation
Provides a faster alternative to simulation by using physical hardware (e.g., FPGAs) to map
designs.

• Functional Requirements: What the system should do (input/output).


• Non-functional Requirements: How the system performs (timing, reliability).
• CRC Cards: Class-Responsibility-Collaborator method for system analysis.
• Concurrent Engineering: Simultaneous design of all components to streamline processes.

7. Reliable System Design


Importance of Reliability and Fault Tolerance: Essential for systems used in critical applications
(e.g., aerospace, medical devices) where failures can have severe consequences.

Why Reliable Systems?


Increasing complexity and cost of systems.
Components can fail due to hostile environments, aging, poor design, etc.
Need for systems that are reliable, available, and safe.

Redundancy Techniques
Types of Redundancy:
Physical Redundancy: Duplicate hardware components.
Temporal Redundancy: Repeating tasks to ensure accuracy.
Analytical Redundancy: Using mathematical models to detect failures.
Applications of Reliability and Fault Tolerance
Safety-Critical Applications:
Aerospace systems (e.g., fly-by-wire).
Railway systems (e.g., driverless trains).
Medical devices.
Business-Critical Applications:
Financial transactions.
E-business systems.
Embedded/Real-Time Systems: Reliability is crucial for performance and safety.

Types of Faults
Categories:
Hardware Faults: Component failures, wear and tear.
Software Faults: Bugs, design flaws.
Human Errors: Operator mistakes.

Fault Handling
Definitions:
Fault: Defect within the system.
Error: Incorrect output due to a fault.
Failure: System's inability to perform as required.

Fault Management: Fault avoidance during development.


Fault detection and tolerance during operation.

Reliability and Maintainability


Reliability: Probability of performing intended functions over time.
Maintainability: Probability of restoring a system to operational status after a failure.
Availability: Probability that a system is operational at a given time.

Safety Considerations
Definitions:
Accident: Unplanned event causing damage.
Incident: Unplanned event with potential for harm.
Hazard: A situation that can cause an accident.
Safety-Critical Systems: Systems that can cause accidents due to failures.

Safety Standards
MIL-STD-882D: Military standard for systems safety.
IEC 61508: Standard for functional safety in electrical/electronic systems.

Dependability
Definition: The property allowing reliance on the service delivered by a system.
Attributes of Dependability:
Availability
Reliability
Safety
Confidentiality
Integrity
Maintainability

Developing Reliable Systems


Specification: Define functional and reliability requirements.
Design and Implementation: Focus on architecture and modular design.
Verification and Validation: Ensure the system meets specs and is fit for purpose.

Primary Design Techniques


Fault Avoidance: Prevent faults through design reviews.
Fault Masking: Localize faults to prevent errors from affecting the system.
Fault Tolerance: Design systems to perform in the presence of faults.
Fault Detection and Recovery: Identify and rectify faults during operation.

Reliability Analysis
Modeling Techniques:
Failure rates, mean time to failure, and availability metrics.
Markov Modeling: Used for reliability prediction based on current states.

Basic Steps in Fault Handling


Processes:
Fault confinement
Detection
Masking
Retry strategies
Diagnosis and recovery

8. Low Power Computing


Importance of Power Efficiency: Power is a critical constraint in embedded systems, with
increasing power demands and limited battery capacities.

Key Concepts
Power vs. Energy:
Power: Rate at which energy is consumed; measured in watts (W).
Energy: Total consumption over time; measured in joules (J).

Power Consumption in Embedded Systems


Types of Power Consumption:
Dynamic Power Consumption: Occurs during the operation of the circuits (charging/discharging
capacitors). Includes short circuit power during switching.
Static Power Consumption: Arises from leakage currents in inactive components, becoming
significant with smaller feature sizes in semiconductor technology.

Techniques for Low Power Computing


1. Basic Techniques
Parallelism: Utilizing multiple processing units to perform operations simultaneously, reducing
execution time.
Very Long Instruction Word (VLIW): Architecture that allows multiple operations to be executed
in parallel, reducing overhead.
Dynamic Voltage Scaling (DVS): Adjusting the supply voltage (Vdd) to reduce power
consumption. Reduces power quadratically with voltage, but increases gate delay.
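The quadratic dependence on voltage comes from the standard CMOS dynamic power equation, sketched below (the numeric operating points in the example are assumptions for illustration):

```c
/* Dynamic power of CMOS logic: P = a * C * Vdd^2 * f,
   where a = switching activity factor, C = switched capacitance,
   Vdd = supply voltage, f = clock frequency. */
double dynamic_power(double a, double c, double vdd, double f) {
    return a * c * vdd * vdd * f;
}
```

Scaling Vdd from 1.2 V to 0.9 V at the same frequency cuts dynamic power to (0.9/1.2)^2 ≈ 56% of the original, which is why DVS is so effective despite the gate-delay penalty.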

2. Power Supply Gating


Power Gating: Cutting off power to inactive components to minimize static power consumption.

Dynamic Power Management (DPM)


Optimizes power states based on workload.
States:
• RUN: Fully operational mode.
• IDLE: CPU is inactive but can respond to interrupts.
• SLEEP: Significant shutdown of on-chip activity.
Trade-offs: Balancing between energy savings and overhead from transitioning between states.
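The RUN/IDLE/SLEEP trade-off can be captured by a simple break-even rule: only enter a deeper state if the expected idle time outweighs the transition cost. A sketch, with assumed wake-up latencies and a hypothetical break-even factor of 2:

```c
typedef enum { RUN, IDLE, SLEEP } PowerState;

/* Pick the deepest power state whose wake-up cost is justified by
   the expected idle interval (latencies below are assumed values). */
PowerState choose_state(double idle_time_ms) {
    const double idle_wakeup_ms  = 0.1;
    const double sleep_wakeup_ms = 5.0;
    if (idle_time_ms > 2.0 * sleep_wakeup_ms) return SLEEP; /* worth the cost */
    if (idle_time_ms > 2.0 * idle_wakeup_ms)  return IDLE;
    return RUN;   /* interval too short; transitioning would waste energy */
}
```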

Energy Efficiency Considerations


Low Power vs. Low Energy Consumption:
Power minimization is crucial for supply design and short-term cooling.
Energy minimization is essential for mobile systems due to limited battery capacity and high costs.

Implementation Alternatives
Heterogeneous Architectures: Combining different types of processors to optimize performance
and power efficiency.
Specialization Techniques: Tailoring hardware/software for specific tasks to enhance efficiency.

Future Directions
Energy Harvesting: Techniques to capture energy from the environment (e.g., solar, kinetic) to
extend battery life.
1st Question: List down four basic techniques used to achieve low-power computing in
embedded systems.
Parallelism: Utilizing multiple processing units to perform operations simultaneously, reducing
execution time and power consumption.
Very Long Instruction Word (VLIW): Architecture that allows multiple operations to be executed
in parallel, enhancing efficiency and reducing overhead.
Dynamic Voltage Scaling (DVS): Adjusting the supply voltage to reduce power consumption, as
power decreases quadratically with a reduction in voltage.
Dynamic Power Management (DPM): Optimizing the power states of the system based on
workload, enabling components to enter low-power states when not in use.

2. You are tasked with designing an automated greenhouse irrigation system. The system
should monitor soil moisture levels using sensors placed in different areas of the greenhouse
and control irrigation accordingly. The system should be able to adjust watering schedules
based on real-time environmental conditions, such as temperature and humidity.
Additionally, it should provide remote monitoring and control capabilities through a user-
friendly interface accessible via a smartphone application or a web interface.

A) Outline the overall architecture of the embedded system, including the main components
and their interactions.
Main Components:
Soil Moisture Sensors: Monitor moisture levels in different areas.
Microcontroller: Central unit for processing sensor data and controlling the irrigation system.
Water Flow Control Valves: Regulate water flow based on commands from the microcontroller.
Environmental Sensors: Measure temperature and humidity.
User Interface: Smartphone app or web interface for remote monitoring and control.
Communication Module: Enables data transmission between the microcontroller and the user
interface (e.g., Wi-Fi or Bluetooth).

Interactions:
Sensors send data to the microcontroller.
The microcontroller processes the data and controls the valves.
The communication module relays information to the user interface for remote access.

b) Explain the design of the irrigation control mechanism, considering factors like water flow
rate, valve control, and automated scheduling based on soil moisture levels and
environmental conditions.
Water Flow Rate: The system should be able to measure and control the flow rate of water to avoid
overwatering.
Valve Control: Use solenoid valves that can be opened or closed based on commands from the
microcontroller.
Automated Scheduling:
The microcontroller adjusts watering schedules based on soil moisture readings.
Integrates environmental data (temperature, humidity) to optimize irrigation times and durations.
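The scheduling decision described above can be sketched as a threshold rule that environmental data shifts. This is one possible policy, not a prescribed design; all thresholds and adjustments are assumed values for illustration:

```c
#include <stdbool.h>

/* Decide whether to open the irrigation valve. Hot or dry air raises
   the soil-moisture threshold so watering starts earlier. */
bool should_irrigate(double moisture_pct, double temp_c, double humidity_pct) {
    double threshold = 30.0;                    /* base threshold (assumed) */
    if (temp_c > 35.0)       threshold += 5.0;  /* water sooner in heat     */
    if (humidity_pct < 40.0) threshold += 5.0;  /* and in dry air           */
    return moisture_pct < threshold;
}
```

The microcontroller would evaluate this per zone and open the corresponding solenoid valve while the condition holds, subject to a maximum watering duration to bound the flow rate.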

c) Detail the communication protocols and interfaces used to enable remote monitoring and
control of the greenhouse irrigation system through the smartphone application or web
interface.
Protocols:
• Wi-Fi: For connecting the system to the internet and enabling remote access.
• MQTT: Lightweight messaging protocol for efficient communication between devices.
Interfaces:
• Smartphone Application: Provides real-time monitoring and control options.
• Web Interface: Offers a dashboard for viewing sensor data and adjusting settings.
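With MQTT, each sensor node would publish a small payload to a topic such as "greenhouse/zone1/telemetry". The topic name and JSON field names below are illustrative assumptions, not a fixed protocol:

```c
#include <stdio.h>

/* Build the JSON telemetry payload a node might publish over MQTT.
   Field names are illustrative; any agreed schema would work. */
int build_payload(char *buf, size_t len,
                  double moisture, double temp, double humidity) {
    return snprintf(buf, len,
                    "{\"moisture\":%.1f,\"temp\":%.1f,\"humidity\":%.1f}",
                    moisture, temp, humidity);
}
```

The broker fans this message out to the smartphone app and web dashboard, which subscribe to the same topic.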

d) Discuss the power management strategy employed to optimize energy consumption and
extend the system's battery life.
Dynamic Power Management: The system enters low-power modes when inactive (e.g., sleeps
during the night).
Energy Harvesting: Use solar panels to supplement battery power.
Scheduled Wake-ups: The system wakes up periodically to check sensor data and control valves
only when necessary.

e) Describe the safety mechanisms and fail-safe features implemented to prevent


overwatering or potential system malfunctions.
Moisture Thresholds: Set upper and lower limits for soil moisture to prevent overwatering.
Manual Override: Users can manually control the system via the app in case of malfunction.
Alerts: Send notifications to users for abnormal conditions or failures in the system.

3. List and describe three general approaches to improve the design productivity?
Modular Design: Breaking down systems into smaller, manageable modules that can be developed
and tested independently.
Reuse of Components: Utilizing pre-designed components or libraries to reduce development time
and effort.
Automated Testing: Implementing automated testing frameworks to quickly validate design
changes and ensure reliability.

4. "Some families of embedded devices have a very high threshold of quality and reliability
requirements".
i. Briefly explain the term reliability in your own words.
Reliability: The ability of a system or component to perform its intended function without failure
over a specified period under stated conditions.

ii. Describe why reliability of an embedded system is important with the aid of an example.
Importance of Reliability: In medical devices, for instance, a malfunction could lead to incorrect
dosages of medication, risking patient health. High reliability ensures safety and trust in critical
applications.

iii. What is the difference between reliability and quality?


Reliability: Focuses on consistent performance over time.
Quality: Refers to meeting user requirements and specifications. A product can be of high quality
but unreliable if it fails frequently.

5. "Functional requirements are essential to be fully covered in a design. But non- functional
requirements are also given equal importance in a design." Justify the above statement.
Functional Requirements: Define what a system should do (e.g., watering schedule based on
moisture).
Non-Functional Requirements: Address how the system performs tasks, including reliability,
efficiency, and usability. They impact user satisfaction and system performance, ensuring the
system is not only functional but also efficient and user-friendly.

6. You are in charge of a system development, which is going to be used in surgical process
to accurately regulate and monitor a variety of parameters such as blood pressure,
temperature and flow to make sure the patient's condition is correct and stable.
a. What do you understand by dependability of a system?
Dependability refers to the trustworthiness of a system, ensuring it performs reliably and safely
under specified conditions.

b. Explain the importance of dependability of above system using attributes of


dependability?
Availability: Ensures the system is operational when needed.
Reliability: Consistent performance without failure.
Safety: Prevents catastrophic outcomes due to malfunctions.
For surgical systems, high dependability is critical to maintain patient safety and effective
monitoring.

c. How would you incorporate measures to make sure the system you develop is dependable
throughout the design process?
Thorough Testing: Implement rigorous testing protocols to identify and resolve potential failures.
Redundancy: Use redundant sensors and systems to ensure continuous operation in case of a
component failure.
Regular Maintenance: Schedule updates and checks to ensure the system remains operational and
up-to-date.
9. Multi-Processor Systems-on-Chip (MPSoCs)
Introduction to SoCs
Definition: A System on Chip (SoC) integrates a microcontroller or microprocessor with advanced
peripherals such as GPUs, Wi-Fi modules, and coprocessors.
Functions: Includes both analog and digital functions, mixed signals, and radio frequency
functionalities.

Importance of MPSoCs
Performance Needs: Single processors are often insufficient for high-performance applications.
MPSoCs are essential to meet the increasing performance demands of modern applications.

Key Application Areas:


• Network security
• Telecommunications
• Multimedia processing

Characteristics of Multiprocessors
Definition: Multiprocessors consist of parallel processors that share a single address space.
Cost-Effectiveness: Multiprocessors built from commodity microprocessors are currently the most cost-effective way to scale processing performance.

Communication Modes
Shared Memory: All processors communicate through a single memory address space, allowing
implicit communication via loads and stores.
Message Passing: Processors communicate explicitly by sending and receiving messages.

Types of Memory Access


Uniform Memory Access (UMA): Access times are consistent regardless of which processor
requests data.
Non-Uniform Memory Access (NUMA): Access times vary based on the memory location relative
to the requesting processor.

Architectural Configurations
Single Bus Configuration: Processors are connected by a single bus, typically accommodating 2
to 32 processors.
Network Configuration: Processors are connected through a network, allowing for more complex
interconnections.

Programming Challenges
Complexity: Writing efficient multiprocessor programs is challenging, especially as the number of
processors increases. Programmers must understand hardware capabilities to optimize
performance.
Design Constraints
Key Considerations:
• Real-time performance to meet deadlines.
• Low power or energy consumption.
• Cost-effectiveness.

Concept of MPSoC
Definition: MPSoCs are custom architectures that balance the constraints of VLSI technology with
application-specific needs, unlike chip multiprocessors, which focus solely on increasing
processor density.

Memory Systems in MPSoCs


Heterogeneous Memory Systems: Some memory blocks may be accessible only to specific
processors, complicating programming efforts.
Irregular Memory Structures: Necessary for supporting real-time performance.

Challenges and Opportunities


Hardware and Software Complexity: Designing MPSoCs involves managing complex hardware
systems and software systems.
Methodology Importance: Effective design methodologies can significantly reduce development
time and improve performance and power consumption.

Enhancing Productivity
Strategies for Improvement:
• Raise the level of abstraction in designs.
• Employ structured design methodologies.
• Reuse intellectual property (IP) components.
• Utilize Electronic Design Automation (EDA) support.

Integration Practices
Current Approaches: Many existing practices are ad hoc and rely on low-level interfaces.
Synchronization: Low-level synchronization methods (e.g., interrupts, semaphores) are often used,
increasing design complexity.

Task Transaction Level (TTL) Interface


Goal: Improve MPSoC integration by raising the abstraction level.
Features:
Supports parallel application models.
Provides high-level interfaces for inter-task communication and multi-tasking.
Facilitates IP development and integration.
Questions:
(1) Describe the main components typically found in a Multi-Processor System-on-Chip
(MPSoC). Explain how these components collaborate to enhance system performance.
Processors:
Multiple processing units (CPUs/GPUs) that perform computations in parallel.
Can include heterogeneous architectures (e.g., different types of processors for specific tasks).

Memory:
Shared memory for data storage and communication among processors.
Can include heterogeneous memory systems, where some memory blocks are accessible only to
specific processors.

Interconnects:
Communication pathways (e.g., buses or networks) that connect processors and memory.
Facilitate data transfer and coordination between processors.

Peripherals:
Additional components such as graphics processors, Wi-Fi modules, and sensors that provide
functionality beyond basic processing.

Control Units:
Manage task scheduling, resource allocation, and communication protocols among processors.

Collaboration for Enhanced Performance


Parallel Processing: Multiple processors work simultaneously on different tasks or parts of a task,
significantly improving throughput and reducing execution time.
Shared Memory Communication: Processors can access a common memory space, allowing for
efficient data sharing and reducing the complexity of message-passing mechanisms.
Task Offloading: Specific processors can be assigned tasks based on their strengths (e.g., a GPU
handling graphics while a CPU manages general processing), optimizing resource usage.
Dynamic Power Management: By monitoring workload and adjusting the operating states of
processors and peripherals, the system can maintain performance while minimizing power
consumption.

(2) Explain the concept of Multi-Processor System on a Chip (MpSoC) design using an
example of your preference.
Smart Camera System
A smart camera system designed for real-time image processing and analysis is a good example of
an MPSoC application. This system can be used in surveillance, autonomous vehicles, or industrial
monitoring.

Components
Processors:
Image Processing Unit (IPU): Specialized processor for handling image data and executing
algorithms for object detection and recognition.
Central Processing Unit (CPU): Manages system operations, user interfaces, and overall control
logic.

Memory:
Shared RAM: Used for storing image frames and processing results accessible by both the IPU
and CPU.
Cache Memory: Local memory for faster access to frequently used data by processors.
Interconnects:
High-speed Bus: Connects the IPU and CPU with memory and peripherals, ensuring rapid data
transfer during processing tasks.

Peripherals:
Camera Sensor: Captures video input.
Wi-Fi Module: Enables wireless communication for remote monitoring and data transfer.
Control Units:
Task Scheduler: Allocates processing tasks between the IPU and CPU based on current workloads
and priorities.

Collaboration in the Smart Camera System


Real-Time Processing: The camera sensor continuously streams video to the IPU for immediate
image processing, while the CPU handles user requests and system management.
Efficient Data Handling: Shared memory allows quick access to image data, enabling the IPU to
perform complex calculations without the overhead of extensive data transfers.
Adaptive Task Management: The system dynamically adjusts which processor handles specific
tasks based on current processing loads, ensuring optimal performance and power efficiency.
Remote Monitoring: The Wi-Fi module allows users to access the camera feed and control settings
via a smartphone app, leveraging the processing capabilities of the MPSoC to deliver a responsive
user experience.

10. Networked Embedded Systems (NES)


Introduction to Networked Embedded Systems
Definition: NES are distributed computing devices that incorporate wireline and/or wireless
communication interfaces. They are embedded in various products, including automobiles,
medical devices, and sensor networks.
Terminology: Also referred to as Embedded Network Systems (EmNets) or Networked Embedded
System Technology (NEST).

Functionality of NES
Interaction: NES are designed to interact with their environment and users, collecting data via
sensors and processing it for real-time feedback.
Key Components:
Sensors: Measure environmental parameters (temperature, moisture, movement, etc.).
Communication Mechanisms: Wireline or wireless connections for data transfer.
Computation Engines: Perform computations on acquired data.

Constraints on NES
Size and Weight: Must be compact to minimize environmental interference.
Environmental Conditions: Should tolerate harsh conditions (temperature variations, physical
stress).
Energy Availability: Often operate on limited power sources; energy efficiency is crucial.
Cost: Must be affordable, especially when deployed in large quantities.

Examples of NES Applications


Automobiles: Safety-critical systems (e.g., airbags, braking systems) and telematics (e.g.,
navigation, diagnostics).
Environmental Monitoring: Sensor networks for habitat, agriculture, and weather monitoring.
Defense: Surveillance systems for military applications.
Biomedical Applications: Health monitoring devices.
Disaster Management: Systems for monitoring and responding to natural disasters.

Design Considerations for NES


Deployment: Strategies for placing nodes in the environment, considering safety and durability.
Random Deployment: Useful for inaccessible areas; involves arbitrary placement of sensors.
Strategic Deployment: Planned placement to maximize coverage and minimize damage risks.
Environment Interaction: NES must autonomously interact with their surroundings and adapt to
changes (e.g., moisture levels in agriculture).
Life Expectancy: Focus on long-lasting designs due to the difficulty of accessing deployed nodes
for maintenance.
Communication Protocols: Use of wired and wireless links; ability to adapt to node failures and
maintain connectivity.
Re-configurability: Allowing remote adjustments to functionality or parameters without physical
access.
Security: Protecting against malicious attacks, especially important for military and sensitive
applications.
Energy Constraints: Developing low-power networking protocols and applications to enhance
energy efficiency.
Operating Systems: Utilization of specialized real-time operating systems (RTOS) that meet the
constraints of NES (e.g., memory, power).
Design Methodologies: Need for new or modified methodologies to address the unique
requirements of NES.
Advanced Concepts
Motes and Smart Dust: Tiny, distributed sensor networks capable of gathering information in
various environments. Applications range from military to structural health monitoring.

Challenges and Future Directions


Scalability: As NES proliferate, the need for efficient management and deployment of large
networks becomes critical.
Interdisciplinary Collaboration: Successful NES design demands cooperation across fields,
including hardware design, software development, and networking.

Questions:
(1) Explain the concepts of open system and closed system in the context of Networked
Embedded Systems.
Concepts of Open System and Closed System in NES
Open System:
An open system allows for the integration of new components without major redesigns. In the
context of NES, this means components can be upgraded or replaced easily, enabling the adaptation
of the system to incorporate advancements in technology or to add new functionalities. For
example, an automobile's telematics system can accept new sensors or modules without requiring
a complete overhaul.
Closed System:
A closed system is designed with fixed components that cannot be altered or upgraded after
deployment. In NES, this means that once the system is built (e.g., safety-critical components in
vehicles), it cannot easily integrate new technologies or functionalities. This could lead to
obsolescence as newer technologies emerge.

(2) Kilinochchi is one of the districts that receive the highest sunlight in Sri Lanka. The Faculty
of Engineering decided to deploy a pilot project to analyze the feasibility of installing solar
panels across the university premises.
You have been assigned to design a solar monitoring device that can capture solar radiation,
temperature, and solar energy. Device should transfer the sensor data to a central server.
The Solar Monitoring Device consists of:
Solar Radiation Sensor (Pyranometer)
Temperature Sensor
System On Chip (SOC)
Energy Meter
(a) Briefly describe what is networked embedded system (NES).
A Networked Embedded System (NES) is a distributed computing system that integrates
embedded devices with communication capabilities, allowing them to connect and interact with
each other and their environment. NES can gather data from sensors, process that data, and
communicate results to central servers or other devices, facilitating real-time monitoring and
control.
(b) Explain the functionalities of networked embedded systems.
Data Acquisition: Collecting data from various sensors (e.g., temperature, solar radiation).
Real-time Processing: Performing computations on the collected data to derive meaningful
insights.
Communication: Transmitting data to a central server for storage, analysis, and monitoring.
Autonomous Operation: Operating independently in the field, adapting to changes in the
environment without human intervention.
Feedback Mechanism: Providing responses based on environmental data, such as adjusting power
usage based on solar energy availability.

(c) If you are assigned to design the network for the solar monitoring device, explain five
design considerations you will make to design this network.
Communication Protocol: Choose appropriate protocols (e.g., MQTT, HTTP) to ensure reliable
data transmission from sensors to the central server.
Energy Efficiency: Implement low-power communication strategies to extend the life of battery-
operated sensors, minimizing energy consumption.
Scalability: Design the network to accommodate additional sensors or devices in the future without
significant redesigns.
Data Integrity: Ensure secure data transmission to prevent data loss or corruption, employing
encryption if necessary.
Environmental Resilience: Design the network to withstand local environmental conditions (e.g.,
heat, humidity) to ensure consistent performance.

(d) In this design, it is proposed to choose Multi-Processor SoC (MPSoC) instead of SoC.
i. What is Multi-Processor SoC (MPSoC)?
A Multi-Processor System-on-Chip (MPSoC) is an integrated circuit that contains multiple
processing units (CPUs, GPUs) on a single chip, allowing for parallel processing and improved
performance. MPSoCs are designed to handle complex computational tasks efficiently by
distributing workloads across multiple processors.

ii. Give three reasons why you could go for a MPSoC design?
Enhanced Performance: MPSoCs can perform multiple tasks simultaneously, significantly
increasing processing speed and efficiency compared to single-core SoCs.
Energy Efficiency: By distributing tasks among multiple processors, MPSoCs can optimize energy
consumption, reducing the overall power required for computations.
Flexibility: MPSoCs can be tailored to specific applications, allowing for a mix of different types
of processors (e.g., general-purpose, real-time) to meet diverse processing needs.

iii. Describe two challenges you will face when designing Multi-Processor SoC.
Complexity in Design: The integration of multiple processors requires careful consideration of
communication strategies, memory management, and synchronization, making the design process
more complex.
Heat Management: With multiple processors operating simultaneously, managing heat dissipation
becomes critical to avoid overheating and ensure reliability.

11. RTOS
Real-Time Operating Systems (RTOS): An RTOS is an operating system that supports real-time
applications by ensuring that tasks are completed within a specified time frame, offering
deterministic behavior and limited resource utilization. RTOS is not required for all embedded
systems but is essential for large, complex applications, such as flight control systems.

Features of an Operating System


Multitasking: Enables multiple tasks to run concurrently.
Synchronization: Ensures tasks coordinate correctly to share resources.
Inter-task Communication: Facilitates data sharing between tasks.
Interrupt and Event Handling: Manages asynchronous events.
Input/Output (I/O): Manages interaction with hardware devices.
Timers and Clocks: Provides accurate timing mechanisms.
Memory Management: Controls how memory is allocated and used.

Real-Time Embedded Systems with RTOS (Classifications of RTOS)


Hard Real-Time Systems: Have strict deadlines; missing them leads to catastrophic failures (e.g.,
medical devices).
Firm Real-Time Systems: Missing deadlines degrades performance but doesn't cause total failure
(e.g., manufacturing systems).
Soft Real-Time Systems: Can tolerate missed deadlines without severe consequences (e.g.,
multimedia systems).

Kernel Architecture and Services


Kernel: The core part of the OS, responsible for managing resources like memory and devices.
Kernel Models:
• Monolithic Kernel: Provides rich abstractions, leading to faster performance.
• Microkernel: Focuses on basic communication and I/O, offering more stability.
• Exokernel: Directly manages resources, allowing for fine-grained control.
Task Management:
• Task Components: Includes the Task Control Block (TCB), Task Stack, and Task Routine.
• Scheduler: Manages the state of tasks, deciding which task to execute next.
• Context Switching: Process of saving the state of one task and restoring another.
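A minimal sketch of what a TCB holds and how a priority scheduler uses it — the struct fields and the lower-number-is-higher-priority convention are assumptions for illustration, not tied to a specific RTOS:

```c
#include <stdint.h>

typedef enum { READY, RUNNING, BLOCKED, DORMANT } TaskState;

/* Task Control Block: the per-task state the kernel saves and
   restores on a context switch (fields are illustrative). */
typedef struct {
    uint32_t *stack_ptr;   /* saved stack pointer of the task     */
    TaskState state;
    uint8_t   priority;    /* lower number = higher priority here */
} TCB;

/* Scheduler core: pick the highest-priority READY task, or -1 (idle). */
int pick_next(const TCB tasks[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (tasks[i].state == READY &&
            (best < 0 || tasks[i].priority < tasks[best].priority))
            best = i;
    return best;
}
```

A context switch would then save the running task's registers onto its stack, record the stack pointer in its TCB, and restore the same for the task `pick_next` selects.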

Task Management and Scheduling


Task States: Running, Ready, Blocked, Dormant.
Scheduling Types:
• Non-Preemptive Scheduling: Tasks run until they voluntarily give up the CPU.
• Preemptive Scheduling: Higher-priority tasks can interrupt lower-priority ones.
Scheduling Algorithms:
• Clock Driven: Pre-schedules tasks based on known parameters.
• Weighted Round Robin: Allocates CPU time slices in proportion to each task's weight (priority).
• Priority Scheduling: Tasks are assigned based on priority levels (e.g., Earliest Deadline
First, Rate Monotonic Scheduling).

Task Synchronization and Inter-Task Communication


Synchronization Mechanisms:
• Event Objects: Used for tasks that need to wait for specific events.
• Semaphores: Ensures that shared resources are accessed by one task at a time.
• Semaphore Types: Binary, Counting, Mutual Exclusion (Mutex).
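The take/give (P/V) semantics shared by these semaphore types can be sketched as follows. This shows only the counting logic; in a real RTOS, a failed take would block the calling task rather than return, and the names are my own:

```c
#include <stdbool.h>

/* Counting semaphore semantics (logic only). A binary semaphore is
   the same structure initialised with count = 1; a mutex additionally
   tracks which task owns it. */
typedef struct { int count; } Semaphore;

bool sem_take(Semaphore *s) {        /* P / wait */
    if (s->count > 0) { s->count--; return true; }
    return false;                    /* a real RTOS would block here */
}

void sem_give(Semaphore *s) {        /* V / signal */
    s->count++;
}
```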

Inter-Task Communication Mechanisms:


• Message Queues: Allows tasks to send and receive messages.
• Pipes: Facilitates simple data exchange between tasks.
• Remote Procedure Calls (RPC): Allows tasks to execute procedures on remote systems.

Memory Management
Memory Hierarchy: Ranges from fast, small CPU registers to slower, larger remote storage.
Stack and Heap Management:
• Stack: Primarily used for context switching.
• Heap: Used for dynamic memory allocation by the kernel.
Dynamic Memory Management:
• First-Fit, Best-Fit, Buddy System: Different strategies for allocating memory dynamically.
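The first-fit strategy is easy to sketch over a free list (block representation and sizes below are illustrative assumptions):

```c
#include <stddef.h>

typedef struct { size_t size; int in_use; } Block;

/* First-fit: return the index of the first free block large enough
   for the request, or -1 if none fits. (Best-fit would instead scan
   all free blocks and pick the tightest match.) */
int first_fit(Block blocks[], int n, size_t request) {
    for (int i = 0; i < n; i++)
        if (!blocks[i].in_use && blocks[i].size >= request)
            return i;
    return -1;
}
```

First-fit is fast but can fragment the front of the heap; best-fit reduces wasted space per allocation at the cost of a full scan; the buddy system trades internal fragmentation for constant-time splitting and coalescing.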

Timer and Interrupt Management


Timer Management: Essential for scheduling tasks based on time, using relative and absolute
timers.
Interrupt Handling: Manages hardware interrupts to ensure data integrity and minimize latency.

Device I/O Management


I/O Management: Provides a framework for managing diverse hardware devices.
Protection Mechanisms: Ensures reliable communication with hardware, minimizing the need for
privileged instructions.

Developing and Selecting an RTOS


Approaches:
• Adapting Existing OS: Modifying general-purpose OS for embedded use (e.g., Linux).
• Purpose-Built Embedded OS: Designed specifically for embedded systems with real-time
requirements.
Selection Criteria:
• Scalability, Portability, Run-time Facilities, Performance Metrics.
• Development Tools: Importance of debugging and profiling tools.
• Commercial Aspects: Consideration of costs, licensing, and vendor stability.

RTOS Examples
Fire Alarm System: Example of a real-time embedded system that uses RTOS to manage multiple
sensors, controllers, and a central server with a focus on fast response time and secure
communication.

Questions:
1. Explain the concept of Real-Time Operating Systems (RTOS) and discuss the main
classifications of RTOS used in embedded systems.
Concept of RTOS: A Real-Time Operating System (RTOS) is a specialized operating system
designed to manage hardware resources, run applications, and process data in real-time. The key
characteristic of an RTOS is its deterministic behavior, ensuring that tasks are executed and
completed within strict timing constraints. This makes RTOS crucial for applications where timing
is critical, such as in embedded systems for medical devices, automotive controls, and industrial
automation.

Classifications of RTOS: RTOS can be classified into three main types based on their timing
requirements:

Hard Real-Time Systems:


In hard real-time systems, missing a deadline can lead to catastrophic consequences. These systems
have zero tolerance for delays, and failure to meet timing constraints can result in severe
consequences.
Example: Flight control systems, pacemakers.

Firm Real-Time Systems:


Firm real-time systems have strict timing requirements, but missing a deadline does not lead to
total system failure. Instead, it results in degraded system performance or reduced output quality.
Example: Manufacturing systems with robot assembly lines.

Soft Real-Time Systems:


Soft real-time systems have more lenient timing constraints, allowing for occasional deadline
misses without significantly impacting the overall system functionality. The system can recover
and continue to function.
Example: Multimedia systems, online databases.
2. Discuss the trade-offs between using a monolithic RTOS kernel and a microkernel in
embedded systems. Compare the advantages and disadvantages of each approach,
considering factors such as footprint, modularity, and system complexity.
Monolithic RTOS Kernel:
• Advantages:
o Performance: Since all services (e.g., file system, network stack, device drivers)
run in the same address space, monolithic kernels generally offer better
performance due to fewer context switches and reduced inter-process
communication overhead.
o Simplicity: The monolithic approach can simplify development because all
components are integrated into a single, cohesive system.
o Speed: Less overhead in messaging and context switching contributes to faster
execution, which is beneficial in real-time applications.
• Disadvantages:
o Footprint: Monolithic kernels can be large and require more memory, which is a
significant drawback in resource-constrained embedded systems.
o Modularity: Adding or modifying services can be challenging, as changes to one
part of the system might affect the entire kernel.
o Stability: A bug in any kernel component can potentially crash the entire system,
making it less stable.

Microkernel RTOS:
• Advantages:
o Modularity: Microkernels follow a more modular design, with only essential
services running in kernel mode. Other services run in user space, making it easier
to update and modify.
o Stability: Since most services run in user mode, a failure in one service is less likely
to crash the entire system. This improves overall system stability.
o Footprint: Microkernels can be more lightweight, making them ideal for
embedded systems with limited resources.
• Disadvantages:
o Performance Overhead: Microkernels require more context switches and inter-
process communication, which can introduce performance overhead.
o Complexity: The separation of services can make system design and debugging
more complex.

3. A Real-Time Operating System (RTOS) is an OS for response-time-controlled and event-
controlled processes that provides a required level of service in a bounded response time.
(a) Explain when an RTOS is more necessary than a superloop in embedded projects.

When is an RTOS more necessary than a superloop? An RTOS is necessary when an embedded
project has strict timing requirements, complex task management, and must handle multiple tasks
concurrently. A superloop, a simple infinite loop that controls the execution flow, is insufficient
for complex applications where tasks must run in parallel with precise timing. An RTOS is
required in scenarios such as real-time data acquisition, motor control systems, and applications
where a missed deadline could result in system failure.
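The timing weakness of a superloop can be sketched with simulated time: every task shares one loop, so a single slow task stretches the period of all the others. The task durations below are illustrative assumptions, not taken from any question.

```python
# Simulated-time sketch of a superloop: tasks run back-to-back in one loop,
# so a slow task stretches the period of every other task.
def run_superloop(task_durations_ms, iterations):
    """Return the times (ms) at which the first task starts on each pass."""
    now = 0
    first_task_starts = []
    for _ in range(iterations):
        for i, duration in enumerate(task_durations_ms):
            if i == 0:
                first_task_starts.append(now)
            now += duration  # each task blocks the loop for its full duration
    return first_task_starts

# Fast tasks only: the first 1 ms task runs every 3 ms.
fast = run_superloop([1, 1, 1], 3)      # [0, 3, 6]
# Add a 50 ms task: the same task now runs only every 53 ms.
slow = run_superloop([1, 1, 1, 50], 3)  # [0, 53, 106]
print(fast, slow)
```

A preemptive RTOS would let the 1 ms task interrupt the 50 ms task and keep its intended period, which is exactly what the superloop cannot guarantee.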

(b) State and describe three Task Swapping Methods.


Cooperative Multitasking:
• Tasks voluntarily yield control of the CPU, allowing other tasks to run. The system relies
on tasks to cooperate by yielding when appropriate.
Preemptive Multitasking:
• The RTOS decides when a task should be suspended to run a higher-priority task. Tasks
can be interrupted at any time, ensuring that high-priority tasks are always executed first.
Round-Robin Scheduling:
• Tasks are given equal time slices and are rotated in a cyclic order. Each task gets a turn to
execute, making it fair but less responsive to priority needs.
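Round-robin in particular is easy to visualise. The following is a minimal sketch; the task names, burst times, and the 2 ms time slice are assumptions for illustration, and all tasks are taken as ready at t = 0.

```python
from collections import deque

def round_robin(tasks, time_slice):
    """tasks: dict of name -> remaining execution time (ms), all ready at t=0.
    Returns the order in which (task, run_length) slices execute."""
    ready = deque(tasks.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        run = min(time_slice, remaining)
        schedule.append((name, run))
        remaining -= run
        if remaining > 0:
            # unfinished task goes to the back of the queue: fair, priority-blind
            ready.append((name, remaining))
    return schedule

print(round_robin({"A": 5, "B": 2, "C": 3}, 2))
# A and C need more than one slice; B finishes within its first slice.
```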

(c) What are the three disadvantages of multitasking?


Increased Complexity: Managing multiple tasks adds complexity to system design and
debugging.
Resource Contention: Multiple tasks competing for shared resources can lead to conflicts and
race conditions.
Overhead: Context switching between tasks introduces overhead, which can reduce overall
system performance.
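Resource contention can be made concrete with a shared counter: the read-modify-write below is not atomic, so without the lock two tasks can interleave and lose updates. This is a minimal Python threading sketch; the counter, thread count, and increment count are illustrative assumptions.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # guard the non-atomic read-modify-write
            value = counter      # read
            counter = value + 1  # write; without the lock, a concurrent
                                 # worker could overwrite this update

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; often less without it (a race condition)
```

The lock also illustrates the overhead point: every acquisition and release costs time, which is the price paid for correctness under contention.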

(d) Pre-emption and cooperative multitasking are two task-swapping methods used in an
RTOS.
An RTOS is assigned five processes named P1, P2, P3, P4 and P5. The arrival time and
execution time of each task are given below.

Task P1 --> Arrival time (ms) = 8, Execution time (ms) = 8
Task P2 --> Arrival time (ms) = 5, Execution time (ms) = 6
Task P3 --> Arrival time (ms) = 3, Execution time (ms) = 4
Task P4 --> Arrival time (ms) = 0, Execution time (ms) = 3
Task P5 --> Arrival time (ms) = 4, Execution time (ms) = 3

i. Draw the task timeline for cooperative multitasking for the above given tasks.
ii. Draw the task timeline for pre-emption under shortest-job-first (SJF) priority scheduling.
iii. Find the average waiting time for both multitasking methods.
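The drawn timelines are left as an exercise, but the waiting times can be cross-checked with a short simulation. It assumes cooperative multitasking means non-preemptive first-come-first-served in arrival order, and preemptive SJF means shortest-remaining-time-first with ties broken by earlier arrival.

```python
# (arrival_ms, execution_ms) for P1..P5 from the question
tasks = {"P1": (8, 8), "P2": (5, 6), "P3": (3, 4), "P4": (0, 3), "P5": (4, 3)}

def avg_wait_cooperative(tasks):
    """Non-preemptive FCFS: each task runs to completion in arrival order."""
    now, total_wait = 0, 0
    for name, (arrival, burst) in sorted(tasks.items(), key=lambda kv: kv[1][0]):
        start = max(now, arrival)
        total_wait += start - arrival
        now = start + burst
    return total_wait / len(tasks)

def avg_wait_srtf(tasks):
    """Preemptive SJF (shortest remaining time first), 1 ms time steps."""
    remaining = {name: burst for name, (arrival, burst) in tasks.items()}
    finish, now = {}, 0
    while remaining:
        ready = [n for n in remaining if tasks[n][0] <= now]
        if not ready:
            now += 1
            continue
        # shortest remaining time; ties broken by earlier arrival
        n = min(ready, key=lambda n: (remaining[n], tasks[n][0]))
        remaining[n] -= 1
        now += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = now
    waits = [finish[n] - arr - burst for n, (arr, burst) in tasks.items()]
    return sum(waits) / len(tasks)

print(avg_wait_cooperative(tasks), avg_wait_srtf(tasks))  # 3.2 3.2 for this set
```

For this particular task set both policies happen to give an average waiting time of 3.2 ms, because the shorter jobs arrive before the longer ones, so SRTF never needs to preempt; the Gantt charts differ for other arrival patterns.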

4. Briefly explain the degree of 'real-time' in Real-Time Operating Systems (RTOS), with
examples.
The degree of 'real-time' in RTOS refers to the strictness of the timing requirements that the system
must meet. This can be classified into hard, firm, and soft real-time systems:

Hard Real-Time: Tasks must be completed within strict deadlines; missing a deadline results in
catastrophic failure. Example: Pacemakers.
Firm Real-Time: Deadlines are crucial, but missing one results in degraded system performance
rather than total failure. Example: Airline reservation systems.
Soft Real-Time: Deadlines can be missed occasionally, with minimal impact on the system.
Example: Video streaming services.

5. Give two reasons why a full-featured or general-purpose operating system is not always
a good solution for embedded systems.
Resource Constraints:
Embedded systems often have limited resources such as memory, processing power, and battery
life. General-purpose operating systems (GPOS) are designed for resource-rich environments and
might be too large and resource-intensive for embedded systems.

Real-Time Requirements:
A GPOS is typically designed for fairness and throughput rather than strict timing guarantees. In
contrast, embedded systems often require predictable, deterministic behavior, which an RTOS is
built to provide. A GPOS may not be able to guarantee the timely execution of tasks in time-
sensitive applications.
