UNIT-1 Notes Embedded Systems
Bootloader
A bootloader is a small piece of software that executes soon after you power up a computer. On a
desktop PC, the bootloader resides on the master boot record (MBR) of the hard drive, and is
executed after the PC BIOS performs various system initializations. The bootloader then passes
system information to the kernel (for instance, the hard drive partition to mount as root) and then
executes the kernel.
In an embedded system, the role of the bootloader is more complicated, since an embedded
system does not have a BIOS to perform the initial system configuration. The low-level
initialization of the microprocessor, memory controllers, and other board-specific hardware
varies from board to board and CPU to CPU. These initializations must be performed before a
kernel image can execute.
At a minimum, a bootloader for an embedded system performs the following functions:
Initializes the hardware, especially the memory controller.
Provides boot parameters for the operating system image.
Starts the operating system image.
Additionally, most bootloaders also provide convenient features that simplify development and
update of the firmware, such as:
Reading and writing arbitrary memory locations.
Uploading new binary images to the board's RAM via a serial line or Ethernet.
Copying binary images from RAM to Flash memory.
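As a sketch of the flow above, here is a minimal host-side C model of an embedded bootloader: initialize the hardware, load the kernel image, pass boot parameters, and jump to the kernel. All names here (flash, ram, load_image, the bootargs string) are hypothetical stand-ins, not any real bootloader's API.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Host-side sketch of the minimal bootloader flow described above.
   Hardware specifics are stubbed out; every name is hypothetical. */

#define IMAGE_SIZE 8
uint8_t flash[IMAGE_SIZE] = {1, 2, 3, 4, 5, 6, 7, 8}; /* stored kernel image */
uint8_t ram[IMAGE_SIZE];                              /* load destination    */

void init_hardware(void) {
    /* On real hardware: set up clocks and the memory controller so
       that RAM is usable before anything is copied into it. */
}

/* Copy the kernel image from flash into RAM; returns bytes copied. */
size_t load_image(void) {
    memcpy(ram, flash, IMAGE_SIZE);
    return IMAGE_SIZE;
}

/* Stand-in for passing boot parameters and jumping to the kernel
   entry point; a real bootloader never returns from this call. */
int start_kernel(const char *bootargs) {
    printf("booting kernel with: %s\n", bootargs);
    return 0;
}

int boot(void) {
    init_hardware();
    load_image();
    return start_kernel("root=/dev/mtdblock2");
}
```

On a real board, start_kernel would be a jump to a fixed entry address with the boot parameters placed where the kernel expects them.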
Kernel
The kernel is the fundamental part of an operating system. It is responsible for managing the
resources and the communication between hardware and software components.
The kernel offers hardware abstraction to the applications and provides secure access to the
system memory. It also includes an interrupt handler that handles all requests or completed I/O
operations.
Kernel modules
Modules are pieces of code that can be loaded and unloaded into the kernel upon demand. They
extend the functionality of the kernel without requiring a system reboot.
For example, one type of module is the device driver, which allows the kernel to access hardware
connected to the system. Without these modules, Linux developers would have to build
monolithic kernels and add new functionality directly into the kernel image. The result would be
a large, cumbersome kernel. Another disadvantage of working without a kernel module is that
you would have to rebuild and reboot the kernel every time you add new functionality.
In embedded systems, where functionality can be activated depending on the needs, kernel
modules become a very effective way of adding features without enlarging the kernel image size.
Applications
Software applications are programs that employ the capabilities and resources of a computer to
do a particular task.
Applications make use of hardware devices by communicating with device drivers, which are
part of the kernel.
Cross-compilation
If you generate code for an embedded target on a development system with a different
microprocessor architecture, you need a cross-development environment. A cross-development
compiler is one that executes on the development system (for example, an x86 PC) but generates
code that executes on a different processor (for example, an ARM target).
Logic gates
Digital systems are constructed using logic gates. These gates are the AND, OR,
NOT, NAND, NOR, EXOR and EXNOR gates. The basic operations are described below with
the aid of truth tables.
AND gate
The AND gate is an electronic circuit that gives a high output (1) only if all its inputs are
high. A dot (.) is used to show the AND operation i.e. A.B. Bear in mind that this dot is
sometimes omitted i.e. AB
OR gate
The OR gate is an electronic circuit that gives a high output (1) if one or more of its
inputs are high. A plus (+) is used to show the OR operation.
NOT gate
The NOT gate is an electronic circuit that produces an inverted version of the input at its
output. It is also known as an inverter. If the input variable is A, the inverted output is
known as NOT A. This is also shown as A', or as A with a bar over the top. A NAND gate
with its inputs tied together behaves as a NOT gate; the same can be done with a NOR gate.
NAND gate
This is a NOT-AND gate, equivalent to an AND gate followed by a NOT gate. The
output of a NAND gate is high if any of its inputs is low. The symbol is an AND
gate with a small circle on the output. The small circle represents inversion.
NOR gate
This is a NOT-OR gate, equivalent to an OR gate followed by a NOT gate. The
output of a NOR gate is low if any of its inputs is high.
The symbol is an OR gate with a small circle on the output. The small circle represents
inversion.
EXOR gate
The 'Exclusive-OR' gate is a circuit which will give a high output if either, but not
both, of its two inputs is high. An encircled plus sign (⊕) is used to show the EXOR
operation.
EXNOR gate
The 'Exclusive-NOR' gate circuit does the opposite of the EXOR gate. It will give a low output
if either, but not both, of its two inputs is high. The symbol is an EXOR gate with a small
circle on the output. The small circle represents inversion.
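The truth-table behaviour of all of these gates can be modelled with one-line C functions (a host-side sketch; inputs and outputs are 0 or 1):

```c
/* One-bit models of the gates described above, using C's logical
   operators; inputs and outputs are 0 or 1. */
int gate_and (int a, int b) { return a && b; }
int gate_or  (int a, int b) { return a || b; }
int gate_not (int a)        { return !a; }
int gate_nand(int a, int b) { return !(a && b); } /* AND followed by NOT */
int gate_nor (int a, int b) { return !(a || b); } /* OR followed by NOT  */
int gate_xor (int a, int b) { return a != b; }    /* high if either, but not both */
int gate_xnor(int a, int b) { return a == b; }    /* inversion of XOR    */

/* A NAND gate with its inputs tied together behaves as a NOT gate. */
int not_from_nand(int a) { return gate_nand(a, a); }
```

not_from_nand shows the NAND-as-inverter trick mentioned under the NOT gate; the same works with gate_nor.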
MEMORY
INTRODUCTION
There are different types of memories available to be used in computers as well as
embedded systems.
TYPES OF MEMORY
There are three main types of memory: RAM, ROM, and hybrid memory.
RAM
It is volatile memory, i.e. its contents are lost when electricity is switched off.
ROM
It is non-volatile memory, i.e. the contents are retained even after electricity is
switched off and available after it is switched on.
HYBRID
It combines features of both: it can be read and written like RAM, but retains its contents
without power like ROM (for example, EEPROM and NVRAM).
TYPES OF RAM
There are two important memory devices in the RAM family: SRAM and DRAM.
SRAM (Static RAM)
It retains its contents as long as power is applied. If the power is turned off, its contents
will be lost forever.
DRAM (Dynamic RAM)
It has an extremely short data lifetime even while powered, so it must be refreshed
periodically by a DRAM controller.
TYPES OF ROM
Masked ROM
A masked ROM is programmed by the manufacturer during fabrication; the user cannot modify
its contents.
PROM (Programmable ROM)
This memory device comes in an un-programmed state, i.e. at the time of purchase it is in an
un-programmed state, and it allows the user to write his/her own program or code into it.
PROMs are also known as one-time-programmable (OTP) devices because any data can be
written on them only once. If the data on the chip has some error and needs to be modified, the
memory chip has to be discarded and the modified data has to be written to another new PROM.
EPROM (Erasable Programmable ROM)
An EPROM can be programmed like a PROM, but it can also be erased and reprogrammed. The
erase operation is performed by exposing the chip to a source of ultraviolet light.
The reprogramming ability makes the EPROM an essential part of the software development and
testing process.
EEPROMs
EEPROM stands for Electrically Erasable and Programmable ROM. Unlike an EPROM, it can be
erased electrically, without removing the chip from the system.
Flash
Flash memory devices are high density, low cost, nonvolatile, fast (to read, but not to write), and
electrically reprogrammable.
Flash is much more popular than EEPROM and is rapidly displacing many of the ROM devices.
Flash devices can be erased only one sector at a time, not byte by byte.
NVRAM
An NVRAM is usually an SRAM with a battery backup. When power is turned on, the NVRAM
operates just like any other SRAM, but when power is off, the NVRAM draws enough electrical
power from the battery to retain its contents.
DMA (DIRECT MEMORY ACCESS)
In the absence of DMA, the processor must read the data from one device and write it to the
other, one byte or word at a time.
With DMA, the DMA controller performs the entire transfer with little help from the processor.
Working of DMA
The Processor provides the DMA Controller with source and destination address &
total number of bytes of the block of data which needs transfer.
After copying each byte, both addresses are incremented and the count of remaining bytes is
decremented. When the number of bytes reaches zero, the block transfer ends and the DMA
Controller sends an interrupt to the Processor.
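The transfer steps above can be simulated in plain C. The dma_channel struct and function names below are illustrative, not a real controller's register layout:

```c
#include <stdint.h>
#include <stddef.h>

/* Host-side simulation of the DMA steps above: the "processor"
   programs source, destination and byte count, then the "controller"
   copies one byte per step, incrementing both addresses and
   decrementing the remaining count until it hits zero and raises an
   interrupt. */
typedef struct {
    const uint8_t *src;   /* source address             */
    uint8_t       *dst;   /* destination address        */
    size_t         count; /* bytes left to transfer     */
    int            irq;   /* set when the block is done */
} dma_channel;

/* Processor side: hand the controller the addresses and byte count. */
void dma_start(dma_channel *ch, const uint8_t *src, uint8_t *dst, size_t n) {
    ch->src = src; ch->dst = dst; ch->count = n; ch->irq = 0;
}

/* Controller side: run the whole block transfer. */
void dma_run(dma_channel *ch) {
    while (ch->count > 0) {
        *ch->dst++ = *ch->src++;  /* copy one byte, bump both addresses */
        ch->count--;              /* one fewer byte remaining           */
    }
    ch->irq = 1;                  /* block done: interrupt the processor */
}
```

The processor would only service the final interrupt instead of moving every byte itself.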
MICROPROCESSOR BUS
A bus is a group of conducting wires which carries information; all the peripherals are
connected to the microprocessor through buses. In the 8085 microprocessor there are three
types of buses:
Address bus: a group of conducting wires which carries addresses only; it is unidirectional.
Data bus: carries data between the microprocessor and the peripherals; it is bidirectional.
Control bus: carries control signals such as read and write.
INTERRUPT
An interrupt is a signal to the processor emitted by hardware or software indicating an event that
needs immediate attention. Whenever an interrupt occurs, the controller completes the
execution of the current instruction and starts the execution of an Interrupt Service
Routine (ISR) or Interrupt Handler. ISR tells the processor or controller what to do when the
interrupt occurs. The interrupts can be either hardware interrupts or software interrupts.
Hardware Interrupt
A hardware interrupt is an electronic alerting signal sent to the processor from an external
device, like a disk controller or an external peripheral. For example, when we press a key on the
keyboard or move the mouse, they trigger hardware interrupts which cause the processor to read
the keystroke or mouse position.
Software Interrupt
A software interrupt is caused by an instruction in the program itself (or by an exceptional
condition inside the processor) rather than by an external hardware signal.
For every interrupt, there must be an interrupt service routine (ISR), or interrupt handler.
When an interrupt occurs, the microcontroller runs the interrupt service routine. For every
interrupt, there is a fixed location in memory that holds the address of its interrupt service
routine, ISR. The table of memory locations set aside to hold the addresses of ISRs is called as
the Interrupt Vector Table.
Reset
When the reset pin (pin 9) is activated, the 8051 jumps to the address location 0000H. This is
power-up reset.
Two interrupts are set aside for the timers: one for timer 0 and one for timer 1. Memory
locations are 000BH and 001BH respectively in the interrupt vector table.
Two interrupts are set aside for hardware external interrupts. Pin no. 12 and Pin no. 13 in
Port 3 are for the external hardware interrupts INT0 and INT1, respectively. Memory
locations are 0003H and 0013H respectively in the interrupt vector table.
Serial communication has a single interrupt that belongs to both receive and transmit.
Memory location 0023H belongs to this interrupt.
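The fixed ROM locations above can be collected into a lookup table; here is a C rendering, with the addresses taken from the notes:

```c
#include <stdint.h>

/* The 8051 interrupt vector table described above, written out as a
   C lookup table (ROM addresses as given in the notes). */
typedef struct {
    const char *source;
    uint16_t    vector;  /* fixed ROM location for this interrupt */
} ivt_entry;

const ivt_entry ivt_8051[] = {
    { "Reset",            0x0000 },
    { "External INT0",    0x0003 },
    { "Timer 0 overflow", 0x000B },
    { "External INT1",    0x0013 },
    { "Timer 1 overflow", 0x001B },
    { "Serial (RI/TI)",   0x0023 },
};
```

On the real chip this table is not data in RAM; the addresses are hardwired jump targets in code memory.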
Steps to Execute an Interrupt
When an interrupt gets active, the microcontroller goes through the following steps:
The microcontroller finishes the currently executing instruction and saves the address of
the next instruction (PC) on the stack.
It also saves the current status of all the interrupts internally (i.e., not on the stack).
It jumps to the memory location of the interrupt vector table that holds the address of the
interrupts service routine.
The microcontroller gets the address of the ISR from the interrupt vector table and jumps
to it. It starts to execute the interrupt service subroutine until it reaches its last
instruction, which is RETI (return from interrupt).
Upon executing the RETI instruction, the microcontroller returns to the location where it
was interrupted. First, it gets the program counter (PC) address from the stack by
popping the top bytes of the stack into the PC. Then, it starts to execute from that
address.
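A toy C model of these steps (save the PC, vector to the ISR, RETI restores the PC); the addresses used are hypothetical, and real hardware does all of this in microcode:

```c
#include <stdint.h>

/* Toy model of the interrupt entry/exit steps above: on an interrupt
   the "CPU" pushes the return address (PC) onto a stack, loads the PC
   with the ISR address from the vector table, and RETI pops the saved
   PC back. Purely illustrative. */
uint16_t stack[16];
int      sp = 0;
uint16_t pc = 0x0100;        /* hypothetical task-code address */

void interrupt_enter(uint16_t isr_addr) {
    stack[sp++] = pc;        /* save the return address on the stack  */
    pc = isr_addr;           /* jump to the interrupt service routine */
}

void reti(void) {
    pc = stack[--sp];        /* pop the saved address back into the PC */
}
```

Because the return address lives on the stack, interrupts can even nest: a second interrupt_enter before reti simply pushes another return address.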
Imagine you are a software engineer working at a company. Your team is responsible for
designing an automatic dog entry door. This embedded device can be wirelessly updated
with RFID tags for dogs or other pets to be allowed entry.
The door needs to automatically unlock for dogs that are in the vicinity of the door. A pet must
be allowed to enter even when the table of RFID tags is being updated. The RFID tag IDs are
shared data, since the interrupt service routine that updates the tag IDs and the main()
program that is responsible for automatically unlocking the door when dogs are in the vicinity
both use this data. A problem will occur when the doggy door is in the middle of an
RFID tag ID update when a dog needs to get through the door. We wouldn’t want to let the poor
dog wait outside in the freezing cold while the device is in the middle of an RFID tag update!
How do we create a solution that solves the shared data problem? The RFID tags need to be
updated regularly, but that same data is needed regularly by the main() program to let dogs
enter when they need to. Let's solve this now.
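One standard solution, sketched below, is to have the main() code disable interrupts for the brief moment it reads the shared table, so the update ISR can never interleave with the read. The table layout and every name here are invented for illustration:

```c
#include <stdint.h>

/* Sketch of protecting the shared RFID table: main() briefly disables
   interrupts while reading, so the update ISR cannot run mid-read.
   On a host we simulate EI/DI with a flag; on real hardware these
   would be the actual enable/disable-interrupt instructions. */
#define MAX_TAGS 4
volatile uint32_t allowed_tags[MAX_TAGS] = { 101, 102, 103, 104 };
volatile int interrupts_enabled = 1;   /* stands in for EI/DI */

void disable_interrupts(void) { interrupts_enabled = 0; }
void enable_interrupts(void)  { interrupts_enabled = 1; }

/* ISR side: replace one entry of the shared table. */
void rfid_update_isr(int slot, uint32_t new_tag) {
    allowed_tags[slot] = new_tag;
}

/* main() side: take a consistent snapshot, then check the tag. */
int door_should_open(uint32_t tag) {
    uint32_t snapshot[MAX_TAGS];
    disable_interrupts();              /* begin critical section */
    for (int i = 0; i < MAX_TAGS; i++)
        snapshot[i] = allowed_tags[i];
    enable_interrupts();               /* end critical section   */
    for (int i = 0; i < MAX_TAGS; i++)
        if (snapshot[i] == tag)
            return 1;                  /* open the door */
    return 0;
}
```

The critical section is kept as short as possible (just the copy), so a waiting dog's tag update is delayed by only a few microseconds rather than blocked.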
Interrupt latency refers primarily to the software interrupt handling latencies: the amount of
time that elapses from the moment an external interrupt arrives at the processor until the
moment that interrupt processing begins.
One of the most important aspects of kernel real-time performance is the ability to service an
interrupt request (IRQ) within a specified amount of time.
Here are the sources contributing to interrupt latency (abstracted from "Reduce RTOS latency in
interrupt-intensive apps"):
Operating system (OS) interrupt latency
An RTOS must sometimes disable interrupts while accessing critical OS data structures. The
maximum time that an RTOS disables interrupts is referred to as the OS interrupt latency.
Although this overhead will not be incurred on most interrupts since the RTOS disables
interrupts relatively infrequently, developers must always factor in this interrupt latency to
understand the worst-case scenario.
Low-level interrupt-related operations
When an interrupt occurs, the context must be initially saved and then later restored after the
interrupt processing has been completed. The amount of context that needs to be saved depends
on how many registers would potentially be modified by the ISR (Interrupt Service Routine).
Enabling the ISR to interact with the RTOS
An ISR will typically interact with an RTOS by making a system call such as a semaphore post.
To ensure the ISR function can complete and exit before any context switch to a task is made,
the RTOS interrupt dispatcher must disable preemption before calling the ISR function.
Once the ISR function completes, preemption is re-enabled and the application will context
switch to the highest priority thread that is ready to run. If there is no need for an ISR to make an
RTOS system call, the disable/enable kernel preemption operations would again add overhead. It
is logical to handle such an ISR outside of the RTOS.
Context switching
When an ISR defers processing to an RTOS task or other thread, a context switch needs to occur
for the task to run. Context switching will typically be the largest part of any RTOS-related
interrupt processing overhead.
IRQ (Interrupt Request)
An interrupt request (IRQ) is a hardware signal sent to the processor that temporarily stops a running program
and allows a special program, an interrupt handler, to run instead. Interrupts are used to handle
such events as data receipt from a modem or network, or a key press or mouse movement.
FIQ (Fast Interrupt Request)
An FIQ is simply a higher-priority interrupt request, prioritized by disabling IRQ and other
FIQ handlers while the request is being serviced. Therefore, no other interrupts can occur
during the processing of the active FIQ interrupt.
TRENDS IN EMBEDDED SYSTEMS
Embedded systems are on the rise as the technology paves the way for the future of smart
manufacturing across a range of industries. Microcontrollers — the hardware at the center of
embedded systems — are improving quickly, allowing for better machine control and
monitoring. In this article, we will discuss the emerging trends for embedded systems in 2019
that will enable enhanced security, better control, and improved scalability.
To give context into how large the embedded systems industry is, here are a few statistics:
The global market for the embedded systems industry was valued at $68.9 billion in 2017
and is expected to rise to $105.7 billion by the end of 2025.
The top 10 vendors together hold 40% of the embedded systems market share.
In 2015, embedded hardware contributed 93% of the market share, and it is expected to
dominate the market over embedded software in the upcoming years as well.
The industry for embedded systems is growing and there are still several barriers that must be
overcome. Below are five notable trends of the embedded systems market for 2019.
Embedded Security
With the rise of the Internet of Things (IoT), the primary focus of developers and manufacturers
is on security. In 2019, advanced technologies for embedded security will emerge as key
generators for identifying devices in an IoT network, and as microcontroller security solutions
that isolate security operations from normal operations.
Cloud Connectivity
Getting embedded industrial systems connected to the internet and cloud can take weeks and
months in the traditional development cycle. Consequently, cloud connectivity tools will be an
important future market for embedded systems. These tools are designed to simplify the process
of connecting embedded systems with cloud-based services by reducing the underlying hardware
complexities.
Mesh Networking
A similar yet innovative market for low-energy IoT device developers is Bluetooth mesh
networks. These solutions can be used for seamless connectivity of nearby devices while
reducing energy consumption and costs.
Lower Power Consumption
A key challenge for developers is the optimization of battery-powered devices for low power
consumption and maximum uptime. Several solutions are under development for monitoring and
reducing the energy consumption of embedded devices that we can expect to see in 2019. These
include energy monitors and visualizations that can help developers fine-tune their embedded
systems, and advanced Bluetooth and Wi-Fi modules that consume less power at the hardware
layer.
Real-Time Visualization
Developers currently lack tools for monitoring and visualizing their embedded industrial systems
in real time. The industry is working on real-time visualization tools that will give software
engineers the ability to review embedded software execution. These tools will enable developers
to keep a check on key metrics such as raw or processed sensor data and event-based context
switches for tracking the performance of embedded systems.
Deep Learning Applications
Deep learning represents a rich, yet unexplored embedded systems market that has a range of
applications from image processing to audio analysis. Even though developers are primarily
focused on security and cloud connectivity right now, deep learning and artificial intelligence
concepts will soon emerge as a trend in embedded systems.
The industrial sector for embedded systems is undergoing numerous transformations that will
enable developers to build systems that are high-performing, secure, and robust. As a developer
and manufacturer in this industry, it is important to stay updated with the latest technologies and
trends. For 2019, the embedded systems market is shaping up for simplified cloud connectivity,
improved security tools, real-time visualizations, lower power consumption, and deep learning
solutions.
Round Robin
The Round Robin architecture is the simplest architecture for embedded systems. The main
method consists of a loop that runs again and again, checking each of the I/O devices at each
turn to see if they need service. No fancy interrupts, no fear of shared data… just a plain
single execution thread that gets executed again and again.
Example: Multimeter
There is no particularly lengthy processing (even very simple microprocessors can check the
switch, take a measurement, and update the display several times per second), and small
delays in switch position changes will go unnoticed.
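The multimeter loop can be sketched in C. The device functions are stubs, and a bounded round_robin(turns) stands in for the real while(1) loop so the sketch terminates:

```c
#include <stdio.h>

/* Sketch of the Round Robin loop for the multimeter example: poll each
   "device" in turn, over and over. All names are illustrative stubs. */
int    switch_pos = 2;    /* pretend front-panel range switch */
double reading    = 0.0;  /* last measurement taken           */

void check_switch(void)     { /* would read the switch hardware */ }
void take_measurement(void) { reading = switch_pos * 1.5; /* fake ADC */ }
void update_display(void)   { printf("range %d: %.2f\n", switch_pos, reading); }

void round_robin(int turns) {     /* real code: for (;;) { ... } */
    for (int i = 0; i < turns; i++) {
        check_switch();           /* device 1 */
        take_measurement();       /* device 2 */
        update_display();         /* device 3 */
    }
}
```

Each device simply waits its turn; there is no prioritization of any kind.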
Advantages:
Simplicity: a single execution thread with no interrupts and no shared-data problems.
Disadvantages:
A sensor connected to the Arduino that urgently needs service must wait its turn.
Fragile. Only as strong as the weakest link. If a sensor breaks or something else breaks,
everything breaks.
Response time is unstable: even small changes to the code can change it significantly.
Round-Robin Problems
If any device needs a response in less time than the worst-case duration of the loop, the
system won't function.
If A and B take 5 ms each and Z needs a response time of less than 7 ms, the loop (A, B, Z) is
not possible. This can be mitigated somewhat by polling Z more often, i.e. (A, Z, B, Z) instead
of (A, B, Z).
Scalability of this solution is poor. Even if absolute deadlines do not exist, overall response time
may become unacceptably poor.
Round-Robin architecture is fragile: even if the programmer manages to tune the loop
sufficiently to provide a functional system, a single addition or change can ruin everything.
Round Robin with Interrupts
This Round Robin with Interrupts architecture is similar to the Round Robin architecture, except
it has interrupts. When an interrupt is triggered, the main program is put on hold and control
shifts to the interrupt service routine. Code that is inside the interrupt service routines has a
higher priority than the task code.
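A minimal C sketch of this pattern: the ISR does only the urgent work and sets a flag, and the task code in the main loop does the rest. All names here are invented for illustration:

```c
#include <stdint.h>

/* Round Robin with Interrupts, simulated: the ISR does only the urgent
   part (capturing the data) and sets a flag; the main loop polls the
   flag and does the longer processing at task-code priority. */
volatile int      data_ready = 0;    /* set by the ISR, cleared by task code */
volatile uint16_t latest_sample;     /* shared between ISR and task code     */
uint32_t          processed_total = 0;

/* Would be registered as a hardware ISR on a real system. */
void adc_isr(uint16_t sample) {
    latest_sample = sample;          /* urgent: grab the data */
    data_ready = 1;                  /* signal the task code  */
}

/* One turn of the main loop's task code. */
void task_code(void) {
    if (data_ready) {
        data_ready = 0;
        processed_total += latest_sample;  /* the non-urgent processing */
    }
}
```

data_ready and latest_sample are exactly the kind of shared data listed under the disadvantages: on real hardware their access from task code must be protected against the ISR.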
Advantages:
Urgent hardware events are handled in interrupt routines, with priority over the task code.
Disadvantages:
Shared data: data used by both the ISRs and the task code must be protected.
All interrupts could fire off concurrently, making worst-case timing harder to guarantee.
1. Preemptive Scheduling:
Preemptive scheduling is used when a process switches from the running state to the ready state
or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to
the process for a limited amount of time and then taken away; the process is placed back in the
ready queue if it still has CPU burst time remaining, and stays there until it gets its next
chance to execute.
Algorithms based on preemptive scheduling are: Round Robin (RR),Shortest Remaining Time
First (SRTF), Priority (preemptive version), etc.
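Round Robin, the first preemptive algorithm named above, can be simulated in a few lines of C. All processes are assumed to arrive at t = 0, and finish[] receives each process's completion time:

```c
/* Tiny simulation of preemptive Round Robin: each process runs for at
   most one quantum, has the CPU taken away, and waits for its next turn
   until its burst time is used up. Supports up to 16 processes, all
   assumed to arrive at t = 0. */
void round_robin_schedule(const int burst[], int finish[], int n, int quantum) {
    int remaining[16], time = 0, done = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {        /* cycle through the processes */
            if (remaining[i] == 0) continue; /* this one already finished   */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                   /* process i runs one slice    */
            remaining[i] -= slice;           /* then the CPU is taken away  */
            if (remaining[i] == 0) { finish[i] = time; done++; }
        }
    }
}
```

For bursts {5, 3, 1} with quantum 2, the finish times come out as 9, 8 and 5: the short job completes early because the longer ones keep being preempted.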
2. Non-Preemptive Scheduling:
Non-preemptive scheduling is used when a process terminates or switches from the running state
to the waiting state; once the CPU has been allocated to a process, the process keeps it until
it releases the CPU.
Algorithms based on non-preemptive scheduling are: Shortest Job First (SJF, basically non-
preemptive) and Priority (non-preemptive version), etc.
1. In preemptive scheduling the CPU is allocated to a process for a limited time,
whereas in non-preemptive scheduling the CPU is allocated to the process until it
terminates or switches to the waiting state.
2. In preemptive scheduling there is the overhead of switching the process between the
ready and running states and of maintaining the ready queue, whereas non-preemptive
scheduling has no such switching overhead.
3. In preemptive scheduling, if a high-priority process frequently arrives in the ready queue
then a process with low priority has to wait for a long time, and it may starve. On the
other hand, in non-preemptive scheduling, if the CPU is allocated to a process with a
larger burst time then the processes with small burst times may starve.
4. Preemptive scheduling is flexible: it allows critical processes to access the CPU as
they arrive in the ready queue, no matter what process is executing currently. Non-
preemptive scheduling is called rigid because even if a critical process enters the ready
queue, the process running on the CPU is not disturbed.
5. Preemptive scheduling has to maintain the integrity of shared data, which makes it
costly; this is not the case with non-preemptive scheduling.
Comparison Chart:
Parameter        Preemptive                           Non-Preemptive
CPU allocation   For a limited time                   Until the process terminates or waits
Overhead         Switching and ready-queue overhead   No switching overhead
Starvation       Low-priority processes may starve    Short-burst processes may starve
Flexibility      Flexible                             Rigid
Cost             Costly (shared-data integrity)       Not costly