UNIT-1 Notes Embedded Systems


SCSA1307 EMBEDDED SYSTEM

UNIT- 1 INTRODUCTION AND REVIEW OF EMBEDDED HARDWARE



Embedded systems terminology


Embedded systems are ubiquitous. These dedicated small computers are present in
communications systems, vehicles, manufacturing machinery, detection systems, and many
machines that make our lives easier.
The open nature of Linux (and of Android, which is built on it) and its availability for many
different hardware architectures make it an excellent candidate for embedded platforms.
The following are the most common concepts you should know while working with embedded
devices.

Bootloader
A bootloader is a small piece of software that executes soon after you power up a computer. On a
desktop PC, the bootloader resides on the master boot record (MBR) of the hard drive, and is
executed after the PC BIOS performs various system initializations. The bootloader then passes
system information to the kernel (for instance, the hard drive partition to mount as root) and then
executes the kernel.
In an embedded system, the role of the bootloader is more complicated, since an embedded
system does not have a BIOS to perform the initial system configuration. The low-level
initialization of the microprocessor, memory controllers, and other board-specific hardware
varies from board to board and CPU to CPU. These initializations must be performed before a
kernel image can execute.
At a minimum, a bootloader for an embedded system performs the following functions:
 Initializes the hardware, especially the memory controller.
 Provides boot parameters for the operating system image.
 Starts the operating system image.
Additionally, most bootloaders also provide convenient features that simplify development and
update of the firmware, such as:
 Reading and writing arbitrary memory locations.
 Uploading new binary images to the board's RAM via a serial line or Ethernet.
 Copying binary images from RAM to Flash memory.

Kernel
The kernel is the fundamental part of an operating system. It is responsible for managing the
resources and the communication between hardware and software components.
The kernel offers hardware abstraction to the applications and provides secure access to the
system memory. It also includes an interrupt handler that handles all requests or completed I/O
operations.

Kernel modules
Modules are pieces of code that can be loaded and unloaded into the kernel upon demand. They
extend the functionality of the kernel without requiring a system reboot.
For example, one type of module is the device driver, which allows the kernel to access hardware
connected to the system. Without these modules, Linux developers would have to build
monolithic kernels and add new functionality directly into the kernel image. The result would be
a large, cumbersome kernel. Another disadvantage of working without a kernel module is that
you would have to rebuild and reboot the kernel every time you add new functionality.
In embedded systems, where functionality can be activated depending on the needs, kernel
modules become a very effective way of adding features without enlarging the kernel image size.

Root file system


Operating systems normally rely on a set of files and directories. The root file system is the top
of the hierarchical file tree. It contains the files and directories critical for system operation,
including the device directory and programs for booting the system. The root file system also
contains mount points where other file systems can be mounted to connect to the root file system
hierarchy.

Applications
Software applications are programs that employ the capabilities and resources of a computer to
do a particular task.
Applications make use of hardware devices by communicating with device drivers, which are
part of the kernel.

Cross-compilation
If you generate code for an embedded target on a development system with a different
microprocessor architecture, you need a cross-development environment. A cross-development
compiler is one that executes in the development system (for example, an x86 PC), but generates
code that executes in a different processor (for example, if the target is ARM).

Logic gates

Digital systems are said to be constructed by using logic gates. These gates are the AND, OR,
NOT, NAND, NOR, EXOR and EXNOR gates. The basic operations are described below with
the aid of truth tables.
AND gate

The AND gate is an electronic circuit that gives a high output (1) only if all its inputs are
high. A dot (.) is used to show the AND operation i.e. A.B. Bear in mind that this dot is
sometimes omitted i.e. AB

OR gate

The OR gate is an electronic circuit that gives a high output (1) if one or more of its
inputs are high. A plus (+) is used to show the OR operation.

NOT gate

The NOT gate is an electronic circuit that produces an inverted version of the input at its
output. It is also known as an inverter. If the input variable is A, the inverted output is
known as NOT A. This is also shown as A', or A with a bar over the top. A NAND gate can
be configured to act as a NOT gate by tying its two inputs together; the same can be done
with a NOR gate.
NAND gate

This is a NOT-AND gate which is equal to an AND gate followed by a NOT gate. The
outputs of all NAND gates are high if any of the inputs are low. The symbol is an AND
gate with a small circle on the output. The small circle represents inversion.

NOR gate

This is a NOT-OR gate which is equal to an OR gate followed by a NOT gate. The
outputs of all NOR gates are low if any of the inputs are high.

The symbol is an OR gate with a small circle on the output. The small circle represents
inversion.

EXOR gate

The 'Exclusive-OR' gate is a circuit which will give a high output if either, but not
both, of its two inputs are high. An encircled plus sign (⊕) is used to show the EXOR
operation.
EXNOR gate

The 'Exclusive-NOR' gate circuit does the opposite of the EXOR gate. It will give a low output
if either, but not both, of its two inputs are high. The symbol is an EXOR gate with a small
circle on the output. The small circle represents inversion.

MEMORY

INTRODUCTION
There are different types of memories available to be used in computers as well as
embedded systems.

TYPES OF MEMORY
There are three main types of memories, they are

RAM (Random Access Memory)


It is read write memory.

Data at any memory location can be read or written.

It is volatile memory, i.e. it retains its contents only as long as electricity is supplied.

Data access to RAM is very fast

ROM (Read Only Memory)

It is read only memory.

Data at any memory location can be only read.

It is non-volatile memory, i.e. the contents are retained even after electricity is
switched off and available after it is switched on.

Data access to ROM is slow compared to RAM

HYBRID

It is a combination of RAM as well as ROM

It has certain features of RAM and some of ROM


Like RAM, the contents of hybrid memory can be read and written

Like ROM, the contents of hybrid memory are non-volatile

The following figure gives a classification of different types of memory

TYPES OF RAM
 There are two important memory devices in the RAM family.
SRAM (Static RAM)

DRAM (Dynamic RAM)

SRAM (Static RAM)


It retains the content as long as the power is applied to the chip.

If the power is turned off then its contents will be lost forever.

DRAM (Dynamic RAM)


DRAM has an extremely short data lifetime (usually less than a quarter of a second). This is
true even when power is applied constantly.

A DRAM controller is used to make DRAM behave more like SRAM.


The DRAM controller periodically refreshes the data stored in the DRAM. By refreshing the
data several times a second, the DRAM controller keeps the contents of memory alive for a long
time.

TYPES OF ROM

There are three types of ROM described as follows:

Masked ROM

These are hardwired memory devices found on systems.

It contains a pre-programmed set of instructions and data that cannot be modified or
appended in any way. (It is just like an audio CD that contains pre-written songs and
does not allow any other data to be written.)

The main advantage of masked ROM is its low cost of production.

PROM (PROGRAMMABLE ROM )

This memory device comes in an un-programmed state, i.e. at the time of purchase it is in an
un-programmed state, and it allows the user to write his/her own program or code into this ROM.

In the un-programmed state the data is entirely made up of 1’s.

PROMs are also known as one-time-programmable (OTP) devices because data can be written to
them only once. If the data on the chip has an error or needs to be modified, the chip has to be
discarded and the modified data has to be written to a new PROM.

EPROM (ERASABLE-AND-PROGRAMABLE ROM)

It is the same as a PROM and is programmed in the same manner as a PROM.

It can be erased and reprogrammed repeatedly as the name suggests.

The erase operation in case of an EPROM is performed by exposing the chip to a source of
ultraviolet light.

The reprogramming ability makes the EPROM an essential part of the software development and
testing process.

TYPES OF HYBRID MEMORY


There are three types of Hybrid memory devices:

EEPROMs
EEPROM stands for Electrically Erasable and Programmable ROM.

It is the same as an EPROM, but the erase operation is performed electrically.

Any byte in EEPROM can be erased and rewritten as desired

Flash

Flash memory is the most recent advancement in memory technology.

Flash memory devices are high density, low cost, nonvolatile, fast (to read, but not to write), and
electrically reprogrammable.

Flash is much more popular than EEPROM and is rapidly displacing many of the ROM devices.

Flash devices can be erased only one sector at a time, not byte by byte.

NVRAM

NVRAM is usually just an SRAM with battery backup.

When power is turned on, the NVRAM operates just like any other SRAM but when
power is off, the NVRAM draws enough electrical power from the battery to retain its
content.

NVRAM is fairly common in embedded systems.

It is more expensive than SRAM.

DIRECT MEMORY ACCESS (DMA)


DMA is a technique for transferring blocks of data directly between two hardware devices.

In the absence of DMA the processor must read the data from one device and write it to the
other one byte or word at a time.

DMA Absence Disadvantage: If the amount of data to be transferred is large, or the frequency
of transfer is high, the rest of the software might never get a chance to run.

DMA Presence Advantage: The DMA Controller performs the entire transfer with little help
from the Processor.

Working of DMA

The Processor provides the DMA Controller with the source address, the destination address,
and the total number of bytes in the block of data which needs transfer.

After copying each byte, each address is incremented and the count of remaining bytes is
reduced by one.

When the number of remaining bytes reaches zero, the block transfer ends and the DMA
Controller sends an Interrupt to the Processor.

MICROPROCESSOR BUS

A bus is a group of conducting wires which carries information; all the peripherals are
connected to the microprocessor through buses (the bus organization of the 8085
microprocessor is a classic example).

There are three types of buses in a microprocessor −


 Data Bus − Lines that carry data to and from memory are called data bus. It is a
bidirectional bus with width equal to word length of the microprocessor.
 Address Bus − It is a unidirectional bus responsible for carrying the address of a memory
location or I/O port from the CPU to memory or an I/O port.
 Control Bus − Lines that carry control signals like clock signals, interrupt
signal or ready signal are called control bus. They are bidirectional. Signal that denotes
that a device is ready for processing is called ready signal. Signal that indicates to a
device to interrupt its process is called an interrupt signal.

INTERRUPT
An interrupt is a signal to the processor emitted by hardware or software indicating an event that
needs immediate attention. Whenever an interrupt occurs, the controller completes the
execution of the current instruction and starts the execution of an Interrupt Service
Routine (ISR) or Interrupt Handler. ISR tells the processor or controller what to do when the
interrupt occurs. The interrupts can be either hardware interrupts or software interrupts.

Hardware Interrupt

A hardware interrupt is an electronic alerting signal sent to the processor from an external
device, like a disk controller or an external peripheral. For example, when we press a key on the
keyboard or move the mouse, they trigger hardware interrupts which cause the processor to read
the keystroke or mouse position.

Software Interrupt

A software interrupt is caused either by an exceptional condition or by a special instruction in
the instruction set which causes an interrupt when it is executed by the processor. For example,
if the processor's arithmetic logic unit executes an instruction to divide a number by zero, it
raises a divide-by-zero exception, causing the computer to abandon the calculation or display
an error message. Software interrupt instructions work similar to subroutine calls.

Interrupt Service Routine

For every interrupt, there must be an interrupt service routine (ISR), or interrupt handler.
When an interrupt occurs, the microcontroller runs the interrupt service routine. For every
interrupt, there is a fixed location in memory that holds the address of its interrupt service
routine (ISR). The table of memory locations set aside to hold the addresses of ISRs is called
the Interrupt Vector Table.

Interrupt Vector Table

There are six interrupts including RESET in 8051.

Interrupt                          ROM Location (Hex)   Pin
Reset                              0000                 9
External HW interrupt 0 (INT0)     0003                 P3.2 (12)
Timer 0 (TF0)                      000B                 -
External HW interrupt 1 (INT1)     0013                 P3.3 (13)
Timer 1 (TF1)                      001B                 -
Serial COM (RI and TI)             0023                 -

 When the reset pin is activated, the 8051 jumps to the address location 0000. This is
power-up reset.
 Two interrupts are set aside for the timers: one for timer 0 and one for timer 1. Memory
locations are 000BH and 001BH respectively in the interrupt vector table.
 Two interrupts are set aside for hardware external interrupts. Pin no. 12 and Pin no. 13 in
Port 3 are for the external hardware interrupts INT0 and INT1, respectively. Memory
locations are 0003H and 0013H respectively in the interrupt vector table.
 Serial communication has a single interrupt that belongs to both receive and transmit.
Memory location 0023H belongs to this interrupt.
Steps to Execute an Interrupt

When an interrupt gets active, the microcontroller goes through the following steps −
 The microcontroller closes the currently executing instruction and saves the address of
the next instruction (PC) on the stack.
 It also saves the current status of all the interrupts internally (i.e., not on the stack).
 It jumps to the memory location of the interrupt vector table that holds the address of the
interrupts service routine.
 The microcontroller gets the address of the ISR from the interrupt vector table and jumps
to it. It executes the interrupt service routine until it reaches the last instruction of the
routine, which is RETI (return from interrupt).
 Upon executing the RETI instruction, the microcontroller returns to the location where it
was interrupted. First, it gets the program counter (PC) address from the stack by
popping the top bytes of the stack into the PC. Then, it starts to execute from that
address.

THE SHARED DATA PROBLEM


A big problem in embedded systems occurs in embedded software when an interrupt service
routine and the main program share the same data. What happens if the main program is in the
middle of doing some important calculations using some piece of data…an interrupt occurs that
alters that piece of data…and then the main program finishes its calculation? Oops!
The calculation performed by the main program might be corrupted because it is based on the
wrong/different data value. This is known as the shared data problem.

Example of Shared Data Problem

Imagine you are a software engineer working at a company. Your team is responsible for
designing an automatic dog entry door. This embedded device can be wirelessly updated
with RFID tags for dogs or other pets to be allowed entry.
The door needs to automatically unlock for dogs that are in the vicinity of the door. A pet must
be allowed to enter even when the table of RFID tags is being updated. The RFID tag IDs are
shared data since the interrupt service routine that must update the tag IDs and the main ()
program that is responsible for automatically unlocking the door when dogs are in the vicinity
both share and use this data. A problem will occur when the doggy door is in the middle of an
RFID tag ID update when a dog needs to get through the door. We wouldn’t want to let the poor
dog wait outside in the freezing cold while the device is in the middle of an RFID tag update!
How do we create a solution that solves the shared data problem? The RFID tags need to be
updated regularly but that same data is needed regularly by the main () program to let dogs enter
when they need to. Let’s solve this now.

• This embedded device can be wirelessly updated with RFID tags.


• Dogs or other pets must be allowed entry when they are in the vicinity of the door.
• Dog must be allowed to enter even when the table of RFID tags is being updated.
• RFID tag IDs are shared data which must be managed.
• In the shared data problem for the doggy door controller, we need to make sure the dog
can enter at all times while the RFID tags are being updated. Because this is a dog, it is
unacceptable for the door to remain locked and keep a dog waiting.
INTERRUPT LATENCY

Interrupt latency refers primarily to software interrupt-handling latency: in other words, the
amount of time that elapses from the time that an external interrupt arrives at the processor
until the time that the interrupt processing begins.
One of the most important aspects of kernel real-time performance is the ability to service an
interrupt request (IRQ) within a specified amount of time.

Here are the sources contributing to interrupt latency (abstracted from Reduce RTOS latency in
interrupt-intensive apps):
Operating system (OS) interrupt latency
An RTOS must sometimes disable interrupts while accessing critical OS data structures. The
maximum time that an RTOS disables interrupts is referred to as the OS interrupt latency.
Although this overhead will not be incurred on most interrupts since the RTOS disables
interrupts relatively infrequently, developers must always factor in this interrupt latency to
understand the worst-case scenario.
Low-level interrupt-related operations

When an interrupt occurs, the context must be initially saved and then later restored after the
interrupt processing has been completed. The amount of context that needs to be saved depends
on how many registers would potentially be modified by the ISR (Interrupt Service Routine).
Enabling the ISR to interact with the RTOS

An ISR will typically interact with an RTOS by making a system call such as a semaphore post.
To ensure the ISR function can complete and exit before any context switch to a task is made,
the RTOS interrupt dispatcher must disable preemption before calling the ISR function.
Once the ISR function completes, preemption is re-enabled and the application will context
switch to the highest priority thread that is ready to run. If there is no need for an ISR to make an
RTOS system call, the disable/enable kernel preemption operations would again add overhead. It
is logical to handle such an ISR outside of the RTOS.
Context switching

When an ISR defers processing to an RTOS task or other thread, a context switch needs to occur
for the task to run. Context switching will still typically be the largest part of any RTOS-related
interrupt processing overhead.
IRQ (Interrupt Request)

An interrupt request (IRQ) is a hardware signal sent to the processor that temporarily stops a
running program and allows a special program, an interrupt handler, to run instead. Interrupts
are used to handle such events as data receipt from a modem or network, or a key press or
mouse movement.
FIQ (Fast Interrupt Request)

An FIQ is simply a higher-priority interrupt request, prioritized by disabling IRQ and other
FIQ handlers during request servicing. Therefore, no other interrupts can occur during the
processing of the active FIQ interrupt.

EMBEDDED SYSTEM EVOLUTION TRENDS

Embedded systems are on the rise as the technology paves the way for the future of smart
manufacturing across a range of industries. Microcontrollers — the hardware at the center of
embedded systems — are improving quickly, allowing for better machine control and
monitoring. In this article, we will discuss the emerging trends for embedded systems in 2019
that will enable enhanced security, better control, and improved scalability.

Current Trends in Embedded Systems Applications

An embedded system is an application-specific system designed with a combination of hardware
and software to meet real-time constraints. The key characteristics of embedded industrial
systems include speed, security, size, and power. The major trends in the embedded systems
market revolve around the improvement of these characteristics.

To give context into how large the embedded systems industry is, here are a few statistics:

 The global market for the embedded systems industry was valued at $68.9 billion in 2017
and is expected to rise to $105.7 billion by the end of 2025.
 40% of the industrial share for embedded systems market is shared by the top 10 vendors.
 In 2015, embedded hardware contributed to 93% of the market share and it is expected to
dominate the market over embedded software in the upcoming years as well.

Future Trends of Embedded Systems Industry

The industry for embedded systems is growing and there are still several barriers that must be
overcome. Below are five notable trends of the embedded systems market for 2019.

Improved Security for Embedded Devices

With the rise of the Internet of Things (IoT), the primary focus of developers and manufacturers
is on security. In 2019, advanced technologies for embedded security will emerge as key
generators for identifying devices in an IoT network, and as microcontroller security solutions
that isolate security operations from normal operations.

Cloud Connectivity and Mesh Networking

Getting embedded industrial systems connected to the internet and cloud can take weeks and
months in the traditional development cycle. Consequently, cloud connectivity tools will be an
important future market for embedded systems. These tools are designed to simplify the process
of connecting embedded systems with cloud-based services by reducing the underlying hardware
complexities.

A similar yet innovative market for low-energy IoT device developers is Bluetooth mesh
networks. These solutions can be used for seamless connectivity of nearby devices while
reducing energy consumption and costs.

Reduced Energy Consumption

A key challenge for developers is the optimization of battery-powered devices for low power
consumption and maximum uptime. Several solutions are under development for monitoring and
reducing the energy consumption of embedded devices that we can expect to see in 2019. These
include energy monitors and visualizations that can help developers fine-tune their embedded
systems, and advanced Bluetooth and Wi-Fi modules that consume less power at the hardware
layer.

Visualization Tools with Real Time Data

Developers currently lack tools for monitoring and visualizing their embedded industrial systems
in real time. The industry is working on real-time visualization tools that will give software
engineers the ability to review embedded software execution. These tools will enable developers
to keep a check on key metrics such as raw or processed sensor data and event-based context
switches for tracking the performance of embedded systems.
Deep Learning Applications

Deep learning represents a rich, yet unexplored embedded systems market that has a range of
applications from image processing to audio analysis. Even though developers are primarily
focused on security and cloud connectivity right now, deep learning and artificial intelligence
concepts will soon emerge as a trend in embedded systems.

Embedded System Innovations

The industrial sector for embedded systems is undergoing numerous transformations that will
enable developers to build systems that are high-performing, secure, and robust. As a developer
and manufacturer in this industry, it is important to stay updated with the latest technologies and
trends. For 2019, the embedded systems market is shaping up for simplified cloud connectivity,
improved security tools, real-time visualizations, lower power consumption, and deep learning
solutions.

ROUND ROBIN ARCHITECTURE

The Round Robin architecture is the simplest architecture for embedded systems. The main
method consists of a loop that runs again and again, checking each of the I/O devices at each turn
in order to see if they need service. No fancy interrupts, no fear of shared data…just a plain
single thread of execution.

Example: Multimeter

 very small number of I/O: (switch, display, probes)

 no particularly lengthy processing (even very simple microprocessors can check switch,
take measurement and update display several times per second.)

 measurements can be taken at any time.

 display can be written to at any speed.

 small delays in responding to switch position changes will go unnoticed.

Advantages:

 Simplest of all the architectures


 No interrupts
 No shared data
 No latency concerns
 No tight response requirements

Disadvantages:

 A sensor connected to the system that urgently needs service must wait its turn.
 Fragile. Only as strong as the weakest link. If a sensor breaks or something else breaks,
everything breaks.
 Response time is unstable in the event of changes to the code.

Round-Robin Problems

If any device needs a response in less time than the worst-case duration of the loop, the system
won't function.

If A and B take 5ms each and Z needs a response time of less than 7ms, it's not possible. This can
be mitigated somewhat by running (A, Z, B, Z) in a loop instead of (A, B, Z).
Scalability of this solution is poor. Even if absolute deadlines do not exist, overall response time
may become unacceptably poor.

Round-Robin architecture is fragile: even if the programmer manages to tune the loop
sufficiently to provide a functional system, a single addition or change can ruin everything.

Round Robin with Interrupts

This Round Robin with Interrupts architecture is similar to the Round Robin architecture, except
it has interrupts. When an interrupt is triggered, the main program is put on hold and control
shifts to the interrupt service routine. Code that is inside the interrupt service routines has a
higher priority than the task code.

Advantages:

 Greater control over the priority levels


 Flexible
 Fast response time to I/O signals
 Great for managing sensors that need to be read at prespecified time intervals
Disadvantages:

 Shared data
 All interrupts could fire off concurrently

PREEMPTIVE AND NON-PREEMPTIVE SCHEDULING

Prerequisite – CPU Scheduling

1. Preemptive Scheduling:

Preemptive scheduling is used when a process switches from the running state to the ready state
or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to
the process for a limited amount of time and then taken away; the process is placed back in the
ready queue if it still has CPU burst time remaining. It stays in the ready queue till it gets its
next chance to execute.

Algorithms based on preemptive scheduling are: Round Robin (RR), Shortest Remaining Time
First (SRTF), Priority (preemptive version), etc.

2. Non-Preemptive Scheduling:

Non-preemptive scheduling is used when a process terminates, or a process switches from the
running state to the waiting state. In this scheduling, once the resources (CPU cycles) are
allocated to a process, the process holds the CPU till it terminates or reaches a waiting state.
Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its
execution. Instead, it waits till the process completes its CPU burst time, and then it can
allocate the CPU to another process.

Algorithms based on non-preemptive scheduling are: Shortest Job First (SJF, basically
non-preemptive), Priority (non-preemptive version), etc.

Key Differences between Preemptive and Non-Preemptive Scheduling:

1. In preemptive scheduling the CPU is allocated to the processes for the limited time
whereas in Non-preemptive scheduling, the CPU is allocated to the process till it
terminates or switches to waiting state.

2. The executing process in preemptive scheduling is interrupted in the middle of execution
when a higher-priority one arrives, whereas the executing process in non-preemptive
scheduling is not interrupted in the middle of execution and runs till its execution
completes.

3. In preemptive scheduling, there is the overhead of switching the process from the ready
state to the running state and vice versa, and of maintaining the ready queue, whereas
non-preemptive scheduling has no overhead of switching the process from the running
state to the ready state.

4. In preemptive scheduling, if a high-priority process frequently arrives in the ready queue,
then a process with low priority has to wait for a long time and may starve. On the other
hand, in non-preemptive scheduling, if the CPU is allocated to a process with a large
burst time, then processes with small burst times may starve.

5. Preemptive scheduling attains flexibility by allowing critical processes to access the CPU
as soon as they arrive in the ready queue, no matter what process is executing currently.
Non-preemptive scheduling is called rigid because even if a critical process enters the
ready queue, the process running on the CPU is not disturbed.

6. Preemptive scheduling has to maintain the integrity of shared data, so it has an associated
cost, which is not the case with non-preemptive scheduling.

Comparison Chart:

Parameter        PREEMPTIVE SCHEDULING                     NON-PREEMPTIVE SCHEDULING

Basic            Resources (CPU cycles) are allocated      Once resources (CPU cycles) are allocated
                 to a process for a limited time.          to a process, the process holds them till it
                                                           completes its burst time or switches to the
                                                           waiting state.

Interrupt        A process can be interrupted in           A process cannot be interrupted until it
                 between.                                  terminates itself or its time is up.

Starvation       If a process with high priority           If a process with a long burst time is
                 frequently arrives in the ready queue,    running on the CPU, later processes with
                 a low-priority process may starve.        less CPU burst time may starve.

Overhead         Has the overhead of scheduling the        Has no scheduling overhead.
                 processes.

Flexibility      Flexible.                                 Rigid.

Cost             Cost associated.                          No cost associated.

CPU utilization  High.                                     Low.

Examples         Round Robin, Shortest Remaining           First Come First Serve, Shortest Job
                 Time First.                               First.
