RTOS Module 2 Notes
Syllabus
Process Scheduling: FCFS, SJF, Priority, Round-Robin, Multilevel Queue and Multilevel Feedback
Queue Scheduling. Thread: Structure. User and kernel level threads, multi-threading models,
multiprocessor scheduling.
Q. Scheduling Queues:-
1. Job queue:- As processes enter the system, they are put into a job queue. This queue
consists of all processes in the system.
2. Ready queue:-The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue. This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list. We extend each
PCB to include a pointer field that points to the next PCB in the ready queue.
3. Device queue:- The list of processes waiting for a particular I/O device is called a device queue.
Each device has its own device queue.
Q. Queuing diagram:-
A new process is initially put in the ready queue, where it waits until it is selected for execution (dispatched). Once the process is allocated the CPU and is executing, one of several events could occur:
o The process could issue an I/O request, and then be placed in an I/O (device) queue.
o The process could create a new child process and wait for the child's termination.
o The process could be removed forcibly from the CPU, as a result of an interrupt, and be
put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready
state, and is then put back in the ready queue. A process continues this cycle until it
terminates.
Long-term scheduler or job scheduler:-
The long-term scheduler determines which programs are admitted to the system; it may attempt to keep a good mix of processor-bound and I/O-bound processes.
For interactive processes in a time-sharing system, the OS will accept process-creation requests from users until the system is saturated.
Short-term scheduler or CPU scheduler:-
The short-term scheduler, or CPU scheduler, selects from among the processes that are
ready to execute, and allocates the CPU to one of them.
Assigning the CPU to a process is called "process dispatching".
It is also known as the dispatcher.
It executes most frequently of the three schedulers.
The short-term scheduler is invoked whenever an event occurs that may lead to the blocking of
the current process, or that may call for another process to be chosen for execution, for example:
Clock interrupts
I/O interrupts
Operating system calls and traps
Signals (e.g., semaphores)
Medium-term Scheduler:-
When a process is suspended after making an I/O request, it cannot make any progress
until the requested I/O is completed by the OS.
Sometimes a suspended process is removed from main memory and shifted to
secondary storage (swapping out), to make space for another process.
The medium-term scheduler is charged with the handling of swapped-out processes.
It determines when a swapped-out process can be moved back into memory (swapping
in) and resumed from the same point where it was swapped out.
Swapping may be necessary to improve the process mix, or because a change in memory
requirements has overcommitted the available memory.
Long-Term Scheduler: It is a job scheduler.
Short-Term Scheduler: It is a CPU scheduler.
Medium-Term Scheduler: It is a process-swapping scheduler.
The execution of a process consists of a cycle of CPU execution and I/O wait.
o A process begins with a CPU burst, followed by an I/O burst, followed by another
CPU burst, and so on. The last CPU burst ends with a system request to terminate
the execution.
The CPU burst durations vary from process to process and computer to computer.
o An I/O bound program has many very short CPU bursts.
o A CPU bound program might have a few very long CPU bursts.
Q. Context Switching:-
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process - this is called context switch.
The time it takes is dependent on hardware support.
Context-switch time is overhead; the system does no useful work while switching.
Steps in Context Switching
Save the context of the processor, including the program counter and other registers.
Update the PCB of the running process with its new state and other associated
information.
Move the PCB to the appropriate queue (ready, blocked, etc.).
Select another process for execution.
Update PCB of the selected process.
Restore CPU context from that of the selected process.
When the process is switched, the following information is stored for later use.
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used registers
• Changed state
• I/O State information
• Accounting information
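As a rough sketch only (real context switching is done in kernel assembly; the structure fields and function names here are invented for illustration), the following C program models saving and restoring a process context through its PCB:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified CPU state and PCB, for illustration only. */
typedef struct {
    unsigned long pc;        /* program counter   */
    unsigned long regs[8];   /* general registers */
} cpu_state_t;

typedef struct {
    int         pid;
    cpu_state_t saved;       /* saved CPU context */
} pcb_t;

static cpu_state_t cpu;      /* the (simulated) physical CPU */

/* Save the running process's context into its PCB, then
 * restore the selected process's context from its PCB.   */
void context_switch(pcb_t *prev, pcb_t *next) {
    memcpy(&prev->saved, &cpu, sizeof cpu);  /* save old context    */
    memcpy(&cpu, &next->saved, sizeof cpu);  /* restore new context */
}

int main(void) {
    pcb_t p1 = { .pid = 1 };
    pcb_t p2 = { .pid = 2, .saved = { .pc = 0x400 } };
    cpu.pc = 0x100;                /* P1 is "running" at pc 0x100 */
    context_switch(&p1, &p2);      /* dispatch P2                 */
    printf("CPU pc = 0x%lx (P2); P1 saved pc = 0x%lx\n", cpu.pc, p1.saved.pc);
    return 0;
}
```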
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.
Under circumstances 1 and 4, the scheduling scheme is non-preemptive, i.e.:
o When a process switches from the running state to the waiting state.
o When a process terminates
Otherwise, the scheduling scheme is preemptive, i.e.:
o When a process switches from the running state to the ready state.
o When a process switches from the waiting state to the ready state.
Under non-preemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it releases it, either by terminating or by switching to
the waiting state.
This scheduling method was used by early Microsoft Windows environments (Windows 3.x).
Preemptive:
Currently running process may be interrupted and moved to the Ready state by the OS.
Allows for better service since any one process cannot monopolize the processor for very long.
Q. Dispatcher:-The dispatcher is the module that gives control of the CPU to the process
selected by the short-term scheduler. This function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
Q. Scheduling Criteria:
CPU scheduling algorithms are compared on criteria such as the following:
• CPU utilization: keep the CPU as busy as possible.
• Throughput: the number of processes that complete their execution per time unit.
• Turnaround time: the interval from submission of a process to its completion.
• Waiting time: the total time a process spends waiting in the ready queue.
• Response time: the time from submission of a request until the first response is produced.
Q. Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. There are many different CPU scheduling algorithms;
the main ones are given below.
A. First-Come, First-Served Scheduling:-
The process that requests the CPU first is allocated the CPU first. It is a non-preemptive
algorithm.
Can easily be implemented with a FIFO queue.
When a process enters the ready queue, its PCB is linked onto the tail of the queue.
When CPU is free, it is allocated to the process at the head of the queue.
Example:
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the
result shown in the corresponding Gantt chart: a bar chart that illustrates a
particular schedule, including the start and finish times of each of the participating
processes.
Thus, the average waiting time under an FCFS policy is generally not minimal, and may vary
substantially if the processes' CPU burst times vary greatly.
The FCFS algorithm is particularly troublesome for time-sharing systems, where it is
important that each user get a share of the CPU at regular intervals.
Whenever two processes have the same arrival time, we pick the one with the
smaller process ID.
Advantages
Very simple
Disadvantages
Long average and worst-case waiting times
Poor dynamic behavior (convoy effect - short process behind long process).
Convoy effect in FCFS:
Assume one CPU-bound process and many I/O-bound processes. As the processes flow around
the system, the following scenario may result.
The CPU-bound process will get and hold the CPU. During this time, all the other processes will
finish their I/O and will move into the ready queue, waiting for the CPU. While the processes
wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its
CPU burst and moves to an I/O device.
All the I/O-bound processes, which have short CPU bursts, execute quickly and move back to
the I/O queues. At this point, the CPU sits idle. The CPU-bound process will then move back to
the ready queue and be allocated the CPU. Again, all the I/O processes end up waiting in the
ready queue until the CPU-bound process is done.
There is a convoy effect as all the other processes wait for the one big process to get off the
CPU.
The convoy effect results in lower CPU and device utilization.
The FCFS scheduling algorithm is non-preemptive.
Once the CPU has been allocated to a process, that process keeps the CPU until it releases the
CPU, either by terminating or by requesting I/O. The FCFS algorithm is thus particularly
troublesome for time-sharing systems.
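As a small illustration of how FCFS waiting times add up, the sketch below (the burst values are illustrative, not taken from these notes) computes FCFS waiting times for processes that all arrive at time 0:

```c
#include <stdio.h>

/* FCFS: each process waits for the combined burst time of all
 * processes ahead of it in the queue (all arrive at time 0). */
int main(void) {
    int burst[] = {24, 3, 3};                /* illustrative bursts (ms) */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d: waiting time = %d ms\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                    /* everyone behind waits longer */
    }
    printf("Average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}
```

With bursts 24, 3, 3 the long first job makes the average 17 ms; reordering them as 3, 3, 24 would drop it to 3 ms, which is the convoy effect in miniature.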
B. Shortest-Job-First (SJF) Scheduling:-
The CPU is assigned to the process whose next CPU burst is the smallest. In its basic
form it is a non-preemptive algorithm.
Example:
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
Gantt Chart:-
| P1 | P2 | P4 | P3 |
0    8    12   17   26
AWT = [0 + (8 − 1) + (12 − 3) + (17 − 2)] / 4 = 31/4 = 7.75 ms
(P1 runs first because it is the only process present at time 0; at each completion, the
waiting process with the shortest burst is chosen next.)
In SJF, the time durations specified are actually the execution times of the next
CPU burst of each process.
In FCFS there is no such problem: the only criterion is that the job which arrived
in the ready queue first is executed first, without bothering about the length of the
next CPU burst.
In SJF, we must have an idea of the next CPU burst duration: the job whose next CPU
burst is minimum is executed first, and the job whose next CPU burst is maximum
is executed last.
The next CPU burst is generally predicted as an exponential average of the measured lengths of
previous CPU bursts. We can define the exponential average with the following formula.
Let t_n be the length of the nth CPU burst, and let τ_(n+1) be our predicted value for the next CPU burst.
Then, for 0 ≤ α ≤ 1, define

τ_(n+1) = α·t_n + (1 − α)·τ_n

The value of t_n contains our most recent information; τ_n stores the past history.
The parameter α controls the relative weight of recent and past history in our prediction.
If α = 0, then τ_(n+1) = τ_n, and recent history has no effect (current conditions are assumed to be
transient).
If α = 1, then τ_(n+1) = t_n, and only the most recent CPU burst matters (history is assumed to be old and
irrelevant).
More commonly, α = 1/2, so recent history and past history are equally weighted.
The initial τ_0 can be defined as a constant or as an overall system average.
To understand the behavior of the exponential average, we can expand the formula for τ_(n+1) by
substituting for τ_n, to obtain

τ_(n+1) = α·t_n + (1 − α)·α·t_(n−1) + · · · + (1 − α)^j·α·t_(n−j) + · · · + (1 − α)^(n+1)·τ_0

Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its
predecessor.
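A minimal sketch of this predictor in C (the burst sequence and the initial estimate τ_0 = 10 are illustrative values, not data from these notes):

```c
#include <stdio.h>

/* Exponential average: tau(n+1) = alpha*t(n) + (1 - alpha)*tau(n). */
double predict_next_burst(double alpha, double tau, double measured) {
    return alpha * measured + (1.0 - alpha) * tau;
}

int main(void) {
    double alpha = 0.5;       /* equal weight to recent and past history */
    double tau = 10.0;        /* initial guess tau_0 (system average)    */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};  /* measured CPU bursts  */

    for (int i = 0; i < 7; i++) {
        tau = predict_next_burst(alpha, tau, bursts[i]);
        printf("after burst %2.0f: predicted next burst = %.2f\n",
               bursts[i], tau);
    }
    return 0;
}
```

Note how the prediction lags behind the jump from short bursts (4-6) to long ones (13): each new measurement only shifts the estimate halfway, which is exactly the damping that α controls.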
c) Preemptive SJF – Shortest Remaining Time First:-
Preemptive version of SJF.
If a new process arrives with CPU burst length less than remaining time of current
executing process, preempt the currently executing process and allocate the CPU to the
new process.
Advantages:
Minimizes average waiting times.
Problems:
How to determine length of next CPU burst?
Problem: starvation of jobs with long CPU bursts.
Example:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
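The following sketch simulates SRTF one millisecond at a time for exactly this example; for these values it reports an average waiting time of 6.50 ms (P1 = 9, P2 = 0, P3 = 15, P4 = 2):

```c
#include <stdio.h>

/* Shortest-Remaining-Time-First, simulated one millisecond per step,
 * using the example above: arrivals {0,1,2,3}, bursts {8,4,9,5}.     */
int main(void) {
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {8, 4, 9, 5};
    int n = 4, done = 0, t = 0;
    int remaining[4], finish[4];

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)          /* arrived, unfinished, and   */
            if (arrival[i] <= t && remaining[i] > 0 &&  /* least remaining */
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }     /* CPU idle this tick         */
        remaining[pick]--;                   /* run the pick for one unit  */
        t++;
        if (remaining[pick] == 0) { finish[pick] = t; done++; }
    }

    double total = 0;
    for (int i = 0; i < n; i++) {
        int waiting = finish[i] - arrival[i] - burst[i];
        printf("P%d: waiting time = %d ms\n", i + 1, waiting);
        total += waiting;
    }
    printf("Average waiting time = %.2f ms\n", total / n);   /* 6.50 */
    return 0;
}
```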
Q: What is starvation?
Consider a preemptive priority-scheduling algorithm.
The job that is currently active (running) also has some priority level.
While that job is active, if another job arrives in the ready queue with a priority
higher than the active job's, we must preempt the currently active job and give the
CPU to the higher-priority job for execution.
The situation is dynamic: every time a new job arrives in the queue, its priority is
compared with the priority of the active job.
If newly arriving jobs repeatedly carry higher priorities, a job that is already in the
ready queue with the lowest priority may never be executed: it will starve for
CPU time.
Q.: What is aging?
While a job remains in the ready queue, at regular intervals of time we increment the
priority level of the job.
That means the priority of the job is no longer decided only by the user/administrator;
it is also determined by the duration of time the job has been waiting in the
ready queue.
Because we increment the priority of the job at regular intervals, at some point the
job will reach the maximum priority level.
When it reaches the maximum priority level, it will be executed by the CPU.
This is one way in which starvation can be solved; a minimal sketch follows.
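A minimal sketch of the aging idea in C (the queue contents, number of intervals, and maximum level are illustrative):

```c
#include <stdio.h>

#define MAX_PRIORITY 10

/* Aging: at every timer interval, bump the priority of each job still
 * waiting in the ready queue so that no job starves indefinitely.     */
typedef struct { int pid; int priority; } job_t;

void age_ready_queue(job_t *queue, int n) {
    for (int i = 0; i < n; i++)
        if (queue[i].priority < MAX_PRIORITY)
            queue[i].priority++;   /* one step closer to being scheduled */
}

int main(void) {
    job_t ready[] = {{1, 2}, {2, 7}, {3, 1}};
    for (int interval = 0; interval < 3; interval++)  /* three timer ticks */
        age_ready_queue(ready, 3);
    for (int i = 0; i < 3; i++)
        printf("job %d: priority now %d\n", ready[i].pid, ready[i].priority);
    return 0;
}
```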
Round-Robin Scheduling:-
This scheduling algorithm is designed especially for time-sharing systems. It is similar
to FCFS scheduling, but preemption is added to switch between processes.
A small unit of time, called a time quantum, or time slice, is assigned to each process.
Usually a quantum is 10 to 100 ms.
This scheduler allocates the CPU to each process in the ready queue for a time interval
of up to 1 time quantum in FIFO (circular) fashion.
If the process is still running at the end of the quantum, it will be preempted from the
CPU.
A context switch will be executed, and the process will be put at the tail of the ready
queue.
Then the scheduler will select the next process in the ready queue.
However, if the process has blocked or finished before the quantum has elapsed, the
scheduler will then proceed with the next process in the ready queue.
The ready queue is treated as a circular queue.
Example:-
Process Burst Time
P1 24
P2 3
P3 3
Time quantum = 4 ms
Gantt chart:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Waiting times: P1 = 10 − 4 = 6 ms, P2 = 4 ms, P3 = 7 ms, so AWT = (6 + 4 + 7)/3 ≈ 5.67 ms.
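The sketch below simulates round-robin for this example (quantum = 4 ms, all arrivals at t = 0) and reproduces the waiting times above:

```c
#include <stdio.h>

/* Round-robin for the example above: bursts {24, 3, 3}, quantum 4 ms.
 * Scanning the array in index order matches the FIFO ready queue here
 * only because every process arrives at time 0.                       */
int main(void) {
    int burst[] = {24, 3, 3};
    int n = 3, quantum = 4, t = 0, done = 0;
    int remaining[3], finish[3];

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;          /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;                               /* run one quantum  */
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = t; done++; }
        }
    }

    double total = 0;
    for (int i = 0; i < n; i++) {
        int waiting = finish[i] - burst[i];           /* arrival is 0     */
        printf("P%d: waiting time = %d ms\n", i + 1, waiting);
        total += waiting;
    }
    printf("Average waiting time = %.2f ms\n", total / n);   /* 5.67 */
    return 0;
}
```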
Priority scheduling example:
Process  Arrival time (ms)  Burst time (ms)  Priority
P1       0                  10               2
P2       1                  5                2
P3       3                  6                4
P4       5                  4                0
INDEPENDENT PARALLELISM
With independent parallelism, there is no explicit synchronization among processes.
Each represents a separate, independent application or job.
A typical use of this type of parallelism is in a time-sharing system.
• Each user is performing a particular application, such as word processing or
using a spreadsheet.
• The multiprocessor provides the same service as a multiprogrammed
uniprocessor.
• Because more than one processor is available, average response time to the
users will be less.
MEDIUM-GRAINED PARALLELISM
A single application can be effectively implemented as a collection of threads within a
single process.
In this case, the programmer must explicitly specify the potential parallelism of an
application.
Typically, there needs to be a fairly high degree of coordination and interaction
among the threads of an application, leading to a medium-grain level of synchronization.
Because the various threads of an application interact so frequently, scheduling decisions
concerning one thread may affect the performance of the entire application.
FINE-GRAINED PARALLELISM
Fine-grained parallelism represents a much more complex use of parallelism than is
found in the use of threads.
Although much work has been done on highly parallel applications, this is so far a
specialized and fragmented area, with many different approaches.
All processes go into one global queue and are scheduled to any available processor.
Dynamic Assignment
With dynamic assignment, threads are moved from a queue for one processor to a queue
for another processor;
Linux uses this approach.
Both dynamic and static methods require some way of assigning a process to a processor
Two methods:
• Master/Slave
• Peer
I. Master/Slave Architecture
Key kernel functions of the operating system always run on a particular processor. The
other processors may only execute user programs.
The master is responsible for scheduling jobs.
Once a process is active, if the slave needs service (e.g., an I/O call), it must send a
request to the master and wait for the service to be performed.
THREADS
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register
set, and a stack.
It shares with other threads belonging to the same process its code section, data section, and other
OS resources, such as open files and signals.
A conventional (or heavyweight) process has a single thread of control. If a process has multiple
threads of control (lightweight processes), it can perform more than one task at a time.
A thread plays an important role in IPC.
The difference between a single-threaded process and a multithreaded process is that the latter contains several threads, each with its own registers and stack, all sharing one code and data section.
Most OS kernels are multithreaded, i.e., several threads operate in the kernel, and each thread
performs a specific task, such as managing devices or handling interrupts.
Advantages of multi-threading
Economy: Allocating memory and resources for process creation is costly. Because threads share
the resources of the process to which they belong, it is more economical to create and
context-switch threads.
Default resource sharing: Processes can share resources only by using explicit IPC methods,
whereas threads share the memory and resources of the process to which they belong by default.
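A minimal POSIX threads example in C showing the shared data section in action (compile with -pthread; the counter and loop bound are arbitrary):

```c
#include <pthread.h>
#include <stdio.h>

/* Threads of one process share its data section: both workers below
 * update the same global counter (serialized by a mutex).            */
static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                    /* shared data, guarded access */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* cheap: no new address */
    pthread_create(&t2, NULL, worker, NULL);   /* space is allocated    */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);         /* 200000 */
    return 0;
}
```

Creating a whole process for each worker would require separate address spaces and explicit IPC to share the counter; here the sharing comes for free.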
THREAD SCHEDULING
With threads, the concept of execution is separated from the rest of the definition of a process.
An application can be implemented as a set of threads, which cooperate and execute concurrently
in the same address space.
Threads on a uniprocessor system can be used as a program structuring aid and to overlap I/O with
processing.
However, the full power of threads becomes evident in a multiprocessor system.
Threads can be used to exploit true parallelism in an application.
If the various threads of an application are simultaneously run on separate processors, dramatic
gains in performance are possible.
Disadvantages of load sharing (scheduling all threads from a common global queue):
The central queue occupies a region of memory that must be accessed in a manner that
enforces mutual exclusion.
Thus, it may become a bottleneck if many processors look for work at the same time.
Preempted threads are unlikely to resume execution on the same processor.
o If each processor is equipped with a local cache, caching becomes less
efficient.
If all threads are treated as a common pool of threads, it is unlikely that all of the threads
of a program will gain access to processors at the same time.
If a high degree of coordination is required between the threads of a program, the process
switches involved may seriously compromise performance.
2. Gang Scheduling
A set of related threads is scheduled to run on a set of processors at the same time, on a
one-to-one basis.
If closely related processes execute in parallel, synchronization blocking may be reduced,
less process switching may be necessary, and performance will increase.
Scheduling overhead may be reduced because a single decision affects a number of
processors and processes at one time.
The use of gang scheduling creates a requirement for processor allocation.
Consider an example in which there are two applications, one with four threads and one
with one thread.
Using uniform time allocation wastes 37.5% of the processing resource, because when
the single-thread application runs, three processors are left idle.
If there are several one-thread applications, these could all be fit together to increase
processor utilization.
If that option is not available, an alternative to uniform scheduling is scheduling that is
weighted by the number of threads.
Thus, the four-thread application could be given four-fifths of the time and the one-
thread application given only one-fifth of the time, reducing the processor waste to 15%.
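The percentages follow from simple arithmetic, assuming four processors as in the example: whenever the single-thread application is scheduled, 3 of the 4 processors sit idle, so the wasted fraction is its time share multiplied by 3/4:

waste(uniform time split)  = (1/2) × (3/4) = 37.5 %
waste(weighted by threads) = (1/5) × (3/4) = 15 %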
3. Dedicated Processor Assignment
Each program is allocated a number of processors equal to the number of threads in the
program, for the duration of the program execution.
When the program terminates, the processors return to the general pool for possible
allocation to another program.
This approach would appear to be extremely wasteful of processor time.
If a thread of an application is blocked waiting for I/O or for synchronization with
another thread, then that thread's processor remains idle: there is no multiprogramming of
processors. But:
o In a highly parallel system, with tens or hundreds of processors, each of which
represents a small fraction of the cost of the system, processor utilization is no
longer so important as a metric for effectiveness or performance.
o The total avoidance of process switching during the lifetime of a program should
result in a substantial speedup of that program.
4. Dynamic Scheduling
The number of threads in a process can be altered during the course of execution
allowing the operating system to adjust the load to improve utilization.
In this both operating systems and application are involved in making scheduling
decisions.
The operating system is responsible for partitioning the processors among the jobs.
Each job uses the processors allotted to it to execute some subset of its runnable tasks,
by mapping these tasks to threads.
The decision about which subset to run, as well as which thread to suspend when a
process is preempted, is made by the individual application.
In this approach, the scheduling responsibility of the operating system is primarily limited
to processor allocation.
Q. Real-Time Scheduling:-
o Real-time computing is divided into two types.
o Hard real-time systems are required to complete a critical task within a guaranteed
amount of time
o Generally, a process is submitted along with a statement of the amount of time in which
it needs to complete or perform I/O.
o The scheduler then either admits the process, guaranteeing that the process will
complete on time, or rejects the request as impossible.
o This is known as resource reservation.
o Soft real-time computing is less restrictive.
o It requires that critical processes receive priority over less fortunate ones.
o Implementing soft real-time functionality requires careful design of the scheduler and
related aspects of the operating system.
o First, the system must have priority scheduling, and real-time processes must have the
highest priority.
o The priority of real-time processes must not degrade over time, even though the priority
of non-real-time processes may.
o Second, the dispatch latency must be small. The smaller the latency, the faster a real-
time process can start executing once it is runnable.
o If a high-priority process needs a resource (for example, kernel data) currently held by a lower-priority process, the high-priority process would be waiting for a lower-priority one to finish.
o This situation is known as priority inversion. In fact, a chain of processes could all be
accessing resources that the high-priority process needs.
o This problem can be solved via the priority-inheritance protocol, in which all these
processes (the ones accessing resources that the high-priority process needs) inherit the
high priority until they are done with the resource in question.
o When they are finished, their priority reverts to its original value.
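POSIX exposes the priority-inheritance protocol for mutexes; the sketch below merely configures a lock to use it (support depends on the OS, so the call is checked; the surrounding threads are left out):

```c
#include <pthread.h>
#include <stdio.h>

/* POSIX priority inheritance: a low-priority thread holding the lock
 * temporarily inherits the priority of any high-priority thread that
 * blocks on it, bounding priority inversion.                          */
int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutex_t lock;

    pthread_mutexattr_init(&attr);
    if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
        printf("priority inheritance not supported here\n");
        return 1;
    }
    pthread_mutex_init(&lock, &attr);   /* mutex now uses the protocol */

    /* ... real-time threads sharing 'lock' would run here ... */

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}
```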