Process Management
Process Concept
A process is a program in execution: it carries out
the actions specified in that program, and it is the
basic unit of execution in which a program runs.
The OS creates, schedules, and terminates the
processes that use the CPU. A process created by
another process is called a child process.
Process operations are controlled with the help of
the PCB (Process Control Block). You can think of
it as the brain of the process: it holds all the
crucial information about the process, such as its
process ID, priority, state, CPU registers, etc.
In this Operating system tutorial, you will learn:
• What is a Process?
• What is Process Management?
• Process Architecture
• Process Control Block (PCB)
• Process States
What is Process Management?
Process management involves tasks such as the creation,
scheduling, and termination of processes, and deadlock
handling. A process is a program under execution, and
managing processes is an important part of modern
operating systems. The OS must allocate resources that
enable processes to share and exchange information,
protect the resources of each process from other
processes, and allow synchronization among processes.
It is the job of the OS to manage all the running
processes in the system, which it does through tasks
such as process scheduling and resource allocation.
Process Control Blocks
PCB stands for Process Control Block. It is a data
structure maintained by the operating system for
every process, identified by an integer process ID
(PID). It stores all the information required to
keep track of a running process.
The PCB also stores the contents of the processor
registers. These are saved when the process leaves
the running state and restored when it returns to
it. The OS updates the PCB as soon as the process
makes a state transition.
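As a rough sketch (not any particular kernel's layout; the field
and method names here are made up for illustration), a PCB can be
modeled as a record holding the PID, state, priority, and saved
register contents:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                        # integer process ID
    state: str = "new"              # "new", "ready", "running", "waiting", "terminated"
    priority: int = 0
    registers: dict = field(default_factory=dict)  # saved CPU register contents

    def save_context(self, cpu_registers: dict) -> None:
        # Called when the process leaves the running state.
        self.registers = dict(cpu_registers)

    def restore_context(self) -> dict:
        # Called when the process is dispatched back onto the CPU.
        return dict(self.registers)

pcb = PCB(pid=42, priority=5)
pcb.save_context({"pc": 0x1000, "sp": 0x7FFF})
```

A real PCB also holds scheduling, memory-management, accounting,
and I/O-status information; this sketch only shows the fields the
text mentions.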
Process States
A process passes through several states during its
lifetime: it is created (new), waits its turn in the
ready queue (ready), executes on the CPU (running),
may block for an event such as I/O (waiting), and
finally finishes (terminated).
Process Scheduling
Process scheduling is an OS activity that manages
processes as they move among states such as ready,
waiting, and running.
Process scheduling allows the OS to allocate an
interval of CPU execution time to each process.
Another important reason for using a process
scheduling system is that it keeps the CPU busy at
all times, which helps keep program response times
low.
In this process scheduling tutorial, you will learn:
• What is Process Scheduling?
• Process Scheduling Queues
• Two State Process Model
• Scheduling Objectives
• Types of Process Schedulers
• Long Term Scheduler
• Medium Term Scheduler
• Short Term Scheduler
• Difference between Schedulers
• What is Context switch?
Process Scheduling Queues
Process scheduling queues maintain a distinct queue
for each process state; the PCBs of all processes in
the same state are placed on the same queue.
Therefore, whenever the state of a process changes,
its PCB is unlinked from its current queue and
linked onto the queue for its new state.
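One minimal way to picture this (a sketch, not how a real kernel
stores its queues) is a dictionary of queues keyed by state name;
a state change unlinks the PCB from one queue and appends it to
another:

```python
from collections import deque

# One queue per process state; entries stand in for PCBs (just PIDs here).
queues = {"ready": deque(), "waiting": deque(), "running": deque()}

def change_state(pid: int, old_state: str, new_state: str) -> None:
    # Unlink the PCB from the queue for its current state...
    queues[old_state].remove(pid)
    # ...and link it onto the queue for its new state.
    queues[new_state].append(pid)

queues["ready"].extend([1, 2, 3])
change_state(2, "ready", "waiting")   # process 2 blocks, e.g. on I/O
```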
CPU Scheduler
• Whenever the CPU becomes idle, it is the job of the CPU
Scheduler ( a.k.a. the short-term scheduler ) to select
another process from the ready queue to run next.
• The storage structure for the ready queue and the algorithm
used to select the next process are not necessarily a FIFO
queue. There are several alternatives to choose from, as
well as numerous adjustable parameters for each
algorithm, which is the basic subject of this entire chapter.
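To illustrate that the ready queue need not be a FIFO queue, here
is a toy short-term scheduler with two interchangeable selection
policies (the function names and the lowest-number-wins priority
convention are assumptions for this sketch):

```python
def pick_fifo(ready: list) -> dict:
    # FIFO policy: the process that has waited longest runs next.
    return ready.pop(0)

def pick_priority(ready: list) -> dict:
    # Priority policy: lowest priority number runs next
    # (a common, but not universal, convention).
    best = min(ready, key=lambda p: p["priority"])
    ready.remove(best)
    return best

ready = [{"pid": 1, "priority": 3}, {"pid": 2, "priority": 1}]
```

With the same ready queue, `pick_fifo` would dispatch PID 1 first,
while `pick_priority` would dispatch PID 2 first; the algorithms in
the rest of the chapter differ in exactly this selection step.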
Preemptive Scheduling
• CPU scheduling decisions take place under one of four
conditions:
1. When a process switches from the running state to the
waiting state, such as for an I/O request or invocation
of the wait( ) system call.
2. When a process switches from the running state to the
ready state, for example in response to an interrupt.
3. When a process switches from the waiting state to the
ready state, say at completion of I/O or a return from
wait( ).
4. When a process terminates.
• For conditions 1 and 4 there is no choice - A new process must
be selected.
• For conditions 2 and 3 there is a choice - To either continue
running the current process, or select a different one.
• If scheduling takes place only under conditions 1 and 4, the
system is said to be non-preemptive, or cooperative. Under
these conditions, once a process starts running it keeps
running, until it either voluntarily blocks or until it
finishes. Otherwise the system is said to be preemptive.
• Windows used non-preemptive scheduling up to Windows 3.x,
and started using preemptive scheduling with Windows 95.
Mac OS used non-preemptive scheduling prior to OS X, and
preemptive scheduling since then. Note that preemptive
scheduling is only possible on hardware that supports a
timer interrupt.
• Note that pre-emptive scheduling can cause problems when
two processes share data, because one process may get
interrupted in the middle of updating shared data
structures.
• Preemption can also be a problem if the kernel is busy
with a system call ( e.g. updating critical kernel
data structures ) when the preemption occurs. Most
modern UNIXes deal with this problem by making the
process wait until the system call has either completed or
blocked before allowing the preemption. Unfortunately, this
solution is problematic for real-time systems, as real-time
response can no longer be guaranteed.
• Some critical sections of code protect themselves from
concurrency problems by disabling interrupts before
entering the critical section and re-enabling interrupts on
exiting the section. Needless to say, this should only be
done in rare situations, and only on very short pieces of
code that will finish quickly, ( usually just a few machine
instructions. )
Dispatcher
The dispatcher is the module that gives control of the
CPU to the process selected by the scheduler. This
function involves:
Switching context.
Switching to user mode.
Jumping to the proper location in the newly loaded
program.
The dispatcher needs to be as fast as possible, as it is run
on every context switch. The time consumed by the
dispatcher is known as dispatch latency.
Scheduling Criteria
• There are several different criteria to consider when trying to
select the "best" scheduling algorithm for a particular
situation and environment, including:
CPU utilization - Ideally the CPU would be busy 100%
of the time, so as to waste 0 CPU cycles. On a real
system CPU usage should range from 40% ( lightly
loaded ) to 90% ( heavily loaded. )
Throughput - Number of processes completed per unit
time. May range from 10 / second to 1 / hour
depending on the specific processes.
Turnaround time - Time required for a particular
process to complete, from submission time to
completion. ( Wall clock time. )
Waiting time - How much time processes spend in the
ready queue waiting their turn to get on the CPU.
( Load average - The average number of processes
sitting in the ready queue waiting their turn to
get into the CPU. Reported in 1-minute, 5-
minute, and 15-minute averages by "uptime"
and "who". )
Response time - The time taken in an interactive
program from the issuance of a command to the
start of a response to that command.
• In general one wants to optimize the average value of a
criterion ( maximize CPU utilization and throughput, and
minimize all the others. ) However, sometimes one wants to
do something different, such as minimize the maximum
response time.
• Sometimes it is more desirable to minimize the variance of
a criterion than its actual value, i.e. users are more
accepting of a consistent, predictable system than an
inconsistent one, even if it is a little bit slower.
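For a single CPU running one schedule non-preemptively, waiting
time and turnaround time can be computed directly from the burst
order. The helper below is a simplification that assumes all
processes arrive at time 0 and ignores I/O:

```python
def schedule_metrics(bursts: list) -> tuple:
    """Given CPU burst lengths in run order (all arriving at t = 0),
    return (average waiting time, average turnaround time)."""
    waits, turnarounds, clock = [], [], 0
    for burst in bursts:
        waits.append(clock)         # time spent in the ready queue
        clock += burst
        turnarounds.append(clock)   # submission (t = 0) to completion
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

avg_wait, avg_turnaround = schedule_metrics([24, 3, 3])
# avg_wait = (0 + 24 + 27) / 3 = 17.0
# avg_turnaround = (24 + 27 + 30) / 3 = 27.0
```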
Scheduling Algorithms
The following subsections will explain several common
scheduling strategies, looking at only a single CPU burst each
for a small number of processes. Obviously real systems have to
deal with a lot more simultaneous processes executing their
CPU-I/O burst cycles.
First-Come First-Serve Scheduling, FCFS
• FCFS is very simple - Just a FIFO queue, like customers
waiting in line at the bank or the post office or at a copying
machine.
• Unfortunately, however, FCFS can yield some very long
average wait times, particularly if the first process to get
there takes a long time. For example, consider the
following three processes:
Process Burst Time
P1 24
P2 3
P3 3
• If process P1 runs first ( order P1, P2, P3 ), the
average waiting time for the three processes is ( 0 + 24 +
27 ) / 3 = 17.0 ms.
• If the shorter processes run first ( order P2, P3, P1 ),
the same three processes have an average wait time of
( 0 + 3 + 6 ) / 3 = 3.0 ms. The total run time for the
three bursts is the same, but in the second case two of
the three finish much quicker, and the other process is
only delayed by a short amount.
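The two averages above can be checked with a few lines (a sketch
assuming all three processes arrive at time 0, in queue order):

```python
def fcfs_avg_wait(bursts: list) -> float:
    # Under FCFS, each process waits for the sum of all earlier bursts.
    total_wait, clock = 0, 0
    for burst in bursts:
        total_wait += clock
        clock += burst
    return total_wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # order P1, P2, P3 → 17.0
print(fcfs_avg_wait([3, 3, 24]))  # order P2, P3, P1 → 3.0
```

This sensitivity to arrival order is known as the convoy effect:
one long process in front makes every later process wait.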
References:
• Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne,
"Operating System Concepts", Eighth Edition, Chapter 5