Process Scheduling


Process Scheduling

Prepared By
Dr. Rekha K. S
Associate Professor
Dept. of CS&E, NIE
Process Scheduling
• In a single-processor system, only one process can run
at a time; any others must wait until the CPU is free and
can be rescheduled.
• The objective of multiprogramming is to have some
process running at all times, to maximize CPU utilization.
• CPU scheduling is the basis of multiprogrammed operating
systems.
• By switching the CPU among processes, the operating
system can make the computer more productive.
Schedulers

• Schedulers are software that select an available process to be
assigned to the CPU.
• A long-term scheduler, or job scheduler – selects jobs from the job pool
(on secondary memory, i.e. disk) and loads them into memory.
• If more processes are submitted than can be executed immediately,
the excess processes are kept in secondary memory. The long-term
scheduler runs infrequently and can take time to select the next process.
• The short-term scheduler, or CPU scheduler – selects a process from memory
(the ready queue) and assigns the CPU to it. It must select a new process for
the CPU frequently.

• The medium-term scheduler - swaps processes out of memory and later
reintroduces them into memory so that their execution can continue
(a minimal sketch of these queues follows).
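This is a minimal, assumed Python model of the two queues involved: a job pool on disk served by the long-term scheduler and a ready queue in memory served by the short-term scheduler. The names job_pool, ready_queue, MEMORY_SLOTS, and the sample processes are illustrative, not from the slides.

from collections import deque

# Job pool on disk: the long-term scheduler picks from here.
# Ready queue in memory: the short-term scheduler picks from here.
job_pool = deque(["P1", "P2", "P3", "P4"])
ready_queue = deque()

MEMORY_SLOTS = 2  # assumed degree of multiprogramming

def long_term_schedule():
    """Load jobs from the job pool into memory (the ready queue)."""
    while job_pool and len(ready_queue) < MEMORY_SLOTS:
        ready_queue.append(job_pool.popleft())

def short_term_schedule():
    """Dispatch the next process from the ready queue to the CPU."""
    return ready_queue.popleft() if ready_queue else None

long_term_schedule()
print(list(ready_queue))      # ['P1', 'P2'] are loaded into memory
print(short_term_schedule())  # P1 is given the CPU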
Scheduling Criteria
• CPU utilization: The CPU utilization can range from 0 to 100 percent. In a real
system, it should range from 40 percent (for a lightly loaded system) to 90
percent (for a heavily used system).
• Throughput: One measure of work is the number of processes that are
completed per time unit, called throughput.

• Turnaround time: the interval from the time of submission of a process to the
time of its completion. Turnaround time is the sum of the periods spent waiting
to get into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.

TAT = Waiting Time + Burst Time

TAT = Completion Time - Arrival Time
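A quick worked check of the two formulas, with assumed values (not from the slides): a process arrives at t = 0 ms, waits 4 ms in total, then runs a 3 ms CPU burst.

# Assumed values for illustration only.
arrival_time, waiting_time, burst_time = 0, 4, 3

completion_time = arrival_time + waiting_time + burst_time   # 7 ms

tat_from_completion = completion_time - arrival_time  # TAT = Completion - Arrival = 7
tat_from_wait_burst = waiting_time + burst_time       # TAT = Waiting + Burst = 7
assert tat_from_completion == tat_from_wait_burst
print(tat_from_completion)  # 7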
Scheduling Criteria
• Burst time: the time required by a process for its execution on the CPU.
Burst time counts only the CPU time of a process and does not include I/O
time. It is also known as the execution time or running time of the process;
when its final burst finishes, the process makes the transition from the
running state to the completed (terminated) state.
• Waiting time: the CPU-scheduling algorithm does not affect the amount of time
during which a process executes or does I/O; it affects only the amount of time
that a process spends waiting in the ready queue. Waiting time is the sum of the
periods spent waiting in the ready queue.

WT = TAT - BT

• Response time: another measure is the time from the submission of a request
until the first response is produced. This measure, called response time, is the
time it takes to start responding, not the time it takes to output the response.

• With a non-preemptive algorithm (and a single CPU burst per process),
response time and waiting time are equal.
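A small numeric sketch of these two criteria, again with assumed values: a single process with a 24 ms burst that completes at t = 30 ms after arriving at t = 0.

# Assumed values for illustration only.
arrival_time, burst_time, completion_time = 0, 24, 30

tat = completion_time - arrival_time   # TAT = 30 ms
waiting_time = tat - burst_time        # WT = TAT - BT = 6 ms

# With a non-preemptive algorithm and no I/O, the process starts running
# exactly once, so its response time equals its waiting time.
response_time = waiting_time
print(tat, waiting_time, response_time)  # 30 6 6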
What is Preemptive Scheduling?

• Preemptive scheduling is used when a process switches
from the running state to the ready state or from the
waiting state to the ready state.
• The resources (mainly CPU cycles) are allocated to the
process for a limited amount of time and then taken
away, and the process is again placed back in the ready
queue if that process still has CPU burst time
remaining.
What is Non-Preemptive
Scheduling?
• Non-preemptive Scheduling is used when a process
terminates, or a process switches from running to the
waiting state.
• In this scheduling, once the resources (CPU cycles) are
allocated to a process, the process holds the CPU till it
gets terminated or reaches a waiting state.
• Non-preemptive scheduling does not interrupt a process
running on the CPU in the middle of its execution.
• Instead, it waits until the process completes its CPU burst
and then allocates the CPU to another process (the sketch
below contrasts the two behaviors).
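The sketch below contrasts the two policies under stated assumptions: one CPU, a single ready queue, and, for the preemptive case, a 4 ms time slice. The time-sliced behavior shown is one common preemptive policy (round robin); it is used here only to illustrate that the CPU is taken away and the unfinished process is placed back in the ready queue. Process names and burst lengths are illustrative.

from collections import deque

def non_preemptive(bursts):
    """Each process keeps the CPU until its whole burst is done."""
    return [(name, burst) for name, burst in bursts]

def preemptive(bursts, quantum=4):
    """The CPU is taken away after `quantum` ms; a process with burst
    time remaining goes back to the tail of the ready queue."""
    queue, order = deque(bursts), []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        order.append((name, run))
        if remaining - run > 0:
            queue.append((name, remaining - run))
    return order

jobs = [("P1", 10), ("P2", 3)]
print(non_preemptive(jobs))  # [('P1', 10), ('P2', 3)]
print(preemptive(jobs))      # [('P1', 4), ('P2', 3), ('P1', 4), ('P1', 2)]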
First-Come, First-Served Scheduling
• The algorithm is non-preemptive.

• The process that requests the CPU first is allocated the CPU
first.
• The implementation of the FCFS policy is easily managed
with a FIFO queue.
• When a process enters the ready queue, its PCB(Process
Control Block) is linked onto the tail of the queue.
• When the CPU is free, it is allocated to the process at the
head of the queue.
• The running process is then removed from the queue. The
code for FCFS scheduling is simple to write and understand.
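A minimal sketch of that FIFO ready queue, assuming PCBs are represented by simple placeholder strings:

from collections import deque

ready_queue = deque()             # FIFO queue of PCBs

def enqueue(pcb):
    ready_queue.append(pcb)       # new process: its PCB is linked onto the tail

def dispatch():
    return ready_queue.popleft()  # CPU free: allocate it to the process at the head

enqueue("PCB(P1)"); enqueue("PCB(P2)"); enqueue("PCB(P3)")
print(dispatch())  # PCB(P1): first come, first served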
First-Come, First-Served Scheduling
Process    Burst Time (ms)
P1             24
P2              3
P3              3
• If the processes arrive in the order P1, P2, P3 and are served
in FCFS order, the resulting schedule (Gantt chart) is: P1 from
0 to 24, P2 from 24 to 27, and P3 from 27 to 30.
• Gantt Chart: is a bar chart that illustrates a particular
schedule, including the start and finish times of each of
the participating processes
• For this order the waiting times are 0 ms for P1, 24 ms for P2, and
27 ms for P3, so the average waiting time is (0 + 24 + 27)/3 = 17
milliseconds.
• If the processes arrive in the order P2, P3, P1 instead, the waiting
times are 6 ms, 0 ms, and 3 ms, and the average waiting time is
(6 + 0 + 3)/3 = 3 milliseconds.
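A short sketch that reproduces these averages (plain FCFS arithmetic; the function name is only illustrative):

def fcfs_waiting_times(bursts):
    """Waiting time of each process = sum of the bursts that ran before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

for order, bursts in [("P1,P2,P3", [24, 3, 3]), ("P2,P3,P1", [3, 3, 24])]:
    waits = fcfs_waiting_times(bursts)
    print(order, waits, "average =", sum(waits) / len(waits))
# P1,P2,P3 [0, 24, 27] average = 17.0
# P2,P3,P1 [0, 3, 6] average = 3.0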
Convoy Effect
• The convoy effect occurs when all the other processes wait for one
big process to get off the CPU.
• This effect results in lower CPU and device utilization
than might be possible if the shorter processes were
allowed to go first.
Shortest-Job-First Scheduling

• This algorithm associates with each process the length
of the process's next CPU burst.
• When the CPU is available, it is assigned to the process
that has the smallest next CPU burst.
• If the next CPU bursts of two processes are the same,
FCFS scheduling is used to break the tie (see the
selection sketch below).
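A minimal selection sketch, assuming the length of each process's next CPU burst is already known (in a real system it must be predicted). Process names and burst lengths are illustrative.

# Ready processes as (name, next CPU burst in ms, arrival order).
ready = [("P1", 6, 0), ("P2", 8, 1), ("P3", 7, 2), ("P4", 3, 3)]

def sjf_pick(processes):
    """Pick the process with the smallest next CPU burst;
    ties are broken FCFS, i.e. by arrival order."""
    return min(processes, key=lambda p: (p[1], p[2]))

order = []
while ready:
    chosen = sjf_pick(ready)
    order.append(chosen[0])
    ready.remove(chosen)

print(order)  # ['P4', 'P1', 'P3', 'P2']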