Operating System (Day - 1)
Operating System
• Operating System is a program that:
– manages the computer hardware
– provides a basis for application programs, and
– acts as an intermediary between the computer user
and the computer hardware.
• Two Basic Functions of Operating System are:
– Resource Management
– Virtual Machine
Operating System
[Figure: process layout in memory — Heap, Data, Text (program)]
Threads…
• A thread has its own unique:
– Registers, Program Counter
– Stack, Stack Pointer
• A thread does not have:
– Address Space, Program Code
– Global Variables, Heap
– OS Resources (files, I/O devices)
Threads…
Why Threads? (Benefits)
• A process with multiple threads makes a great server (e.g. a print
  server)
• Increased responsiveness: with multiple threads in a process, if one
  thread blocks, the others can still continue executing
• Sharing of common data reduces the need for inter-process
  communication
• Better utilization of multiprocessors through increased concurrency
• Threads are cheap to create and use very few resources
• Context switching is fast (only the PC, stack, SP, and registers have
  to be saved/reloaded)
Process Vs Thread

PROCESS                                    THREAD
Doesn't share memory (loosely coupled)     Shares memory and files (tightly coupled)
Creation is time consuming                 Creation is fast
Execution is slow                          Execution is fast
More time to terminate                     Less time to terminate
More time to switch between processes      Less time to switch
System calls required for communication    System calls not required
More resources required                    Fewer resources required
Not suitable for parallelism               Suitable for parallelism
Implementation of Threads
• User Level Threads
  – Threads supported at the user level
  – Reside above the kernel and are managed without kernel support
  – A thread library provides functions that allow user processes to
    create and manage their own threads
  – No knowledge required at the Operating System level
• Kernel Level Threads
  – Supported directly by the Operating System
  – The kernel provides system calls for creating and managing threads
• Combined or Hybrid Threads
  – Combine both user level and kernel level threads
  – Thread creation is done in user space
Implementation of Threads…
[Figure: a. user-level threads, b. kernel-level threads, c. hybrid threads]
Multithreading
Multithreading Models
• The way of establishing the relationship
between user level thread and kernel level
thread
• Three basic models:
– Many-to-One model
– One-to-One model
– Many-to-many model
Many-to-One Model
[Figure: many user threads mapped to a single kernel thread]
One-to-One Model
[Figure: each user thread mapped to its own kernel thread]
Many-to-Many Model
[Figure: many user threads multiplexed onto a smaller or equal number of kernel threads]
Scheduling
• In a time-sharing system, the CPU switches between processes
• Switching between processes maximizes CPU utilization
• One process can get CPU time while another is waiting for I/O
  operations to complete
• This increases throughput
• The process scheduler selects an available process for execution on
  the CPU
Scheduling…(CPU switching between processes)
Scheduling Queues
• Job queue
  – set of all processes in the system
• Ready queue
  – set of all processes residing in main memory, ready and waiting to
    execute
  – the queue header points to the first and last processes in the
    ready queue
• Device queues
  – sets of processes waiting for an I/O device
• Processes migrate between the various queues (sometimes multiple
  times) until termination
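The queue migration described above can be sketched with Python's collections.deque standing in for the ready and device queues (process names are illustrative):

```python
from collections import deque

# Processes (here just names) migrate between the ready queue
# and a device queue until termination.
ready_queue = deque(["P1", "P2", "P3"])
device_queue = deque()

# Dispatch the process at the head of the ready queue.
running = ready_queue.popleft()             # P1 gets the CPU

# Suppose it issues an I/O request: it moves to the device queue.
device_queue.append(running)

# The next process is dispatched; when the I/O completes, the
# waiting process returns to the tail of the ready queue.
running = ready_queue.popleft()             # P2 gets the CPU
ready_queue.append(device_queue.popleft())  # P1 back to the ready queue

print(running, list(ready_queue))
```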
Scheduling…
• Initially, a process is placed in the ready queue
• From the queue, one of the processes is selected for execution
  (i.e. dispatched)
• During execution, one of three cases may arise:
  – the process issues an I/O request and is placed in an I/O queue
  – it creates a child process and waits for its termination
  – it is forcibly removed from the CPU by an interrupt and placed
    back in the ready queue
Scheduling…
• Response Time:
  – interval between the submission of a request and the start of the
    system's first response
• All these times should be as small as possible
• Other criteria: fairness, balance, etc.
Algorithms Used In Different Systems
– First-Come-First-Serve
– Shortest Job First
– Shortest Remaining Time Next
– Round Robin
– Priority
– Multilevel Queue
Scheduling Algorithms (Types)
• Preemptive Algorithm
  – the CPU can be taken away from a process before completion
• Non-Preemptive Algorithm
  – a process keeps the CPU until it terminates or blocks

Gantt chart:  | P2 | P3 | P1 |
              0    3    6    30
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case.
In example 1, short processes arriving after a long process had to wait
longer for the CPU. This effect is called the convoy effect.
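The waiting times above can be reproduced with a small FCFS sketch; the bursts 3, 3, 24 are read off the Gantt-chart boundaries (0, 3, 6, 30):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, in arrival order
    (all processes assumed to arrive at time 0)."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # a process waits for all bursts before it
        elapsed += burst
    return waits

# Order P2, P3, P1 from the slide: bursts 3, 3, 24.
waits = fcfs_waiting_times([3, 3, 24])
print(waits, sum(waits) / len(waits))  # [0, 3, 6] 3.0
```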
FCFS…
[Worked example shown as a figure: Gantt chart and calculation]
Shortest Job First (SJF)
• Allocate the CPU to the process with the least CPU burst time
• If two processes have the same CPU burst, FCFS is used to break the
  tie
• Two schemes:
  – Non-Preemptive:
    • once the CPU is given to a process, it cannot be preempted until
      it completes its CPU burst
  – Preemptive (also called Shortest Remaining Time First, SRTF):
    • if a new process arrives with a CPU burst length less than the
      remaining time of the currently executing process, preempt
SJF (non-preemptive)…
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive)

Gantt chart:  | P1 | P3 | P2 | P4 |
              0    7    8    12   16
Calculation:
  Waiting time: P1 = 0; P3 = 7 − 4 = 3; P2 = 8 − 2 = 6; P4 = 12 − 5 = 7
  Average waiting time = (0 + 3 + 6 + 7)/4 = 4
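A sketch of non-preemptive SJF applied to the table above; it reproduces the schedule order P1, P3, P2, P4 and an average waiting time of 4:

```python
def sjf_nonpreemptive(procs):
    # procs: list of (name, arrival_time, burst_time).
    # Repeatedly pick the arrived process with the least burst.
    remaining, time, total_wait, order = list(procs), 0, 0.0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)  # CPU idles until an arrival
            continue
        p = min(ready, key=lambda x: (x[2], x[1]))  # least burst, FCFS tie-break
        total_wait += time - p[1]                   # waited since arrival
        time += p[2]                                # runs to completion
        order.append(p[0])
        remaining.remove(p)
    return order, total_wait / len(procs)

order, awt = sjf_nonpreemptive(
    [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(order, awt)  # ['P1', 'P3', 'P2', 'P4'] 4.0
```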
Shortest Remaining Time First (SRTF) –
SJF Preemptive
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)

Gantt chart:  | P1 | P2 | P3 | P2 | P4 | P1 |
              0    2    4    5    7    11   16
Calculation:
  Waiting time: P1 = 11 − 2 = 9; P2 = 5 − 4 = 1; P3 = 0; P4 = 7 − 5 = 2
  Average waiting time = (9 + 1 + 0 + 2)/4 = 3
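A tick-by-tick SRTF sketch for the same table; each time unit, the ready process with the least remaining burst runs, preempting the current one if necessary:

```python
def srtf(procs):
    # procs: list of (name, arrival_time, burst_time).
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: at for name, at, _ in procs}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1  # CPU idle, no process has arrived yet
            continue
        current = min(ready, key=lambda n: remaining[n])  # least remaining burst
        remaining[current] -= 1  # run it for one time unit
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    # waiting time = finish − arrival − burst
    waits = {n: finish[n] - at - burst for n, at, burst in procs}
    return waits, sum(waits.values()) / len(waits)

waits, awt = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(waits, awt)  # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} 3.0
```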
Question:
• Make Gantt Chart and Find:
– Average Turnaround Time (ATAT)
– Average Waiting Time (AWT)
• Case1: FCFS
• Case2: SJF (preemptive)
• Case3: SJF (non preemptive)
Gantt chart:  | P1 | P2 | P3 | P2 | P1 |
              0    1    2    4    8    17
AWT = {0 + (8 − 1) + 0 + (4 − 2) + 0}/3
    = 9/3 = 3 ms
Preemptive Priority (Problems)
• Indefinite blocking of low-priority processes by high-priority
  processes, called starvation
• Completion of a process within finite time is not guaranteed
• Solution:
  – Aging: gradually increase the priority of a process that has
    waited a long time. An old process eventually gets a high priority
    and can therefore complete in finite time.
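Aging can be sketched as an effective priority that improves with waiting time (lower number = higher priority; the rate parameter is illustrative):

```python
def effective_priority(base, waited, rate=1):
    # Aging: the effective priority improves (the number decreases)
    # the longer a process has waited; lower number = higher priority.
    return base - rate * waited

# A low-priority process that has waited 10 ticks now outranks a
# freshly arrived higher-priority process, so it cannot starve forever.
old = effective_priority(10, waited=10)  # base 10, long wait
new = effective_priority(2, waited=0)    # base 2, just arrived
print(old < new)  # the long waiter is scheduled first
```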
Non-Preemptive Priority
• Example:
Process   Arrival Time (AT)   Burst Time   Priority
P0        0                   10           5
P1        1                   6            4
P2        3                   2            2
P3        5                   4            0

Gantt chart:  | P0 | P3 | P2 | P1 |
              0    10   14   16   22
AWT = {0 + (10 − 5) + (14 − 3) + (16 − 1)}/4
    = 31/4 = 7.75 ms
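The schedule and the 7.75 ms average can be checked with a short non-preemptive priority sketch (lower number = higher priority, as in the table):

```python
def priority_nonpreemptive(procs):
    # procs: list of (name, arrival_time, burst_time, priority).
    remaining, time, total_wait, order = list(procs), 0, 0.0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = min(p[1] for p in remaining)  # CPU idles until an arrival
            continue
        p = min(ready, key=lambda x: x[3])  # highest priority = lowest number
        total_wait += time - p[1]           # waited since arrival
        time += p[2]                        # non-preemptive: runs to completion
        order.append(p[0])
        remaining.remove(p)
    return order, total_wait / len(procs)

order, awt = priority_nonpreemptive(
    [("P0", 0, 10, 5), ("P1", 1, 6, 4), ("P2", 3, 2, 2), ("P3", 5, 4, 0)])
print(order, awt)  # ['P0', 'P3', 'P2', 'P1'] 7.75
```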
Priority Scheduling
[Worked example shown as figures: non-preemptive and preemptive solutions]
Round Robin (RR) Scheduling
• Preemptive FCFS algorithm
• Each process gets a small unit of CPU time called the time quantum
  (typically 10–100 milliseconds)
• After this time has elapsed, the process is preempted and added to
  the end of the ready queue
• Let
  – n = number of processes,
  – q = time quantum, then
  • no process runs for more than one time slice (i.e. q) at a stretch
  • no process waits more than (n − 1) × q time units
• If a running process releases control due to I/O, another process is
  scheduled to run
• Note:
  – q must be large relative to the context-switch time, otherwise the
    overhead is too high
RR Algorithm…
Example: Time Quantum = 20 ms
Process Burst Time
P1 53
P2 17
P3 68
P4 24
• The Gantt chart is:

  | P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
  0    20   37   57   77   97   117  121  134  154  162
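The execution order can be verified with a small RR sketch (all processes assumed to arrive at time 0, as on the slide):

```python
from collections import deque

def round_robin(procs, quantum):
    # procs: list of (name, burst_time), all arriving at time 0.
    queue = deque(procs)
    gantt, time = [], 0
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)          # run for one quantum or until done
        gantt.append((name, time, time + run))
        time += run
        if left > run:
            queue.append((name, left - run))  # preempted: back to the tail
    return gantt

gantt = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
print([g[0] for g in gantt])  # matches the slide's order
```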
Multilevel Queue Scheduling
• Ready queue is partitioned into separate queues:
– foreground (interactive)
– background (batch)
• Each queue has its own scheduling algorithm,
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues.
– Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which
it can schedule amongst its processes; i.e., 80% to foreground in
RR; 20% to background in FCFS
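A sketch of the fixed-priority variant described above: the background (FCFS) queue runs only when the foreground (RR) queue is empty. Job names, bursts, and the quantum are illustrative.

```python
from collections import deque

foreground = deque([("F1", 3), ("F2", 2)])  # interactive jobs: RR, quantum 2
background = deque([("B1", 4)])             # batch job: FCFS
quantum, trace = 2, []

while foreground or background:
    if foreground:                          # fixed priority: foreground first
        name, left = foreground.popleft()
        run = min(quantum, left)
        trace.append((name, run))
        if left > run:
            foreground.append((name, left - run))
    else:                                   # background runs only when the
        name, left = background.popleft()   # foreground queue is empty
        trace.append((name, left))          # FCFS: runs to completion

print(trace)  # the batch job B1 runs last
```

Note the starvation risk mentioned above: if interactive jobs kept arriving, B1 would never run.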
Multilevel Queue Scheduling
Multilevel Feedback Queue
• A process can move between the various queues
• Aging can be implemented this way: a process that has waited too long
  in a low-priority queue is moved to a higher-priority queue