
Unit II

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Process Concept
● An operating system executes a variety of programs:
● Batch system – jobs
● Time-shared systems – user programs or tasks
● Textbook uses the terms job and process almost interchangeably
● Process – a program in execution; process execution must
progress in sequential fashion
● Multiple parts
● The program code, also called text section
● Current activity including program counter, processor
registers
● Stack containing temporary data
    – Function parameters, return addresses, local variables
● Data section containing global variables
● Heap containing memory dynamically allocated during run
time

Process Concept (Cont.)
● Program is passive entity stored on disk (executable file),
process is active
● Program becomes process when executable file loaded into
memory
● Execution of program started via GUI mouse clicks, command
line entry of its name, etc
● One program can be several processes
● Consider multiple users executing the same program

Process in Memory

Process State

● As a process executes, it changes state


● new: The process is being created
● running: Instructions are being executed
● waiting: The process is waiting for some event to occur
● ready: The process is waiting to be assigned to a processor
● terminated: The process has finished execution

Diagram of Process State

Process Control Block (PCB)
Information associated with each process
(also called task control block)
● Process state – running, waiting, etc.
● Program counter – location of the next instruction to execute
● CPU registers – contents of all process-centric registers
● CPU scheduling information – priorities, scheduling queue pointers
● Memory-management information –
memory allocated to the process
● Accounting information – CPU used,
clock time elapsed since start, time
limits
● I/O status information – I/O devices
allocated to process, list of open files

CPU Switch From Process to Process

Threads
● So far, process has a single thread of execution
● Consider having multiple program counters per process
● Multiple locations can execute at once
    – Multiple threads of control -> threads
● Must then have storage for thread details, multiple program
counters in PCB

Multithreaded Server Architecture

Benefits

● Responsiveness – may allow continued execution if part of process is blocked, especially important for user interfaces
● Resource Sharing – threads share resources of process, easier
than shared memory or message passing
● Economy – cheaper than process creation, thread switching
lower overhead than context switching
● Scalability – process can take advantage of multiprocessor
architectures

Multicore Programming

● Multicore or multiprocessor systems putting pressure on programmers, challenges include:
● Dividing activities
● Balance
● Data splitting
● Data dependency
● Testing and debugging
● Parallelism implies a system can perform more than one task
simultaneously
● Concurrency supports more than one task making progress
● Single processor / core, scheduler providing concurrency

Multicore Programming (Cont.)

● Types of parallelism
● Data parallelism – distributes subsets of the same data
across multiple cores, same operation on each
● Task parallelism – distributing threads across cores, each
thread performing unique operation
● As # of threads grows, so does architectural support for
threading
● CPUs have cores as well as hardware threads
● Consider Oracle SPARC T4 with 8 cores, and 8 hardware
threads per core
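As a concrete illustration of data parallelism, here is a minimal sketch (not part of the original slides; names such as partial_sum are invented) that splits an array sum across two POSIX threads, each applying the same operation to its own half of the data. Task parallelism would instead give each thread a different function.

/* Data parallelism sketch: two threads, same operation, different halves of the data.
   Build (Linux/macOS, assumed): gcc sum.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define N 1000
static int data[N];

struct range { int lo, hi; long sum; };

static void *partial_sum(void *arg) {
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->lo; i < r->hi; i++)
        r->sum += data[i];          /* same operation applied to each subset of the data */
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = i;

    struct range halves[2] = { {0, N / 2, 0}, {N / 2, N, 0} };
    pthread_t tid[2];

    for (int t = 0; t < 2; t++)
        pthread_create(&tid[t], NULL, partial_sum, &halves[t]);
    for (int t = 0; t < 2; t++)
        pthread_join(tid[t], NULL);

    printf("total = %ld\n", halves[0].sum + halves[1].sum);  /* prints 499500 */
    return 0;
}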

Concurrency vs. Parallelism
● Concurrent execution on single-core system:

● Parallelism on a multi-core system:

Single and Multithreaded Processes

User Threads and Kernel Threads

● User threads - management done by user-level threads library


● Three primary thread libraries:
● POSIX Pthreads
● Windows threads
● Java threads
● Kernel threads - Supported by the Kernel
● Examples – virtually all general purpose operating systems, including:
● Windows
● Solaris
● Linux
● Tru64 UNIX
● Mac OS X

Multithreading Models

● Many-to-One

● One-to-One

● Many-to-Many

Many-to-One

● Many user-level threads mapped to single kernel thread
● One thread blocking causes all to block
● Multiple threads may not run in parallel on a multicore system because only one may be in kernel at a time
● Few systems currently use this model
● Examples:
● Solaris Green Threads
● GNU Portable Threads

One-to-One
● Each user-level thread maps to kernel thread
● Creating a user-level thread creates a kernel thread
● More concurrency than many-to-one
● Number of threads per process sometimes
restricted due to overhead
● Examples
● Windows
● Linux
● Solaris 9 and later

Many-to-Many Model
● Allows many user level threads to be
mapped to many kernel threads
● Allows the operating system to create
a sufficient number of kernel threads
● Solaris prior to version 9
● Windows with the ThreadFiber
package

Process Scheduling

● Maximize CPU use, quickly switch processes onto CPU for time sharing
● Process scheduler selects among available processes for
next execution on CPU
● Maintains scheduling queues of processes
● Job queue – set of all processes in the system
● Ready queue – set of all processes residing in main
memory, ready and waiting to execute
● Device queues – set of processes waiting for an I/O
device
● Processes migrate among the various queues

Ready Queue And Various I/O Device Queues

Representation of Process Scheduling

● Queueing diagram represents queues, resources, flows

Schedulers
● Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
● Sometimes the only scheduler in a system
● Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be
fast)
● Long-term scheduler (or job scheduler) – selects which processes should
be brought into the ready queue
● Long-term scheduler is invoked infrequently (seconds, minutes) ⇒
(may be slow)
● The long-term scheduler controls the degree of multiprogramming
● Processes can be described as either:
● I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
● CPU-bound process – spends more time doing computations; few
very long CPU bursts
● Long-term scheduler strives for good process mix

Addition of Medium Term Scheduling
● Medium-term scheduler can be added if the degree of multiprogramming needs to decrease
● Remove process from memory, store on disk, bring back
in from disk to continue execution: swapping

Multitasking in Mobile Systems
● Some mobile systems (e.g., early versions of iOS) allow only one process to run, others suspended
● Due to screen real estate and user interface limits, iOS provides for a
  ● Single foreground process – controlled via user interface
  ● Multiple background processes – in memory, running, but not on the display, and with limits
  ● Limits include single, short task, receiving notification of events, specific long-running tasks like audio playback
● Android runs foreground and background, with fewer limits
● Background process uses a service to perform tasks
● Service can keep running even if background process is
suspended
● Service has no user interface, small memory use

Context Switch
● When CPU switches to another process, the system must save
the state of the old process and load the saved state for the
new process via a context switch
● Context of a process represented in the PCB
● Context-switch time is overhead; the system does no useful
work while switching
● The more complex the OS and the PCB ⇒ the longer the context switch
● Time dependent on hardware support
● Some hardware provides multiple sets of registers per CPU ⇒ multiple contexts loaded at once

Operations on Processes

● System must provide mechanisms for:


● process creation,
● process termination,
● and so on as detailed next

Process Creation
● Parent process creates children processes, which, in turn, create other processes, forming a tree of processes
● Generally, process identified and managed via a process
identifier (pid)
● Resource sharing options
● Parent and children share all resources
● Children share subset of parent’s resources
● Parent and child share no resources
● Execution options
● Parent and children execute concurrently
● Parent waits until children terminate
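As a minimal sketch (assuming a POSIX system; this code is not part of the slides), the snippet below uses fork() to create a child, loads a new program into the child with exec, and has the parent wait until the child terminates, matching the second execution option above.

/* Sketch of process creation on a POSIX system: fork a child, exec a program, wait. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {              /* child: replace its image with /bin/ls */
        execlp("ls", "ls", NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {                            /* parent: wait until the child terminates */
        wait(NULL);
        printf("child %d complete\n", (int)pid);
    }
    return 0;
}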

Process Termination

● Process executes last statement and then asks the operating system to delete it using the exit() system call.
● Returns status data from child to parent (via wait())
● Process’ resources are deallocated by operating system
● Parent may terminate the execution of children processes using
the abort() system call. Some reasons for doing so:
● Child has exceeded allocated resources
● Task assigned to child is no longer required
● The parent is exiting and the operating system does not allow a child to continue if its parent terminates

Process Termination (Cont.)

● Some operating systems do not allow a child to exist if its parent has terminated. If a process terminates, then all its children must also be terminated.
● cascading termination. All children, grandchildren, etc. are
terminated.
● The termination is initiated by the operating system.
● The parent process may wait for termination of a child process by using the wait() system call. The call returns status information and the pid of the terminated process
  pid = wait(&status);
● If no parent is waiting (did not invoke wait()), the process is a zombie
● If the parent terminated without invoking wait(), the process is an orphan
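A brief sketch of the wait() usage described above (standard POSIX calls; the exit status 7 is an arbitrary example value): the parent collects the child's exit status, which also reaps the child so it does not remain a zombie.

/* Parent retrieves the exit status of a terminated child via wait(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {                       /* child */
        exit(7);                          /* ask the OS to delete it, returning status 7 */
    } else if (pid > 0) {                 /* parent */
        int status;
        pid_t done = wait(&status);       /* reaps the child: no zombie remains */
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)done, WEXITSTATUS(status));
    }
    return 0;
}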

Basic Concepts

● Maximum CPU utilization obtained with multiprogramming
● CPU–I/O Burst Cycle – Process
execution consists of a cycle of
CPU execution and I/O wait
● CPU burst followed by I/O burst
● CPU burst distribution is of main
concern

CPU Scheduler
● Short-term scheduler selects from among the processes in
ready queue, and allocates the CPU to one of them
● Queue may be ordered in various ways
● CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
● Scheduling under 1 and 4 is nonpreemptive
● All other scheduling is preemptive
● Consider access to shared data
● Consider preemption while in kernel mode
● Consider interrupts occurring during crucial OS activities

Dispatcher

● Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
● switching context
● switching to user mode
● jumping to the proper location in the user program to
restart that program
● Dispatch latency – time it takes for the dispatcher to stop
one process and start another running

Scheduling Criteria

● CPU utilization – keep the CPU as busy as possible


● Throughput – # of processes that complete their execution per
time unit
● Turnaround time – amount of time to execute a particular
process
● Waiting time – amount of time a process has been waiting in the
ready queue
● Response time – amount of time it takes from when a request
was submitted until the first response is produced, not output
(for time-sharing environment)

Scheduling Algorithm Optimization Criteria

● Max CPU utilization


● Max throughput
● Min turnaround time
● Min waiting time
● Min response time

First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

● Suppose that the processes arrive in the order: P1, P2, P3
  The Gantt chart for the schedule is:

  | P1 | P2 | P3 |
  0    24   27   30

● Waiting time for P1 = 0; P2 = 24; P3 = 27


● Average waiting time: (0 + 24 + 27)/3 = 17
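A small sketch (not from the slides) that computes FCFS waiting times for a given arrival order: with the order P1, P2, P3 it reproduces the average of 17, and reordering burst[] to {3, 3, 24} (i.e., P2, P3, P1) gives the average of 3 shown on the following slide.

/* FCFS: each process waits for the sum of the bursts queued ahead of it. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 in arrival order */
    int n = 3, elapsed = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, elapsed);
        total_wait += elapsed;
        elapsed += burst[i];             /* the next process waits for everything so far */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 17.00 */
    return 0;
}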

FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
● The Gantt chart for the schedule is:

  | P2 | P3 | P1 |
  0    3    6    30

● Waiting time for P1 = 6; P2 = 0; P3 = 3


● Average waiting time: (6 + 0 + 3)/3 = 3
● Much better than previous case
● Convoy effect - short process behind long process
● Consider one CPU-bound and many I/O-bound processes

Shortest-Job-First (SJF) Scheduling

● Associate with each process the length of its next CPU burst
● Use these lengths to schedule the process with the shortest
time
● SJF is optimal – gives minimum average waiting time for a given
set of processes
● The difficulty is knowing the length of the next CPU request
● Could ask the user

Example of SJF

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

● SJF scheduling chart (arrival times are ignored here; all four processes are treated as available at time 0):

  | P4 | P1 | P3 | P2 |
  0    3    9    16   24

● Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
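A sketch (not from the slides) of nonpreemptive SJF for this example: sort the bursts in ascending order, then apply the FCFS waiting-time rule. It reproduces the average of 7 (arrival times ignored, as in the chart above).

/* Nonpreemptive SJF: run the shortest burst first, then compute waits as in FCFS. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;       /* ascending burst length */
}

int main(void) {
    int burst[] = {6, 8, 7, 3};                     /* P1..P4 */
    int n = 4, elapsed = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], cmp);          /* shortest job first: 3, 6, 7, 8 */
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;
        elapsed += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 7.00 */
    return 0;
}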

Example of Shortest-remaining-time-first

● Now we add the concepts of varying arrival times and preemption to the analysis

  Process   Arrival Time   Burst Time
  P1        0              8
  P2        1              4
  P3        2              9
  P4        3              5

● Preemptive SJF Gantt chart:

  | P1 | P2 | P4 | P1 | P3 |
  0    1    5    10   17   26

● Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec
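A compact sketch (not from the slides) of shortest-remaining-time-first: at every time unit the dispatcher picks the arrived process with the least remaining burst, preempting the current one if necessary. For the table above it reports an average waiting time of 6.5.

/* Shortest-remaining-time-first (preemptive SJF), simulated one time unit at a time. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3};          /* P1..P4 */
    int burst[]   = {8, 4, 9, 5};
    int remaining[4], finish[4], n = 4, work = 0;

    for (int i = 0; i < n; i++) { remaining[i] = burst[i]; work += burst[i]; }

    for (int t = 0; t < work; t++) {       /* one CPU time unit per iteration */
        int pick = -1;
        for (int i = 0; i < n; i++)        /* arrived process with least remaining time */
            if (arrival[i] <= t && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (--remaining[pick] == 0)
            finish[pick] = t + 1;          /* completion time of this process */
    }

    double total_wait = 0;
    for (int i = 0; i < n; i++)
        total_wait += finish[i] - arrival[i] - burst[i];    /* waiting = turnaround - burst */
    printf("average waiting time = %.2f\n", total_wait / n);  /* 6.50 */
    return 0;
}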

Priority Scheduling

● A priority number (integer) is associated with each process
● The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
  ● Preemptive
  ● Nonpreemptive
● SJF is priority scheduling where priority is the inverse of predicted next CPU burst time
● Problem ≡ Starvation – low priority processes may never execute
● Solution ≡ Aging – as time progresses increase the priority of the process

Example of Priority Scheduling

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

● Priority scheduling Gantt chart:

  | P2 | P5 | P1 | P3 | P4 |
  0    1    6    16   18   19

● Average waiting time = 8.2 msec

Round Robin (RR)

● Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
● If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once. No process waits more
than (n-1)q time units.
● Timer interrupts every quantum to schedule next process
● Performance
● q large ⇒ FIFO
● q small ⇒ q must be large with respect to context switch,
otherwise overhead is too high

Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
● The Gantt chart is:

  | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
  0    4    7    10   14   18   22   26   30

● Typically, higher average turnaround than SJF, but better response


● q should be large compared to context switch time
● q usually 10ms to 100ms, context switch < 10 usec
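A sketch (not from the slides) of round-robin with q = 4 for the three processes above; it reproduces the Gantt chart order and prints an average waiting time of about 5.67 ms.

/* Round-robin with time quantum q: rotate through the ready processes,
   giving each at most q time units per turn. */
#include <stdio.h>

int main(void) {
    int burst[]     = {24, 3, 3};          /* P1, P2, P3, all ready at time 0 */
    int remaining[] = {24, 3, 3};
    int n = 3, q = 4, clock = 0, finish[3] = {0}, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            printf("| P%d %d-%d ", i + 1, clock, clock + slice);
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = clock; left--; }
        }
    }
    printf("|\n");

    double total_wait = 0;
    for (int i = 0; i < n; i++)
        total_wait += finish[i] - burst[i];              /* arrival time is 0 for all */
    printf("average waiting time = %.2f\n", total_wait / n);  /* about 5.67 */
    return 0;
}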

Time Quantum and Context Switch Time

Multilevel Queue
● Ready queue is partitioned into separate queues, e.g.:
● foreground (interactive)
● background (batch)
● Process stays permanently in a given queue
● Each queue has its own scheduling algorithm:
● foreground – RR
● background – FCFS
● Scheduling must be done between the queues:
● Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
● Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR
● 20% to background in FCFS

Multilevel Queue Scheduling

Multilevel Feedback Queue

● A process can move between the various queues; aging can be implemented this way
● Multilevel-feedback-queue scheduler defined by the following
parameters:
● number of queues
● scheduling algorithms for each queue
● method used to determine when to upgrade a process
● method used to determine when to demote a process
● method used to determine which queue a process will enter
when that process needs service
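The parameters listed above can be captured in a small configuration sketch (illustrative only; the type and field names below are invented for this example, not a real API):

/* Hypothetical data structure describing a multilevel feedback queue scheduler. */
#define MAX_LEVELS 8

enum policy { RR, FCFS };

struct mlfq_level {
    enum policy algorithm;     /* scheduling algorithm for this queue */
    int time_quantum;          /* used when algorithm == RR */
};

struct mlfq_config {
    int nlevels;                                   /* number of queues */
    struct mlfq_level level[MAX_LEVELS];
    int (*entry_queue)(const void *proc);          /* which queue a new process enters */
    int (*should_upgrade)(const void *proc);       /* when to move a process up (aging) */
    int (*should_demote)(const void *proc);        /* when to move a process down */
};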

Example of Multilevel Feedback Queue

● Three queues:
● Q0 – RR with time quantum 8
milliseconds
● Q1 – RR time quantum 16 milliseconds
● Q2 – FCFS

● Scheduling
  ● A new job enters queue Q0 which is served FCFS
    – When it gains CPU, job receives 8 milliseconds
    – If it does not finish in 8 milliseconds, job is moved to queue Q1
  ● At Q1 job is again served FCFS and receives 16 additional milliseconds
    – If it still does not complete, it is preempted and moved to queue Q2

Thread Scheduling

● Distinction between user-level and kernel-level threads


● When threads supported, threads scheduled, not processes
● Many-to-one and many-to-many models, thread library schedules
user-level threads to run on LWP
● Known as process-contention scope (PCS) since scheduling
competition is within the process
● Typically done via priority set by programmer
● Kernel thread scheduled onto available CPU is system-contention
scope (SCS) – competition among all threads in system
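With POSIX Pthreads the contention scope can be requested through a thread attribute. A brief sketch (standard Pthreads calls; note that some systems, e.g. Linux, support only system contention scope and reject PTHREAD_SCOPE_PROCESS):

/* Requesting system-contention scope (SCS) for a new thread via Pthreads. */
#include <pthread.h>
#include <stdio.h>

static void *runner(void *arg) {
    (void)arg;
    printf("running with the scope set in the attribute object\n");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);
    pthread_attr_getscope(&attr, &scope);          /* default scope on this system */
    printf("default scope is %s\n",
           scope == PTHREAD_SCOPE_PROCESS ? "PCS" : "SCS");

    /* Ask for system-contention scope; PTHREAD_SCOPE_PROCESS would request PCS instead. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_t tid;
    pthread_create(&tid, &attr, runner, NULL);
    pthread_join(tid, NULL);
    return 0;
}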

Multiple-Processor Scheduling
● CPU scheduling more complex when multiple CPUs are
available
● Homogeneous processors within a multiprocessor
● Asymmetric multiprocessing – only one processor accesses
the system data structures, alleviating the need for data sharing
● Symmetric multiprocessing (SMP) – each processor is
self-scheduling, all processes in common ready queue, or each
has its own private queue of ready processes
● Currently, most common
● Processor affinity – process has affinity for processor on
which it is currently running
● soft affinity
● hard affinity
● Variations including processor sets
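Hard affinity can be requested explicitly on some systems; a minimal Linux-specific sketch using sched_setaffinity() is shown below (other operating systems expose different interfaces):

/* Pin the calling process to CPU 0 (Linux-specific hard affinity). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                          /* allow only logical CPU 0 */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {   /* pid 0 = calling process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("process now runs only on CPU 0\n");
    return 0;
}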

NUMA and CPU Scheduling

Note that memory-placement algorithms can also consider affinity

Multiple-Processor Scheduling – Load Balancing

● If SMP, need to keep all CPUs loaded for efficiency


● Load balancing attempts to keep workload evenly distributed
● Push migration – periodic task checks load on each
processor, and if found pushes task from overloaded CPU to
other CPUs
● Pull migration – idle processor pulls waiting task from busy processor

Multicore Processors

● Recent trend to place multiple processor cores on same physical chip
● Faster and consumes less power
● Multiple threads per core also growing
● Takes advantage of memory stall to make progress on another thread while the memory retrieval happens

Multithreaded Multicore System

Real-Time CPU Scheduling
● Can present obvious challenges
● Soft real-time systems – no guarantee as to when critical real-time process will be scheduled
● Hard real-time systems – task must be serviced by its deadline
● Two types of latencies affect performance
  1. Interrupt latency – time from arrival of interrupt to start of routine that services interrupt
  2. Dispatch latency – time for scheduler to take current process off CPU and switch to another

Real-Time CPU Scheduling (Cont.)

● Conflict phase of dispatch latency:
  1. Preemption of any process running in kernel mode
  2. Release by low-priority process of resources needed by high-priority processes

Priority-based Scheduling
● For real-time scheduling, scheduler must support preemptive,
priority-based scheduling
● But only guarantees soft real-time
● For hard real-time must also provide ability to meet deadlines
● Processes have new characteristics: periodic ones require CPU at
constant intervals
● Has processing time t, deadline d, period p
● 0 ≤ t ≤ d ≤ p
● Rate of periodic task is 1/p
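As a worked example (hypothetical numbers, not from the slides): a periodic task with processing time t = 20 ms, deadline d = 50 ms, and period p = 100 ms satisfies 0 ≤ t ≤ d ≤ p, runs at a rate of 1/p = 10 executions per second, and consumes t/p = 20% of the CPU.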

