Professor Mangal Sain
Lecture 3
Threads
MOTIVATION
Types of parallelism
Data parallelism – distributes subsets of the
same data across multiple cores, same operation
on each
Task parallelism – distributing threads across
cores, each thread performing unique operation
As # of threads grows, so does architectural
support for threading
CPUs have cores as well as hardware threads
Consider Oracle SPARC T4 with 8 cores, and 8
hardware threads per core
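To make the distinction concrete, here is a minimal sketch in C with POSIX threads (the thread library itself is covered later in the course); the names sum_half and find_max are invented for the example. Two threads run the same summing operation on different halves of an array (data parallelism), while a third thread performs a different operation entirely (task parallelism). Compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

/* Data parallelism: the SAME operation (summing) on DIFFERENT subsets. */
static void *sum_half(void *arg) {
    int start = *(int *)arg;   /* 0 for the first half, N/2 for the second */
    long sum = 0;
    for (int i = start; i < start + N / 2; i++)
        sum += data[i];
    printf("sum of half starting at %d = %ld\n", start, sum);
    return NULL;
}

/* Task parallelism: this thread performs a DIFFERENT operation. */
static void *find_max(void *arg) {
    (void)arg;
    int max = data[0];
    for (int i = 1; i < N; i++)
        if (data[i] > max) max = data[i];
    printf("max = %d\n", max);
    return NULL;
}

int main(void) {
    pthread_t t1, t2, t3;
    int lo = 0, hi = N / 2;

    /* Two threads, same code, different data subsets. */
    pthread_create(&t1, NULL, sum_half, &lo);
    pthread_create(&t2, NULL, sum_half, &hi);
    /* A third thread doing unrelated work: task parallelism. */
    pthread_create(&t3, NULL, find_max, NULL);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return 0;
}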
CONCURRENCY VS. PARALLELISM
Concurrent execution on a single-core system interleaves tasks over time, while parallelism on a multi-core system executes tasks simultaneously
MULTITHREADING MODELS
Many-to-One
One-to-One
Many-to-Many
MANY-TO-ONE
Many user-level threads mapped to a single kernel thread
THREAD POOLS
Create a number of threads in a pool, where they await work (a minimal sketch follows this list)
Advantages:
Usually slightly faster to service a request with an existing thread than to create a new thread
Allows the number of threads in the application(s) to be bound to the size of the pool
Separating the task to be performed from the mechanics of creating the task allows different strategies for running the task
e.g., tasks could be scheduled to run periodically
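As a hedged illustration of the pool mechanics described above, here is a minimal fixed-size pool built on POSIX threads; it is a sketch, not the lecture's own code, and all names (pool_t, pool_submit, QUEUE_CAP, and so on) are invented for the example. Worker threads block on a condition variable until work arrives, so each submitted task is serviced by an existing thread rather than a newly created one, and pending work is bounded by the queue capacity.

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4     /* number of threads in the pool */
#define QUEUE_CAP 16    /* bound on pending tasks        */

typedef void (*task_fn)(int);

typedef struct {
    task_fn fn[QUEUE_CAP];          /* circular queue of tasks */
    int     arg[QUEUE_CAP];
    int     head, count, shutdown;
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    pthread_t workers[POOL_SIZE];
} pool_t;

static void *worker(void *p) {
    pool_t *pool = p;
    for (;;) {
        pthread_mutex_lock(&pool->lock);
        while (pool->count == 0 && !pool->shutdown)
            pthread_cond_wait(&pool->nonempty, &pool->lock);
        if (pool->count == 0) {     /* shutdown requested and queue drained */
            pthread_mutex_unlock(&pool->lock);
            return NULL;
        }
        task_fn fn  = pool->fn[pool->head];
        int     arg = pool->arg[pool->head];
        pool->head = (pool->head + 1) % QUEUE_CAP;
        pool->count--;
        pthread_mutex_unlock(&pool->lock);
        fn(arg);                    /* run the task on this existing thread */
    }
}

static void pool_init(pool_t *pool) {
    pool->head = pool->count = pool->shutdown = 0;
    pthread_mutex_init(&pool->lock, NULL);
    pthread_cond_init(&pool->nonempty, NULL);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool->workers[i], NULL, worker, pool);
}

static int pool_submit(pool_t *pool, task_fn fn, int arg) {
    pthread_mutex_lock(&pool->lock);
    if (pool->count == QUEUE_CAP) { /* queue full: reject the task */
        pthread_mutex_unlock(&pool->lock);
        return -1;
    }
    int tail = (pool->head + pool->count) % QUEUE_CAP;
    pool->fn[tail]  = fn;
    pool->arg[tail] = arg;
    pool->count++;
    pthread_cond_signal(&pool->nonempty);
    pthread_mutex_unlock(&pool->lock);
    return 0;
}

static void pool_shutdown(pool_t *pool) {
    pthread_mutex_lock(&pool->lock);
    pool->shutdown = 1;             /* workers drain the queue, then exit */
    pthread_cond_broadcast(&pool->nonempty);
    pthread_mutex_unlock(&pool->lock);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool->workers[i], NULL);
}

static void print_task(int n) { printf("task %d\n", n); }

int main(void) {
    pool_t pool;
    pool_init(&pool);
    for (int i = 0; i < 10; i++)
        pool_submit(&pool, print_task, i);
    pool_shutdown(&pool);
    return 0;
}

A production pool would also need a way to return results and to grow or shrink the pool; this sketch keeps only the queue-plus-workers core.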
OPENMP
Set of compiler directives and an API for C, C++, and FORTRAN
Provides support for parallel programming in shared-memory environments
Identifies parallel regions – blocks of code that can run in parallel
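A minimal example of these directives, assuming a C compiler with OpenMP support (e.g., gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* The block after the directive is a parallel region: the compiler
       creates a team of threads, and each thread executes the block. */
    #pragma omp parallel
    {
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* A data-parallel loop: iterations are divided among the threads,
       and the reduction clause combines the per-thread partial sums. */
    int sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 100; i++)
        sum += i;
    printf("sum = %d\n", sum);   /* prints 5050 */
    return 0;
}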
Windows Threads
Linux Threads
WINDOWS THREADS
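The body of this slide is not in the extract; as a hedged sketch of the Win32 thread API, the following creates and joins a single thread (the worker function is invented for the example):

#include <windows.h>
#include <stdio.h>

/* Thread function: Windows threads return a DWORD and use WINAPI linkage. */
DWORD WINAPI worker(LPVOID param) {
    int id = *(int *)param;
    printf("worker %d running\n", id);
    return 0;
}

int main(void) {
    int arg = 1;
    DWORD tid;

    /* Default security attributes and stack size; start running at once. */
    HANDLE h = CreateThread(NULL, 0, worker, &arg, 0, &tid);
    if (h == NULL) {
        fprintf(stderr, "CreateThread failed\n");
        return 1;
    }
    WaitForSingleObject(h, INFINITE);   /* analogous to pthread_join */
    CloseHandle(h);
    return 0;
}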
CPU Scheduling
CONTENT
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
BASIC CONCEPTS
Maximum CPU utilization obtained with multiprogramming
CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
CPU burst followed by I/O burst
CPU burst distribution is of main concern
CPU SCHEDULER
Short-term scheduler selects from among the processes in the ready queue, and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
DISPATCHER
Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another running
SCHEDULING CRITERIA
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request was submitted until the first response is produced, not output (for time-sharing environment)
SCHEDULING ALGORITHM OPTIMIZATION CRITERIA
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
Example Gantt charts (the burst-time tables from the original slides are not in this extract):
FCFS, arrival order P1, P2, P3: | P1 | P2 | P3 | with boundaries at 0, 24, 27, 30
FCFS, arrival order P2, P3, P1: | P2 | P3 | P1 | with boundaries at 0, 3, 6, 30
SJF: | P4 | P1 | P3 | P2 | with boundaries at 0, 3, 9, 16, 24
Shortest-remaining-time-first: | P1 | P2 | P4 | P1 | P3 | with boundaries at 0, 1, 5, 10, 17, 26
Round Robin (quantum = 4): | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 | with boundaries at 0, 4, 7, 10, 14, 18, 22, 26, 30
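The two FCFS rows above show the convoy effect: average waiting time depends heavily on arrival order. Assuming the burst times implied by the chart boundaries (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms), this sketch computes the average waiting time for both orders:

#include <stdio.h>

/* FCFS: each process waits for the sum of the bursts scheduled before it. */
static double avg_wait_fcfs(const int burst[], int n) {
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;      /* process i has waited 'wait' ms so far */
        wait  += burst[i];
    }
    return (double)total / n;
}

int main(void) {
    int order1[] = {24, 3, 3};  /* P1, P2, P3: (0 + 24 + 27) / 3 = 17 ms */
    int order2[] = {3, 3, 24};  /* P2, P3, P1: (0 + 3 + 6)   / 3 = 3 ms  */
    printf("avg wait, P1 first: %.1f ms\n", avg_wait_fcfs(order1, 3));
    printf("avg wait, P1 last:  %.1f ms\n", avg_wait_fcfs(order2, 3));
    return 0;
}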
MULTILEVEL FEEDBACK QUEUE SCHEDULING
A new job enters queue Q0, which is served FCFS
When it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1
At Q1 the job is again served FCFS and receives 16 additional milliseconds
If it still does not complete, it is preempted and moved to queue Q2 (the demotion rule is sketched below)
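A minimal sketch of that three-queue demotion rule in C; run_for is a hypothetical helper standing in for actually dispatching the job:

#include <stdio.h>

/* Hypothetical helper: runs a job for up to 'quantum' ms and returns
   the CPU time still needed afterwards (0 means it finished). */
static int run_for(int remaining, int quantum) {
    return remaining > quantum ? remaining - quantum : 0;
}

/* Demotion rule from the slide: 8 ms in Q0, 16 more in Q1, then Q2. */
static void schedule_job(int burst_ms) {
    int left = run_for(burst_ms, 8);     /* Q0: FCFS, 8 ms quantum  */
    if (left == 0) { printf("finished in Q0\n"); return; }
    left = run_for(left, 16);            /* Q1: FCFS, 16 ms quantum */
    if (left == 0) { printf("finished in Q1\n"); return; }
    printf("demoted to Q2 with %d ms left\n", left);  /* Q2: FCFS */
}

int main(void) {
    schedule_job(5);    /* completes in Q0 */
    schedule_job(20);   /* completes in Q1 */
    schedule_job(40);   /* demoted to Q2   */
    return 0;
}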
MULTIPLE-PROCESSOR SCHEDULING
CPU scheduling is more complex when multiple CPUs are available
Homogeneous processors within a multiprocessor
Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes in a common ready queue, or each has its own private queue of ready processes
Currently, most common
Processor affinity – process has affinity for the processor on which it is currently running (a Linux example follows this list)
soft affinity
hard affinity
Variations including processor sets
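For hard affinity, Linux exposes sched_setaffinity; this minimal example pins the calling process to CPU 0 (Linux-specific, hence the _GNU_SOURCE feature macro):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);   /* allow this process to run only on CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}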
MULTIPLE-PROCESSOR SCHEDULING – LOAD BALANCING