Slides 03
Introduction to Threads
Basic idea
We build virtual processors in software, on top of physical processors:
Processes 3.1 Threads
Context Switching
Contexts
Processor context: The minimal collection of values stored in the
registers of a processor used for the execution of a series of
instructions (e.g., stack pointer, addressing registers, program
counter).
Thread context: The minimal collection of values stored in
registers and memory, used for the execution of a series of
instructions (i.e., processor context, state).
Process context: The minimal collection of values stored in
registers and memory, used for the execution of a thread (i.e.,
thread context, but now also at least MMU register values).
Observations
1. Threads share the same address space, so thread context switching
can be done entirely independently of the operating system.
2. Process switching is generally more expensive, as it involves
getting the OS in the loop, i.e., trapping to the kernel.
3. Creating and destroying threads is much cheaper than doing so
for processes.
Main issue
Should an OS kernel provide threads, or should they be implemented as
user-level packages?
User-space solution
All operations can be completely handled within a single process ⇒
implementations can be extremely efficient.
All services provided by the kernel are done on behalf of the process in
which a thread resides ⇒ if the kernel decides to block a thread, the
entire process will be blocked.
Threads are used when there are lots of external events: threads block
on a per-event basis ⇒ if the kernel can’t distinguish threads, how can it
support signaling events to them?
Kernel solution
The whole idea is to have the kernel contain the implementation of a thread
package. This means that all thread operations are implemented as system calls:
Operations that block a thread are no longer a problem: the kernel
schedules another available thread within the same process.
Handling external events is simple: the kernel (which catches all events)
schedules the thread associated with the event.
The problem is (or used to be) the loss of efficiency due to the fact that
each thread operation requires a trap to the kernel.
Conclusion – but
Attempts have been made to mix user-level and kernel-level threads into a
single concept; however, the performance gain has not turned out to outweigh
the increased complexity.
Improve performance
Starting a thread is much cheaper than starting a new process.
Having a single-threaded server prohibits simple scale-up to a
multiprocessor system.
As with clients: hide network latency by reacting to the next request while
the previous one is still being replied to.
Better structure
Most servers have high I/O demands. Using simple, well-understood
blocking calls simplifies the overall structure.
Multithreaded programs tend to be smaller and easier to understand due
to simplified flow of control.
Processes 3.2 Virtualization
Virtualization
Observation
Virtualization is becoming increasingly important:
Hardware changes faster than software
Ease of portability and code migration
Isolation of failing or attacked components
[Figure: (a) a program requiring interface A; (b) the same program running on
an implementation that mimics interface A on top of interface B]
Architecture of VMs
Observation
Virtualization can take place at very different levels, strongly depending
on the interfaces offered by various system components:
Library
System calls
[Figure: (a) an application on top of a runtime system and operating system;
(b) several applications, each with its own runtime system and operating
system, sharing the same hardware]
Practice
We’re seeing VMMs run on top of existing operating systems.
Perform binary translation: while executing an application or
operating system, translate its instructions into those of the underlying
machine.
Distinguish sensitive instructions: those that trap to the original kernel
(think of system calls, or privileged instructions).
Sensitive instructions are replaced with calls to the VMM.
Processes 3.3 Clients
Essence
A major part of client-side software is focused on (graphical) user
interfaces.
[Figure: the basic organization of the X Window System — applications use
Xlib on top of the local OS, communicating via the X protocol with the
X kernel and its device drivers]
Client-Side Software
Processes 3.4 Servers
Basic model
A server is a process that waits for incoming service requests at a
specific transport address. In practice, there is a one-to-one mapping
between a port and a service.
Type of servers
Superservers: servers that listen on several ports, i.e., provide several
independent services. In practice, when a service request comes
in, they start a subprocess to handle the request (UNIX inetd).
Iterative vs. concurrent servers: an iterative server can handle only one
client at a time, in contrast to a concurrent server.
Stateless servers
Never keep accurate information about the status of a client after having
handled a request:
Don’t record whether a file has been opened (simply close it again after
access)
Don’t promise to invalidate a client’s cache
Don’t keep track of your clients
Consequences
Clients and servers are completely independent
State inconsistencies due to client or server crashes are reduced
Possible loss of performance because, e.g., a server cannot anticipate
client behavior (think of prefetching file blocks)
Stateful servers
Keeps track of the status of its clients:
Record that a file has been opened, so that prefetching can be
done
Knows which data a client has cached, and allows clients to keep
local copies of shared data
Observation
The performance of stateful servers can be extremely high, provided
clients are allowed to keep local copies. As it turns out, reliability is not
a major problem.