5. Threads

Process Concept
• Classically, a process is an executing program with two characteristics:

• Resource Ownership

• Process includes a virtual address space to hold the process image

• Operating system prevents unwanted interference between processes

• Scheduling/Execution

• Process follows an execution path that may be interleaved with other processes

• Process has an execution state (Running, Ready, etc.) and a dispatching priority and is scheduled and dispatched by the operating system

• Today, the unit of dispatching is referred to as a thread or lightweight process

• The unit of resource ownership remains the process or task

Control Blocks
• Information associated with each process: Process Control
Block

• Memory management information

• Accounting information

• Information associated with each thread: Thread Control Block

• Program counter

• CPU registers

• CPU scheduling information

• Pending I/O information

Control Blocks

Process Control Block (PCB)
• Process ID (PID)
• Parent PID
• Next Process Block
• Handle Table (list of open files)
• Image File Name
• List of Thread Control Blocks

Thread Control Block (TCB)
• Next TCB
• Program Counter
• Registers
Single & Multithreading
Each thread has

• An execution state (Running, Ready, etc.)

• Saved thread context when not running

• An execution stack

• Some per-thread static storage for local variables

• Access to the memory and resources of its process (all threads of a process share this)

• Suspending a process involves suspending all threads of the process

• Termination of a process terminates all threads within the process

PT / FF 2014
Why Multithreading
• Advantages

• Better responsiveness - dedicated threads for handling user events

• Simpler resource sharing - all threads in a process share the same address space

• Utilization of multiple cores for parallel execution

• Faster creation and termination of activities

• Disadvantages

• Coordinated termination

• Signal and error handling

• Reentrant vs. non-reentrant system calls
Thread States
• The typical states for a thread are running, ready, blocked

• Typical thread operations associated with a change in thread state are:

• Spawn: a thread within a process may spawn another thread

• Provides instruction pointer and arguments for the new thread

• New thread gets its own register context and stack space

• Block: a thread needs to wait for an event

• Saves its user registers, program counter, and stack pointers

• Unblock: when the event for which a thread is blocked occurs, the thread is moved to the Ready state

• Finish: when a thread completes, its register context and stacks are deallocated.
Thread Dispatching
Context switch between thread T1 and thread T2:

• Thread T1 executing → interrupt or system call → save state into TCB1 → T1 ready or waiting

• Reload state from TCB2 → Thread T2 executing → interrupt or system call → save state into TCB2 → T2 ready or waiting

• Reload state from TCB1 → Thread T1 executing again
Threads

Threads share…
 Global memory
 Process ID and parent process ID
 Controlling terminal
 Process credentials (user and group IDs)
 Open file information
 Timers
 …

Thread-specific attributes…
 Thread ID
 Thread-specific data
 CPU affinity
 Stack (local variables and function call linkage information)
 …
Process vs. Threads
Multicore Programming

Concurrent Execution on a Single-core System

Parallel Execution on a Multicore System


Multicore Programming
 Multicore systems put pressure on
programmers; challenges include
◦ Dividing activities
 What tasks can be separated to run on different
processors
◦ Balance
 Balance work across all processors
◦ Data splitting
 Separate the data to run with the tasks
◦ Data dependency
 Watch for dependencies between tasks
◦ Testing and debugging
 Harder!
Types of Parallelism
 Data Parallelism: focus on distributing
data across different parallel computing
nodes
 Task Parallelism: focus on distributing
execution processes (threads) across
different parallel computing nodes
Data vs. Task Parallelism

Data Parallelism
• Same operations are performed on different subsets of the same data.
• Synchronous computation
• Speedup is more, as there is only one execution thread operating on all sets of data.
• Amount of parallelization is proportional to the input data size.
• Designed for optimum load balance on a multiprocessor system.

Task Parallelism
• Different operations are performed on the same or different data.
• Asynchronous computation
• Speedup is less, as each processor will execute a different thread or process on the same or different sets of data.
• Amount of parallelization is proportional to the number of independent tasks to be performed.
• Load balancing depends on the availability of the hardware and on scheduling algorithms like static and dynamic scheduling.
Amdahl’s Law
 gives the theoretical speedup in latency of
the execution of a task at fixed workload
that can be expected of a system whose
resources are improved

 speedup ≤ 1 / (S + (1 − S) / N)

Where S = portion of program executed serially

N = number of processing cores
Amdahl’s Law Example
 We have an application that is 75 percent
parallel and 25 percent serial. If we run this
application on a system with two processing
cores, what is the speedup?
 S = 25% = 0.25, N = 2
 speedup ≤ 1 / (0.25 + 0.75 / 2) = 1 / 0.625 = 1.6

 If we add two additional cores (N = 4):
 speedup ≤ 1 / (0.25 + 0.75 / 4) = 1 / 0.4375 ≈ 2.29
Fork – Join Model
Multithreading Models
 Support is provided at either

◦ User level -> user threads
 Supported above the kernel and managed without
kernel support

◦ Kernel level -> kernel threads
 Supported and managed directly by the operating
system

 What is the relationship between user and kernel
threads?
User Threads
 Thread management done by user-level
threads library

 Three primary thread libraries:


◦ POSIX Pthreads
◦ Win32 threads
◦ Java threads
Kernel Threads
 Supported by the Kernel

 Examples
◦ Windows XP/2000
◦ Solaris
◦ Linux
◦ Tru64 UNIX
◦ Mac OS X
User vs. Kernel Thread
Multithreading Models
User thread-to-kernel thread mapping

 Many-to-One

 One-to-One

 Many-to-Many
Many-to-One
Many user-level threads
mapped to single kernel
thread

 Only one thread can


access the kernel at a
time,

 multiple threads are


unable to run in parallel
on multicore systems.

 the entire process will block


if a thread makes a blocking
system call
One-to-One
Each user-level thread maps to a kernel thread

 More concurrency than the many-to-one model, by allowing another thread to run when a thread makes a blocking system call

 Allows multiple threads to run in parallel on multiprocessors

 Drawback: creating a user thread requires creating the corresponding kernel thread
Many-to-Many Model
 Multiplexes many user-level threads to a smaller or equal number of kernel threads

 Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor

 When a thread performs a blocking system call, the kernel can schedule another thread for execution
Thread Libraries
 Three main thread libraries in use today:
◦ POSIX Pthreads
 May be provided either as user-level or kernel-level
 A POSIX standard (IEEE 1003.1c) API for thread
creation and synchronization
 API specifies behavior of the thread library;
implementation is up to the developers of the library
◦ Win32
 Kernel-level library on Windows system
◦ Java
 Java threads are managed by the JVM
 Typically implemented using the threads model
provided by underlying OS
POSIX Compilation on Linux

 On Linux, programs that use the Pthreads API
must be compiled and linked with
-pthread (or the older -lpthread)

gcc -pthread -o thread thread.c
POSIX: Thread Creation

int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start)(void *), void *arg);

 thread is the location where the ID of the newly created thread
should be stored, or NULL if the thread ID is not required.

 attr is the thread attribute object specifying the attributes for the
thread that is being created. If attr is NULL, the thread is created
with default attributes.

 start is the main function for the thread; the thread begins
executing user code at this address.

 arg is the argument passed to start.


POSIX: Thread ID
#include <pthread.h>

pthread_t pthread_self(void);

returns : ID of the current (calling) thread


POSIX: Wait for Thread
Completion
#include <pthread.h>

int pthread_join(pthread_t thread, void **retval);

 returns : 0 on success, an error code on failure. If retval is not NULL, it receives the value returned by the thread.


POSIX: Thread Termination
#include <pthread.h>

void pthread_exit(void *retval);

Threads terminate in one of the following ways:

 The thread's start function performs a return,
specifying a return value for the thread
 Thread receives a request asking it to terminate,
via pthread_cancel()
 Thread initiates termination itself with pthread_exit()
 Main process terminates
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* struct to be passed to each thread */
typedef struct {
    int thread_no;
    char message[100];
} thdata;

void *print_message_function(void *ptr)
{
    thdata *data = (thdata *)ptr;
    printf("Thread %d says %s\n", data->thread_no, data->message);
    pthread_exit(NULL);
}

int main()
{
    pthread_t thread1, thread2;  /* thread variables */
    thdata data1, data2;         /* structs to be passed to threads */

    /* initialize data to pass to thread 1 */
    data1.thread_no = 1;
    strcpy(data1.message, "Hello!");

    /* initialize data to pass to thread 2 */
    data2.thread_no = 2;
    strcpy(data2.message, "Hi!");

    /* create threads 1 and 2 */
    pthread_create(&thread1, NULL, print_message_function, (void *)&data1);
    pthread_create(&thread2, NULL, print_message_function, (void *)&data2);

    /* Main block now waits for both threads to terminate before it exits.
       If the main block exits, both threads exit, even if the threads
       have not finished their work */
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);

    exit(0);
}
