E-LEARNING
PRESENTATION
LISTEN … LEARN… LEAD…
COMPUTER SCIENCE AND ENGINEERING
P. MAHALAKSHMI, M.E., MISTE
ASSISTANT PROFESSOR
Nadar Saraswathi College of Engineering & Technology,
Vadapudupatti, Annanji (po), Theni – 625531.
UNIT III
SHARED MEMORY
PROGRAMMING WITH OpenMP
Introduction
OpenMP - Open Multi-Processing
OpenMP is an API for shared-memory parallel programming. The “MP” in OpenMP
stands for “multiprocessing,” a term that is synonymous with shared-memory parallel
computing.
OpenMP is designed for systems in which each thread or process can potentially have
access to all available memory. When programming with OpenMP, the programmer views
the system as a collection of cores or CPUs, all of which have access to main memory.
OpenMP is a set of compiler directives as well as an API for programs written in C, C++,
or FORTRAN that provides support for parallel programming in shared-memory
environments.
OpenMP is available on several open-source and commercial compilers for Linux,
Windows, and Mac OS X systems.
Every OpenMP program begins as a single process (the master thread) that executes
sequentially until a parallel region is encountered.
FORK – the master thread creates a team of parallel threads.
JOIN – when the threads in the team complete, they synchronize and terminate, leaving
only the master thread.
The number of parallel regions, and the number of threads that comprise them, is arbitrary.
parallel
Defines a parallel region, which is code that will be executed by multiple threads in parallel.
Syntax
#pragma omp parallel [clauses]
{
code_block
}
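
A minimal sketch of the directive in C (a hedged illustration; it assumes a compiler with OpenMP support, e.g. gcc -fopenmp):

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* FORK: the master thread creates a team of threads. */
    #pragma omp parallel
    {
        /* Every thread in the team executes this block. */
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }   /* JOIN: the team synchronizes; only the master thread continues. */
    return 0;
}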
Library functions
Introduction
OpenMP provides several run-time library routines to manage programs running in parallel mode.
Many of these run-time library routines have corresponding environment variables that can be set
as defaults.
The run-time library routines let you dynamically change these factors to assist in controlling your
program. In all cases, a call to a run-time library routine overrides any corresponding environment
variable.
Execution Environment Routines
omp_set_num_threads(nthreads) – Sets the number of threads to use for subsequent
parallel regions.
omp_get_num_threads() – Returns the number of threads being used in the current
parallel region.
omp_get_max_threads() – Returns the maximum number of threads available for parallel
execution.
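
The following sketch shows these routines together (the thread count of 4 is an arbitrary choice for illustration):

#include <stdio.h>
#include <omp.h>

int main(void) {
    printf("Maximum threads available: %d\n", omp_get_max_threads());

    omp_set_num_threads(4);   /* request 4 threads for later regions */

    #pragma omp parallel
    {
        #pragma omp single    /* print once, from one thread of the team */
        printf("Threads in this region: %d\n", omp_get_num_threads());
    }
    return 0;
}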
Timing Routines
omp_get_wtime() – Returns a double-precision value equal to the elapsed wall-clock time
(in seconds) relative to an arbitrary reference time. The reference time does not change
during program execution.
omp_get_wtick() – Returns a double-precision value equal to the number of seconds
between successive clock ticks.
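
A short sketch of timing a parallel loop with these routines (the array size N is an arbitrary assumption):

#include <stdio.h>
#include <omp.h>

#define N 10000000

static double a[N];

int main(void) {
    printf("Clock resolution: %e s\n", omp_get_wtick());

    double start = omp_get_wtime();
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 0.5 * i;
    printf("Elapsed wall-clock time: %f s\n", omp_get_wtime() - start);
    return 0;
}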
Dynamic scheduling is expensive: some communication between the threads is required
after each iteration (or chunk of iterations) of the loop, as the sketch below illustrates.
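
A sketch of a dynamically scheduled loop (the chunk size of 4 and the triangular workload are illustrative assumptions):

#include <stdio.h>
#include <omp.h>

int main(void) {
    double sum = 0.0;

    /* schedule(dynamic, 4): each thread takes 4 iterations at a time,
       reducing (but not eliminating) the per-chunk communication. */
    #pragma omp parallel for schedule(dynamic, 4) reduction(+:sum)
    for (int i = 0; i < 1000; i++) {
        double work = 0.0;
        for (int j = 0; j <= i; j++)   /* uneven work per iteration */
            work += j;
        sum += work;
    }
    printf("sum = %f\n", sum);
    return 0;
}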
iii) Guided
The guided scheduling type is similar to the dynamic scheduling type: OpenMP again
divides the iterations into chunks.
Each thread executes a chunk of iterations and then requests another chunk until there
are no more chunks available.
The difference from the dynamic scheduling type is in the size of the chunks.
The size of a chunk is proportional to the number of unassigned iterations divided by
the number of threads, so the chunk sizes decrease as the loop progresses. A sketch
follows below.
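
A sketch of a guided loop (the minimum chunk size of 8 is an arbitrary choice):

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* Chunks start large and shrink; the optional argument (8) is
       the minimum chunk size handed to a thread. */
    #pragma omp parallel for schedule(guided, 8)
    for (int i = 0; i < 100; i++)
        printf("iteration %3d on thread %d\n", i, omp_get_thread_num());
    return 0;
}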
iv) Auto
The auto scheduling type delegates the scheduling decision to the compiler and/or the
runtime system.
v) Runtime
The runtime scheduling type defers the scheduling decision until run time; the schedule
is then taken from the OMP_SCHEDULE environment variable or from a call to
omp_set_schedule().
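
A sketch of deferring the choice to run time (the OMP_SCHEDULE value shown is only an example):

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* The schedule is read at run time, e.g. from the environment:
       $ OMP_SCHEDULE="dynamic,2" ./a.out */
    #pragma omp parallel for schedule(runtime)
    for (int i = 0; i < 16; i++)
        printf("iteration %2d on thread %d\n", i, omp_get_thread_num());
    return 0;
}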