Concurrency in Operating System
Principles of Concurrency :
With current technology, such as multi-core processors and parallel
processing, multiple processes or threads can execute concurrently,
that is, at the same time. As a result, more than one process or
thread may access the same region of memory, the same variable
declared in the code, or even attempt to read from or write to the
same file.
Issues of Concurrency :
Non-atomic
Operations that are non-atomic can be interrupted partway through
by other processes or threads, which can cause problems. (An atomic
operation is one that completes as a single, indivisible step, so no
other process or thread can observe or interfere with its intermediate
state; any operation whose steps can be interleaved with those of
another process or thread is non-atomic.)
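To make this concrete, here is a minimal sketch of the classic "lost update" caused by a non-atomic increment. The interleaving is hand-scheduled (plain sequential code standing in for two threads) so the bug reproduces deterministically; the variable names are illustrative.

```python
# A counter increment is really three steps: read, add, write.
# If two threads interleave those steps, one update can be lost.
counter = 0

# Both "threads" intend to do: counter = counter + 1
t1_read = counter          # thread 1 reads 0
t2_read = counter          # thread 2 also reads 0, before thread 1 writes
counter = t1_read + 1      # thread 1 writes 1
counter = t2_read + 1      # thread 2 overwrites with 1 -- one update is lost

print(counter)             # 1, not the expected 2
```

With a truly atomic increment (or a lock around the read-modify-write), the final value would be 2.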
Race conditions
A race condition is a behavior that occurs in software when the
output depends on the timing or ordering of other uncontrollable
events. Race conditions commonly arise in software that is
multithreaded, runs in a distributed environment, or depends on
shared resources.
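One standard way to eliminate a race on shared data is mutual exclusion. The sketch below (names are illustrative) has several threads increment a shared counter while holding a lock, which serializes the non-atomic read-modify-write and makes the result deterministic regardless of thread timing.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:              # serialize the non-atomic increment
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                  # always 40000 when the lock is held
```

Without the `with lock:` line, the final value can come up short, because increments from different threads may interleave and overwrite each other.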
Blocking
A blocked process is one that is waiting for some event, such as a
resource becoming available or the completion of an I/O operation.
A process can block while waiting for a resource; for example, it
could be blocked for a long period of time waiting for input from a
terminal. If the process is required to periodically update some data,
this would be very undesirable.
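A small sketch of a blocked consumer, using Python's `queue.Queue`: `get()` blocks until an item arrives, and a timeout bounds how long the process stays blocked rather than waiting indefinitely, as described above.

```python
import queue

q = queue.Queue()

try:
    item = q.get(timeout=0.1)   # blocks; no producer exists, so this times out
    timed_out = False
except queue.Empty:
    timed_out = True            # we were blocked, but only for 0.1 s

print(timed_out)                # True
```

A process that must also do periodic work would use a timeout (or non-blocking `get_nowait()`) like this instead of an unbounded blocking call.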
Starvation
A problem encountered in concurrent computing where a process is
perpetually denied the resources it needs to make progress.
Starvation may be caused by errors in a scheduling or mutual-
exclusion algorithm, but it can also be caused by resource leaks.
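A toy sketch of scheduler-induced starvation (task names and priorities are illustrative): a strict-priority scheduler always runs the highest-priority ready task, so a low-priority task never runs as long as high-priority work keeps arriving.

```python
import heapq

ready = []  # min-heap of (priority, name); lower number = higher priority
heapq.heappush(ready, (10, "low-priority-task"))

ran = []
for tick in range(5):
    heapq.heappush(ready, (1, f"high-priority-{tick}"))  # new urgent work arrives
    _, task = heapq.heappop(ready)                       # always picks priority 1
    ran.append(task)

print("low-priority-task" in ran)  # False: it never got the CPU
```

Real schedulers counter this with techniques such as aging, gradually raising the priority of tasks that have waited a long time.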
Deadlock
In concurrent computing, a deadlock is a state in which each
member of a group waits for another member (possibly itself) to
take some action, such as sending a message or, more commonly,
releasing a lock. Deadlocks are a common problem in
multiprocessing systems, parallel computing, and distributed
systems, where software and hardware locks are used to arbitrate
shared resources and implement process synchronization.
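The classic two-lock deadlock arises when two threads acquire the same pair of locks in opposite orders, each ends up holding one lock and waiting for the other's. The standard prevention is a fixed global lock order, sketched below (names are illustrative): because both threads take `lock_a` before `lock_b`, a circular wait cannot form and both threads complete.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def transfer(name: str) -> None:
    # Both threads acquire in the same fixed order: lock_a, then lock_b.
    # (Acquiring them in opposite orders is what risks deadlock.)
    with lock_a:
        with lock_b:
            done.append(name)

t1 = threading.Thread(target=transfer, args=("t1",))
t2 = threading.Thread(target=transfer, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(done))  # ['t1', 't2'] -- both threads finish, no deadlock
```

If one thread instead took `lock_b` first, the program could hang forever with each thread blocked on the lock the other holds.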