OS-Course Material-UNIT-3
COURSE MATERIAL
UNIT 3
COURSE: B.TECH
DEPARTMENT: COMPUTER SCIENCE AND ENGINEERING
SEMESTER: 31
Version: V-5
BTECH_CSE-SEM 31
SVCE TIRUPATI
TABLE OF CONTENTS – UNIT 3
S. NO   CONTENT
1       COURSE OBJECTIVES
2       PREREQUISITES
3       SYLLABUS
4       COURSE OUTCOMES
5       CO-PO/PSO MAPPING
6       LESSON PLAN
7       ACTIVITY BASED LEARNING
8       LECTURE NOTES
        3.1  MEMORY MANAGEMENT INTRODUCTION
        3.2  SWAPPING
        3.3  CONTIGUOUS MEMORY ALLOCATION
        3.4  SEGMENTATION
        3.5  PAGING
        3.6  VIRTUAL MEMORY
        3.7  THRASHING
        3.8  DEADLOCKS
        3.9  DEADLOCK PREVENTION
        3.10 DEADLOCK DETECTION AND AVOIDANCE
        3.11 RECOVERY FROM DEADLOCK
9       PRACTICE QUIZ
10      ASSIGNMENTS
11      PART A QUESTIONS & ANSWERS (2 MARKS QUESTIONS)
12      PART B QUESTIONS
13      SUPPORTIVE ONLINE CERTIFICATION COURSES
14      REAL TIME APPLICATIONS
15      CONTENTS BEYOND THE SYLLABUS
16      PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
17      MINI PROJECT SUGGESTION
1. Course Objectives
The objectives of this course are to:
1. Make the students understand basic operating system concepts such as
processes, threads, scheduling, synchronization, deadlocks, memory
management, file and I/O subsystems, and protection.
2. Gain acquaintance with the class of abstractions afforded by general-purpose
operating systems that aid the development of user applications.
2. Prerequisites
Students should have knowledge of
1. System Software
2. Computer Organization and Architecture
3. Basics of Computing
3. Syllabus
UNIT III
Memory Management: Swapping, contiguous memory allocation, segmentation,
paging, structure of the page table. Virtual memory: demand paging, page
replacement, allocation of frames, thrashing, memory-mapped files, allocating
kernel memory. Deadlocks: system model, deadlock characterization, methods of
handling deadlocks, deadlock prevention, detection and avoidance, recovery
from deadlock.
4. Course outcomes
1. Student must be able to understand the significance of the operating system in
computing devices.
2. Student must be able to analyze the connection between application programs
and hardware devices through system calls.
3. Student should be able to investigate and illustrate various process scheduling
algorithms.
4. Student must be able to apply appropriate memory and file management
schemes.
5. Student must be able to design solutions for various disk scheduling problems.
5. Co-PO / PSO Mapping
     PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1  3   2   2
CO2  3   2   2   2
CO3  3   2   2   3   2
CO4  2   3   3   2   2   2
CO5  2   3   3   2   2   2
6. Lesson Plan
Dynamic Loading:
For a process to be executed it must be loaded into physical memory, so
the size of a process is limited to the size of physical memory.
Dynamic loading is used to obtain better memory utilization. With dynamic
loading, a routine or procedure is not loaded until it is called. Whenever a
routine is called, the calling routine first checks whether the called routine is
already loaded. If it is not, the loader loads the desired routine into memory,
updates the program's address table to reflect the change, and passes control
to the newly loaded routine.
Advantages: Better memory utilization, since an unused routine is never loaded.
No special operating system support is needed. This method is useful when
large amounts of code are needed to handle infrequently occurring cases.
More than one version of a library can be loaded in memory at a time, and
each program uses its own version of the library. Only programs that are
compiled with the new version are affected by the changes incorporated in
it; programs linked before the new version was installed continue to use the
older library. This type of system is called a "shared library".
Swapping
Swapping is a technique of temporarily removing inactive programs from
the memory of the system. A process can be swapped temporarily out of
memory to a backing store and then brought back into memory to continue
execution. This process is called swapping.
Example
In a multiprogramming environment with round-robin CPU scheduling,
whenever the time quantum expires the process that has just finished its
quantum is swapped out and a new process is swapped into memory for
execution.
Normally the process that is swapped out will be swapped back into the
same memory space it occupied previously. This depends on address
binding. If binding is done at load time, the process must be moved back to
the same memory location. If binding is done at run time, the process can be
moved to a different memory location, because the physical address is
computed at run time.
When a process terminates, its memory partition becomes available for another
process. A batch OS uses the fixed-size partition scheme.
The OS keeps a table indicating which parts of memory are free and which
are occupied. When a process enters the system it is placed in an input
queue. The OS keeps track of the memory requirement of each process and
the amount of memory available, and determines which process to allocate
memory to. When a process makes a request, the OS searches for a hole
large enough for it, where a hole is a large block of free memory. If the hole
is too large it is split in two: one part is allocated to the requesting process
and the other is returned to the set of holes. The set of holes is searched to
determine which
hole is best to allocate. There are three strategies to select a free hole:
First fit: Allocates the first hole that is big enough. This algorithm scans memory
from the beginning and selects the first available block that is large enough
to hold the process.
Best fit: Chooses the hole closest in size to the request, i.e., it allocates
the smallest hole that is big enough to hold the process.
Worst fit: Allocates the largest hole to the process request. It searches the
entire list for the largest hole.
First fit and best fit are the most popular algorithms for dynamic memory
allocation. First fit is generally faster. Best fit searches the entire list to find the
smallest hole that is large enough. Worst fit reduces the rate of production of
small holes. All these algorithms suffer from fragmentation.
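The three hole-selection strategies can be sketched as follows. This is a minimal illustration, not an OS implementation; the free memory is modelled simply as a list of hole sizes, and the function names are my own.

```python
def first_fit(holes, request):
    """Return the index of the first hole large enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole that is large enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if it can hold the request."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= request else None

holes = [100, 500, 200, 300, 600]   # free block sizes in KB (assumed data)
print(first_fit(holes, 212))        # 1 (the 500 KB hole)
print(best_fit(holes, 212))         # 3 (the 300 KB hole)
print(worst_fit(holes, 212))        # 4 (the 600 KB hole)
```

Note how best fit and worst fit must scan the whole list, while first fit can stop early, which is why first fit is generally faster.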
Memory Protection
Memory protection means protecting the OS from user processes and
protecting processes from one another. Memory protection is provided by using
a relocation register together with a limit register. The relocation register
contains the value of the smallest physical address and the limit register
contains the range of logical addresses (e.g., relocation = 100040 and
limit = 74600). The logical address must be less than the limit register; the
MMU maps the logical address dynamically by adding the value in the
relocation register. When the CPU scheduler selects a process for execution,
the dispatcher loads the relocation and limit registers with the correct values
as part of the context switch. Since every address generated by the CPU is
checked against these registers, we can protect the OS and other users'
programs and data from being modified.
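The relocation/limit check can be sketched in a few lines, using the example values from the text (relocation = 100040, limit = 74600). The function name and the exception standing in for the hardware trap are illustrative only.

```python
class AddressError(Exception):
    """Stands in for the hardware trap to the OS on an out-of-range address."""

def mmu_map(logical, relocation=100040, limit=74600):
    # Every CPU-generated logical address is first checked against the limit.
    if logical >= limit:
        raise AddressError(f"logical address {logical} out of range")
    # A valid address is relocated by adding the relocation register.
    return logical + relocation

print(mmu_map(42))      # 100082
print(mmu_map(74599))   # 174639, the largest mappable logical address
```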
Fragmentation
Memory fragmentation can be of two types: internal fragmentation and
external fragmentation. With internal fragmentation there is wasted space
internal to a partition, because the block of data loaded is smaller than the
partition.
Example
If there is a block of 50 KB and a process requests 40 KB, and the block is
allocated to the process, then 10 KB of memory is left unused.
Segmentation
Most users do not think of memory as a linear array of bytes; rather, users
think of memory as a collection of variable-sized segments dedicated to
particular uses such as code, data, stack, heap, etc. A logical address space
is a collection of segments. Each segment has a name and a length. An
address specifies both the segment name and the offset within the segment,
so the user specifies an address using two quantities: a segment name and
an offset. For simplicity the segments are numbered and referred to by a
segment number, so a logical address consists of <segment-number, offset>.
A program consisting of several segments can be shared, and we can also
share parts of a program.
Advantages
Eliminates fragmentation. Provides for virtual growth. Allows dynamic segment
growth. Assists dynamic linking. Segmentation is visible to the user.
Paging
Programs are divided into fixed-size pages.
Division is performed by the OS.
Paging is faster than segmentation.
Paging is invisible to the user.
Paging suffers from internal fragmentation but no external fragmentation.
The page number and offset are used to calculate the absolute address.
Paging
Paging is a memory management scheme that permits the physical address
space of a process to be non-contiguous. Support for paging is handled by
hardware. It is used to avoid external fragmentation. Paging avoids the
considerable problem of fitting varying-sized memory chunks onto the
backing store: when some code or data residing in main memory needs to be
swapped out, space must be found on the backing store.
Basic Method:
Physical memory is broken into fixed-sized blocks called frames (f). Logical
memory is broken into blocks of the same size called pages (p). When a
process is to be executed, its pages are loaded into available frames from
the backing store. The backing store is also divided into fixed-sized blocks of
the same size as the memory frames. The following figure shows the
paging hardware:
If the page table is large, the use of registers is not feasible. So the page
table is kept in main memory and a page-table base register (PTBR)
points to the page table. Changing the page table then requires changing
only this one register, which reduces context-switch time. The problem with this
approach is the time required to access a memory location: to access a
location we first have to index into the page table using the PTBR and the
page number. This gives the frame number, which is combined with the page
offset to produce the actual address. Thus we need two memory accesses for
each byte.
The only solution is to use special, fast, lookup hardware cache called
translation look-aside buffer (TLB), or associative registers. The TLB is built from
associative registers with high-speed memory. Each register contains two parts:
a key and a value.
Protection:
Memory protection in a paged environment is provided by protection bits
associated with each frame; these bits are kept in the page table. One bit can
define a page to be read-write or read-only.
To find the correct frame number, every reference to memory goes
through the page table. At the same time the physical address is computed,
the protection bits can be checked to verify that no writes are being made to a
read-only page. Any attempt to write to a read-only page causes a hardware
trap to the OS. This approach can be used to provide protection for read-only,
read-write, or execute-only pages. One more bit is generally attached to each
entry in the page table: a valid-invalid bit. A valid bit indicates that the
associated page is in the process's logical address space and thus is a legal
(valid) page.
If the bit is invalid, the page is not in the process's logical address space and
the reference is illegal. Illegal addresses are trapped using the valid-invalid bit.
The OS sets this bit for each page to allow or disallow access to that page.
One way to structure a large page table is the two-level paging algorithm, in
which the page table itself is also paged.
Example
Consider a 32-bit machine with a page size of 4 KB. A logical address is
divided into a page number of 20 bits and a page offset of 12 bits. Since the
page table is paged, the page number is further divided into a 10-bit page
number and a 10-bit page offset.
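The address split in this example can be checked with a few bit operations; the field widths (10/10/12) follow the two-level scheme just described, and the sample address is an assumed value for illustration.

```python
def split_two_level(addr):
    """Split a 32-bit logical address into (outer page number,
    inner page number, page offset) for 4 KB pages."""
    offset = addr & 0xFFF          # low 12 bits: offset within the 4 KB page
    p2 = (addr >> 12) & 0x3FF      # next 10 bits: index into an inner page table
    p1 = (addr >> 22) & 0x3FF      # top 10 bits: index into the outer page table
    return p1, p2, offset

# 0x00403004 splits into outer index 1, inner index 3, offset 4
print(split_two_level(0x00403004))   # (1, 3, 4)
```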
Working:
The virtual page number is taken from the virtual address and hashed into
the hash table. The virtual page number is compared with the first element
of the linked list. If the values match, that value (the page frame) is used
to calculate the physical address. If they do not match, the entire linked list is
searched for a matching virtual page number. Clustered page tables are similar
to hashed page tables, with the difference that each entry in the hash table
refers to several pages.
need 8000 KB for 40 users. If the code is reentrant it can be shared. Consider
the following figure.
If the code is reentrant then it never changes during execution. Thus two or
more processes can execute the same code at the same time. Each process
has its own copy of registers, and the data of the two processes will differ.
Only one copy of the editor is kept in physical memory. Each user's page
table maps to the same physical copy of the editor, but data pages are
mapped to different frames. So to support 40 users we need only one copy of
the editor (150 KB) plus 40 copies of the 50 KB data space, i.e., only 2150 KB
instead of 8000 KB.
Virtual memory is the separation of the user's logical memory from physical
memory. This separation allows an extremely large virtual memory to be
provided even when there is less physical memory. Separating logical memory
from physical memory also allows files and memory to be shared by several
different processes through page sharing. Virtual memory is implemented
using demand paging.
Demand Paging
A demand-paging system is similar to a paging system with swapping: when
we want to execute a process we swap it into memory; otherwise it is not
loaded into memory. A swapper manipulates entire processes, whereas a
pager manipulates the individual pages of a process.
Basic concept: Instead of swapping in a whole process, the pager brings
only the necessary pages into memory. Thus it avoids reading unused
pages and decreases the swap time and the amount of physical memory
needed.
Fig. 12 Page table when some pages are not in main memory
The valid-invalid bit scheme can be used to distinguish between the pages
that are on the disk and those that are in memory.
If the bit is valid, the page is both legal and in memory.
If the bit is invalid, the page either is not valid or is valid but currently on
the disk.
Marking a page invalid has no effect if the process never accesses that
page. But if the process accesses a page that is marked invalid, a page-fault
trap occurs; this trap is the result of the OS's failure to bring the desired page
into memory.
The steps for handling a page fault are straightforward:
1. We check an internal table for the process to determine whether the
reference made was valid or invalid.
2. If invalid, we terminate the process. If valid but the page is not yet loaded,
we now page it in.
3. We find a free frame.
4. We schedule a disk operation to read the desired page into the newly
allocated frame.
5. When the disk read is complete, we modify the internal table kept with the
process to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal-address trap.
The process can now access the page.
In the extreme case, we start a process with no pages in memory. When the
OS points to the first instruction of the process, it generates a page fault. After
this page is brought into memory the process continues to execute, faulting as
necessary. This is pure demand paging: never bring a page into memory until
it is required.
Page table: has the ability to mark an entry invalid through the valid-invalid bit.
Secondary memory: holds the pages that are not present in main memory.
Performance of demand paging: demand paging can have a significant
effect on the performance of a computer system.
Let p be the probability of a page fault (0 <= p <= 1). Then
Effective access time = (1 - p) * ma + p * page-fault time
where ma is the memory-access time.
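As a numeric illustration of the effective-access-time formula (the timing values here are assumed for the example, not taken from the text):

```python
def effective_access_time(p, ma_ns, fault_service_ns):
    """Effective access time = (1 - p) * ma + p * page-fault service time."""
    return (1 - p) * ma_ns + p * fault_service_ns

ma = 200                # memory access: 200 ns (assumed)
fault = 8_000_000       # page-fault service: 8 ms (assumed)
print(effective_access_time(0.001, ma, fault))   # 8199.8 ns: one fault per 1000
print(effective_access_time(0.0, ma, fault))     # 200.0 ns: no faults, just ma
```

Even a fault rate of one in a thousand slows the average access by a factor of about 40 here, which is why the fault rate must be kept very low.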
The effective access time is directly proportional to the page-fault rate, so it is
important to keep the page-fault rate low in demand paging. A page fault
causes the following sequence to occur: trap to the OS; save the user registers
and process state; determine that the interrupt was a page fault; check that
the page reference was legal and determine the location of the page on disk;
issue a read from the disk to a free frame; while waiting, allocate the CPU to
some other user; receive the interrupt from the disk; save the registers and
process state of the other user; determine that the interrupt was from the disk;
correct the page table and other tables to show that the desired page is now
in memory.
Memory Mapping
Standard system calls such as open(), read(), and write() are used for
sequential access to a file; virtual memory can be used for this as well.
Memory-mapping a file allows a part of the virtual address space to be
logically associated with the file. Memory-mapping a file is accomplished by
mapping disk blocks to pages in memory.
Page Replacement
Demand paging saves I/O by not loading pages that are never used.
Demand paging also improves the degree of multiprogramming by allowing
more processes to run at the same time. Page-replacement policy deals with
the selection of a page in memory to be replaced by a new page that must
be brought in. When a user process is executing, a page fault occurs. The
hardware traps to the operating system, which checks its internal table to see
that this is a page fault and not an illegal memory access. The operating
system determines where the desired page is residing on disk, but then finds
that there are no free frames on the free-frame list.
When all the frames are in use, it is necessary to replace a page to bring in a
new one and satisfy the page fault. The replacement policy is concerned with
selecting the page currently in memory to be replaced; the page to be
removed should be the page least likely to be referenced in the future.
Victim Page
The page that is swapped out of physical memory is called the victim page.
If no frames are free, two page transfers (one out and one in) are required,
which increases the effective access time.
Each page or frame may have a dirty (modify) bit associated with it in the
hardware. The modify bit for a page is set by the hardware whenever any word
or byte in the page is written into, indicating that the page has been modified.
When we select a page for replacement, we examine its modify bit. If the
bit is set, the page has been modified since it was read in from the disk.
If the bit is not set, the page has not been modified since it was read
into memory.
Therefore, if the copy of the page on disk has not been overwritten we can
avoid writing the memory page back to the disk: it is already there. Some
pages cannot be modified at all. We must solve two major problems to
implement demand paging: we must develop a frame-allocation algorithm
and a page-replacement algorithm. If we have multiple processes in memory,
we must decide how many frames to allocate to each, and when page
replacement is needed we must select the frames to be replaced.
The first three references (7, 0, 1) cause page faults and the pages are
brought into the empty frames. The next reference (2) replaces page 7,
because page 7 was brought in first. Since 0 is the next reference and 0 is
already in memory, there is no page fault. The next reference (3) results in
page 0 being replaced, so the next reference to 0 causes a page fault. This
continues until the end of the string. There are 15 faults altogether.
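The FIFO trace above can be reproduced with a short simulation. The reference string is the classic one this example uses (7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1) with three frames.

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames = deque()            # oldest page at the left end
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()    # evict the page loaded earliest
            frames.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))   # 15 faults, as in the text
```

The same function also demonstrates Belady's Anomaly: for the string 1,2,3,4,1,2,5,1,2,3,4,5, FIFO gives 9 faults with 3 frames but 10 faults with 4 frames.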
Belady's Anomaly
For some page-replacement algorithms, the page-fault rate may increase as
the number of allocated frames increases. The FIFO replacement algorithm
can exhibit this problem.
Optimal Algorithm
The optimal page-replacement algorithm avoids the problem of Belady's
Anomaly and has the lowest page-fault rate of all algorithms. An optimal
page-replacement algorithm exists and has been called OPT. Its working is
simple: "Replace the page that will not be used for the longest period of
time." Example: consider the following reference string.
The first three references cause faults that fill the three empty frames. The
reference to page 2 replaces page 7, because 7 will not be used until
reference 18, whereas page 0 will be used at reference 5 and page 1 at
reference 14. With only 9 page faults, optimal replacement is much better
than FIFO, which had 15 faults. This algorithm is difficult to implement because
it requires future knowledge of the reference string.
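A sketch of OPT for the same reference string; the "future knowledge" the text mentions is simulated simply by looking ahead in the string, which is only possible in a simulation.

```python
def opt_faults(refs, nframes):
    """Count page faults for optimal (OPT) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            # Evict the resident page whose next use is farthest in the
            # future (or that is never used again).
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(future)
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))   # 9 faults, versus 15 for FIFO
```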
Two implementations are possible:
Counters: We associate with each page-table entry a time-of-use field and
add to the CPU a logical clock or counter. The clock is incremented on every
memory reference. When a reference to a page is made, the contents of the
clock register are copied to the time-of-use field in the page-table entry for
that page. In this way we always have the time of last reference to each
page, and we replace the page with the smallest time value. The times must
also be maintained when page tables are changed.
Stack: Another approach to implementing LRU replacement is to keep a stack
of page numbers. Whenever a page is referenced, it is removed from the stack
and put on top. In this way the top of the stack is always the most recently
used page and the bottom is the least recently used page. Because entries
are removed from the middle of the stack, it is best implemented by a doubly
linked list with head and tail pointers. Neither optimal replacement nor LRU
replacement suffers from Belady's Anomaly; these are called stack algorithms.
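The stack approach can be sketched with Python's OrderedDict, which plays the role of the doubly linked list (least recently used at one end, most recently used at the other). On the reference string used earlier, LRU with three frames gives 12 faults, between OPT's 9 and FIFO's 15.

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults for LRU replacement using the stack technique."""
    stack = OrderedDict()       # order: least recently used -> most recently used
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)          # referenced: move to the top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)    # evict the least recently used
            stack[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12 faults
```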
LRU Approximation
An exact LRU page-replacement algorithm must update the page-usage
information after every page reference; if this updating is done in software the
cost is high, while hardware LRU mechanisms tend to degrade execution
performance and substantially increase cost. For this reason, simple and
efficient algorithms that approximate LRU have been developed. With
hardware support, a reference bit is used: a reference bit is associated with
each memory block, and this bit is automatically set to 1 by the hardware
whenever the page is referenced. A single reference bit per page can be
used to approximate LRU removal. The page-removal software periodically
resets the reference bits to 0, while the execution of the user's job causes
some reference bits to be set to 1. If a reference bit is 0, the page has not
been referenced since the last time the bits were reset.
b) Most Frequently Used (MFU)
This replaces the most frequently used page, based on the argument that the
page with the smallest count was probably just brought in and has yet to be
used.
Allocation of Frames
The allocation policy in a virtual-memory system controls the operating
system's decision regarding the amount of real memory to allocate to each
active process. In a paging system, if more real pages are allocated to a
process, its page-fault frequency is reduced and turnaround time and
throughput improve. If too few pages are allocated to a process, its page-fault
frequency and turnaround time may deteriorate to unacceptable levels.
Thrashing
If the number of frames allocated to a low-priority process falls below
the minimum number required by the computer architecture, we must
suspend that process's execution. A process is thrashing if it is spending
more time paging than executing. If a process does not have enough
frames, it will quickly page-fault. To handle the fault it must replace some
page; but since all its pages are in active use, the replaced page is needed
again right away. Consequently it quickly faults again and again, replacing
pages that it must immediately bring back. This high paging activity is called
thrashing: the phenomenon of excessively moving pages back and forth
between memory and secondary storage.
Fig.18 Thrashing
Working set model
The working-set model algorithm uses a process's current memory
requirements to determine the number of page frames to allocate to it. An
informal definition is "the collection of pages that a process is actively working
with, and which must be resident if the process is to avoid thrashing". The idea
is to use the recent needs of a process to predict its future needs. The
working set is an approximation of the program's locality. Example: given a
sequence of memory references, if the working-set window is 10 memory
references, then the working set at time t1 is {1,2,5,6,7} and at t2 it has
changed to {3,4}. At any given time, all pages referenced by a process in
its last 4 seconds of execution are considered to comprise its working
set.
A process will never execute until its working set is resident in main
memory. Pages outside the working set can be discarded at any
moment. Working sets alone are not enough, and we must also introduce
the balance set. If the sum of the working sets of all runnable processes is
greater than the size of memory, then refuse some processes for a while:
divide the runnable processes into two groups, active and inactive. The
collection of working sets of the active processes is called the balance set.
When a process is made active, its working set is loaded.
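The working set at a given time can be computed directly from the definition: the set of distinct pages referenced in the last Δ references. A small sketch, using a reference string invented to reproduce the two sets mentioned in the example above (window Δ = 10):

```python
def working_set(refs, t, delta):
    """Distinct pages referenced in the window of delta references ending at t."""
    window = refs[max(0, t - delta):t]
    return set(window)

# Illustrative reference string: locality shifts from {1,2,5,6,7} to {3,4}.
refs = [1,2,5,6,7,7,7,7,5,1,6,2,3,4,3,4,4,4,3,4,3,4,3,4]
print(working_set(refs, 10, 10))   # {1, 2, 5, 6, 7}  (working set at t1)
print(working_set(refs, 24, 10))   # {3, 4}           (working set at t2)
```

An OS using this model would allocate each process at least as many frames as its working-set size, suspending processes when the sum of the working sets exceeds memory.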
Deadlocks
When a process requests a resource and the resource is not available at
that time, the process enters a waiting state. A waiting process may never
again change state, because the resources it has requested are held by
other waiting processes. This situation, in which processes wait for resources
that are not available and never will be released, is called deadlock.
System Model
A system consists of a finite number of resources distributed among a
number of competing processes. The resources are partitioned into several
types, each consisting of some number of identical instances. A process
must request a resource before using it and must release the resource after
using it. It can request any number of resources to carry out its designated
task, but the amount requested may not exceed the total number of
resources available in the system.
drive and process Pj requests a printer, then a deadlock can occur.
Multithreaded
programs are good candidates for deadlock because they compete for
shared resources.
Deadlock Characterization:
Necessary Conditions: A deadlock situation can arise if the following four
conditions hold simultaneously in a system:
1. Mutual Exclusion: Only one process may hold a resource at a time. If
any other process requests the resource, the requesting process must be
delayed until the resource has been released.
2. Hold and Wait: A process must be holding at least one resource and
waiting to acquire additional resources that are currently held by other
processes.
3. No Preemption: Resources cannot be preempted, i.e., a resource can be
released only voluntarily by the process holding it, after that process has
completed its task.
4. Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such
that P0 is waiting for a resource held by P1, P1 is waiting for a resource held
by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a
resource held by P0. All four conditions must hold for a deadlock to occur.
occurred. If each resource type has several instances, then a cycle does not
necessarily imply that a deadlock has occurred.
To ensure that deadlock never occurs, the system can use either deadlock
avoidance or deadlock prevention. Deadlock prevention is a set of methods
for ensuring that at least one of the necessary conditions cannot hold.
Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. If
at least one of these conditions does not hold, we can prevent the
occurrence of deadlock.
Mutual Exclusion: This condition holds for non-sharable resources. E.g., a
printer can be used by only one process at a time. Mutual exclusion is not
required for sharable resources, and thus they cannot be involved in a
deadlock; read-only files are a good example of a sharable resource. A
process never needs to wait to access a sharable resource. However, we
cannot prevent deadlock by denying the mutual-exclusion condition, because
some resources are intrinsically non-sharable.
Hold and Wait: This condition can be eliminated by forcing a process
to release all the resources it holds whenever it requests a resource that is
not available. One usable protocol is that each process be allocated all of its
resources before it starts execution.
Example
Consider a process that copies data from a tape drive to a disk file, sorts
the file, and then prints the results on a printer. If all resources must be
allocated at the beginning, then the tape drive, disk file, and printer are
assigned to the process for its entire run. The main problem with this is low
resource utilization: the printer is needed only at the end, yet it is allocated
from the beginning, so no other process can use
it.
Another protocol that can be used is to allow a process to request resources
only when it has none. For example, the process is first allocated the tape
drive and disk file; it performs the required operation and releases both. It
then requests the disk file and the printer again. The problem with this
protocol is that starvation is possible.
Circular Wait: The fourth and final condition for deadlock is the circular-wait
condition. One way to ensure that this condition never holds is to impose a
total ordering on all resource types and require that each process request
resources in increasing order of enumeration.
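The resource-ordering idea can be sketched with ordinary locks: every process acquires locks in one global order, so a circular wait can never form. The resource names and rank numbers here are an assumption for illustration, not from the text.

```python
import threading

# Impose a total order on resource types by giving each lock a rank.
locks = {"tape":    (1, threading.Lock()),
         "disk":    (2, threading.Lock()),
         "printer": (3, threading.Lock())}

def acquire_in_order(*names):
    """Acquire the named locks in increasing rank, preventing circular wait."""
    for rank, lock in sorted(locks[n] for n in names):
        lock.acquire()

def release_all(*names):
    """Release in decreasing rank (reverse of acquisition order)."""
    for rank, lock in sorted((locks[n] for n in names), reverse=True):
        lock.release()

# Both calls below lock "disk" before "printer", whatever order is written,
# so two processes using these resources can never wait on each other in a cycle.
acquire_in_order("printer", "disk")
release_all("printer", "disk")
```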
For each request, the system must consider the resources currently available,
the resources currently allocated to each process, and the future requests
and releases of each process, to decide whether the current request can be
satisfied or must wait in order to avoid a possible future deadlock.
Safe State
A state is safe if there exists at least one order in which all the processes
can run to completion without resulting in a deadlock. A system is in a safe
state only if there exists a safe sequence. A sequence of processes
<P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for
each Pi, the resources that Pi may still request can be satisfied by the
currently available resources plus the resources held by all Pj with j < i. If the
resources Pi needs are not immediately available, Pi can wait until they are
released, then obtain all of its needed resources, complete its designated
task, and release them in turn. A safe state is not a deadlocked state.
Whenever a process requests a resource that is currently available, the
system must decide whether the resource can be allocated immediately or
whether the process must wait.
The request is granted only if the allocation leaves the system in a safe
state. With this approach, a process may have to wait even when the
resource it requests is currently available. Thus resource utilization may be
lower than it would be without a deadlock-avoidance algorithm.
Banker’s Algorithm
This algorithm is applicable to systems with multiple instances of each
resource type, but it is less efficient than the resource-allocation-graph
algorithm.
When a new process enters the system, it must declare the maximum
number of instances of each resource type that it may need. This number
may not exceed the total number of resources in the system. When a
request is made, the system must determine whether allocating the
resources will leave the system in a safe state. If so, the resources
are allocated; otherwise the process must wait until other processes
release enough resources. Several data structures are used to
implement the banker’s algorithm.
Let ‘n’ be the number of processes in the system and ‘m’ be the number
of resource types. We need the following data structures:
Available:- A vector of length m indicating the number of available
resources of each type. If Available[j]=k, then k instances of resource
type Rj are available.
Max:- An n*m matrix defining the maximum demand of each process. If
Max[i,j]=k, then Pi may request at most k instances of resource type Rj.
Allocation:- An n*m matrix defining the number of resources of each
type currently allocated to each process. If Allocation[i,j]=k, then Pi
is currently allocated k instances of resource type Rj.
Need:- An n*m matrix indicating the remaining resource need of each
process, where Need[i,j] = Max[i,j] - Allocation[i,j].
Safety Algorithm
This algorithm is used to find out whether or not the system is in a
safe state.
Step 1. Let Work and Finish be vectors of length m and n respectively.
Initialize Work = Available and Finish[i] = false for i = 1, 2, ..., n.
Step 2. Find an i such that both Finish[i] = false and Need(i) <= Work.
If no such i exists, go to Step 4.
Step 3. Work = Work + Allocation(i); Finish[i] = true; go to Step 2.
Step 4. If Finish[i] = true for all i, then the system is in a safe
state.
This algorithm may require on the order of m*n*n operations to decide
whether a state is safe.
Resource-Request Algorithm
When process Pi makes a request, if Request(i) > Available, Pi must
wait, since the resources are not yet available. Otherwise, if the
system wants to allocate the requested resources to Pi, it pretends to
do so by modifying the state as follows:
Available = Available – Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) – Request(i)
The safety algorithm is then run on this new state; the allocation is
made permanent only if the resulting state is safe.
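The safety algorithm above can be sketched in Python. The function name is an assumption, and the 5-process, 3-resource state used below is the classic textbook example, not data from this unit:

```python
def is_safe(available, max_demand, allocation):
    """Safety algorithm: return (True, safe_sequence) if a safe
    sequence exists for the current state, else (False, partial)."""
    n, m = len(allocation), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)          # Step 1: Work = Available
    finish = [False] * n            #         Finish[i] = false
    sequence = []
    found = True
    while found:
        found = False
        # Step 2: find i with Finish[i] = false and Need(i) <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: Work = Work + Allocation(i); Finish[i] = true
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                found = True
    # Step 4: safe iff Finish[i] = true for all i
    return all(finish), sequence

# Classic example: 5 processes P0..P4, 3 resource types A, B, C
available = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe, seq = is_safe(available, max_demand, allocation)
# The state is safe; one safe sequence is <P1, P3, P4, P0, P2>
```

Scanning processes in index order each pass gives the m*n*n worst-case cost mentioned above; any order that finds a qualifying Pi yields an equally valid safe sequence.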
Deadlock Detection
If a system employs neither a deadlock-prevention nor a deadlock-
avoidance algorithm, then a deadlock situation may occur. In this
environment the system must provide: an algorithm that examines the
state of the system to determine whether a deadlock has occurred, and
an algorithm to recover from the deadlock.
Fig 20a. Resource Allocation Graph Fig 20b. Corresponding Wait-for Graph
The detection algorithm for multiple instances of each resource type
uses the following data structures:
1. Available:- A vector of length m indicating the number of available
resources of each type.
2. Allocation:- An n*m matrix defining the number of resources of
each type currently allocated to each process.
3. Request:- An n*m matrix indicating the current request of each
process. If Request[i,j]=k, then Pi is requesting k more instances of
resource type Rj.
Step 1. Let Work and Finish be vectors of length m and n respectively.
Initialize Work = Available. For i = 1, 2, ..., n, if Allocation(i) != 0
then Finish[i] = false; else Finish[i] = true.
Step 2. Find an index i such that both
Finish[i] = false and Request(i) <= Work.
If no such i exists, go to Step 4.
Step 3. Work = Work + Allocation(i); Finish[i] = true; go to Step 2.
Step 4. If Finish[i] = false for some i, 1 <= i <= n, then the system
is in a deadlocked state; moreover, each process Pi with
Finish[i] = false is deadlocked. This algorithm requires on the order
of m*n*n operations to detect whether the system is in a deadlocked
state.
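The detection steps above can be sketched in Python. The function name is an assumption, and the example state below is the classic textbook one, not data from this unit:

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm: return the list of deadlocked
    process indices (empty if there is no deadlock)."""
    n, m = len(allocation), len(available)
    work = list(available)                      # Step 1: Work = Available
    # Finish[i] = true only if Pi holds no resources at all
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        # Step 2: find i with Finish[i] = false and Request(i) <= Work
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Step 3: optimistically assume Pi finishes and
                # returns its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                changed = True
    # Step 4: every Pi still with Finish[i] = false is deadlocked
    return [i for i in range(n) if not finish[i]]

# Classic example: 5 processes, 3 resource types, nothing available
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
deadlocked = detect_deadlock([0, 0, 0], allocation, request)
# deadlocked == [] : P0 and P2 can finish, releasing enough for the rest
```

If P2 then requests one more instance of the third resource type (Request(2) = [0, 0, 1]), only P0 can finish and processes P1 through P4 are reported as deadlocked.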
9. Practice Quiz
1. Which of the following page replacement algorithms suffers from Belady’s anomaly?
a) FIFO
b) LRU
c) Optimal page replacement
d) Both LRU and FIFO
2. What is the swap space on the disk used for?
a) Saving temporary html pages
b) Saving process data
c) Storing the super-block
d) Storing device drivers
3.Increasing the RAM of a computer typically improves performance because
a) Virtual memory increases
b) Larger RAMs are faster
c) Fewer page faults occur
d) Fewer segmentation faults occur
4. Virtual memory is
a) Large secondary memory
b) Large main memory
c) Illusion of large main memory
d) None of the above
5. A page fault occurs
a) When a requested page is in memory
b) When a requested page is not in memory
c) When a page is corrupted
d) When an exception is thrown
6. Thrashing occurs when
a) When a page fault occurs
b) Processes on the system frequently access pages that are not in memory
c) Processes on system are in running state
d) Processes on system are in waiting state
7.The essential content(s) in each entry of a page table is / are
a) Virtual page number
b) Page frame number
c) Both virtual page number and page frame number
d) Access right information
8. Assume that there are 3 page frames which are initially empty. If
the page reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6, the
number of page faults using the optimal replacement policy is:
a) 5
b) 6
c) 7
d) 8
9. Which of the following is not a form of memory?
a) instruction cache
b) instruction register
c) instruction opcode
d) translation lookaside buffer
10. The optimal page replacement algorithm will select the page that
a) Has not been used for the longest time in the past.
b) Will not be used for the longest time in the future.
c) Has been used least number of times.
d) Has been used most number of times
10. Assignments
S.No Question BL CO
1 Give short notes on any two page replacement algorithms. 2 4
2 Define paging. Explain the paging concept in detail. 2 4
3 Given memory partitions of 100 KB, 500 KB, 200 KB and 600 KB (in
order), show with a neat sketch how each of the first-fit, best-fit
and worst-fit algorithms would place processes of 412 KB, 317 KB,
112 KB and 326 KB (in order). Which algorithm is most efficient in
memory allocation? 3 1
4 If the contents of the reference string are 0, 2, 1, 6, 4, 0, 1, 0,
3, 1, 2, 1 and there are four frames available in memory, find the
number of page faults and the page-fault rate using the optimal page
replacement algorithm. 3 4
5 Explain about deadlocks in detail. 2 2
11. Part A Questions & Answers (2 Marks Questions)
6 What is meant by paging? Give its advantages. 1 4
Ans. Paging is a memory-management scheme that permits the physical
address space of a process to be non-contiguous.
Advantages
Paging avoids the considerable problem of fitting varying-sized
memory chunks onto the backing store. Fragmentation problems are also
prevalent on the backing store, except that access there is much
slower, so compaction is impossible.
7 Write about internal and external fragmentation. 2 4
Ans. Memory fragmentation can be of two types: internal fragmentation
and external fragmentation.
Internal fragmentation: space is wasted internal to a partition
because the block of data loaded is smaller than the partition.
Example: if there is a block of 50 KB and the process requests 40 KB,
then after the block is allocated to the process, 10 KB of memory is
left unused.
External fragmentation exists when enough total memory space exists
to satisfy a request, but it is not contiguous, i.e., storage is
fragmented into a large number of small holes. External fragmentation
may be either a minor or a major problem. One solution for overcoming
external fragmentation is compaction.
8 Write short notes on swapping. 2 2
Ans. Swapping is a technique of temporarily removing inactive
programs from the memory of the system. A process can be swapped
temporarily out of memory to a backing store and then brought back
into memory to continue execution.
For example, in a multiprogramming environment with round-robin CPU
scheduling, whenever the time quantum expires the process that has
just finished its quantum is swapped out and a new process is swapped
into memory for execution.
9 What is the Translation Lookaside Buffer (TLB)? 2 4
Ans. To speed up address translation, a high-speed cache of page-table
entries, called the Translation Lookaside Buffer (TLB), is used. The
TLB is a special cache that keeps track of recently used translations:
it contains the page-table entries that have been most recently used.
Given a virtual address, the processor examines the TLB; if the
page-table entry is present (a TLB hit), the frame number is retrieved
and the real address is formed.
12. Part B Questions
S.No Question BL CO
1 Explain the Banker’s algorithm for deadlock avoidance with an
example. 1 4
2 Briefly explain demand paging. List out advantages and
disadvantages of demand paging. 2 4
3 Discuss about the optimal page replacement algorithm. 2 4
4 If the contents of the reference string are 0, 2, 1, 6, 4, 0, 1, 0,
3, 1, 2, 1 and there are four frames available in memory, find the
number of page faults and the page-fault rate using the optimal page
replacement algorithm. 3 4
5 Explain about deadlocks in detail. 2 2