OS-Course Material-UNIT-3


SVCE TIRUPATI

COURSE
MATERIAL

SUBJECT: OPERATING SYSTEMS (15A05501)
UNIT: 3
COURSE: B.TECH
DEPARTMENT: COMPUTER SCIENCE AND ENGINEERING
SEMESTER: 31
PREPARED BY: B BALAKONDA REDDY, Assistant Professor
VERSION: V-5
PREPARED / REVISED DATE: 31-12-2020

TABLE OF CONTENTS – UNIT 3

1. COURSE OBJECTIVES
2. PREREQUISITES
3. SYLLABUS
4. COURSE OUTCOMES
5. CO-PO/PSO MAPPING
6. LESSON PLAN
7. ACTIVITY BASED LEARNING
8. LECTURE NOTES
   3.1 MEMORY MANAGEMENT INTRODUCTION
   3.2 SWAPPING
   3.3 CONTIGUOUS MEMORY ALLOCATION
   3.4 SEGMENTATION
   3.5 PAGING
   3.6 VIRTUAL MEMORY
   3.7 THRASHING
   3.8 DEADLOCKS
   3.9 DEADLOCK PREVENTION
   3.10 DEADLOCK DETECTION AND AVOIDANCE
   3.11 RECOVERY FROM DEADLOCK
9. PRACTICE QUIZ
10. ASSIGNMENTS
11. PART A QUESTIONS & ANSWERS (2 MARKS QUESTIONS)
12. PART B QUESTIONS
13. SUPPORTIVE ONLINE CERTIFICATION COURSES
14. REAL TIME APPLICATIONS
15. CONTENTS BEYOND THE SYLLABUS
16. PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
17. MINI PROJECT SUGGESTION

1. Course Objectives
The objectives of this course are to:
1. Make students understand the basic operating system concepts such as processes, threads, scheduling, synchronization, deadlocks, memory management, file and I/O subsystems, and protection.
2. Acquaint students with the class of abstractions afforded by general-purpose operating systems that aid the development of user applications.

2. Prerequisites
Students should have knowledge of:
1. System Software
2. Computer Organization and Architecture
3. Basics of Computing

3. Syllabus
UNIT III
Memory Management: Swapping, contiguous memory allocation, segmentation, paging, structure of the page table. Virtual memory: demand paging, page replacement, allocation of frames, thrashing, memory-mapped files, allocating kernel memory. Deadlocks: system model, deadlock characterization, methods of handling deadlocks, deadlock prevention, detection and avoidance, recovery from deadlock.
4. Course outcomes
4. Course outcomes
1. Student must be able to understand the significance of the operating system in computing devices.
2. Student must be able to analyze the connection between application programs and hardware devices through system calls.
3. Student should be able to investigate and illustrate various process scheduling algorithms.
4. Student must be able to apply appropriate memory and file management schemes.
5. Student must be able to design solutions for various disk scheduling problems.
5. Co-PO / PSO Mapping
      PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1 3 2 2

CO2 3 2 2 2

CO3 3 2 2 3 2

CO4 2 3 3 2 2 2

CO5 2 3 3 2 2 2

6. Lesson Plan

Lecture No. | Week | Topics to be covered | References
1  | 1 | Memory Management: Swapping, contiguous memory allocation | T1
2  | 1 | Segmentation, paging, structure of the page table | T1, R1
3  | 1 | Virtual memory: demand paging | T1, R1
4  | 1 | Page replacement, allocation of frames | T1, R1
5  | 1 | Thrashing, memory-mapped files, allocating kernel memory | T1, R2
6  | 2 | Deadlocks: system model, deadlock characterization | T1, R1
7  | 2 | Methods of handling deadlocks, deadlock prevention | T1, R1
8  | 2 | Detection and avoidance, recovery from deadlock | T1, R1
9  | 3 | Discussion of objective type questions & short answer questions | T1, R1
10 | 3 | Discussion of previous year university questions in question papers | T1, R1

7. Activity Based Learning


1. Analyze space utilization under different memory management approaches.
2. Apply deadlock prevention, detection, and avoidance to grant resources efficiently without deadlocks.
8. Lecture Notes
MEMORY MANAGEMENT INTRODUCTION:
Memory management is concerned with managing primary memory. Memory consists of a large array of bytes or words, each with its own address. The CPU fetches instructions from memory based on the value of the program counter.
Functions of memory management:
 Keeping track of status of each memory location.
 Determining the allocation policy.
 Memory allocation technique.
 De-allocation technique.
Address Binding:
Programs are stored on secondary storage disks as binary executable files. When a program is to be executed it is brought into main memory and placed within a process. The collection of processes on the disk waiting to enter main memory forms the input queue. One of the processes to be executed is fetched from the queue and placed in main memory. During execution the process fetches instructions and data from main memory, and after the process terminates it releases its memory space. During execution the process goes through different steps, and in each step the address is represented in a different way: in the source program the address is symbolic; the compiler converts the symbolic address to a relocatable address; the loader converts this relocatable address to an absolute address.


Fig. 1 Multi step processing of a process


Binding of instructions and data can be done at any step along the way:
Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be generated. If the starting address changes later, it is necessary to recompile the code.
Load time: If it is not known at compile time where the process will reside in memory, then the compiler generates relocatable code. In this case binding is delayed until load time.
Execution time: If the process can be moved during its execution from one memory segment to another, then binding is delayed until run time. Special hardware is needed for this. Most general-purpose operating systems use this method.

Logical versus physical address:


The address generated by the CPU is called the logical address or virtual address. The address seen by the memory unit, i.e., the one loaded into the memory-address register, is called the physical address. Compile-time and load-time address-binding methods generate identical logical and physical addresses; execution-time address binding generates differing logical and physical addresses. The set of all logical addresses generated by a program is its logical address space, and the set of physical addresses corresponding to these logical addresses is its physical address space. The mapping of virtual addresses to physical addresses at run time is done by a hardware device called the memory management unit (MMU). The base register used here is also called the relocation register: its value is added to every address generated by a user process at the time the address is sent to memory.


Fig. 2 Dynamic allocation using relocation register


Dynamic relocation using a relocation register
The figure above shows dynamic relocation, which maps the virtual address space to the physical address space; the mapping is performed by the hardware at run time. Relocation is performed by the hardware and is invisible to the user. Dynamic relocation makes it possible to move a partially executed process from one area of memory to another without affecting the process.

Dynamic Loading:
For a process to be executed it must be loaded into physical memory, so the size of a process is limited to the size of physical memory. Dynamic loading is used to obtain better memory utilization: a routine is not loaded until it is called. Whenever a routine is called, the calling routine first checks whether the called routine is already loaded. If not, the loader loads the desired routine into memory, updates the program's address table to reflect the change, and passes control to the newly loaded routine.
Advantages: Gives better memory utilization; an unused routine is never loaded. No special operating-system support is needed. This method is useful when large amounts of code are needed to handle infrequently occurring cases.

Dynamic linking and Shared libraries:


Some operating systems support only static linking. With dynamic linking, only the main program is loaded into memory; if the main program requests a procedure, the procedure is loaded and the link is established at the time of reference. Linking is thus postponed until execution time. With dynamic linking a "stub" is included in the image for each library-routine reference. A stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present. When the stub is executed it checks whether the routine is present in memory; if not, it loads the routine into memory. This feature can be used to update libraries: a library can be replaced by a new version, and all programs that reference it make use of the new library.
More than one version of a library can be loaded in memory at a time, and each program uses its own version. Only programs compiled with the new version are affected by the changes incorporated in it; programs linked before the new version was installed continue to use the older library. Such a system is called a "shared library" system.
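As a concrete illustration of run-time linking, the POSIX dlopen/dlsym interface lets a program load a shared library and bind a routine only when it is needed, much like the stub mechanism described above. A minimal sketch follows; the library name libm.so.6 and the routine cos are just familiar examples chosen for illustration, not part of the course text.

#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>   /* POSIX dynamic-linking API; link with -ldl */

int main(void) {
    /* Load the math library at run time instead of at link time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return EXIT_FAILURE; }

    /* Locate the routine only when it is needed, as a stub would. */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); return EXIT_FAILURE; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);   /* Release the library when done. */
    return EXIT_SUCCESS;
}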

Swapping
Swapping is a technique of temporarily removing inactive programs from the memory of the system. A process can be swapped temporarily out of memory to a backing store and then brought back into memory to continue execution.
Example
In a multiprogramming environment with round-robin CPU scheduling, whenever the time quantum expires the process that has just finished its quantum is swapped out and a new process is swapped into memory for execution.

A variation of swapping is used with priority-based scheduling. If a higher-priority process arrives while a lower-priority process is executing, the lower-priority process is swapped out and the higher-priority process is allowed to execute. This is also called roll out, roll in.

Normally a process that is swapped out will be swapped back into the same memory space that it occupied previously. This depends on the address-binding method: if binding is done at load time, the process must be moved back to the same memory location, whereas if binding is done at execution time it can be moved to a different memory location, because the physical addresses are computed at run time.

Swapping requires a backing store, which should be large enough to accommodate copies of all memory images. The system maintains a ready queue consisting of all processes whose memory images are on the backing store or in memory and are ready to run. Swapping is constrained by other factors: to swap a process, it should be completely idle. A process may be waiting for an I/O operation; if the I/O is asynchronously accessing the user memory for I/O buffers, the process cannot be swapped.

Contiguous memory allocation


One of the simplest methods of memory allocation is to divide memory into several fixed partitions, where each partition contains exactly one process. The degree of multiprogramming depends on the number of partitions. In this multiple-partition method, when a partition is free a process is selected from the input queue and loaded into the free partition.

When a process terminates, its memory partition becomes available for another process. Batch operating systems use this fixed-size partition scheme.

The OS keeps a table indicating which parts of memory are free and which are occupied. When a process enters the system it is placed in the input queue. The OS keeps track of the memory requirements of each process and the amount of memory available, and determines to which process to allocate memory. When a process requests memory, the OS searches for a hole large enough for it; a hole is a contiguous block of free memory. If the hole is too large it is split in two: one part is allocated to the requesting process and the other is returned to the set of holes. The set of holes is searched to determine which

hole is best to allocate. There are three strategies to select a free hole:

First fit: Allocates the first hole that is big enough. This algorithm scans memory from the beginning and selects the first available block that is large enough to hold the process.

Best fit: Chooses the hole closest in size to the request, i.e., allocates the smallest hole that is big enough to hold the process.

Worst fit: Allocates the largest hole to the process request; it searches the entire list for the largest hole.

First fit and best fit are the most popular algorithms for dynamic memory allocation. First fit is generally faster; best fit must search the entire list to find the smallest hole that is large enough. Worst fit reduces the rate of production of very small holes. All these algorithms suffer from external fragmentation.
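The three strategies can be sketched over a simple free-hole list. The sketch below is illustrative only; the hole structure and its fields are assumptions made for the example, not part of any particular OS. Each function returns the index of the chosen hole, or -1 if no hole is large enough.

#include <stddef.h>

struct hole { size_t start, size; };   /* one free block of memory (hypothetical) */

/* First fit: scan from the beginning, take the first hole that is big enough. */
int first_fit(struct hole *h, int n, size_t req) {
    for (int i = 0; i < n; i++)
        if (h[i].size >= req) return i;
    return -1;
}

/* Best fit: search the entire list for the smallest hole that is big enough. */
int best_fit(struct hole *h, int n, size_t req) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= req && (best < 0 || h[i].size < h[best].size))
            best = i;
    return best;
}

/* Worst fit: search the entire list for the largest hole that is big enough. */
int worst_fit(struct hole *h, int n, size_t req) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= req && (worst < 0 || h[i].size > h[worst].size))
            worst = i;
    return worst;
}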

Memory Protection
Memory protection means protecting the OS from user processes and protecting user processes from one another. Memory protection is provided by using a relocation register together with a limit register. The relocation register contains the value of the smallest physical address and the limit register contains the range of logical addresses (for example, relocation = 100040 and limit = 74600). Each logical address must be less than the value in the limit register; the MMU then maps the logical address dynamically by adding the value in the relocation register. When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Since every address generated by the CPU is checked against these registers, we can protect the OS and other users' programs and data from being modified.
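The check can be modeled in a few lines of C. This is a schematic model of the MMU comparison, not real hardware code; trap() is a stand-in for the addressing-error trap raised to the OS, and the register values are the ones quoted above.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t relocation = 100040;  /* smallest physical address of the process */
static uint32_t limit      = 74600;   /* range of legal logical addresses         */

static void trap(const char *msg) {   /* stand-in for the hardware trap to the OS */
    fprintf(stderr, "trap: %s\n", msg);
    exit(EXIT_FAILURE);
}

/* Every CPU-generated address passes this check before reaching memory. */
uint32_t translate(uint32_t logical) {
    if (logical >= limit)
        trap("addressing error: logical address out of range");
    return relocation + logical;      /* dynamic relocation by the MMU */
}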

Fragmentation
Memory fragmentation can be of two types: internal fragmentation and external fragmentation. With internal fragmentation there is wasted space internal to a partition, because the block of data loaded is smaller than the partition.
Example
If there is a block of 50 KB and the process requests 40 KB, and the block is allocated to the process, then 10 KB of memory inside the partition is left unused.

External fragmentation exists when there is enough total memory space to satisfy a request but the space is not contiguous, i.e., storage is fragmented into a large number of small holes. External fragmentation may be a minor or a major problem.

One solution for overcoming external fragmentation is compaction. The goal is to move all the free memory together to form one large block. Compaction is not always possible: if relocation is static and done at load time, compaction cannot be done; it is possible only if relocation is dynamic and done at execution time.


Another possible solution to the external-fragmentation problem is to permit the logical address space of a process to be non-contiguous, thus allowing the process to be allocated physical memory wherever such memory is available.

Segmentation
Most users do not think of memory as a linear array of bytes; rather, they think of memory as a collection of variable-sized segments, each dedicated to a particular use such as code, data, stack, or heap. A logical address space is a collection of segments. Each segment has a name and a length, and an address specifies both the segment name and the offset within the segment. The user therefore specifies each address by two quantities: a segment name and an offset. For simplicity, segments are numbered and referred to by a segment number, so a logical address consists of the pair <segment number, offset>.

Hardware support: We must define an implementation to map these two-dimensional user-defined addresses into one-dimensional physical addresses. This mapping is effected by a segment table. Each entry in the segment table has a segment base and a segment limit: the segment base contains the starting physical address where the segment resides, and the segment limit specifies the length of the segment.

Fig.3 Segmentation Hardware


The use of the segment table is shown in the figure above. A logical address consists of two parts:
 a segment number 's'
 an offset 'd' into that segment.
The segment number is used as an index into the segment table. The offset 'd' must be between 0 and the segment limit; if it is not, a trap (addressing error) is raised to the OS. If the offset is legal, it is added to the segment base to produce the physical address. The segment table is thus essentially an array of base-limit register pairs.
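In code form, the lookup just described might look as follows. The segment-table contents (base 1400 / limit 1000 for segment 0, base 6300 / limit 400 for segment 1) are made-up illustrative values.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct segment_entry { uint32_t base, limit; };

/* A tiny segment table with two segments (values chosen for illustration). */
static struct segment_entry seg_table[] = { {1400, 1000}, {6300, 400} };

uint32_t seg_translate(uint32_t s, uint32_t d) {
    if (d >= seg_table[s].limit) {    /* offset must be below the segment limit */
        fprintf(stderr, "trap: segment offset out of range\n");
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + d;     /* offset added to the segment base */
}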

Protection and Sharing: A particular advantage of segmentation is the association of protection with the segments. The memory-mapping hardware checks the protection bits associated with each segment-table entry to prevent illegal accesses to memory, such as attempts to write into a read-only segment.
Another advantage of segmentation is the sharing of code or data. Each process has a segment table associated with it. Segments are shared when entries in the segment tables of two different processes point to the same physical location. Sharing occurs at the segment level, and any information can be shared at that level. Several segments can be shared, so a

program consisting of several segments can be shared. We can also share parts
of a program.

Advantages
 Eliminates internal fragmentation.
 Provides for virtual growth.
 Allows dynamic segment growth.
 Assists dynamic linking.
 Segmentation is visible to the user.

Differences between segmentation and paging


Segmentation
 The program is divided into variable-sized segments; the user is responsible for dividing the program into segments.
 Segmentation is slower than paging.
 Visible to the user.
 Eliminates internal fragmentation.
 Suffers from external fragmentation.
 A segment number and offset are used to calculate the absolute address.

Paging
 Programs are divided into fixed-size pages.
 The division is performed by the OS.
 Paging is faster than segmentation.
 Invisible to the user.
 Suffers from internal fragmentation.
 No external fragmentation.
 A page number and offset are used to calculate the absolute address.

Paging
Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous. Support for paging is handled by hardware. Paging avoids external fragmentation and the considerable problem of fitting varying-sized memory chunks onto the backing store: when some code or data residing in main memory needs to be swapped out, space must be found on the backing store.
Basic Method:
Physical memory is broken into fixed-sized blocks called frames (f). Logical memory is broken into blocks of the same size called pages (p). When a process is to be executed, its pages are loaded into available frames from the backing store. The backing store is also divided into fixed-sized blocks of the same size as the memory frames. The following figure shows the paging hardware:

Fig. 4 Paging Hardware


A logical address generated by the CPU is divided into two parts:
 page number (p)
 page offset (d)
The page number (p) is used as an index into the page table. The page table contains the base address of each page in physical memory; this base address is combined with the page offset to form the physical memory address that is sent to the memory unit. The page size is defined by the hardware. It is always a power of 2, typically varying between 512 bytes and 16 MB per page. If the size of the logical address space is 2^m addressing units and the page size is 2^n units, then the high-order m-n bits of a logical address designate the page number and the n low-order bits designate the page offset.

Fig. 5 Paging Model of logical and physical memory


Example
To show how logical memory maps into physical memory, consider a page size of 4 bytes and a physical memory of 32 bytes (8 frames). Logical address 0 is page 0, offset 0. Page 0 is in frame 5, so logical address 0 maps to physical address 20 [(5*4) + 0]. Logical address 3 is page 0, offset 3, and maps to physical address 23 [(5*4) + 3]. Logical address 4 is page 1, offset 0, and page 1 is mapped to frame 6, so logical address 4 maps to physical address 24 [(6*4) + 0].

Logical address 13 is page 3, offset 1, and page 3 is mapped to frame 2, so logical address 13 maps to physical address 9 [(2*4) + 1].
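The same arithmetic can be checked with a short program. The page-table contents follow the example above (page 0 -> frame 5, page 1 -> frame 6, page 3 -> frame 2); the frame for page 2 is assumed to be 1 purely for illustration, since the text does not state it.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4                     /* 4-byte pages, as in the example above */

/* page table from the example: page 0 -> frame 5, page 1 -> frame 6,
 * page 3 -> frame 2 (frame 1 for page 2 is an assumption for illustration) */
static int page_table[] = {5, 6, 1, 2};

uint32_t translate(uint32_t logical) {
    uint32_t p = logical / PAGE_SIZE;   /* page number = high-order bits */
    uint32_t d = logical % PAGE_SIZE;   /* page offset = low-order bits  */
    return page_table[p] * PAGE_SIZE + d;
}

int main(void) {
    printf("%u\n", translate(0));       /* prints 20 = (5*4) + 0 */
    printf("%u\n", translate(13));      /* prints 9  = (2*4) + 1 */
    return 0;
}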

Hardware Support for Paging:


The hardware implementation of the page table can be done in several ways. The simplest method is to implement the page table as a set of dedicated registers; these registers must be built with very high-speed logic to make the paging-address translation fast. Every memory access must go through the paging map, so the use of registers for the page table is satisfactory only if the page table is small.
If the page table is large, the use of registers is not feasible. Instead the page table is kept in main memory and a page-table base register (PTBR) points to it. Changing page tables then requires changing only this one register, which reduces context-switch time. The problem with this approach is the time required to access a user memory location: to access location [i] we must first index into the page table, using the PTBR value plus an offset, to obtain the frame number; the frame number is then combined with the page offset to produce the actual address. Thus two memory accesses are needed to access one byte.

The standard solution is to use a special, small, fast-lookup hardware cache called the
translation look-aside buffer (TLB), also known as associative registers. The TLB is built as associative, high-speed memory. Each entry consists of two parts: a key and a value.

When the associative memory is presented with an item, the item is compared with all keys simultaneously; if it is found, the corresponding value field is returned, so the search is fast. The TLB is used with the page table as follows: the TLB contains only a few of the page-table entries.

Fig. 6 Paging Hardware with TLB


When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found (a TLB hit), its frame number is immediately available and is used to access memory. If the page number is not in the TLB (a TLB miss), a memory reference to the page table must be made; the frame number obtained is used to access memory, and the page number and frame number are added to the TLB. If the TLB is already full of entries, the OS must select one for replacement. Each time a new page table is selected (for instance, on a context switch) the TLB must be flushed (erased) to ensure that the next executing process does not use stale translations. The percentage of time that a page number is found in the TLB is called the hit ratio.
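The hit ratio lets us work out an effective memory-access time. As an illustrative calculation (the timings are assumptions, not from the course text): suppose a TLB lookup takes 20 ns and a memory access takes 100 ns. A TLB hit needs one memory access, 20 + 100 = 120 ns; a miss needs two (one for the page table and one for the byte itself), 20 + 100 + 100 = 220 ns. With an 80% hit ratio:
Effective access time = 0.80 * 120 + 0.20 * 220 = 140 ns,
a 40% slowdown over a bare 100 ns access. With a 98% hit ratio the figure drops to 0.98 * 120 + 0.02 * 220 = 122 ns, only a 22% slowdown.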

Protection:
Memory protection in a paged environment is provided by protection bits associated with each frame; these bits are kept in the page table. One bit can define a page as read-write or read-only.
Every reference to memory goes through the page table to find the correct frame number. While the physical address is being computed, the protection bits can be checked to verify that no writes are made to a read-only page. Any attempt to write to a read-only page causes a hardware trap to the OS. This approach can be extended to provide read-only, read-write, or execute-only protection. One more bit is generally attached to each entry in the page table: a valid-invalid bit. A valid bit indicates that the associated page is in the process's logical address space and is thus a legal (valid) page.

Fig. 7 Valid or invalid (i) bit in a page table


If the bit is set to invalid, the page is not in the process's logical address space and the reference is illegal. Illegal addresses are trapped using the valid-invalid bit; the OS sets this bit for each page to allow or disallow access to that page.

Structure of Page Table


a. Hierarchical paging:
Recent computer systems support a large logical address space (2^32 to 2^64). In such systems the page table itself becomes very large, and it is difficult to allocate contiguous main memory for it. One simple solution is to divide the page table into smaller pieces. There are several ways to accomplish this division.
One way is to use a two-level paging algorithm, in which the page table itself is also paged.
Example
Consider a 32-bit machine with a page size of 4 KB. A logical address is divided into a page number consisting of 20 bits and a page offset of 12 bits. Since the page table is itself paged, the page number is further divided into a 10-bit outer page number and a 10-bit inner page offset.

Fig. 8 A Two-level Page table Scheme
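The 10/10/12 split of such an address can be expressed directly with shifts and masks. The sketch below walks a two-level table; the table layout (an outer array of pointers to inner arrays of frame base addresses) is an assumption made for the example.

#include <stdint.h>

#define P1(addr) (((addr) >> 22) & 0x3FF)   /* top 10 bits: outer page number  */
#define P2(addr) (((addr) >> 12) & 0x3FF)   /* next 10 bits: inner page number */
#define D(addr)  ((addr) & 0xFFF)           /* low 12 bits: page offset        */

/* Walk a two-level page table: the outer table yields an inner table,
 * the inner table yields a (4 KB-aligned) frame base, and the offset
 * is appended to form the physical address. */
uint32_t translate(uint32_t **outer_table, uint32_t addr) {
    uint32_t *inner = outer_table[P1(addr)]; /* first-level lookup  */
    uint32_t frame_base = inner[P2(addr)];   /* second-level lookup */
    return frame_base | D(addr);
}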


b. Hashed page table:
Hashed page tables handle address spaces larger than 32 bits. The virtual page number is used as the hash value. Each entry in the hash table contains a linked list of elements that hash to the same location.
c. Inverted Page Tables:
As address spaces have grown to 64 bits, traditional page tables become a problem: even with two-level page tables, the tables can be too large to handle. An inverted page table has one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, together with information about the process that owns the page. Each element contains three fields: the virtual page number, the mapped page-frame value, and a pointer to the next element in the linked list.

Working:
The virtual page number is taken from the virtual address and hashed into the hash table. The virtual page number is compared with the first element of the linked list; if the two values match, that entry's page frame is used to calculate the physical address. If they do not match, the entire linked list is searched for a matching virtual page number. Clustered page tables are similar to hashed page tables, except that each entry in the hash table refers to several pages.

Fig. 9 Hashed Page Table


Each virtual address in an inverted-page-table system consists of a triple <process-id, page number, offset>. Each inverted page-table entry is a pair <process-id, page number>. When a memory reference is made, the pair <process-id, page number> from the virtual address is presented to the memory subsystem, and the inverted page table is searched for a match. If a match is found at entry i, then the physical address <i, offset> is generated; if no match is found, an illegal address access has been attempted. This scheme decreases the amount of memory needed to store the page table, but it increases the time needed to search the table when a page reference occurs; if the whole table had to be searched, it would take far too long.

Fig. 10 Inverted Page Table
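A sketch of this search in C follows. The structure layout and table size are assumptions made for the example; the point is the hash on the virtual page number followed by a walk of the collision chain.

#include <stdint.h>

#define TABLE_SIZE 1024   /* illustrative size */

struct ipt_entry {
    int pid;                    /* process that owns the page     */
    uint32_t vpn;               /* virtual page number            */
    uint32_t frame;             /* mapped page-frame number       */
    struct ipt_entry *next;     /* next element in the hash chain */
};

static struct ipt_entry *hash_table[TABLE_SIZE];

/* Return the frame for (pid, vpn), or -1 if the access is illegal. */
long lookup(int pid, uint32_t vpn) {
    struct ipt_entry *e = hash_table[vpn % TABLE_SIZE]; /* hash the virtual page number */
    for (; e != NULL; e = e->next)                      /* walk the collision chain     */
        if (e->pid == pid && e->vpn == vpn)
            return e->frame;
    return -1;                                          /* no match: illegal reference  */
}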


Advantages
 Eliminates external fragmentation.
 Supports a high degree of multiprogramming.
 Increases memory and processor utilization.
 The compaction overhead required by relocatable partition schemes is eliminated.
Disadvantages
 The page-address mapping hardware increases the cost of the computer.
 Memory must be used to store the various tables, such as page tables and the memory map table.
 Some memory will still be unused if the number of available frames is not sufficient for the address spaces of the jobs to be run.
Shared Pages:
Another advantage of paging is the possibility of sharing common code. This is particularly useful in a time-sharing environment. For example, consider a system with 40 users, each executing a text editor. If the text editor consists of 150 KB of code and 50 KB of data space, we
would need 8000 KB for the 40 users. If the code is reentrant, however, it can be shared. Consider the following figure.
If code is reentrant it never changes during execution, so two or more processes can execute the same code at the same time. Each process has its own copy of registers and its own data storage; only the data of the two processes vary. Only one copy of the editor is then kept in physical memory: each user's page table maps onto the same physical copy of the editor, while data pages are mapped onto different frames. So to support 40 users we need only one copy of the editor (150 KB) plus 40 copies of the 50 KB data space, i.e., only 2150 KB instead of 8000 KB.

Fig. 11 Sharing of code in a paging environment


Virtual Memory
Virtual memory is a technique that allows the execution of partially loaded processes. As a result, a program can be larger than the available physical memory, and the user can write programs for an extremely large virtual address space. Since each program occupies less physical memory, more than one program can be run at the same time, which increases throughput and CPU utilization. Less I/O is needed to swap or load user programs into memory, so each user program runs faster.

Virtual memory is the separation of the user's logical memory from physical memory. This separation allows an extremely large virtual memory to be provided to programmers even when only a smaller physical memory is available. Separating logical memory from physical memory also allows files and memory to be shared by several different processes through page sharing. Virtual memory is implemented using demand paging.
Demand Paging
Demand paging is similar to a paging system with swapping: when we want to execute a process we swap it into memory, but a page is not brought into memory unless it is needed. Whereas a swapper manipulates entire processes, a pager manipulates the individual pages of a process.
Basic concept: Instead of swapping in a whole process, the pager brings only the necessary pages into memory. It thus avoids reading unused pages, decreasing the swap time and the amount of physical memory needed.


Fig. 12 Page table when some pages are not in main memory
The valid-invalid bit scheme can be used to distinguish between the pages that
are on the disk and that are in memory.
 If the bit is valid then the page is both legal and is in memory.
 If the bit is invalid then either page is not valid or is valid but is currently on
the disk.

Marking a page invalid has no effect if the process never accesses that page. If the process tries to access a page that is marked invalid, a page-fault trap occurs; the trap is the result of the operating system's failure to bring the desired page into memory.

The steps for handling a page fault are straightforward:
 We check an internal table for the process to determine whether the reference was valid or invalid.
 If the reference was invalid, we terminate the process. If it was valid but the page has not yet been brought in, we page it in.
 We find a free frame.
 We schedule a disk operation to read the desired page into the newly allocated frame.
 When the disk read is complete, we modify the internal table kept with the process to indicate that the page is now in memory.

We then restart the instruction that was interrupted by the illegal-address trap; the process can now access the page. In the extreme case, we can start a process with no pages in memory: when the OS sets the instruction pointer to the first instruction of the process, the process immediately faults. After the page is brought into memory the process continues to execute, faulting as necessary until every page it needs is in memory. This is pure demand paging: never bring a page into memory until it is required.

Fig. 13 steps in handling page fault


Hardware support
For demand paging the same hardware is required as for paging and swapping:
 Page table - has the ability to mark an entry invalid through a valid-invalid bit.
 Secondary memory - holds the pages that are not present in main memory.
Performance of demand paging: Demand paging can have a significant effect on the performance of a computer system. Let p be the probability of a page fault (0 <= p <= 1). Then:
Effective access time = (1 - p) * ma + p * page-fault service time,
where ma is the memory-access time.
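As an illustrative calculation (the timings are assumed, not from the course text): with a memory-access time of 200 ns and an average page-fault service time of 8 ms,
Effective access time = (1 - p) * 200 + p * 8,000,000 ns = 200 + 7,999,800 * p ns.
If one access in 1,000 faults (p = 0.001), the effective access time is about 8.2 microseconds, so the computer is slowed by a factor of roughly 40 because of demand paging. To keep the degradation below 10 percent, fewer than one memory access in about 400,000 may fault.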
Effective access time is directly proportional to the page-fault rate, so it is important to keep the page-fault rate low in demand paging. A page fault causes the following sequence to occur:
 Trap to the OS.
 Save the user registers and process state.
 Determine that the interrupt was a page fault.
 Check that the page reference was legal and determine the location of the page on disk.
 Issue a read from the disk to a free frame.
 While waiting, allocate the CPU to some other user.
 Take the interrupt from the disk.
 Save the registers and process state of the other user.
 Determine that the interrupt was from the disk.
 Correct the page table and other tables to show that the desired page is now in memory.

Comparison of demand paging with segmentation


Segmentation
 Segments may be of different sizes.
 Segments can be shared.
 Allows dynamic growth of segments.
 A segment map table indicates the address of each segment in memory.
 Segments are allocated to the program at compilation time.
Demand Paging
 Pages are all of the same size.
 Pages are not shared as freely; only reentrant code pages can be shared.
 Page size is fixed.
 A page table keeps track of the pages in memory.
 Pages are allocated in memory on demand.
Copy-On-Write
Demand paging is used when reading a file from disk into memory. fork() is used to create a process, and process creation can initially bypass demand paging by using a technique called page sharing. Page sharing provides rapid process creation and reduces the number of new pages that must be allocated to the newly created process. The copy-on-write technique initially allows the parent and the child to share the same pages; these pages are marked as copy-on-write pages, meaning that if either process writes to a shared page, a copy of the shared page is created.
Example
If a child process tries to modify a page containing portions of the stack, the OS recognizes it as a copy-on-write page and creates a copy of this page, mapping it into the address space of the child process. The child process then modifies its copied page and not the page belonging to the parent. The new pages are obtained from the pool of free pages.
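Copy-on-write is visible from user space in the semantics of fork(): parent and child each appear to have a private copy, even though the kernel physically copies a page only when one of them writes to it. A minimal POSIX sketch:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int value = 1;                 /* lives in a page shared copy-on-write after fork */
    pid_t pid = fork();

    if (pid == 0) {                /* child: this write triggers the page copy */
        value = 42;
        printf("child sees %d\n", value);     /* prints 42 */
        _exit(0);
    }
    wait(NULL);                    /* parent: its page was never copied over */
    printf("parent sees %d\n", value);        /* still prints 1 */
    return 0;
}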

Memory Mapping
Standard system calls such as open(), read(), and write() are used for sequential reads of a file; virtual memory offers an alternative. Memory-mapping a file allows a part of the virtual address space to be logically associated with the file, and is accomplished by mapping disk blocks to pages in memory.
Page Replacement
Demand paging saves I/O by not loading pages that are never used, and it improves the degree of multiprogramming by allowing more processes to run at the same time. The page-replacement policy deals with the selection of a page in memory to be replaced by a new page that must be brought in. When a user process is executing, a page fault may occur. The hardware traps to the operating system, which checks its internal table to see that this is a page fault and not an illegal memory access. The operating system determines where the desired page resides on disk, and may then find that there are no free frames on the free-frame list.
When all frames are in use, a page in memory must be replaced to satisfy the page fault. The replacement policy is concerned with selecting the page currently in memory to be replaced; the page removed should be the page least likely to be referenced in the near future.

Fig. 14 Page Replacement


Working of Page Replacement Algorithm
1. Find the location of the desired page on the disk.
2. Find a free frame:
    If there is a free frame, use it.
    Otherwise, use a page-replacement algorithm to select a victim frame.
    Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.

Victim Page
The page that is swapped out of physical memory is called the victim page. If no frames are free, two page transfers (one out and one in) are required, which doubles the page-fault service time and increases the effective access time.
 Each page or frame may have a dirty (modify) bit associated with it in the hardware. The modify bit for a page is set by the hardware whenever any word or byte in the page is written into, indicating that the page has been modified.
 When we select a page for replacement, we examine its modify bit. If the bit is set, the page has been modified since it was read in from the disk.
 If the bit is not set, the page has not been modified since it was read into memory.

Therefore, if the copy of the page on the disk has not been overwritten, we can avoid writing the memory page back to the disk: it is already there. The same applies to pages that cannot be modified, such as read-only pages. We must solve two major problems to implement demand paging: we must develop a frame-allocation algorithm and a page-replacement algorithm. That is, if we have multiple processes in memory, we must decide how many frames to allocate to each process, and when page replacement is needed we must select the frames to be replaced.

PAGE REPLACEMENT ALGORITHMS


FIFO Algorithm
This is the simplest page-replacement algorithm. A FIFO replacement algorithm associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest one is selected: we replace the page at the head of the queue, and when a page is brought into memory we insert it at the tail of the queue.

Example: Consider the following reference string, with three frames that are initially empty.

Fig.15 FIFO Page Replacement Algorithm

The first three references (7, 0, 1) cause page faults and are brought into the empty frames. The next reference, 2, replaces page 7, because page 7 was brought in first. Since 0 is the next reference and 0 is already in memory, there is no page fault. The next reference, 3, results in page 0 being replaced, so the subsequent reference to 0 causes a page fault. This continues until the end of the string; there are 15 faults altogether.
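The fault count is easy to verify with a short simulation. The sketch below replays the reference string of the figure (7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1, the classic textbook string) through three frames with FIFO replacement; it prints 15. Because the frames are filled and then refilled cyclically, the slot at 'next' always holds the oldest page, so the rotation is exactly FIFO.

#include <stdio.h>

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof ref[0];
    int frames[3] = {-1, -1, -1};   /* three empty frames            */
    int next = 0, faults = 0;       /* next: head of the FIFO queue  */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == ref[i]) hit = 1;
        if (!hit) {
            frames[next] = ref[i];  /* replace the oldest page */
            next = (next + 1) % 3;  /* advance the queue head   */
            faults++;
        }
    }
    printf("page faults: %d\n", faults);    /* prints 15 */
    return 0;
}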
Belady's Anomaly
For some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases. The FIFO replacement algorithm can suffer from this problem.

Optimal Algorithm
The optimal page-replacement algorithm does not suffer from Belady's anomaly and has the lowest page-fault rate of all algorithms. Such an algorithm exists and is called OPT. Its rule is simple: "Replace the page that will not be used for the longest period of time."
Example: consider the following reference string.


Fig. 16 Optimal Page Replacement Algorithm

The first three references cause faults that fill the three empty frames. The reference to page 2 replaces page 7, because 7 will not be used until reference 18, whereas page 0 will be used at 5 and page 1 at 14. With only 9 page faults, optimal replacement is much better than FIFO, which had 15 faults. This algorithm is difficult to implement, however, because it requires future knowledge of the reference string.

Least Recently Used (LRU) Algorithm


If the optimal algorithm is not feasible, an approximation to it is possible. The main difference between OPT and FIFO is that FIFO uses the time when a page was brought in, whereas OPT uses the time when a page will next be used. The LRU algorithm replaces the page that has not been used for the longest period of time.
LRU associates with each page the time of that page's last use. This strategy is the optimal page-replacement algorithm looking backward in time rather than forward.

Example: consider the following reference string.

Fig. 17 LRU Page Replacement Algorithm

The first five faults are the same as for optimal replacement. When the reference to page 4 occurs, LRU sees that, of the three pages in frames, page 2 was used least recently: the most recently used page is page 0, and just before that page 3 was used. The LRU policy is often used as a page-replacement algorithm and is considered to be good. The main problem is how to implement LRU; it requires substantial hardware assistance.

Two implementations are possible:
Counters: We associate with each page-table entry a time-of-use field and add to the CPU a logical clock or counter. The clock is incremented on every memory reference. Whenever a reference to a page is made, the contents of the clock register are copied to the time-of-use field in that page's page-table entry. In this way we always have the time of the last reference to each page, and we replace the page with the smallest time value. The times must also be maintained when page tables are changed.
Stack: Another approach is to keep a stack of page numbers. Whenever a page is referenced, it is removed from the stack and put on top. In this way the top of the stack is always the most recently used page and the bottom is the least recently used page. Because entries must be removed from the middle of the stack, it is best implemented as a doubly linked list with head and tail pointers. Neither optimal replacement nor LRU replacement suffers from Belady's anomaly; they are called stack algorithms.
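The counter scheme can be simulated directly: stamp each resident page with a logical clock value on every reference and evict the page with the smallest stamp. The sketch below replays the same reference string as before; with three frames it counts 12 faults, matching the LRU figure.

#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof ref[0];
    int page[NFRAMES], stamp[NFRAMES];
    int clock = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int j, victim = 0;
        clock++;                              /* logical clock ticks on every reference */
        for (j = 0; j < used; j++)
            if (page[j] == ref[i]) break;
        if (j < used) { stamp[j] = clock; continue; }   /* hit: update time-of-use */
        faults++;
        if (used < NFRAMES) { j = used++; }   /* a free frame is still available */
        else {
            for (j = 1; j < NFRAMES; j++)     /* evict the smallest time-of-use  */
                if (stamp[j] < stamp[victim]) victim = j;
            j = victim;
        }
        page[j] = ref[i];
        stamp[j] = clock;
    }
    printf("page faults: %d\n", faults);      /* prints 12 */
    return 0;
}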

LRU Approximation
An exact LRU page-replacement algorithm must update page-usage information on every page reference. If this updating is done in software, the cost increases sharply, while dedicated hardware LRU mechanisms either degrade execution performance or substantially increase cost. For this reason, simple and efficient algorithms that approximate LRU have been developed, using hardware support in the form of a reference bit. A reference bit is associated with each page, and the bit is automatically set to 1 by the hardware whenever the page is referenced. A single reference bit per page can be used to approximate LRU removal: the page-removal software periodically resets all reference bits to 0, while execution of the user's job causes some reference bits to be set back to 1. If a reference bit is 0, then the page has not been referenced since the last time the bits were reset.

Count Based Page Replacement


There are many other algorithms that can be used for page replacement. For example, we can keep a counter of the number of references that have been made to each page.

a) LFU (least frequently used)


LFU replaces the page with the smallest reference count, on the reasoning that an actively used page should have a large count. The algorithm suffers when a page is used heavily during the initial phase of a process but never again: because it was used heavily, it has a large count and remains in memory even though it is no longer needed.

b) Most Frequently Used (MFU)
MFU replaces the page with the largest reference count, on the argument that the page with the smallest count was probably just brought in and has yet to be used.

Allocation of Frames
The allocation policy in a virtual-memory system controls the operating system's decisions about the amount of real memory to allocate to each active process. In a paging system, allocating more real pages to a process reduces its page-fault frequency and improves turnaround time and throughput; if too few pages are allocated, its page-fault frequency and turnaround time may deteriorate to unacceptable levels.

The minimum number of frames per process is defined by the architecture, while the maximum is defined by the amount of available physical memory. One simple scheme is equal allocation: m available frames are split among n processes so that each gets m/n frames. With multiple processes competing for frames, we can classify page-replacement algorithms into two broad categories:
a) Local replacement: requires that each process select frames only from its own set of allocated frames.
b) Global replacement: allows a process to select a replacement frame from the set of all frames, even if that frame is currently allocated to some other process; one process can take a frame from another.
In local replacement the number of frames allocated to a process does not change, whereas with global replacement it may change. Global replacement generally results in greater system throughput.
Other considerations
There are many other considerations in selecting a replacement algorithm and an allocation policy.
1) Prepaging: an attempt to prevent the high level of initial paging by bringing into memory, at one time, all the pages a process will need.
2) TLB reach: the amount of memory accessible from the TLB, which is simply the number of TLB entries multiplied by the page size.
3) Page size: the following parameters are considered: (a) page size is always a power of 2 (typically from 512 bytes to 16 KB); (b) internal fragmentation is reduced by a small page size; (c) a large page size reduces the number of pages, and thus the size of the page table, needed.
4) Inverted page table: reduces the amount of primary memory needed to track virtual-to-physical address translations.
5) Program structure: careful selection of data structures can increase locality and hence lower the page-fault rate and the number of pages in the working set.
6) Real-time processing: real-time systems almost never have virtual memory, because virtual memory can introduce unexpected long-term delays in the execution of a process and is thus the antithesis of real-time computing.

Thrashing
If the number of frames allocated to a low-priority process falls below the minimum number required by the computer architecture, we must suspend that process's execution. A process is thrashing if it spends more time paging than executing. If a process does not have enough frames, it quickly page-faults; to handle the fault it must replace some page, but all its pages are in active use, so it quickly faults again and again, replacing pages that it must immediately bring back. This high paging activity, the phenomenon of excessively moving pages back and forth between memory and secondary storage, is called thrashing.

Cause of Thrashing: Thrashing results in severe performance problems. The operating system monitors CPU utilization; if utilization is low, the OS increases the degree of multiprogramming by introducing new processes to the system. Suppose a global page-replacement algorithm is used, which replaces pages with no regard to the process to which they belong. As the figure shows, CPU utilization rises with the degree of multiprogramming, more and more slowly, until a maximum is reached; if the degree of multiprogramming is increased beyond that point, thrashing sets in and CPU utilization drops sharply.

Fig.18 Thrashing

At this point, to increase CPU utilization and stop thrashing, we must decrease the degree of multiprogramming. We can limit the effects of thrashing by using a local replacement algorithm. To prevent thrashing altogether, we must provide each process with as many frames as it needs.

Locality of Reference: As a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together. A program may consist of several different localities, which may overlap. Locality is caused, for example, by loops in code that tend to reference arrays and other data structures by indices. The ordered list of page numbers accessed by a program is called its reference string.

Locality is of two types:
1) spatial locality
2) temporal locality

Working set model
The working-set model uses the current memory requirements of a process to determine the number of page frames to allocate to it. An informal definition is "the collection of pages that a process is actively working with, and which must be resident in memory if the process is to avoid thrashing." The idea is to use the recent needs of a process to predict its future needs; the working set is thus an approximation of the program's locality. For example, given a sequence of memory references and a working-set window of Δ references, the working set at time t1 might be {1, 2, 5, 6, 7} and at a later time t2 might have changed to {3, 4}. At any given time, all pages referenced by the process within its most recent window are considered to comprise its working set.

A process is not allowed to execute unless its working set is resident in main memory; pages outside the working set can be discarded at any moment. Working sets alone are not enough, so we also introduce the balance set. If the sum of the working sets of all runnable processes is greater than the size of memory, we refuse to run some processes for a while: the runnable processes are divided into two groups, active and inactive, and the collection of working sets of the active processes is called the balance set. When a process is made active, its working set is loaded.
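The working set at any instant can be computed from the reference string by collecting the distinct pages among the last Δ references. A small sketch follows; the window size Δ = 10 and the reference string are illustrative choices made for the example.

#include <stdio.h>

#define DELTA 10   /* working-set window, in memory references (illustrative) */

/* Print the working set at position t: the distinct pages among the
 * last DELTA references ref[t-DELTA+1 .. t]. Assumes page numbers < 100. */
void working_set(const int *ref, int t) {
    int seen[100] = {0};
    int first = 1;
    printf("WS(t=%d) = {", t);
    for (int i = (t - DELTA + 1 < 0 ? 0 : t - DELTA + 1); i <= t; i++) {
        if (!seen[ref[i]]) {
            seen[ref[i]] = 1;
            printf(first ? " %d" : ", %d", ref[i]);
            first = 0;
        }
    }
    printf(" }\n");
}

int main(void) {
    int ref[] = {1,2,5,6,7,7,7,7,5,1,6,2,3,4,1,2,3,4,4,4,3,4,3,4,4,4};
    working_set(ref, 9);    /* early in the run: prints { 1, 2, 5, 6, 7 } */
    working_set(ref, 25);   /* later: the locality has shifted, prints { 3, 4 } */
    return 0;
}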

Deadlocks
When a process requests a resource and the resource is not available at that time, the process enters a waiting state. Waiting processes may never again change state if the resources they have requested are held by other waiting processes. This situation, in which processes wait for resources that will never become available, is called a deadlock.

System Model
A system consists of a finite number of resources distributed among a number of competing processes. The resources are partitioned into several types, each consisting of some number of identical instances. A process must request a resource before using it and must release the resource after using it. A process may request as many resources as it requires to carry out its designated task, but the number of resources requested may not exceed the total number of resources available in the system.

A process may utilize a resource in only the following sequence:
Request: If the request cannot be granted immediately, the requesting process must wait until it can acquire the resource.
Use: The process operates on the resource.
Release: The process releases the resource after using it.

Deadlock may involve different types of resources. For example, consider a system with one printer and one tape drive. Suppose process Pi currently holds the printer and process Pj holds the tape drive. If Pi requests the tape drive and Pj requests the printer, a deadlock occurs. Multithreaded programs are good candidates for deadlock because multiple threads compete for shared resources.
Deadlock Characterization:
Necessary Conditions: A deadlock situation can arise only if the following four conditions hold simultaneously in a system:
1. Mutual Exclusion: Only one process at a time can hold the resource. If any other process requests the resource, the requesting process must be delayed until the resource has been released.
2. Hold and Wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently held by other processes.
3. No Preemption: Resources cannot be preempted; a resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
All four conditions must hold for a deadlock to occur.

Resource Allocation Graph:


Deadlocks can be described using a directed graph called the system resource-allocation graph. The graph consists of a set of vertices V and a set of edges E. The set of vertices is partitioned into two types of nodes: P = {P1, P2, ..., Pn}, the set of all active processes, and R = {R1, R2, ..., Rm}, the set of all resource types in the system. A directed edge from process Pi to resource type Rj, denoted Pi -> Rj, indicates that Pi has requested an instance of resource type Rj and is waiting; this is called a request edge. A directed edge Rj -> Pi signifies that an instance of resource type Rj has been allocated to process Pi; this is called an assignment edge.

Fig.19 Resource Allocation Graph

If the graph contains no cycle, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist. If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred; if each resource type has several instances, then a cycle does not necessarily imply a deadlock.
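For the single-instance case, deadlock detection therefore reduces to finding a cycle in this directed graph. A small depth-first-search sketch over an adjacency matrix follows; the encoding (nodes 0..n-1 standing for both processes and resources) is an assumption made for the example.

#include <string.h>

#define MAXN 16

/* DFS with colors: 0 = unvisited, 1 = on the current path, 2 = finished.
 * Finding an edge back to a node on the current path means a cycle. */
static int dfs(int adj[MAXN][MAXN], int color[MAXN], int n, int u) {
    color[u] = 1;
    for (int v = 0; v < n; v++) {
        if (!adj[u][v]) continue;
        if (color[v] == 1) return 1;                  /* back edge: cycle found */
        if (color[v] == 0 && dfs(adj, color, n, v)) return 1;
    }
    color[u] = 2;
    return 0;
}

/* Returns 1 if the resource-allocation graph contains a cycle. */
int has_cycle(int adj[MAXN][MAXN], int n) {
    int color[MAXN];
    memset(color, 0, sizeof color);
    for (int u = 0; u < n; u++)
        if (color[u] == 0 && dfs(adj, color, n, u))
            return 1;
    return 0;
}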

Methods for Handling Deadlocks


There are three ways to deal with the deadlock problem:
 We can use a protocol to prevent or avoid deadlocks, ensuring that the system never enters a deadlocked state.
 We can allow the system to enter a deadlocked state, detect it, and recover from it.
 We can ignore the problem and pretend that deadlocks never occur in the system. This approach is used by most operating systems, including UNIX.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme. Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot hold.

Deadlock avoidance requires that the OS be given, in advance, additional information about which resources a process will request and use during its lifetime.
If a system uses neither deadlock avoidance nor deadlock prevention, a deadlock situation may occur. In that case the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred, together with an algorithm to recover from the deadlock. Undetected deadlocks result in deterioration of system performance.

Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. If at least one of these conditions can be prevented from holding, we can prevent the occurrence of deadlock.
Mutual Exclusion: This condition holds for non-sharable resources; for example, a printer can be used by only one process at a time. Sharable resources, in contrast, do not require mutually exclusive access and thus cannot be involved in a deadlock; read-only files are a good example, since a process never needs to wait to access a sharable resource. In general, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.
Hold and Wait: This condition can be eliminated by requiring a process to release all the resources it holds whenever it requests a resource that is not available. One usable protocol is to allocate each process all of its resources before it starts execution.

Example
Consider a process that copies data from a tape drive to a disk file, sorts the file, and then prints the results on a printer. If all resources must be allocated at the beginning, then the tape drive, disk file, and printer are assigned to the process for its entire run. The main problem with this protocol is low resource utilization: the printer is needed only at the end, yet it is held from the beginning, so no other process can use it.
Another usable protocol allows a process to request resources only when it has none: the process is first allocated the tape drive and disk file, performs the required operation, and releases both; it then requests the disk file and the printer. The problem with this protocol is that starvation is possible.

No Preemption: To ensure that this condition never holds, resources must be preemptible. The following protocol can be used: if a process is holding some resources and requests another resource that cannot be immediately allocated to it, then all resources currently held by the requesting process are preempted and added to the list of resources for which other processes may be waiting. The process is restarted only when it can regain its old resources as well as the new ones it is requesting.

Alternatively, when a process requests resources, we check whether they are available. If they are, we allocate them. If not, we check whether they are allocated to some other process that is itself waiting for additional resources; if so, we preempt the desired resources from that waiting process and allocate them to the requesting process. Otherwise, the requesting process must wait.

Circular Wait: The fourth and final condition for deadlock is the circular-wait condition. One way to ensure that this condition never holds is to impose a total ordering on all resource types and to require that each process requests resources in increasing order of enumeration.

Let R = {R1, R2, ..., Rn} be the set of resource types. We assign each resource type a unique integer value, which allows us to compare two resources and determine whether one precedes the other in our ordering. For example, we can define a one-to-one function F: R -> N as follows: F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12.

Deadlock can now be prevented using the following protocol: each process requests resources in increasing order of enumeration. A process can request any number of instances of a resource type, say Ri; after that, it can request instances of a resource type Rj only if F(Rj) > F(Ri). Alternatively, when a process requests an instance of resource type Rj, it must first have released any resource Ri such that F(Ri) >= F(Rj). If these protocols are used, then the circular-wait condition cannot hold.
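With locks, this protocol is simply the familiar rule of always acquiring locks in a fixed global order. A POSIX-threads sketch follows; the two mutexes A and B are hypothetical stand-ins for resources with F(A) < F(B).

#include <pthread.h>

/* Resources with F(A) = 1 and F(B) = 2: every thread must lock A before B. */
static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);      /* lower-numbered resource first */
    pthread_mutex_lock(&B);      /* then the higher-numbered one  */
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}
/* Because no thread ever holds B while waiting for A,
 * a circular wait between A and B is impossible. */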
Deadlock Detection and Avoidance
Deadlock-prevention algorithms may lead to low device utilization and reduced system throughput. Avoiding deadlocks instead requires additional information about how resources are to be requested. With knowledge of the complete sequence of requests and releases for each process, we can decide, for each request, whether or not the process should wait.
For each request, the system considers the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process, in order to decide whether the current request can be satisfied or must wait so as to avoid a possible future deadlock.
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
Safe State
A state is safe if there exists at least one order in which all the processes can run to completion without resulting in a deadlock; that is, a system is in a safe state only if there exists a safe sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i. If the resources that Pi needs are not immediately available, then Pi can wait until all Pj have finished, obtain all of its needed resources, complete its designated task, and release them. A safe state is not a deadlock state. Whenever a process requests a resource that is currently available, the system must decide whether the resource can be allocated immediately or whether the process must wait.
The request is granted only if the allocation leaves the system in a safe state. Consequently, a process may have to wait even when the resource it requests is currently available. Thus resource utilization may be lower than it would be without a deadlock-avoidance algorithm.
Resource Allocation Graph Algorithm
This algorithm is used only when there is a single instance of each resource type. In addition to the request edge and the assignment edge, a new edge called a claim edge is introduced. A claim edge Pi -> Rj indicates that process Pi may request Rj in the future; the claim edge is represented by a dashed line. When process Pi actually requests resource Rj, the claim edge is converted to a request edge.
When resource Rj is released by process Pi, the assignment edge Rj -> Pi is reconverted to a claim edge Pi -> Rj. When a process Pi requests resource Rj, the request is granted only if converting the request edge Pi -> Rj to an assignment edge Rj -> Pi does not result in a cycle; a cycle-detection algorithm is used to check this. If there is no cycle, the allocation leaves the system in a safe state.
Banker's Algorithm
This algorithm is applicable to systems with multiple instances of each resource type, but it is less efficient than the resource-allocation graph algorithm.
When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need. This number may not exceed the total number of resources in the system. The system must then determine whether allocating these resources will leave the system in a safe state. If so, the resources are allocated; otherwise the process must wait until other processes release enough resources. Several data structures are used to implement the banker's algorithm.
Let n be the number of processes in the system and m the number of resource types. We need the following data structures:
Available: A vector of length m indicating the number of available resources of each type. If Available[j] = k, then k instances of resource type Rj are available.
Max: An n*m matrix defining the maximum demand of each process. If Max[i,j] = k, then Pi may request at most k instances of resource type Rj.
Allocation: An n*m matrix defining the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then Pi is currently allocated k instances of resource type Rj.
Need: An n*m matrix indicating the remaining resource need of each process. If Need[i,j] = k, then Pi may need k more instances of resource type Rj to complete its task. Note that Need[i,j] = Max[i,j] - Allocation[i,j].
Safety Algorithm
This algorithm is used to determine whether or not a system is in a safe state.
Step 1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available and Finish[i] = false for i = 1, 2, ..., n.
Step 2. Find an i such that both Finish[i] = false and Need(i) <= Work. If no such i exists, go to Step 4.
Step 3. Set Work = Work + Allocation(i) and Finish[i] = true, and go to Step 2.
Step 4. If Finish[i] = true for all i, then the system is in a safe state.
This algorithm may require on the order of m*n^2 operations to decide whether a state is safe.
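As an illustration, the safety algorithm can be sketched in a few lines of Python (the function name is_safe is hypothetical; the sample data below reproduce a classic five-process, three-resource-type example, assumed here purely for demonstration):

# Illustrative sketch of the safety algorithm above ('is_safe' is hypothetical).
def is_safe(available, max_demand, allocation):
    n, m = len(allocation), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)            # Step 1: Work = Available
    finish = [False] * n              # Finish[i] = false for all i
    sequence = []
    while True:
        for i in range(n):            # Step 2: Finish[i] false, Need(i) <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):    # Step 3: Pi finishes, returns resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append("P%d" % i)
                break
        else:
            break                     # Step 4: no candidate process remains
    return all(finish), sequence

# A classic five-process, three-resource-type state (10, 5 and 7 instances):
available  = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_demand, allocation))
# -> (True, ['P1', 'P3', 'P0', 'P2', 'P4']): the state is safe.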
Resource Request Algorithm
Let Request(i) be the request vector for process Pi. If Request(i)[j] = k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken.
Step 1. If Request(i) <= Need(i), go to Step 2; otherwise raise an error condition, since the process has exceeded its maximum claim.
Step 2. If Request(i) <= Available, go to Step 3; otherwise Pi must wait, since the resources are not available.
Step 3. The system pretends to have allocated the requested resources to Pi by modifying the state as follows:
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) – Request(i)
If the resulting resource-allocation state is safe, the transaction is completed and Pi is allocated its resources. If the new state is unsafe, then Pi must wait for Request(i) and the old resource-allocation state is restored.
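Building on the previous sketch, the resource-request algorithm can be illustrated as follows (again with hypothetical names; it reuses the is_safe function and the example state defined above):

# Illustrative sketch of the resource-request algorithm, reusing is_safe()
# and the example state from the previous sketch.
def request_resources(i, request, available, max_demand, allocation):
    m = len(available)
    need_i = [max_demand[i][j] - allocation[i][j] for j in range(m)]
    if any(request[j] > need_i[j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")  # Step 1
    if any(request[j] > available[j] for j in range(m)):
        return False                  # Step 2: Pi must wait
    for j in range(m):                # Step 3: pretend to allocate
        available[j] -= request[j]
        allocation[i][j] += request[j]
    safe, _ = is_safe(available, max_demand, allocation)
    if not safe:                      # unsafe: restore the old state, Pi waits
        for j in range(m):
            available[j] += request[j]
            allocation[i][j] -= request[j]
        return False
    return True                       # safe: the allocation is granted

# P1 requests (1, 0, 2): the resulting state is still safe, so it is granted.
print(request_resources(1, [1, 0, 2], available, max_demand, allocation))  # True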
Deadlock Detection
If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system should provide: an algorithm that examines the state of the system to determine whether a deadlock has occurred, and an algorithm to recover from the deadlock.
Single Instance of Each Resource Type
If all resources have only a single instance, then we can define a deadlock-detection algorithm that uses a variant of the resource-allocation graph, called a wait-for graph. This graph is obtained from the resource-allocation graph by removing the resource nodes and collapsing the appropriate edges.
1. An edge from Pi to Pj in the wait-for graph implies that Pi is waiting for Pj to release a resource that Pi needs.
2. An edge Pi -> Pj exists in the wait-for graph if and only if the corresponding resource-allocation graph contains the two edges Pi -> Rq and Rq -> Pj for some resource Rq.
A deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect deadlocks, the system needs to maintain the wait-for graph and periodically invoke an algorithm that searches for a cycle in the graph.
Fig 20a. Resource-allocation graph. Fig 20b. Corresponding wait-for graph.
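As a sketch of the idea, a cycle in a wait-for graph can be found with a depth-first search. The representation of the graph as a Python dictionary below is an assumption for illustration, not something the notes prescribe:

# Illustrative sketch: deadlock detection on a wait-for graph via DFS.
def has_cycle(wait_for):
    """wait_for maps each process to the list of processes it waits on."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle => deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 -> P2 -> P3 -> P1 is a cycle, so the first system is deadlocked.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False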
Several Instances of a Resource Type
The wait-for graph scheme is applicable only when every resource type has a single instance. The following algorithm applies when there are several instances of a resource type; it uses the following data structures:
1. Available: A vector of length m indicating the number of available resources of each type.
2. Allocation: An n*m matrix defining the number of resources of each type currently allocated to each process.
3. Request: An n*m matrix indicating the current request of each process. If Request[i,j] = k, then Pi is requesting k more instances of resource type Rj.
Step 1. Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available, and for i = 1, 2, ..., n, set Finish[i] = false if Allocation(i) != 0 and Finish[i] = true otherwise.
Step 2. Find an index i such that both Finish[i] = false and Request(i) <= Work. If no such i exists, go to Step 4.
Step 3. Set Work = Work + Allocation(i) and Finish[i] = true, and go to Step 2.
Step 4. If Finish[i] = false for some i, 1 <= i <= n, then the system is in a deadlocked state; moreover, each process Pi with Finish[i] = false is deadlocked.
This algorithm requires on the order of m*n^2 operations to detect whether the system is in a deadlocked state.
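For illustration, this detection algorithm can be sketched in Python as follows (the function name and the toy data are hypothetical). Note that it differs from the safety algorithm only in using Request in place of Need and in how Finish is initialized:

# Illustrative sketch of the several-instances detection algorithm above.
def find_deadlocked(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)                         # Step 1
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                              # Steps 2 and 3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):                 # Pi can finish; reclaim
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]  # Step 4: deadlocked set

# Two processes each hold one instance of a single resource type and each
# request one more; nothing is available, so both are deadlocked.
print(find_deadlocked([0], [[1], [1]], [[1], [1]]))  # [0, 1]
print(find_deadlocked([1], [[1], [1]], [[1], [0]]))  # []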
9. Practice Quiz
1. Which of the following page replacement algorithms suffers from Belady’s anomaly?
a) FIFO
b) LRU
c) Optimal page replacement
d) Both LRU and FIFO
2. What is the swap space on the disk used for?
a) Saving temporary html pages
b) Saving process data
c) Storing the super-block
d) Storing device drivers
3. Increasing the RAM of a computer typically improves performance because
a) Virtual memory increases
b) Larger RAMs are faster
c) Fewer page faults occur
d) Fewer segmentation faults occur
4. Virtual memory is
a) Large secondary memory
b) Large main memory
c) Illusion of large main memory
d) None of the above
5. Page fault occurs when
a) When a requested page is in memory
b) When a requested page is not in memory
c) When a page is corrupted
d) When an exception is thrown
6. Thrashing occurs when
a) When a page fault occurs
b) Processes on the system frequently access pages that are not in memory
c) Processes on system are in running state
d) Processes on system are in waiting state
7. The essential content(s) in each entry of a page table is/are
a) Virtual page number
b) Page frame number
c) Both virtual page number and page frame number
d) Access right information
8. Assume that there are 3 page frames which are initially empty. If the
page reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6, the number of page
faults using the optimal replacement policy is ______.
a) 5
b) 6
c) 7
d) 8
9. Which of the following is not a form of memory?
a) instruction cache
b) instruction register
c) instruction opcode
d) translation lookaside buffer
10. The optimal page replacement algorithm will select the page that
a) Has not been used for the longest time in the past.
b) Will not be used for the longest time in the future.
c) Has been used least number of times.
d) Has been used most number of times
10. Assignments
1. Give short notes on any two page-replacement algorithms. (BL: 2, CO: 4)
2. Define paging. Explain the paging concept in detail. (BL: 2, CO: 4)
3. Given memory partitions of 100 KB, 500 KB, 200 KB and 600 KB (in order), show with a neat sketch how each of the first-fit, best-fit and worst-fit algorithms would place processes of 412 KB, 317 KB, 112 KB and 326 KB (in order). Which algorithm makes the most efficient use of memory? (BL: 3, CO: 1)
4. If the contents of the reference string are 0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1 and there are four frames available in memory, find the number of page faults and the page-fault rate using the optimal page-replacement algorithm. (BL: 3, CO: 4)
5. Explain about deadlocks in detail. (BL: 2, CO: 2)
11. Part A - Questions & Answers
1. What are the functions of memory management? (BL: 1, CO: 4)
Ans.
1. Keeping track of the status of each memory location
2. Determining the allocation policy
3. Memory allocation technique
4. De-allocation technique
2. Explain the basic function of paging. (BL: 2, CO: 2)
Ans. Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous.
3. List the reasons for process suspension. (BL: 1, CO: 2)
Ans.
- Normal completion
- Time limit exceeded
- Memory unavailable
- Protection error
- Arithmetic error
- I/O failure
- Invalid instruction
4. What are the differences between logical and physical addresses? (BL: 1, CO: 4)
Ans. The address generated by the CPU is called the logical address or virtual address. The address seen by the memory unit, i.e., the one loaded into the memory-address register, is called the physical address. The compile-time and load-time address-binding methods generate identical logical and physical addresses, whereas execution-time address binding results in differing logical and physical addresses. The set of logical addresses generated by a program is the logical address space; the set of physical addresses corresponding to these logical addresses is the physical address space.
5. What is meant by global replacement and local replacement? (BL: 1, CO: 4)
Ans. Global page replacement allows a process to select a replacement frame from the set of all frames, even if that frame is currently allocated to some other process. Local replacement requires that each process select from only its own set of allocated frames; here the number of frames allocated to a process does not change.
6. What is meant by paging? Give its advantages. (BL: 1, CO: 4)
Ans. Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous.
Advantages: paging avoids the considerable problem of fitting varying-sized memory chunks onto the backing store. Fragmentation problems are also prevalent on the backing store, except that access there is much slower, so compaction is impossible; paging avoids this problem as well.
7. Write about internal and external fragmentation. (BL: 2, CO: 4)
Ans. Memory fragmentation can be of two types: internal fragmentation and external fragmentation.
Internal fragmentation: space is wasted internal to a partition because the block of data loaded is smaller than the partition. Example: if there is a block of 50 KB and a process requests 40 KB, then once the block is allocated to the process, 10 KB of memory is left unused.
External fragmentation: enough total memory space exists to satisfy a request, but it is not contiguous; storage is fragmented into a large number of small holes. External fragmentation may be either a minor or a major problem. One solution for overcoming external fragmentation is compaction.
8. Write short notes on swapping. (BL: 2, CO: 2)
Ans. Swapping is a technique of temporarily removing inactive programs from the memory of the system. A process can be swapped temporarily out of memory to a backing store and then brought back into memory to continue execution. For example, in a multiprogramming environment with round-robin CPU scheduling, whenever the time quantum expires the process that has just finished is swapped out and a new process is swapped into memory for execution.
9. What is the Translation Lookaside Buffer (TLB)? (BL: 2, CO: 4)
Ans. To avoid the overhead of accessing the page table in main memory on every reference, a high-speed cache for page-table entries, called the Translation Lookaside Buffer (TLB), is set up. The TLB is a special cache used to keep track of recently used translations; it contains the page-table entries that have been most recently used. Given a virtual address, the processor examines the TLB; if the page-table entry is present (a TLB hit), the frame number is retrieved and the real address is formed.
12. Part B - Questions
1. Explain the Banker's algorithm for deadlock avoidance with an example. (BL: 1, CO: 4)
2. Briefly explain demand paging. List out the advantages and disadvantages of demand paging. (BL: 2, CO: 4)
3. Discuss the optimal page-replacement algorithm. (BL: 2, CO: 4)
4. If the contents of the reference string are 0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1 and there are four frames available in memory, find the number of page faults and the page-fault rate using the optimal page-replacement algorithm. (BL: 3, CO: 4)
5. Explain about deadlocks in detail. (BL: 2, CO: 2)
13. Supportive Online Certification Courses
1. Operating Systems Fundamentals by Prof. Santhanu Chattopadyaya, conducted by IIT Kharagpur – 12 weeks.
2. Introduction to Operating Systems by Prof. Chester Rebeiro, conducted by IIT Madras – 8 weeks.
14. Real Time Applications
1. Almost all modern telecommunications systems make use of an RTOS. (CO: 4)
2. RADAR systems, network switching control systems. (CO: 4)
3. Automotive applications: automotive braking systems, path tracking. (CO: 4)
4. Microwave ovens. (CO: 1)
5. Weather prediction. (CO: 2)
15. Contents Beyond the Syllabus
1. MVT and MFT memory management techniques.
2. Cache memory
3. Flash memory, Magnetic disk, Magnetic Tape
16. Prescribed Text Books & Reference Books
Text Book:
1. Operating System Concepts, Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Wiley, Eighth Edition, 2014.
References:
1. Operating Systems, S. Haldar, A. A. Aravind, Pearson Education.
2. Modern Operating Systems, Andrew S. Tanenbaum, Second Edition, PHI.
17. Mini Project Suggestion
1. Redis Memory Analyzer
2. Build a virtual machine monitor that can run multiple guests (for example,
multiple instances of JOS), using x86 VM support.
3. Implement paging to disk in xv6 or JOS, so that processes can be bigger than
RAM. Extend your pager with swapping.
4. Creating a virtual memory manager.