OS - Unit III


22UBCAC 52 - OPERATING SYSTEM UNIT III

UNIT III: MEMORY MANAGEMENT


Memory Management – Basic Concepts of Memory – Address Binding – Logical and
Physical Address Space – Memory Partitioning – Memory Allocation – Paging – Segmentation –
Demand Paging – Page Replacement Algorithms and their types.
-----------------------------------------------------------------------------------------------------------------

3.0 Introduction to Memory Management


What do you mean by memory management?
Memory is an important part of the computer, used to store data. Its management is
critical to the computer system because the amount of main memory available in a computer
system is very limited. At any time, many processes are competing for it. Moreover, to increase
performance, several processes are executed simultaneously. For this, we must keep several
processes in the main memory, so it is even more important to manage them effectively.

Role of Memory management


Following are the important roles of memory management in a computer system:
o The memory manager keeps track of the status of memory locations, whether free
or allocated. It provides abstractions over primary memory so that software
perceives that a large memory is allocated to it.

-1 DEPARTMENT OF COMPUTER APPLICATIONS – RASC- N.MANIMOZHI



o Memory manager permits computers with a small amount of main memory to execute
programs larger than the size or amount of available memory. It does this by moving
information back and forth between primary memory and secondary memory by using
the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process
from being corrupted by another process. If this is not ensured, then the system may
exhibit unpredictable behavior.
o Memory managers should enable sharing of memory space between processes. Thus, two
programs can reside at the same memory location although at different times.

3.1 Memory Management Techniques:


The memory management techniques can be classified into the following main categories:

o Contiguous memory management schemes


o Non-Contiguous memory management schemes

1. Contiguous memory management schemes:


In a Contiguous memory management scheme, each program occupies a single contiguous block
of storage locations, i.e., a set of memory locations with consecutive addresses.
A) Single contiguous memory management schemes:

The Single contiguous memory management scheme is the simplest memory management
scheme used in the earliest generation of computer systems. In this scheme, the main memory is
divided into two contiguous areas or partitions. The operating systems reside permanently in one
partition, generally at the lower memory, and the user process is loaded into the other partition.

Advantages of Single contiguous memory management schemes:


o Simple to implement.
o Easy to manage and design.
o In a single contiguous memory management scheme, once a process is loaded, it is given
the full processor's time, and no other process will interrupt it.

Disadvantages of Single contiguous memory management schemes:


o Wastage of memory space due to unused memory as the process is unlikely to use all the
available memory space.
o The CPU remains idle, waiting for the disk to load the binary image into the main
memory.
o A program cannot be executed if it is too large to fit in the entire available main memory
space.
o It does not support multiprogramming, i.e., it cannot handle multiple programs
simultaneously.

B) Multiple Partitioning:

The single contiguous memory management scheme is inefficient, as it limits computers to
executing only one program at a time, resulting in wastage of memory space and CPU time. The
problem of inefficient CPU use can be overcome using multiprogramming, which allows more than
one program to run concurrently. To switch between two processes, the operating system needs
to load both processes into the main memory. The operating system therefore divides the available
main memory into multiple parts to load multiple processes into the main memory. Thus
multiple processes can reside in the main memory simultaneously.
The multiple partitioning schemes can be of two types:
o Fixed Partitioning


o Dynamic Partitioning

I) Fixed Partitioning
The main memory is divided into several fixed-sized partitions in a fixed partition memory
management scheme or static partitioning. These partitions can be of the same size or different
sizes. Each partition can hold a single process. The number of partitions determines the degree of
multiprogramming, i.e., the maximum number of processes in memory. These partitions are
made at the time of system generation and remain fixed after that.
II) Dynamic Partitioning
Dynamic partitioning was designed to overcome the problems of the fixed partitioning scheme.
In a dynamic partitioning scheme, each process occupies only as much memory as it requires
when loaded for processing. Requested processes are allocated memory until the entire physical
memory is exhausted or the remaining space is insufficient to hold the requesting process. In this
scheme the partitions used are of variable size, and the number of partitions is not defined at
system generation time.
2. Non-Contiguous memory management schemes:
In a Non-Contiguous memory management scheme, the program is divided into different blocks
and loaded at different portions of the memory that need not necessarily be adjacent to one
another. This scheme can be classified depending upon the size of blocks and whether the blocks
reside in the main memory or not.
I) What is paging?
Paging is a technique that eliminates the requirement of contiguous allocation of main
memory. In this, the main memory is divided into fixed-size blocks of physical memory called
frames. The size of a frame is kept the same as that of a page to make the best use of main
memory and avoid external fragmentation.

Advantages of paging:
o Pages reduce external fragmentation.
o Simple to implement.
o Memory efficient.


o Due to the equal size of frames, swapping becomes very easy.


o It is used for faster access of data.

What is Segmentation?
Segmentation is a technique that eliminates the requirements of contiguous allocation of main
memory. In this, the main memory is divided into variable-size blocks of physical memory
called segments. It is based on the way the programmer follows to structure their programs.
With segmented memory allocation, each job is divided into several segments of different sizes,
one for each module. Functions, subroutines, stack, array, etc., are examples of such modules.
3.2. Address Binding:
Address binding is the mapping of instructions and data from one address to another address in memory.
It can occur at three different stages:
1. Compile time: absolute code can be generated if the memory location is known in advance.
2. Load time: relocatable code must be generated if the memory location is not known at
compile time.
3. Execution time: binding delayed until run time needs hardware support for address maps
(e.g., base and limit registers).
3.4 Logical vs. Physical Address Space
 Logical address – generated by the CPU; also referred to as “virtual address“

 Physical address – address seen by the memory unit.



 Logical and physical addresses are the same in compile-time and load-time
address-binding schemes.
 Logical (virtual) and physical addresses differ in the execution-time address-
binding scheme.
Translating Logical Address into Physical Address-

 CPU always generates a logical address.


 A physical address is needed to access the main memory.

Following steps are followed to translate logical address into physical address-

Step-01:

 The translation scheme uses two registers that are under the control of the operating system.
 During context switching, the values corresponding to the process being loaded are set in
the registers.

These two registers are-


 Relocation Register
 Limit Register

 Relocation Register stores the base address or starting address of the process in the main
memory.
 Limit Register stores the size or length of the process.
Step-02:


 CPU generates a logical address containing the address of the instruction that it wants to
read.
Step-03:

 The logical address generated by the CPU is compared with the limit of the process.
 Now, two cases are possible-
Case-01: Generated Address >= Limit

 If address is found to be greater than or equal to the limit, a trap is generated.


 This helps to prevent unauthorized access.

Case-02: Generated Address < Limit

 The address must always lie in the range [0, limit-1].


 If address is found to be smaller than the limit, then the request is treated as a valid
request.
 Then, generated address is added with the base address of the process.
 The result obtained after addition is the address of the memory location storing the
required word.
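The steps above can be sketched in a few lines of Python; the base and limit values below are hypothetical, not taken from the notes:

```python
# Sketch of Step-02/03: limit check first, then relocation (base + offset).
def translate(logical_address, relocation_register, limit_register):
    """Translate a CPU-generated logical address into a physical address."""
    if logical_address >= limit_register:          # Case-01: out of range
        raise MemoryError("trap: address out of range")
    return relocation_register + logical_address   # Case-02: valid request

# Hypothetical process: loaded at base address 14000, with size (limit) 3000
print(translate(346, 14000, 3000))   # 14346
```

Any address in [0, limit-1] is simply shifted by the base; anything at or beyond the limit traps before it can touch another process's memory.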
Diagram-

The following diagram illustrates the above steps of translating a logical address into a physical
address.


Advantages-
The advantages of dynamic partitioning are-
 It does not suffer from internal fragmentation.
 Degree of multiprogramming is dynamic.
 There is no limitation on the size of processes.
Disadvantages-
The disadvantages of dynamic partitioning are-
 It suffers from external fragmentation.
 Allocation and deallocation of memory is complex.

3.3.Memory-Management Unit (MMU)


 The MMU is a hardware device that maps virtual/logical addresses to physical addresses.
 In this scheme, the relocation register's value is added to the logical address generated by a
user process.
 The user program deals with logical addresses; it never sees the real physical addresses.
 Logical address range: 0 to max.
 Physical address range: R+0 to R+max, where R is the value in the relocation register.
Note: the relocation register is a base register.



3.3.2. Dynamic Loading
 With dynamic loading, a routine is not loaded until it is called.
 Better memory-space utilization; an unused routine is never loaded.
 Useful when large amounts of code are needed to handle infrequently
occurring cases.
 No special support from the operating system is required; it is implemented
through program design.
3.3.3.Dynamic Linking
 Linking is postponed until execution time and is particularly useful for libraries.
 A small piece of code called a stub is used to locate the appropriate memory-resident
library routine or function.
 The stub replaces itself with the address of the routine, and executes the routine.
 Operating system support is needed to check whether the routine is in the process's
memory address space.
 Shared libraries: programs linked before a new library was installed will
continue using the older library.

Overlays:
Overlays keep in memory only those instructions and data that are needed at any given time, so a program can be larger than the amount of memory allocated to it.
Swapping
 A process can be swapped temporarily out of memory to a backing store (SWAP
OUT)and then brought back into memory for continued execution (SWAP IN).

 Backing store – fast disk large enough to accommodate copies of all memory
images for all users & it must provide direct access to these memory images
 Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and
executed
 Transfer time :
 Major part of swap time is transfer time
 Total transfer time is directly proportional to the amount of memory swapped.
 Example: Let us assume the user process is of size 1 MB and the backing store is
a standard hard disk with a transfer rate of 5 MB per second.
Transfer time = 1000 KB / 5000 KB per second
= 1/5 sec = 200 ms
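The arithmetic in the example can be checked directly, using the sizes assumed above:

```python
# Transfer time for swapping a 1 MB process at 5 MB/s (values from the example).
process_size_kb = 1000        # 1 MB user process
transfer_rate_kb = 5000       # 5 MB per second backing store

transfer_time_s = process_size_kb / transfer_rate_kb
print(transfer_time_s)        # 0.2 s = 200 ms
# Note: a complete swap needs both a swap-out and a swap-in,
# so the total swap time is roughly 2 x 200 ms.
```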

3.5 Memory Allocation


To gain proper memory utilization, memory must be allocated in an efficient manner. One
of the simplest methods for allocating memory is to divide memory into several fixed-sized
partitions, where each partition contains exactly one process. Thus, the degree of
multiprogramming is determined by the number of partitions.

 Fixed partition allocation: In this method, a process is selected from the input
queue and loaded into a free partition. When the process terminates, the partition
becomes available for other processes.
 Variable partition allocation: In this method, the operating system maintains a table that
indicates which parts of memory are available and which are occupied by processes.
Initially, all memory is available for user processes and is considered one large block of
available memory, known as a "Hole". When a process arrives and needs memory, we
search for a hole that is large enough to store this process. If one is found, we allocate
only as much memory as is needed, keeping the rest available to satisfy future requests.
While allocating memory, the dynamic storage allocation problem arises, which concerns
how to satisfy a request of size n from a list of free holes. There are some solutions to
this problem:

1.First Fit

In the First Fit, the first available free hole that is large enough to satisfy the process's
requirement is allocated.

First Fit

Here, in this diagram, a 40 KB memory block is the first available free hole that can store
process A (size of 25 KB), because the first two blocks did not have sufficient memory space.

2.Best Fit

In the Best Fit, we allocate the smallest hole that is big enough for the process's requirements.
For this, we search the entire list, unless the list is ordered by size.


Best Fit

Here, in this example, we first traverse the complete list and find that the last hole, 25 KB, is the
most suitable hole for process A (size 25 KB). In this method, memory utilization is maximum as
compared to other memory allocation techniques.

3.Worst Fit

In the Worst Fit, we allocate the largest available hole to the process. This method produces the
largest leftover hole.

Worst Fit

Here in this example, Process A (Size 25 KB) is allocated to the largest available memory
block which is 60KB. Inefficient memory utilization is a major issue in the worst fit.


3.5.1. Memory Partitioning

Contiguous Memory Allocation


Each process is contained in a single contiguous section of memory.
 There are two methods namely :
 Fixed – Partition Method
 Variable – Partition Method

 Fixed – Partition Method :


o Divide memory into fixed-size partitions, where each partition holds exactly one
process.
o The drawback is that memory space unused within a partition is wasted
(e.g., when process size < partition size).

 Variable-partition method:
o Divide memory into variable size partitions, depending upon the size of the
incoming process.
o When a process terminates, the partition becomes available for another process.
o As processes complete and leave, they create holes in the main memory.
Hole – a block of available memory; holes of various sizes are scattered throughout memory.

 Contiguous memory allocation is a memory allocation technique.


 It allows a process to be stored only in a contiguous fashion.
 Thus, the entire process has to be stored as a single entity at one place inside the memory.


Static Partitioning-

 Static partitioning is a fixed size partitioning scheme.


 In this technique, main memory is pre-divided into fixed size partitions.
 The size of each partition is fixed and cannot be changed.
 Each partition is allowed to store only one process.

Example-
Under fixed size partitioning scheme, a memory of size 10 KB may be divided into fixed size
partitions as-

 These partitions are allocated to the processes as they arrive.


 The partition allocated to the arrived process depends on the algorithm followed.

Dynamic Partitioning-
 Dynamic partitioning is a variable size partitioning scheme.
 It performs the allocation dynamically.
 When a process arrives, a partition of size equal to the size of process is created.


 Then, that partition is allocated to the process.


3.6.Partition Allocation Algorithms
 The processes arrive and leave the main memory.
 As a result, holes of different size are created in the main memory.
 These holes are allocated to the processes that arrive in future.

1. First Fit Algorithm


2. Best Fit Algorithm
3. Worst Fit Algorithm
3.6.1. First Fit Algorithm

 This algorithm starts scanning the partitions serially from the beginning.
 When an empty partition that is big enough to store the process is found, it is allocated to
the process.
 Obviously, the partition size has to be greater than or at least equal to the process size.
3.6.2. Best Fit Algorithm

 This algorithm first scans all the empty partitions.


 It then allocates the smallest size partition to the process.

3.6.3 Worst Fit Algorithm

 This algorithm first scans all the empty partitions.


 It then allocates the largest size partition to the process.
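The three placement algorithms above can be sketched over a list of free hole sizes; the hole sizes in the example are hypothetical:

```python
# First/Best/Worst Fit over a list of free hole sizes (in KB).
def first_fit(holes, size):
    """Index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits, or None."""
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, size):
    """Index of the largest hole that fits, or None."""
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i]) if fits else None

holes = [10, 20, 40, 60, 25]      # hypothetical free holes
print(first_fit(holes, 25))       # 2 -> the 40 KB hole
print(best_fit(holes, 25))        # 4 -> the 25 KB hole
print(worst_fit(holes, 25))       # 3 -> the 60 KB hole
```

For a 25 KB request, first fit stops at the 40 KB hole, best fit scans everything and picks the tight 25 KB hole, and worst fit deliberately picks the 60 KB hole to leave the largest remainder.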


3.7 Paging
 Paging is a memory management scheme that eliminates the need for contiguous
allocation of physical memory.
 This scheme permits the physical address space of a process to be non –
contiguous.
 Paging is a fixed size partitioning scheme.
 In paging, secondary memory and main memory are divided into equal fixed size
partitions.
 The partitions of secondary memory are called pages.
 The partitions of main memory are called frames.

 Each process is divided into parts where size of each part is same as page size.
 The size of the last part may be less than the page size.
 The pages of process are stored in the frames of main memory depending upon their
availability.

Address generated by CPU is divided into


 Page number(p): Number of bits required to represent the pages in Logical Address
Space or Page number
 Page offset(d): Number of bits required to represent particular word in a page or page
size of Logical Address Space or word number of a page or page offset.
Physical Address is divided into


 Frame number(f): Number of bits required to represent the frame of Physical Address
Space or Frame number.
 Frame offset(d): Number of bits required to represent particular word in a frame or
frame size of Physical Address Space or word number of a frame or frame offset.
 The hardware implementation of the page table can be done by using dedicated
registers.
 But the use of registers for the page table is satisfactory only if the page table is small.
 If the page table contains a large number of entries, then we can use a TLB (Translation
Look-aside Buffer), a special, small, fast-lookup hardware cache.
1. The TLB is associative, high speed memory.
2. Each entry in TLB consists of two parts: a tag and a value.
3. When this memory is used, then an item is compared with all tags
simultaneously. If the item is found, then corresponding value is returned.
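The (page number, offset) split and page-table lookup described above can be sketched as follows, assuming a hypothetical 1 KB page size and page table:

```python
# Split a logical address into (p, d) and translate it via a page table.
PAGE_SIZE = 1024                      # assumed 1 KB pages

page_table = {0: 5, 1: 2, 2: 7}       # hypothetical page -> frame mapping

def translate(logical_address):
    p = logical_address // PAGE_SIZE  # page number (high-order bits)
    d = logical_address % PAGE_SIZE   # page offset (low-order bits)
    f = page_table[p]                 # frame number from the page table
    return f * PAGE_SIZE + d          # physical address

print(translate(2 * 1024 + 100))      # page 2 -> frame 7: 7*1024 + 100 = 7268
```

The offset d is carried over unchanged; only the page number is swapped for a frame number, which is why frames and pages must be the same size.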

Advantages-


The advantages of paging are-


 It allows parts of a single process to be stored in a non-contiguous fashion.
 It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of paging are-
 It suffers from internal fragmentation.
 There is an overhead of maintaining a page table for each process.
 The time taken to fetch the instruction increases since now two memory accesses are
required.
Page Table-
 Page table is a data structure.
 It maps the page number referenced by the CPU to the frame number where that page
is stored.
Characteristics-
 Page table is stored in the main memory.

 Number of entries in a page table = number of pages into which the process is divided.
 Page Table Base Register (PTBR) contains the base address of the page table.
 Each process has its own independent page table.
 The base address of the page table is added to the page number referenced by the
CPU.
 This gives the entry of the page table containing the frame number where the
referenced page is stored.
Page Table Entry format:
 A page table entry contains several information about the page.
 The information contained in the page table entry varies from operating system to
operating system.
 The most important information in a page table entry is frame number, Present / Absent
Bit, Protection Bit, Reference Bit, Dirty Bit (Modified bit)


3.8 Segmentation:
 Segmentation is a memory management technique in which each job is divided into
several segments of different sizes, one for each module that contains pieces that
perform related functions.
 Each segment is actually a different logical address space of the program.
 When a process is to be executed, its corresponding segments are loaded into non-
contiguous memory, though every segment is loaded into a contiguous block of
available memory.
 Segmentation memory management works very similarly to paging, but here segments
are of variable length, whereas in paging pages are of fixed size.
 The operating system maintains a segment map table for every process and a list of free
memory blocks along with segment numbers, their size and corresponding memory
locations in main memory.
 For each segment, the table stores the starting address of the segment and the length of
the segment.
 A reference to a memory location includes a value that identifies a segment and an
offset.
 Segment Table – It maps a two-dimensional logical address into a one-dimensional
physical address. Each table entry has:
1. Base Address: It contains the starting physical address where the segments reside
in memory.
2. Limit: It specifies the length of the segment.


Translation of Two Dimensional Logical Address to one dimensional Physical Address.

Address generated by the CPU is divided into:


 Segment number (s): Number of bits required to represent the segment.


 Segment offset (d): Number of bits required to represent the size of the
segment.
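Segment-table translation can be sketched like the paging case, but with a per-segment limit check; the (base, limit) entries below are hypothetical:

```python
# Translate a (segment number, offset) pair via a segment table.
segment_table = {                 # hypothetical segment -> (base, limit) entries
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                # offset must lie inside the segment
        raise MemoryError("trap: offset outside segment")
    return base + d               # one-dimensional physical address

print(translate(2, 53))           # 4300 + 53 = 4353
```

Unlike paging, each entry carries its own limit because segments are of variable length.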
Advantages of Segmentation –
 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation –
 As processes are loaded and removed from the memory, the free memory space is
broken into little pieces, causing External fragmentation.
3.9 Fragmentation:
 As processes are loaded and removed from memory, the free memory space is broken
into little pieces.
 It happens after sometimes that processes cannot be allocated to memory blocks
considering their small size and memory blocks remains unused.
 This problem is known as Fragmentation.
Fragmentation is of two types –

1. External fragmentation: Total memory space is enough to satisfy a request or to
reside a process in it, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: The memory block assigned to a process is bigger than requested.
Some portion of memory is left unused, as it cannot be used by another process.

3.10 What is Demand Paging in OS?


Demand paging is a technique used in virtual memory systems where pages are brought into
the main memory only when required or demanded by the CPU. Hence, the pager is also called a
lazy swapper, because the swapping of pages is done only when the CPU requires them. Virtual
memory is commonly implemented using demand paging.


How does Demand Paging Work?

The demand paging system depends on the page table implementation because the page table
helps map logical memory to physical memory. A valid-invalid bit in each page table entry
indicates whether the page is valid or invalid. All valid pages exist in primary
memory, and invalid pages exist in secondary memory.
What is Thrashing?
Thrashing is the term used to describe a state in which excessive paging activity takes place in
computer systems, especially in operating systems that use virtual memory, severely impairing
system performance. Thrashing occurs when a system's high memory demand and low
physical memory capacity cause it to spend a large amount of time swapping pages between
main memory (RAM) and secondary storage, which is typically a hard disk.

It is caused by insufficient physical memory, overloading, and poor memory management.
The operating system may use a variety of techniques to lessen thrashing, including lowering
the number of running processes, adjusting paging parameters, and improving memory
allocation algorithms. Increasing the system's physical memory (RAM) capacity can also
lessen thrashing by lowering the frequency of page swaps between RAM and the disk.


Pure Demand Paging


Pure demand paging is a specific implementation of demand paging in which the operating
system loads pages into memory only when the program needs them. No pages are loaded
into memory when the program starts, and all pages are initially marked as being on disk.

Operating systems that use pure demand paging as a memory management strategy do so
without preloading any pages into physical memory prior to the commencement of a task.
Demand paging brings a process's address space into memory one step at a time, loading
from disk, as needed, just the parts of the process that are actively being used.

It is useful for executing huge programs that might not fit entirely in memory, or for computers
with limited physical memory. However, if the program accesses many pages that are not
currently in memory, it can also result in a rise in page faults and possible performance
overhead. Operating systems frequently use caching techniques and improved page replacement
algorithms to lessen the negative effects of page faults on system performance as a whole.

Working Process of Demand Paging


Let us understand this with the help of an example. Suppose we want to run a process P which
has four pages P0, P1, P2, and P3. Currently, the page table holds pages P1 and P3.


The operating system's demand paging mechanism follows a few steps in its operation.

 Program Execution: Upon launching a program, the operating system allocates a certain
amount of memory to the program and establishes a process for it.
 Creating Page Tables: To keep track of which program pages are currently in memory
and which are on disk, the operating system makes page tables for each process.
 Handling Page Fault: When a program tries to access a page that isn’t in memory at the
moment, a page fault happens. In order to determine whether the necessary page is on disk,
the operating system pauses the application and consults the page tables.
 Page Fetch: The operating system loads the necessary page into memory by retrieving it
from the disk if it is there.
 The page’s new location in memory is then reflected in the page table.
 Resuming The Program: The operating system picks up where it left off when the
necessary pages are loaded into memory.


 Page Replacement: If there is not enough free memory to hold all the pages a program
needs, the operating system may need to replace one or more pages currently in memory
with pages from the disk. The page replacement algorithm used by the
operating system determines which pages are selected for replacement.
 Page Cleanup: When a process terminates, the operating system frees the memory
allocated to the process and cleans up the corresponding entries in the page tables.
How Demand Paging in OS Affects System Performance?
Demand paging can improve system performance by reducing the memory needed for
programs and allowing multiple programs to run simultaneously. However, if not implemented
properly, it can cause performance issues. When a program needs a part that isn’t in the main
memory, the operating system must fetch it from the hard disk, which takes time and pauses
the program. This can cause delays, and if the system runs out of memory, it will need to
frequently swap pages in and out, increasing delays and reducing performance.

Common Algorithms Used for Demand Paging in OS


Demand paging is a memory management technique that loads parts of a program into memory
only when needed. If a program needs a page that isn’t currently in memory, the system fetches
it from the hard disk. Several algorithms manage this process:

 FIFO (First-In-First-Out): Replaces the oldest page in memory with a new one. It’s
simple but can cause issues if pages are frequently swapped in and out, leading to
thrashing.
 LRU (Least Recently Used): Replaces the page that hasn’t been used for the longest time.
It reduces thrashing more effectively than FIFO but is more complex to implement.
 LFU (Least Frequently Used): Replaces the page used the least number of times. It helps
reduce thrashing but requires extra tracking of how often each page is used.
 MRU (Most Recently Used): Replaces the page that was most recently used. It’s simpler
than LRU but not as effective in reducing thrashing.
 Random: Randomly selects a page to replace. It’s easy to implement but unpredictable in
performance.


Page Fault:
 A page fault occurs when a program attempts to access a block of memory that is not
stored in the physical memory, or RAM.
 The fault notifies the operating system that it must locate the data in virtual memory,
then transfer it from the storage device, such as an HDD or SSD, to the system RAM.

3.11 Page Replacement Algorithms :


There are three types of Page Replacement Algorithms. They are:
o Optimal Page Replacement Algorithm
o First In First Out Page Replacement Algorithm
o Least Recently Used (LRU) Page Replacement Algorithm

1. First In First Out (FIFO)


 This is the simplest page replacement algorithm.
 In this algorithm, the operating system keeps track of all pages in the memory in a
queue, the oldest page is in the front of the queue.
 When a page needs to be replaced, the page at the front of the queue is selected for
removal.

Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.

Total page faults = 6

Initially all slots are empty, so when 1, 3, 0 came they are allocated to the empty slots
—> 3 Page Faults.

When 3 comes, it is already in memory, so —> 0 page faults.

Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e. 1 —>
1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page, i.e. 3 —> 1 page
fault.
Finally, when 3 comes, it is not available, so it replaces 0 —> 1 page fault.
Hence page fault ratio = 6/7 ≈ 0.86
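The FIFO walkthrough above can be checked with a short simulation:

```python
# Count FIFO page faults for a reference string with a fixed number of frames.
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:          # memory full: evict oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))     # 6
```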

2. Optimal Page replacement


In this algorithm, the page that will not be used for the longest duration of time in the future is
replaced.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find the number of page faults.

Total page faults = 6; hence page fault ratio = 6/13 ≈ 0.46

Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page
faults
0 is already there so —> 0 Page fault.


When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time
in the future —> 1 page fault.
0 is already there —> 0 page fault.
4 takes the place of 1 —> 1 page fault.

Now for the further page reference string —> 0 Page fault because they are already available
in the memory.

Optimal page replacement is perfect, but not possible in practice, as the operating system
cannot know future requests. Optimal page replacement is instead used to set up a benchmark
so that other replacement algorithms can be analyzed against it.
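A sketch of the optimal (Belady) policy, which evicts the page whose next use is farthest in the future (or that is never used again); run on the reference string of Example-2 it reproduces the 6 faults counted above:

```python
# Count optimal (Belady) page faults for a reference string.
def optimal_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
        else:
            # Evict the resident page whose next use is farthest away,
            # treating "never used again" as infinitely far.
            future = refs[i + 1:]
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else float("inf"))
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_faults(refs, 4))    # 6
```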

3. Least Recently Used –


In this algorithm, the page that is least recently used is replaced.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page
frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
0 is already there, so —> 0 page fault.
When 3 comes, it takes the place of 7 because 7 is least recently used —> 1 page fault.


0 is already in memory, so —> 0 page fault.
4 takes the place of 1 —> 1 page fault.
For the further page reference string —> 0 page faults, because the pages are already available
in the memory.
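The LRU count above can be checked with a short simulation, using an ordered dictionary as the recency list (oldest entry = least recently used):

```python
# Count LRU page faults for a reference string with a fixed number of frames.
from collections import OrderedDict

def lru_faults(refs, n_frames):
    recency, faults = OrderedDict(), 0
    for page in refs:
        if page in recency:
            recency.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(recency) == n_frames:
                recency.popitem(last=False)  # evict the least recently used page
            recency[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))        # 6
```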

Q. Consider a reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. The number of frames in the
memory is 3. Find out the number of page faults with respect to:
1. Optimal Page Replacement Algorithm
2. FIFO Page Replacement Algorithm
3. LRU Page Replacement Algorithm
Optimal Page Replacement Algorithm

Number of Page Faults in Optimal Page Replacement Algorithm = 5

LRU Page Replacement Algorithm

Number of Page Faults in LRU = 6



FIFO Page Replacement Algorithm

Number of Page Faults in FIFO = 6

