OS - Unit5 - Memorymanagement - Notes


Memory Management

Contents

5.1 Background – Basic hardware, Address binding, Logical versus physical address
Space, Dynamic loading, Dynamic linking and shared libraries
5.2 Swapping
5.3 Contiguous Memory Allocation – Memory mapping and protection, Memory
Allocation, Fragmentation
5.4 Paging – Basic Method, Hardware support, Protection, Shared Pages
5.5 Segmentation – Basic concept, Hardware
5.6 Virtual Memory Management – Background, Demand paging, Performance of
Demand Paging, Page replacement – FIFO, Optimal, LRU, MFU

Introduction:
Computer memory is a collection of data represented in binary format. Main memory refers to the physical, internal memory of the computer; the word main distinguishes it from external mass-storage devices such as disk drives. Main memory is also known as RAM. The computer can directly operate only on data that is in main memory. Therefore, every program we execute and every file we access must be copied from a storage device into main memory.
5.1 Memory Management in Operating System:
Memory management is the process of coordinating and controlling the memory in a computer. It is the functionality of an operating system which handles primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It determines how much memory is to be allocated to each process and decides which process will get memory at what time. It also tracks whenever some memory gets freed or unallocated and updates the status correspondingly.
5.1.2 Need for Memory Management in OS
● This technique helps in placing the programs in memory in such a way so that memory
is utilized at its fullest extent.
● This technique helps to protect different processes from each other so that they do not
interfere with each other's operations.
● It helps to allocate space to different application routines.
● This technique allows the OS to determine how much memory needs to be allocated to each process and to decide which process should get memory at what time.
● It keeps track of each memory location, whether it is free or allocated.
● It keeps track of memory whenever it gets freed or unallocated and updates the status accordingly.

5.1.3 Basic Hardware


Main memory and the registers built into the processor are the only storage that the CPU can access directly. Hence, any instructions in execution and any data being used by those instructions must be in one of these directly accessible storage devices. Registers built into the CPU are generally accessible within one cycle of the CPU clock. Main memory, however, is accessed via a transaction on the memory bus, and a memory access may take many cycles of the CPU clock to complete, in which case the processor needs to stall since it does not have the data required to complete the instruction it is executing.

(Memory layout: Operating System | Process 1 | Process 2 | ...)

Protection of OS from access by user processes and protection of user processes from one
another has to be ensured by hardware.
Each process should have a separate memory space. The protection can be provided by using
two registers – base and limit register. The base register holds the smallest legal physical
memory address and the limit register specifies the size of the range. The base and limit
registers can be loaded only by the OS which uses a special privileged instruction. Since
privileged instructions can be executed only in kernel mode, and since only the OS executes
in kernel mode, only the OS can load the base and limit registers. The OS, executing in kernel mode, is given unrestricted access to both OS memory and user memory.
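To make the base-limit check concrete, here is a minimal sketch in Python; the base and limit values are illustrative assumptions, not taken from the notes:

def is_legal_access(address, base, limit):
    # Hardware-style check: an address is legal only if it lies in
    # [base, base + limit). Any other access traps to the OS.
    return base <= address < base + limit

# Illustrative values: a process loaded at address 300040 with a 120900-byte range.
base, limit = 300040, 120900
print(is_legal_access(300040, base, limit))   # True  (first legal address)
print(is_legal_access(420940, base, limit))   # False (one past the last legal address)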
All the programs are loaded in the main memory for execution.
There are two ways of loading programs into the main memory.
5.1.4 Static loading
Loading the entire program into main memory before the program starts executing is called static loading.
5.1.5 Static linking
Static linking is the process of combining all library modules and other dependent code with the program into a single executable at link (compile) time, before the program starts executing.
5.1.6 Dynamic Loading
Loading a part or routine of the program into main memory only when it is called by the program is known as dynamic loading. This improves the utilization of memory space and thereby enhances performance.
5.1.7 Dynamic Linking
The system links the library modules or dependent programs to the main executing program only when they are required. This mechanism is known as dynamic linking.
5.1.8 Difference between Static and Dynamic Loading
● Static Loading: the program is loaded statically. At compile time the entire program is linked and compiled without the need for any external module or program dependency, and at load time the whole program is loaded into memory and starts its execution.
● Dynamic Loading: only references are provided at compile time, and the actual loading is done at the time of execution; routines of a library are loaded into memory only when they are required by the program.
5.1.9 Difference between Static and Dynamic Linking
● Static Linking: all the modules required by a program are combined into a single executable image, which helps avoid any run-time dependency.
● Dynamic Linking: the actual module or library is not linked into the executable; instead, a reference (stub) to it is kept, and the linking is performed at run time when the module is needed.

5.1.10 Logical Address:


An address generated by the CPU while a program is running is referred to as a logical address. A logical address is virtual, as it does not exist physically; therefore, it is also known as a virtual address. This address is used by the CPU as a reference to access the physical memory location. The term Logical Address Space is used for the set of all logical addresses generated by a program.
5.1.11 Physical Address:
A physical address identifies the physical location of the required data in memory. The user cannot access a physical address directly but can access it through the corresponding logical address. The user program generates logical addresses and runs as if it occupied this logical address space, but the program needs physical memory for its execution; therefore, the logical address must be mapped to a physical address by the MMU (Memory Management Unit) before it is used. The term Physical Address Space is used for the set of all physical addresses corresponding to the logical addresses in a logical address space.
The logical address is mapped to its corresponding physical address by a hardware device called the memory-management unit. With compile-time and load-time address binding the logical and physical addresses are identical, whereas with run-time binding they differ.
5.1.12 Differences between Logical and Physical Address:
Basic: a logical address is generated by the CPU, while a physical address is a location in a memory unit.
Address Space: the Logical Address Space is the set of all logical addresses generated by the CPU in reference to a program; the Physical Address Space is the set of all physical addresses mapped to the corresponding logical addresses.
Visibility: the user can view the logical address of a program, but can never view its physical address.
Generation: the logical address is generated by the CPU; the physical address is computed by the MMU.
Access: the user can use the logical address to access the physical address; the physical address can be accessed only indirectly, never directly.

5.1.13 Address Binding:


Address binding is the process of mapping from one address space to another. A logical address is an address generated by the CPU during execution, whereas a physical address refers to the location in the memory unit that is actually loaded into memory. The user cannot directly access physical addresses, so each logical address is mapped to a physical address.

Logical Address to Physical Address Mapping:

● Compile Time Address Binding:


The first type of address binding is compiling time address binding. This allocates a space in
memory to the machine code of a computer when the program is compiled to an executable
binary file.
● Load Time Address Binding
It is the translation of logical addresses to physical addresses at the time of loading. If memory allocation were fixed when the program is compiled, no program could ever be transferred from one computer to another in its compiled state, because the executable code would contain memory allocations that may already be in use by other programs on the new computer. In this scheme, the program's logical addresses are therefore not bound to physical addresses until the program is invoked and loaded into memory.
● Execution Time Address Binding
Execution time address binding usually applies only to variables in programs and is the most
common form of binding for scripts, which don't get compiled. In this scenario, the program
requests memory space for a variable in a program the first time that variable is encountered
during the processing of instructions in the script. The memory will allocate space to that
variable until the program sequence ends, or unless a specific instruction within the script
releases the memory address bound to a variable.
5.2 Swapping:

Swapping is a mechanism in which a process can be swapped temporarily out of main


memory (or move) to secondary storage (disk) and make that memory available to other
processes. At some later time, the system swaps back the process from the secondary storage
to main memory.

Though swapping usually affects performance, it helps in running multiple large processes in parallel, and for that reason swapping is also known as a technique for memory compaction.
Swapping of processes also depends on priority-based preemptive scheduling. Whenever a process with higher priority arrives, the memory manager swaps out the process with the lowest priority to the disk and swaps the higher-priority process into main memory for execution. When the highest-priority process finishes, the lower-priority process is swapped back into memory and continues to execute. This variant of swapping is termed roll-out, roll-in or swap-out, swap-in.
When a process is swapped out and is swapped in again, does it occupy the same
memory space as it has occupied previously?
This depends on the technique of address binding. If the address binding is done statically, i.e. at compile or load time, then it is difficult to relocate the process in memory when it is swapped in again to resume its execution. If the binding is done at execution time, the process can be swapped into a different location in memory, since the physical addresses are computed during execution.

5.3 Memory Allocation Techniques


To store data and manage processes, we need a large memory and, at the same time, we need to access that data as fast as possible. But if we increase the size of memory, the access time also increases. Moreover, the CPU always generates logical (virtual) addresses, while what we actually want to access is main memory, so every logical address must be translated into a physical address.
The main memory interacts with both the user processes and the operating system, so we need to use it efficiently. Main memory is divided into non-overlapping memory regions called partitions.
The main memory can be broadly allocated in two ways –
1. Contiguous memory allocation
2. Non-Contiguous memory allocation

Memory Management Techniques:
● Contiguous Memory Allocation
  - Fixed Size Partitioning (First Fit, Best Fit, Worst Fit, Next Fit)
  - Variable Size Partitioning (First Fit, Best Fit, Worst Fit, Next Fit)
● Non-Contiguous Memory Allocation
  - Fixed Size Partitioning: Paging
  - Variable Size Partitioning: Segmentation

5.3.1 Fragmentation in OS:


As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to these memory blocks because the blocks are too small, and so the blocks remain unused. This problem is known as fragmentation. Fragmentation is of two types:

External fragmentation
The total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used. External fragmentation occurs when enough memory is available but it is not contiguous.
Internal fragmentation
The memory block assigned to a process is bigger than the process requires, and the leftover portion of the block is left unused because it cannot be given to another process. This happens when fixed-size partitioning is used; internal fragmentation is a consequence of contiguous memory allocation with fixed-size blocks.
5.3.2 Contiguous Memory Allocation:
In contiguous memory allocation, each process is contained in a single contiguous block of
memory. Memory is divided into several fixed-size partitions. Each partition contains exactly
one process. When a partition is free, a process is selected from the input queue and loaded
into it. The free blocks of memory are known as holes. The set of holes is searched to
determine which hole is best to allocate.
Fixed Size Partitioning:
This is the oldest and simplest technique used to put more than one process in main memory. In this partitioning, the number of (non-overlapping) partitions in RAM is fixed, but the sizes of the partitions may or may not be the same. As it is contiguous allocation, no spanning is allowed. The partitions are made before execution, at system configuration time.
Fixed size partitioning (example):
Operating System | Block 1 (4 MB): P1 = 2 MB + 2 MB free | Block 2 (8 MB): P2 = 7 MB + 1 MB free | Block 3 (8 MB): P3 = 5 MB + 3 MB free | Block 4 (16 MB): P4 = 12 MB + 4 MB free

Advantages of Fixed Partitioning –


1. Easy to implement:
The algorithms needed to implement fixed partitioning are simple.
2. Little OS overhead:
Fixed partitioning requires little extra or indirect computational work from the OS.
Disadvantages of Fixed Partitioning –
1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies an entire
partition. This can cause internal fragmentation.
2. External Fragmentation:
The total unused space (as stated above) of various partitions cannot be used to load the
processes even though there is space available but not in the contiguous form (as spanning
is not allowed).
3. Limit process size:
A process whose size is greater than the size of the largest partition in main memory cannot be accommodated, because the partition size cannot be varied according to the size of the incoming process. Hence, a process of size 32 MB is invalid in the above example.
4. Limitation on Degree of Multiprogramming:
Partitions in main memory are made before execution, at system configuration time, so main memory is divided into a fixed number of partitions. Suppose there are n1 partitions in RAM and n2 processes to run; then the condition n2 <= n1 must be satisfied. Having more processes than partitions in RAM is invalid in fixed partitioning.
Variable size Partitioning:
It is a part of Contiguous allocation technique. It is used to alleviate the problem faced
by Fixed Partitioning. In contrast with fixed partitioning, partitions are not made before
the execution or during system configure.
Various features associated with variable partitioning -
1. Initially RAM is empty and partitions are made during the run-time according to
process’s need instead of partitioning during system configure.
2. The size of each partition will be equal to the size of the incoming process.
3. The partition size varies according to the need of the process so that the internal
fragmentation can be avoided to ensure efficient utilization of RAM.
4. Number of partitions in RAM is not fixed and depends on the number of incoming
process and Main Memory’s size.
Operating System | P1 = 2 MB | P2 = 7 MB | P3 = 5 MB | P4 = 12 MB | Free = 10 MB

Variable size / Dynamic Partitioning


Advantages of Variable Size Partitioning –
1. No Internal Fragmentation:
In variable Partitioning, space in main memory is allocated strictly according to the need
of process, hence there is no case of internal fragmentation. There will be no unused space
left in the partition.
2. No restriction on Degree of Multiprogramming:
More number of processes can be accommodated due to absence of internal
fragmentation. A process can be loaded until the memory is empty.
3. No limitation on the size of the process:
In fixed partitioning, a process larger than the largest partition could not be loaded, and the process could not be split, since spanning is invalid in a contiguous allocation technique. In variable partitioning, the process size is not restricted in this way, because the partition size is decided according to the process size.

Disadvantages of Variable Partitioning –


1. Difficult implementation:
Implementing variable partitioning is more difficult than fixed partitioning, as it involves allocating memory at run time rather than at system configuration time.
2. External Fragmentation:
There will be external fragmentation in spite of absence of internal fragmentation.
For example, suppose two processes of size 2 MB and 1 MB complete their execution, leaving two holes of 2 MB and 1 MB. Now suppose a process P5 of size 3 MB arrives. The empty space cannot be allocated to it, because no spanning is allowed in contiguous allocation: the rule says that a process must be contiguously present in main memory to be executed. Hence this results in external fragmentation.
5.3.3 Partition Allocation:
Memory is divided into different blocks or partitions. Each process is allocated according to
the requirement. Partition allocation is an ideal method to avoid internal fragmentation.
Below are the various partition allocation schemes:
 First Fit: the process is allocated to the first partition that is large enough, searching from the beginning of main memory.
 Best Fit: the process is allocated to the smallest free partition that is still large enough.
 Worst Fit: the process is allocated to the largest sufficient freely available partition in main memory.
 Next Fit: similar to First Fit, but the search for the first sufficient partition starts from the point of the last allocation (see the sketch below).
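As a minimal sketch (in Python, with an illustrative hole list and request size that are not from the notes), the four placement strategies can be expressed as follows:

def first_fit(holes, size):
    # Return the index of the first hole large enough, or None.
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # Return the index of the smallest hole that still fits, or None.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    # Return the index of the largest hole, provided it fits, or None.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

def next_fit(holes, size, start):
    # Like first fit, but scan forward (with wrap-around) from the
    # position of the previous allocation.
    n = len(holes)
    for k in range(n):
        i = (start + k) % n
        if holes[i] >= size:
            return i
    return None

# Illustrative free-hole sizes (in KB) and a 212 KB request:
holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# -> 1 (the 500 KB hole), 3 (the 300 KB hole), 4 (the 600 KB hole)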
5.3.4 Non-Contiguous Allocation in Operating System:
Paging and segmentation are the two ways which allow a process's physical address space to be non-contiguous. This has the advantage of reducing memory wastage, but it increases overhead due to address translation, and it slows execution because time is consumed in translating addresses. In non-contiguous allocation, the operating system needs to maintain a table, called the page table, for each process, which contains the base address of each block acquired by the process in memory. In non-contiguous memory allocation, different parts of a process are allocated to different places in main memory. Spanning is allowed, which is not possible in techniques like dynamic or static contiguous memory allocation. That is why paging is needed to ensure effective memory allocation; paging is done to remove external fragmentation.
5.3.5 Difference between Contiguous and Non-contiguous Memory Allocation:
1. Contiguous memory allocation allocates consecutive blocks of memory to a file/process, while non-contiguous memory allocation allocates separate blocks of memory to a file/process.
2. Contiguous allocation is faster in execution; non-contiguous allocation is slower in execution.
3. Contiguous allocation is easier for the OS to control; non-contiguous allocation is more difficult for the OS to control.
4. With contiguous allocation the overhead is minimal, since few address translations are needed while executing a process; with non-contiguous allocation there is more overhead because more address translations are required.
5. Both internal and external fragmentation occur with contiguous memory allocation; only external fragmentation occurs with the non-contiguous method.
6. Contiguous allocation includes single-partition allocation and multi-partition allocation; non-contiguous allocation includes paging and segmentation.
7. Memory is wasted with contiguous allocation; there is no comparable memory wastage with non-contiguous allocation.
8. In contiguous memory allocation, swapped-in processes are placed in the originally allocated space; in non-contiguous memory allocation, swapped-in processes can be placed anywhere in memory.
5.3.6 Paging in Operating System:
Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory. This scheme permits the physical address space of a process to be non –
contiguous.
● Logical Address or Virtual Address (represented in bits): An address generated by the
CPU
● Logical Address Space or Virtual Address Space( represented in words or bytes): The
set of all logical addresses generated by a program
● Physical Address (represented in bits): An address actually available on memory unit
● Physical Address Space (represented in words or bytes): The set of all physical
addresses corresponding to the logical addresses
The paging technique divides the physical memory (main memory) into fixed-size blocks known as frames, and divides the logical memory into blocks of the same size known as pages. Each process is divided into parts, where the size of each part is the same as the page size. A frame has the same size as a page; a frame is basically a place where a (logical) page can be (physically) placed.

 Pages of a process are brought into the main memory only when there is a
requirement otherwise they reside in the secondary storage.
 One page of a process is stored in one of the frames of memory. The pages of a process may be stored at different, non-contiguous locations in memory; any free frame can be used.
 The CPU always generates a logical address.
 In order to access the main memory always a physical address is needed.

The logical address generated by CPU is divided into two parts:

1. Page Number (p): is used to specify the specific page of the process from which the
CPU wants to read the data and it is also used as an index to the page table.
2. Page offset (d): is mainly used to specify the specific word on the page that the CPU
wants to read.
The Physical Address is divided into two parts:

1. Frame number (f): identifies the frame of the Physical Address Space in which the page resides; the number of bits required depends on the number of frames.
2. Frame offset (d): specifies the particular word within the frame; the number of bits required depends on the frame size of the Physical Address Space.

| Frame Number (f) | Frame Offset (d) |

Page Table:
The page table has page table entries (PTEs), where each entry stores a frame number and optional status bits (such as protection bits). Many of the status bits are used by the virtual memory system. The most important field in a PTE is the frame number.
A page table entry has the following information:

| Frame Number | Present/Absent | Protection | Reference | Caching | Dirty |
(all fields other than the Frame Number are optional status information)

1. Frame Number – It gives the frame number in which the current page you are looking for
is present. The number of bits required depends on the number of frames. Frame bit is also
known as address translation bit.
2. Number of frames = (size of physical memory) / (frame size); the frame-number field therefore needs log2(number of frames) bits.

3. Present/Absent bit – Present or absent bit says whether a particular page you are looking
for is present or absent. In case if it is not present, that is called Page Fault. It is set to 0 if
the corresponding page is not in memory. Used to control page fault by the operating
system to support virtual memory. Sometimes this bit is also known as valid/invalid bits.
4. Protection bit – The protection bit specifies what kind of protection is placed on the page. These bits control the protection of the page frame (read, write, etc.).
5. Referenced bit – Referenced bit will say whether this page has been referred in the last
clock cycle or not. It is set to 1 by hardware when the page is accessed.
6. Caching enabled/disabled – Sometimes fresh data is required. Suppose the user is typing input from the keyboard and the program must act on that input: the latest information is in main memory, and a cached copy of the page would show stale data. Whenever such freshness (consistency with the most recent data) is required, we do not want to go through the cache or several levels of the memory hierarchy, so caching is disabled for that page. This bit therefore enables or disables caching of the page.
7. Modified bit – The modified bit says whether the page has been modified, i.e. whether something has been written onto the page. If a page is modified, then whenever it is replaced by another page its modified contents must be written back (saved) to the hard disk. The bit is set to 1 by hardware on a write access to the page, and it is used to avoid writing the page back when it is swapped out unmodified. This bit is also called the dirty bit.
Thus page table mainly provides the corresponding frame number (base address of the
frame) where that page is stored in the main memory.
The frame number is combined with the page offset and forms the required physical
address. Page table contains the base address of each page in the Physical memory. The
base address is then combined with the page offset in order to define the physical memory
address which is then sent to the memory unit.
So, the physical address consists of two parts:
1. Page offset(d)
2. Frame Number(f)
Where, The Frame number is used to indicate the specific frame where the required page is
stored and Page Offset indicates the specific word that has to be read from that page.
The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture.
If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m - n bits of the logical address designate the page number and the n low-order bits designate the page offset. The logical address is as follows:

| Page Number (p): m - n bits | Offset (d): n bits |

where p indicates the index into the page table, and d indicates the displacement within the page.
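The bit-level split and page-table lookup described above can be sketched as follows; the page size, page-table contents and address used here are illustrative assumptions, not values from the notes:

PAGE_SIZE = 4096                 # 2^12 bytes, so n = 12 offset bits (assumed)
OFFSET_BITS = 12

# Hypothetical page table: index = page number, value = frame number.
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_address):
    # Split the logical address into (p, d), look p up in the page table,
    # and recombine the frame number with the offset.
    p = logical_address >> OFFSET_BITS          # high-order bits: page number
    d = logical_address & (PAGE_SIZE - 1)       # low-order n bits: offset
    f = page_table[p]                           # page-table lookup (one memory access)
    return (f << OFFSET_BITS) | d               # physical address

# Logical address 0x2ABC lies in page 2 at offset 0xABC; page 2 maps to frame 7.
print(hex(translate(0x2ABC)))   # -> 0x7abc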
Paging thus translates a logical address into a physical address. The PTBR (page table base register) holds the base address of the page table of the current process. The PTBR is a processor register and is managed by the operating system; commonly, each process running on a processor needs its own logical address space.
But there is a problem with this approach: the time required to access a user memory location. To find a location, we must first index into the page table, using the value in the PTBR offset by the page number of the reference; this requires a memory access. It yields the frame number, which is combined with the page offset to produce the actual address, and only then can we access the desired place in memory.
With this scheme, two memory accesses are needed to access a byte (one for the page-table entry and one for the byte itself), so memory access is slowed by a factor of two.
Advantages of Paging:
● Paging allows parts of a single process to be stored in a non-contiguous fashion.
● With the help of paging, the problem of external fragmentation is solved.
● Paging is one of the simplest algorithms for memory management.
Disadvantages of Paging:
● In paging, the page table may consume a considerable amount of extra memory.
● Internal fragmentation is caused by this technique.
● There is an increase in time taken to fetch the instruction since now two memory
accesses are required.

5.3.7 Translation look-aside buffer (TLB)

The standard solution to the above problem is to use a special, small, fast-lookup hardware cache commonly known as the translation look-aside buffer (TLB).
● TLB is associative and high-speed memory.
● Each entry in the TLB mainly consists of two parts: a key(that is the tag) and a value.
● When associative memory is presented with an item, then the item is compared with
all keys simultaneously. In case if the item is found then the corresponding value is
returned.
● The search with TLB is fast though the hardware is expensive.
● The number of entries in the TLB is small and generally lies in between 64 and 1024.

TLB is used with Page Tables in the following ways:

The TLB contains only a few of the page-table entries. Whenever the logical address is
generated by the CPU then its page number is presented to the TLB.
● If the page number is found (a TLB hit), its frame number is immediately available and is used to access memory. The whole task may take less than 10 percent longer than an unmapped memory reference would.
● If the page number is not in the TLB (a TLB miss), a memory reference to the page table must be made. When the frame number is obtained, it is used to access memory. In addition, the page number and frame number are added to the TLB so that they will be found quickly on the next reference.
● In case if the TLB is already full of entries then the Operating system must select
one for replacement.
● TLB allows some entries to be wired down, which means they cannot be removed
from the TLB. Typically TLB entries for the kernel code are wired down.
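A rough sketch of the TLB-then-page-table lookup, together with the usual effective-access-time estimate, is given below; the TLB contents, hit ratio and timing values are assumed example numbers, not figures from the notes:

# Hypothetical contents; real TLB hardware searches all entries in parallel
# (associatively), which we simply model here with a dictionary.
tlb = {2: 7}                      # page 2 -> frame 7 is cached in the TLB
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def lookup_frame(page_number):
    # Try the TLB first; on a TLB miss, fall back to the page table in
    # memory and insert the translation into the TLB for next time.
    if page_number in tlb:                 # TLB hit
        return tlb[page_number]
    frame = page_table[page_number]        # TLB miss: one extra memory access
    tlb[page_number] = frame               # (a full TLB would evict an entry)
    return frame

# Effective access time with a TLB (illustrative numbers):
hit_ratio, tlb_time, mem_time = 0.90, 20, 100   # ns
eat = hit_ratio * (tlb_time + mem_time) + (1 - hit_ratio) * (tlb_time + 2 * mem_time)
print(lookup_frame(2), lookup_frame(1), eat)    # 7, 2, ~130 ns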
5.4 Segmentation:
Segmentation supports contiguous as well as non-contiguous memory allocation. Here memory is divided into variable-sized pieces called segments. It is another memory management scheme and it generally supports the user's view of memory. The logical address space is basically a collection of segments; each segment has a name and a length. Basically, a process is divided into segments. Like paging, segmentation divides the memory, but with a difference: paging divides memory into fixed-size blocks, whereas segmentation divides it into variable-sized segments, which are then mapped into the logical address space.
A program is basically a collection of segments, and a segment is a logical unit such as:
● Main program
● Procedure
● Function
● Method
● Object
● Local variables and global variables
● Symbol table
● Common block
● Stack
● Arrays
5.4.1 Types of Segmentation:
● Virtual Memory Segmentation With this type of segmentation, each process is
segmented into n divisions and the most important thing is they are not segmented all
at once.
● Simple Segmentation With this type, each process is segmented into n divisions, and all the divisions are segmented at once (at load time), although they may be placed non-contiguously, i.e. scattered in memory.
5.4.2 Characteristics of Segmentation
Some characteristics of the segmentation technique are as follows:
● The Segmentation partitioning scheme is variable-size.
● Partitions of the secondary memory are commonly known as segments.
● Partition size mainly depends upon the length of modules.
● Thus with the help of this technique, secondary memory and main memory are divided
into unequal-sized partitions.
5.4.3 Need of Segmentation
One of the important drawbacks of memory management in the operating system is the separation between the user's view of memory and the actual physical memory, and paging is a technique that enforces this separation: the user's view is simply mapped onto physical storage, and this mapping allows physical and logical memory to be differentiated.
The operating system may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time; the operating system does not care about the user's view of the process. This can reduce the system's efficiency. Segmentation is a better technique in this respect, because it divides the process into segments that correspond to the user's view.
5.4.4 Basic Method
A computer system that uses segmentation has a logical address space that can be viewed as a collection of segments. The size of a segment is variable, i.e. it may grow or shrink. During execution each segment has a name and a length, and an address specifies both the segment name and the displacement within the segment. The user therefore specifies each address by two quantities: a segment name and an offset. For simplicity of implementation, segments are numbered, and a segment number is used rather than a segment name.
Thus the logical address consists of two tuples:
<Segment-number, offset>
Segment Number (s): identifies the segment being referenced; it is used as an index into the segment table.
Offset (d): the displacement within the segment; it must be smaller than the segment's length (limit).
5.4.5 Segmentation Architecture
Segment Table:
A Table that is used to store the information of all segments of the process is commonly
known as Segment Table. Generally, there is no simple relationship between logical addresses
and physical addresses in this scheme.
● The mapping of a two-dimensional Logical address into a one-dimensional Physical
address is done using the segment table.
● This table is mainly stored as a separate segment in the main memory.
In the segment table each entry has:
1. Segment Base/base address: The segment base mainly contains the starting physical
address where the segments reside in the memory.
2. Segment Limit: The segment limit is mainly used to specify the length of the segment.
Segment Table Base Register (STBR): this register points to the location of the segment table in memory.
Segment Table Length Register (STLR): this register indicates the number of segments used by a program. A segment number s is legal only if s < STLR.

Segment Table (example):
Segment 0: Limit 1400, Base 1400
Segment 1: Limit 400, Base 6200
Segment 2: Limit 1100, Base 4400
Segment 3: Limit 1300, Base 4800

5.4.6 Segmentation Hardware:

The logical address generated by CPU consists of two parts:


1. Segment Number (s): it is used as an index into the segment table.
2. Offset (d): it must lie between 0 and the segment limit. If the offset exceeds the segment limit, a trap (addressing error) is generated.
Thus, for a legal offset, segment base + offset = address in physical memory, and the segment table is basically an array of base-limit register pairs. This check and translation are performed by the segmentation hardware.
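A minimal sketch of this check-and-translate step, reusing the example segment table above (the trap is modelled here as a Python exception):

# (limit, base) pairs indexed by segment number, as in the example table above.
segment_table = {0: (1400, 1400), 1: (400, 6200), 2: (1100, 4400), 3: (1300, 4800)}

def translate(segment_number, offset):
    # Return base + offset if the offset is within the segment limit,
    # otherwise trap to the operating system (addressing error).
    limit, base = segment_table[segment_number]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # segment 2 -> 4400 + 53 = 4453
# translate(1, 500) would raise MemoryError, since 500 exceeds limit 400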
5.4.7 Advantages of Segmentation:
● The segment table occupies less space as compared to the paging table.
● There is no Internal Fragmentation.
● Segmentation generally allows us to divide the program into modules that provide
better visualization.
● Segments are of variable size.
5.4.8 Disadvantages of Segmentation:
● Maintaining a segment table for each process leads to overhead
● This technique is expensive.
● The time taken to fetch an instruction increases, since two memory accesses are now required.
● Segments are of unequal size in segmentation, and thus are not as suitable for swapping.
● This technique leads to external fragmentation: as processes are loaded into and removed from main memory, the free space is broken down into smaller pieces, which results in a lot of wasted memory.

5.4.9 Memory Protection:


Memory protection is a mechanism by which we control memory access rights on a computer. Its main aim is to prevent a process from accessing memory that has not been allocated to it. It thus prevents a bug within one process from affecting other processes or the operating system itself; instead, a segmentation fault or storage-violation exception is sent to the offending process, generally killing that process.

5.4.10 Difference between Paging & Segmentation:


Memory size: in paging, a process address space is broken into fixed-size blocks called pages; in segmentation, an address space is broken into varying-size blocks called sections (segments).
Accountability: in paging, the operating system divides the memory into pages; in segmentation, the compiler is responsible for calculating the segment size, the virtual address and the actual address.
Size: page size is ultimately determined by the available memory (hardware); section size is determined by the user.
Speed: paging is faster in terms of memory access; segmentation as a whole is slower than paging.
Fragmentation: paging may cause internal fragmentation, as some pages may go underutilized; segmentation may cause external fragmentation, as some memory blocks may not be used at all.
Logical address: in paging, the logical address is divided into page number and page offset; in segmentation, it is divided into section number and section offset.
Data storage: the page table stores the page data; the segmentation table stores the segment data.

5.6 Virtual Memory in Operating System:


Virtual Memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory. The addresses a program may use to reference
memory are distinguished from the addresses the memory system uses to identify physical
storage sites, and program generated addresses are translated automatically to the
corresponding machine addresses. Virtual memory is a storage scheme that provides the user with the illusion of having a very large main memory. This is done by treating a part of secondary memory as if it were main memory.
In this scheme, the user can load processes bigger than the available main memory, under the illusion that enough memory is available to load the whole process. Instead of loading one big process entirely into main memory, the operating system loads parts of more than one process into main memory.
By doing this, the degree of multiprogramming will be increased and therefore, the CPU
utilization will also be increased.
How Virtual Memory Works?
In this scheme, whenever some pages need to be loaded into main memory for execution and there is not enough memory available for all of them, then instead of preventing pages from entering main memory, the OS searches for areas of RAM that have been least recently used or are not currently referenced, and copies them into secondary memory to make space for the new pages in main memory.
Since all of this happens automatically, the computer appears to have unlimited RAM.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main-storage locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.

1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be swapped
in and out of main memory such that it occupies different places in main memory at
different times during the course of execution.
2. A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
If these characteristics are present then, it is not necessary that all the pages or segments are
present in the main memory during execution. This means that the required pages need to be
loaded into memory whenever required. Virtual memory is implemented using Demand
Paging or Demand Segmentation.
Advantages of Virtual Memory:
1) The degree of Multiprogramming will be increased.
2) User can run large application with less real RAM.
3) There is no need to buy more memory RAMs.
Disadvantages of Virtual Memory
1) The system becomes slower since swapping takes time.
2) It takes more time in switching between applications.
3) The user will have the lesser hard disk space for its use.
5.6.1 Demand Paging:
According to the concept of virtual memory, in order to execute a process only a part of it needs to be present in main memory, which means that only a few of its pages will be in main memory at any time.
However, deciding which pages should be kept in main memory and which should be kept in secondary memory is difficult, because we cannot say in advance which page a process will require at a particular time.
Therefore, to overcome this problem, a concept called demand paging is introduced. It suggests keeping all pages in secondary memory until they are required; in other words, do not load any page into main memory until it is needed. Whenever a page is referred to for the first time, it has to be brought in from secondary memory. Demand paging is a popular method of virtual memory management in which the least-used pages of a process are kept in secondary memory, and the process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging.
5.6.2 Thrashing:
If the number of page faults is close to the number of page references, or the number of page faults is so high that the CPU remains busy just reading pages from secondary memory, then the effective access time approaches the time taken by the CPU to read one word from secondary memory, which is very high. This situation is called thrashing.
Demand Paging includes the following steps:
1. If CPU try to refer a page that is currently not available in the main memory, it
generates an interrupt indicating memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed,
the OS must bring the required page into the memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from logical address space to physical address
space. The page replacement algorithms are used for the decision making of replacing
the page in physical address space.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place
the process back into ready state.
Hence whenever a page fault occurs these steps are followed by the operating system
and the required page is brought into memory.
5.6.3 Advantages of Demand Paging:
More processes may be maintained in the main memory, because we are going to load only
some of the pages of any particular process, there is room for more processes. This leads to
more efficient utilization of the processor because it is more likely that at least one of the
more numerous processes will be in the ready state at any particular time.
● A process may be larger than all of main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in main
memory as required.
● It allows greater multiprogramming levels by using less of the available (primary)
memory for each process.
5.6.4 Page Fault Service Time:
The time taken to service the page fault is called as page fault service time. The page fault
service time includes the time taken to perform all the above six steps.
Let m = main memory access time,
    s = page fault service time,
    p = page fault rate.
Then: Effective memory access time = p * s + (1 - p) * m
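A quick worked example with assumed values (these numbers are illustrative, not from the notes):

m = 200e-9     # main-memory access time: 200 ns (assumed)
s = 8e-3       # page-fault service time: 8 ms (assumed)
p = 0.001      # one page fault per 1000 accesses (assumed)

eat = p * s + (1 - p) * m
print(eat)     # about 8.2e-06 s, i.e. roughly 8.2 microseconds per access

Even a small page-fault rate dominates the effective access time, which is why keeping the fault rate low matters so much.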
5.6.5 Page Replacement Algorithms in Operating Systems
In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page needs to be replaced when new page comes in.
Page Fault:
If the referred page is not present in the main memory then there will be a miss and the
concept is called Page miss or page fault.
The CPU has to access the missed page from the secondary memory. If the number of page
fault is very high then the effective access time of the system will become very high.
A page fault happens when a running program accesses a memory page that is mapped into
the virtual address space, but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen. In
case of page fault, Operating System might have to replace one of the existing pages with the
newly needed page. Different page replacement algorithms suggest different ways to decide
which page to replace. The target for all algorithms is to reduce the number of page faults.
Page Hit – If the CPU tries to retrieve a needed page from main memory and that page is present in main memory (RAM), it is known as a PAGE HIT.
Page Miss – If the required page is not present in RAM, it is known as a PAGE MISS.
Page Fault Rate – The rate at which page faults occur is known as the PAGE FAULT RATE, usually measured per second.
Page Replacement Algorithms
5.6.6 First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
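The behaviour can be sketched with a short simulation (a minimal sketch; the function and variable names are our own, not from the notes):

from collections import deque

def fifo_faults(reference_string, num_frames):
    # Count page faults under FIFO replacement: on a miss, evict the page
    # that has been in memory the longest (the front of the queue).
    frames = deque()          # front = oldest loaded page
    faults = 0
    for page in reference_string:
        if page not in frames:            # page fault (miss)
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()          # evict the oldest page
            frames.append(page)
        # on a hit, FIFO changes nothing
    return faults

print(fifo_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1], 3))   # -> 6 (Example-1 below)
print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # -> 9 (Example-2 below)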
Example-1 Consider page reference string 4,7,6,1,7,6,1,2,7,2,7,1 with 3 page frames. Find
number of page faults.

Page Reference String: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1

Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 1 1 1 1 1 1 1 1 1
Frame 2 7 7 7 7 7 7 2 2 2 2 2
Frame 3 6 6 6 6 6 6 7 7 7 7
Hit/Miss M M M M H H H M M H H H
Page Faults = 6
Example-2 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 page frames.
Find number of page faults.
Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 4 4 4 5 5 5 5 5 5
Frame 2 2 2 2 1 1 1 1 1 3 3 3
Frame 3 3 3 3 2 2 2 2 2 4 4
Hit/Miss M M M M M M M H H M M H
Page faults = 9

Example-3 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 4 page frames.


Find number of page faults.
Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 5 5 5 5 4 4
Frame 2 2 2 2 2 2 2 1 1 1 1 5
Frame 3 3 3 3 3 3 3 2 2 2 2
Frame 4 4 4 4 4 4 4 3 3 3
Hit/Miss M M M M H H M M M M M M
Page Fault = 10
Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page faults
when increasing the number of page frames while using the First in First Out (FIFO) page
replacement algorithm. For above example, if we consider 3 frame size, we get total 9 page
faults, but if we increase frame size to 4, we get 10 page faults. Bélády’s anomaly is the
name given to the phenomenon where increasing the number of page frames results in an
increase in the number of page faults for a given memory access pattern.

Example-4 Consider page reference string 4,7,6,1,7,6,1,2,7,2,7,1 with 4 page frames. Find
number of page faults.
Page Reference String: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1

Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 4 4 4 4 2 2 2 2 2
Frame 2 7 7 7 7 7 7 7 7 7 7 7
Frame 3 6 6 6 6 6 6 6 6 6 6
Frame 4 1 1 1 1 1 1 1 1 1
Hit/Miss M M M M H H H M H H H H
Page faults = 5

5.6.7 Least Recently Used (LRU)


Least Recently Used page replacement algorithm keeps track of page usage over a short
period of time. It works on the idea that the pages that have been most heavily used in the past
are most likely to be used heavily in the future too. In LRU, whenever page replacement
happens, the page which has not been used for the longest amount of time is replaced.
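A minimal LRU sketch (an ordered dictionary serves as the recency list; the names are our own):

from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    # Count page faults under LRU: on a miss with all frames full, evict
    # the page whose most recent use is furthest in the past.
    frames = OrderedDict()    # keys ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 3))   # -> 9 (Example-1 below)
print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))         # -> 10 (Example-2 below)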
Example-1 Consider page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 3 page
frames. Find number of page faults.
Page Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
Request 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 2 2 2 2 4 4 4 0 0 0 0
Frame 2 0 0 0 0 0 0 0 0 3 3 3 3 3
Frame 3 1 1 1 3 3 3 2 2 2 2 2 2
Hit/Miss M M M M H M H M M M M H H H
Page Faults = 9

Example-2 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 page frames.


Find number of page faults.
Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 4 4 4 5 5 5 3 3 3
Frame 2 2 2 2 1 1 1 1 1 1 4 4
Frame 3 3 3 3 2 2 2 2 2 2 5
Hit/Miss M M M M M M M H H M M M
Page faults = 10

Example-3 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 4 page frames. Find


number of page faults.

Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 1 1 1 1 1 5
Frame 2 2 2 2 2 2 2 2 2 2 2 2
Frame 3 3 3 3 3 5 5 5 5 4 4
Frame 4 4 4 4 4 4 4 3 3 3
Hit/Miss M M M M H H M H H M M M
Page Fault = 8

Example-4 Consider page reference string 4,7,6,1,7,6,1,2,7,2,7,1 with 4 page frames. Find
number of page faults.
Page Reference String: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1

Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 4 4 4 4 2 2 2 2 2
Frame 2 7 7 7 7 7 7 7 7 7 7 7
Frame 3 6 6 6 6 6 6 6 6 6 6
Frame 4 1 1 1 1 1 1 1 1 1
Hit/Miss M M M M H H H M H H H H
Page faults = 5

5.6.8 Optimal Page Replacement:

Optimal Page Replacement algorithm is the best page replacement algorithm as it gives the
least number of page faults. It is also known as OPT, clairvoyant replacement algorithm, or
Belady’s optimal page replacement policy. In this algorithm, pages are replaced which
would not be used for the longest duration of time in the future, i.e., the pages in the memory
which are going to be referred farthest in the future are replaced.

This algorithm was introduced long ago and is difficult to implement because it requires future knowledge of the program's behaviour. However, it is possible to approximate optimal page replacement on a second run by using the page reference information collected on the first run. If two or more resident pages are never requested again in the future, the FIFO strategy is applied among them.
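A sketch of this look-ahead policy (FIFO order breaks ties among pages that are never referenced again, matching the rule above; the names are our own):

def opt_faults(reference_string, num_frames):
    # Count page faults under optimal (OPT) replacement: on a miss, evict
    # the resident page whose next use lies furthest in the future; pages
    # never used again are evicted first, oldest-loaded first.
    frames = []               # kept in load (FIFO) order for tie-breaking
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                       # hit
        faults += 1
        if len(frames) == num_frames:
            future = reference_string[i + 1:]
            # Distance to next use; pages never used again get infinity.
            # Ties fall back to load order, because max() keeps the first
            # maximal element it encounters.
            victim = max(frames,
                         key=lambda q: future.index(q) if q in future else float('inf'))
            frames.remove(victim)
        frames.append(page)
    return faults

print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 3))   # -> 7 (Example-1 below)
print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))         # -> 7 (Example-2 below)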

Example-1 Consider page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 3 page


frames. Find number of page faults.

Page Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3

Request 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 2 2 2 2 2 2 2 2 2 2 2
Frame 2 0 0 0 0 0 0 4 4 4 0 0 0 0
Frame 3 1 1 1 3 3 3 3 3 3 3 3 3
Hit/Miss M M M M H M H M H H M H H H
Page Faults = 7

Example-2 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 page frames.


Find number of page faults.

Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 1 1 1 3 3 3
Frame 2 2 2 2 2 2 2 2 2 2 4 4
Frame 3 3 4 4 4 5 5 5 5 5 5
Hit/Miss M M M M H H M H H M M H
Page faults = 7

Example-3 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 4 page frames.


Find number of page faults.

Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 1 1 1 1 4 4
Frame 2 2 2 2 2 2 2 2 2 2 2 2
Frame 3 3 3 3 3 3 3 3 3 3 3
Frame 4 4 4 4 5 5 5 5 5 5
Hit/Miss M M M M H H M H H H M H
Page Fault = 6

Example-4 Consider page reference string 4,7,6,1,7,6,1,2,7,2,7,1 with 4 page frames. Find
number of page faults.
Page Reference String: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1
Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 4 4 4 4 2 2 2 2 2
Frame 2 7 7 7 7 7 7 7 7 7 7 7
Frame 3 6 6 6 6 6 6 6 6 6 6
Frame 4 1 1 1 1 1 1 1 1 1
Hit/Miss M M M M H H H M H H H H
Page faults = 5

5.6.9 Most Recently Used (MRU)


In access patterns where the page just used will not be needed again for the longest time, replacing the most recently used page approximates the optimal choice. This gives the Most Recently Used (MRU) page replacement algorithm: on a page fault, the page that was referenced most recently is replaced.
Example-1 Consider page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 3 page
frames. Find number of page faults.
Page Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
Request 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 7 7 7 7 7 7 7 7 7 7 7
Frame 2 0 0 0 0 3 0 4 4 4 4 4 4 4
Frame 3 1 2 2 2 2 2 2 3 0 3 2 3
Hit/Miss M M M M H M M M H M M M M M
Page Faults = 12

Example-2 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 page frames.

Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5


Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 1 1 2 3 3 3
Frame 2 2 2 2 2 2 5 5 5 5 5 5
Frame 3 3 4 4 4 4 4 4 4 4 4
Hit/Miss M M M M H H M H M M H H
Page faults = 7

Example-3 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 4 page frames.


Find number of page faults.
Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 1 1 2 2 2 2
Frame 2 2 2 2 2 2 5 5 5 5 5 5
Frame 3 3 3 3 3 3 3 3 3 3 3
Frame 4 4 4 4 4 4 4 4 4 4
Hit/Miss M M M M H H M H M H H H
Page Fault = 6
Example-4 Consider page reference string 4,7,6,1,7,6,1,2,7,2,7,1 with 4 page frames. Find
number of page faults.

Page Reference String: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1

Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 4 4 4 4 4 4 4 4 4
Frame 2 7 7 7 7 7 7 7 7 7 7 1
Frame 3 6 6 6 6 6 6 6 6 6 6
Frame 4 1 1 1 1 2 2 2 2 2
Hit/Miss M M M M H H H M H H H M
Page faults = 6

5.6.10 Least frequently Used (LFU)

● The page with the smallest count is the one which will be selected for replacement.

● This algorithm suffers from the situation in which a page is used heavily during the
initial phase of a process, but then is never used again.

● A frequency (reference) count is maintained for each page throughout execution. On every page miss, the page with the lowest frequency count is chosen as the victim; if several pages share the same lowest count, FIFO is used among them (see the sketch below).
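A minimal sketch of this counting policy, with the victim's count reset to zero as in the worked examples below (the function and variable names are our own):

def lfu_faults(reference_string, num_frames):
    # Count page faults under LFU: evict the resident page with the lowest
    # reference count; break ties by FIFO (oldest-loaded page first).
    # The victim's count is reset to zero, as in the worked examples.
    frames = []                       # kept in load (FIFO) order
    count = {}                        # page -> reference count
    faults = 0
    for page in reference_string:
        if page in frames:
            count[page] += 1                               # hit
            continue
        faults += 1
        if len(frames) == num_frames:
            victim = min(frames, key=lambda q: count[q])   # lowest count; ties -> oldest
            frames.remove(victim)
            count[victim] = 0                              # reset the victim's count
        frames.append(page)
        count[page] = count.get(page, 0) + 1
    return faults

print(lfu_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 3))   # -> 8 (Example-1 below)
print(lfu_faults([1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5], 3))         # -> 8 (Example-2 below)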

Example-1 Consider page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 3


page frames. Find number of page faults.
Page Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
Request 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 2 2 2 2 4 4 3 3 3 3 3
Frame 2 0 0 0 0 0 0 0 0 0 0 0 0 0
Frame 3 1 1 1 3 3 3 2 2 2 2 2 2
Freq. Count: 7=1 | 0=1 | 1=1 | 2=1 & 7=0 | 0=2 | 3=1 & 1=0 | 0=3 | 4=1 & 2=0 | 2=1 & 3=0 | 3=1 & 4=0 | 0=4 | 3=2 | 2=2 | 3=3
Hit/ Miss M M M M H M H M M M H H H H
Page Faults: 8
For the first misses, the frequency counts of pages 7, 0 and 1 are each 1. For the request for page 2, page 7 is chosen as the victim: all counts are equal, so the FIFO strategy is used, and when page 7 becomes the victim its frequency count is reset to 0. The next request, page 0, is already present in a frame, so it is a hit and the counts become 2=1, 0=2, 1=1. For the next request, page 3: of the pages 2, 0 and 1, page 0 cannot become the victim because its count is 2; between the remaining pages 2 and 1 the FIFO strategy applies, so page 1 is the next victim, and so on.

Example-2 Consider page reference string 1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5 with 3


page frames. Find number of page faults.

Page Reference String: 1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5

Request 1 2 1 3 4 2 3 1 2 3 4 5
Frame 1 1 1 1 1 1 1 1 1 1 1 1 1
Frame 2 2 2 2 4 4 3 3 3 3 3 3
Frame 3 3 3 2 2 2 2 2 4 5
Freq. Count: 1=1 | 2=1 | 1=2 | 3=1 | 4=1 & 2=0 | 2=1 & 3=0 | 3=1 & 4=0 | 1=3 | 2=2 | 3=2 | 4=1 & 2=0 | 5=1 & 4=0
Hit/ Miss M M H M M M M H H H M M
Page faults: 8
5.6.11 Most frequently Used (MFU)
● This algorithm is based on the argument that the page with the smallest count was
probably just brought in and has yet to be used.

● On every page miss, the page with the highest frequency count is chosen as the victim; if several pages share the same highest count, FIFO is used among them.

Example-1 Consider page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 3 page frames. Find number of page faults.

Page Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3

Request 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 2 2 2 0 0 2 2 2 2 2 2
Frame 2 0 0 0 0 3 3 3 3 3 0 3 3 3
Frame 3 1 1 1 1 1 4 4 4 4 4 4 4
Freq. Count 7=1 | 0=1 | 1=1 | 2=1 & 7=0 | 0=2 | 3=1 & 0=0 | 0=1 & 2=0 | 4=1 & 1=0 | 2=1 & 0=0 | 3=2 | 0=1 & 3=0 | 3=1 & 0=0 | 2=2 | 3=2
Hit/ Miss M M M M H M M M M H M M H H
Page Faults = 10
Example-2 Consider page reference string 1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5 with 3 page frames. Find number of page faults.

Page Reference String: 1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5

Request 1 2 1 3 4 2 3 1 2 3 4 5
Frame 1 1 1 1 1 4 4 4 4 4 3 3 3
Frame 2 2 2 2 2 2 2 1 1 1 4 4
Frame 3 3 3 3 3 3 2 2 2 5
Freq. Count 1=1 | 2=1 | 1=2 | 3=1 | 4=1 & 1=0 | 2=2 | 3=2 | 1=1 & 2=0 | 2=1 & 3=0 | 3=1 & 4=0 | 4=1 & 1=0 | 5=1 & 2=0
Hit/ Miss M M H M M H H M M M M M
Page faults: 9
Solved Examples:

Q.1 Consider a reference string: 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4


Number of frames in the memory is 3. Find out the number of page faults with respect to:
1. FIFO Page Replacement Algorithm
2. LRU Page Replacement Algorithm
3. Optimal Page Replacement Algorithm
Solution :
i) FIFO
Request 3 2 1 0 3 2 4 3 2 1 0 4
Frame 1 3 3 3 0 0 0 4 4 4 4 4 4
Frame 2 2 2 2 3 3 3 3 3 1 1 1
Frame 3 1 1 1 2 2 2 2 2 0 0
Hit/Miss M M M M M M M H H M M H
Page Faults: 9
ii) LRU
Request 3 2 1 0 3 2 4 3 2 1 0 4
Frame 1 3 3 3 0 0 0 4 4 4 1 1 1
Frame 2 2 2 2 3 3 3 3 3 3 0 0
Frame 3 1 1 1 2 2 2 2 2 2 4
Hit/Miss M M M M M M M H H M M M
Page Faults: 10
iii) Optimal Page Replacement:
Request 3 2 1 0 3 2 4 3 2 1 0 4
Frame 1 3 3 3 3 3 3 3 3 3 1 1 1
Frame 2 2 2 2 2 2 2 2 2 2 0 0
Frame 3 1 0 0 0 4 4 4 4 4 4
Hit/Miss M M M M H H M H H M M H
Page Faults: 7
Q.2 Consider a reference string: 5,0,6,2,0,3,0,4,2,3,0,3,2,6,2
Number of frames in the memory is 3. Find out the number of page faults with respect to:
1. FIFO Page Replacement Algorithm
2. Optimal Page Replacement Algorithm (April 2010)
Solution:
i) FIFO
Request 5 0 6 2 0 3 0 4 2 3 0 3 2 6 2
Frame 1 5 5 5 2 2 2 2 4 4 4 0 0 0 0 0
Frame 2 0 0 0 0 3 3 3 2 2 2 2 2 6 6
Frame 3 6 6 6 6 0 0 0 3 3 3 3 3 2
Hit/Miss M M M M H M M M M M M H H M M
Page Faults: 12
ii) Optimal Page Replacement:
Request 5 0 6 2 0 3 0 4 2 3 0 3 2 6 2
Frame 1 5 5 5 5 5 3 3 3 3 3 3 3 3 6 6
Frame 2 0 0 0 0 0 0 4 4 4 0 0 0 0 0
Frame 3 6 2 2 2 2 2 2 2 2 2 2 2 2
Hit/Miss M M M M H M H M H H M H H M H
Page Faults: 8

Q.3 Consider a reference string: 7,5,6,2,9,5,7,6,2,7,6,5,2,7,2,7,8


Number of frames in the memory is 3. Find out the number of page faults with respect to:
1. LRU Page Replacement Algorithm
2. Optimal Page Replacement Algorithm (April 2013)
Solution:
i) LRU
Request 7 5 6 2 9 5 7 6 2 7 6 5 2 7 2 7 8
Frame 1 7 7 7 2 2 2 7 7 7 7 7 7 2 2 2 2 2
Frame 2 5 5 5 9 9 9 6 6 6 6 6 6 7 7 7 7
Frame 3 6 6 6 5 5 5 2 2 2 5 5 5 5 5 8
Hit/Miss M M M M M M M M M H H M M M H H M
Page Faults: 13
ii) Optimal Page Replacement:
Request 7 5 6 2 9 5 7 6 2 7 6 5 2 7 2 7 8
Frame 1 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 8
Frame 2 5 5 5 5 5 5 5 2 2 2 2 2 2 2 2 2
Frame 3 6 2 9 9 9 6 6 6 6 5 5 5 5 5 5
Hit/Miss M M M M M H H M M H H M H H H H M
Page Faults: 9
Q.4 Consider a reference string: 7,2,8,4,5,8,4,7,6,1,3,7.
Number of frames in the memory is 3. Find out the number of page faults with respect to:
1. LRU Page Replacement Algorithm
2. Optimal Page Replacement Algorithm (April 2019)
Solution: i) LRU
Request 7 2 8 4 5 8 4 7 6 1 3 7
Frame 1 7 7 7 4 4 4 4 4 4 1 1 1
Frame 2 2 2 2 5 5 5 7 7 7 3 3
Frame 3 8 8 8 8 8 8 6 6 6 7
Hit/Miss M M M M M H H M M M M M
Page Faults: 10
ii) Optimal Page Replacement:
Request 7 2 8 4 5 8 4 7 6 1 3 7
Frame 1 7 7 7 7 5 5 5 5 5 1 1 1
Frame 2 2 2 4 4 4 4 7 7 7 7 7
Frame 3 8 8 8 8 8 8 6 6 3 3
Hit/Miss M M M M M H H M M M M H
Page Faults: 9
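These fault counts can be cross-checked mechanically. The Python sketch below is only an illustration and not part of the original notes (the helper name page_faults and the policy labels are made up); it simulates FIFO, LRU, and Optimal replacement for a reference string and a given number of frames. Ties under Optimal are broken arbitrarily, which does not change the fault count.

def page_faults(ref, num_frames, policy):
    # Count page faults for "FIFO", "LRU" or "OPT" replacement.
    # For LRU the list is kept in recency order (front = least recently used);
    # for FIFO it stays in load order, so frames[0] is the victim in both cases.
    frames, faults = [], 0
    for i, page in enumerate(ref):
        if page in frames:
            if policy == "LRU":                  # refresh recency on a hit
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) == num_frames:
            if policy in ("FIFO", "LRU"):
                victim = frames[0]
            else:                                # OPT: farthest next use (or never used again)
                future = ref[i + 1:]
                victim = max(frames, key=lambda p: future.index(p) if p in future else float("inf"))
            frames.remove(victim)
        frames.append(page)
    return faults

ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]        # reference string of Q.1
for policy in ("FIFO", "LRU", "OPT"):
    print(policy, page_faults(ref, 3, policy))     # FIFO: 9, LRU: 10, OPT: 7

Calling the same function with the reference strings of Q.2 to Q.4 reproduces their FIFO, LRU, and Optimal fault counts in the same way.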
Exercise

Multiple Choice Questions MCQ

1) The memory management technique in which the system stores and retrieves data from secondary storage for use in main memory is called ____________
a) fragmentation
b) paging
c) mapping
d) none of the mentioned

2) The address of a page table in memory is pointed by ____________


a) stack pointer
b) page table base register
c) page register
d) program counter

3) Program always deals with ____________


a) logical address
b) absolute address
c) physical address
d) relative address

4) The page table contains ____________


a) base address of each page in physical memory
b) page offset
c) page size
d) none of the mentioned

5) Which one of the following is the address generated by CPU?


a) physical address
b) absolute address
c) logical address
d) none of the mentioned
6) What is compaction?
a) a technique for overcoming internal fragmentation
b) a paging technique
c) a technique for overcoming external fragmentation
d) a technique for overcoming fatal error

7) What is Address Binding?


a) going to an address in memory
b) locating an address with the help of another address
c) binding two addresses together to form a new address in a different memory space
d) a mapping from one address space to another
8) Operating System maintains the page table for ____________
a) each process
b) each thread
c) each instruction
d) each address
9) When memory is divided into several fixed sized partitions, each partition may contain ________
a) exactly one process
b) at least one process
c) multiple processes at once
d) none of the mentioned

10) In fixed size partition, the degree of multiprogramming is bounded by ___________


a) the number of partitions
b) the CPU utilization
c) the memory size
d) all of the mentioned
11) In segmentation, each address is specified by ____________
a) a segment number & offset
b) an offset & value
c) a value & segment number
d) a key & value
12) In paging the user provides only ________ which is partitioned by the hardware into ________ and ______
a) one address, page number, offset
b) one offset, page number, address
c) page number, offset, address
d) none of the mentioned
13) Each entry in a segment table has a ____________
a) segment base
b) segment peak
c) segment value
d) none of the mentioned
14) The segment base contains the ____________
a) starting logical address of the process
b) starting physical address of the segment in memory
c) segment length
d) none of the mentioned
15) Which of the following page replacement algorithms suffers from Belady’s anomaly?
a) FIFO
b) LRU
c) Optimal Page Replacement
d) Both LRU & FIFO

Answers:

1. b  2. b  3. a  4. a  5. c
6. c  7. d  8. a  9. a  10. a
11. a  12. a  13. a  14. a  15. a

State True or False

1) A system that provides segmentation without paging puts holes in the physical address space, forcing
the operating system to waste physical memory.
(a) TRUE (b) FALSE

Answer: FALSE. Segmentation potentially puts holes in the VIRTUAL address space, but it does not require holes in the physical address space, since the segmentation mapping is free to use any contiguous blocks of physical memory.
2) The rate of page faults in a virtual memory system can always be reduced by adding more memory.
(a) TRUE (b) FALSE
Answer: FALSE. There are circumstances in which adding memory can actually increase the number of faults. Specifically, Belady’s anomaly can come into play with a FIFO replacement policy and certain access patterns.
3) Compulsory misses in a cache can be reduced with prefetching.
(a) TRUE (b) FALSE
Answer: TRUE. If the data is prefetched, it will already be in memory, so when that data is accessed
the first time, there will not be a compulsory miss.
Define
1. Base Register
2. Logical Address
3. Physical Address
4. Memory Management Unit (MMU)
5. Static Loading
6. Dynamic Loading
7. Static Linking
8. Dynamic Linking
9. PTBR
10. Hit ratio
11. Page Fault
12. Demand Paging
13. Segmentation
One Mark questions
1. Define Swapping.
2. What is External Fragmentation?
3. What is Internal Fragmentation?
4. What do you mean by Memory Compaction?
5. What are Pages and Frames?
6. What is the use of Valid-Invalid Bits in Paging?
7. What is the basic method of Segmentation?
8. What is Virtual Memory?
9. What is Demand Paging?
10. What is the basic approach of Page Replacement?
11. What are the various Page Replacement Algorithms used for Page Replacement?
12. What is a Reference String?
13. How is memory protected in a paged environment?
14. What do you mean by Best Fit, First fit and Worst fit?
Five (5) Marks questions
1. Discuss LRU-approximation page replacement.
2. What is swapping and what is its purpose?
3. Explain the paging scheme for memory management; discuss the paging hardware and paging model.
4. Explain contiguous memory allocation.
5. Explain the first fit, best fit, worst fit, and next fit algorithms.
6. Explain the advantages and disadvantages of paging, and explain the difference between paging and segmentation.
7. Explain Linux memory management.
8. Explain the following page replacement algorithms: a) FIFO b) Optimal Page Replacement (OPR) c) LRU
9. Differentiate local and global page replacement algorithms.
10. What is virtual memory? Mention its advantages.
11. Differentiate external fragmentation from internal fragmentation.
12. Write short notes on swapping.
13. Consider a reference string: 0 1 5 3 0 1 4 0 1 5 3 4
Number of frames in the memory is 3. Find out the number of page faults with respect to:
1. FIFO Page Replacement Algorithm
2. LRU Page Replacement Algorithm
3. Optimal Page Replacement Algorithm
