OS - Unit5 - Memorymanagement - Notes
Contents
5.1 Background – Basic hardware, Address binding, Logical versus physical address
Space, Dynamic loading, Dynamic linking and shared libraries
5. 2 Swapping
5.3 Contiguous Memory Allocation – Memory mapping and protection, Memory
Allocation, Fragmentation
5.4 Paging – Basic Method, Hardware support, Protection, Shared Pages
5.5 Segmentation – Basic concept, Hardware
5.6 Virtual Memory Management – Background, Demand paging, Performance of
Demand Paging, Page replacement – FIFO, Optimal, LRU, MFU
Introduction:
Computer memory is a collection of data represented in binary format. Main memory refers to the physical, internal memory of the computer; the word "main" distinguishes it from external mass-storage devices such as disk drives. Main memory is also known as RAM. The computer can directly manipulate only data that is in main memory. Therefore, every program we execute and every file we access must first be copied from a storage device into main memory.
5.1 Memory Management in Operating System:
Memory management is the process of coordinating and controlling the memory in a computer. It is the functionality of an operating system that handles primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It determines how much memory is to be allocated to each process and decides which process gets memory at what time. It tracks whenever memory gets freed or unallocated and updates the status accordingly.
5.1.2 Need for Memory Management in OS
● This technique helps place programs in memory in such a way that memory is utilized to its fullest extent.
● This technique helps protect different processes from each other so that they do not interfere with one another's operations.
● It helps to allocate space to different application routines.
● This technique allows you to check how much memory needs to be allocated to processes and decides which process should get memory at what time.
● It keeps track of each memory location, whether free or allocated.
● It tracks whenever memory gets freed or unallocated and updates the status accordingly.
In memory, the operating system resides in one partition, while user processes (Process 1, Process 2, …) occupy the remaining partitions.
Protection of OS from access by user processes and protection of user processes from one
another has to be ensured by hardware.
Each process should have a separate memory space. The protection can be provided by using
two registers – base and limit register. The base register holds the smallest legal physical
memory address and the limit register specifies the size of the range. The base and limit
registers can be loaded only by the OS, which uses a special privileged instruction. Since privileged instructions can be executed only in kernel mode, and since only the OS executes in kernel mode, only the OS can load the base and limit registers. The OS, executing in kernel mode, is given unrestricted access to both OS and users' memory.
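The base/limit check described above can be sketched as follows. This is a minimal illustration of the comparison the hardware performs on every memory reference, not actual MMU code; the base and limit values are illustrative.

```python
# Sketch of the hardware check: a reference is legal only when it falls
# inside [base, base + limit). Out-of-range references trap to the OS.

def is_legal(address, base, limit):
    """Return True if the address lies within the process's range."""
    return base <= address < base + limit

# Illustrative values: base register 300040, limit register 120900, so the
# process may touch addresses 300040 .. 420939.
base, limit = 300040, 120900
print(is_legal(300040, base, limit))   # True  (first legal address)
print(is_legal(420939, base, limit))   # True  (last legal address)
print(is_legal(420940, base, limit))   # False (one past the range: trap)
```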
All the programs are loaded in the main memory for execution.
There are two ways of loading programs into the main memory.
5.1.4 Static loading
Loading the entire program into the main memory before start of the program execution is
called as static loading.
5.1.5 Static linking
Static linking is the process of linking all library modules or dependent programs used by a program into a single executable image before the program starts execution.
5.1.6 Dynamic Loading
In dynamic loading, a routine of the program is not loaded until it is called; only the required part of the program is loaded into the main memory on demand. This improves the utilization of memory space, which enhances performance.
5.1.7 Dynamic Linking
In dynamic linking, the linking of library modules or dependent programs to the main executing program is postponed until run time, when they are first required.
5.1.8 Difference between Static and Dynamic Loading
Static Loading:
● Static loading is used when you want to load your program statically: at compilation time the entire program is linked and compiled, without the need for any external module or program dependency.
● At loading time, the entire program is loaded into memory and starts its execution.
Dynamic Loading:
● In a dynamically loaded program, references are provided at compile time and the actual loading is done at the time of execution.
● Routines of the library are loaded into memory only when they are required in the program.
5.1.9 Difference between Static and Dynamic Linking
Static Linking:
● Static linking combines all modules required by a program into a single executable image. This helps prevent any run-time dependency.
Dynamic Linking:
● When dynamic linking is used, the actual module or library is not linked into the program; instead, a reference (stub) is kept, and the module is linked in when it is needed at run time.
Note on address access: the user can use the logical address to access the physical address; the physical address can be accessed only indirectly through this translation, never directly.
Though performance is usually affected by the swapping process, it helps in running multiple big processes in parallel, and for that reason swapping is also known as a technique for memory compaction.
Swapping of processes also depends on priority-based preemptive scheduling. Whenever a process with higher priority arrives, the memory manager swaps out the process with the lowest priority to the disk and swaps the highest-priority process into the main memory for execution. When the highest-priority process is finished, the lower-priority process is swapped back into memory and continues to execute. This variant of swapping is termed roll-out, roll-in or swap-out, swap-in.
When a process is swapped out and is swapped in again, does it occupy the same
memory space as it has occupied previously?
This depends on the technique of address binding. If the address binding is done statically, i.e. at compile or load time, then it is difficult to relocate the process in memory when it is swapped in again to resume its execution; it must occupy the same memory space. If the binding is done at execution time, then the process can be swapped into a different location in memory, since the physical addresses are computed during execution.
Memory allocation techniques fall into two categories:
● Contiguous allocation – fixed size partitioning and variable size partitioning, with placement strategies such as First Fit, Best Fit, Worst Fit and Next Fit.
● Non-contiguous allocation – Paging and Segmentation.
External fragmentation
The total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used. External fragmentation occurs when enough memory is available but it is not contiguous.
Internal fragmentation
The memory block assigned to a process is bigger than requested, and some portion of it is left unused; it cannot be used by another process. This happens when fixed size partitioning is used. Internal fragmentation arises because of contiguous, fixed-size memory allocation.
5.3.2 Contiguous Memory Allocation:
In contiguous memory allocation, each process is contained in a single contiguous block of
memory. Memory is divided into several fixed-size partitions. Each partition contains exactly
one process. When a partition is free, a process is selected from the input queue and loaded
into it. The free blocks of memory are known as holes. The set of holes is searched to
determine which hole is best to allocate.
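The search over the set of holes can follow several strategies, commonly First Fit, Best Fit and Worst Fit. A minimal sketch in Python; the holes (given as start-address/size pairs) and the request size are made-up numbers for illustration:

```python
# Three classic hole-search strategies over a free list of (start, size) holes.

def first_fit(holes, request):
    """Return the start of the first hole large enough, else None."""
    for start, size in holes:
        if size >= request:
            return start
    return None

def best_fit(holes, request):
    """Return the start of the smallest hole large enough, else None."""
    candidates = [(size, start) for start, size in holes if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Return the start of the largest hole, if it is large enough."""
    candidates = [(size, start) for start, size in holes if size >= request]
    return max(candidates)[1] if candidates else None

holes = [(0, 100), (250, 500), (900, 200)]   # three free holes (KB)
print(first_fit(holes, 150))  # 250 : first hole that fits
print(best_fit(holes, 150))   # 900 : smallest hole that fits (200 KB)
print(worst_fit(holes, 150))  # 250 : largest hole (500 KB)
```

Best fit leaves the smallest leftover fragment, while worst fit leaves the largest leftover hole in the hope it remains usable.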
Fixed Size Partitioning:
This is the oldest and simplest technique used to put more than one process in the main memory. In this partitioning, the number of (non-overlapping) partitions in RAM is fixed, but the size of each partition may or may not be the same. As it is contiguous allocation, no spanning is allowed. Here partitions are made before execution, at system configuration time.
Fixed size partitioning example: with four blocks of sizes 4 MB, 8 MB, 8 MB and 16 MB (after the operating system), P1 (2 MB) leaves 2 MB free, P2 (7 MB) leaves 1 MB free, P3 (5 MB) leaves 3 MB free, and P4 (4 MB) leaves 12 MB free, all lost to internal fragmentation.
If n is the number of partitions and p is the number of processes, then the condition p ≤ n must be fulfilled: a number of processes greater than the number of partitions in RAM is invalid in fixed partitioning.
Variable size Partitioning:
It is a part of the contiguous allocation technique, used to alleviate the problems faced in fixed partitioning. In contrast with fixed partitioning, partitions are not made before execution or at system configuration time.
Various features associated with variable partitioning:
1. Initially RAM is empty, and partitions are made during run time according to each process's need, instead of at system configuration time.
2. The size of a partition is equal to the size of the incoming process.
3. The partition size varies according to the need of the process, so that internal fragmentation is avoided and RAM is utilized efficiently.
4. The number of partitions in RAM is not fixed and depends on the number of incoming processes and the main memory's size.
Variable size partitioning example: partitions of exactly 2 MB, 7 MB, 5 MB and 12 MB are created for P1, P2, P3 and P4 as they arrive (after the operating system), leaving 10 MB free.
Pages of a process are brought into the main memory only when they are required; otherwise they reside in the secondary storage.
One page of a process is stored in one of the frames of the memory. The pages of a process can be stored at different locations of the memory; the frames holding them need not be contiguous.
The CPU always generates a logical address, whereas a physical address is needed to access the main memory.
1. Page Number (p): is used to specify the specific page of the process from which the
CPU wants to read the data and it is also used as an index to the page table.
2. Page offset (d): is mainly used to specify the specific word on the page that the CPU
wants to read.
The Physical Address is divided into two parts:
1. Frame number (f): Number of bits required to represent the frame of Physical
Address Space or Frame number.
2. Frame offset (d): Number of bits required to represent particular word in a frame or
frame size of Physical Address Space or word number of a frame or frame offset.
Page Table:
The page table consists of page table entries (PTEs), where each entry stores a frame number and optional status bits (such as protection bits). Many of the status bits are used by the virtual memory system. The most important field in a PTE is the frame number.
Page table entry has the following information –
Frame Number | Present/Absent | Protection | Reference | Caching | Dirty
1. Frame Number – It gives the frame number in which the current page you are looking for
is present. The number of bits required depends on the number of frames. Frame bit is also
known as address translation bit.
2. Number of frames = size of physical memory / frame size; the number of bits for the frame number is log2(number of frames).
3. Present/Absent bit – The present/absent bit says whether the particular page you are looking for is present or absent. If it is not present, a Page Fault occurs. The bit is set to 0 if the corresponding page is not in memory. It is used by the operating system to detect page faults and support virtual memory. This bit is also known as the valid/invalid bit.
4. Protection bit – The protection bit specifies what kind of protection is applied to the page. These bits encode the accesses allowed on the page frame (read, write, etc.).
5. Referenced bit – Referenced bit will say whether this page has been referred in the last
clock cycle or not. It is set to 1 by hardware when the page is accessed.
6. Caching enabled/disabled – Sometimes fresh data is needed. Say the user is typing input from the keyboard and the program must act on that input; the latest information then resides in main memory. If the page were cached, the cache could still hold the old information. So whenever freshness is required, we do not want to use caching or many levels of the memory hierarchy: the information the CPU sees must be consistent with what the user has just provided, as soon as possible. That is the reason for disabling caching in such cases. This bit enables or disables caching of the page.
7. Modified bit – The modified bit says whether the page has been modified, i.e. written to. If a page has been modified, then when it is replaced by another page the modified contents must be written back (saved) to the hard disk. The bit is set to 1 by hardware on a write access to the page, and is used to avoid an unnecessary write-back when an unmodified page is swapped out. This modified bit is also called the Dirty bit.
Thus page table mainly provides the corresponding frame number (base address of the
frame) where that page is stored in the main memory.
The frame number is combined with the page offset and forms the required physical
address. Page table contains the base address of each page in the Physical memory. The
base address is then combined with the page offset in order to define the physical memory
address which is then sent to the memory unit.
So, the physical address consists of two parts:
1. Page offset(d)
2. Frame Number(f)
Where, The Frame number is used to indicate the specific frame where the required page is
stored and Page Offset indicates the specific word that has to be read from that page.
The Page size (like the frame size) is defined with the help of hardware. It is important to note
here that the size of the page is typically the power of 2 that varies between 512 bytes and 16
MB per page and it mainly depends on the architecture of the computer.
If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m-n bits of the logical address designate the page number and the n low-order bits designate the page offset. The logical address is as follows:
where p indicates the index into the page table, and d indicates the displacement within the
page.
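The p/d split described above is a pair of bit operations. A small sketch; the 12-bit offset (i.e. 4 KB pages) and the sample address are assumed values for illustration:

```python
# Split an m-bit logical address into (page number p, page offset d),
# where the low-order n bits form the offset.

def split_logical_address(addr, n):
    p = addr >> n                # high-order m-n bits: page number
    d = addr & ((1 << n) - 1)    # low-order n bits: page offset
    return p, d

# Example: 4 KB pages => n = 12. Logical address 0x3A7F falls on page 3
# at offset 0xA7F within that page.
p, d = split_logical_address(0x3A7F, 12)
print(p, hex(d))  # 3 0xa7f
```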
The paging diagram shows the translation of the logical address into the physical address.
The PTBR in the above diagram means page table base register and it basically holds the base
address for the page table of the current process. The PTBR is mainly a processor register and
is managed by the operating system. Commonly, each process running on a processor needs
its own logical address space.
But there is a problem with this approach: the time required to access a user memory location. To access a location, we must first index into the page table, using the value in the PTBR offset by the page number; this requires one memory access. It provides the frame number, which is combined with the page offset to produce the actual address. After that, we can access the desired place in memory.
With this scheme, two memory accesses are needed to access a byte (one for the page-table entry and one for the byte itself). Thus memory access is slowed by a factor of two, a delay that is intolerable in most cases.
Advantages of Paging:
Paging allows the parts of a single process to be stored in a non-contiguous fashion. With the help of paging, the problem of external fragmentation is solved. Paging is one of the simplest algorithms for memory management.
Disadvantages of Paging:
● The page table can consume a significant amount of memory.
● Internal fragmentation is caused by this technique.
● There is an increase in the time taken to fetch an instruction, since two memory accesses are now required.
The TLB contains only a few of the page-table entries. Whenever a logical address is generated by the CPU, its page number is presented to the TLB.
● If the page number is found (a TLB hit), its frame number is immediately available and is used to access the memory. The whole task may take less than 10 percent longer than it would if an unmapped memory reference were used.
● If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made. When the frame number is obtained, it can be used to access the memory. Additionally, the page number and frame number are added to the TLB so that they will be found quickly on the next reference.
● If the TLB is already full of entries, the operating system must select one for replacement.
● The TLB allows some entries to be wired down, which means they cannot be removed from the TLB. Typically, TLB entries for the kernel code are wired down.
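The benefit of the TLB can be quantified with the standard effective-access-time calculation. A hedged sketch; the 20 ns TLB lookup, 100 ns memory access time and the hit ratios are assumed numbers, not figures from the text:

```python
# Effective access time for a paged system with a TLB.
# TLB hit : one TLB lookup + one memory access.
# TLB miss: TLB lookup + page-table access + the actual memory access.

def effective_access_time(hit_ratio, mem_ns, tlb_ns):
    hit = tlb_ns + mem_ns
    miss = tlb_ns + 2 * mem_ns
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.80, 100, 20))  # ~140 ns at an 80% hit ratio
print(effective_access_time(0.98, 100, 20))  # ~122 ns at a 98% hit ratio
```

As the hit ratio rises, the effective access time approaches the single-access cost, which is why even a small TLB recovers most of the factor-of-two slowdown.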
5.5 Segmentation:
Segmentation supports contiguous as well as non-contiguous memory allocation. Here memory is divided into pieces of unequal size called segments. It is another scheme of memory management, and it generally supports the user's view of memory. The logical address space is basically a collection of segments; each segment has a name and a length. Basically, a process is divided into segments. Like paging, segmentation divides the memory. The difference is that paging divides memory into fixed-size units, whereas segmentation divides memory into variable-size segments, which are then mapped into the logical memory space.
A program is basically a collection of segments, where a segment is a logical unit such as:
● Main program
● Procedure
● Function
● Method
● Object
● Local variables and global variables
● Symbol table
● Common block
● Stack
● Arrays
5.5.1 Types of Segmentation:
● Virtual Memory Segmentation – With this type, each process is segmented into n divisions and, most importantly, they are not all loaded into memory at once; segments may be brought in during run time.
● Simple Segmentation – With this type, each process is segmented into n divisions and they are all loaded at once, at load time, but they can be non-contiguous (that is, they may be scattered in memory).
5.5.2 Characteristics of Segmentation
Some characteristics of the segmentation technique are as follows:
● The Segmentation partitioning scheme is variable-size.
● Partitions of the secondary memory are commonly known as segments.
● Partition size mainly depends upon the length of modules.
● Thus with the help of this technique, secondary memory and main memory are divided
into unequal-sized partitions.
5.5.3 Need of Segmentation
One of the important drawbacks of paged memory management in the operating system is the separation between the user's view of memory and the actual physical memory: paging maps one onto the other, but the mapping does not preserve the user's view.
The user's view is mapped onto physical storage, and this mapping allows differentiation between physical and logical memory.
The operating system may divide the same function into different pages, and those pages may or may not be loaded into memory at the same time; the operating system does not care about the user's view of the process. Because of this, the system's efficiency decreases.
Segmentation is a better technique in this respect, because it divides the process into segments that correspond to the user's view.
5.5.4 Basic Method
A computer system that uses segmentation has a logical address space that can be viewed as a collection of segments, and the size of each segment is variable, i.e. it may grow or shrink. During execution, each segment has a name and a length, and an address specifies both the name of the segment and the displacement within the segment. Therefore the user specifies each address by two quantities: a segment name and an offset. For simplicity of implementation, segments are numbered, and a segment number is used rather than a segment name.
Thus the logical address consists of a two-tuple:
<segment-number, offset>
Segment Number (s): the number of bits required to represent the segment number.
Offset (d): the number of bits required to represent the offset within the segment (i.e. the segment's size).
5.5.5 Segmentation Architecture
Segment Table:
A Table that is used to store the information of all segments of the process is commonly
known as Segment Table. Generally, there is no simple relationship between logical addresses
and physical addresses in this scheme.
● The mapping of a two-dimensional Logical address into a one-dimensional Physical
address is done using the segment table.
● This table is mainly stored as a separate segment in the main memory.
In the segment table each entry has:
1. Segment Base/base address: The segment base mainly contains the starting physical
address where the segments reside in the memory.
2. Segment Limit: The segment limit is mainly used to specify the length of the segment.
Segment Table Base Register (STBR) – The STBR points to the segment table's location in memory.
Segment Table Length Register (STLR) – This register indicates the number of segments used by a program. A segment number s is legal only if s < STLR.
Segment Table (Limit, Base):
Segment 0: limit 1400, base 1400
Segment 1: limit 400, base 6200
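The lookup this table drives can be sketched in a few lines. This is an illustration rather than real MMU behavior; the limit/base values are taken from the sample table above, and the function and structure names are made up:

```python
# Segment-table lookup: check s against the table length (the STLR role),
# check the offset against the segment limit, then add the base.

SEGMENT_TABLE = [
    {"limit": 1400, "base": 1400},  # segment 0 (values from the sample table)
    {"limit": 400,  "base": 6200},  # segment 1
]

def translate(s, d):
    if s >= len(SEGMENT_TABLE):          # segment number must be < STLR
        raise IndexError("illegal segment number")
    entry = SEGMENT_TABLE[s]
    if d >= entry["limit"]:              # offset checked against the limit
        raise ValueError("addressing trap: offset beyond segment limit")
    return entry["base"] + d             # physical address = base + offset

print(translate(1, 53))   # 6253 : byte 53 of segment 1
```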
5.6 Virtual Memory Management:
Virtual memory is based on the following characteristics:
1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be swapped
in and out of main memory such that it occupies different places in main memory at
different times during the course of execution.
2. A process may be broken into a number of pieces, and these pieces need not be contiguously located in the main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
If these characteristics are present, then it is not necessary for all the pages or segments of a process to be present in the main memory during execution; a page needs to be loaded into memory only when it is required. Virtual memory is implemented using demand paging or demand segmentation.
Advantages of Virtual Memory:
1) The degree of multiprogramming is increased.
2) The user can run applications larger than the available real RAM.
3) There is no need to buy more RAM.
Disadvantages of Virtual Memory:
1) The system becomes slower, since swapping takes time.
2) It takes more time to switch between applications.
3) The user has less hard disk space available for other use.
5.6.1 Demand Paging:
According to the concept of virtual memory, in order to execute a process, only a part of the process needs to be present in the main memory; only a few of its pages will reside in main memory at any time.
However, deciding which pages need to be kept in the main memory and which in the secondary memory is difficult, because we cannot say in advance which page a process will require at a particular time.
To overcome this problem, the concept of Demand Paging is introduced. It suggests keeping all pages in the secondary memory until they are required; in other words, do not load any page into the main memory until it is required. When a page is referred to for the first time, it is found in the secondary memory and brought in. Demand Paging is a popular method of virtual memory management in which the pages of a process that are least used stay in the secondary memory. The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging.
5.6.2 Thrashing:
If the number of page faults equals the number of referenced pages, or the number of page faults is so high that the CPU remains busy just reading pages from the secondary memory, then the effective access time approaches the time taken by the CPU to read one word from the secondary memory, which is very high. This situation is called thrashing.
Demand Paging includes the following steps:
1. If the CPU tries to refer to a page that is currently not available in the main memory, it generates an interrupt indicating a memory access fault (page fault).
2. The OS puts the interrupted process in a blocked state. For the execution to proceed, the OS must bring the required page into the memory.
3. The OS will search for the required page in secondary storage (the backing store).
4. The required page will be brought from secondary storage into the physical address space. Page replacement algorithms are used to decide which page in the physical address space to replace.
5. The page table will be updated accordingly.
6. A signal will be sent to the CPU to continue the program execution, and the process is placed back into the ready state.
Hence whenever a page fault occurs these steps are followed by the operating system
and the required page is brought into memory.
5.6.3 Advantages of Demand Paging:
More processes may be maintained in the main memory, because we are going to load only
some of the pages of any particular process, there is room for more processes. This leads to
more efficient utilization of the processor because it is more likely that at least one of the
more numerous processes will be in the ready state at any particular time.
● A process may be larger than all of main memory: One of the most fundamental
restrictions in programming is lifted. A process larger than the main memory can be
executed because of demand paging. The OS itself loads pages of a process in main
memory as required.
● It allows greater multiprogramming levels by using less of the available (primary)
memory for each process.
5.6.4 Page Fault Service Time:
The time taken to service the page fault is called as page fault service time. The page fault
service time includes the time taken to perform all the above six steps.
Let the main memory access time be m, the page fault service time be s, and the page fault rate be p.
Then, Effective memory access time = (p*s) + (1-p)*m
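The formula above can be exercised with a short sketch. The 200 ns memory access time and 8 ms page fault service time are assumed numbers for illustration, not values from the text:

```python
# Effective memory access time: EMAT = p*s + (1 - p)*m,
# where p is the page fault rate, s the service time, m the memory access time
# (s and m must be in the same units).

def emat(p, s, m):
    return p * s + (1 - p) * m

m = 200            # ns, assumed memory access time
s = 8_000_000      # ns (8 ms), assumed page fault service time
print(emat(0.0, s, m))       # 200.0 ns : no page faults at all
print(emat(1 / 1000, s, m))  # ~8199.8 ns : one fault per 1000 accesses
```

Even one fault per thousand accesses slows memory down by a factor of about 40 here, which is why keeping the page fault rate tiny matters so much.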
5.6.5 Page Replacement Algorithms in Operating Systems
In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page needs to be replaced when new page comes in.
Page Fault:
If the referred page is not present in the main memory then there will be a miss and the
concept is called Page miss or page fault.
The CPU has to access the missed page from the secondary memory. If the number of page
fault is very high then the effective access time of the system will become very high.
A page fault happens when a running program accesses a memory page that is mapped into
the virtual address space, but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen. In
case of page fault, Operating System might have to replace one of the existing pages with the
newly needed page. Different page replacement algorithms suggest different ways to decide
which page to replace. The target for all algorithms is to reduce the number of page faults.
Page Hit – If the CPU tries to retrieve the needed page from main memory and that page exists in the main memory (RAM), it is known as a "PAGE HIT".
Page Miss – If the required page does not exist in the RAM, it is known as a "PAGE MISS".
Page Fault Rate – The rate at which threads encounter page faults in memory is known as the "PAGE FAULT RATE". The page fault rate is measured per second.
Page Replacement Algorithms
5.6.6 First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example-1 Consider page reference string 4,7,6,1,7,6,1,2,7,2,7,1 with 3 page frames. Find
number of page faults.
Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 1 1 1 1 1 1 1 1 1
Frame 2 7 7 7 7 7 7 2 2 2 2 2
Frame 3 6 6 6 6 6 6 7 7 7 7
Hit/Miss M M M M H H H M M H H H
Page Faults = 6
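The FIFO traces in these examples can be reproduced with a short simulation; the queue mirrors the frame contents, oldest page at the front:

```python
# FIFO page replacement: on a fault, evict the page that was loaded earliest.
from collections import deque

def fifo_faults(reference_string, n_frames):
    frames = deque()              # front = oldest page in memory
    faults = 0
    for page in reference_string:
        if page not in frames:    # page fault
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()  # evict the page loaded earliest
            frames.append(page)
    return faults

print(fifo_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1], 3))  # 6
print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9
print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # 10 (Belady's anomaly)
```

Note the last call: for this reference string FIFO produces more faults with 4 frames than with 3, the classic instance of Belady's anomaly.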
Example-2 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 page frames.
Find number of page faults.
Page Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 4 4 4 5 5 5 5 5 5
Frame 2 2 2 2 1 1 1 1 1 3 3 3
Frame 3 3 3 3 2 2 2 2 2 4 4
Hit/Miss M M M M M M M H H M M H
Page faults = 9
Example-3 Consider page reference string 4,7,6,1,7,6,1,2,7,2,7,1 with 4 page frames. Find number of page faults.
Page Reference String: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1
Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 4 4 4 4 2 2 2 2 2
Frame 2 7 7 7 7 7 7 7 7 7 7 7
Frame 3 6 6 6 6 6 6 6 6 6 6
Frame 4 1 1 1 1 1 1 1 1 1
Hit/Miss M M M M H H H M H H H H
Page faults = 5
Example-4 Consider page reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 4 page frames. Find number of page faults.
Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 5 5 5 5 4 4
Frame 2 2 2 2 2 2 2 1 1 1 1 5
Frame 3 3 3 3 3 3 3 2 2 2 2
Frame 4 4 4 4 4 4 4 3 3 3
Hit/Miss M M M M H H M M M M M M
Page Faults = 10. Note that FIFO gives more faults with 4 frames (10) than with 3 frames (9) for this reference string; this is Belady's anomaly.
5.6.7 Optimal Page Replacement
The Optimal Page Replacement algorithm is the best page replacement algorithm, as it gives the least number of page faults. It is also known as OPT, the clairvoyant replacement algorithm, or Belady's optimal page replacement policy. In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future; i.e., the page in memory that is going to be referenced farthest in the future is replaced.
This algorithm was introduced long back and is difficult to implement, because it requires future knowledge of the program's behavior. However, it is possible to approximate optimal page replacement on a second run by using the page reference information collected on the first run. If two or more candidate pages are never requested again in the future, the FIFO strategy is applied among them.
Request 7 0 1 2 0 3 0 4 2 3 0 3 2 3
Frame 1 7 7 7 2 2 2 2 2 2 2 2 2 2 2
Frame 2 0 0 0 0 0 0 4 4 4 0 0 0 0
Frame 3 1 1 1 3 3 3 3 3 3 3 3 3
Hit/Miss M M M M H M H M H H M H H H
Page Faults = 7
Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 1 1 1 3 3 3
Frame 2 2 2 2 2 2 2 2 2 2 4 4
Frame 3 3 4 4 4 5 5 5 5 5 5
Hit/Miss M M M M H H M H H M M H
Page faults = 7
Request 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1 1 1 1 1 1 1 1 1 1 1 4 4
Frame 2 2 2 2 2 2 2 2 2 2 2 2
Frame 3 3 3 3 3 3 3 3 3 3 3
Frame 4 4 4 4 5 5 5 5 5 5
Hit/Miss M M M M H H M H H H M H
Page Fault = 6
Example-4 Consider page reference string 4,7,6,1,7,6,1,2,7,2,7,1 with 4 page frames. Find
number of page faults.
Page Reference String: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1
Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 4 4 4 4 2 2 2 2 2
Frame 2 7 7 7 7 7 7 7 7 7 7 7
Frame 3 6 6 6 6 6 6 6 6 6 6
Frame 4 1 1 1 1 1 1 1 1 1
Hit/Miss M M M M H H H M H H H H
Page faults = 5
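The OPT traces above can be reproduced with a short look-ahead simulation. A minimal sketch; pages never referenced again get an infinite "next use" distance, so they are evicted first:

```python
# Optimal (OPT) page replacement: on a fault, evict the resident page whose
# next reference lies farthest in the future.

def opt_faults(ref, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(ref):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
        else:
            future = ref[i + 1:]
            def next_use(q):
                # distance to the next reference; "never again" = infinity
                return future.index(q) if q in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 3))  # 7
print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))        # 7
print(opt_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1], 4))        # 5
```

Because `max` returns the first page among equal (infinite) distances in frame order, pages loaded earlier are preferred as victims among never-used-again ties, matching the FIFO tie-break described above.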
5.6.8 Least Recently Used (LRU)
In LRU, the page that has not been used for the longest period of time is selected for replacement. Example: page reference string 4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1 with 4 page frames.
Request 4 7 6 1 7 6 1 2 7 2 7 1
Frame 1 4 4 4 4 4 4 4 2 2 2 2 2
Frame 2 7 7 7 7 7 7 7 7 7 7 7
Frame 3 6 6 6 6 6 6 6 6 6 6
Frame 4 1 1 1 1 1 1 1 1 1
Hit/Miss M M M M H H H M H H H H
Page faults = 5 (at request 2, page 4 is the least recently used page and is replaced).
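LRU, listed in the unit contents, can be sketched by keeping resident pages ordered by recency of use. A minimal illustration (a real implementation would use hardware counters or a stack, as a linear list scan is slow):

```python
# LRU page replacement: evict the page unused for the longest time.

def lru_faults(reference_string, n_frames):
    frames = []                   # frames[0] = least recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)   # hit: refresh its recency below
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)       # page is now the most recently used
    return faults

print(lru_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2, 7, 1], 4))  # 5
```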
5.6.10 Least Frequently Used (LFU)
● The page with the smallest reference count is the one selected for replacement.
● This algorithm suffers in the situation where a page is used heavily during the initial
phase of a process but is never used again: its high count keeps it resident long after it
stops being useful.
● A frequency count is maintained for every resident page throughout execution. On every
page miss, the page with the lowest count is chosen as the victim; if two pages have the
same count, FIFO order (oldest load first) decides, and an evicted page's count is reset
to zero.
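The bookkeeping described above can be sketched as a simulation. This is an illustrative sketch (the name `lfu_faults` is not from the notes), assuming, as in the worked tables below, that an evicted page's count restarts from zero when the page is loaded again:

```python
def lfu_faults(ref, nframes):
    """Count page faults under LFU replacement.
    Victim = resident page with the lowest reference count; ties are
    broken FIFO (oldest load first); an evicted page's count resets to 0."""
    frames = []     # resident pages
    count = {}      # page -> reference count since it was last loaded
    loaded = {}     # page -> time it was brought into memory
    faults = 0
    for t, page in enumerate(ref):
        if page in frames:
            count[page] += 1              # hit: bump the count
            continue
        faults += 1                       # miss
        if len(frames) < nframes:
            frames.append(page)
        else:
            victim = min(frames, key=lambda p: (count[p], loaded[p]))
            frames[frames.index(victim)] = page
            count[victim] = 0             # evicted page starts over later
        count[page] = 1
        loaded[page] = t
    return faults

print(lfu_faults([1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5], 3))  # 8
```

The tuple key `(count[p], loaded[p])` implements both rules at once: the lowest count wins, and among equal counts the oldest load (FIFO) wins.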
Example-1 Consider page reference string 1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5 with 3 page
frames. Find number of page faults.
Request    | 1   2   1   3   4   2   3   1   2   3   4   5
Frame 1    | 1   1   1   1   1   1   1   1   1   1   1   1
Frame 2    |     2   2   2   4   4   3   3   3   3   3   3
Frame 3    |             3   3   2   2   2   2   2   4   5
Freq count | 1=1 2=1 1=2 3=1 4=1 2=1 3=1 1=3 2=2 3=2 4=1 5=1
(evicted)  |                 2=0 3=0 4=0             2=0 4=0
Hit/Miss   | M   M   H   M   M   M   M   H   H   H   M   M
Page faults = 8
5.6.11 Most Frequently Used (MFU)
● This algorithm is based on the argument that the page with the smallest count was
probably just brought in and has yet to be used, so the page with the highest count is
replaced instead.
● On every page miss, the page with the highest frequency count is chosen as the victim;
if two pages have the same count, FIFO order (oldest load first) decides, and an evicted
page's count is reset to zero.
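The MFU rule is the mirror image of LFU and can be sketched the same way. This is an illustrative sketch (the name `mfu_faults` is not from the notes), again assuming that an evicted page's count restarts from zero:

```python
def mfu_faults(ref, nframes):
    """Count page faults under MFU replacement.
    Victim = resident page with the highest reference count; ties are
    broken FIFO (oldest load first); an evicted page's count resets to 0."""
    frames, count, loaded, faults = [], {}, {}, 0
    for t, page in enumerate(ref):
        if page in frames:
            count[page] += 1              # hit: bump the count
            continue
        faults += 1                       # miss
        if len(frames) < nframes:
            frames.append(page)
        else:
            # highest count first; among equal counts, the oldest load
            victim = max(frames, key=lambda p: (count[p], -loaded[p]))
            frames[frames.index(victim)] = page
            count[victim] = 0
        count[page] = 1
        loaded[page] = t
    return faults

print(mfu_faults([1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5], 3))  # 9
```

Compared with the LFU sketch, only the victim selection changes: `max` with `-loaded[p]` picks the highest count and, on ties, the page loaded earliest.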
Example-1 Consider page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 3 page
frames. Find number of page faults.
Request    | 7   0   1   2   0   3   0   4   2   3   0   3   2   3
Frame 1    | 7   7   7   2   2   2   2   4   4   4   0   0   0   0
Frame 2    |     0   0   0   0   3   3   3   2   2   2   2   2   2
Frame 3    |         1   1   1   1   0   0   0   3   3   3   3   3
Freq count | 7=1 0=1 1=1 2=1 0=2 3=1 0=1 4=1 2=1 3=1 0=1 3=2 2=2 3=3
(evicted)  |             7=0     0=0 1=0 2=0 3=0 0=0 4=0
Hit/Miss   | M   M   M   M   H   M   M   M   M   M   M   H   H   H
Page faults = 11
Example-2 Consider page reference string 1, 2, 1, 3, 4, 2, 3, 1, 2, 3, 4, 5 with 3 page
frames. Find number of page faults.
Request    | 1   2   1   3   4   2   3   1   2   3   4   5
Frame 1    | 1   1   1   1   4   4   4   4   4   3   3   3
Frame 2    |     2   2   2   2   2   2   1   1   1   4   4
Frame 3    |             3   3   3   3   3   2   2   2   5
Freq count | 1=1 2=1 1=2 3=1 4=1 2=2 3=2 1=1 2=1 3=1 4=1 5=1
(evicted)  |                 1=0         2=0 3=0 4=0 1=0 2=0
Hit/Miss   | M   M   H   M   M   H   H   M   M   M   M   M
Page faults = 9
Solved Examples:
1) The memory-management technique in which the system stores and retrieves data from secondary
storage for use in main memory is called:
a) fragmentation
b) paging
c) mapping
d) none of the mentioned
Answer: b) paging
1) A system that provides segmentation without paging puts holes in the physical address space, forcing
the operating system to waste physical memory.
(a) TRUE (b) FALSE
Answer: FALSE. Segmentation potentially puts holes in the VIRTUAL address space, but it does not
require holes in the physical address space, since the segmentation mapping is free to use any
contiguous blocks of physical memory.
2) The rate of page faults in a virtual memory system can always be reduced by adding more memory.
(a) TRUE (b) FALSE
Answer: FALSE. There are circumstances in which adding memory can actually increase the number
of faults. Specifically, Belady's anomaly can come into play for a FIFO replacement policy under
certain access patterns.
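Belady's anomaly can be demonstrated with a short FIFO simulation on the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 used in the optimal examples above. This is an illustrative sketch (the name `fifo_faults` is not from the notes):

```python
from collections import deque

def fifo_faults(ref, nframes):
    """Count page faults under FIFO replacement."""
    frames = deque()                      # oldest resident page at the left
    faults = 0
    for page in ref:
        if page in frames:
            continue                      # hit
        faults += 1                       # miss
        if len(frames) == nframes:
            frames.popleft()              # evict the oldest resident page
        frames.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))  # 9 faults with 3 frames
print(fifo_faults(ref, 4))  # 10 faults with 4 frames
```

With 3 frames FIFO incurs 9 faults, but with 4 frames it incurs 10: adding memory increased the fault count, which is exactly Belady's anomaly.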
3) Compulsory misses in a cache can be reduced with prefetching.
(a) TRUE (b) FALSE
Answer: TRUE. If the data is prefetched, it will already be in memory, so when that data is accessed
the first time, there will not be a compulsory miss.
Define
1. Base Register
2. Logical Address
3. Physical Address
4. Memory Management Unit (MMU)
5. Static Loading
6. Dynamic Loading
7. Static Linking
8. Dynamic Linking
9. PTBR
10. Hit ratio
11. Page Fault
12. Demand Paging
13. Segmentation
One Mark questions
1. Define Swapping.
2. What is External Fragmentation?
3. What is Internal Fragmentation?
4. What do you mean by Memory Compaction?
5. What are Pages and Frames?
6. What is the use of Valid-Invalid Bits in Paging?
7. What is the basic method of Segmentation?
8. What is Virtual Memory?
9. What is Demand Paging?
10. What is the basic approach of Page Replacement?
11. What are the various Page Replacement Algorithms?
12. What is a Reference String?
13. How is memory protected in a paged environment?
14. What do you mean by Best Fit, First Fit and Worst Fit?
Five (5) Marks questions
1. Discuss LRU-approximation page replacement.
2. What is swapping and what is its purpose?
3. Explain the paging scheme for memory management; discuss the paging hardware and the paging model.
4. Explain contiguous memory allocation.
5. Explain the first fit, best fit, worst fit, and next fit algorithms.
6. Explain the advantages and disadvantages of paging, and the difference between paging and
segmentation.
7. Explain Linux memory management.
8. Explain the following page replacement algorithms: a) FIFO b) Optimal (OPR) c) LRU
9. Differentiate local and global page replacement algorithms.
10. What is virtual memory? Mention its advantages.
11. Differentiate external fragmentation from internal fragmentation.
12. Write short notes on swapping.
13. Consider a reference string: 0 1 5 3 0 1 4 0 1 5 3 4
Number of frames in the memory is 3. Find out the number of page faults respective to:
1. FIFO Page Replacement Algorithm
2. LRU Page Replacement Algorithm
3. Optimal Page Replacement Algorithm
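Question 13 can be checked programmatically. The sketch below (illustrative code, not part of the notes; all function names are mine) simulates the three policies on the given string with 3 frames:

```python
from collections import deque

REF = [0, 1, 5, 3, 0, 1, 4, 0, 1, 5, 3, 4]

def fifo_faults(ref, n):
    frames, faults = deque(), 0           # oldest resident page at the left
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == n:
                frames.popleft()          # evict the oldest resident page
            frames.append(p)
    return faults

def lru_faults(ref, n):
    frames, last, faults = [], {}, 0      # last[p] = time of p's last use
    for t, p in enumerate(ref):
        if p not in frames:
            faults += 1
            if len(frames) < n:
                frames.append(p)
            else:
                victim = min(frames, key=lambda q: last[q])  # least recently used
                frames[frames.index(victim)] = p
        last[p] = t
    return faults

def optimal_faults(ref, n):
    frames, faults = [], 0
    for t, p in enumerate(ref):
        if p not in frames:
            faults += 1
            if len(frames) < n:
                frames.append(p)
            else:
                future = ref[t + 1:]
                # evict the page used farthest in the future (or never again)
                victim = max(frames, key=lambda q: future.index(q)
                             if q in future else float("inf"))
                frames[frames.index(victim)] = p
    return faults

print(fifo_faults(REF, 3), lru_faults(REF, 3), optimal_faults(REF, 3))  # 9 10 7
```

For this string FIFO gives 9 faults, LRU gives 10, and optimal gives 7, which also shows that LRU is not always better than FIFO.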
Contiguous Memory Allocation – Variable Size Partitioning
Contiguous Memory Allocation – Fixed Size Partitioning