Memory Management
What do you mean by memory management?
Memory is the part of the computer that stores data. Its management is critical to
the computer system because the amount of main memory available is limited, and at
any time many processes are competing for it. Moreover, to increase performance,
several processes are executed simultaneously, which means several processes must be
kept in main memory at once, making effective management even more important.
Need for Memory Management in OS
A memory management technique is needed for the following reasons:
It helps place programs in memory in such a way that memory is utilized to its
fullest extent.
It helps protect processes from each other so that they do not interfere with each
other's operations.
It helps allocate space to different application routines.
It helps determine how much memory should be allocated to each process and decides
which process should get memory at what time.
It keeps track of each memory location, whether it is free or allocated.
It tracks memory status whenever memory is freed or unallocated and updates the
status accordingly.
Memory Hierarchy
In computer system design, the memory hierarchy is an organization of memory that
minimizes access time. The memory hierarchy was developed based on a program
behavior known as locality of reference. The figure below demonstrates the different
levels of the memory hierarchy:
Logical and Physical Address Space
The address that is generated by the CPU is commonly referred to as the Logical Address.
It is basically a virtual address. The logical address is basically the address of an instruction
or data used by any program.
The set of all logical addresses that are generated by any program is referred to as
Logical Address Space.
The address that is loaded into the memory-address register of the memory is
commonly referred to as the Physical Address. A physical address cannot be accessed
directly by the user, but the user can compute it with the help of the logical
address.
The user's program generates logical addresses, and the user thinks the program runs
in this logical address space, but the program needs physical memory to complete its
execution.
The set of all physical addresses corresponding to the logical addresses is commonly
known as the Physical Address Space.
Memory Management Unit (MMU)
It is a hardware device that does the run-time mapping from the virtual address to the
physical address. It is located within the Central Processing Unit.
Let us understand the concept of mapping with the help of a simple MMU scheme, the
base-register scheme.
In the above diagram, the base register is termed the Relocation register. The relocation
register is a special register in the CPU and is used for the mapping of logical addresses
used by a program to physical addresses of the system's main memory.
The value in the relocation register is added to every address that is generated by the
user process at the time when the address is sent to the memory.
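As a minimal sketch, the relocation-register mapping can be expressed in a few lines of Python. The relocation value 14000 and limit 3000 are illustrative, not taken from the text:

```python
# Hypothetical sketch of relocation-register mapping (values are illustrative).
RELOCATION = 14000  # base address loaded by the OS for this process
LIMIT = 3000        # size of the process's logical address space

def translate(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    if logical_address >= LIMIT:
        raise MemoryError("trap: logical address beyond limit")
    return RELOCATION + logical_address

print(translate(346))  # -> 14346
```

Every address the process generates is checked against the limit and then offset by the relocation value, so the process never sees a real physical address.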
Difference Between Logical Address and Physical Address
The logical address is generated by the CPU and is a virtual address, while the
physical address is the address actually loaded into the memory-address register. The
user works with logical addresses directly, but a physical address can only be
reached through the mapping of a logical address.
Swapping
The procedure by which a process is moved from the hard disk into main memory (RAM)
is commonly known as Swap In. On the other hand, Swap Out is the method of removing
a process from main memory (RAM) and adding it to the hard disk.
Advantages of Swapping
The advantages/benefits of the Swapping technique are as follows:
The swapping technique mainly helps the CPU to manage multiple processes within a
single main memory.
This technique helps to create and use virtual memory.
With the help of this technique, the CPU can perform several tasks simultaneously. Thus,
processes need not wait too long before their execution.
This technique is economical.
This technique can be easily applied to priority-based scheduling in order to improve its
performance.
Disadvantages of Swapping
The drawbacks of the swapping technique are as follows:
There may be inefficiency if a resource or a variable is shared by the processes
participating in swapping.
If the algorithm used for swapping is not good, the overall method can increase the
number of page faults and degrade processing performance.
If the computer system loses power during a period of high swapping activity, the
user might lose all information related to the program.
Contiguous memory allocation
In contiguous memory allocation, each process is assigned a single contiguous block
of memory. In the fixed-size partition scheme, main memory is divided into partitions
that are allocated to processes as they arrive; which partition an arriving process
receives depends on the algorithm followed.
If there is some wastage inside a partition, it is termed Internal Fragmentation.
Advantages of Fixed-size Partition Scheme
This scheme is simple and easy to implement.
It supports multiprogramming, as multiple processes can be stored in main memory.
Management is easy using this scheme.
Disadvantages of Fixed-size Partition Scheme
Some disadvantages of using this scheme are as follows:
1. Internal Fragmentation
Suppose the size of the process is smaller than the size of the partition; in that
case, part of the partition is wasted and remains unused. This wastage inside memory
is termed internal fragmentation.
As shown in the above diagram, a 70 KB partition is used to load a 50 KB process, so
the remaining 20 KB is wasted.
2. Limitation on the size of the process
If the size of a process is larger than that of the largest partition, that process
cannot be loaded into memory. This imposes a condition on process size: a process
cannot be larger than the largest partition.
3. External Fragmentation
It is another drawback of the fixed-size partition scheme: the total unused space
across the various partitions cannot be used to load a process, because the space,
though available, is not contiguous.
4. Degree of multiprogramming is less
In this partition scheme, the size of a partition cannot change according to the size
of the process. Thus the degree of multiprogramming is very low and is fixed.
Memory Protection
Memory protection is a mechanism by which we control memory access rights on a
computer. Its main aim is to prevent a process from accessing memory that has not
been allocated to it. This prevents a bug within one process from affecting other
processes or the operating system itself; instead, a segmentation fault or storage
violation exception is sent to the offending process, generally killing it.
Memory Allocation in OS
Memory allocation is the process by which computer programs are assigned memory or
space. It is of three types:
First Fit Allocation: The first hole that is big enough is allocated to the program.
Best Fit Allocation: The smallest hole that is big enough is allocated to the program.
Worst Fit Allocation: The largest hole that is big enough is allocated to the program.
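The three strategies can be sketched as follows. The hole sizes and the 20 KB request are made-up illustrative values; each function returns the index of the chosen hole:

```python
# Illustrative sketch: choosing a free hole for a 20 KB request under each strategy.
holes = [100, 25, 60, 20]  # sizes (KB) of free holes, in memory order

def first_fit(holes, size):
    """First hole big enough, scanning in memory order."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Smallest hole that is still big enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Largest hole that is big enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

print(first_fit(holes, 20))  # 0: the 100 KB hole is the first big enough
print(best_fit(holes, 20))   # 3: the 20 KB hole wastes nothing
print(worst_fit(holes, 20))  # 0: the 100 KB hole is the largest
```

Best fit minimizes the leftover inside the chosen hole, while worst fit leaves the largest possible leftover so it may still serve future requests.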
Fragmentation
The problem due to which available memory space goes unutilized is commonly known as
Fragmentation in the operating system.
Fragmentation is further divided into two types:
Internal Fragmentation in OS
Internal Fragmentation is a problem that occurs when a process is allocated a memory
block whose size is larger than the process, leaving part of the memory unused. The
space wasted inside the allocated memory block is due to the restriction on the
allowed sizes of allocated blocks.
In the given diagram, the difference between the memory allocated and the space
required by the process is the internal fragmentation.
External Fragmentation in OS
When the memory space in the system can satisfy the requirement of a process, but the
available memory space is non-contiguous and therefore cannot be utilized, the
problem is referred to as External Fragmentation.
Suppose we want to allocate memory to a process of size 8 MB. As in the above
diagram, there is enough space, 9 MB, to run a process of size 8 MB, but the memory
fragments are not contiguous.
There is a solution to the problem of external fragmentation: Compaction. The main
goal is to shuffle the memory contents so as to place all the free memory together in
one large block.
Compaction is not always possible, however. If relocation is static and done at load
time, compaction cannot be performed; compaction is only possible if relocation is
dynamic and done at execution time.
Difference Between Internal and External Fragmentation
Internal Fragmentation occurs when fixed-size memory blocks are allocated to
processes; External Fragmentation occurs when variable-size memory blocks are
allocated to processes dynamically.
Internal fragmentation mainly occurs when a fixed-size partition is assigned to a
process whose size is less than the size of the partition, so the rest of the space
in the partition becomes unusable; external fragmentation occurs when the memory
space in the system can satisfy a process's requirement, but the available memory
space is non-contiguous and cannot be utilized.
The difference between the allocated memory and the memory required by a process is
called internal fragmentation; the unused spaces between non-contiguous memory
fragments that are too small to serve a new process are called external
fragmentation.
Internal fragmentation refers to the unused space within the allocated partition, as
the name suggests; external fragmentation refers to unused blocks of memory that are
not contiguous and hence cannot satisfy a process's requirements.
Best-fit allocation can be used to reduce internal fragmentation; compaction,
segmentation, and paging can be used to overcome external fragmentation.
Paging suffers from internal fragmentation; first-fit and best-fit allocation suffer
from external fragmentation.
Coalescing and Compaction
The process of merging adjacent holes (free space) to form a single larger hole is
called coalescing. By coalescing holes, the largest possible contiguous blocks of
storage can be obtained.
The technique of storage compaction involves moving all occupied areas of storage to
one end. This creates a single large free storage hole instead of the numerous small
holes common in variable-partition multiprogramming. All of the available free
storage is then contiguous, so a waiting job can run if its memory requirement is met
by the single hole that results from compaction.
The drawbacks of compaction are:
It consumes system resources that could otherwise be used productively.
The system must stop everything while performing compaction. This can result in poor
response times for interactive users and could be disastrous in real-time systems.
Compaction involves relocating the jobs that are in storage. This means that
relocation information, ordinarily lost when a program is loaded, must now be
maintained in a readily accessible form.
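Coalescing can be sketched with a free list of (start, size) holes; adjacent holes are merged into one larger hole. The hole addresses are illustrative:

```python
# Sketch of coalescing: merging adjacent free holes (start, size) into larger ones.
def coalesce(holes):
    holes = sorted(holes)              # order holes by start address
    merged = [holes[0]]
    for start, size in holes[1:]:
        last_start, last_size = merged[-1]
        if last_start + last_size == start:  # this hole touches the previous one
            merged[-1] = (last_start, last_size + size)
        else:
            merged.append((start, size))
    return merged

# Holes at 0-99 and 100-149 touch, so they merge into one 150-byte hole.
print(coalesce([(100, 50), (0, 100), (300, 80)]))  # [(0, 150), (300, 80)]
```

Compaction goes further: instead of merging only touching holes, it relocates the occupied areas so that all free space becomes one hole.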
Virtual Memory
Virtual memory is a space where large programs can store themselves, in the form of
pages, during their execution; only the required pages or portions of a process are
loaded into main memory. This technique is useful because it provides a large virtual
memory for user programs even when physical memory is very small. Thus, virtual
memory is a technique that allows the execution of processes that are not completely
in physical memory.
The Need for Virtual Memory
Following are the reasons why virtual memory is needed:
If a computer running the Windows operating system needs more memory (RAM) than is
installed in the system, it uses a small portion of the hard drive for this purpose.
When the computer does not have enough space in physical memory, it writes what it
needs to remember to the hard disk in a swap file and treats that space as virtual
memory.
Advantages of Virtual Memory
Given below are the advantages of using Virtual Memory:
Virtual memory allows you to run more applications at a time.
With the help of virtual memory, you can fit many large programs into a smaller
physical memory.
With the help of virtual memory, a multiprogramming environment can easily be
implemented.
More processes can be maintained in main memory, which leads to more effective
utilization of the CPU.
Data is read from disk only when it is required.
Common data can easily be shared between processes in memory.
Speed is gained when only a particular segment of the program is required for its
execution.
A process may even be larger than all of physical memory.
Disadvantages of Virtual Memory
Given below are the drawbacks of using Virtual Memory:
Virtual memory reduces the stability of the system.
The performance of Virtual memory is not as good as that of RAM.
If a system is using virtual memory then applications may run slower.
Virtual memory negatively affects the overall performance of a system.
Virtual memory occupies storage space that might otherwise be used for long-term
data storage.
It takes more time to switch between applications.
PAGING
The paging technique divides physical memory (main memory) into fixed-size blocks
known as Frames and divides logical memory into blocks of the same size known as
Pages.
This technique keeps track of all the free frames.
A frame has the same size as a page; a frame is basically a place where a (logical)
page can be (physically) placed.
Pages of a process are brought into main memory only when they are required;
otherwise, they reside in secondary storage.
One page of a process is stored in one of the frames of memory. The pages can be
stored at different locations in memory; the frames holding a process's pages need
not be contiguous.
The logical address generated by the CPU always consists of two parts:
Page Number (p)
Page Offset (d)
where,
the Page Number specifies the page of the process from which the CPU wants to read
data and is also used as an index into the page table,
and the Page Offset specifies the particular word on that page that the CPU wants to
read.
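The split above can be sketched in Python, assuming a 1 KB page size (so the offset takes 10 bits); the address 5000 is an arbitrary example:

```python
# Sketch: splitting a logical address into page number and offset,
# assuming a 1 KB page size (offset = 10 bits).
PAGE_SIZE = 1024
OFFSET_BITS = 10

def split(logical_address):
    page_number = logical_address >> OFFSET_BITS  # equivalently: address // PAGE_SIZE
    offset = logical_address & (PAGE_SIZE - 1)    # equivalently: address % PAGE_SIZE
    return page_number, offset

print(split(5000))  # (4, 904): 5000 = 4 * 1024 + 904
```

Because the page size is a power of two, the split is just a bit shift and a mask, which is why hardware can do it cheaply.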
Advantages of Paging
Given below are some advantages of the Paging technique in the operating system:
Paging mainly allows the storage of parts of a single process in a non-contiguous fashion.
With the help of Paging, the problem of external fragmentation is solved.
Paging is one of the simplest algorithms for memory management.
Disadvantages of Paging
Disadvantages of the Paging technique are as follows:
In Paging, sometimes the page table consumes more memory.
Internal fragmentation is caused by this technique.
There is an increase in time taken to fetch the instruction since now two memory
accesses are required.
The data structure used by the virtual memory system in the operating system to store
the mapping between logical and physical addresses is commonly known as the Page
Table.
The logical address generated by the CPU is translated into the physical address with
the help of the page table.
Thus the page table mainly provides the corresponding frame number (the base address
of the frame) where the page is stored in main memory.
Characteristics of the Page Table
Some of the characteristics of the Page Table are as follows:
It is stored in main memory.
Generally, the number of entries in the page table equals the number of pages into
which the process is divided.
PTBR stands for Page Table Base Register; it holds the base address of the page table
of the current process.
Each process has its own independent page table.
The above diagram shows the paging model of Physical and logical memory.
Techniques used for Structuring the Page Table
Some of the common techniques that are used for structuring the Page table are as follows:
Hierarchical Paging
Hashed Page Tables
Inverted Page Tables
Hierarchical Paging
Another name for hierarchical paging is multilevel paging.
There might be a case where the page table is too big to fit in a contiguous space,
so we may have a hierarchy with several levels.
In this type of paging, the logical address space is broken up into multiple page
tables.
Hierarchical paging is one of the simplest techniques; for this purpose, a two-level
or a three-level page table can be used.
Two Level Page Table
Consider a system having 32-bit logical address space and a page size of 1 KB and it is
further divided into:
Page Number consisting of 22 bits.
Page Offset consisting of 10 bits.
Thus the Logical address is as follows:
In the above diagram,
P1 is an index into the Outer Page table.
P2 indicates the displacement within the page of the Inner page Table.
Since address translation works from the outer page table inward, this is known as a
forward-mapped page table.
The figure below shows the address translation scheme for a two-level page table.
To avoid such a large table, the outer page table can itself be divided, which
results in a three-level page table:
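As a sketch, the two-level split of the 32-bit address above might look like this. The division of the 22-bit page number into p1 = 12 bits and p2 = 10 bits is an assumption for illustration; the text does not fix the exact split:

```python
# Sketch of the two-level split for a 32-bit address with a 1 KB page size.
# Assumed split (not fixed by the text): p1 = 12 bits, p2 = 10 bits, offset = 10 bits.
def split_two_level(addr):
    offset = addr & 0x3FF       # low 10 bits: word within the page
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: index into an inner page table
    p1 = addr >> 20             # top 12 bits: index into the outer page table
    return p1, p2, offset

print(split_two_level(0x0040_1803))  # (4, 6, 3)
```

Translation then uses p1 to find an inner page table, p2 to find the frame, and the offset to pick the word within the frame.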
Hashed Page Tables
This approach is used to handle address spaces larger than 32 bits.
In this scheme, the virtual page number is hashed into a page table.
The page table contains a chain of elements that hash to the same location.
Each element mainly consists of:
The virtual page number
The value of the mapped page frame
A pointer to the next element in the linked list
The figure below shows the address translation scheme of the hashed page table:
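A minimal sketch of a hashed page table with chaining follows; the bucket count, virtual page numbers, and frame numbers are made-up illustrative values:

```python
# Minimal sketch of a hashed page table with chaining.
class HashedPageTable:
    def __init__(self, buckets=8):
        self.table = [[] for _ in range(buckets)]  # each bucket holds a chain

    def insert(self, vpn, frame):
        """Map virtual page number `vpn` to `frame`."""
        self.table[vpn % len(self.table)].append((vpn, frame))

    def lookup(self, vpn):
        """Walk the chain at the hashed bucket to find the matching VPN."""
        for entry_vpn, frame in self.table[vpn % len(self.table)]:
            if entry_vpn == vpn:
                return frame
        raise KeyError("page fault: VPN not mapped")

pt = HashedPageTable()
pt.insert(3, 42)      # VPN 3 -> frame 42
pt.insert(11, 7)      # VPN 11 hashes to the same bucket as 3 (11 % 8 == 3)
print(pt.lookup(11))  # 7
```

Chaining keeps the table small even for a huge virtual address space, at the cost of walking the chain on each lookup.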
FIFO Page Replacement Algorithm
In First In First Out (FIFO) page replacement, the page that has been in memory the
longest is replaced first. Belady's anomaly proves that it is possible to have more
page faults when increasing the number of page frames while using FIFO. For example,
for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames we get 9
total page faults, but if we increase the number of frames to 4, we get 10 page
faults.
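The example above can be checked with a small FIFO simulation:

```python
# Sketch of FIFO replacement, reproducing Belady's anomaly for the string above.
from collections import deque

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:   # memory full: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```

The queue records arrival order only; hits do not touch it, which is exactly why FIFO can behave counterintuitively as frames are added.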
Advantages
This algorithm is simple and easy to use.
FIFO does not cause more overhead.
Disadvantages
This algorithm does not make use of the frequency or recency of page use; it simply
replaces the oldest page.
Page faults can increase as the number of page frames increases (Belady's anomaly).
The performance of this algorithm is often the worst among the common algorithms.
LIFO Page Replacement Algorithm
This page replacement algorithm stands for "Last In First Out" and works on the LIFO
principle.
In this algorithm, the newest page, the one that arrived last in primary memory, is
replaced.
This algorithm makes use of a stack for monitoring all the pages.
LRU Page Replacement Algorithm
This algorithm stands for "Least Recently Used" and helps the operating system find
the pages that have not been used over a recent time frame.
The page that has not been used for the longest time in main memory is selected for
replacement.
This algorithm is easy to implement.
This algorithm makes use of a counter associated with each page.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
with 4 page frames. Find the number of page faults.
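A short simulation resolves the example; for this string with 4 frames, LRU incurs 6 page faults:

```python
# Sketch of LRU replacement for the reference string in the example above.
def lru_faults(refs, num_frames):
    frames, faults = [], 0   # list ordered from least to most recently used
    for page in refs:
        if page in frames:
            frames.remove(page)      # hit: page will move to the MRU end
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)          # page is now the most recently used
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```

Keeping the list in recency order makes eviction a pop from the front; a real implementation would use hardware counters or a doubly linked list instead.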
Advantages of LRU
It is an efficient technique.
With this algorithm, it becomes easy to identify pages that have not been needed for
a long time.
It lends itself to full analysis.
Disadvantages of LRU
It is expensive and has more complexity.
There is a need for an additional data structure.
Optimal Page Replacement Algorithm
This algorithm mainly replaces the page that will not be used for the longest time in the
future. The practical implementation of this algorithm is not possible.
Practical implementation is not possible because we cannot predict in advance those
pages that will not be used for the longest time in the future.
This algorithm leads to the fewest page faults and is thus the best-known algorithm.
Also, this algorithm can be used to measure the performance of other algorithms.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
with 4 page frames. Find the number of page faults.
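The example can only be resolved by simulation, since the algorithm "cheats" by looking at the full future reference string; for this string with 4 frames it incurs 6 page faults:

```python
# Sketch of the Optimal algorithm for the reference string in the example above.
def optimal_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            future = refs[i + 1:]
            # Evict the page whose next use is farthest away (or never occurs).
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(future))
            frames[frames.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```

This is why Optimal serves only as a benchmark: the `future` slice is exactly the knowledge a real operating system never has.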
Advantages of OPR
This algorithm is easy to use.
This algorithm provides excellent efficiency and is less complex.
The data structures required to simulate it are easy to implement.
Disadvantages of OPR
This algorithm requires future knowledge of the program's behavior.
Practical implementation is not possible because the operating system cannot track
future requests.
Second Chance page replacement
In the Second Chance page replacement policy, the candidate pages for removal are
considered in a round-robin manner, and a page that has been accessed between
consecutive considerations will not be replaced. The page replaced is the one that,
when considered in a round-robin manner, has not been accessed since its last
consideration.
It can be implemented by adding a “second chance” bit to each memory frame-every
time the frame is considered (due to a reference made to the page inside it), this bit is
set to 1, which gives the page a second chance, as when we consider the candidate page
for replacement, we replace the first one with this bit set to 0 (while zeroing out bits of
the other pages we see in the process). Thus, a page with the “second chance” bit set to
1 is never replaced during the first consideration and will only be replaced if all the other
pages deserve a second chance too!
Example –
Let’s say the reference string is 0 4 1 4 2 4 3 4 2 4 0 4 1 4 2 4 3 4 and we have 3 frames.
Let’s see how the algorithm proceeds by tracking the second chance bit and the pointer.
Initially, all frames are empty so after first 3 passes they will be filled with {0, 4, 1} and the
second chance array will be {0, 0, 0} as none has been referenced yet. Also, the pointer will
cycle back to 0.
Pass-4: Frame={0, 4, 1}, second_chance = {0, 1, 0} [4 will get a second chance], pointer = 0
(No page needed to be updated so the candidate is still page in frame 0), pf = 3 (No
increase in page fault number).
Pass-5: Frame={2, 4, 1}, second_chance= {0, 1, 0} [0 replaced; its second chance bit was 0,
so it didn't get a second chance], pointer=1 (updated), pf=4
Pass-6: Frame={2, 4, 1}, second_chance={0, 1, 0}, pointer=1, pf=4 (No change)
Pass-7: Frame={2, 4, 3}, second_chance= {0, 0, 0} [4 survived but its second chance bit
became 0], pointer=0 (as element at index 2 was finally replaced), pf=5
Pass-8: Frame={2, 4, 3}, second_chance= {0, 1, 0} [4 referenced again], pointer=0, pf=5
Pass-9: Frame={2, 4, 3}, second_chance= {1, 1, 0} [2 referenced again], pointer=0, pf=5
Pass-10: Frame={2, 4, 3}, second_chance= {1, 1, 0}, pointer=0, pf=5 (no change)
Pass-11: Frame={2, 4, 0}, second_chance= {0, 0, 0}, pointer=0, pf=6 (2 and 4 got second
chances)
Pass-12: Frame={2, 4, 0}, second_chance= {0, 1, 0}, pointer=0, pf=6 (4 will again get a
second chance)
Pass-13: Frame={1, 4, 0}, second_chance= {0, 1, 0}, pointer=1, pf=7 (pointer updated, pf
updated)
Pass-14: Frame={1, 4, 0}, second_chance= {0, 1, 0}, pointer=1, pf=7 (No change)
Pass-15: Frame={1, 4, 2}, second_chance= {0, 0, 0}, pointer=0, pf=8 (4 survived again due
to its second chance!)
Pass-16: Frame={1, 4, 2}, second_chance= {0, 1, 0}, pointer=0, pf=8 (second chance bit updated)
Pass-17: Frame={3, 4, 2}, second_chance= {0, 1, 0}, pointer=1, pf=9 (pointer, pf updated)
Pass-18: Frame={3, 4, 2}, second_chance= {0, 1, 0}, pointer=1, pf=9 (No change)
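The trace above can be reproduced with a small clock-style simulation; as in the trace, a newly loaded page starts with its second-chance bit at 0, and a hit sets the bit to 1:

```python
# Sketch of the Second Chance (clock) policy, reproducing the trace above.
def second_chance_faults(refs, num_frames):
    frames = [None] * num_frames
    bits = [0] * num_frames
    ptr, faults = 0, 0
    for page in refs:
        if page in frames:
            bits[frames.index(page)] = 1   # referenced: grant a second chance
            continue
        faults += 1
        while bits[ptr] == 1:              # spend second chances as we pass
            bits[ptr] = 0
            ptr = (ptr + 1) % num_frames
        frames[ptr] = page                 # replace the first page with bit 0
        ptr = (ptr + 1) % num_frames
    return faults

refs = [0, 4, 1, 4, 2, 4, 3, 4, 2, 4, 0, 4, 1, 4, 2, 4, 3, 4]
print(second_chance_faults(refs, 3))  # 9
```

Note how page 4, referenced on every other pass, keeps its bit set and is never evicted, exactly as the trace shows.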
Thrashing
Thrashing occurs when page faults and swapping happen very frequently, so the
operating system has to spend more time swapping pages than executing processes.
Because of thrashing, CPU utilization is reduced or becomes negligible.
The basic concept is that if a process is allocated too few frames, there will be too
many and too frequent page faults. As a result, no valuable work gets done by the
CPU, and CPU utilization falls drastically.
How to Eliminate Thrashing
Thrashing has some negative impacts on hard drive health and system performance.
Therefore, it is necessary to take some actions to avoid it. To resolve the problem of
thrashing, here are the following methods, such as:
Adjust the swap file size: If the system swap file is not configured correctly, disk
thrashing can occur.
Increase the amount of RAM: Since insufficient memory can cause disk thrashing, one
solution is to add more RAM. With more memory, the computer can handle tasks easily
and does not have to work excessively. This is generally the best long-term solution.
Decrease the number of applications running on the computer: Too many applications
running in the background consume a large share of system resources, and the little
that remains can lead to thrashing. Closing some applications releases resources and
helps avoid thrashing to some extent.
Replace programs: Replace memory-heavy programs with equivalents that use less
memory.
Techniques to Prevent Thrashing
Local page replacement is better than global page replacement, but local page
replacement has many disadvantages of its own, so it is not always helpful. Below are
some other techniques used to handle thrashing:
1. Locality Model
A locality is a set of pages that are actively used together. The locality model states that as
a process executes, it moves from one locality to another. Thus, a program is generally
composed of several different localities which may overlap.
For example, when a function is called, it defines a new locality where memory references
are made to the function call instructions, local and global variables, etc. Similarly, when
the function is exited, the process leaves this locality.
2. Working-Set Model
This model is based on the above-stated concept of the Locality Model.
The basic principle states that if we allocate enough frames to a process to accommodate
its current locality, it will only fault whenever it moves to some new locality. But if the
allocated frames are lesser than the size of the current locality, the process is bound to
thrash.
According to this model, based on parameter A, the working set is defined as the set of
pages in the most recent 'A' page references. Hence, all the actively used pages would
always end up being a part of the working set.
The accuracy of the working set is dependent on the value of parameter A. If A is too large,
then working sets may overlap. On the other hand, for smaller values of A, the locality
might not be covered entirely.
If D is the total demand for frames and WSSi is the working-set size for process i,
D = Σ WSSi
Now, if 'm' is the number of frames available in the memory, there are two possibilities:
D>m, i.e., total demand exceeds the number of frames, then thrashing will occur as some
processes would not get enough frames.
D<=m, then there would be no thrashing.
If there are enough extra frames, then some more processes can be loaded into the
memory. On the other hand, if the summation of working set sizes exceeds the frames'
availability, some of the processes have to be suspended (swapped out of memory).
This technique prevents thrashing along with ensuring the highest degree of
multiprogramming possible. Thus, it optimizes CPU utilization.
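The demand check above can be sketched as follows. The reference strings, the window size delta (the parameter A above), and the frame count m are all made-up illustrative values:

```python
# Sketch of the working-set check: thrashing is predicted when total demand
# D (the sum of working-set sizes) exceeds the available frames m.
def working_set_size(refs, delta):
    """Number of distinct pages in the last `delta` references."""
    return len(set(refs[-delta:]))

# Hypothetical recent reference strings for two processes, window delta = 5.
wss = [working_set_size([1, 2, 1, 3, 2, 1, 4], 5),
       working_set_size([7, 7, 8, 9, 8, 7, 9], 5)]
D = sum(wss)
m = 8  # frames available in memory (assumed)
print(D, "thrashing" if D > m else "no thrashing")  # 7 no thrashing
```

If D rose above m, the model says some process must be swapped out until the total demand fits again.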
Segmentation
In Operating Systems, Segmentation is a memory management technique in which the
memory is divided into the variable size parts. Each part is known as a segment which can
be allocated to a process.
The details about each segment are stored in a table called a segment table. Segment
table is stored in one (or many) of the segments.
The segment table mainly contains two pieces of information about each segment:
Base: the base address of the segment
Limit: the length of the segment
Translation of Logical address into physical address by segment table
CPU generates a logical address which contains two parts:
•Segment Number
•Offset
For Example:
Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for
the segment offset; then the maximum segment size is 4096 and the maximum number of
segments that can be referred to is 16.
When a program is loaded into memory, the segmentation system tries to locate space
that is large enough to hold the first segment of the process; space information is
obtained from the free list maintained by the memory manager. It then tries to locate
space for the other segments. Once adequate space is located for all the segments, it
loads them into their respective areas.
The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system
can easily translate a logical address into physical address on execution of a program.
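As a sketch, translation via a segment table might look like this; the base and limit values are illustrative, not taken from the text:

```python
# Sketch of logical-to-physical translation with a segment table
# (base/limit values are illustrative).
segment_table = [                     # index = segment number
    {"base": 1400, "limit": 1000},    # segment 0
    {"base": 6300, "limit": 400},     # segment 1
]

def translate(segment, offset):
    """Check the offset against the segment limit, then add the base."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("trap: offset beyond segment limit")
    return entry["base"] + offset

print(translate(1, 53))  # -> 6353
```

The limit check is what gives segmentation its protection property: a process cannot address past the end of its own segment.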
Advantages of Segmentation
No internal fragmentation
Average Segment Size is larger than the actual page size.
Less overhead
It is easier to relocate segments than entire address space.
The segment table is of lesser size as compared to the page table in paging.
Disadvantages
It can have external fragmentation.
It is difficult to allocate contiguous memory to variable-sized partitions.
Costly memory management algorithms.