SPOS - Unit5 - Memory Management


Sub: SPOS

Unit 5 Memory Management

Basics of Memory Management


Memory management is the process of managing a computer's memory resources
effectively. The primary goal of memory management is to allocate and deallocate
memory blocks to programs and processes in a way that maximizes the system's
performance and efficiency.

Here are some of the basics of memory management:

1. Memory allocation: Memory allocation is the process of assigning memory blocks
to programs and processes. The operating system allocates memory blocks to
programs based on their memory requirements.

2. Memory deallocation: Memory deallocation is the process of freeing up memory
blocks that are no longer in use by a program. This helps to optimize memory
usage and prevent memory leaks.

3. Memory fragmentation: Memory fragmentation occurs when memory is
allocated and deallocated in a way that creates small, unusable blocks of
memory. This can lead to inefficient use of memory and reduced system
performance.

4. Virtual memory: Virtual memory is a memory management technique that allows a
computer to use more memory than it physically has available. The operating
system can use hard disk space as virtual memory to supplement the physical
memory.

5. Memory swapping: Memory swapping is the process of moving data from the
physical memory to the hard disk to free up space for other processes. This
helps to manage memory resources effectively and prevent memory shortages.
6. Effective memory management is critical for maintaining system performance
and stability. Operating systems use various techniques, such as memory
allocation algorithms and virtual memory management, to manage memory
resources efficiently.

Swapping:
Q. Explain the concept of swapping in detail

Swapping is a memory management technique used by operating systems to move data
between the physical memory and the hard disk. It is closely related to paging, but
while paging moves fixed-size pages, swapping traditionally moves an entire process
at a time.

When a process requires more memory than is available in the physical memory, the
operating system moves some of the data from the physical memory to the hard disk to
free up space. This data is moved to a special area on the hard disk called the swap file
or paging file.

The process of moving data between the physical memory and the hard disk is known as
swapping. The operating system performs this task automatically in the background,
without the user's knowledge.

When a process requires data that has been swapped to the hard disk, the
operating system moves it back into the physical memory. An access to data that is
not currently in memory triggers a page fault, and servicing it can cause a
temporary delay in the process's execution.

Swapping helps to manage memory resources effectively, especially when the system
has limited physical memory. It allows multiple processes to run simultaneously, even
when the system does not have enough physical memory to accommodate them all.

However, swapping can also slow down the system's performance if there is excessive
swapping. This can occur when the system does not have enough physical memory or
when the processes require more memory than is available. In such cases, the system
spends more time moving data between the physical memory and the hard disk, which
can cause delays and reduce overall system performance.

Swap-in moves a process from the hard drive (secondary memory) into RAM, while
swap-out moves a process from RAM (main memory) to the hard drive.
Swapping also allocates space for the growing data and stack segments.

Memory Allocation:

Memory allocation is the process of assigning memory blocks to programs and processes
in an operating system. The memory allocation process is critical for the efficient use
of memory resources and the overall performance of the system.

Here are some of the common memory allocation techniques used in operating systems:
Fixed Partitioning
The earliest and one of the simplest techniques that can be used to load more than
one process into the main memory is fixed partitioning, or contiguous memory
allocation.

In this technique, the main memory is divided into partitions of equal or different
sizes. The operating system always resides in the first partition, while the other
partitions are used to store user processes. Memory is assigned to the processes
in a contiguous manner.

In this method, the operating system maintains a table that indicates which parts of
memory are available and which are occupied by processes.
Initially, all memory available for user processes is considered one large block of
available memory, known as a "hole". When a process arrives and needs memory, we
search for a hole that is large enough to store it.
If a suitable hole is found, only as much memory as is needed is allocated to the
process, keeping the rest of the hole available to satisfy future requests. While
allocating memory, a dynamic storage allocation problem arises: how to satisfy a
request of size n from a list of free holes. There are some solutions to this problem:
First fit: -
In first fit, the process is allocated the first free hole that is large enough to
satisfy its requirement.

Here, in this diagram, the 40 KB memory block is the first free hole that can store
process A (size 25 KB), because the first two blocks do not have sufficient memory
space.

Best fit: -
In best fit, the process is allocated the smallest hole that is big enough for its
requirement. This requires searching the entire list, unless the list is ordered by size.
Here in this example, we first traverse the complete list and find that the last hole,
25 KB, is the most suitable hole for process A (size 25 KB).

In this method, memory utilization is the highest compared to the other memory
allocation techniques.

Worst fit: -
In worst fit, the process is allocated the largest available hole. This method
produces the largest leftover hole.

Here in this example, Process A (Size 25 KB) is allocated to the largest available
memory block which is 60KB. Inefficient memory utilization is a major issue in the
worst fit.
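The three placement strategies above can be sketched in Python. This is an illustrative sketch, not part of the syllabus text: the free-hole list reuses the block sizes from the examples (10 KB, 15 KB, 40 KB, 60 KB, 25 KB) with hypothetical start addresses.

```python
# Sketch of first fit, best fit and worst fit hole selection.
# `holes` is a free list of (start_address, size_in_kb) pairs.

def first_fit(holes, size):
    """Return the start of the first hole large enough for the request."""
    for start, hole_size in holes:
        if hole_size >= size:
            return start
    return None  # no hole can satisfy the request

def best_fit(holes, size):
    """Return the start of the smallest hole that is still large enough."""
    fits = [(hole_size, start) for start, hole_size in holes if hole_size >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Return the start of the largest hole, leaving the biggest leftover."""
    fits = [(hole_size, start) for start, hole_size in holes if hole_size >= size]
    return max(fits)[1] if fits else None

# Holes of 10, 15, 40, 60 and 25 KB at hypothetical start addresses
holes = [(0, 10), (10, 15), (25, 40), (65, 60), (125, 25)]
print(first_fit(holes, 25))  # 25  -> the 40 KB hole (first one big enough)
print(best_fit(holes, 25))   # 125 -> the 25 KB hole (tightest fit)
print(worst_fit(holes, 25))  # 65  -> the 60 KB hole (largest leftover)
```

Note that best fit and worst fit must examine the whole list, while first fit can stop at the first match.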

In fixed partitioning,

The partitions cannot overlap.

A process must be contiguously present in a partition for the execution.

There are various cons of using this technique.

1. Internal Fragmentation

If the size of the process is less than the size of the partition, part of the
partition is wasted and remains unused. This wasted memory is called internal
fragmentation.
As shown in the image below, a 4 MB partition is used to load only a 3 MB process,
and the remaining 1 MB is wasted.

2. External Fragmentation

The total unused space of the various partitions cannot be used to load a process
even though space is available, because it is not in contiguous form.

As shown in the image below, the remaining 1 MB space of each partition cannot be
combined to store a 4 MB process. Even though sufficient total space is available,
the process cannot be loaded.

3. Limitation on the size of the process

If the process size is larger than the size of the largest partition, that process
cannot be loaded into memory at all. This imposes a limitation on process size: a
process cannot be larger than the largest partition.

4. Degree of multiprogramming is less

By degree of multiprogramming, we simply mean the maximum number of processes
that can be loaded into the memory at the same time. In fixed partitioning, the
degree of multiprogramming is fixed and quite low, because the partition sizes
cannot be varied according to the sizes of the processes.
Dynamic Partitioning
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In
this technique, the partition size is not declared initially. It is declared at the time of
process loading.

The first partition is reserved for the operating system. The remaining space is divided
into parts. The size of each partition will be equal to the size of the process. The
partition size varies according to the need of the process so that the internal
fragmentation can be avoided.

Advantages of Dynamic Partitioning over fixed partitioning

1. No Internal Fragmentation

Given that the partitions in dynamic partitioning are created according to the need
of the process, there will not be any internal fragmentation, because there is no
unused remaining space inside a partition.

2. No Limitation on the size of the process

In fixed partitioning, a process larger than the largest partition could not be
executed due to the lack of sufficient contiguous memory. In dynamic partitioning,
the process size is not restricted, since the partition size is decided according to
the process size.

3. Degree of multiprogramming is dynamic

Due to the absence of internal fragmentation, there is no unused space inside the
partitions, hence more processes can be loaded into the memory at the same time.

Disadvantages of dynamic partitioning

External Fragmentation

Absence of internal fragmentation doesn't mean that there will not be external
fragmentation.

Let's consider three processes, P1 (1 MB), P2 (3 MB) and P3 (1 MB), loaded into
their respective partitions of the main memory.

After some time P1 and P3 got completed and their assigned space is freed. Now there
are two unused partitions (1 MB and 1 MB) available in the main memory but they cannot
be used to load a 2 MB process in the memory since they are not contiguously located.

The rule says that the process must be contiguously present in the main memory to get
executed. We need to change this rule to avoid external fragmentation.
Paging
In Operating Systems, Paging is a storage mechanism used to retrieve processes from
the secondary storage into the main memory in the form of pages.

The main idea behind paging is to divide each process into pages. The main memory
is likewise divided into frames.

One page of the process is stored in one of the frames of memory. The pages can be
stored at different locations in memory, but the operating system still prefers to
find contiguous free frames where possible.

Pages of the process are brought into the main memory only when they are required
otherwise, they reside in the secondary storage.
Different operating systems define different frame sizes, but within a system all
frames must be of equal size. Since pages are mapped onto frames in paging, the
page size must be the same as the frame size.

Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main
memory is therefore divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, that is, P1, P2, P3 and P4, of 4 KB each. Each
process is divided into pages of 1 KB each, so that one page can be stored in one frame.

Initially, all the frames are empty, so the pages of the processes are stored in a
contiguous way.

Frames, pages and the mapping between the two is shown in the image below.

Let us consider that P2 and P4 move to the waiting state after some time. Now 8
frames become empty, so other pages can be loaded in their place. The process P5,
of size 8 KB (8 pages), is waiting in the ready queue.
We have 8 non-contiguous free frames available in memory, and paging provides the
flexibility of storing a process at different places. Therefore, we can load the
pages of process P5 into the frames freed by P2 and P4.
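The page-to-frame mapping can be sketched as a simple address translation. The page-table contents below are hypothetical, but the 1 KB page and frame size matches the example above:

```python
# Sketch of paging address translation with 1 KB pages and frames,
# as in the 16 KB main memory example above.

PAGE_SIZE = 1024  # 1 KB, so frame size is also 1 KB

# Page table for one 4 KB process: page number -> frame number.
# The frames are deliberately non-contiguous.
page_table = {0: 5, 1: 12, 2: 3, 3: 9}

def page_translate(logical_address):
    page = logical_address // PAGE_SIZE   # which page of the process
    offset = logical_address % PAGE_SIZE  # position within that page
    frame = page_table[page]              # page-table lookup
    return frame * PAGE_SIZE + offset     # physical address

# Byte 100 of page 2 lands at byte 100 of frame 3
print(page_translate(2 * PAGE_SIZE + 100))  # 3*1024 + 100 = 3172
```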

Segmentation:
Q. Explain the concept of segmentation.

Segmentation is another memory management technique used by operating systems to
allocate and manage memory in a computer system. Unlike paging, which divides memory
into fixed-size pages, segmentation divides memory into variable-sized segments. Each
segment corresponds to a logical unit of a program, such as a function, a data
structure, or a code block.
When a program requests memory, the operating system assigns it one or more
contiguous segments in the logical address space of the program. Each segment has a
base address and a length, and the program can reference memory locations relative to
the base address of each segment. The operating system maintains a table called the
"segment table" which maps each segment to its base address and length in physical
memory.

When the program references a memory location, the operating system uses the
segment table to translate the logical address to a physical address, by adding the base
address of the corresponding segment to the offset of the memory location within that
segment. If the program references a memory location outside the bounds of a
segment, or if the segment is not currently in memory, the operating system generates
a segmentation fault, which terminates the program.

Segmentation provides several benefits, including better support for dynamic data
structures, support for sharing and protection of code and data, and flexibility in
memory allocation. However, it also introduces some overhead due to the need to
maintain the segment table and perform bounds checking, which can impact system
performance.
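The translation described above can be sketched as follows; the segment-table entries are hypothetical:

```python
# Sketch of segmentation address translation with bounds checking.
# Each segment table entry holds (base physical address, length).

segment_table = {
    0: (1400, 1000),  # e.g. code segment
    1: (6300, 400),   # e.g. stack segment
    2: (4300, 1100),  # e.g. data segment
}

def seg_translate(segment, offset):
    base, length = segment_table[segment]
    if offset >= length:
        # Reference outside the segment's bounds -> segmentation fault
        raise MemoryError("segmentation fault")
    return base + offset  # physical = segment base + offset

print(seg_translate(2, 53))  # 4300 + 53 = 4353
# seg_translate(1, 500) would raise: offset 500 exceeds the 400-byte limit
```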

Virtual Memory
Q. Explain the concept of the Virtual Memory system.

Virtual memory is a memory management technique used by operating systems to allow
programs to access more memory than is physically available in the system. It provides
a way for programs to use a larger amount of memory than the physical memory of the
computer, by temporarily transferring less frequently used parts of the program's
memory to a disk or storage device, freeing up space in physical memory for more
frequently used parts.

Virtual memory works by dividing the logical address space of a program into fixed-size
pages, similar to paging. The pages are mapped to physical memory frames, and if a
required page is not in physical memory, it is fetched from the disk into a free frame in
physical memory. The operating system keeps track of which pages are in physical
memory and which are on disk using a page table, similar to the one used in paging.

When a program references a memory location, the operating system checks the page
table to see if the required page is in physical memory. If it is, the memory access
proceeds as usual. If not, a page fault is triggered, which causes the operating system
to bring the required page from disk into physical memory and update the page table.

Virtual memory allows multiple programs to run simultaneously on a computer, without
requiring each program to have enough physical memory to hold its entire memory
space. It also enables the operating system to efficiently allocate memory resources
among different programs and manage system resources more effectively. However, it
does introduce some overhead due to the need to manage the page table and perform
disk I/O operations, which can impact system performance.

Demand Paging
Q. Explain demand paging with an example.

Demand paging is a memory management technique used by modern operating systems
to optimize the use of physical memory. It allows the system to load pages of data
from secondary storage (such as a hard disk) into physical memory only when they are
actually needed, rather than loading the entire program or file into memory at once.

When a process requires a page of data that is not currently in physical memory, the
operating system generates a page fault, indicating that the required page is not
present in memory. The operating system then retrieves the required page from
secondary storage and places it in a free frame in physical memory.

Demand paging is beneficial because it allows the system to use physical memory more
efficiently. Rather than keeping entire programs or files in memory, the system only
loads the pages that are actually needed. This allows more programs and data to be
loaded into memory at once, improving overall system performance.

However, demand paging can also introduce some overhead, as the system needs to
constantly monitor which pages are being used and swap pages in and out of memory as
needed. In addition, if the system runs out of free memory frames, it may need to swap
pages to and from secondary storage frequently, which can result in performance
degradation.

Here is an example of how demand paging might work:

A user opens a word processing program on their computer.

The operating system loads the initial pages of the program into memory, including the
program's instructions and any data that is required for the user interface.

The user begins typing a document. As they type, the program requires more memory to
store the text.

The operating system detects that the program needs more memory and loads
additional pages of the program into memory as they are needed.

If the user stops typing for a period of time, the operating system may decide to
unload some of the program's pages from memory to free up space for other programs.

If the user switches to a different program, the operating system may unload some or
all of the pages from the word processing program to free up memory for the new
program.

Demand paging allows the operating system to allocate memory more efficiently and
only load the parts of a program that are needed at any given time. This can result in
faster program load times and improved system performance, especially on systems
with limited memory.
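The behaviour described above can be sketched as a minimal demand pager that fetches a page only on its first access. This sketch assumes an unlimited supply of free frames; eviction when frames run out is covered under page replacement below.

```python
# Minimal demand-paging sketch: a page is loaded from secondary
# storage only when first referenced, never up front.

loaded = set()  # pages currently resident in physical memory
faults = 0      # page-fault count

def access(page):
    global faults
    if page not in loaded:
        faults += 1       # page fault: fetch the page from disk
        loaded.add(page)  # place it in a free frame

# A program whose address space has many pages touches only 0, 1 and 3
for page in [0, 1, 0, 3, 1, 0]:
    access(page)

print(faults)       # 3: each distinct page faults exactly once
print(len(loaded))  # 3: only the touched pages are resident
```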

Page Replacement
Page replacement is a technique used by operating systems to manage the limited
amount of physical memory available in a computer system. When a process needs to
access a page that is not currently in physical memory, the operating system needs to
find a free frame to bring the required page into memory. However, if there are no
free frames available, the operating system needs to evict an existing page from
memory to make room for the new page. This process is known as page replacement.

There are several page replacement algorithms used by operating systems, each with
its own advantages and disadvantages. Some common page replacement algorithms
include:

FIFO (First-In-First-Out): This algorithm replaces the page that has been in memory
the longest. This algorithm is easy to implement, but it may not always make the best
decisions about which pages to replace.

LRU (Least Recently Used): This algorithm replaces the page that has not been
accessed for the longest time. This algorithm is more effective than FIFO because it is
based on actual usage patterns, but it can be more complex to implement.

Optimal: This algorithm replaces the page that will not be needed for the longest time
in the future. This algorithm provides the best possible performance, but it requires
knowledge of future page accesses, which is usually not available in practice.

LRU (Least Recently Used) approximation: This is a page replacement algorithm used by
operating systems to evict pages from physical memory when there is no free memory
available. It is called an "approximation" because it is not always possible to determine
the least recently used page accurately. Instead, the algorithm uses a statistical
approach to approximate the least recently used page.

The basic idea behind LRU approximation is to keep track of which pages have been
accessed recently and which have not. The algorithm maintains a list of pages, ordered
from the most recently accessed to the least recently accessed. When a page needs to
be evicted from memory, the algorithm selects the page at the end of the list (i.e., the
least recently accessed page) for eviction.

However, maintaining a list of all pages in memory can be costly in terms of both
memory and computational overhead. To reduce this overhead, the operating system
can use a sampling approach to estimate which pages are least recently used. In this
approach, the system periodically selects a subset of pages and updates their access
times. Pages that have not been accessed since the last update are considered less
recently used and are more likely to be selected for eviction.

Another approach is to use a clock-based algorithm to approximate LRU. In this
approach, each page is associated with a "use bit" that is set when the page is
accessed. The algorithm uses a circular list of pages and a "hand" that moves around
the list. Pages with a set use bit are skipped, and the use bit is cleared when the "hand"
passes over them. The page selected for eviction is the first page encountered with a
cleared use bit.

Overall, LRU approximation can provide good performance in most cases and is
commonly used in modern operating systems. However, it is not always the best choice
for every situation and may need to be adjusted or combined with other algorithms to
optimize performance.

Clock: This algorithm is similar to FIFO but includes a "use bit" that is set when a page
is accessed. Pages are selected for replacement in a circular fashion, starting from the
current position of a "hand." Pages with a set use bit are skipped, and the use bit is
cleared when the "hand" passes over them. This algorithm is more effective than FIFO
but less complex than LRU.
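The clock algorithm can be sketched as below; the `Clock` class and the sample reference string are illustrative additions, not from the original notes:

```python
# Sketch of the clock (second-chance) page replacement algorithm.
# Each frame has a use bit; the hand clears set bits as it sweeps
# and evicts the first frame it finds with a cleared bit.

class Clock:
    def __init__(self, num_frames):
        self.frames = [None] * num_frames  # page held by each frame
        self.use = [0] * num_frames        # use bit per frame
        self.hand = 0

    def access(self, page):
        if page in self.frames:
            self.use[self.frames.index(page)] = 1  # hit: set use bit
            return False                           # no page fault
        # Fault: sweep until a frame with a cleared use bit is found
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0                # give a second chance
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page              # evict and load
        self.use[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.frames)
        return True                                # page fault

clock = Clock(3)
faults = sum(clock.access(p) for p in [4, 7, 6, 1, 7, 6, 1, 2, 7, 2])
print(faults)  # 6 page faults on this sample reference string
```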

Overall, the choice of page replacement algorithm depends on the specific
characteristics of the system and the workload being executed.

Page Replacement Algorithms: Example

Q. Consider a reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. The number of frames in
the memory is 3. Find the number of page faults with respect to:

1. Optimal Page Replacement Algorithm

2. FIFO Page Replacement Algorithm

3. LRU Page Replacement Algorithm

Optimal Page Replacement Algorithm

Number of Page Faults in Optimal Page Replacement Algorithm = 5

LRU Page Replacement Algorithm

Number of Page Faults in LRU = 6

FIFO Page Replacement Algorithm

Number of Page Faults in FIFO = 6
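The fault counts above can be checked with a short simulation. This is a sketch written for this example (`page_faults` is a helper defined here, not a standard function):

```python
# Count page faults for FIFO, LRU and Optimal replacement.
def page_faults(refs, num_frames, policy):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "lru":          # a hit refreshes recency
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1                      # page fault
        if len(frames) < num_frames:     # a free frame is available
            frames.append(page)
            continue
        if policy in ("fifo", "lru"):
            victim = frames[0]           # oldest arrival / least recently used
        else:                            # optimal: needed farthest in future
            future = refs[i + 1:]
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future
                         else len(future) + 1)
        frames.remove(victim)
        frames.append(page)
    return faults

refs = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]
print(page_faults(refs, 3, "fifo"))     # 6
print(page_faults(refs, 3, "lru"))      # 6
print(page_faults(refs, 3, "optimal"))  # 5
```

For FIFO, hits never reorder the list, so its order is arrival order; for LRU, hits move the page to the back. In both cases the victim sits at the front of the list.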

Examples for Practice:

1. For the following reference string

6, 5, 1, 2, 5, 3, 5, 4, 2, 3, 6, 3, 2, 1, 2

count the number of page faults that occur with 3 frames using FIFO, LRU and
Optimal.
2. Consider the page reference string

2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2

Discuss the working of the following page replacement policies and count the page faults.

FIFO
LRU
Optimal.
(Frame size = 3)
3. Consider the page reference string

1, 2, 3, 4, 2, 3, 4, 5, 6, 7, 3, 2, 4

Calculate page fault and hit ratio for FIFO, LRU and Optimal.

(Frame size = 3)
