Lecture 8 Memory Management


Memory Management

Session Slides
Swapping

 Swapping is a mechanism in which a process is temporarily moved
out of main memory to secondary storage (disk), making that
memory available to other processes.
 At some later time, the system swaps the process back from
secondary storage into main memory.
 Although swapping usually hurts performance, it allows multiple
large processes to run in parallel, which is why swapping is also
known as a technique for memory compaction.
Swapping
 Memory allocations
 Contiguous Memory
 Contiguous memory allocation is a classical memory allocation
model that assigns a process consecutive memory blocks (that is,
memory blocks having consecutive addresses).
 Contiguous memory allocation is one of the oldest memory
allocation schemes.
 When a process needs to execute, it requests memory.
 The size of the process is compared with the amount of contiguous
main memory available to execute the process.
 If sufficient contiguous memory is found, the process is allocated
memory to start its execution. Otherwise, it is added to a queue of
waiting processes until sufficient free contiguous memory is
available.
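The allocation check described above can be sketched as a first-fit search over the free holes in main memory. This is a minimal illustration, not the only placement strategy; the hole sizes and function name are hypothetical.

```python
# Sketch: contiguous allocation via a first-fit search over free holes.
# Hole sizes and process sizes below are illustrative.

def first_fit(holes, process_size):
    """Return the index of the first hole that fits, or None (process waits)."""
    for i, hole in enumerate(holes):
        if hole >= process_size:
            return i
    return None

holes = [100, 500, 200, 300]   # sizes of free contiguous blocks (KB)
print(first_fit(holes, 212))   # -> 1 (the 500 KB hole is the first that fits)
print(first_fit(holes, 600))   # -> None (process joins the waiting queue)
```

Other strategies (best-fit, worst-fit) differ only in which hole they pick when several fit.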
 Memory allocations
 Non-Contiguous Memory Allocation.
 Non-contiguous memory allocation is a memory allocation
technique that allows the parts of a single process to be stored in a
non-contiguous fashion.
 Thus, different parts of the same process can be stored at different
places in the main memory.
Paging
 Paging is a fixed size partitioning scheme.
 In paging, secondary memory and main memory are divided into
equal fixed size partitions.
 The partitions of secondary memory are called pages.
 The partitions of main memory are called frames.
Paging
 Each process is divided into parts, where the size of each part is the
same as the page size.
 The size of the last part may be less than the page size.
 The pages of process are stored in the frames of main memory
depending upon their availability.
 Example-
 Consider a process divided into 4 pages P0, P1, P2 and P3.
 Depending upon the availability, these pages may be stored in the
main memory frames in a non-contiguous fashion as shown
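The page-to-frame mapping in this example can be sketched in Python. The page size and page table contents below are hypothetical; the point is how a logical address splits into a page number and an offset.

```python
# Sketch: translating a logical address under paging.
# Assumes a hypothetical page size of 1024 bytes and the page table below,
# where page_table[p] gives the frame holding page p (non-contiguous frames).

PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7, 3: 0}  # pages P0..P3 scattered across frames

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE    # which page the address falls in
    offset = logical_addr % PAGE_SIZE   # position within that page
    frame = page_table[page]            # look up the frame in the page table
    return frame * PAGE_SIZE + offset   # physical address

print(translate(2100))  # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220
```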
Paging
Segmentation
 Like Paging, Segmentation is another non-contiguous memory
allocation technique.
 In segmentation, the process is not divided blindly into fixed size pages.
 Rather, it is divided into modules for better visualization.
 Characteristics-
 Segmentation is a variable size partitioning scheme.
 In segmentation, secondary memory and main memory are divided
into partitions of unequal size.
 The size of each partition depends on the length of the
corresponding module.
 The partitions of secondary memory are called segments.
 Example-
 Consider a program divided into 5 segments as shown.
 Segment Table-
 The segment table stores information about each segment of the
process.
 It has two columns.
 The first column stores the size or length of the segment.
 The second column stores the base address (starting address) of
the segment in the main memory.
 The segment table is itself stored as a separate segment in the
main memory.
 The segment table base register (STBR) stores the base address of
the segment table.
 Segment Table-
 Here,
 Limit indicates the length or size of the segment.
 Base indicates the base address or starting address of the segment
in the main memory.
 In accordance with the above segment table, the segments are
stored in the main memory as shown.
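The limit/base lookup described above can be sketched as follows. The segment table values are hypothetical; the key step is the limit check, which traps any offset that falls outside the segment.

```python
# Sketch: logical-to-physical address translation under segmentation,
# using a hypothetical segment table of (limit, base) pairs.

segment_table = [
    (1500, 1500),  # segment 0: (limit, base)
    (500, 6300),   # segment 1
    (400, 4300),   # segment 2
]

def translate(segment, offset):
    limit, base = segment_table[segment]
    if offset >= limit:                 # offset beyond the segment -> trap
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset                # physical address

print(translate(1, 100))  # 6300 + 100 = 6400
```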
Demand Paging
 According to the concept of virtual memory, only a part of a process needs
to be present in the main memory for it to execute, which means that only
a few of its pages will be in the main memory at any time.
 However, deciding which pages need to be kept in the main memory and
which in the secondary memory is difficult, because we cannot say in
advance which page a process will require at a particular time.
 Therefore, to overcome this problem, a concept called demand paging is
introduced.
 It suggests keeping all pages in the secondary memory until they are
required.
 In other words, do not load any page into the main memory until it is
required.
 Whenever a page is referenced for the first time, it will be found in the
secondary memory and brought into the main memory.
Page Replacement
 In an operating system that uses paging for memory management,
a page replacement algorithm is needed to decide which page
should be replaced when a new page comes in.
 Page replacement algorithms:
 First In First Out (FIFO) -
This is the simplest page replacement algorithm. The operating
system keeps track of all pages in memory in a queue, with the
oldest page at the front of the queue.
 When a page needs to be replaced, the page at the front of the
queue is selected for removal.
 Least Recently Used (LRU) -
In this algorithm, the page that was least recently used is replaced.
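LRU can be sketched with an ordered dictionary that tracks recency; this is one possible implementation, run here on the same reference string as the FIFO example above.

```python
# Sketch: LRU page replacement. An OrderedDict tracks recency; the
# least recently used page sits at the front and is evicted first.

from collections import OrderedDict

def lru_faults(references, num_frames):
    frames = OrderedDict()   # insertion order doubles as recency order
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False) # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # -> 10 faults
```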
Thrashing
 Thrashing is an issue that arises when virtual memory is in use.
 It occurs when the computer's virtual memory subsystem is rapidly
exchanging pages with the hard disk, to the exclusion of most
application-level processing.
 As the main memory fills up, additional pages need to be swapped
in and out of virtual memory.
 The swapping causes a very high rate of hard disk access.
 Thrashing can continue for a long duration until the underlying issue
is addressed.
 Prolonged thrashing can potentially lead to failure of the computer's
hard drive.
 Thrashing is also known as disk thrashing.
Thrashing
 Thrashing happens when too many computer processes compete
for inadequate memory resources.
 Thrashing can occur due to several factors, with the most
prominent reason being insufficient RAM or memory leakage.
 In a computer, some applications have higher priorities than others,
and this can also contribute to thrashing when memory resources
are scarce.
 Thrashing slows down system performance because data must
constantly be transferred between the hard drive and physical
memory.
Thrashing
 One of the early signs of thrashing is when an application stops
responding while the disk drive light blinks on and off. The
operating system often warns users of low virtual memory when
thrashing is occurring.
 A temporary solution for thrashing is to eliminate one or more
running applications.
 One of the recommended ways to eliminate thrashing is to add
more RAM (main memory) to the computer.
 Another way of resolving the issue of thrashing is by adjusting the
size of the swap file.
Memory Mapped files
 Memory mapping refers to a process's ability to access files on disk
the same way it accesses dynamic memory.
 Accessing RAM is much faster than accessing disk via read and
write system calls.
 This technique saves user applications I/O overhead and buffering,
but it also has its own drawbacks, as we will see.
 How does a memory mapped file work?
 Behind the scenes, the operating system utilizes virtual memory
techniques to do the trick.
 The OS splits the memory mapped file into pages (similar to process
pages) and loads the requested pages into physical memory on
demand.
 If a process references an address (i.e. a location within the file)
whose page is not yet in memory, a page fault occurs and the
operating system brings the missing page into memory.
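This on-demand behavior can be seen with Python's standard `mmap` module: the file is accessed like a byte array, and the OS loads pages as they are touched. The file name and contents below are illustrative.

```python
# Sketch: memory-mapping a file with Python's standard mmap module.
# The mapped file is read and written through slicing, with no explicit
# read()/write() calls; the OS pages data in on demand.

import mmap

with open("example.bin", "wb") as f:
    f.write(b"hello memory mapping")   # create an illustrative file

with open("example.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:  # map the whole file
        print(mm[0:5])                    # prints b'hello' (random access)
        mm[0:5] = b"HELLO"                # the change propagates to the file
```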
Memory Mapped files
 When to use a memory mapped file?
 Here are a few scenarios where memory mapping is appealing.
 Randomly accessing a huge file once (or a couple of times).
 Loading a small file once then randomly accessing the file
frequently.
 Sharing a file or a portion of a file between multiple applications.
 When the file contains data of great importance to the application.
Memory Mapped files
 Advantages
 Memory mapping is an excellent technique with various benefits,
for example:
 Efficiency: when dealing with large files, there is no need to read
the entire file into memory first.
 Fast: accessing virtual memory is much faster than accessing disk.
 Sharing: facilitates data sharing and interprocess communication.
 Simplicity: dealing with memory as opposed to allocating space,
copying data and deallocating space.
