Memory Management

Logical address: The address of a memory space generated by the CPU is called the logical address.

Physical address: The actual location of the memory space where the data
elements are stored is commonly known as the physical address.

Memory Management Unit: The run-time mapping from the virtual to the
physical addresses is done by a hardware device called the Memory Management
Unit.
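
As a minimal sketch of one simple such scheme (a relocation register plus a limit register; the register values below are made-up examples), the MMU adds the relocation register to every logical address on its way to memory:

```python
# Simple MMU scheme: relocation (base) register + limit register.
# The register values are assumptions chosen only for illustration.
RELOCATION_REGISTER = 14000   # where this process begins in physical memory
LIMIT_REGISTER = 3000         # size of the process's logical address space

def translate(logical_address):
    # Protection check: every logical address must fall below the limit.
    if logical_address >= LIMIT_REGISTER:
        raise MemoryError("trap: addressing error beyond the limit register")
    # Relocation: the MMU adds the base to produce the physical address.
    return logical_address + RELOCATION_REGISTER

print(translate(346))   # logical address 346 maps to physical address 14346
```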

Dynamic Loading
So far, it has been necessary for the entire program and the data of a process to be
in the physical memory for the process to execute. The size of a process has thus
been limited to the size of the physical memory. To obtain better memory space
utilization, dynamic loading is used. With dynamic loading, a routine is not loaded
until it is called. All routines are kept on the disk in a relocatable load format. The
main program is loaded into the memory and is executed. When a routine needs to
call another routine, the calling routine first checks to see whether the other routine
has been loaded. If it has not, the relocatable link loader is called to load the desired
routine into the memory and to update the program’s address table to reflect this
change. Then control is passed to the newly loaded routine.

Advantages of dynamic loading are:

• An unused routine is never loaded.

• It is useful when large amounts of code are needed to handle infrequently occurring cases.
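
A rough, application-level analogy to the scheme described above (not the OS loader itself) can be sketched with Python's importlib: a routine's module is loaded only on the first call that needs it, and a small table records what has already been loaded.

```python
import importlib

_loaded = {}   # our stand-in for the program's address table

def call_routine(module_name, routine_name, *args):
    # Check whether the module containing the routine has been loaded.
    if module_name not in _loaded:
        # Not loaded yet: load it now and record it in the table.
        _loaded[module_name] = importlib.import_module(module_name)
    # Pass control to the (possibly newly loaded) routine.
    return getattr(_loaded[module_name], routine_name)(*args)

print(call_routine("statistics", "mean", [1, 2, 3, 4]))    # loaded on first use
print(call_routine("statistics", "median", [1, 2, 3, 4]))  # already loaded
```
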
Swapping
A process must be in memory to be executed. A process, however, can be swapped
temporarily out of memory to a backing store in order to load another process.
The old process is then brought back into memory to continue its execution.

E.g., in the Round-Robin CPU scheduling algorithm, when the time quantum expires,
the memory manager will swap out the process whose time quantum has expired and
swap in another process.

Contiguous Memory Allocation


The main memory accommodates both the operating system and the various user
processes, so we need to allocate main memory in the most efficient way possible.
In contiguous memory allocation, each process is contained in a single
contiguous section of memory.

Some of the memory allocation algorithms are-

First fit: Allocates the first hole that is big enough.

Best fit: Allocates the smallest hole that is big enough.

Worst fit: Allocates the largest hole available.
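
As a sketch of how the three policies differ (the free list of holes below, given as (start, size) pairs, is invented for illustration):

```python
holes = [(0, 100), (300, 500), (1000, 200), (1500, 900)]   # made-up free list

def first_fit(holes, request):
    # Scan from the beginning and take the first hole that is big enough.
    for start, size in holes:
        if size >= request:
            return start
    return None

def best_fit(holes, request):
    # Take the smallest hole that is still big enough.
    fits = [(size, start) for start, size in holes if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    # Take the largest hole available, if it can hold the request at all.
    size, start = max((size, start) for start, size in holes)
    return start if size >= request else None

print(first_fit(holes, 150))   # 300  (first hole of size >= 150)
print(best_fit(holes, 150))    # 1000 (the 200-byte hole is the tightest fit)
print(worst_fit(holes, 150))   # 1500 (the 900-byte hole is the largest)
```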

Fragmentation
When a process is loaded into the memory, the process may not occupy the entire
block of memory. In such a case, a part of the memory block is left unoccupied. This
is known as fragmentation. Fragmentation is of two types-

External fragmentation: External fragmentation exists when there is enough total
memory space to satisfy a request but the available spaces are not contiguous; the
storage is divided into a large number of small holes.

Internal fragmentation: Internal fragmentation exists when a process loaded into
a memory block does not completely occupy it. In such a case, some amount of
memory space is left unoccupied.

50-percent rule: Statistical analysis of first fit reveals that, even with some
optimization, given N allocated blocks, another 0.5 N blocks will be lost to
fragmentation. This property is known as the 50-percent rule.

One solution to the problem of external fragmentation is compaction.

Compaction: Compaction is the process of shuffling the memory contents so as to
place all free memory together in one large block.
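
A toy sketch of the idea, with an invented block layout, slides the allocated blocks toward address 0 so the free space coalesces into a single hole:

```python
MEMORY_SIZE = 1000
allocated = [(100, 50), (300, 120), (700, 30)]   # (start, size) of used blocks

def compact(blocks):
    new_layout, next_free = [], 0
    for start, size in sorted(blocks):
        # Shuffle each block down to the lowest currently free address.
        new_layout.append((next_free, size))
        next_free += size
    # Everything above next_free is now one large contiguous hole.
    return new_layout, (next_free, MEMORY_SIZE - next_free)

print(compact(allocated))
# ([(0, 50), (50, 120), (170, 30)], (200, 800))  -> one 800-byte free hole
```
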
Paging
Paging is a memory management scheme that permits the physical address space of
a process to be non-contiguous. Paging avoids external fragmentation and the need
for compaction. It also solves the considerable problem of fitting memory chunks of
varying sizes onto the backing store.

The basic method of implementing paging involves breaking physical memory
into fixed-sized blocks called frames and breaking logical memory into blocks of the
same size called pages. When a process is to be executed, its pages are loaded into
any available memory frames from their source. The backing store is divided into
fixed-sized blocks that are of the same size as the memory frames. Every address
generated by the CPU is divided into two parts: a page number (p) and a page
offset (d). The page number is used as an index into a page table. The page table
contains the base address of each page in physical memory. This base address is
combined with the page offset to define the physical memory address that is sent to
the memory unit.

(Note: The selection of a power of 2 as a page size makes the translation of a logical
address into a page number and a page offset particularly easy. If the size of the
logical address space is 2^m, and the page size is 2^n addressing units, then the
high-order m-n bits of a logical address designate the page number, and the n
low-order bits designate the page offset.)
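
A small sketch of this split, assuming for illustration that m = 16 and n = 10 (1 KB pages) and using a made-up page table:

```python
PAGE_BITS = 10                            # n: page size = 2**10 = 1024 bytes
page_table = {0: 5, 1: 6, 2: 1, 3: 2}     # page number -> frame number (made up)

def translate(logical_address):
    page_number = logical_address >> PAGE_BITS             # high-order m-n bits
    offset = logical_address & ((1 << PAGE_BITS) - 1)      # low-order n bits
    frame_number = page_table[page_number]                 # page-table lookup
    # Combine the frame's base address with the page offset.
    return (frame_number << PAGE_BITS) | offset

print(hex(translate(0x0C2A)))   # page 3, offset 0x2A -> frame 2 -> 0x82a
```
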
Translation Look-aside Buffer (TLB)
The TLB is an associative, high-speed memory. Each entry in the TLB consists of two
parts: a key and a value. When the associative memory is presented with an item,
the item is compared with all the keys simultaneously. If the item is found, the
corresponding value field is returned. The search is fast; the hardware is expensive.

The TLB contains only a few of the page-table entries. When a logical address is
generated by the CPU, its page number is presented to the TLB. If the page number
is found, its frame number is immediately available and is used to access memory. If
the page number is not in the TLB (known as a TLB miss), a memory reference to
the page table must be made. When the frame number is obtained, we can use it to
access memory. In addition, we add the page number and the frame number to the
TLB, so that they will be found quickly on the next reference. If the TLB is already
full of entries, the operating system must select one for replacement.
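
A sketch of that lookup path, with a made-up page table, a four-entry TLB, and (as a simplifying assumption) FIFO replacement of TLB entries:

```python
from collections import OrderedDict

TLB_SIZE = 4
tlb = OrderedDict()                            # page number -> frame number
page_table = {0: 9, 1: 3, 2: 7, 3: 0, 4: 5}    # made-up page table

def lookup(page_number):
    if page_number in tlb:
        return tlb[page_number], "TLB hit"     # frame number immediately available
    frame = page_table[page_number]            # TLB miss: reference the page table
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)                # TLB full: replace the oldest entry
    tlb[page_number] = frame                   # cache it for the next reference
    return frame, "TLB miss"

for p in [1, 2, 1, 4]:
    print(p, lookup(p))    # the second reference to page 1 hits in the TLB
```
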
Protection of data associated with each frame
One additional bit, called the valid-invalid bit, is generally attached to each entry in
the page table. When this bit is set to valid, the associated page is in the process’s
logical address space and is thus a legal page. When the bit is set to invalid, the
page is not in the process’s logical address space.

Segmentation
Segmentation is a memory-management scheme that supports the user view of
memory. A logical address space is a collection of segments. Each segment has a
name and a length. A logical address is specified as a pair:

<segment-number, offset>

Each segment is loaded into physical memory and a segment table is maintained,
which keeps track of the mapping from logical addresses to physical addresses.
Segments are of different sizes, so the length of each segment is also kept in the
segment table.
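
A sketch of segment-table translation, with invented base and limit values:

```python
segment_table = {                  # segment number -> (base, limit), made up
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(segment_number, offset):
    base, limit = segment_table[segment_number]
    # The offset must lie within the segment's length.
    if offset >= limit:
        raise MemoryError("trap: offset beyond the end of the segment")
    return base + offset

print(translate(2, 53))    # -> 4353
print(translate(0, 999))   # -> 2399
```
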
Virtual Memory
It is desirable to be able to execute a process whose logical address space is larger
than the available physical address space. Virtual memory is a technique that
enables us to map a large logical address space onto a smaller physical memory.
Virtual memory allows us to run extremely large processes and to raise the degree
of multiprogramming, increasing CPU utilization. Further, it frees application
programmers from worrying about memory availability. In addition, with virtual
memory, several processes can share system libraries and memory.

Demand Paging
An alternative strategy is to load a page only when it is needed during execution,
rather than loading the entire program into memory regardless of whether every
page is used or not. This technique is known as demand paging and is commonly
used in virtual memory systems. With demand-paged virtual memory, pages are
loaded only when they are demanded during program execution; pages that are
never accessed are thus never loaded into physical memory.

Page Replacement
If the total memory requirement exceeds the capacity of the physical memory, then
it may be necessary to replace pages from memory to free frames for new pages.
This technique is called page replacement.

When a page fault occurs, page replacement takes place. The steps for page
replacement are:

1. Find the location of the desired page on the disk.
2. Find a free frame:
   a) If there is a free frame, use it.
   b) If there is no free frame, use a page replacement algorithm to select a victim frame.
   c) Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Restart the user process.
Page Replacement algorithms
FIFO Page Replacement
The simplest page replacement algorithm is a First-In-First-Out (FIFO) algorithm. A
FIFO replacement algorithm associates with each page the time when it was brought
into the memory. When a page must be replaced, the oldest page is chosen. To
implement this technique, a FIFO queue is created to hold all the pages in memory.
We replace the page at the head of the queue.
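
A sketch of FIFO replacement as a page-fault counter (the reference string used below is a made-up example, chosen because it also illustrates Belady's Anomaly, described later):

```python
from collections import deque

def fifo_faults(reference_string, frame_count):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                          # already in memory: no fault
        faults += 1
        if len(frames) >= frame_count:
            frames.discard(queue.popleft())   # replace the page at the head
        frames.add(page)
        queue.append(page)                    # newest page joins the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults (see Belady's Anomaly below)
```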

LRU Page Replacement


LRU replacement associates with each page the time of that page’s last use. When a
page must be replaced, LRU chooses the page that has not been used for the
longest period of time.
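
A sketch of LRU as a page-fault counter, stamping each page with the time of its last use (same made-up reference string as above):

```python
def lru_faults(reference_string, frame_count):
    last_used, faults = {}, 0                 # page -> time of last reference
    for time, page in enumerate(reference_string):
        if page not in last_used:
            faults += 1
            if len(last_used) >= frame_count:
                # Victim: the page whose last use lies furthest in the past.
                victim = min(last_used, key=last_used.get)
                del last_used[victim]
        last_used[page] = time                # record this page's latest use
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10 faults
print(lru_faults(refs, 4))   # 8 faults
```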

Optimal Page Replacement


Optimal page replacement algorithm replaces the page that will not be used for the
longest period of time.
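
A sketch of optimal replacement (it needs knowledge of the future reference string, so it serves mainly as a point of comparison); same made-up reference string as above:

```python
def optimal_faults(reference_string, frame_count):
    frames, faults = set(), 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) >= frame_count:
            future = reference_string[i + 1:]
            # Victim: the resident page whose next use is furthest away
            # (a page never used again is the best possible victim).
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else float("inf"))
            frames.discard(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))   # 7 faults
print(optimal_faults(refs, 4))   # 6 faults
```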

Belady’s Anomaly: Normally, the number of page faults decreases as the number of
frames is increased. For some page-replacement algorithms, however, the number of
page faults may increase as the number of allocated frames increases. This is called
Belady’s Anomaly.
