OS - Unit III
o The memory manager permits computers with a small amount of main memory to execute
programs larger than the available memory. It does this by moving information back and
forth between primary memory and secondary memory using the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process
from being corrupted by another process. If this is not ensured, then the system may
exhibit unpredictable behavior.
o Memory managers should enable sharing of memory space between processes. Thus, two
programs can reside at the same memory location although at different times.
The Single contiguous memory management scheme is the simplest memory management
scheme used in the earliest generation of computer systems. In this scheme, the main memory is
divided into two contiguous areas or partitions. The operating systems reside permanently in one
partition, generally at the lower memory, and the user process is loaded into the other partition.
B) Multiple Partitioning:
o Fixed Partitioning
o Dynamic Partitioning
I) Fixed Partitioning
The main memory is divided into several fixed-sized partitions in a fixed partition memory
management scheme or static partitioning. These partitions can be of the same size or different
sizes. Each partition can hold a single process. The number of partitions determines the degree of
multiprogramming, i.e., the maximum number of processes in memory. These partitions are
made at the time of system generation and remain fixed after that.
II) Dynamic Partitioning
Dynamic partitioning was designed to overcome the problems of the fixed partitioning scheme.
In a dynamic partitioning scheme, each process occupies only as much memory as it requires
when loaded for processing. Requested processes are allocated memory until the entire physical
memory is exhausted or the remaining space is insufficient to hold the requesting process. In this
scheme the partitions used are of variable size, and the number of partitions is not defined at the
system generation time.
ii) Non-Contiguous memory management schemes:
In a Non-Contiguous memory management scheme, the program is divided into different blocks
and loaded at different portions of the memory that need not necessarily be adjacent to one
another. This scheme can be classified depending upon the size of blocks and whether the blocks
reside in the main memory or not.
I) What is paging?
Paging is a technique that eliminates the requirements of contiguous allocation of main
memory. In this, the main memory is divided into fixed-size blocks of physical memory called
frames. The size of a frame should be kept the same as that of a page to maximize the main
memory and avoid external fragmentation.
Advantages of paging:
o Pages reduce external fragmentation.
o Simple to implement.
o Memory efficient.
What is Segmentation?
Segmentation is a technique that eliminates the requirements of contiguous allocation of main
memory. In this, the main memory is divided into variable-size blocks of physical memory
called segments. It is based on the way the programmer follows to structure their programs.
With segmented memory allocation, each job is divided into several segments of different sizes,
one for each module. Functions, subroutines, stack, array, etc., are examples of such modules.
3.2.Address binding:
Mapping of instructions and data from one address to another address in memory.
Three different stages of binding:
1. Compile time: Must generate absolute code if memory location is known in prior.
2. Load time: Must generate relocatable code if memory location is not known at
compile time
3. Execution time: Need hardware support for address maps (e.g., base
and limit registers).
3.4 Logical vs. Physical Address Space
Logical address – generated by the CPU; also referred to as "virtual address"
Logical and physical addresses are the same in compile-time and load-time
address-binding schemes
Logical (virtual) and physical addresses differ in the execution-time address-
binding scheme
Translating Logical Address into Physical Address-
The following steps are followed to translate a logical address into a physical address-
Step-01:
The translation scheme uses two registers that are under the control of the operating system.
During context switching, the values corresponding to the process being loaded are set in
the registers.
Relocation Register stores the base address or starting address of the process in the main
memory.
Limit Register stores the size or length of the process.
Step-02:
CPU generates a logical address containing the address of the instruction that it wants to
read.
Step-03:
The logical address generated by the CPU is compared with the limit of the process.
Now, two cases are possible-
Case-01: Generated Address >= Limit
In this case, the generated address is invalid; a trap is generated and the process is
terminated for trying to access memory beyond its limit.
Case-02: Generated Address < Limit
In this case, the generated address is valid; the physical address is obtained by adding the
contents of the relocation register to the logical address.
The following diagram illustrates the above steps of translating a logical address into a
physical address-
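The bounds check and relocation described above can be sketched as follows; the register values (base 14000, limit 3000) are illustrative assumptions, not values from the text:

```python
def translate(logical_address, relocation_register, limit_register):
    """Translate a CPU-generated logical address to a physical address.

    The logical address is first compared against the limit register; if it
    is within bounds, the relocation (base) register is added to it.
    """
    if logical_address >= limit_register:
        # Case-01: address is outside the process -> trap to the OS
        raise MemoryError("trap: logical address exceeds process limit")
    # Case-02: valid address -> add the base (relocation register)
    return relocation_register + logical_address

# Illustrative values: process loaded at address 14000, size 3000
print(translate(346, 14000, 3000))   # 14346
```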
Advantages-
The advantages of dynamic partitioning are-
It does not suffer from internal fragmentation.
Degree of multiprogramming is dynamic.
There is no limitation on the size of processes.
Disadvantages-
The disadvantages of dynamic partitioning are-
It suffers from external fragmentation.
Allocation and deallocation of memory is complex.
3.3.2.Dynamic Loading
Through this, the routine is not loaded until it is called.
Better memory-space utilization; unused routine is never loaded
Useful when large amounts of code are needed to handle infrequently
occurring cases
No special support from the operating system is required; it is implemented
through program design
3.3.3.Dynamic Linking
Linking postponed until execution time & is particularly useful for libraries
Small piece of code called stub, used to locate the appropriate memory- resident
library routine or function.
Stub replaces itself with the address of the routine, and executes the routine
The operating system is needed to check if the routine is in the process's memory address space
Shared libraries: Programs linked before the new library was installed will
continue using the older library
Overlays: Overlaying keeps in memory only those instructions and data that are needed at
any given time, so that a program larger than the memory allocated to it can still run.
Swapping
A process can be swapped temporarily out of memory to a backing store (SWAP
OUT)and then brought back into memory for continued execution (SWAP IN).
DEPARTMENT OF COMPUTER APPLICATIONS – RASC - N.MANIMOZHI
22UBCAC 52 - OPERATING SYSTEM UNIT III
Backing store – fast disk large enough to accommodate copies of all memory
images for all users & it must provide direct access to these memory images
Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and
executed
Transfer time :
Major part of swap time is transfer time
Total transfer time is directly proportional to the amount of memory swapped.
Example: Let us assume the user process is of size 1MB & the backing store is
a standard hard disk with a transfer rate of 5MBPS.
Transfer time = 1000KB/5000KB per second
= 1/5 sec = 200ms
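Restating the transfer-time calculation above as a small computation (sizes in KB):

```python
process_size_kb = 1000      # 1 MB user process
transfer_rate_kbps = 5000   # 5 MB per second disk transfer rate

transfer_time_s = process_size_kb / transfer_rate_kbps  # = 1/5 second
print(transfer_time_s * 1000)   # 200.0 (milliseconds)
```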
Multiple partition allocation: In this method, a process is selected from the input
queue and loaded into the free partition. When the process terminates, the partition
becomes available for other processes.
Variable partition allocation: In this method, the operating system maintains a table that
indicates which parts of memory are available and which are occupied by processes.
Initially, all memory is available for user processes and is considered one large block of
available memory. This available memory is known as a “Hole”. When the process
arrives and needs memory, we search for a hole that is large enough to store this
process. If a large enough hole is found, we allocate only as much memory as the process
needs, keeping the rest available to satisfy future requests. This raises the dynamic
storage allocation problem, which concerns how to satisfy a request of size n from a list
of free holes. There are some solutions to this problem:
1.First Fit
In First Fit, the first available hole that is large enough to fulfil the requirements of the
process is allocated.
First Fit
Here, in this diagram, a 40 KB memory block is the first available free hole that can store
process A (size of 25 KB), because the first two blocks did not have sufficient memory space.
2.Best Fit
In Best Fit, we allocate the smallest hole that is big enough for the process's requirements. For
this, we must search the entire list, unless the list is ordered by size.
Best Fit
Here in this example, first, we traverse the complete list and find the last hole 25KB is the best
suitable hole for Process A(size 25KB). In this method, memory utilization is maximum as
compared to other memory allocation techniques.
3.Worst Fit
In Worst Fit, we allocate the largest available hole to the process. This method produces the
largest leftover hole.
Worst Fit
Here in this example, Process A (Size 25 KB) is allocated to the largest available memory
block which is 60KB. Inefficient memory utilization is a major issue in the worst fit.
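The three placement strategies can be sketched as follows; the hole sizes are a hypothetical free list chosen so that, as in the diagrams above, each strategy picks a different hole for a 25 KB request:

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole large enough (scans the whole list)."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole (produces the largest leftover)."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

# Hypothetical free-hole list (KB); Process A needs 25 KB
holes = [10, 20, 40, 60, 25]
print(holes[first_fit(holes, 25)])  # 40 -> first sufficiently large hole
print(holes[best_fit(holes, 25)])   # 25 -> tightest fit
print(holes[worst_fit(holes, 25)])  # 60 -> largest hole
```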
3.5.Memory Partitioning
Variable-partition method:
o Divide memory into variable size partitions, depending upon the size of the
incoming process.
o When a process terminates, the partition becomes available for another process.
o As processes complete and leave they create holes in the main memory.
Hole – block of available memory; holes of various size are scattered throughout
Static Partitioning-
Example-
Under a fixed-size partitioning scheme, a memory of size 10 KB may be divided into fixed-size
partitions as shown-
Dynamic Partitioning-
Dynamic partitioning is a variable size partitioning scheme.
It performs the allocation dynamically.
When a process arrives, a partition of size equal to the size of process is created.
3.6.1. First Fit Algorithm
This algorithm starts scanning the partitions serially from the beginning.
When an empty partition that is big enough to store the process is found, it is allocated to
the process.
Obviously, the partition size has to be greater than or at least equal to the process size.
3.6.2. Best Fit Algorithm
This algorithm scans all the empty partitions and allocates the smallest partition that is big
enough to store the process.
3.7 Paging
Paging is a memory management scheme that eliminates the need for contiguous
allocation of physical memory.
This scheme permits the physical address space of a process to be non –
contiguous.
Paging is a fixed size partitioning scheme.
In paging, secondary memory and main memory are divided into equal fixed size
partitions.
The partitions of secondary memory are called pages.
The partitions of main memory are called frames.
Each process is divided into parts where size of each part is same as page size.
The size of the last part may be less than the page size.
The pages of process are stored in the frames of main memory depending upon their
availability.
Frame number(f): Number of bits required to represent the frame of Physical Address
Space or Frame number.
Frame offset(d): Number of bits required to represent particular word in a frame or
frame size of Physical Address Space or word number of a frame or frame offset.
The hardware implementation of page table can be done by using dedicated
registers.
But the use of registers for the page table is satisfactory only if the page table is small.
If the page table contains a large number of entries, then we can use the TLB (Translation
Look-aside Buffer), a special, small, fast look-up hardware cache.
1. The TLB is associative, high speed memory.
2. Each entry in TLB consists of two parts: a tag and a value.
3. When this memory is used, then an item is compared with all tags
simultaneously. If the item is found, then corresponding value is returned.
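The tag/value lookup described above can be modeled with a toy sketch; a real TLB compares the tag against all entries in parallel hardware, which a Python dict merely stands in for, and the capacity and eviction policy here are illustrative assumptions:

```python
class TLB:
    """Toy model of a Translation Look-aside Buffer."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}              # tag (page number) -> value (frame number)

    def lookup(self, page):
        return self.entries.get(page)  # None models a TLB miss

    def insert(self, page, frame):
        if page not in self.entries and len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict an old entry
        self.entries[page] = frame

tlb = TLB()
tlb.insert(2, 7)
print(tlb.lookup(2))   # 7    (TLB hit: frame number returned)
print(tlb.lookup(5))   # None (TLB miss: fall back to the page table)
```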
Page Table-
Number of entries in a page table = Number of pages into which the process is divided.
Each process has its own independent page table.
Page Table Base Register (PTBR) contains the base address of the page table.
The base address of the page table is added with the page number referenced by the
CPU.
It gives the entry of the page table containing the frame number where the
referenced page is stored.
Page Table Entry format:
A page table entry contains several pieces of information about the page.
The information contained in the page table entry varies from operating system to
operating system.
The most important information in a page table entry is frame number, Present / Absent
Bit, Protection Bit, Reference Bit, Dirty Bit (Modified bit)
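The page-table translation described above, sketched with an illustrative 1 KB page size and a hypothetical page table:

```python
PAGE_SIZE = 1024   # illustrative 1 KB pages

# Hypothetical page table: index = page number, value = frame number
page_table = [5, 2, 7, 0]

def physical_address(logical_address):
    """Split the logical address into (page number, offset) and map the
    page number to a frame number through the page table."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]            # page-table lookup
    return frame * PAGE_SIZE + offset

print(physical_address(2 * 1024 + 100))   # page 2 -> frame 7 -> 7*1024 + 100 = 7268
```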
3.8 Segmentation:
Segmentation is a memory management technique in which each job is divided into
several segments of different sizes, one for each module that contains pieces that
perform related functions.
Each segment is actually a different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded into non-
contiguous memory, though every segment is loaded into a contiguous block of
available memory.
Segmentation memory management works very similarly to paging, but here segments
are of variable length whereas in paging pages are of fixed size.
The operating system maintains a segment map table for every process and a list of free
memory blocks along with segment numbers, their size and corresponding memory
locations in main memory.
For each segment, the table stores the starting address of the segment and the length of
the segment.
A reference to a memory location includes a value that identifies a segment and an
offset.
Segment Table – It maps two-dimensional Logical address into one- dimensional
Physical address. It’s each table entry has:
1. Base Address: It contains the starting physical address where the segments reside
in memory.
2. Limit: It specifies the length of the segment.
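The segment-table mapping above can be sketched as follows; the base/limit values in the table are illustrative assumptions:

```python
# Hypothetical segment table: segment number -> (base address, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def seg_translate(segment, offset):
    """Map a two-dimensional (segment, offset) logical address to a
    one-dimensional physical address, checking the offset against the
    segment's limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        # Offset beyond the segment length -> trap to the OS
        raise MemoryError("trap: offset exceeds segment limit")
    return base + offset

print(seg_translate(2, 53))    # 4300 + 53 = 4353
```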
The demand paging system depends on the page table implementation because the page table
helps map logical memory to physical memory. A valid–invalid bit is kept in each page table
entry to indicate whether the page is in memory (valid) or not (invalid). All valid pages exist
in primary memory, and invalid pages exist in secondary memory.
What is Thrashing?
Thrashing is the term used to describe a state in which excessive paging activity takes place in
computer systems, especially in operating systems that use virtual memory, severely impairing
system performance. Thrashing occurs when a system's high memory demand and low
physical memory capacity cause it to spend a large amount of time swapping pages between
main memory (RAM) and secondary storage, which is typically a hard disk.
It is caused due to insufficient physical memory, overloading and poor memory management.
The operating system may use a variety of techniques to lessen thrashing, including lowering
the number of running processes, adjusting paging parameters, and improving memory
allocation algorithms. Increasing the system’s physical memory (RAM) capacity can also
lessen thrashing by lowering the frequency of page swaps between RAM and the disk.
Operating systems that use pure demand paging as a memory management strategy do so
without preloading any pages into physical memory prior to the commencement of a task.
Demand paging does not load a process's whole address space into memory at once; it
brings just the parts of the process that are actively being used into memory from disk as
needed.
It is useful for executing huge programs that might not fit totally in memory or for computers
with limited physical memory. If the program accesses a lot of pages that are not in memory
right now, it could also result in a rise in page faults and possible performance overhead.
Operating systems frequently use caching techniques and improve page replacement
algorithms to lessen the negative effects of page faults on system performance as a whole.
The operating system's demand paging mechanism follows a few steps in its operation.
Program Execution: Upon launching a program, the operating system allocates a certain
amount of memory to the program and establishes a process for it.
Creating Page Tables: To keep track of which program pages are currently in memory
and which are on disk, the operating system makes page tables for each process.
Handling Page Fault: When a program tries to access a page that isn’t in memory at the
moment, a page fault happens. In order to determine whether the necessary page is on disk,
the operating system pauses the application and consults the page tables.
Page Fetch: The operating system loads the necessary page into memory by retrieving it
from the disk if it is there.
The page’s new location in memory is then reflected in the page table.
Resuming The Program: The operating system picks up where it left off when the
necessary pages are loaded into memory.
Page Replacement: If there is not enough free memory to hold all the pages a program
needs, the operating system may need to replace one or more pages currently in memory
with the requested pages from the disk. The page replacement algorithm used by the
operating system determines which pages are selected for replacement.
Page Cleanup: When a process terminates, the operating system frees the memory
allocated to the process and cleans up the corresponding entries in the page tables.
How Demand Paging in OS Affects System Performance?
Demand paging can improve system performance by reducing the memory needed for
programs and allowing multiple programs to run simultaneously. However, if not implemented
properly, it can cause performance issues. When a program needs a part that isn’t in the main
memory, the operating system must fetch it from the hard disk, which takes time and pauses
the program. This can cause delays, and if the system runs out of memory, it will need to
frequently swap pages in and out, increasing delays and reducing performance.
FIFO (First-In-First-Out): Replaces the oldest page in memory with a new one. It’s
simple but can cause issues if pages are frequently swapped in and out, leading to
thrashing.
LRU (Least Recently Used): Replaces the page that hasn’t been used for the longest time.
It reduces thrashing more effectively than FIFO but is more complex to implement.
LFU (Least Frequently Used): Replaces the page used the least number of times. It helps
reduce thrashing but requires extra tracking of how often each page is used.
MRU (Most Recently Used): Replaces the page that was most recently used. It’s simpler
than LRU but not as effective in reducing thrashing.
Random: Randomly selects a page to replace. It’s easy to implement but unpredictable in
performance.
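Minimal fault-counting sketches of the first two policies, FIFO and LRU, run on the reference string 1, 3, 0, 3, 5, 6 with 3 frames:

```python
from collections import OrderedDict

def fifo_faults(refs, capacity):
    """Count page faults under FIFO replacement."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == capacity:
            frames.pop(0)              # evict the oldest resident page
        frames.append(page)
    return faults

def lru_faults(refs, capacity):
    """Count page faults under LRU replacement."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)   # mark as most recently used
            continue
        faults += 1
        if len(frames) == capacity:
            frames.popitem(last=False) # evict the least recently used
        frames[page] = True
    return faults

refs = [1, 3, 0, 3, 5, 6]
print(fifo_faults(refs, 3))            # 5
print(lru_faults(refs, 3))             # 5
```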
Page Fault:
A page fault occurs when a program attempts to access a block of memory that is not
stored in the physical memory, or RAM.
The fault notifies the operating system that it must locate the data in virtual memory,
then transfer it from the storage device, such as an HDD or SSD, to the system RAM.
Example-1 Consider page reference string 1, 3, 0, 3, 5, 6 with 3 page frames. Find number
of page faults.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots
—> 3 Page Faults.
3 is already there so —> 0 Page Fault.
When 5 comes, it takes the place of 1 because 1 is the oldest page (FIFO) —> 1 Page Fault.
When 6 comes, it takes the place of 3 —> 1 Page Fault.
Total = 5 Page Faults.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots
—> 4 Page Faults.
0 is already there so —> 0 Page fault.
when 3 came it will take the place of 7 because it is not used for the longest duration of time
in the future.—>1 Page fault.
0 is already there so —> 0 Page fault.
4 will take the place of 1 —> 1 Page Fault.
Now for the further page reference string —> 0 Page fault because they are already available
in the memory.
Optimal page replacement is perfect, but not possible in practice as the operating system
cannot know future requests. The use of Optimal Page replacement is to set up a benchmark
so that other replacement algorithms can be analyzed against it.
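A fault-counting sketch of the optimal policy, usable as the benchmark described above; it is tested here on the Example-1 reference string, since the full string for the worked example is not given in the text:

```python
def optimal_faults(refs, capacity):
    """Count page faults under the (theoretical) optimal policy: evict
    the resident page whose next use lies farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < capacity:
            frames.append(page)
            continue
        def next_use(p):
            # Position of the page's next reference; never used -> infinity
            return refs.index(p, i + 1) if p in refs[i + 1:] else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([1, 3, 0, 3, 5, 6], 3))   # 5
```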
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots
—> 4 Page faults.
0 is already there so —> 0 Page fault.
when 3 came it will take the place of 7 because it is least recently used —>1 Page fault