Unit 3
Logical and physical addresses are the same in compile-time and load-time address-binding schemes.
Logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
In this scheme, the relocation register's value is added to every logical address generated by a user process.
The user program deals with logical addresses; it never sees the real physical addresses.
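A minimal sketch, in C, of this execution-time translation: the memory-management unit adds the relocation (base) register to every logical address generated by the process. The register value and sample address are assumptions for illustration, not taken from the notes.

/* Execution-time address binding with a relocation register (illustrative). */
#include <stdio.h>

#define RELOCATION_REGISTER 14000UL   /* assumed base chosen by the OS */

/* The MMU adds the relocation register to every logical address. */
static unsigned long translate(unsigned long logical)
{
    return RELOCATION_REGISTER + logical;
}

int main(void)
{
    unsigned long logical = 346;   /* address as seen by the process */
    printf("logical %lu -> physical %lu\n", logical, translate(logical));
    return 0;
}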
Dynamic Loading
o A routine is not loaded until it is called, giving better memory-space utilization.
o Useful when large amounts of code are needed to handle infrequently occurring cases.
o No special support from the operating system is required; it is implemented through program design.
Dynamic Linking
Linking postponed until execution time & is particularly useful for libraries
A small piece of code, called a stub, is used to locate the appropriate memory-resident library routine.
The stub replaces itself with the address of the routine and executes the routine.
The operating system checks whether the routine is in the process's memory address space.
Shared libraries: Programs linked before the new library was installed will continue using the older
library.
Swapping
A process can be swapped temporarily out of memory to a backing store (SWAP OUT)and then brought
back into memory for continued execution (SWAP IN).
Backing store – fast disk large enough to accommodate copies of all memory images for all
users & it must provide direct access to these memory images
Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-priority process
is swapped out so higher-priority process can be loaded and executed
Transfer time: Major part of swap time is transfer time. Total transfer time is directly proportional to the
amount of memory swapped.
Example: Let us assume the user process is of size 1 MB and the backing store is a standard hard disk with
a transfer rate of 5 MB per second. The transfer time is then 1 MB / (5 MB/s) = 1/5 second = 200 ms per direction.
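As a quick check on that arithmetic, here is a small sketch that computes the one-way transfer time and the total for a swap out followed by a swap in; the 1 MB size and 5 MB/s rate are the figures from the example above, and disk latency is ignored.

/* Swap transfer-time estimate for the example above (illustrative). */
#include <stdio.h>

int main(void)
{
    double process_size_mb   = 1.0;   /* size of the swapped memory image (MB) */
    double transfer_rate_mbs = 5.0;   /* backing-store transfer rate (MB/s)    */

    double one_way_ms = process_size_mb / transfer_rate_mbs * 1000.0;
    /* A complete swap is one swap out plus one swap in. */
    printf("one-way transfer: %.0f ms, swap out + swap in: %.0f ms\n",
           one_way_ms, 2.0 * one_way_ms);
    return 0;
}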
Contiguous Allocation
Each process is contained in a single contiguous section of memory.
There are two methods namely:
Fixed-Partition Method
Variable-Partition Method
Fixed-Partition Method:
o Divide memory into fixed-size partitions, where each partition holds exactly one process.
o The drawback is that memory space unused within a partition is wasted (e.g., when
process size < partition size).
Variable-partition method:
o Divide memory into variable size partitions, depending upon the size of the incoming
process.
o When a process terminates, the partition becomes available for another process.
o As processes complete and leave they create holes in the main memory.
o Hole – block of available memory; holes of various size are scattered throughout memory.
Solution: This is an instance of the dynamic storage-allocation problem – how to satisfy a request of size n from a list of free holes. Common strategies are:
o First fit: allocate the first hole that is big enough.
o Best fit: allocate the smallest hole that is big enough (the entire list must be searched, unless it is ordered by size).
o Worst fit: allocate the largest hole (the entire list must also be searched).
NOTE: First fit and best fit are better than worst fit in terms of speed and storage utilization. A first-fit sketch follows.
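Below is a minimal, illustrative first-fit sketch over a list of free holes; the hole sizes, the struct layout, and the request value are all made up for the example.

/* First-fit allocation over a list of free holes (illustrative). */
#include <stdio.h>

#define NHOLES 4

struct hole { int base; int size; };

/* Return the index of the first hole large enough, or -1 if none fits. */
static int first_fit(struct hole holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request)
            return i;
    return -1;
}

int main(void)
{
    struct hole holes[NHOLES] = { {100, 50}, {200, 300}, {600, 120}, {900, 80} };
    int request = 110;
    int i = first_fit(holes, NHOLES, request);
    if (i >= 0)
        printf("request %d placed at base %d (hole size %d)\n",
               request, holes[i].base, holes[i].size);
    else
        printf("no hole large enough: request fails\n");
    return 0;
}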
Fragmentation:
o External Fragmentation – This takes place when enough total memory space exists to
satisfy a request, but it is not contiguous, i.e., storage is fragmented into a large number of small holes scattered
throughout the main memory.
o Internal Fragmentation – Allocated memory may be slightly larger than requested memory.
Example: hole = 184 bytes
Process size = 182 bytes.
We are left with a hole of 2 bytes.
o Solutions
1. Coalescing: Merge the adjacent holes together.
2. Compaction: Move all processes towards one end of memory and all holes towards the other end
of memory, producing one large hole of available memory. This scheme is expensive; it is
possible only if relocation is dynamic and done at execution time.
3. Permit the logical address space of a process to be non-contiguous. This is achieved
through two memory management schemes namely paging and segmentation.
Segmentation
o Memory-management scheme that supports user view of memory
o A program is a collection of segments. A segment is a logical unit such as:Main program, Procedure,
Function, Method, Object, Local variables, global variables, Common block, Stack, Symbol table, arrays
Segmentation Hardware
o Logical address consists of a two-tuple:
<segment-number, offset>
o Segment table – maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:
Base – contains the starting physical address where the segment resides in memory
Limit – specifies the length of the segment
o Segment-table base register (STBR) points to the segment table's location in memory
o Segment-table length register (STLR) indicates the number of segments used by a program;
a segment number s is legal if s < STLR
o Relocation.
dynamic
by segment table
o Sharing.
shared segments
same segment number
o Allocation.
first fit/best fit
external fragmentation
o Protection: With each entry in the segment table associate:
a validation bit (validation bit = 0 means an illegal segment)
read/write/execute privileges
o Protection bits associated with segments; code sharing occurs at segment level
o Since segments vary in length, memory allocation is a dynamic storage-allocation problem
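A minimal sketch of the translation the segmentation hardware performs, assuming a small hard-coded segment table (the base/limit values and STLR are invented for illustration): the segment number is checked against STLR, the offset against the segment's limit, and the physical address is base + offset.

/* Segment-table translation with limit checking (illustrative). */
#include <stdio.h>
#include <stdlib.h>

struct segment_entry { unsigned base; unsigned limit; };

#define STLR 3   /* assumed number of segments used by the program */

static struct segment_entry segment_table[STLR] = {
    {1400, 1000}, {6300, 400}, {4300, 1100}
};

/* Translate <segment-number, offset> to a physical address, or trap. */
static unsigned translate(unsigned s, unsigned d)
{
    if (s >= STLR) {                        /* segment number must satisfy s < STLR */
        fprintf(stderr, "trap: illegal segment %u\n", s);
        exit(1);
    }
    if (d >= segment_table[s].limit) {      /* offset must lie within the limit */
        fprintf(stderr, "trap: offset %u beyond limit %u\n",
                d, segment_table[s].limit);
        exit(1);
    }
    return segment_table[s].base + d;
}

int main(void)
{
    printf("<2, 53> -> %u\n", translate(2, 53));   /* 4300 + 53 = 4353 */
    return 0;
}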
o A segmentation example is shown in the following diagram
EXAMPLE:
o Each process has a segment table associated with it, which the dispatcher uses to define the hardware
segment table when this process is given the CPU.
o Segments are shared when entries in the segment tables of two different processes point to the
same physical location.
o The IBM OS/2 32-bit version is an operating system running on top of the Intel 386 architecture.
The 386 uses segmentation with paging for memory management. The maximum number of segments
per process is 16 K, and each segment can be as large as 4 gigabytes.
The first partition consists of up to 8 K segments that are private to that process.
The second partition consists of up to 8 K segments that are shared among all the
processes.
o Information about the first partition is kept in the local descriptor table
(LDT); information about the second partition is kept in the global descriptor table (GDT).
o Each entry in the LDT and GDT consists of 8 bytes, with detailed information about a
particular segment, including the base location and length of the segment.
The logical address is a pair (selector, offset), where the selector is a 16-bit number:
s (13 bits) | g (1 bit) | p (2 bits)
where s designates the segment number, g indicates whether the segment is in the GDT
or LDT, and p deals with protection. The offset is a 32-bit number specifying the location of the byte within
the segment in question.
o The base and limit information about the segment in question is used to generate a linear address.
o First, the limit is used to check for address validity. If the address is not valid, a memory fault is
generated, resulting in a trap to the operating system. If it is valid, then the value of the offset is added
to the value of the base, resulting in a 32-bit linear address. This address is then translated into a
physical address.
o The linear address is divided into a page number consisting of 20 bits, and a page offset
consisting of 12 bits. Since we page the page table, the page number is further divided into a
10-bit page directory pointer and a 10-bit
page table pointer. The logical address is as follows.
p1 (10 bits) | p2 (10 bits) | d (12 bits)
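A small sketch of how software could split out those fields, following the bit widths given above (13/1/2 for the selector and 10/10/12 for the linear address); the sample selector and linear-address values are arbitrary.

/* Decompose the OS/2 selector and linear address described above (illustrative). */
#include <stdio.h>

int main(void)
{
    unsigned selector = 0x1A4B;      /* 16-bit selector: s(13) g(1) p(2)            */
    unsigned linear   = 0x00ABC123;  /* 32-bit linear address: p1(10) p2(10) d(12)  */

    unsigned s  = selector >> 3;           /* upper 13 bits: segment number  */
    unsigned g  = (selector >> 2) & 0x1;   /* 1 bit: GDT or LDT              */
    unsigned p  = selector & 0x3;          /* 2 bits: protection             */

    unsigned p1 = linear >> 22;            /* 10-bit page-directory index    */
    unsigned p2 = (linear >> 12) & 0x3FF;  /* 10-bit page-table index        */
    unsigned d  = linear & 0xFFF;          /* 12-bit page offset             */

    printf("selector: s=%u g=%u p=%u\n", s, g, p);
    printf("linear:   p1=%u p2=%u d=0x%X\n", p1, p2, d);
    return 0;
}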
o To improve the efficiency of physical memory use, Intel 386 page tables can be swapped to disk. In this
case, an invalid bit is used in the page directory entry to indicate whether the table to which the entry is
pointing is in memory or on disk.
o If the table is on disk, the operating system can use the other 31 bits to specify the disk location of the table;
the table then can be brought into memory on demand.
Paging
It is a memory management scheme that permits the physical address space of a process to be
noncontiguous.
It avoids the considerable problem of fitting varying-sized memory chunks onto the backing
store.
(i) Basic Method:
o Divide logical memory into blocks of the same size called “pages”.
o Divide physical memory into fixed-sized blocks called “frames”.
o Page size is a power of 2, between 512 bytes and 16 MB.
Address Translation Scheme
o Address generated by CPU(logical address) is divided into:
Page number (p) – used as an index into a page table which contains base address of each page
in physical memory
Page offset (d) – combined with base address to define the physical address i.e.,
Physical address = base address + offset
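A minimal sketch of this translation, assuming 4 KB pages and a small hard-coded page table (both are assumptions, not from the notes): the page number indexes the page table, and the physical address is the frame base plus the offset.

/* Paging address translation (illustrative). */
#include <stdio.h>

#define PAGE_SIZE   4096u   /* 4 KB pages (a power of 2) */
#define OFFSET_BITS 12u     /* log2(PAGE_SIZE)           */

static unsigned page_table[8] = { 5, 6, 1, 2, 7, 0, 3, 4 };  /* page -> frame */

static unsigned translate(unsigned logical)
{
    unsigned p = logical >> OFFSET_BITS;     /* page number (index into the table) */
    unsigned d = logical & (PAGE_SIZE - 1);  /* page offset                        */
    unsigned frame = page_table[p];
    return frame * PAGE_SIZE + d;            /* physical = frame base + offset     */
}

int main(void)
{
    unsigned logical = 2 * PAGE_SIZE + 100;  /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}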
Paging Hardware
Allocation
o When a process arrives in the system, its size (expressed in pages) is examined.
o Each page of the process needs one frame. Thus, if the process requires n pages, at least n frames must be available in memory.
o If n frames are available, they are allocated to this arriving process.
o The first page of the process is loaded into one of the allocated frames, and the frame number is put into the
page table.
o Repeat the above step for the remaining pages, and so on.
(ii) Hardware Support – Effective Access Time (EAT)
Example: Assume a TLB lookup takes 20 ns, a memory access takes 100 ns, and the TLB hit ratio is 80%.
EAT = 0.80 × (20 + 100) + 0.20 × (20 + 100 + 100)
    = 0.80 × 120 + 0.20 × 220
    = 140 ns.
(iii) Memory Protection
o Memory protection in a paged environment is accomplished by protection bits associated with each frame; these bits are kept in the page table.
(iv) Structure of the Page Table
a) Hierarchical Paging
b) Hashed Page Tables
c) Inverted Page Tables
a) Hierarchical Paging
o Break up the page table into smaller pieces, because if the page table is too large it is quite
difficult to search the page number.
Example: “Two-Level Paging “
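A minimal sketch of a two-level page walk matching the 10/10/12 split used in two-level paging; the directory, inner table, and mapping values are tiny stand-ins, not a real MMU.

/* Two-level page-table walk (illustrative). */
#include <stdio.h>

#define ENTRIES 1024u   /* 2^10 entries per level */

static unsigned inner_table[ENTRIES];                  /* one page-table page: page -> frame */
static unsigned *directory[ENTRIES] = { inner_table }; /* outer page table (directory)       */

static unsigned translate(unsigned logical)
{
    unsigned p1 = logical >> 22;            /* index into the outer page table */
    unsigned p2 = (logical >> 12) & 0x3FF;  /* index into the inner page table */
    unsigned d  = logical & 0xFFF;          /* offset within the page          */
    unsigned frame = directory[p1][p2];
    return (frame << 12) | d;
}

int main(void)
{
    inner_table[3] = 42;                    /* assume page 3 maps to frame 42 */
    unsigned logical = (0u << 22) | (3u << 12) | 0x0A5;
    printf("logical 0x%X -> physical 0x%X\n", logical, translate(logical));
    return 0;
}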
Virtual Memory
o It is a technique that allows the execution of processes that may not be completely in main
memory.
o Advantages:
Allows a program to be larger than physical memory.
Separation of user logical memory from physical memory
Allows processes to easily share files & address space.
Allows for more efficient process creation.
Demand Paging
Basic Concepts:
o Instead of swapping in the whole process, the pager brings only those pages that are necessary into
memory. Thus,
1. It avoids reading into memory pages that will not be used anyway.
2. Reduce the swap time.
3. Reduce the amount of physical memory needed.
o To differentiate between those pages that are in memory & those that are on the disk we use the
Valid-Invalid bit
o A valid – invalid bit is associated with each page table entry.
o Valid: the associated page is both legal and in memory.
o Invalid: the page is either not in the process's logical address space (an illegal page),
or it is a valid page that is currently on the disk.
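A minimal sketch of the check made on each reference, assuming an invented page-table layout: if the valid bit is clear, the access traps (a page fault); otherwise the frame number is used.

/* Valid-invalid bit check on a page reference (illustrative). */
#include <stdio.h>

struct pte { unsigned frame; unsigned valid; };   /* one page-table entry */

static struct pte page_table[4] = {
    {9, 1}, {0, 0}, {4, 1}, {0, 0}   /* pages 1 and 3 are not in memory */
};

static void access_page(unsigned p)
{
    if (!page_table[p].valid) {
        /* Trap to the OS: either an illegal reference or the page is on
         * disk and must be brought in (demand paging). */
        printf("page %u: page fault\n", p);
        return;
    }
    printf("page %u: in frame %u\n", p, page_table[p].frame);
}

int main(void)
{
    access_page(0);
    access_page(1);
    return 0;
}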
Page Fault
o If no frames are free, we could find one that is not currently being used &
free it.
o We can free a frame by writing its contents to swap space & changing the page table to indicate that
the page is no longer in memory.
o Then we can use that freed frame to hold the page for which the process faulted.
Basic Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
- If there is a free frame, use it.
- If there is no free frame, use a page replacement algorithm to select a victim frame.
- Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) freed frame. Update the page and frame tables.
4. Restart the process.
(i) FIFO Page Replacement
o Replace the oldest page, i.e., the page that was brought into memory first.
Example:
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (simulated in the sketch below)
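A small FIFO simulation on that reference string, assuming 3 frames (the frame count is an assumption; the notes do not fix it). It simply counts page faults.

/* FIFO page-replacement simulation (illustrative). */
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof ref / sizeof ref[0];
    int frame[FRAMES] = {-1, -1, -1};   /* -1 marks an empty frame */
    int next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frame[f] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frame[next] = ref[i];       /* evict the oldest page (FIFO order) */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults with %d frames: %d\n", FRAMES, faults);
    return 0;
}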
(ii) Optimal Page Replacement
o Replace the page that will not be used for the longest period of time.
Example:
Drawback: It is difficult to implement, because it requires future knowledge of the reference string.
(iii) LRU Page Replacement
o Replace the page that has not been used for the longest period of time.
Example:
o Reference bit
With each page associate a reference bit, initially set to 0
When page is referenced, the bit is set to 1
o When a page needs to be replaced, replace the page whose reference bit is 0
o The order of use is not known, but we know which pages were used and which were not used.
(i) Additional Reference Bits Algorithm
o Keep an 8-bit byte of reference history for each page; at each timer interrupt the reference bit is shifted into the high-order bit of this byte (see the sketch below).
o If the reference byte is 00000000, the page has not been used for 8 time periods.
o If the reference byte is 11111111, the page has been used at least once in each time period.
o If the reference byte of page 1 is 11000100 and that of page 2 is 01110111, then page 2 is the LRU
page.
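A minimal sketch of that shifting step; the two history bytes are taken from the example above, and the per-period reference bits are assumed values.

/* Additional-reference-bits history update (illustrative). */
#include <stdio.h>
#include <stdint.h>

static void shift_history(uint8_t history[], const uint8_t refbit[], int npages)
{
    /* Shift each page's reference bit into the high-order bit of its history byte. */
    for (int p = 0; p < npages; p++)
        history[p] = (uint8_t)((refbit[p] << 7) | (history[p] >> 1));
}

int main(void)
{
    uint8_t history[2] = { 0xC4 /* 11000100 */, 0x77 /* 01110111 */ };
    uint8_t refbit[2]  = { 0, 1 };   /* which pages were referenced this period (assumed) */

    shift_history(history, refbit, 2);
    printf("page 1: %02X  page 2: %02X  (smaller value = LRU candidate)\n",
           (unsigned)history[0], (unsigned)history[1]);
    return 0;
}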
(ii) Second Chance Algorithm
o Basic algorithm is FIFO.
o When a page has been selected, check its reference bit.
If it is 0, proceed to replace the page.
If it is 1, give the page a second chance and move on to the next FIFO page.
When a page gets a second chance, its reference bit is cleared and arrival
time is reset to current time.
Hence a second chance page will not be replaced until all other pages are
replaced.
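A minimal clock-style sketch of this selection (a common way to implement second chance): scan in FIFO order, clear set reference bits, and evict the first page whose bit is already 0. The resident pages and bits are invented sample data.

/* Second-chance (clock) victim selection (illustrative). */
#include <stdio.h>

#define NFRAMES 4

static int page[NFRAMES]   = { 7, 0, 1, 2 };   /* pages currently resident     */
static int refbit[NFRAMES] = { 1, 0, 1, 1 };   /* their reference bits         */
static int hand = 0;                           /* next position in FIFO order  */

static int select_victim(void)
{
    for (;;) {
        if (refbit[hand] == 0) {               /* no second chance left: victim */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        refbit[hand] = 0;                      /* give this page a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    int v = select_victim();
    printf("evict page %d from frame %d\n", page[v], v);
    return 0;
}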
Counting-Based Page Replacement
o Keep a counter of the number of references that have been made to each page.
1. Least Frequently Used (LFU )Algorithm: replaces page with smallest count
2. Most Frequently Used (MFU )Algorithm: replaces page with largest count
It is based on the argument that the page with the smallest count
was probably just brought in and has yet to be used
o Page-buffering algorithms: a pool of free frames is maintained; these are used along with the page replacement algorithms above to improve their performance.
Allocation of Frames
o Proportional allocation: allocate frames to each process in proportion to its size.
Let si be the size of process pi and m the total number of frames.
Then S = ∑ si and ai = (si / S) × m,
where ai is the number of frames allocated to process i.
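A quick sketch of that formula with made-up sizes (two processes of 10 and 127 pages sharing 62 frames):

/* Proportional frame allocation (illustrative). */
#include <stdio.h>

int main(void)
{
    int s[] = { 10, 127 };   /* s_i: sizes of the processes, in pages (assumed) */
    int n = sizeof s / sizeof s[0];
    int m = 62;              /* total number of available frames (assumed)      */

    int S = 0;
    for (int i = 0; i < n; i++) S += s[i];

    for (int i = 0; i < n; i++) {
        int a = s[i] * m / S;             /* a_i = (s_i / S) * m, truncated */
        printf("process %d: %d frames\n", i, a);
    }
    return 0;
}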
Global vs. Local Replacement
o Global replacement – each process selects a replacement frame from the set of all frames; one
process can take a frame from another.
o Local replacement – each process selects from only its own set of allocated frames.
Thrashing
o If a process does not have enough frames, its page-fault rate becomes very high.
o The process then spends its time only swapping pages in and out rather than executing.
o This high paging activity is called thrashing.
1. Working-Set Strategy
Other Issues
o Prepaging
To reduce the large number of page faults that occur at process
startup
Prepage all or some of the pages a process will need, before they are referenced
But if prepaged pages are unused, I/O and memory are wasted
o Page Size
Page size selection must take into consideration:
o fragmentation
o table size
o I/O overhead
o locality
o TLB Reach
TLB Reach - The amount of memory accessible from the TLB
TLB Reach = (TLB Size) X (Page Size)
Ideally, the working set of each process is stored in the TLB.
Otherwise, there is a high number of page faults.
Increase the Page Size. This may lead to an increase in fragmentation as not all
applications require a large page size
Provide Multiple Page Sizes. This allows applications that require
larger page sizes the opportunity to use them without an increase in fragmentation.
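A quick computation of the TLB Reach formula above, with an assumed 64-entry TLB and two page sizes (4 KB and 4 MB) to show the effect of increasing the page size:

/* TLB reach = TLB size x page size (illustrative). */
#include <stdio.h>

int main(void)
{
    unsigned entries = 64;             /* assumed TLB size (entries)     */
    unsigned page_kb[] = { 4, 4096 };  /* 4 KB and 4 MB pages (assumed)  */

    for (int i = 0; i < 2; i++)
        printf("%u entries x %u KB pages = %u KB reach\n",
               entries, page_kb[i], entries * page_kb[i]);
    return 0;
}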
o I/O interlock
Pages must sometimes be locked into memory
Consider I/O. Pages that are used for copying a file from a device must be locked
from being selected for eviction by a page replacement algorithm.
The buddy system allocates memory from a fixed-size segment consisting of physically contiguous
pages. Memory is allocated from this segment using a power-of-2 allocator, which satisfies requests in units
sized as a power of 2 (4 KB, 8 KB, 16 KB, and so forth). A request in units not appropriately sized is rounded up
to the next highest power of 2. For example, a request for 11 KB is satisfied with a 16-KB segment.
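A minimal sketch of just the rounding step described above (the request sizes are sample values; the splitting and coalescing of buddies are omitted):

/* Power-of-2 rounding used by a buddy allocator (illustrative). */
#include <stdio.h>

/* Round a request (in KB) up to the next power of 2. */
static unsigned round_up_pow2(unsigned kb)
{
    unsigned size = 1;
    while (size < kb)
        size *= 2;
    return size;
}

int main(void)
{
    unsigned requests[] = { 11, 4, 33, 250 };
    for (int i = 0; i < 4; i++)
        printf("request %u KB -> allocate %u KB\n",
               requests[i], round_up_pow2(requests[i]));
    return 0;
}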
OS Examples
Windows
Windows implements virtual memory using demand paging with clustering. Clustering handles page faults by
bringing in not only the faulting page but also several pages following the faulting page. When a process is first created,
it is assigned a working-set minimum and maximum. The working-set minimum is the minimum number of pages the
process is guaranteed to have in memory.
If sufficient memory is available, a process may be assigned as many pages as its working-set maximum. (In
some circumstances, a process may be allowed to exceed its working-set maximum.) The virtual memory manager
maintains a list of free page frames. Associated with this list is a threshold value that is used to indicate whether
sufficient free memory is available. If a page fault occurs for a process that is below its working-set maximum, the
virtual memory manager allocates a page from this list of free pages. If a process that is at its working-set maximum
incurs a page fault, it must select a page for replacement using a local LRU page-replacement policy.
Solaris
In Solaris, when a thread incurs a page fault, the kernel assigns a page to the faulting thread from the list of free
pages it maintains. Therefore, it is imperative that the kernel keep a sufficient amount of free memory available.
Associated with this list of free pages is a parameter, lotsfree, that represents a threshold at which paging begins. The
lotsfree parameter is typically set to 1/64 the size of the physical memory. Four times per second, the kernel checks
whether the amount of free memory is less than lotsfree. If the number of free pages falls below lotsfree, a process
known as pageout starts up. The pageout process is similar to the second-chance algorithm described earlier.