Ch08 Reloaded
Objectives
To provide a detailed description of various ways of organizing memory hardware, and to discuss memory-management techniques such as paging and segmentation
Background
Memory consists of a large array of words or bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter.
The memory unit sees only a stream of addresses
Program must be brought (from disk) into memory and placed within a process for it to be run
Register access takes one CPU clock cycle (or less); main memory can take many cycles. A cache (memory buffer) sits between main memory and CPU registers
Protection of memory required to ensure correct operation
The limit register specifies the size of the range. The base and limit registers can be loaded only by the operating system, which uses a privileged instruction.
Compile time: If the memory location is known a priori, absolute code can be generated; must recompile code if the starting location changes
Load time: Must generate relocatable code if the memory location is not known at compile time
Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., base and limit registers)
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme
Operating System Principles 8.8 Silberschatz, Galvin and Gagne 2005
Dynamic Loading
A routine is not loaded until it is called. Better memory-space utilization; an unused routine is never loaded. Useful when large amounts of code are needed to handle infrequently occurring cases
Dynamic Linking
Linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine; the stub replaces itself with the routine's memory address and executes the routine
Dynamic linking is particularly useful for libraries; such a system is also known as shared libraries
Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
A process that is swapped out will be swapped back into the same memory space that it had occupied.
The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows). The system maintains a ready queue of ready-to-run processes which have memory images on disk. Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The dispatcher checks whether the next process is in memory. If it is not, and if there is no free memory, the dispatcher swaps out a process currently in memory and swaps in the desired process.
Contiguous Allocation
Main memory is usually divided into two partitions:
the resident operating system, usually held in low memory with the interrupt vector, and user processes, held in high memory
The base register contains the value of the smallest physical address; the limit register contains the range of logical addresses. Each logical address must be less than the limit register. The MMU maps logical addresses dynamically.
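The base/limit check performed by the MMU can be sketched as follows; the register values are illustrative assumptions, not figures from the text.

```python
def translate(logical_addr, base, limit):
    """MMU sketch: check a logical address against the limit register,
    then add the base register to form the physical address."""
    if logical_addr >= limit:          # outside the process's range
        raise MemoryError("trap: addressing error")
    return base + logical_addr         # dynamic relocation

# Process loaded at base 300040 with limit 120900 (illustrative values)
print(translate(100, base=300040, limit=120900))   # -> 300140
```

Any address at or beyond the limit traps to the operating system instead of touching memory.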
Hole: a block of available memory; holes of various sizes are scattered throughout memory
When a process arrives, it is allocated memory from a hole large enough to accommodate it
Operating system maintains information about: a) allocated partitions b) free partitions (hole)
[Figure: successive memory snapshots showing the OS in low memory as holes are allocated to processes 5, 9, and 2]
8.17
First-fit: Allocate the first hole that is big enough
Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless ordered by size. Produces the smallest leftover hole
Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
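The three strategies above can be sketched over a free list of (start, size) holes; the list layout and function names are illustrative assumptions, not any particular allocator's implementation.

```python
def first_fit(holes, n):
    """Return the first hole of size >= n, or None."""
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    """Return the smallest hole of size >= n (smallest leftover)."""
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, n):
    """Return the largest hole (largest leftover)."""
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (300, 500), (1000, 200)]   # scattered holes
print(first_fit(holes, 150))   # (300, 500): first hole big enough
print(best_fit(holes, 150))    # (1000, 200): smallest hole big enough
print(worst_fit(holes, 150))   # (300, 500): largest hole
```

Note that best-fit and worst-fit must scan the whole list, which is why first-fit is usually faster.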
Fragmentation
External fragmentation: total memory space exists to satisfy a request, but it is not contiguous
Internal fragmentation: allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used
Reduce external fragmentation by compaction:
Shuffle memory contents to place all free memory together in one large block
Compaction is possible only if relocation is dynamic and is done at execution time. I/O is a problem: a job doing I/O must be latched in memory, or I/O must be done only into OS buffers
Paging
The logical address space of a process can be noncontiguous; a process is allocated memory frames wherever they are available. The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.
Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into the page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
Keep track of all free frames To run a program of size n pages, need to find n free frames and load program
Page number (p) used as an index into a page table which contains base address of each page in physical memory Page offset (d) combined with base address to define the physical memory address that is sent to the memory unit
For an m-bit logical address and a page size of 2^n bytes:
page number p (m - n bits) | page offset d (n bits)
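The translation described above can be sketched with bit operations; the page size and page-table contents below are made-up illustrative values.

```python
PAGE_BITS = 10                            # n: page size 2^10 = 1024 bytes
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page -> frame (illustrative)

def translate(logical):
    """Split a logical address into (p, d), look up the frame, recombine."""
    p = logical >> PAGE_BITS                  # page number: page-table index
    d = logical & ((1 << PAGE_BITS) - 1)      # page offset: low n bits
    frame = page_table[p]                     # base address of the page
    return (frame << PAGE_BITS) | d           # physical address sent to memory

print(translate(1029))   # page 1, offset 5 -> frame 6 -> 6*1024 + 5 = 6149
```

The offset passes through unchanged; only the page number is remapped, which is why paging needs no relocation arithmetic beyond the table lookup.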
Paging Hardware
Paging Example
Paging itself is a form of dynamic relocation. When we use a paging scheme there is no external fragmentation: any free frame can be allocated to a process that needs it. However, we may have some internal fragmentation. Since the OS is managing physical memory, it must be aware of the allocation details of physical memory: which frames are allocated, which frames are available, and how many total frames there are. This information is generally kept in a data structure called the frame table. The frame table has one entry for each physical page frame, indicating whether the latter is free or allocated and, if it is allocated, to which page of which process or processes.
Free Frames
Before allocation
After allocation
The page table is kept in main memory. The page-table base register (PTBR) points to the page table; the page-table length register (PTLR) indicates the size of the page table. In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB)
Each entry in the TLB consists of two parts: a key (a tag) and a value. When the associative memory is presented with an item, the key is compared with all keys simultaneously. If the item is found, its corresponding value field is returned. The search is fast, but the hardware is expensive. Typically the number of entries in a TLB is between 64 and 1,024.
The TLB is used with page tables in the following way. The TLB contains only a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame number is immediately available and is used to access memory. If the page number is not in the TLB (a TLB miss), a memory reference to the page table must be made. When the frame number is obtained, we can use it to access memory. In addition, we add it to the TLB, so that it will be found quickly on the next reference. TLB entries for the kernel are wired down (cannot be replaced/removed). Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID uniquely identifies each process, providing address-space protection for that process.
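The hit/miss behaviour described above, and the standard effective-access-time estimate, can be sketched as follows; the page-table contents, timings, and hit ratio are illustrative assumptions.

```python
page_table = {0: 7, 1: 3, 2: 9}   # page -> frame (illustrative)
tlb = {}                          # small cache of page-table entries

def lookup(page):
    """Return (frame, 'hit'|'miss'), caching the entry on a miss."""
    if page in tlb:                    # TLB hit: frame available at once
        return tlb[page], "hit"
    frame = page_table[page]           # TLB miss: extra memory reference
    tlb[page] = frame                  # add it for the next reference
    return frame, "miss"

print(lookup(1))   # first reference: miss
print(lookup(1))   # second reference: hit

# Effective access time with TLB search time e, memory access time t,
# hit ratio a: EAT = (t + e) * a + (2t + e) * (1 - a)
e, t, a = 20, 100, 0.8
eat = (t + e) * a + (2 * t + e) * (1 - a)
print(eat)   # 140.0 ns with these illustrative numbers
```

A hit costs one memory access plus the TLB search; a miss costs two, which is where the (2t + e) term comes from.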
Associative Memory
Associative memory: parallel search
Page # Frame #
Memory Protection
Memory protection is implemented by associating a protection bit with each frame. A valid-invalid bit is attached to each entry in the page table:
valid indicates that the associated page is in the process's logical address space, and is thus a legal page; invalid indicates that the page is not in the process's logical address space
Shared Pages
One advantage of paging is the possibility of sharing common code. This consideration is particularly important in a time-sharing environment.
Shared code
One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems). Shared code must appear in the same location in the logical address space of all processes
Private code and data: each process keeps a separate copy of the code and data. The pages for the private code and data can appear anywhere in the logical address space
Two-Level Paging Example
A logical address (on a 32-bit machine with a 1K page size) is divided into:
a page number consisting of 22 bits and a page offset consisting of 10 bits
Since the page table is paged, the page number is further divided into:
a 12-bit outer page number (p1) and a 10-bit inner page number (p2)
Thus, a logical address is as follows:
page number p1 (12 bits) | p2 (10 bits) | page offset d (10 bits)
where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table
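The 12/10/10 split above can be sketched directly with bit operations; the sample address is made up for illustration.

```python
def split(addr):
    """Decompose a 32-bit logical address into (p1, p2, d)
    for a 1K page size: 12-bit outer index, 10-bit inner index,
    10-bit offset."""
    d  = addr & 0x3FF            # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF    # next 10 bits: inner page-table index
    p1 = addr >> 20              # top 12 bits: outer page-table index
    return p1, p2, d

print(split(0x400C05))   # -> (4, 3, 5)
```

Translation then uses p1 to find the right page of the page table, p2 to find the frame within it, and d as the offset into that frame.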
Address-Translation Scheme
Hashed Page Tables
The virtual page number is hashed into a page table. Each entry in the hash table contains a linked list of elements that hash to the same location (to avoid collisions).
Each element consists of three fields: (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element in the linked list. Virtual page numbers in the chain are compared until a match is found; the matching element's page frame is then used.
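A minimal sketch of such a hashed page table, with a Python list standing in for each linked chain of (virtual page number, frame) elements; the bucket count and mappings are illustrative.

```python
NBUCKETS = 8
table = [[] for _ in range(NBUCKETS)]   # one chain per hash bucket

def insert(vpn, frame):
    """Append a (vpn, frame) element to the chain for vpn's bucket."""
    table[hash(vpn) % NBUCKETS].append((vpn, frame))

def lookup(vpn):
    """Walk the chain, comparing virtual page numbers; None if unmapped."""
    for v, frame in table[hash(vpn) % NBUCKETS]:
        if v == vpn:
            return frame
    return None

insert(42, 7)
insert(50, 3)         # hashes to the same bucket as 42; the chain handles it
print(lookup(42))     # -> 7
print(lookup(99))     # -> None
```

The chain bounds the search to the few entries that collide, rather than the whole table.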
Inverted Page Table
One entry for each real page of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page.
Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs.
Use a hash table to limit the search to one, or at most a few, page-table entries.
Segmentation
Memory-management scheme that supports the user view of memory. A program is a collection of segments. A segment is a logical unit such as:
main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays
[Figure: numbered segments laid out in user space, mapped to noncontiguous regions of physical memory]
Segmentation Architecture
A logical address consists of a two-tuple:
<segment-number, offset>
The segment table maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
base: the starting physical address where the segment resides in memory; limit: the length of the segment
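Segment-table translation can be sketched as follows; the (base, limit) entries below are illustrative values in the style of the chapter's segmentation example.

```python
# Segment table: entry s holds (base, limit) for segment s (illustrative)
segment_table = [(1400, 1000), (6300, 400), (4300, 400)]

def translate(seg, offset):
    """Check the offset against the segment's limit, then add its base."""
    base, limit = segment_table[seg]
    if offset >= limit:                  # beyond the end of the segment
        raise MemoryError("trap: addressing error")
    return base + offset

print(translate(2, 53))    # segment 2, offset 53 -> 4300 + 53 = 4353
```

Unlike paging, both the base addition and the limit check are per-segment, because segments have varying lengths.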
Segmentation Hardware
Example of Segmentation
End of Chapter 8