OS Summary
User’s view:
It varies according to the interface used.
GUI:
OS is designed (mostly) for ease of use.
o The goal is to maximize the work the user can do.
Resource utilization: how various hardware & software resources are shared.
o This system doesn’t focus on it.
Terminal:
OS is designed to maximize resource utilization.
o Ensures all available resources are used fairly & efficiently.
Users connect to a mainframe/minicomputer through terminals.
Users share resources & can share info.
System view:
OS acts as a resource allocator.
Manages resources (CPU time, memory, storage, I/O devices, ...).
Decides how to allocate them to programs/users.
OS is a control program.
Control program: a program that manages the execution of user programs to
prevent errors & incorrect use of the computer.
Concerned with the operation & control of I/O devices.
Storage structure.
The CPU can only load instructions from memory.
Types of memory:
Random Access Memory (RAM) || Main memory.
o Volatile memory.
o The only large storage that the CPU can access directly.
Secondary storage.
o Extension of main memory.
o Non-volatile.
o Types of storage:
Hard Disk Drives (HDD).
Disk is logically divided into tracks & sectors.
Disk controller determines the logical interaction between
disk & computer.
Computer-System Architecture.
1) Single general-purpose processor.
a. Used in most systems.
2) Multiprocessor systems
a. Also known as parallel systems || tightly coupled systems.
Advantages:
Increased throughput.
Economy of scale.
Increased reliability (less degradation & fault tolerance).
Its Types:
1- Asymmetric multiprocessing: each processor is assigned a specific task.
2- Symmetric multiprocessing: each processor performs all tasks.
3) Multi-core design.
a. One chip that has more than one core.
b. Each core can do a separate task.
Clustered Systems.
Uses multiple CPUs to accomplish a task.
Composed of two or more individual systems || nodes.
OS structure.
Multiprogramming: the system keeps multiple programs in memory so one is always ready to run.
Increases CPU utilization.
o By organizing tasks (code & data) & CPU always has a task to execute.
Provide an environment that utilizes system resources effectively.
Job pool: an area on the disk where tasks/jobs are stored while waiting for memory
space to become available.
Job Scheduling: selecting one job for the CPU to execute.
Some tasks depend on other tasks.
o Normal systems: CPU would idle.
o Multiprogrammed systems: the CPU switches to another task.
Caching.
Data that is repeatedly used gets saved in the CPU’s cache.
CPU’s cache is always checked before the main memory.
Cache is typically smaller than RAM.
Distributed system.
A collection of physically separate computers that are connected.
Users can access various resources.
Benefits:
o Increases computation speeds.
o Increases functionality.
o Increases data availability.
o Increases data reliability.
Some OSs have file access built in, others leave the choice to the user.
Handheld systems.
These devices have limited size.
Small amount of memory.
o Between 1 MB & 32 GB.
o Must manage memory efficiently.
o Most devices don’t use virtual memory.
Slow processors.
o Fast processors require more power & that needs a larger battery.
Small displays.
Client-server Computing:
Companies are shifting away from centralized systems and moving to PC servers.
Peer-to-Peer computing:
All nodes can act as a client & a server.
Cloud computing:
Provides computing, storage, and apps across the internet.
Uses virtualizations as a base for its functions.
o Ex. Amazon EC2.
Types:
o Public cloud: available over the internet to anyone willing to pay.
o Private cloud: a cloud run by a company for its own use.
o Hybrid cloud: includes both public & private components.
o Software as a Service (SaaS).
o Platform as a Service (PaaS).
o Infrastructure as a Service (IaaS).
Types of systems:
Distributed system.
Handheld systems.
Multimedia systems.
Real time embedded systems.
General use systems.
Lecture 2: (OS structures)
OS Services:
OS provides an environment for program execution.
OS provides some services for the programs & the users.
1) User interface.
Almost all OSs have one (CLI, GUI, || batch).
2) Program execution.
Helps the system load, execute, and terminate programs.
3) I/O operations.
Required by some programs.
Used to control external devices.
The user can’t directly control these devices.
o More efficient & provides protection.
4) File-system manipulation.
Provides a way for programs to read, write, create, delete, search, and
list files & directories.
Some OSs provide a variety of file systems.
Some provide specific features || performance characteristics.
5) Communications.
Exchanges info between processes.
o Processes can be on the same device or another device.
Can be implemented via a shared memory || message passing.
6) Error detection.
Errors can occur in the CPU, memory, hardware, I/O devices, and user
programs.
o The OS takes a different action for each type of error.
OS System Services:
These services ensure the efficient operation of the system itself.
Systems with multiple users share resources to increase efficiency.
1) Resource allocation.
Helps regulate resources between the different users & programs
running at the same time.
CPU cycles, main memory, & file allocation have a special allocation
code.
I/O devices have a general request & release code.
To determine how to use the CPU, OSs have CPU scheduling routines.
They rely on:
o The speed of the CPU.
o The jobs needed.
o The number of registers available.
There are different routines for printers, modems, and USB storage
drives.
2) Accounting.
Keeps logs on what users use & how much they use them.
May be used for accounting || accumulating usage statistics.
Usage statistics may be valuable for researchers.
o To improve computing services.
System call categories:
1) Process control.
a. End/abort.
b. Load/execute.
c. Create || terminate process.
d. Get || set process attributes.
e. Wait for some time.
f. Wait for an event || signal an event.
g. Allocate || free memory.
2) File manipulation.
a. Create || delete file.
b. Open || close file.
c. Read || write || reposition.
d. Get || set file attributes.
3) Device manipulation.
a. Request || release device.
b. Read || write || reposition.
c. Get || set device attributes.
d. Logically attach || detach devices.
4) Information maintenance.
a. Get || set time and date.
b. Get || set system data.
c. Get process, file, device attributes.
d. Set process, file, device attributes.
5) Communication.
a. Create || delete communication connection.
b. Send || receive messages.
c. Transfer status information.
d. Attach || detach remote devices.
6) Protection.
a. Get || set file permissions.
b. Allow || deny user access.
C Library:
Provides a portion of the system call interface for many versions of UNIX &
Linux.
It intercepts program calls and invokes the necessary system call(s).
System programs:
Provide a convenient environment for program development & execution.
Some are user interfaces to system calls.
Most users view OS as system programs & not system calls.
Status information.
o Date, time, available memory, disk space, number of users.
o Details about performance, logs, debugging info.
o Formats & prints the output to the terminal || other devices.
o Some systems implement a registry to store & retrieve configurations.
File modification.
o Text editors to create & modify files.
o Special commands to search file content || perform transformations on
text.
Programming language support.
o Compilers, assemblers, debuggers, interpreters.
Communications.
o Allows the creation of virtual connections among processes, users, and
computer systems.
o Allows users to:
send messages to each other (screen, browser).
Send e-mails.
Log in remotely.
Transfer files.
Application programs.
Virtual machines: the abstraction of a computer’s hardware
into several different execution environments.
Each environment appears as if it’s running on its own.
Takes the layered approach to its logical conclusion.
o Treats the hardware & the OS kernel as if they were all hardware.
Provides an interface identical to the hardware.
The OS creates virtual memory.
Each guest is provided with a virtual copy of the computer.
OS debugging:
Debugging: Finding & fixing errors and bugs.
OS generates log files containing the error’s info.
App failures can generate core dump files (captures process memory).
OS failure can generate crash dump files (contains kernel memory).
Performance tuning can optimize system performance.
Kernighan’s Law: debugging is twice as hard as writing the code in the first place.
o Code written as cleverly as possible is, by definition, too clever to debug.
System boot.
Booting: the procedure of starting a computer by loading the kernel.
Bootstrap: a small piece of code that locates the kernel, loads it into memory, and
starts its execution.
Some computers use a two-step process.
o Simple bootstrap loader fetches more complex boot program from disk.
o Complex boot program loads the kernel.
Firmware is used to hold the initial boot code.
System goals:
o Should be easy to design, implement, and maintain.
o Should be flexible, reliable, error free, and efficient.
Process properties:
More than the program’s code (text section).
Requires resources (CPU time, memory, etc.) to accomplish its task.
Resources are allocated when the process is created || while it’s executing.
Systems: a collection of processes.
OS processes execute system code.
User processes execute user code.
All processes are executed concurrently.
Processes include:
o Text section (program code).
o Stack (temporary data).
o Data section (global variables).
o Heap (contains dynamically allocated memory).
Process concept:
Early computers allowed only one program to execute at a time.
The executed program has complete control of the system & access to all
system resources.
Current computers allow multiple programs to be executed concurrently.
OS may need to support its own internal programmed activities.
OS process responsibilities:
The creation & deletion of user/system processes.
The scheduling of processes.
Communication & deadlock handling for processes.
Program & processes.
Not all programs are processes.
A program is a passive entity (Executable file).
A process is an active entity.
A program becomes a process when it’s loaded into memory.
Process state.
A process changes state as it executes.
State types:
o New: the process is being created.
o Running: instructions are being executed.
o Waiting: the process is waiting for an event.
o Ready: the process is waiting to be assigned to a processor.
o Terminated: the process has finished execution.
Scheduling queue.
Job queue: contain new Processes that enter
the system.
Device queue: set of processes waiting for an
I/O device.
o Each device has its own queue.
Ready queue: contains processes that are in the RAM & ready to execute.
o Its header contains pointers to the first & final PCBs.
o Each PCB has a pointer to the next one.
Queues are stored as a linked list.
Time sharing: the CPU quickly switches between processes to maximize its use.
o The process scheduler selects a process and dispatches it to the CPU.
Process events:
1) process issues an I/O request.
a. It gets placed in the device queue (I/O queue).
2) The process creates a new subprocess.
a. It waits for the subprocesses to terminate.
3) The process is removed forcibly from the CPU.
a. A result of an interrupt.
b. Process is placed back in the ready queue.
Schedulers.
Selects processes from all the different queues.
Long-term scheduling (job scheduler).
o Selects processes from Pool (job queue) & loads them into Ready queue.
Medium-term scheduling.
Short-term scheduling (CPU scheduler).
o Selects a process from the ready processes & allocates it to the CPU.
Medium-term scheduling (swapping):
Can be added if the number of running tasks needs to be decreased.
Removes processes from memory.
Stores them on a disk.
Brings them back later to continue execution.
Context Switch.
Interrupts cause the OS to change the CPU’s current task.
The system needs to save the current context so it can be restored later.
Context is represented in the PCB of the process.
Includes:
o Value of CPU registers.
o The process state.
o Memory management information.
These operations are called State Save & State Restore.
Context switch: doing a State Save of the current process & a State Restore
of a different process.
Operations on processes.
Process creation.
A process may create other processes (using a system
call).
o Original process is called a parent process & the new
process is a child.
This structure is called a tree.
Most OSs use process identifier (pid) to identify processes.
o Typically, an integer number.
After creation:
o The parent can run concurrently with its children.
o The parent can wait until some/all children have terminated.
Child process address space:
o Option 1: the child’s address space is a duplicate of the parent’s (same data &
program as the parent).
o Option 2: the child has a new program loaded into its address space.
Process termination.
Process terminates when it finishes execution.
Uses exit() system call.
o May return a status value that its parent retrieves via the wait() system call.
All resources of the process are de-allocated by the OS.
Parent-child termination:
Parent may terminate a child.
o The child exceeded its usage of some resources.
o The task assigned to the child is no longer required.
o The parent is exiting & the OS doesn’t allow a child to continue without
its parent.
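The fork/exit/wait interplay above can be sketched with Python's os module on a POSIX system (os.fork, os.waitpid, and os.WEXITSTATUS are thin wrappers over the real system calls; the helper name spawn_child is just illustrative):

```python
import os

def spawn_child(status_code):
    """Fork a child that terminates with status_code; the parent waits
    and returns the child's exit status (fork / exit / wait sketch)."""
    pid = os.fork()                    # create a child process
    if pid == 0:
        # Child: a real program might exec() a new program here instead.
        os._exit(status_code)          # the exit() system call
    # Parent: wait() retrieves the child's status value.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print(spawn_child(7))
```

The parent runs concurrently with the child until waitpid blocks; the OS then de-allocates the child's resources once its status has been collected.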
Cooperating process:
o Can affect || be affected by other processes.
o Any process that shares data with other processes.
o Requires an IPC mechanism.
IPC models:
Shared memory model.
o A region of memory that is shared between cooperating processes.
o Processes exchange info by reading & writing data to that region.
Shared-memory systems.
Normally OS tries to prevent a process from accessing another process’s
memory.
Shared memory requires two || more processes to agree to remove this
restriction.
Processes can exchange data by writing & reading to a shared area.
Processes are responsible for:
o Determining the Shared area & data form.
o ensuring that they don’t write to the same location simultaneously.
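As a rough sketch of the shared-memory model, Python's multiprocessing.shared_memory lets one party create a named region and another attach to it by that name. For brevity both ends run inside one process here, though the mechanism is intended for separate cooperating processes:

```python
from multiprocessing import shared_memory

def demo_shared_region():
    """Creator writes into a named shared region; a cooperating party
    attaches by name and reads the same bytes back."""
    writer = shared_memory.SharedMemory(create=True, size=16)
    try:
        writer.buf[:5] = b"hello"                              # producer writes
        reader = shared_memory.SharedMemory(name=writer.name)  # attach by name
        data = bytes(reader.buf[:5])                           # consumer reads
        reader.close()
        return data
    finally:
        writer.close()
        writer.unlink()    # creator removes the region when done
```

Note there is no synchronization here; as the notes say, the processes themselves must avoid writing the same location simultaneously.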
Message-passing systems.
has a mechanism that allows processes to communicate & synchronize
actions without sharing the same address space.
Useful in distributed environment.
Processes may be on different computers.
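A minimal message-passing sketch using POSIX pipes: after fork, parent and child share no address space, yet they exchange data by sending messages through the kernel (function name and buffer size are illustrative):

```python
import os

def message_pass(msg):
    """Parent sends msg through one pipe; the child echoes it back
    through a second pipe. No memory is shared between the two."""
    r1, w1 = os.pipe()   # parent -> child channel
    r2, w2 = os.pipe()   # child -> parent channel
    pid = os.fork()
    if pid == 0:
        os.close(w1); os.close(r2)
        data = os.read(r1, 1024)     # receive from parent
        os.write(w2, data)           # send back
        os._exit(0)
    os.close(r1); os.close(w2)
    os.write(w1, msg)                # send to child
    reply = os.read(r2, 1024)        # receive the echo
    os.waitpid(pid, 0)
    return reply
```

The same idea scales to a distributed environment by replacing the pipes with network sockets.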
Benefits of multithreading:
Resource sharing.
o Threads share memory & resources of the process.
o Allows an app to have several different threads with the same space.
Economy.
o Creating & managing threads is less resource costly than processes.
Scalability.
o Threads may run in parallel on different processing cores.
Multicore (multiprocessor) programming:
Evolved from single CPU systems.
Each core appears as a separate processor to the OS.
Can be in the same CPU chip || different CPU chips.
Provides:
o A mechanism for more efficient use of the multiple cores.
o Improved concurrency.
System can assign a different thread to each core.
Concurrent systems.
o Supports more than one task (alternates
between them).
o CPU schedulers provide the illusion of parallelism.
Rapidly switches between processes in the system.
Used in single processor systems.
Programming challenges.
Identifying tasks.
o Examining apps to find areas that can be divided into
separate/concurrent tasks.
o Independent tasks can run in parallel on individual cores.
Balance.
o Tasks must perform equal work (equal value).
o Some tasks are not worth a separate core to run (small value).
Data splitting.
o Data accessed/manipulated by tasks must be divided.
Data dependency.
o Data accessed by tasks must be examined for dependencies between
two || more tasks.
o Programmers must ensure tasks execution is synchronized.
User threads.
Management is done by user-level threads library.
Primary thread libraries Include:
o POSIX Pthreads.
o Windows threads.
o Java threads.
Kernel threads.
Supported by the kernel.
Examples: Windows, Solaris, Linux, Tru64 UNIX, Mac OS X.
Thread library: provides programmers with an API for creating & managing
threads.
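A small sketch of thread creation and resource sharing using Python's threading library (analogous in spirit to Pthreads' pthread_create/pthread_join): all threads write into one shared list because they live in the same address space.

```python
import threading

def parallel_sum(chunks):
    """Spawn one thread per chunk; threads share the process's memory,
    so they can all write into the same results list."""
    results = [0] * len(chunks)

    def worker(i, chunk):
        results[i] = sum(chunk)      # shared 'results' — same address space

    threads = [threading.Thread(target=worker, args=(i, c))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                     # wait for every thread to finish
    return sum(results)
```

Whether these user threads actually run in parallel on separate cores depends on the mapping model the kernel and runtime use.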
Multithreading models.
Many-to-one model.
Maps many user level threads to one kernel thread.
Thread management is done by thread library in user space
(efficient).
The process will block if a thread makes a blocking system call.
Multiple threads are unable to run (only one thread can access
kernel).
One-to-one model.
Maps each user thread to a kernel thread.
Provides more concurrency than Many-to-One model.
Allows other threads to run when one makes a blocking call.
Allows multiple threads running in parallel.
Creating a user thread requires creating a kernel thread (disadvantage).
Most implementations restrict the number of threads.
Used in Linux & Windows.
Many-to-many model.
maps many user threads to many kernel threads.
Number of kernel threads is less than || equal to the number of user threads.
Number of kernel threads changes based on:
o The application.
o The machine (multiprocessor || single processor).
Lecture 5: (Process Synchronization)
Co-operating process: a process that can affect || be affected by other processes.
Can directly share a logical address space (code & data).
Can share data through files & messages.
Concurrent access to shared data can lead to data inconsistency.
Concurrency problems.
Concurrency: the ability to run multiple tasks at the same time.
Uni-processor CPU:
o tasks are divided into sub-tasks (different processes).
o Interleaving: Each process can use the processor(CPU) for a limited time.
o Can cause data inconsistency.
Multi-processor CPU:
o Tasks run in parallel on different processors.
o May lead to overlapping & data inconsistency.
The Producer consumer problem.
Producer: a process that produces information & stores it in a buffer.
Consumer: a process that consumes info from the buffer.
They don’t take turns accessing the buffer.
Problems:
o Producer tries to place an item when the buffer is full.
o Consumer tries to take an item from an empty buffer.
Solutions:
o Use an integer counter to keep track of the items.
Code for both parties:
Increment (producer):
Register1 = counter.
Register1 = register1 + 1.
Counter = register1.
Decrement (consumer):
Register2 = counter.
Register2 = register2 - 1.
Counter = register2.
If these two instruction sequences interleave, the counter can end up with a wrong value.
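The lost update can be shown deterministically by hand-simulating one unlucky interleaving of the producer's increment with the consumer's decrement (ordinary Python variables stand in for the CPU registers):

```python
def lost_update_demo(counter=5):
    """Simulate one bad interleaving of counter++ (producer) and
    counter-- (consumer). The correct final value would be 5, but
    this schedule loses the producer's increment."""
    register1 = counter          # producer loads 5
    register2 = counter          # consumer loads 5 (interleaved!)
    register1 = register1 + 1    # producer computes 6
    register2 = register2 - 1    # consumer computes 4
    counter = register1          # producer stores 6
    counter = register2          # consumer stores 4 — increment lost
    return counter
```

With a different interleaving the final value could be 6 instead; the point is that the result depends on timing, which is exactly the race condition described next.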
Race condition problem.
Happens when multiple processes read & write to shared data item and the
result depends on timing.
Processes are either unaware || indirectly aware of each other.
Ex) two processes placing an order at the same time.
Solution:
o Synchronize processes & co-ordinate between co-operating processes.
Solution:
o Creating areas before & after the critical section.
Entry section.
Exit section.
Remainder section.
o Each process must ask for permission (in entry section) before entering
the critical section.
o Processes move from the critical section to the exit & remainder
sections.
Solution requirements:
o Mutual exclusion:
If a process is executing in the critical section, no other process can
execute in the critical section.
o Progress:
If there is no process executing in its critical section, the selection of
next process to execute can’t be delayed.
o Bounded waiting:
After a process requests to enter its crit section & before the
request is granted, the number of times other processes are
allowed to enter their crit section is limited.
Assumes each process executes at a non-zero speed.
Doesn’t have any assumptions about relative speed of the other
processes.
Non-preemptive kernels:
o Doesn’t allow a process running in kernel mode to be interrupted.
o Processes run until they exit kernel mode, block, || temporarily lose
control of the CPU.
Solution types:
Software solution: code that runs before a process enters crit section.
o Shared lock variable.
o Peterson’s solution.
o Semaphore.
Semaphore usage:
Can be used to solve various synchronization problems.
States:
o When a process is in its crit section, S = 0.
o If a process enters queue, S gets decremented by one.
o If a process exits queue, S gets incremented by one.
o If the queue is empty & there is no process in crit section, S = 1.
Counting semaphore: can range over an unrestricted domain.
o Used to control access to a resource consisting of a finite number of
instances (Semaphore = number of instances).
Binary semaphore: either 1 || 0 (can be used instead of mutex lock).
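A counting-semaphore sketch using Python's threading.Semaphore, initialized to the number of resource instances. The peak-tracking bookkeeping is only there to observe that no more than n_instances threads ever hold the resource at once (thread counts and sleep duration are arbitrary):

```python
import threading
import time

def demo_counting_semaphore(n_threads=6, n_instances=2):
    """Semaphore initialized to the number of resource instances;
    returns the peak number of simultaneous holders observed."""
    sem = threading.Semaphore(n_instances)
    lock = threading.Lock()
    active = 0
    peak = 0

    def use_resource():
        nonlocal active, peak
        with sem:                      # wait(S): blocks when no instance is free
            with lock:
                active += 1
                peak = max(peak, active)
            time.sleep(0.01)           # hold an instance briefly
            with lock:
                active -= 1
        # leaving the 'with sem' block performs signal(S)

    threads = [threading.Thread(target=use_resource) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return peak
```

With n_instances set to 1 this behaves as a binary semaphore, i.e. a mutex lock.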
Deadlock: two or more processes are waiting indefinitely for an event that can
be caused only by one of the waiting processes.
Dispatcher.
Part of CPU scheduling.
Dispatcher: A module that gives control of the CPU to the process (selected
by short term scheduler).
Dispatch latency: the time it takes to stop one process & start another.
Scheduling Criteria.
CPU utilization: keeping the CPU as busy as possible.
o Should be maximized.
Throughput: num of processes that complete their execution (in a time unit).
o Should be maximized.
scheduling algorithms.
First come, first served (FCFS) scheduling.
processes are scheduled based on arrival.
Can be represented using a Gantt chart.
Avg waiting time = total waiting / num of processes.
o Can be reduced by changing order of arrival.
Convoy effect: processes wait for a long process taking the CPU.
o Results in lower CPU & device utilization.
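The FCFS average-waiting-time calculation can be sketched as follows (this assumes all processes arrive at time 0 and are scheduled in list order; the burst values in the comment are the usual textbook example):

```python
def fcfs_waiting(bursts):
    """Waiting time of each process under FCFS: each process waits
    for the sum of the bursts scheduled before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)    # time spent waiting before this burst
        elapsed += burst
    return waits

# Example: bursts 24, 3, 3 give waits 0, 24, 27 (average 17);
# reordering to 3, 3, 24 gives waits 0, 3, 6 (average 3) —
# the convoy effect in miniature.
```

Average waiting time is then sum(waits) / len(bursts), showing how arrival order alone changes the result.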
Priority scheduling.
o Externally defined:
Priority is set by criteria outside the OS (often political).
Ex) process importance, type & amount of funds.
o Non-preemptive priority.
When a process arrives (with high priority), it gets placed at the
front of the ready queue and it waits for the CPU.
Blocked process: a process that is ready to run but is left waiting for the CPU.
Indefinite blocking (Starvation): happens when the CPU leaves a low priority
process waiting indefinitely.
o Such a process either eventually gets an opening || is lost when the computer crashes.
Aging: gradually increasing the priority of a process that has been waiting for
a long time.
Round-Robin (RR) scheduling.
Like FCFS but with preemption to allow the system to switch between
processes.
Designed for timesharing systems.
Time slice (Time quantum): a small unit of time.
o Generally, between 10 to 100 milliseconds.
Ready queue is treated as circular queue.
If time slice is large, RR is like FIFO (arrival order).
Each process gets 1/n of the CPU time.
Each process waits at most (n-1) * time slice.
Implementation:
o Processes are placed in ready queue by order of arrival.
o CPU scheduler takes the first process and sets a timer to interrupt after
1 time slice.
o If process burst time is less than time slice, the process leaves the CPU.
o If process burst time is more, process gets interrupted & process will be
placed at the end of the queue.
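The RR steps above can be sketched as a small simulation (all processes are assumed to arrive at time 0; the burst values and quantum in the test are the common textbook example):

```python
from collections import deque

def rr_waiting(bursts, quantum):
    """Waiting times under Round-Robin. Since every process arrives
    at time 0, waiting time = completion time - burst time."""
    queue = deque(enumerate(bursts))   # ready queue of (pid, remaining)
    finish = [0] * len(bursts)
    clock = 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for one slice at most
        clock += run
        remaining -= run
        if remaining > 0:
            queue.append((pid, remaining))  # interrupted: back of the queue
        else:
            finish[pid] = clock        # burst finished, leaves the CPU
    return [finish[i] - bursts[i] for i in range(len(bursts))]
```

For bursts 24, 3, 3 with quantum 4 the waits come out to 6, 4, 7 (average about 5.67), versus 17 under FCFS with the same order.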
Algorithm evaluation.
Before selecting an algo we must define the importance of some elements.
They may include:
o Maximizing CPU utilization (max response time is 1 second).
o Maximizing throughput.
Deterministic modeling.
Analytic evaluation: uses the given algo & the system’s workload to produce
a number that’s used to evaluate the algo’s performance for that workload.
Deterministic modeling is a type of analytic evaluation.
o It sets a workload and tests all available algorithms (requires exact
numbers).
o It’s fast, simple & provides exact numbers (performance numbers).
Simulations
Queueing models are limited & simulations are more accurate (still limited).
They gather stats used to indicate the algo’s performance.
Simulation data is collected from:
o Random number generators (using probability distribution).
o Tracing tape record sequences of real system events.
Implementation:
o Implements a new scheduler & tests it in real systems.
High cost & risk.
Can produce different results based on environments.
o Some schedulers are flexible & can be modified to fit the system.
o Some have APIs to modify priorities.
Lecture 7: (Deadlocks)
Deadlock: a situation where a waiting process can’t change its state (its waiting
for resources used by other processes).
System model.
Consists of a finite number of resources.
Resources are partitioned into several types (classes).
Resource types can have more than one instance (CPU type has two
instances if the system has two CPUs).
Some instances are not identical (printers on different floors).
A process can’t request resources more than what’s available.
Steps:
o Request: process requests resources.
If the resources are in use, the process must wait until they are free.
o Use: the process uses the resources to finish its task.
o Release: process releases resources.
Deadlocked state set:
o A set of processes where every process in the set is waiting for an event
caused by another process in the same set.
o Events may be resource acquisition & release.
o Resources may be physical (ex. printers) || logical (ex. files).
Deadlock characterization.
Deadlock conditions.
Mutual exclusion.
o At least one resource must be held in a non-sharable mode.
Hold and wait.
o A process must be holding at least one resource & waiting for extra
resources used by other processes.
No preemption.
o Resources can only be released when the process is done using it.
Circular wait.
o Waiting processes must exist in a circular pattern (P0 => P1 => P2… Pn
=> p0).
Resource allocation graph.
Consists of a set of vertices (V) & a set of edges (E).
Vertices are divided into active processes (P) & resources (R).
Request edge: an edge from Pi to Rj, denoted Pi → Rj (process i
requested resource j).
Assignment edge: an edge from Rj to Pi, denoted Rj → Pi (resource j is
allocated to process i).
Circles represent processes & rectangles represent resources.
Resources with different instances have dots inside their rectangles.
Steps:
o Process (P) requests resource type Ri.
o A request edge is inserted in the resource allocation graph.
o When the resource is available, that edge becomes an assignment edge.
o When the process finishes its task, the assignment edge is deleted.
Cases:
o Resource types have one instance:
If there is a cycle, a deadlock has occurred (each process in the
cycle is deadlocked).
o Resource types have several instances:
If there is a cycle, a deadlock may have occurred.
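For the single-instance case, deadlock detection therefore reduces to finding a cycle in the directed graph. A standard DFS cycle check can sketch this (encoding the graph as a dict of successor lists is an assumption of this example, not part of the notes):

```python
def has_cycle(graph):
    """DFS cycle detection on a directed graph {node: [successors]}.
    In a resource-allocation graph where every resource type has one
    instance, a cycle implies deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on stack / done
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:   # back edge → cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)
```

For example, P1 → R1 → P2 → R2 → P1 is a deadlock cycle; remove the edge R2 → P1 and the graph is cycle-free.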
Handling deadlocks.
1. Using protocols to prevent/avoid deadlocks.
2. Allowing the system to enter deadlock, detect it, and recover.
3. Ignore any deadlock.
a. Used by most OSs.
b. The application developers must write programs to handle deadlocks.
Deadlock prevention: a set of methods that make sure that at least one of the
deadlock conditions is not true.
They place constraints on how requests are made.
Deadlock avoidance: the OS requires any process to send extra info in advance
(which resources it will request & use).
for more info about deadlock avoidance & prevention (book pages 76 – 82).
Banker’s algorithm.
The resource allocation graph is not valid for systems with multiple instances
of each resource type.
Banker’s algorithm: an avoidance algorithm that’s used for systems with
multiple instances of each resource type.
o Can be used in banking systems (stops system from allocating all cash).
o n = number of processes.
o m = number of resource types.
Safety algorithm.
Used to find if a system is in a safe state or not.
Safe state: a state where the system can allocate
resources to each process in some order and still
avoid deadlock.
May require m × n² operations.
Resource-Request algorithm.
Used to determine if requests can be safely granted.
Requesti is the request vector for Pi.
Actions after a request:
1. If Requesti ≤ Needi, go to step 2.
a. Otherwise, raise an error condition (the process has exceeded its
maximum claim).
2. If Requesti ≤ Available, go to step 3.
a. Otherwise, Pi must wait (resources are not available).
3. Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
a. Available = Available – Requesti.
b. Allocationi = Allocationi + Requesti.
c. Needi = Needi – Requesti.
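The safety and resource-request algorithms can be sketched together (representing the Allocation/Need matrices as lists of lists is an assumption of this sketch; the data in the test is the common five-process, three-resource textbook example):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: look for an order in which every process can
    obtain its remaining need and then release everything it holds."""
    work = list(available)
    n = len(allocation)
    finish = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j]
                                     for j in range(len(work))):
                for j in range(len(work)):
                    work[j] += allocation[i][j]   # Pi finishes & releases
                finish[i] = True
                progressed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    """Resource-request algorithm for Pi: grant only if the request is
    within Need and Available AND the pretend-allocated state is safe."""
    if any(request[j] > need[i][j] for j in range(len(request))):
        raise ValueError("process exceeded its maximum claim")   # step 1
    if any(request[j] > available[j] for j in range(len(request))):
        return False                               # step 2: Pi must wait
    # Step 3: pretend to allocate, then run the safety algorithm.
    avail = [available[j] - request[j] for j in range(len(request))]
    alloc = [row[:] for row in allocation]
    nd = [row[:] for row in need]
    for j in range(len(request)):
        alloc[i][j] += request[j]
        nd[i][j] -= request[j]
    return is_safe(avail, alloc, nd)
```

The request is actually granted only when the pretend state passes the safety check; otherwise Pi waits and the old state is restored.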
Lecture 8: (Main Memory)
CPU fetches the next instruction from memory (using program counter).
CPU can only access the main memory & registers.
Cache: memory buffer used to reduce the speed difference between the CPU & main memory.
Basic hardware:
To make sure that each process has a separate memory:
o Base register: holds the smallest legal physical memory address.
o Limit register: specifies the size of the range.
These registers are only loaded by the OS.
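The hardware check that the base & limit registers perform can be expressed in one comparison (the numeric values in the test are illustrative, in the style of the usual textbook figure):

```python
def legal_address(addr, base, limit):
    """Sketch of the protection check done in hardware: an address
    generated in user mode is legal only if it falls in
    [base, base + limit); anything else traps to the OS."""
    return base <= addr < base + limit
```

For base 300040 and limit 120900, addresses 300040 through 420939 are legal; 420940 or anything below the base would trap.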
Address binding:
Programs are stored as binary executable files on disk.
To be executed, a program is brought into memory & placed within a process.
Input queue: list of processes on disk waiting to be brought into memory.
Swapping.
Swapping processes in & out of memory.
Processes swapped out are stored in a backing store.
Used with priority scheduling.
Sometimes it’s called roll out, roll in.
Backing store: an area in a fast disk.
o Provides direct access to memory images.
o System maintains a ready queue of all the memory images in the
backing store.
The dispatcher checks if the next process is in memory.
o If there is no space for it, it swaps out a process to free some space.
These systems have a high context switch time.
o Because of the transfer time (from store to memory).
o To reduce time systems only swap processes when they are needed.
Using memory requests & releases.
Memory types:
Fixed partition scheme:
o Divide memory into several fixed size partitions.
o Each partition can only contain one process.
o Degree of multi-programming is bound by the number of partitions.
o Originally used by IBM OS/360 (the MFT variant).
Memory allocation.
OS has a table containing all available & occupied memory.
Available memory is considered as one large block.
Processes are placed into the input queue.
o When there is enough memory space, the OS loads the process.
The OS orders the input queue according to the scheduling algorithm.
OS might skip over large processes.
Fragmentation.
External fragmentation: storage is fragmented onto large number of small
holes.
o Happens due to loading and removing processes.
Internal fragmentation: a process occupies an area bigger than what it needs.
o System sees the free area as allocated.
Compaction.
A solution to external fragmentation.
Shuffles the memory, making the free memory a continuous part.
Not always possible.
o If relocation is static & done at load time.
Possible if relocation is dynamic & done at execution time.
Moves the program & its data + changes the base register.
Simple method:
o Moving all processes to one end of the memory & the free holes to the
other end.
o This method is expensive.
Paging.
Memory management scheme that allows a process to have non-continuous
addresses.
Avoids external fragmentation & the need for compaction.
Solves the problem of fitting memory chunks of varying sizes onto the
backing store.
o Processes swapped out of memory need space in the backing store.
Used in most OSs.
Hardware support.
Most OS allocate a page table for each process.
Process control block: stores a pointer to the page table + other register
values.
Dispatcher reloads the user registers & defines the correct page table.
Protecting memory.
valid-invalid bit: An additional bit is added to each entry in the page table.
o Valid bit indicates that its page is in the process’s logical address space.
o Invalid bit indicates that its page isn’t in the process's logical address
space (illegal address).
o The OS uses this bit to allow/disallow access to pages.
Shared pages.
Pages can share common code.
Important for time sharing environments.
Reentrant code: non-self-modifying code.
o Can be shared because it can’t change during execution.
Processes share the code pages but don’t share data || registers.
Saves space.
Segmentation.
A memory management scheme that supports the user’s view of memory.
o User’s view: memory is a collection of variable-sized segments
(no ordering).
Logical address space: collection of segments.
Segments have names & lengths (offset).
User specifies addresses by a name & an offset.
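The (segment, offset) addressing scheme can be sketched as a segment-table lookup (the dict-based table and the use of MemoryError to stand in for a hardware trap are assumptions of this example):

```python
def translate(segment_table, seg, offset):
    """Segmentation translation sketch: each segment-table entry holds
    a (base, limit) pair; a legal offset must be below the limit, and
    the physical address is base + offset."""
    base, limit = segment_table[seg]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset
```

For instance, with segment 2 at base 4300 and limit 400, byte 53 of that segment maps to physical address 4353, while a reference past a segment's limit traps to the OS.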