
DEPARTMENT OF COMPUTER SCIENCE AND INFORMATION TECHNOLOGY

SCHOOL OF TECHNOLOGY

COURSE NAME: COMPUTER OPERATING SYSTEMS

COURSE CODE: COMP 214

LECTURE NOTES

2024/2025 ACADEMIC YEAR
Processes, Synchronization, and Scheduling

1. Processes

A process is an instance of a program in execution. It is more than just the program code; it
includes the program counter, registers, stack, and other execution-related data.

1.1 Process States

1. New: The process is being created.

2. Ready: The process is waiting to be assigned to a CPU.

3. Running: Instructions are being executed.

4. Waiting: The process is waiting for some I/O operation to complete.

5. Terminated: The process has finished execution.

1.2 Process Control Block (PCB)

The PCB is a data structure maintained by the operating system for each process. It contains:

• Process ID

• Process state

• Program counter

• CPU registers

• Scheduling information
• Memory management information

• I/O status
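As a rough illustration, a much-simplified PCB could be modelled as the C struct below. All field names and sizes here are hypothetical; a real kernel's PCB (for example, Linux's task_struct) holds far more state.

    /* Illustrative, heavily simplified PCB; real kernels store much more. */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int            pid;             /* process ID */
        proc_state_t   state;           /* current process state */
        unsigned long  program_counter; /* address of the next instruction */
        unsigned long  regs[16];        /* saved CPU register values */
        int            priority;        /* scheduling information */
        void          *page_table;      /* memory-management information */
        int            open_fds[16];    /* I/O status: open file descriptors */
        struct pcb    *next;            /* link in a scheduler queue */
    } pcb_t;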

1.3 Process Operations

• Creation: A new process is created using fork() in UNIX-like systems.

• Termination: A process can terminate itself (exit()) or be terminated by another process (kill()).

• Communication: Processes can communicate using mechanisms such as shared memory or message passing.
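A minimal sketch of creation and termination in C on a UNIX-like system (error handling kept to a minimum):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();          /* creation: duplicate the calling process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            printf("child %d running\n", (int)getpid());
            exit(0);                 /* termination: the child exits itself */
        } else {
            waitpid(pid, NULL, 0);   /* parent waits for the child to finish */
            printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
        }
        return 0;
    }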

2. Process Synchronization

Synchronization is critical in multi-process systems to ensure that shared resources are accessed safely and efficiently.

2.1 Critical Section

A critical section is a segment of code in which a process accesses shared resources. Synchronization ensures that only one process accesses the critical section at a time.

2.2 Synchronization Mechanisms

1. Locks: Used to prevent concurrent access.

   1. Mutex (Mutual Exclusion): A binary lock for exclusive access (see the sketch after this list).

2. Semaphores:

   1. Binary Semaphore: Like a mutex, but allows signaling between processes.

   2. Counting Semaphore: Allows a set number of processes to access resources.

3. Monitors: High-level synchronization constructs combining mutual exclusion and condition variables.

4. Condition Variables: Used with locks to wait and notify processes.
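As a concrete illustration of a lock, here is a minimal sketch using POSIX threads (assumes a POSIX system; compile with -pthread). Without the mutex, the two threads would race on the shared counter and the final value would be unpredictable:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter the critical section */
            counter++;                    /* access the shared resource */
            pthread_mutex_unlock(&lock);  /* leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* always 200000 with the lock */
        return 0;
    }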

2.3 Classical Synchronization Problems

1. Producer-Consumer Problem: Producers and consumers share a bounded buffer.

2. Readers-Writers Problem: Readers and writers access a shared database with priorities.

3. Dining Philosophers Problem: Philosophers (processes) share chopsticks (resources).

3. CPU Scheduling

Scheduling determines which process gets to use the CPU and for how long.

3.1 Scheduling Criteria

• CPU Utilization: Keep the CPU busy as much as possible.

• Throughput: Maximize the number of processes completed in a given time.

• Turnaround Time: Minimize the time taken from submission to completion.

• Waiting Time: Minimize the time spent waiting in the ready queue.

• Response Time: Minimize the time from request submission to the first response.

3.2 Scheduling Algorithms

1. First-Come, First-Served (FCFS): Processes are executed in the order of arrival.

   1. The simplest scheduling algorithm; processes are scheduled strictly by arrival time.

   2. FCFS is a non-preemptive scheduling algorithm.
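   3. Worked example: if P1 (burst 24 ms), P2 (3 ms), and P3 (3 ms) arrive at time 0 in that order, their waiting times are 0, 24, and 27 ms, giving an average of (0 + 24 + 27) / 3 = 17 ms. Arriving as P2, P3, P1 instead would cut the average to (0 + 3 + 6) / 3 = 3 ms, which shows how a long job at the head of the queue penalizes everyone behind it.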

2. Shortest Job Next (SJN): Executes the process with the shortest expected execution time.

   1. In its basic form SJN is non-preemptive; its preemptive variant is Shortest Remaining Time First (SRTF).

   2. Easy to implement in batch systems where the required CPU time is known in advance.

3. Round Robin (RR): Time-sharing technique with a fixed time quantum.

   1. Fair and responsive, since every process receives a regular time slice.

   2. High context-switching overhead for small quanta.

   3. Round Robin is a preemptive scheduling algorithm.


4. Priority Scheduling: Executes processes based on priority.

   1. Risk of starvation for low-priority processes. Solution: aging (gradually increasing the priority of waiting processes).

   2. In its basic form priority scheduling is non-preemptive, and it is one of the most common scheduling algorithms in batch systems.

   3. Processes with the same priority are executed on a first-come, first-served basis.

   4. Priority can be decided based on memory requirements, time requirements, or any other resource requirement.

5. Multilevel Queue Scheduling: Divides processes into separate priority queues.

   1. Segregates different classes of processes.

   2. Generally, high-priority processes are placed in the top-level queue; lower-level queues are scheduled only after the top-level queue is empty. It can suffer from starvation.

6. Multilevel Feedback Queue: Processes can move between queues based on their execution history. The idea is to separate processes according to the characteristics of their CPU bursts, so the scheduler dynamically adjusts to process behavior: if a process uses too much CPU time, it is moved to a lower-priority queue.

4. Deadlocks

A deadlock occurs when a set of processes is blocked, each waiting for a resource held by
another.

4.1 Necessary Conditions for Deadlock

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode.

2. Hold and Wait: Processes hold resources while waiting for others.

3. No Preemption: Resources cannot be forcibly taken from a process.

4. Circular Wait: A cycle of processes exists, each waiting for a resource held by the next.

4.2 Deadlock Prevention and Avoidance

• Prevention: Break at least one of the four necessary conditions.

• Avoidance: Dynamically analyze resource allocation (e.g., using the Banker's Algorithm).

• Detection and Recovery: Detect cycles and recover by terminating processes or preempting resources.

5. Real-Time Scheduling
• Hard Real-Time Systems: Missing a deadline can cause catastrophic failure.

• Soft Real-Time Systems: Deadlines are important but not critical.

Scheduling algorithms:

1. Rate Monotonic Scheduling (RMS): Static priority based on task frequency.

2. Earliest Deadline First (EDF): Dynamic priority based on deadlines.
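A useful fact for RMS (due to Liu and Layland) is that n periodic tasks are guaranteed schedulable if total CPU utilization does not exceed n(2^(1/n) − 1): about 82.8% for n = 2, approaching ln 2 ≈ 69.3% as n grows. EDF, in contrast, can schedule any task set with utilization up to 100%.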

Summary

• Processes are the foundation of multitasking operating systems.

• Synchronization mechanisms are vital for consistency in shared environments.

• Scheduling ensures efficient and fair use of the CPU.

• Deadlock management is critical for system reliability.

Dispatching Processes

Definition

• Dispatching: The act of assigning a process to the CPU for execution.

• It is part of the process scheduling mechanism, which ensures the CPU is utilized
effectively by managing multiple processes.

Key Components

1. Process States:

1. Ready: Processes waiting for CPU allocation.

2. Running: The process currently executing on the CPU.

3. Waiting: Processes waiting for I/O operations or other events.

Dispatcher:

• The dispatcher is a module of the operating system responsible for:

• Context Switching: Saving the state of the current process and loading the state
of the next process.

• Switching to User Mode: Ensures the CPU is set up to execute the next process
in user mode.

• Jumping to the Proper Location: Directs execution to the program counter of the
next process.

Types of Dispatching:

• Non-preemptive Dispatching:
• The current process runs until it voluntarily releases the CPU (e.g., completes
execution or performs I/O).

• Preemptive Dispatching:

• A process can be interrupted to allow another process (with higher priority) to execute.

Factors Influencing Dispatching:

1. Scheduling Policy:

1. First-Come, First-Served (FCFS)

2. Shortest Job Next (SJN)

3. Round Robin (RR)

4. Priority Scheduling

2. Process Priority:

1. Higher-priority processes may preempt lower-priority ones.

3. Quantum Time (in RR scheduling):

1. Time slice allocated for each process.

Relationship Between Dispatching and Process Creation

1. Process Creation:

1. Generates new processes and adds them to the ready queue.

2. Dispatching:

1. Picks processes from the ready queue and allocates CPU for execution.

Key Interaction Points:

• Scheduler decides which process to dispatch based on policies and priorities.

• Context Switching during dispatching maintains continuity and avoids resource conflicts.

Common Challenges

• Overhead from frequent context switching during dispatching.

• Resource Contention during process creation.

• Deadlocks and race conditions due to improper synchronization.

Processes and Threads

Definition
• A process is an instance of a program in execution.

• It contains the program code and its current activity, including:

• Program Counter: Tracks the next instruction.

• Register States: CPU registers holding intermediate data.

• Memory: Code, data, and stack segments.

Components of a Process

1. Process Control Block (PCB):

1. A data structure in the OS containing:

1. Process ID (PID)

2. Current state (e.g., running, ready, waiting)

3. Program counter

4. CPU register values

5. Memory allocation information

Memory Layout:

• Code Segment: Instructions of the program.

• Data Segment: Global variables.

• Heap: Dynamic memory allocation.

• Stack: Function calls and local variables.

Process Life Cycle

1. New: Process is created.

2. Ready: Waiting for CPU allocation.

3. Running: Executing instructions.

4. Waiting: Waiting for I/O or an event.

5. Terminated: Process has finished execution.

Types of Processes

1. Foreground Processes: Directly interact with the user.

2. Background Processes: Run without user interaction.

Threads

Definition
• A thread is the smallest unit of CPU execution.

• Multiple threads can exist within a single process, sharing resources such as:

• Memory

• File handles

Components of a Thread

1. Thread ID: Unique identifier for the thread.

2. Program Counter: Tracks the next instruction.

3. Register Set: Contains intermediate data.

4. Stack: Stores function calls and local variables.

Thread vs. Process

Feature         Process                                      Thread
Definition      Independent execution unit                   Lightweight unit of execution within a process
Memory          Separate address space for each process     Shared with other threads of the same process
Communication   Requires inter-process communication (IPC)  Via shared memory within the process
Overhead        Higher due to context switching             Lower

Benefits of Threads

1. Responsiveness: Allows a program to remain responsive (e.g., GUI apps).

2. Resource Sharing: Threads of the same process share memory and resources.

3. Economy: Thread creation and switching have lower overhead compared to processes.

4. Scalability: Threads can take advantage of multicore processors.
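A small sketch of resource sharing with POSIX threads (compile with -pthread): both threads write directly into the same global array, with no IPC required.

    #include <pthread.h>
    #include <stdio.h>

    /* Threads of one process share globals such as this array. */
    static int results[2];

    static void *square(void *arg) {
        int idx = *(int *)arg;
        results[idx] = (idx + 1) * (idx + 1);  /* write to shared memory */
        return NULL;
    }

    int main(void) {
        pthread_t tid[2];
        int ids[2] = {0, 1};
        for (int i = 0; i < 2; i++)
            pthread_create(&tid[i], NULL, square, &ids[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(tid[i], NULL);
        printf("results: %d %d\n", results[0], results[1]);  /* prints 1 4 */
        return 0;
    }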

Thread Types

1. User Threads:
1. Managed at the user level by a library.
2. OS is unaware of their existence.
2. Kernel Threads:
1. Managed directly by the OS.
2. Slower due to kernel overhead but integrates well with OS scheduling.
Processes vs. Threads

Feature           Processes                                Threads
Creation Time     Slower                                   Faster
Resource Sharing  Independent                              Shared memory, code, files
Failure Impact    A process crash does not affect others   A thread crash may terminate the entire process
Communication     Complex, via IPC                         Easier, via shared resources

Multithreading
Definition
• Multithreading allows multiple threads to execute concurrently within a single process.
Models of Multithreading
1. Many-to-One:
1. Many user-level threads map to one kernel thread.
2. Efficient but not parallel on multiprocessors.
2. One-to-One:
1. Each user thread maps to a kernel thread.
2. Offers true concurrency but incurs high overhead.
3. Many-to-Many:
1. Many user threads map to multiple kernel threads.
2. Combines the benefits of other models.
Benefits of Multithreading
1. Improved Throughput: Executes multiple tasks simultaneously.
2. Concurrency: Efficient CPU utilization.
3. Parallelism: Exploits multicore processors for performance gains.
Challenges
1. Synchronization:
1. Threads need to coordinate access to shared resources.
2. Requires mechanisms like mutexes, semaphores, or monitors.
2. Deadlocks:
1. Occur when threads wait indefinitely for resources held by each other.
Process-Thread Relationship
• Processes can contain multiple threads.
• Threads share resources of the parent process, making them efficient for tasks
requiring frequent communication.
Use Cases
• Processes:
• Isolated applications like browsers or word processors.
• Threads:
• Concurrent tasks like downloading files and updating a UI simultaneously.
Semaphores and the Producer/Consumer Problem
Semaphores
A semaphore is a synchronization tool used in operating systems to control access to shared
resources in a concurrent system.
Types of Semaphores
1. Binary Semaphore:
1. Also known as a mutex (mutual exclusion).
2. Takes values 0 or 1.
3. Ensures only one process can access a critical section at a time.
2. Counting Semaphore:
1. Takes non-negative integer values.
2. Useful for managing access to resources with a limited count (e.g., printer
queues).
Uses of Semaphores
1. Mutual Exclusion:
1. Prevents multiple processes from accessing critical sections simultaneously.
2. Synchronization:
1. Coordinates the execution of processes, ensuring proper order.
Advantages of Semaphores
1. Simple and effective for process synchronization.
2. Supports both mutual exclusion and conditional synchronization.
Challenges

1. Deadlocks: Improper use can lead to deadlock situations.

2. Busy Waiting: Inefficient resource utilization in some implementations.

3. Starvation: Processes may indefinitely wait for semaphore release.

The Producer/Consumer Problem

Problem Overview

Involves two types of processes:

• Producer: Generates data and stores it in a shared buffer.

• Consumer: Retrieves data from the shared buffer for processing.

Key Constraints

1. Mutual Exclusion:

1. Only one process can access the buffer at a time.

2. Buffer Management:
1. Prevents the producer from adding data to a full buffer.

2. Prevents the consumer from retrieving data from an empty buffer.

Behavior

1. Producer:

• Checks if there's an empty slot in the buffer (P(Empty)).

• Locks the buffer for exclusive access (P(Mutex)).

• Produces an item and updates the buffer.

• Unlocks the buffer (V(Mutex)) and signals that a slot is filled (V(Full)).

2. Consumer:
• Checks if there's a filled slot (P(Full)).
• Locks the buffer for exclusive access (P(Mutex)).
• Consumes an item and updates the buffer.
• Unlocks the buffer (V(Mutex)) and signals that a slot is empty (V(Empty)).
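A sketch of this scheme with POSIX unnamed semaphores and threads (assumes Linux-style sem_init; the buffer size, item counts, and names such as empty_slots are illustrative):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                       /* buffer capacity */

    static int buffer[N];
    static int in = 0, out = 0;       /* next slot to fill / to empty */
    static sem_t empty_slots, full_slots, mutex;

    static void *producer(void *arg) {
        (void)arg;
        for (int item = 0; item < 32; item++) {
            sem_wait(&empty_slots);   /* P(Empty): wait for a free slot */
            sem_wait(&mutex);         /* P(Mutex): lock the buffer */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&mutex);         /* V(Mutex): unlock the buffer */
            sem_post(&full_slots);    /* V(Full): signal a filled slot */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 32; i++) {
            sem_wait(&full_slots);    /* P(Full): wait for a filled slot */
            sem_wait(&mutex);         /* P(Mutex): lock the buffer */
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);         /* V(Mutex): unlock the buffer */
            sem_post(&empty_slots);   /* V(Empty): signal a free slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_slots, 0, N); /* N empty slots initially */
        sem_init(&full_slots, 0, 0);  /* no filled slots initially */
        sem_init(&mutex, 0, 1);       /* binary semaphore used as a lock */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }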
Common Issues
1. Deadlock:
• Both producer and consumer processes block indefinitely due to improper
semaphore usage.
2. Starvation:
• A process might never gain access to the buffer due to scheduling biases.
Applications
1. Operating Systems: Resource allocation and synchronization.
2. Multithreading: Ensuring safe access to shared data structures.
Semaphore Example: Readers and Writers Problem


Problem Overview

A classic synchronization problem involving multiple processes:

• Readers: Processes that only read the shared data.

• Writers: Processes that modify the shared data.

Key Constraints

1. Multiple readers can access the shared data simultaneously.

2. Only one writer can access the shared data at a time.

3. A writer should have exclusive access, meaning no other readers or writers can access
the data while a writer is writing.
Goals

1. Ensure data consistency by synchronizing access.

2. Avoid deadlocks and starvation:

• Readers or writers should not be indefinitely delayed.

3. Maximize concurrency:

• Allow multiple readers when no writer is writing.

Solution Using Semaphores

1. Semaphores Used:

• Mutex:

• Binary semaphore for controlling access to the reader count.

• Write:

• Binary semaphore for ensuring mutual exclusion for writers.

• ReaderCount:

• Shared variable tracking the number of active readers.

Explanation of the Algorithm

Reader Behavior:

• Increment ReaderCount:

• The first reader locks the writer to prevent writing during reading.

• Concurrent Reading:

• Subsequent readers can proceed without locking the writer.

• Decrement ReaderCount:

• The last reader unlocks the writer, allowing writers to proceed.

Writer Behavior:

• Writers gain exclusive access by locking the Write semaphore.

• Writers must wait for all readers to finish before writing.
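A sketch of this reader-priority algorithm using POSIX semaphores (the data-access bodies are elided; as noted under Challenges below, writers can starve under this variant):

    #include <pthread.h>
    #include <semaphore.h>

    static sem_t mutex;        /* guards reader_count */
    static sem_t write_lock;   /* gives writers (and the first reader) exclusivity */
    static int reader_count = 0;

    void reader(void) {
        sem_wait(&mutex);
        if (++reader_count == 1)
            sem_wait(&write_lock);   /* first reader blocks out writers */
        sem_post(&mutex);

        /* ... read the shared data ... */

        sem_wait(&mutex);
        if (--reader_count == 0)
            sem_post(&write_lock);   /* last reader lets writers in again */
        sem_post(&mutex);
    }

    void writer(void) {
        sem_wait(&write_lock);       /* exclusive access: no readers or writers */
        /* ... modify the shared data ... */
        sem_post(&write_lock);
    }

    int main(void) {
        sem_init(&mutex, 0, 1);      /* both semaphores start at 1 */
        sem_init(&write_lock, 0, 1);
        /* ... spawn reader and writer threads here ... */
        return 0;
    }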

Challenges
1. Starvation:
1. Writers may starve if readers continuously occupy the resource.
2. To address this, writer-priority solutions can be implemented.
2. Deadlock:
1. Proper semaphore usage and ordering are necessary to prevent circular waiting.
Variants
1. Reader-Priority:
1. Allows readers to access the resource unless a writer is already active.
2. Writers may starve.
2. Writer-Priority:
1. Writers are given priority over readers to avoid writer starvation.
2. Readers may face longer delays.
Applications
1. Database Management:
• Allow multiple queries (readers) while preventing simultaneous updates (writers).
2. File Systems:
• Enabling concurrent read access while ensuring safe file modifications.
Summary
1. Readers and Writers Problem:
1. Balances concurrency (for readers) and exclusivity (for writers).
2. Semaphores:
1. Synchronize and manage access.
3. Variants:
1. Reader-priority and writer-priority solutions can be implemented based on the
requirements.
Memory Management and Virtual Memory

Storage Allocation

Storage allocation refers to the strategies and methods used to manage memory in a computer system. Memory is allocated to store data and instructions, both at compile time and at run time. Efficient storage allocation is critical for program execution and system performance.

1. Categories of Storage Allocation

Storage allocation can be broadly categorized into the following types:

a. Static Allocation
Definition: Memory is allocated at compile time and remains fixed throughout the program's
execution.
Characteristics:
• Requires knowledge of memory requirements at compile time.
• Allocation happens in a fixed memory region.
• Does not support dynamic data structures like linked lists or trees.
Advantages:
• Simple implementation.
• Predictable memory usage.
Disadvantages:
• Wastes memory if allocated space is unused.
• Limited flexibility for dynamic needs.
Example: Global variables and static variables in C/C++.
b. Stack Allocation
Definition: Memory is allocated and deallocated in a last-in, first-out (LIFO) manner, using a
stack.
Characteristics:
• Supports function calls and local variables.
• Each function call creates an activation record (or stack frame).
• Memory is automatically reclaimed when the function exits.
Advantages:
• Efficient and fast allocation/deallocation.
• Automatically managed.
Disadvantages:
• Limited to a predefined stack size.
• Cannot handle dynamic memory needs beyond the stack's scope.
Example: Local variables in C/C++.
c. Heap Allocation
Definition: Memory is dynamically allocated and deallocated at runtime from a heap data
structure.
Characteristics:
• Supports dynamic data structures like linked lists, trees, etc.
• Requires manual memory management (e.g., malloc/free in C or new/delete in
C++) or garbage collection.
Advantages:
• Flexible and can handle unpredictable memory needs.
• Suitable for long-lived objects.
Disadvantages:
• Slower than stack allocation.
• Prone to memory leaks and fragmentation.
Example: Dynamically allocated memory using malloc() or new.
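A minimal C illustration of heap allocation and the matching manual deallocation:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int n = 5;
        int *a = malloc(n * sizeof *a);  /* heap allocation at run time */
        if (a == NULL)
            return 1;                    /* allocation can fail */
        for (int i = 0; i < n; i++)
            a[i] = i * i;
        printf("a[4] = %d\n", a[4]);
        free(a);                         /* forgetting this call is a memory leak */
        return 0;
    }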
2. Memory Layout of a Program
A typical program's memory layout includes the following segments:
1. Text Segment: Stores the program code (read-only).
2. Data Segment:
• Initialized Data Segment: Stores global and static variables with initial values.
• Uninitialized Data Segment (BSS): Stores global and static variables without
initial values.
3. Stack Segment: Used for function call management and local variables.
4. Heap Segment: Used for dynamically allocated memory.
3. Storage Allocation Strategies

Several strategies are used to allocate storage effectively:

a. First Fit

• Allocates the first available block of memory that is large enough.


• Advantages: Fast and simple.

• Disadvantages: May cause external fragmentation.

b. Best Fit

• Allocates the smallest available block that fits the request.

• Advantages: Minimizes wasted space.

• Disadvantages: Slower and may increase fragmentation.

c. Worst Fit

• Allocates the largest available block to maximize leftover space.

• Advantages: May reduce fragmentation in certain cases.

• Disadvantages: Generally less efficient.

d. Next Fit

• Similar to First Fit but starts searching from the last allocated block.

• Advantages: Balances memory usage.

• Disadvantages: May still cause fragmentation.
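A minimal sketch of the First Fit search over a singly linked free list (bookkeeping heavily simplified; a real allocator would also split oversized blocks and coalesce free neighbours):

    #include <stddef.h>

    typedef struct block {
        size_t size;          /* bytes available in this free block */
        struct block *next;   /* next block on the free list */
    } block_t;

    /* First Fit: return the first free block large enough, or NULL. */
    block_t *first_fit(block_t *free_list, size_t request) {
        for (block_t *b = free_list; b != NULL; b = b->next)
            if (b->size >= request)
                return b;     /* Best Fit would instead keep scanning for
                                 the smallest block that still fits */
        return NULL;
    }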

4. Garbage Collection

A technique used to reclaim memory that is no longer in use.

• Methods:

   • Reference Counting: Counts references to an object; reclaims memory when the count reaches zero.

   • Mark and Sweep: Marks reachable objects and sweeps away unreferenced objects.

   • Generational Collection: Divides memory into generations and collects younger objects more frequently.

5. Storage Allocation in Programming Languages

• C/C++:

• Uses static, stack, and heap allocation.

• Requires manual memory management.

• Java/Python:

• Relies heavily on heap allocation.

• Automatic memory management using garbage collection.


6. Common Issues in Storage Allocation

• Memory Leaks: Failure to deallocate memory, causing wastage.

• Dangling Pointers: Accessing memory that has already been deallocated.

• Fragmentation:

• External Fragmentation: Free memory scattered in small, non-contiguous blocks between allocations.

• Internal Fragmentation: Unused space within allocated memory.

7. Best Practices

1. Use stack memory for short-lived variables and heap memory for long-lived or dynamic
objects.

2. Avoid memory leaks by properly freeing unused memory.

3. Minimize fragmentation through efficient allocation strategies.

4. Leverage garbage collection features in high-level languages.

Sharing Main Memory

Sharing main memory involves allocating and managing memory resources among multiple processes or threads in a computing system. Efficient sharing is critical to ensuring system performance, stability, and security.

1. Overview of Main Memory Sharing

• Definition: The process of allowing multiple programs or processes to utilize the main memory simultaneously.

• Need for Sharing:

   • Support multitasking.
   • Maximize memory utilization.
   • Enable efficient communication between processes.
2. Challenges in Sharing Main Memory
• Memory Protection: Ensuring that one process cannot access or modify another
process's memory.
• Resource Contention: Multiple processes competing for limited memory resources.
• Fragmentation:
• Internal Fragmentation: Unused memory within allocated space.
• External Fragmentation: Unused memory scattered across the system.
• Synchronization Issues: Maintaining consistency when processes share memory.
3. Techniques for Sharing Main Memory
Several techniques and mechanisms are used to manage and share memory effectively:
a. Fixed Partitioning
• Memory is divided into fixed-sized partitions.
• Allocation: Each partition is allocated to one process.
• Advantages:
• Simple to implement.
• Easy to manage.
• Disadvantages:
• Inefficient use of memory due to internal fragmentation.
• Limited flexibility.
b. Dynamic Partitioning
• Memory is divided into variable-sized partitions based on process requirements.
• Allocation: Processes are allocated memory blocks of varying sizes.
• Advantages:
• Reduces internal fragmentation.
• More flexible than fixed partitioning.
• Disadvantages:
• Prone to external fragmentation.
• Requires compaction to consolidate free memory.
c. Paging
• Memory is divided into fixed-sized blocks called pages.
• Allocation: Processes are divided into pages, which are loaded into memory frames.
• Characteristics:
• Eliminates external fragmentation.
• Requires a page table to map pages to frames.
• Advantages:
• Efficient memory utilization.
• Simplifies memory management.
• Disadvantages:
• Overhead of maintaining page tables.
• May cause page faults if required pages are not in memory.
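To make the page-to-frame mapping concrete, the sketch below splits a 32-bit logical address for 4 KB pages; the example address is arbitrary, and the frame number stands in for an assumed page-table lookup:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* 4 KB pages: 12 offset bits, 20 page-number bits */

    int main(void) {
        uint32_t logical = 0x00403A2Cu;          /* example logical address */
        uint32_t page    = logical / PAGE_SIZE;  /* index into the page table */
        uint32_t offset  = logical % PAGE_SIZE;  /* position within the page */
        uint32_t frame   = 7;                    /* assume the table maps page -> frame 7 */
        uint32_t physical = frame * PAGE_SIZE + offset;
        printf("page=%u offset=%u physical=0x%X\n", page, offset, physical);
        return 0;
    }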
d. Segmentation
• Memory is divided into segments, each representing a logical unit (e.g., code, data,
stack).
• Allocation: Segments are allocated memory blocks of varying sizes.
• Characteristics:
• Supports the logical organization of memory.
• Requires a segment table to map segments to memory locations.
• Advantages:
• Improves modularity and security.
• Reduces internal fragmentation.
• Disadvantages:
• Prone to external fragmentation.
• More complex than paging.
e. Virtual Memory
• Combines physical memory with secondary storage (e.g., disk) to provide the illusion of
a larger memory space.
• Techniques:
• Paging.
• Segmentation.
• Advantages:
• Allows execution of large programs.
• Improves multitasking.
• Disadvantages:
• Overhead due to page faults and swapping.
• Slower performance if disk access is frequent.
4. Memory Protection Mechanisms
Memory protection ensures the isolation and security of processes sharing memory. Common
mechanisms include:
1. Base and Limit Registers:
• Define the range of memory accessible to a process.
• Protect against out-of-bound access.
2. Virtual Memory:
• Maps logical addresses to physical addresses.
• Enforces access control through the translation process.
3. Access Control Bits:
• Indicate whether a memory region is read-only, write-only, or executable.
• Prevent unintended operations.
4. Segmentation and Paging Protection:
• Uses tables to validate memory access.
5. Memory Sharing Models
Processes can share memory in several ways, depending on system design:
a. Shared Memory
• A region of memory is shared by multiple processes for communication.
• Implementation: Processes use system calls to request shared memory segments.
• Advantages:
• Fast inter-process communication (IPC).
• Reduced overhead compared to message passing.
• Disadvantages:
   • Requires synchronization mechanisms like semaphores or mutexes.
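A minimal sketch of requesting and mapping a POSIX shared memory segment (assumes a POSIX system; older Linux systems may need -lrt when linking). A second process that opens "/demo_shm" and maps it sees the same bytes:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Create (or open) a named shared memory object and size it. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        ftruncate(fd, 4096);

        /* Map it into this process's address space. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "hello from shared memory");
        printf("%s\n", p);

        munmap(p, 4096);
        close(fd);
        shm_unlink("/demo_shm");   /* remove the object once no longer needed */
        return 0;
    }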
b. Copy-on-Write (COW)
• A memory-sharing technique where processes share the same memory until a
modification occurs.
• Mechanism:
• A copy is created only when one process modifies the memory.
• Advantages:
• Saves memory.
• Reduces overhead during process creation (e.g., fork in Unix).
• Disadvantages:
• Complexity in tracking memory modifications.
c. Memory-Mapped Files
• Maps files into memory to allow sharing between processes.
• Usage:
• Enables direct file I/O using memory operations.
• Facilitates efficient IPC.
• Advantages:
• High performance for file-based communication.
• Simplifies file access.
• Disadvantages:
• Requires careful synchronization.
6. Synchronization in Shared Memory
Synchronization ensures consistent access to shared memory. Common techniques include:
1. Locks:
1. Mutexes or spinlocks to prevent concurrent modifications.
2. Semaphores:
1. Used to coordinate access among multiple processes.
3. Barriers:
1. Ensure all processes reach a synchronization point before proceeding.
4. Condition Variables:
1. Allow processes to wait for specific conditions.
7. Best Practices for Sharing Main Memory
1. Use memory protection to ensure process isolation.
2. Minimize memory contention through efficient allocation strategies.
3. Optimize for low fragmentation using techniques like paging or dynamic partitioning.
4. Use appropriate synchronization mechanisms to avoid race conditions.
