1. Processes
A process is an instance of a program in execution. It is more than just the program code; it
includes the program counter, registers, stack, and other execution-related data.
The Process Control Block (PCB) is a data structure maintained by the operating system for each process. It contains:
• Process ID
• Process state
• Program counter
• CPU registers
• Scheduling information
• Memory management information
• I/O status
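The fields above can be modeled as a simple record. A minimal sketch in Python (the field names and values are illustrative, not taken from any real OS):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block: one record per process."""
    pid: int                                        # Process ID
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # scheduling information
    page_table: dict = field(default_factory=dict)  # memory management info
    open_files: list = field(default_factory=list)  # I/O status

pcb = PCB(pid=42)
pcb.state = "ready"   # the OS updates the PCB as the process changes state
```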
Process Operations
Termination: A process can terminate itself (exit) or be terminated by another process (kill).
2. Process Synchronization
2. Semaphores:
1. Binary Semaphore: Like a mutex, but also allows signaling between processes.
A classic synchronization problem is the Readers-Writers Problem, in which readers and writers access a shared database with different priorities.
CPU Scheduling
Scheduling determines which process gets to use the CPU and for how long.
• Waiting Time: Minimize the time spent waiting in the ready queue.
• Response Time: Minimize the time from request submission to the first response.
2. Shortest Job Next (SJN): Executes the process with the shortest expected execution time first.
It is easy to implement in batch systems, where the required CPU time is known in advance.
3. Round Robin (RR): Each process runs for a fixed time slice in turn; fair and responsive.
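As a sketch of how SJN minimizes average waiting time, the following computes waiting times for a batch of jobs; the burst times are invented example data:

```python
def sjn_waiting_times(bursts):
    """Non-preemptive Shortest Job Next: run jobs in ascending burst order.
    Returns the average waiting time, assuming all jobs arrive at time 0."""
    waits, clock = [], 0
    for burst in sorted(bursts):
        waits.append(clock)   # time this job spent in the ready queue
        clock += burst        # CPU is busy for the job's full burst
    return sum(waits) / len(waits)

# Bursts 6, 8, 7, 3 -> SJN runs them as 3, 6, 7, 8
print(sjn_waiting_times([6, 8, 7, 3]))  # 7.0 (waits: 0, 3, 9, 16)
```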
Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
Processes with the same priority are executed on a first-come, first-served basis.
Priority can be decided based on memory requirements, time requirements, or any other
resource requirement.
Generally, high-priority processes are placed in the top-level queue; only after the
top-level queue is empty are processes from lower-level queues scheduled. This approach
can suffer from starvation.
Multilevel Feedback Queue: Processes can move between queues based on execution history.
The idea is to separate processes according to the characteristics of their CPU bursts.
Deadlocks
A deadlock occurs when a set of processes is blocked, each waiting for a resource held by
another.
1. Mutual Exclusion: At least one resource is held in a non-sharable mode.
2. Hold and Wait: Processes hold resources while waiting for others.
3. No Preemption: Resources cannot be forcibly taken away from a process.
4. Circular Wait: A cycle of processes exists, each waiting for a resource held by the next.
• Avoidance: Dynamically analyze resource allocation (e.g., using the Banker's Algorithm).
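A compact sketch of the Banker's Algorithm safety check: it asks whether some order exists in which every process can finish with the currently available resources (the matrices are invented example data):

```python
def is_safe(available, allocation, need):
    """Banker's Algorithm safety test. available: free units per resource
    type; allocation[i]/need[i]: per-process current and remaining demand."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion, then releases its allocation.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

# Two resource types, three processes (example data): a safe state.
print(is_safe([3, 3],
              [[0, 1], [2, 2], [3, 0]],
              [[7, 4], [1, 2], [0, 0]]))  # True
```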
Real-Time Scheduling
• Hard Real-Time Systems: Missing a deadline can cause catastrophic failure.
• Soft Real-Time Systems: Missing a deadline degrades service quality but is tolerable.
Scheduling algorithms: Rate-Monotonic (static priorities based on period) and
Earliest-Deadline-First (dynamic priorities based on the nearest deadline).
Summary
Dispatching Processes
Definition
• The dispatcher is part of the process scheduling mechanism, which ensures the CPU is
utilized effectively by managing multiple processes.
Key Components
1. Process States: New, Ready, Running, Waiting, and Terminated.
Dispatcher:
• Context Switching: Saving the state of the current process and loading the state
of the next process.
• Switching to User Mode: Ensures the CPU is set up to execute the next process
in user mode.
• Jumping to the Proper Location: Directs execution to the program counter of the
next process.
Types of Dispatching:
• Non-preemptive Dispatching:
• The current process runs until it voluntarily releases the CPU (e.g., completes
execution or performs I/O).
• Preemptive Dispatching:
• The OS can interrupt the running process, e.g., when its time slice expires or a
higher-priority process becomes ready.
Factors influencing dispatching:
1. Scheduling Policy (e.g., FCFS, Round Robin, Priority Scheduling).
2. Process Priority.
1. Process Creation: New processes are admitted to the ready queue.
2. Dispatching:
1. Picks a process from the ready queue and allocates the CPU for execution.
• Context Switching during dispatching maintains continuity and avoids resource conflicts.
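The dispatch loop above can be sketched as a toy preemptive round-robin dispatcher; the job names and burst times are invented, and "saving state" is reduced to remembering the remaining burst:

```python
from collections import deque

def dispatch(jobs, quantum=2):
    """Toy preemptive dispatcher. jobs: {name: burst time}. Each 'context
    switch' saves the job's remaining burst and requeues it if unfinished.
    Returns the order in which jobs complete."""
    ready = deque(jobs.items())
    finish_order = []
    while ready:
        name, remaining = ready.popleft()       # dispatcher picks next process
        remaining -= min(quantum, remaining)    # process runs for one slice
        if remaining > 0:
            ready.append((name, remaining))     # context switch: state saved
        else:
            finish_order.append(name)           # process terminates
    return finish_order

print(dispatch({"A": 5, "B": 2, "C": 3}))  # ['B', 'C', 'A']
```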
Common Challenges
Definition
• A process is an instance of a program in execution.
Components of a Process
1. Process ID (PID)
2. Process state
3. Program counter
Memory Layout: text, data (initialized and uninitialized), heap, and stack segments.
Types of Processes
Threads
Definition
• A thread is the smallest unit of CPU execution.
• Multiple threads can exist within a single process, sharing resources such as:
• Memory
• File handles
Components of a Thread
• Thread ID
• Program counter
• Register set
• Stack
Threads share the code, data, and open files of their parent process.
Benefits of Threads
1. Responsiveness: An application can remain interactive even if one of its threads blocks.
2. Resource Sharing: Threads of the same process share memory and resources.
3. Economy: Thread creation and switching have lower overhead compared to processes.
Thread Types
1. User Threads:
1. Managed at the user level by a library.
2. OS is unaware of their existence.
2. Kernel Threads:
1. Managed directly by the OS.
2. Slower due to kernel overhead but integrates well with OS scheduling.
Feature              Processes                       Threads
Address space        Separate per process            Shared within the process
Creation overhead    High                            Low
Communication        IPC (pipes, messages, etc.)     Direct, via shared memory
Fault isolation      Strong                          Weak (one thread can crash the process)
Multithreading
Definition
• Multithreading allows multiple threads to execute concurrently within a single process.
Models of Multithreading
1. Many-to-One:
1. Many user-level threads map to one kernel thread.
2. Efficient but not parallel on multiprocessors.
2. One-to-One:
1. Each user thread maps to a kernel thread.
2. Offers true concurrency but incurs high overhead.
3. Many-to-Many:
1. Many user threads map to multiple kernel threads.
2. Combines the benefits of other models.
Benefits of Multithreading
1. Improved Throughput: Executes multiple tasks simultaneously.
2. Concurrency: Efficient CPU utilization.
3. Parallelism: Exploits multicore processors for performance gains.
Challenges
1. Synchronization:
1. Threads need to coordinate access to shared resources.
2. Requires mechanisms like mutexes, semaphores, or monitors.
2. Deadlocks:
1. Occur when threads wait indefinitely for resources held by each other.
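The need for synchronization can be shown with a shared counter: the increment is a read-modify-write, so without the lock concurrent updates can be lost. A minimal Python sketch:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # mutex: only one thread in the critical section
            counter += 1  # the read-modify-write is now atomic

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 — guaranteed only because of the lock
```

Removing the `with lock:` line makes the final count nondeterministic and usually smaller than 400000.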
Process-Thread Relationship
• Processes can contain multiple threads.
• Threads share resources of the parent process, making them efficient for tasks
requiring frequent communication.
Use Cases
• Processes:
• Isolated applications like browsers or word processors.
• Threads:
• Concurrent tasks like downloading files and updating a UI simultaneously.
Semaphores and the Producer/Consumer Problem
Semaphores
A semaphore is a synchronization tool used in operating systems to control access to shared
resources in a concurrent system.
Types of Semaphores
1. Binary Semaphore:
1. Also known as a mutex (mutual exclusion).
2. Takes values 0 or 1.
3. Ensures only one process can access a critical section at a time.
2. Counting Semaphore:
1. Takes non-negative integer values.
2. Useful for managing access to resources with a limited count (e.g., printer
queues).
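A counting semaphore can be sketched with Python's threading.Semaphore; here a hypothetical pool of 2 identical resources is shared by 6 worker threads, and the code tracks the peak number of simultaneous holders:

```python
import threading
import time

pool = threading.Semaphore(2)   # counting semaphore: 2 identical resources
in_use = 0
peak = 0
stats_lock = threading.Lock()   # protects the two counters above

def worker():
    global in_use, peak
    with pool:                  # P(): blocks while both resources are taken
        with stats_lock:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)        # simulate holding the resource
        with stats_lock:
            in_use -= 1
    # V() happens automatically when the 'with pool' block exits

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 2, the semaphore's initial count
```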
Uses of Semaphores
1. Mutual Exclusion:
1. Prevents multiple processes from accessing critical sections simultaneously.
2. Synchronization:
1. Coordinates the execution of processes, ensuring proper order.
Advantages of Semaphores
1. Simple and effective for process synchronization.
2. Supports both mutual exclusion and conditional synchronization.
Challenges
1. Incorrect ordering of wait (P) and signal (V) operations can cause deadlock.
2. Errors are timing-dependent, which makes semaphore-based code hard to debug.
Problem Overview
A producer adds items to a shared, bounded buffer while a consumer removes them. The two
must be synchronized so that the producer never overfills the buffer and the consumer
never reads from an empty one.
Key Constraints
1. Mutual Exclusion:
1. Only one process may modify the buffer at a time.
2. Buffer Management:
1. Prevents the producer from adding data to a full buffer.
2. Prevents the consumer from removing data from an empty buffer.
Behavior
1. Producer:
• Checks for an empty slot (P(Empty)).
• Locks the buffer for exclusive access (P(Mutex)).
• Produces an item and adds it to the buffer.
• Unlocks the buffer (V(Mutex)) and signals that a slot is filled (V(Full)).
2. Consumer:
• Checks if there's a filled slot (P(Full)).
• Locks the buffer for exclusive access (P(Mutex)).
• Consumes an item and updates the buffer.
• Unlocks the buffer (V(Mutex)) and signals that a slot is empty (V(Empty)).
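The P/V steps above map directly onto a bounded-buffer implementation. A sketch using Python threads and semaphores (the buffer size and item count are arbitrary):

```python
import threading
from collections import deque

N = 3                                  # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)         # Mutex: exclusive buffer access
empty = threading.Semaphore(N)         # Empty: number of free slots
full = threading.Semaphore(0)          # Full: number of filled slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # P(Empty): wait for a free slot
        mutex.acquire()                # P(Mutex): lock the buffer
        buffer.append(item)
        mutex.release()                # V(Mutex): unlock the buffer
        full.release()                 # V(Full): one more filled slot

def consumer(count):
    for _ in range(count):
        full.acquire()                 # P(Full): wait for a filled slot
        mutex.acquire()                # P(Mutex): lock the buffer
        consumed.append(buffer.popleft())
        mutex.release()                # V(Mutex): unlock the buffer
        empty.release()                # V(Empty): one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, ..., 9]; the buffer never held more than 3 items
```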
Common Issues
1. Deadlock:
• Both producer and consumer processes block indefinitely due to improper
semaphore usage.
2. Starvation:
• A process might never gain access to the buffer due to scheduling biases.
Applications
1. Operating Systems: Resource allocation and synchronization.
2. Multithreading: Ensuring safe access to shared data structures.
Semaphore Example: Readers and Writers Problem
Problem Overview
Key Constraints
1. Multiple readers may read the shared data simultaneously.
2. Only one writer may write at any given time.
3. A writer must have exclusive access, meaning no other readers or writers can access
the data while a writer is writing.
Goals
1. Preserve data consistency.
2. Avoid starvation of readers or writers where possible.
3. Maximize concurrency: allow as many readers as possible to read at the same time.
1. Semaphores Used:
• Mutex: Protects updates to the shared ReaderCount variable.
• Write: Grants exclusive access to the shared data (held by a writer, or by the
group of active readers).
• ReaderCount: A shared counter tracking how many readers are currently reading.
Reader Behavior:
• Increment ReaderCount: Lock Mutex and increment the count.
• The first reader locks the writer (P(Write)) to prevent writing during reading.
• Concurrent Reading: Any number of readers may read simultaneously.
• Decrement ReaderCount: Lock Mutex and decrement the count; the last reader
releases the writer lock (V(Write)).
Writer Behavior:
• Waits for exclusive access (P(Write)).
• Writes to the shared data.
• Releases exclusive access (V(Write)).
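The reader-priority solution described above can be sketched in Python; the shared data and thread counts are invented, and the writer runs before the readers so the output is deterministic:

```python
import threading

mutex = threading.Semaphore(1)   # protects reader_count
write = threading.Semaphore(1)   # exclusive access for writers
reader_count = 0
shared_data = []

def reader(results):
    global reader_count
    mutex.acquire()
    reader_count += 1
    if reader_count == 1:        # first reader locks out writers
        write.acquire()
    mutex.release()

    results.append(list(shared_data))   # read, concurrently with other readers

    mutex.acquire()
    reader_count -= 1
    if reader_count == 0:        # last reader lets writers back in
        write.release()
    mutex.release()

def writer(value):
    write.acquire()              # exclusive access: no readers or writers
    shared_data.append(value)
    write.release()

writer(1)                        # write first, then let readers run
seen = []
threads = [threading.Thread(target=reader, args=(seen,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(seen)  # [[1], [1], [1]]
```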
Challenges
1. Starvation:
1. Writers may starve if readers continuously occupy the resource.
2. To address this, writer-priority solutions can be implemented.
2. Deadlock:
1. Proper semaphore usage and ordering are necessary to prevent circular waiting.
Variants
1. Reader-Priority:
1. Allows readers to access the resource unless a writer is already active.
2. Writers may starve.
2. Writer-Priority:
1. Writers are given priority over readers to avoid writer starvation.
2. Readers may face longer delays.
Applications
1. Database Management:
• Allow multiple queries (readers) while preventing simultaneous updates (writers).
2. File Systems:
• Enabling concurrent read access while ensuring safe file modifications.
Summary
1. Readers and Writers Problem:
1. Balances concurrency (for readers) and exclusivity (for writers).
2. Semaphores:
1. Synchronize and manage access.
3. Variants:
1. Reader-priority and writer-priority solutions can be implemented based on the
requirements.
4. Memory Management and Virtual Memory
Storage Allocation
Storage allocation refers to the strategies and methods used to manage memory in a
computer system. Memory is allocated to store data and instructions, both at compile
time and at run time. Efficient storage allocation is critical for program execution and
system performance.
a. Static Allocation
Definition: Memory is allocated at compile time and remains fixed throughout the program's
execution.
Characteristics:
• Requires knowledge of memory requirements at compile time.
• Allocation happens in a fixed memory region.
• Does not support dynamic data structures like linked lists or trees.
Advantages:
• Simple implementation.
• Predictable memory usage.
Disadvantages:
• Wastes memory if allocated space is unused.
• Limited flexibility for dynamic needs.
• Example: Global variables and static variables in C/C++.
b. Stack Allocation
Definition: Memory is allocated and deallocated in a last-in, first-out (LIFO) manner, using a
stack.
Characteristics:
• Supports function calls and local variables.
• Each function call creates an activation record (or stack frame).
• Memory is automatically reclaimed when the function exits.
Advantages:
• Efficient and fast allocation/deallocation.
• Automatically managed.
Disadvantages:
• Limited to a predefined stack size.
• Cannot handle dynamic memory needs beyond the stack's scope.
Example: Local variables in C/C++.
c. Heap Allocation
Definition: Memory is dynamically allocated and deallocated at runtime from a heap data
structure.
Characteristics:
• Supports dynamic data structures like linked lists, trees, etc.
• Requires manual memory management (e.g., malloc/free in C or new/delete in
C++) or garbage collection.
Advantages:
• Flexible and can handle unpredictable memory needs.
• Suitable for long-lived objects.
Disadvantages:
• Slower than stack allocation.
• Prone to memory leaks and fragmentation.
Example: Dynamically allocated memory using malloc() or new.
2. Memory Layout of a Program
A typical program's memory layout includes the following segments:
1. Text Segment: Stores the program code (read-only).
2. Data Segment:
• Initialized Data Segment: Stores global and static variables with initial values.
• Uninitialized Data Segment (BSS): Stores global and static variables without
initial values.
3. Stack Segment: Used for function call management and local variables.
4. Heap Segment: Used for dynamically allocated memory.
Storage Allocation Strategies
a. First Fit
• Allocates the first free block that is large enough for the request.
b. Best Fit
• Allocates the smallest free block that satisfies the request, minimizing leftover space.
c. Worst Fit
• Allocates the largest free block, leaving the largest possible remainder.
d. Next Fit
• Similar to First Fit but starts searching from the last allocated block.
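The differences between the strategies can be sketched by choosing a block from a free list for one request (the block sizes are invented example data):

```python
def allocate(free_blocks, request, strategy):
    """Return the index of the free block chosen for `request` units,
    or None if no block fits. free_blocks: list of block sizes."""
    fits = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
    if not fits:
        return None
    if strategy == "first":
        return min(fits, key=lambda t: t[1])[1]   # lowest index that fits
    if strategy == "best":
        return min(fits)[1]                       # smallest block that fits
    if strategy == "worst":
        return max(fits)[1]                       # largest block
    raise ValueError(strategy)

blocks = [100, 500, 200, 300, 600]
print(allocate(blocks, 212, "first"))  # 1 (500 is the first block >= 212)
print(allocate(blocks, 212, "best"))   # 3 (300 leaves the least waste)
print(allocate(blocks, 212, "worst"))  # 4 (600 leaves the most)
```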
Garbage Collection
• Methods:
• Mark and Sweep: Marks reachable objects and sweeps away unreferenced
objects.
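Mark and sweep can be sketched over a toy object graph, where each object lists the objects it references (the object names are invented). Note that it reclaims unreachable cycles, which simple reference counting cannot:

```python
def mark_and_sweep(heap, roots):
    """heap: {obj: [objects it references]}. Returns (live, garbage)."""
    marked = set()
    stack = list(roots)
    while stack:                      # mark phase: trace from the roots
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap.get(obj, []))
    garbage = set(heap) - marked      # sweep phase: reclaim unmarked objects
    return marked, garbage

# "c" and "d" reference each other but are unreachable from the root "a".
heap = {"a": ["b"], "b": [], "c": ["d"], "d": ["c"]}
live, garbage = mark_and_sweep(heap, roots=["a"])
print(sorted(live), sorted(garbage))  # ['a', 'b'] ['c', 'd']
```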
• C/C++: Manual memory management (malloc/free, new/delete); no built-in garbage collector.
• Java/Python: Automatic garbage collection reclaims unreachable objects.
• Fragmentation: Free memory becomes split into small, non-contiguous blocks, reducing
the amount of usable space.
Best Practices
1. Use stack memory for short-lived variables and heap memory for long-lived or dynamic
objects.