
NWS2, HWM2 and TCM

Operating System

Institut Universitaire du Golfe de Guinée - IUG-ISTA

3 credits

Edited by
Christian FEUWO TACHULA
MSc. Networks and Distributed Services
[email protected]

VERSION 2021
NWS2: Operating System IUG - 2021/2022

Contents

1 INTRODUCTION TO OPERATING SYSTEM
  1.1 History and Evolution of Operating Systems
    1.1.1 Origin
    1.1.2 History
  1.2 Operating system structure
    1.2.1 Definition
    1.2.2 Kernel
    1.2.3 Different types of kernel

2 PROCESS MANAGEMENT
  2.1 Process concept
    2.1.1 Definition
    2.1.2 Process Structure
  2.2 Process Life Cycle
    2.2.1 Process state
    2.2.2 Context switching
    2.2.3 Process Control Block (PCB)
  2.3 Process scheduling
    2.3.1 Scheduler Metrics
    2.3.2 Non-preemptive scheduling
    2.3.3 Preemptive scheduling
  2.4 Threads, Symmetric Multiprocessing
    2.4.1 Threads
    2.4.2 Multiprocessing System

3 DEADLOCK
  3.1 Inter-process Communication and Clock Synchronization
    3.1.1 Conflicts between Processes (race conditions)
    3.1.2 Clock Synchronization
    3.1.3 Message Consistency
    3.1.4 Classical Problems
  3.2 Deadlock Concept
    3.2.1 Conditions for deadlocks
    3.2.2 Deadlock avoidance
    3.2.3 Deadlock prevention

4 MEMORY MANAGEMENT
  4.1 Generality
    4.1.1 Virtual Memory Problems
    4.1.2 Memory Hierarchy
    4.1.3 Memory Management for Systems
  4.2 Memory Management Techniques
    4.2.1 Partition
    4.2.2 Pagination
    4.2.3 Segmentation
    4.2.4 Segmentation and Pagination

5 FILE MANAGEMENT
  5.1 File systems
    5.1.1 File system interface
    5.1.2 File system structures
  5.2 Organization: files and directories
  5.3 Secondary storage management, file systems: FAT and NTFS
    5.3.1 Partition types
    5.3.2 FAT (File Allocation Table)
    5.3.3 NTFS (New Technology File System)
  5.4 File protection and Security
    5.4.1 Types of Access
    5.4.2 Access Control

6 DEVICES MANAGEMENT
  6.1 I/O devices
  6.2 Organization of the I/O function
    6.2.1 Peripheral Devices
    6.2.2 Interfaces
    6.2.3 Input-Output
  6.3 I/O buffering
    6.3.1 Uses of I/O Buffering
    6.3.2 Types of I/O buffering techniques
  6.4 Disk scheduling, RAID
    6.4.1 Definition
    6.4.2 Disk Scheduling Algorithms

1 INTRODUCTION TO OPERATING SYSTEM


Introduction
An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system. It can also be seen as software that manages the computer and its devices, or as a software layer whose role is to manage the hardware and provide user programs with a simplified interface. The OS simplifies the life of users and programmers by managing machine resources (processor, memory, devices) in an efficient way. In this chapter we will go through the evolution of operating systems and their structure.

1.1 History and Evolution of Operating Systems

1.1.1 Origin

In the 1940s, computers were programmed by manipulating toggle switches and then inserting a stack of punch cards into a reader. The computers of the 1940s to 1960s were very expensive and were owned by companies and institutions. Each user had the right to use the computer for a limited time, during which he had all of the computer hardware at his disposal. He brought with him a stack of punch cards which contained the instructions of the programme to be executed.

Computers of that era performed one task at a time, serving a single user. The programs for these computers contained all the instructions necessary to handle the computer hardware. If the software library for a computer contained fifty programs, the instructions needed to drive the hardware were duplicated in each of these fifty programs. As the library expanded, the idea came up of isolating the routine instructions in a separate programme: a program that would reside continuously in memory, regardless of which programme was running. This program was an operating system, in its most rudimentary form.

1.1.2 History

1965: MIT (Massachusetts Institute of Technology) launches the creation of the first multi-tasking and
multi-user operating system: Multics (MULTiplexed Information and Computing Service).

Figure 1: The Multics operating system

1969: Engineers Ken Thompson and Dennis Ritchie of Bell Labs start writing a lighter version of Multics. The system, which is functional, is nicknamed Unics, then finally christened UNIX. Quickly rewritten in a programming language (C, developed by Ritchie for the occasion), UNIX is particularly easy to port to new platforms, which ensures its success.
1972: the Micral, from the French company R2E, is the first microcomputer in the world. It is equipped with an Intel 8008 processor and the SYSMIC operating system (then called an operation monitor). The SYSMIC operating system was later renamed PROLOGUE when the company was bought out by Bull in 1978.
1980: CP/M (Control Program/Monitor) is an operating system created by Gary Kildall of Digital Research Inc. It was used on the Amstrad CPC, Commodore 128, TRS-80, ZX Spectrum and other devices. Early versions of MS-DOS were largely based on CP/M.
1980: IBM contacts Bill Gates, co-founder of Microsoft, to adapt the BASIC language to its new microcomputer, the Personal Computer (PC). IBM is also looking for an operating system, and Gates advises the company to turn to CP/M. But Gary Kildall refuses to sign the contract with IBM. Bill Gates jumps at the opportunity: he buys QDOS (a "quick and dirty" operating system for Intel 8086 processors) to offer IBM the DOS/BASIC package. After a few modifications made at IBM's request, the system is named MS-DOS.
1987: Andrew Tanenbaum, professor at the Free University of Amsterdam, creates the Minix operating system, a UNIX clone whose source code was intended to illustrate his operating-system construction courses.
1991: Inspired by the work of Tanenbaum, Linus Torvalds, a student at the University of Helsinki, starts to develop his own kernel: Linux, which is at base a rewrite of Minix. The very first version (0.01) is released in 1991, Linux passes under the GNU licence in 1992, and it is not until 1994 that version 1.0 is released. This gives birth to an entirely free operating system distribution, GNU/Linux.
In 2010, the two most popular families of operating systems were Unix (including Mac OS X and Linux) and Windows. The Windows range now equips 38% of servers and 90% of personal computers, which places it in a near-monopoly position, particularly with the general public. In 2008 its market share fell below 90% for the first time in 15 years. The Unix family of operating systems has more than 25 members, and the market share of these Unix systems is almost 50% on servers. The Unix family runs 60% of websites, and Linux equips 95% of the world's 500 fastest supercomputers. In all, there are more than 100 operating systems in the world.

1.2 Operating system structure


1.2.1 Definition

Services

The services of the operating system are the basic programs that form the kernel; they are loaded into central memory from the system disk as soon as the computer is started up. These services correspond to internal commands. As services we can mention:
• process management: the OS must be able to load a program into RAM, run the program, terminate the program, either normally or abnormally, and detect errors.

• memory management: the OS provides ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed.
• file management: the OS is also responsible for maintaining directory and subdirectory struc-
tures, mapping file names to specific blocks of data storage, and providing tools for navigating
and utilizing the file system.
• device management: the OS is responsible for transferring data to and from I/O devices, including keyboards, terminals, printers, and storage devices.

• user interface management: the OS provides a user-specific interface and thus ensures data protection and security; access to each interface can be password-protected.

Shell

A shell is a special program that provides the user with an interface to access the operating system's resources. The shell takes human-readable commands and transforms them into something the kernel can understand. It is a command-language interpreter that executes commands read from input devices such as a keyboard or from files. The shell is started when the user signs in or opens a terminal.


Process

A process is a program in execution. A process is more than the program code, which is sometimes
known as the text section. It also includes the current activity, as represented by the value of the
program counter and the contents of the processor’s registers. A process generally also includes the
process stack, which contains temporary data (such as function parameters, return addresses, and local
variables), and a data section, which contains global variables. A process may also include a heap,
which is memory that is dynamically allocated during process run time.

System call

In computing, a system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on. A system call is a way for programs to interact with the operating system: a computer program makes a system call when it makes a request to the operating system's kernel. System calls provide the services of the operating system to user programs via an Application Program Interface (API). They form the interface between a process and the operating system that allows user-level processes to request operating-system services. System calls are the only entry points into the kernel; all programs needing resources must use system calls.
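As an illustration, high-level languages expose system calls through library wrappers. The sketch below uses Python's os module (chosen here purely as an example) to invoke two real system calls:

```python
import os

# os.getpid() wraps the getpid() system call: the process asks the
# kernel for its own process ID.
pid = os.getpid()

# os.write() wraps the write() system call; file descriptor 1 is
# standard output, and the kernel performs the actual I/O.
written = os.write(1, b"hello from a system call\n")
```

Neither function touches the hardware directly; each one traps into the kernel, which performs the work on the program's behalf.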
-I
1.2.2 Kernel

Definition

The kernel is the software responsible for the management of the hardware resources. All other software must request access through the kernel, which coordinates everything and avoids conflicts. It is also used to coordinate and ensure the communication of processes that must work together.
Hardware means the physical components of the computer, such as the hard disk, the mouse, the RAM, the CD drive, the keyboard, etc.; simply put, it is everything that can be touched.
Software is a program that uses the hardware resources of the computer. It consists of instructions and code that run on the PC.


The operating system is a large piece of software, made up of smaller pieces of software, that runs on your hardware; in other words, it is something you cannot physically touch.

Function

To ensure that the computer works properly, there must be harmony between the software and the hardware, because the different programs that run are usually not aware of the presence of other programs: they use the hardware resources as if they were alone. Most hardware resources cannot be shared, so if two programs try to access a resource simultaneously, or one already in use, a crash is almost guaranteed. With dozens of programs and processes running simultaneously on a machine, it is imperative to establish a coordination system to avoid a pile-up; hence the use of the kernel.
The purpose of the kernel is to coordinate the use of the computer's hardware resources by the software, in order to avoid memory conflicts and process deadlocks.


The kernel is the first program to be loaded into memory, and therefore the first to be launched at the start of the operating system. It also allows all other programs to start. It runs in kernel space, a reserved space to which no other program has access. Since the kernel is the key to the system, any malfunction has dramatic consequences; stability is therefore a must for all kernels. Depending on the needs of the developer, there are several approaches to designing a kernel.

1.2.3 Different type of kernel

Monolithic kernel

A monolithic kernel is an operating system software framework that holds all privileges to access input/output (I/O) devices, memory, hardware interrupts and the CPU stack. Monolithic kernels tend to be larger than other kernels because they deal with so many aspects of computer processing at the lowest level, and therefore have to incorporate code that interfaces with many devices, I/O and interrupt channels, and other hardware operators.
Monolithic kernels are able to dynamically load executable modules at run time. A monolithic kernel manages the system resources between the applications and the hardware, and provides CPU scheduling, memory management, file management, and other operating system functions through system calls. As user services and kernel services both reside in the same address space, the result is a fast-executing operating system.
One drawback of the monolithic kernel is that if any one service fails, the entire system crashes. Likewise, if a new service is to be added, the entire operating system must be modified.
S2
NW

Figure 2: Monolithic kernel

Micro-kernels

In a micro-kernel, the core functionality is isolated from system services and device drivers. A micro-kernel still manages all system resources, but the user services and the kernel services are implemented in different address spaces: user services are kept in the user address space, and kernel services under the kernel address space. This reduces the size of the kernel and, further, the size of the operating system.


Figure 3: Micro-kernel

Conclusion



2 PROCESS MANAGEMENT
Introduction

2.1 Process concept


2.1.1 Definition

A process is a running program, and its execution must progress sequentially, i.e., at any given time no more than one instruction is executed on behalf of the process. A process may execute all or part of a program, while a program may be represented by one or more processes.

2.1.2 Process Structure

To put it in simple terms, we write our computer programs in a text file, and when we execute such a program it becomes a process which performs all the tasks mentioned in the program. When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text and data. The following image shows a simplified layout of a process inside main memory.

Figure 4: Process structure

• Stack: The process stack contains temporary data such as method/function parameters, return addresses and local variables.

• Heap: This is dynamically allocated memory to a process during its run time.

• Text: This is the compiled program code; the current activity is represented by the value of the Program Counter and the contents of the processor's registers.

• Data: This section contains the global and static variables.


2.2 Process Life Cycle


2.2.1 Process state

When a process executes, it passes through different states. These states may differ between operating systems, and the names of the states are also not standardized. In general, a process can be in one of the following five states at a time.

• Start: this is the initial state when a process is first started/created.

• Ready: the process is waiting to be assigned to a processor. Ready processes are waiting to have
the processor allocated to them by the operating system so that they can run.

• Running: Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.

• Waiting: Process moves into the waiting state if it needs to wait for a resource, such as waiting
for user input, or waiting for a file to become available.

• Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating
system, it is moved to the terminated state where it waits to be removed from main memory.
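The five states and their legal transitions can be captured in a small table. The sketch below is an illustrative Python model written for these notes, not part of any real scheduler:

```python
# Allowed transitions between the five process states described above.
TRANSITIONS = {
    "start": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

def is_valid(path):
    """Check that a sequence of states only uses allowed transitions."""
    return all(b in TRANSITIONS[a] for a, b in zip(path, path[1:]))
```

For example, start → ready → running → waiting → ready → running → terminated is a legal life cycle, while jumping straight from start to running is not.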
Figure 5: Process Life Cycle

2.2.2 Context switching

A context switch is the operation of switching the CPU from one process or task to another. In this operation, the execution of the process in the running state is suspended by the kernel, and another process from the ready state is executed by the CPU.
Context switching is one of the essential features of a multitasking operating system. Processes are switched so quickly that the user has the illusion that all processes are being executed at the same time. A context switch involves a number of steps that need to be followed:

• Step 1 save the context of the current process

• Step 2 suspend the current process

• Step 3 load the next process to be executed
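The three steps above can be sketched as a toy simulation in which a "context" is simply a dictionary of register values; the register names pc and acc are invented for this illustration:

```python
def context_switch(cpu, current_pcb, next_pcb):
    # Step 1: save the context of the current process into its PCB
    current_pcb["registers"] = dict(cpu)
    # Step 2: suspend the current process (it goes back to the ready state)
    current_pcb["state"] = "ready"
    # Step 3: load the next process to be executed
    cpu.clear()
    cpu.update(next_pcb["registers"])
    next_pcb["state"] = "running"

cpu = {"pc": 100, "acc": 7}
p1 = {"state": "running", "registers": {}}
p2 = {"state": "ready", "registers": {"pc": 200, "acc": 0}}
context_switch(cpu, p1, p2)
```

After the call, the CPU holds p2's saved registers, p2 is running, and p1's context is safely stored in its PCB so it can resume later.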


2.2.3 Process Control Block (PCB)

A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process.
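A minimal sketch of a PCB as a data structure; the fields shown here (state, program counter, registers, open files) are typical examples, not an exhaustive or authoritative list:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # integer process ID (PID)
    state: str = "start"           # current process state
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
```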

Figure 6: Process Control Block

2.3 Process scheduling


Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

2.3.1 Scheduler Metrics



• CPU Utilization: The percentage of time that the CPU is busy – not idle.

• Throughput: The number of processes that are completed per time unit.

• Waiting time: The sum of the time spent in the ready queue during the life of the process.
Time blocked, waiting for I/O, is not part of the waiting time.

• Service time: The amount of CPU time that a process will need before it either finishes or
voluntarily exits the CPU, such as to wait for input / output.

• Turnaround time for a process: The amount of time between the time a process arrives in
the ready state to the time it exits the running state for the last time.

• Response time: The time from the first submission of the process until it first runs.
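Two of these metrics follow directly from a process's timeline: turnaround time is completion time minus arrival time, and waiting time is turnaround time minus service time. A small illustrative computation (the times are invented for the example):

```python
def metrics(arrival, service, completion):
    """Compute per-process scheduler metrics from its timeline."""
    turnaround = completion - arrival   # total time spent in the system
    waiting = turnaround - service      # time spent in the ready queue
    return turnaround, waiting

# A process arrives at t = 0, needs 3 units of CPU, and finishes at t = 9:
t, w = metrics(arrival=0, service=3, completion=9)
```

Here the turnaround time is 9 and the waiting time is 6: the process spent 6 of its 9 units in the ready queue.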


2.3.2 Non-preemptive scheduling

Non-preemptive scheduling is used when a process terminates, or when a process switches from the running to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution; instead, it waits until the process completes its CPU burst time, and only then allocates the CPU to another process. Algorithms based on non-preemptive scheduling include Shortest Job First and First Come First Served.

First Come First Served

First Come First Serve (FCFS) is an operating system scheduling algorithm that automatically executes
queued requests and processes in order of their arrival. It is the easiest and simplest CPU scheduling
algorithm.
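A minimal FCFS sketch (the process names and times are invented for illustration): each process simply runs to completion in arrival order.

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), assumed sorted by arrival."""
    clock, schedule = 0, []
    for name, arrival, burst in processes:
        start = max(clock, arrival)   # CPU may sit idle until the process arrives
        clock = start + burst         # run to completion: no preemption
        schedule.append((name, start, clock))
    return schedule

order = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)])
```

P1 runs from 0 to 5, P2 from 5 to 8 and P3 from 8 to 9: P3, despite needing only 1 unit, waits behind the longer jobs that arrived first.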

First Come First Served Example
Shortest Job First

Shortest Job First (SJF) is an algorithm in which the process having the smallest execution time is
chosen for the next execution. It significantly reduces the average waiting time for other processes
awaiting execution.
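A sketch of non-preemptive SJF: among the processes that have already arrived, the one with the smallest burst time runs to completion. The sample processes are invented for illustration.

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst)."""
    pending, clock, order = sorted(processes, key=lambda p: p[1]), 0, []
    while pending:
        # Candidates are the arrived processes; if none has arrived yet,
        # the CPU idles until the earliest arrival.
        ready = [p for p in pending if p[1] <= clock] or [pending[0]]
        name, arrival, burst = min(ready, key=lambda p: p[2])
        clock = max(clock, arrival) + burst
        order.append(name)
        pending.remove((name, arrival, burst))
    return order

order = sjf([("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)])
```

P1 runs first (it is alone at t = 0); once it finishes, P3 (burst 1) is chosen before P2 (burst 4), giving the order P1, P3, P2.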

2.3.3 Preemptive scheduling

Preemptive scheduling is used when a process switches from the running state to the ready state, or from the waiting state to the ready state. The resources (mainly CPU cycles) are allocated to a process for a limited amount of time and are then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining, and stays there until it gets its next chance to execute. Algorithms based on preemptive scheduling include Round Robin (RR) and Shortest Remaining Time First (SRTF).


Shortest Job First Example

Round Robin Scheduling

Round Robin is a CPU scheduling algorithm in which each process is assigned a fixed time slot in a cyclic way. It is simple, easy to implement, and starvation-free, as all processes get a fair share of the CPU; it is one of the most commonly used techniques in CPU scheduling. Its disadvantage is the extra overhead of context switching.
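A minimal Round Robin sketch (the quantum and burst values are invented for the example): each process runs for at most one quantum, then goes to the back of the queue if it is not finished.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst time}; each turn grants at most `quantum` units."""
    queue, order = deque(bursts.items()), []
    while queue:
        name, left = queue.popleft()
        order.append(name)                 # this process gets the CPU
        if left > quantum:                 # not finished: back of the queue
            queue.append((name, left - quantum))
    return order

order = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
```

The resulting dispatch order is P1, P2, P3, P1, P2, P1: every waiting process gets the CPU again within one full cycle, which is what makes the algorithm starvation-free.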

Shortest Remaining Time First

Shortest Remaining Time First (SRTF) is the preemptive version of the Shortest Job Next (SJN) algorithm, in which the processor is allocated to the job closest to completion.
This algorithm requires advance knowledge of the CPU time each job needs, which is not available in an interactive system, so it cannot be implemented there. But in a batch system, where it is desirable to give preference to short jobs, the SRT algorithm is used.
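A tick-by-tick SRTF sketch: at every time unit the arrived process with the least remaining time gets the CPU, so a short job can preempt a longer one. The two sample processes are invented for illustration.

```python
def srtf(processes):
    """Preemptive SJF. processes: list of (name, arrival, burst)."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: t for name, t, _ in processes}
    timeline, clock = [], 0
    while any(remaining.values()):
        # Unfinished processes that have already arrived:
        ready = [n for n in remaining if remaining[n] and arrival[n] <= clock]
        if not ready:
            clock += 1                     # CPU idles until the next arrival
            continue
        n = min(ready, key=lambda x: remaining[x])
        timeline.append(n)                 # n runs for this one time unit
        remaining[n] -= 1
        clock += 1
    return timeline

timeline = srtf([("P1", 0, 4), ("P2", 1, 2)])
```

P1 starts alone, but as soon as P2 arrives with only 2 units remaining (against P1's 3), it preempts P1 and runs to completion; the tick-by-tick timeline is P1, P2, P2, P1, P1, P1.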


Shortest Remaining Time First Example
2.4 Threads, Symmetric Multiprocessing
2.4.1 Threads

A thread is the smallest unit of processing that can be performed in an OS. In most modern operating
systems, a thread exists within a process - that is, a single process may contain multiple threads.
A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism; they represent a software approach to improving operating system performance by reducing overhead, while in other respects a thread is equivalent to a classical process.
Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for the parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.
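As an illustration, Python's threading module can start several threads inside one process; because they all live in the same process, they share its memory (here the results list):

```python
import threading

results = []

def worker(n):
    # Every thread runs inside the same process and shares `results`.
    results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()      # wait for every thread to finish
```

After all four threads have been joined, the shared list contains the four squares 0, 1, 4 and 9 (in whatever order the threads happened to run).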


2.4.2 Multiprocessing System

Most computer systems are single-processor systems, but multiprocessor systems are increasing in importance nowadays. These systems have multiple processors working in parallel that share the computer clock, memory, bus, peripheral devices, etc. There are mainly two types of multiprocessor systems: Symmetric Multiprocessor Systems and Asymmetric Multiprocessor Systems. In symmetric multiprocessing, multiple processors share a common memory and operating system. All of these processors work in tandem to execute processes. The operating system treats all the processors equally, and no processor is reserved for special purposes.


Symmetric multiprocessing is also known as tightly coupled multiprocessing, as all the CPUs are connected at the bus level and have access to a shared memory. All the parallel processors in symmetric multiprocessing have their own private cache memory to decrease system bus traffic and reduce data access time. Symmetric multiprocessing systems allow a processor to execute any process, no matter where its data is located in memory. The only stipulation is that a process should not be executing on two or more processors at the same time. In general, a symmetric multiprocessing system does not exceed 16 processors, as this amount can be comfortably handled by the operating system.

Conclusion


3 DEADLOCK
Introduction
Deadlock is a paralyzing process state resulting from improper CPU scheduling, process management, and synchronization. During a deadlock, processes are blocked as they compete for system resources or wait on each other to communicate. Although it cannot be guaranteed that deadlock will be avoided 100% of the time, it is important to know how to avoid the deadlocked state and how to recover from it once it has been reached. In simple words, a deadlock happens in an operating system when two or more processes each need, to complete their execution, some resource that is held by another process.

3.1 Inter-process Communication and Clock Synchronization

In a computer, two processes can communicate with each other using pipes or shared memory. To ensure this exchange, the operating system faces three challenges: the management of conflicts between processes, the synchronization of communication between processes, and the effectiveness of communication between processes.
3.1.1 Conflicts between Processes(race conditions)

When several processes access different resources, or the same resource at different times, there is no problem. However, when several processes try to access the same resource at the same time, race conditions arise. The term is derived from circuit design, where several sub-circuits race to provide an input.
Race conditions can be regulated by using mutual exclusion: techniques that ensure that only one process executes a given block of instructions at a time. Such a block of instructions is called a critical section.
Four conditions must be fulfilled by critical sections:
1. Two processes cannot be in a critical section at the same time.

2. No assumptions should be made about the number or speed of the processes involved.

3. No process that is not in the critical section may block other processes.

4. No process can wait indefinitely.
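A sketch of mutual exclusion using a lock: the increment of the shared counter is the critical section, and the lock guarantees that only one thread executes it at a time (the thread count and iteration count are invented for the example):

```python
import threading

counter = 0
lock = threading.Lock()        # the mutual exclusion primitive

def increment(times):
    global counter
    for _ in range(times):
        with lock:             # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock in place, the final counter is always exactly 40000; without it, interleaved read-modify-write steps could lose increments, which is precisely the race condition described above.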


3.1.2 Clock Synchronization

In the communication between two processes A and B, it can happen that at time t1 process A sends a message m1 to process B, and at time t2 process A sends a message m2 to process B (t1 < t2). The problem which arises here is that process B can receive m2 before m1, or m1 and m2 at the same time. To solve this problem the operating system uses semaphores.

Semaphores

A semaphore (from the Greek sêma: sign, and phoros: that which carries) is a data structure that makes it possible to count "wake-ups", in order to avoid losing those that are performed on an already-awake process. A semaphore supports three operations, and each of these operations must be implemented as an atomic (indivisible) operation.
The three operations of a semaphore:

• init: Initializes the semaphore to the number of available resources. This function is called only
once.

• down: If the semaphore has a value greater than 0, it is decremented and the process continues
execution. If the value of the semaphore is 0, the process goes to sleep, waiting for the release
of resources (the execution of an up).

• up: Increments the semaphore value. If processes were waiting on this semaphore, one of them is
selected and woken up to allow it to complete its down.
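These three operations map directly onto a counting semaphore as provided by most systems. A sketch using Python's `threading.Semaphore` (Python names down/up `acquire`/`release`; the resource counts here are hypothetical):

```python
import threading
import time

sem = threading.Semaphore(2)   # init: two instances of the resource available
in_use = 0                     # how many threads hold the resource right now
peak = 0                       # highest concurrency observed
meter = threading.Lock()       # protects the two counters above

def use_resource() -> None:
    global in_use, peak
    sem.acquire()              # down: decrements, or sleeps if the count is 0
    with meter:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)           # pretend to use the resource
    with meter:
        in_use -= 1
    sem.release()              # up: increments and wakes one waiting thread

threads = [threading.Thread(target=use_resource) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # at most 2: the semaphore never admits a third holder
```

Because every `acquire`/`release` is atomic, no wake-up is ever lost even when several threads block at the same time.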

3.1.3 Message Consistency


When process A sends a message m1 to process B, it may happen that:

• B receives the message but A is not aware of it;

• the message m1 is lost on the way due to a system failure (power failure, network failure, etc.).

To manage this problem, the operating system uses the functions sent(p1, m1) and received(p1, m1).

3.1.4 Classical Problem

The philosophers’ dinner

The philosophers' dinner (the dining philosophers problem) is a classic early resource-sharing problem. It was first presented by Edsger
Dijkstra in 1965 as the sharing of five magnetic tapes by five processes. The problem was soon reformulated by Tony Hoare in its modern form: the philosophers' dinner.
Five philosophers sit around a table with five forks, one between each pair of neighbours. Each philosopher does two
things in his life: he eats and he thinks. When he thinks, he does not eat, and this lasts for an indefinite
time. When he is hungry, he picks up one of the two forks beside him, then picks up the one on his other side, and eats
for an indefinite time. If a fork he wants is already taken, he waits patiently (and indefinitely)
for his neighbour to put it down again so he can take it.
This problem quickly leads to starvation. If, for example, all the philosophers take the right
fork at the same time, they all turn to the left and wait for their neighbour to put the other fork down. This creates a
deadlock, which we will study later in the course.
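A sketch of the table in Python, using one lock per fork. To keep the demonstration from deadlocking, each philosopher here picks up the lower-numbered fork first; this resource-ordering fix is not part of the original statement of the problem:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one fork between each pair
meals = [0] * N                                 # how often each philosopher ate

def philosopher(i: int, rounds: int) -> None:
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # global fork ordering
    for _ in range(rounds):
        with forks[first]:          # always grab the lower-numbered fork first
            with forks[second]:
                meals[i] += 1       # eat
        # think (omitted)

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(meals))  # 500: everyone finished eating; the ordering prevents deadlock
```

If every philosopher instead grabbed his left fork first, all five could hold one fork and wait forever for the second: exactly the circular wait described above.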


Sleeping hairdresser

The sleeping hairdresser (sleeping barber) problem is set in a hairdressing salon. A hairdresser runs a salon with
n waiting chairs and one barber's chair. When the hairdresser has no customers, he sits in his chair and
falls asleep. When a customer arrives, he or she has to wake the hairdresser up to have his or her hair
done. If other customers arrive in the meantime, they sit in a waiting chair if any are left; if not,
they go to a competitor.
This problem models a waiting queue of finite size.

3.2 Deadlock Concept


3.2.1 Conditions for deadlocks

There are four conditions that must hold simultaneously for a deadlock to occur.

• Mutual Exclusion: There must be a resource that can only be held by one process at a time.

• Hold and Wait: A process holds at least one resource while requesting additional resources
that are held by other processes.

• No Preemption: A resource cannot be forcibly taken from a process. A process can only
release a resource voluntarily.

• Circular Wait: A process is waiting for a resource held by a second process, which is waiting
for a resource held by a third process, and so on, until the last process is waiting for a resource
held by the first process. This forms a circular chain.

3.2.2 Deadlock avoidance

In deadlock avoidance, a request for a resource is granted only if the resulting state of the system
cannot lead to a deadlock. The state of the system is continuously checked for safe and
unsafe states. In order to avoid deadlocks, each process must tell the OS the maximum number of resources
it may request to complete its execution. The simplest and most useful approach requires each
process to declare the maximum number of resources of each type it may ever need. The
deadlock-avoidance algorithm then examines resource allocations so that a circular
wait condition can never arise.
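The best-known avoidance algorithm of this kind is Dijkstra's Banker's algorithm (not named in the text above). A sketch of its safety check, run on made-up allocation data:

```python
def is_safe(available, max_need, allocation):
    """Return a safe execution order of process indices, or None if unsafe."""
    n = len(allocation)                      # number of processes
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)                   # resources currently free
    finished = [False] * n
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # process i can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None                      # remaining processes would deadlock
    return order

# Hypothetical snapshot with 5 processes and 3 resource types:
available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # a safe order such as [1, 3, 4, 0, 2]
```

A request is granted only if, after tentatively granting it, `is_safe` still finds an order; otherwise the requesting process must wait.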

3.2.3 Deadlock prevention

Deadlock is characterised by mutual exclusion, hold and wait, no preemption and circular wait. We
can prevent deadlock by eliminating any of these four conditions.

• Eliminate Mutual Exclusion: It is not possible to violate mutual exclusion, because
some resources, such as a tape drive or printer, are inherently non-shareable.


• Eliminate Hold and Wait: Allocate all required resources to a process before it starts
executing; this eliminates the hold-and-wait condition, but it leads to low device utilization.
For example, if a process needs a printer only at a later time and the printer is allocated before
execution starts, the printer remains blocked until the process has completed.
Alternatively, a process may only make a new request for resources after releasing its current set of resources. This
solution may lead to starvation.

• Eliminate No Preemption: Preempt resources from a process when they are required by
other, higher-priority processes.

• Eliminate Circular Wait: Each resource is assigned a numerical order, and every process
must request resources in increasing order of that numbering. For example, if process P1
has been allocated resource R5, a later request by P1 for R4 or R3 (numbered lower than R5) will not
be granted; only requests for resources numbered higher than R5 will be granted.

Conclusion



4 MEMORY MANAGEMENT
Introduction
Proper management of memory is vital for a computer system to operate properly. Modern operating
systems have complex subsystems to manage memory. Failure to do so can lead to bugs, slow
performance and, in the worst case, takeover by viruses and malicious software. Nearly everything a
computer programmer does requires him or her to consider how to manage memory. Even storing a
number in memory requires the programmer to specify how the memory should store it.

4.1 Generality
In computing, memory refers to the computer hardware devices used to store information for immediate
use in a computer; it is synonymous with the term "primary storage". Computer memory operates at
high speed, for example random-access memory (RAM), in contrast to storage that provides slow-to-access
program and data storage but offers higher capacities. If needed, the contents of computer
memory can be transferred to secondary storage, through a memory management technique called
"virtual memory".
4.1.1 Virtual Memory Problems

Relocation

Relocation consists of finding a way to map virtual addresses to physical addresses. When a process is loaded
into main memory, the addresses of the instructions it contains are relocatable addresses. The difficulty
with this approach is that a process may be loaded, removed, and later loaded again at a different
location, so the loader must recompute the mapping each time.

Swap

During memory management it may happen that the memory required by the running
processes exceeds the available main memory. To solve this problem, another memory area is reserved
on the hard disk to extend the main memory. This memory area is called swap memory.

Memory Protection

Here the problem is that a process could read or write the memory of another process belonging to
another application, or that of the operating system; memory protection prevents this.

Shared Memory

Shared memory is widely used to facilitate sharing between processes, but this strategy has to be
implemented while respecting the principle of memory protection.


4.1.2 Memory Hierarchy

In the design of a computer system, a processor is combined with a large number of memory devices.
The memory of the system is organized as a memory hierarchy: several
levels of memory with different performance characteristics that together serve a single purpose,
reducing the average access time. The memory hierarchy is designed around the behaviour of
programs (locality of reference).

Memory Hierarchy

A computer’s memory management unit (MMU) is the physical hardware that handles its virtual
memory and caching operations. The MMU is usually located within the computer’s central processing
unit (CPU), but sometimes operates as a separate integrated circuit (IC). All data requests are sent
to the MMU, which in turn determines whether the data needs to be retrieved from RAM or ROM
storage.

4.1.3 Memory Management for Systems



Single-task Systems

In the case of single-task systems, memory management is very simple. All that is required is to reserve
part of the memory for the operating system. The application is then placed in the remaining space,
which is freed up as soon as the application finishes.
This becomes a bit more complicated if the application requires more space than the RAM can provide.
The application is then split into overlay segments. This technique is no longer used;
it was used in the DOS era for large applications. The programmer had to plan how his application
would be split up, deciding how these overlays would be loaded into memory one after the other so
that during its execution the application could perform all the necessary functions.
This is how virtual memory was conceived in the age of single-task operating systems. Note that the
memory area occupied by the operating system was not protected by this type of memory management.


Memory Hierarchy

Multitasking Systems

Several processes must share memory without using the space reserved for the operating system or
other processes. When a process ends, the OS must free up the memory space allocated to it so that
new processes can be placed in it.
4.2 Memory Management Technique
4.2.1 Partition

Fixed partitions

The simplest way is to divide the memory into fixed partitions as soon as the system boots. The
partitions are of different sizes, to prevent large partitions from being occupied only by small processes.
The memory manager, depending on the size of each process, decides which partition to allocate to it
so as not to waste too much memory.

A queue is associated with each partition. When a new task comes in, the manager determines which
is the smallest partition that can hold it and then places that task in the corresponding queue.
Avoiding allocating too large a partition to a small process sometimes leads to aberrations: large
partitions may remain unused while endless queues of small processes form elsewhere. Memory is
therefore misused.

Variable Partition

Another solution is to create a single queue. When a partition becomes free, the queue is searched to
find the task that would occupy it optimally.
The risk is that small tasks are penalized. A countermeasure is to keep at least one small partition
that is accessible only to small tasks. Another solution is to decide that a process can be skipped
at most a certain number of times: after n denials, it is placed in a partition even if
the partition is much larger than necessary.

Memory Hierarchy

UG
Another way to avoid unoccupied memory locations at the end of partitions is to allocate to each
process a space that corresponds exactly to the space it needs.
As processes are created and completed, partitions are allocated and freed up, leaving fragmented and
unusable memory areas.
Memory becomes fragmented and is increasingly misused. It could be compacted by regularly moving
processes, but this additional task slows down the system.
Partitioning the memory, whether with fixed- or variable-size partitions, therefore does not allow the memory to
be used in the best possible way.
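The allocation decision for variable partitions can be sketched as a first-fit search over a list of free holes (the data structure and numbers are hypothetical):

```python
def first_fit(free_list, size):
    """free_list: list of (start, length) holes, in address order.
    Returns the start address granted (or None) and the updated free list."""
    for idx, (start, length) in enumerate(free_list):
        if length >= size:
            remaining = length - size
            updated = list(free_list)
            if remaining:
                updated[idx] = (start + size, remaining)   # shrink the hole
            else:
                del updated[idx]                           # hole fully consumed
            return start, updated
        # hole too small: skip it; small skipped holes are the external
        # fragmentation described in the text
    return None, list(free_list)

holes = [(0, 100), (300, 50), (600, 200)]
addr, holes = first_fit(holes, 120)   # skips the first two holes
print(addr, holes)  # 600, with the third hole shrunk to (720, 80)
```

After many allocations and releases the free list degenerates into many small holes, which is why compaction (moving processes together) becomes necessary.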

4.2.2 Pagination

Processes require contiguous address spaces, and this is difficult to achieve by dividing the memory
into partitions whose sizes correspond to those of the processes. Paging is a much more efficient memory
allocation technique: it provides processes with sequential address spaces built from discontinuous memory
areas.
Paging divides memory and processes into blocks of the same size called pages. Memory pages are often
referred to as "frames", while process pages are simply referred to as "pages".
Not all (process) pages are active simultaneously, so they are not necessarily all present in the main
memory at the same time. Inactive pages wait on the disk. The address space is therefore virtual
and may be larger than the real memory.
Current processors have a device, the Memory Management Unit (MMU), which allows processes to be
placed in memory without necessarily placing process pages in contiguous page frames. A distinction
is made between logical addresses, which refer to process pages, and physical addresses, which refer to page
frames.
Addressing is done by using page numbers and offsets. The offset is the position relative to the beginning
of the page. The logical address consists of the process page number and an offset. The corresponding
physical address is formed from the number of the page frame where the process page is loaded and the
same offset as in the logical address. The page frame number is recorded in a page table associated with
the process and is retrieved using the process page number as an index.


Now let’s look at how the Memory Management Unit (MMU) maps physical and logical addresses. To
do this, it contains a page table with the page frame numbers.

pagination
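The translation just described (the page number indexes the page table, and the offset is carried over unchanged) can be sketched as follows; the page size and page table contents are made up for the example:

```python
PAGE_SIZE = 4096  # bytes per page; an assumption for this example

def translate(logical_address, page_table):
    """Map a logical address to a physical one via a per-process page table."""
    page_number = logical_address // PAGE_SIZE   # which process page
    offset = logical_address % PAGE_SIZE         # position within the page
    frame_number = page_table[page_number]       # index the page table
    return frame_number * PAGE_SIZE + offset     # same offset, new frame

# Hypothetical page table: process page 0 -> frame 5, page 1 -> frame 2, ...
page_table = {0: 5, 1: 2, 2: 7}
print(translate(4100, page_table))  # page 1, offset 4 -> frame 2 -> 8196
```

In hardware the MMU performs exactly this lookup on every memory reference, which is why the page table (or at least its most recent entries, cached in the TLB) must be fast to consult.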

4.2.3 Segmentation

Like pagination, segmentation is a scheme of memory management. It is a technique which consists
in dividing each process into variable-sized units called segments and loading them into virtual memory. Each
segment has a number and a size, and the segment number is used as an index. To translate a logical
address into a physical address using the segmentation technique, a segment table is used.

Segmentation

Presentation of the segment table:


segment table
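Translation through a segment table can be sketched in the same spirit. Here each entry is assumed to hold a base address and a limit (the segment size), with the offset checked against the limit; the table contents are hypothetical:

```python
def translate_segment(segment_number, offset, segment_table):
    """segment_table maps segment number -> (base address, limit)."""
    base, limit = segment_table[segment_number]
    if offset >= limit:
        # the reference falls outside the segment: an illegal access
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

# Hypothetical segment table: number -> (base address, size)
seg_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}
print(translate_segment(2, 53, seg_table))  # 4300 + 53 = 4353
```

The limit check is what gives segmentation its protection role: a process cannot address memory outside its own segments.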

4.2.4 Segmentation and Pagination

Segmentation and pagination address different problems. They are two techniques that can be
combined:
Segmentation divides the processes into linear zones that can be managed differently depending on
whether these segments are process-specific, shared, read, written or executed, and in such a way as to
protect the processes from one another.
Pagination divides memory into non-contiguous pages of the same size. It provides the processes with
continuous address spaces (necessary for the segments). Memory pages can be allocated only when a
process needs them. This results in a virtual memory that is larger than the real memory.

Conclusion


5 FILE MANAGEMENT
Introduction
A file management system has limited capabilities and is designed to manage individual or group files,
such as special office documents and records. It may display report details, like owner, creation date,
state of completion and similar features useful in an office environment.

5.1 File systems


A file system is a way of storing information and organising it in files on what is known in software engineering as
secondary storage (for computer hardware, this is mass storage such as a hard disk, SSD, CD-ROM,
USB stick, floppy disk, etc.). Such a file management system makes it possible to process and store
large amounts of data as well as to share them between several computer programs. It provides the
user with an abstract view of his or her data and allows the data to be located by a path.

5.1.1 File systems interface

A- File Concept
A file is a named collection of related information that is recorded on secondary storage such as magnetic
disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records
whose meaning is defined by the file's creator and user.

i- file attributes

Different operating systems keep track of different file attributes, including:

• Name: This denotes the symbolic name of the file. The file name is the only attribute that is
easily readable by humans.

• Identifier: This denotes the file name for the system. It is usually a number and uniquely
identifies a file in the file system.

• Type: If there are different types of files in the system, then the type attribute denotes the
type of the file.

• Location: This points to the device that a particular file is stored on and also the location of
the file on the device.

• Size: This attribute defines the size of the file in bytes, words or blocks. It may also specify the
maximum allowed file size.

• Protection: The protection attribute contains protection information for the file such as who
can read or write on the file.


• Time, Date: Timestamps such as the creation time and the times of last modification and last use.

• User ID: Identifies the owner of the file.

ii- file operations

The file ADT supports many common operations:

• Creating a file: To create a file, there should be space in the file system. Then the entry for
the new file must be made in the directory. This entry should contain information about the file
such as its name, its location etc.

• Writing a file: To write into a file, the system call should specify the name of the file and the
contents that need to be written. There should be a write pointer at the location where the write
should take place. After the write process is done, the write pointer should be updated.

• Reading a file: To read from a file, the system call should specify the name and location of the
file. There should be a read pointer at the location where the read should take place. After the
read process is done, the read pointer should be updated.

• Repositioning within a file: This is also known as a file seek. To reposition within a file, the current
file-position value is set to the appropriate entry. This does not require any actual I/O operations.

• Deleting a file: The file should be found in the directory to delete it. After that all the file space
is deleted so it can be reused by other files.

• Truncating a file: This deletes the data in the file without destroying its attributes. The
file length is reset to zero and the file contents are erased; the rest of the attributes remain
the same.

iii- file types

• sound: mp3, wav, etc.

• image: gif, png, jpeg, etc.

• video: mp4, 3gp, avi, etc.

• text: txt, doc, etc.

• executable: exe, bin, etc.

B- Access Methods

The information in a file can be accessed in various ways. The most common among them are
sequential access and direct access, detailed below.


Access Methods

i- sequential access

The information in a file is processed in order using sequential access. The file's records are accessed one
after another. Most file systems, such as editors, compilers etc., use sequential access. It is based
on the tape model of a file and so can be used with sequential-access devices as well as random-access
devices.

sequential access

As seen in the image, the read and write operations on the file can only be done in a sequential
manner; however, the file can be reset to the beginning or rewound as required.
S2

ii- direct access

In direct access (or relative access), a file can be accessed at random for read and write operations. The
direct access model is based on the disk model of a file, since it allows random access. In this method,
the file is divided into numbered blocks, and any of these blocks can be read or written in any order. For
example, we may read block 8, then write into block 10 and then read block 15. Direct access
is quite useful, and most databases are of this type.
A diagram to illustrate direct access is as follows:

direct access
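Direct access amounts to seek arithmetic over fixed-size numbered blocks. A sketch on an in-memory "file" (the block size and contents are made up):

```python
import io

BLOCK_SIZE = 512  # bytes per numbered block; an assumption for this example

def read_block(f, n):
    f.seek(n * BLOCK_SIZE)            # jump straight to block n
    return f.read(BLOCK_SIZE)

def write_block(f, n, data):
    assert len(data) == BLOCK_SIZE    # whole blocks only, as in the disk model
    f.seek(n * BLOCK_SIZE)
    f.write(data)

f = io.BytesIO(bytes(BLOCK_SIZE * 16))        # a 16-block "file" of zero bytes
write_block(f, 10, b"B" * BLOCK_SIZE)         # write block 10...
print(read_block(f, 8)[:1], read_block(f, 10)[:1])  # ...then read 8 and 10 in any order
```

Sequential access, by contrast, would only ever move the position forward one record at a time (plus a rewind to the beginning).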


5.1.2 File system structures

The file system structure is the most basic level of organization in an operating system. Almost all of
the ways an operating system interacts with its users, applications, and security model are dependent
upon the way it organizes files on storage devices. Providing a common file system structure ensures
users and programs are able to access and write files.
Three types of file structure in an OS:

• A text file: It is a series of characters that is organized in lines.

• An object file: It is a series of bytes that is organized into blocks.

• A source file: It is a series of functions and processes.

5.2 Organization: files and directories
As you use your computer to create and download files, it’s easy to become buried in a sea of tiny icons
with vague names. Just as it is important to organize papers so that you can find them later, it is
important to organize your computer files by creating folders and putting files inside of them.
In Windows, the primary way of interacting with files and folders is through the File Explorer
application. (In older versions of Windows, this may be called Windows Explorer. On Macs, the equivalent
is Finder.) There are a couple of ways to open File Explorer. The shortcut Win+E will open
File Explorer. It can also be opened by clicking the Start button and typing "File Explorer", or by
right-clicking any folder and selecting Open. By default, File Explorer is pinned to the task bar,
and it can be opened from there.

Note that in Windows, a file name cannot contain any of the following characters: \ / : * ? " < > |. This is
because those characters have special meaning in Windows (for example, the backslash is used in file paths). If
Windows encountered a file or folder with those symbols, it could potentially misread the file or folder
name and cause problems. As a precaution, Windows will not let you save files or folders with those
characters, so you cannot include them in a name by mistake.

5.3 Secondary storage management, file systems: FAT and NTFS


The study of file systems for PC computers inevitably involves knowledge of the components of hard
disks. Many disks are formatted to contain only one large partition, which does not provide optimal
data security, nor does it allow you to organise your files in a way that makes them easy to find, nor does
it allow you to use your disk space most efficiently. If you want to install multiple operating systems
on one disk, or use disk space as efficiently as possible, or secure your files as much as possible, or
physically separate data to make it easier to find files and back up data, you need to understand how
to use multiple partitions of different types.

5.3.1 Partition types

There are two main types of partitions: primary and extended partitions. Extended partitions can be
further divided into logical partitions. These are all managed as cylinders on the disk. There can be
up to four primary partitions on a hard disk, one of which can be an extended partition. So you can
have four primary partitions, or three primary partitions and one extended partition.

• A primary partition can contain any operating system, together with a file system holding
data files, such as applications and user files.

• Extended partitions were invented to provide a way around the arbitrary limit of four partitions.
An extended partition is essentially a "container" in which you can continue to physically divide
your disk space by creating an unlimited number of logical partitions.

• Logical partitions – Logical partitions can only exist within an extended partition and are
intended to contain only data files and operating systems that can be booted from a logical
partition (such as Linux).

Partitions Example


5.3.2 FAT (File Allocation Table)



The FAT is the index table of the linked lists that record the location of every file. The FAT is part of a set
of components found at the very beginning of each floppy disk or hard disk. It is located immediately
after the reserved area of sector 0.
Sectors are grouped into logical units called clusters, whose size depends on the medium (floppy or hard
disk). On a single-sided floppy disk there is 1 sector per cluster; on a 2048
MB hard disk there are 64 sectors per cluster. A copy of the FAT (kept for security reasons) is located
immediately after the first FAT.
The great advantage of clustering is that it allows a whole series of data to be grouped together in the
same area, thus reducing the disk accesses caused by file fragmentation. This method saves a lot of time but
has a major drawback: internal fragmentation, since even a small file occupies a whole cluster.
The size of the individual FAT entries is 12 bits (FAT12) or 16 bits (FAT16), depending on the size of the
volumes. With 12 bits, 4096 different clusters can be represented; depending on the organisation of the
sectors per cluster, this corresponds to between 8 MB and 32 MB. The 16-bit FAT can address 65536 clusters.
It is therefore impossible to address more than 2 GB with the 16-bit structure (65536 * 64
* 512 = 2,147,483,648 bytes). For this reason, Windows 98 was delivered with the new FAT32
structure; this is capable of addressing 2 terabytes (2048 GB), but still with the same basic limitations as
FAT.
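Reading a file under FAT means following its linked list of clusters through the table. A sketch with a toy FAT (the cluster numbers and end-of-chain marker are simplified for the example):

```python
EOC = 0xFFF  # end-of-chain marker (FAT12-style; value chosen for the example)

def cluster_chain(fat, first_cluster):
    """Follow the linked list stored in the FAT, starting from a file's
    first cluster (which is recorded in its directory entry)."""
    chain = []
    cluster = first_cluster
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]   # each FAT entry points to the next cluster
    return chain

# Toy FAT: a file occupying clusters 2 -> 5 -> 6, then end of chain.
fat = {2: 5, 5: 6, 6: EOC}
print(cluster_chain(fat, 2))  # [2, 5, 6]
```

Note how the file's clusters need not be contiguous (2, then 5, then 6): this is exactly the fragmentation that clustering tries to keep cheap to traverse.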

5.3.3 NTFS (New Technology File System)

The NTFS (New Technology File System) file system uses a structure called the Master File Table
(MFT) to hold detailed information about files. This system allows the use of long
names and, unlike the FAT32 system, it is case sensitive,
i.e. it is able to differentiate between upper-case and lower-case names.
In terms of performance, accessing files on an NTFS partition is faster than on a FAT partition because
NTFS uses an efficient B-tree structure to locate files. The theoretical limit for the size of a partition is 16
exabytes (17 billion TB), but the practical limit for a disk is 2 TB.
NTFS is also important for security, as it allows attributes to be defined for each file. Version 5 of
this file system (standard under Windows 2000, a.k.a. NT 5) brings even more new features, including
increased performance and per-volume disk quotas defined for each user.

5.4 File protection Security


Computer systems store a lot of user information, and an objective of the operating system is to
keep the user's data safe from improper access. Protection can be provided in a
number of ways. For a single laptop system, we might provide protection by locking the computer in a
desk drawer or file cabinet. For multi-user systems, different mechanisms are used for protection.

5.4.1 Types of Access



Files that are directly accessible to other users need protection; files that are not accessible
to other users do not require any kind of protection. A protection mechanism provides
controlled access by limiting the types of access that can be made to a file. Whether access is
granted to a user depends on several factors, one of which is the type of access required.
Several different types of operations can be controlled:

• Read – Reading from the file.

• Write – Writing or rewriting the file.

• Execute – Loading the file into memory and executing it.

• Append – Writing new information to an already existing file; writing is restricted to the
end of the existing file.

• Delete – Deleting the file and freeing its space for reuse by other data.

• List – Listing the name and attributes of the file.

5 FILE MANAGEMENT 29
NWS2: Operating System IUG - 2021/2022

5.4.2 Access Control

Different users may need to access a file in different ways. The most general way to protect files is
to associate identity-dependent access with all files and directories, in a list called an access-control list
(ACL), which specifies the names of the users and the types of access associated with each of them. The
main problem with access lists is their length. If we want to allow everyone to read a file, we must
list all the users with read access. This technique has two undesirable consequences: constructing
such a list can be a tedious and unrewarding task, especially if we do not know in advance the list of
users in the system, and the directory entry, previously of fixed size, must now be of variable size.
To condense the length of the access-control list, many systems recognize three classifications of users in
connection with each file:

• Owner – U: The owner is the user who created the file.

• Group – G: A group is a set of users who have similar needs and share the same
file.

• Universe – O: All other users in the system fall into the category called universe.
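The owner/group/universe classification can be sketched as a permission lookup in the spirit of the Unix rwx bits (the data structures here are hypothetical, not a real system call):

```python
def allowed(user, groups, file_meta, op):
    """Decide whether `user` (member of `groups`) may perform `op` on a file.
    file_meta holds the owner, the group, and one permission string per class."""
    if user == file_meta['owner']:
        cls = 'owner'                      # U: the creator of the file
    elif file_meta['group'] in groups:
        cls = 'group'                      # G: shares the file's group
    else:
        cls = 'universe'                   # O: everyone else
    return op in file_meta['perms'][cls]

meta = {'owner': 'alice', 'group': 'staff',
        'perms': {'owner': 'rw', 'group': 'r', 'universe': ''}}
print(allowed('alice', [], meta, 'w'))        # True: the owner may write
print(allowed('bob', ['staff'], meta, 'w'))   # False: the group may only read
```

Three short permission strings replace an arbitrarily long ACL, which is exactly the condensation the text describes.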

Conclusion

5 FILE MANAGEMENT 30
NWS2: Operating System IUG - 2021/2022

6 DEVICE MANAGEMENT
Introduction
The path between the operating system and virtually all hardware not on the computer’s motherboard
goes through a special program called a driver. Much of a driver’s function is to be the translator
between the electrical signals of the hardware subsystems and the high-level programming languages
of the operating system and application programs. Drivers take data that the operating system has
defined as a file and translate them into streams of bits placed in specific locations on storage devices,
or a series of laser pulses in a printer.

6.1 I/O devices

Managing input and output is largely a matter of managing queues and buffers, special storage facilities
that take a stream of bits from a device, perhaps a keyboard or a serial port, hold those bits, and release
them to the CPU at a rate with which the CPU can cope. This function is especially important when
a number of processes are running and taking up processor time. The operating system will instruct a
buffer to continue taking input from the device, but to stop sending data to the CPU while the process
using the input is suspended. Then, when the process requiring input is made active once again, the
operating system will command the buffer to send data. This process allows a keyboard or a modem
to deal with external users or computers at a high speed even though there are times when the CPU
can't use input from those sources.
Managing all the resources of the computer system is a large part of the operating system's function
and, in the case of real-time operating systems, may be virtually all the functionality required. For other
operating systems, though, providing a relatively simple, consistent way for applications and humans
to use the power of the hardware is a crucial part of their reason for existing.

6.2 Organization of I/O function



The I/O subsystem of a computer provides an efficient mode of communication between the central
system and the outside environment. It handles all the input-output operations of the computer system.

6.2.1 Peripheral Devices

Input or output devices that are connected to the computer are called peripheral devices. These devices
are designed to read information into or out of the memory unit upon command from the CPU and
are considered to be part of the computer system. These devices are also called peripherals.
Keyboards, display units and printers are common peripheral devices. There are three types of
peripherals:

1. Input peripherals: allow user input from the outside world to the computer. Example:
keyboard, mouse.

2. Output peripherals: allow information output from the computer to the outside world. Example:
printer, monitor.

3. Input-output peripherals: allow both input (from the outside world to the computer) and
output (from the computer to the outside world). Example: touch screen.


6.2.2 Interfaces

An interface is a shared boundary between two separate components of the computer system, which can be
used to attach two or more components to the system for communication purposes. There are two types
of interface:

1. CPU interface

2. I/O interface

6.2.3 Input-Output Interface

Peripherals connected to a computer need special communication links for interfacing with the CPU. In
a computer system, there are special hardware components between the CPU and peripherals to control or
manage the input-output transfers. These components are called input-output interface units because
they provide communication links between the processor bus and peripherals. They provide a method for
transferring information between the internal system and input-output devices.


Input-Output Interface

The input-output interface is required because there exist many differences between the central
computer and each peripheral while transferring information. Some major differences are:

1. Peripherals are electromechanical and electromagnetic devices and their manner of operation
is different from that of the CPU and memory, which are electronic devices. Therefore, a
conversion of signal values may be required.

2. The data transfer rate of peripherals is usually slower than that of the CPU, and consequently
a synchronisation mechanism is needed.

3. Data codes and formats in peripherals differ from the word format in the CPU and Memory.

4. The operating modes of peripherals differ from each other, and each must be controlled so as
not to disturb the operation of the other peripherals connected to the CPU.

These differences are resolved through the input-output interface. An input-output interface (interface
unit) contains various components, each of which performs one or more vital functions for the smooth
transfer of information between the CPU and peripherals.


Channels

A channel is an independent hardware component that coordinates all I/O to a set of controllers. Com-
puter systems that use I/O channels have special hardware components that handle all I/O operations.
Channels use separate, independent and low-cost processors, called channel processors, for their
functioning. Channel processors are simple, but contain sufficient memory to handle all I/O tasks.
When an I/O transfer is complete or an error is detected, the channel controller communicates with the
CPU using an interrupt, and informs the CPU about the error or the task completion. Each channel sup-
ports one or more controllers or devices. Channel programs contain lists of commands for the channel
itself and for the various connected controllers or devices. Once the operating system has prepared a list
of I/O commands, it executes a single I/O machine instruction to initiate the channel program; the
channel then assumes control of the I/O operations until they are completed.
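The flow just described can be sketched as a minimal simulation. The command names and the function below are purely illustrative assumptions, not tied to any real channel architecture:

```python
# Hypothetical sketch: the OS builds a channel program (a list of
# commands), hands it off in one step, and the channel processor runs
# it on its own, reporting back only at completion or on error.
# In real hardware that report arrives as an interrupt; here it is a
# plain return value.

def run_channel_program(program):
    """Execute each (device, command) pair; report completion or error."""
    for device, command in program:
        if command not in ("READ", "WRITE", "SEEK"):
            return {"status": "ERROR", "device": device}
    return {"status": "COMPLETE", "operations": len(program)}

# The OS prepares the command list, then initiates the whole program
# with a single call; the CPU is free until the "interrupt" comes back.
program = [("disk0", "SEEK"), ("disk0", "READ"), ("tape1", "WRITE")]
result = run_channel_program(program)
print(result)  # {'status': 'COMPLETE', 'operations': 3}
```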

6.3 I/O buffering
A buffer is a memory area that stores data being transferred between two devices or between a device
and an application.

6.3.1 Uses of I/O Buffering


Buffering is done to deal effectively with a speed mismatch between the producer and the consumer of a
data stream.
A buffer is created in main memory to accumulate the bytes received from the modem. After the data
are received in the buffer, they are transferred from the buffer to disk in a single operation. This
transfer is not instantaneous, therefore the modem needs another buffer in which to store additional
incoming data.
When the first buffer is filled, it is requested to transfer its data to disk. The modem then
starts filling the second buffer with additional incoming data while the data in the first buffer are
being transferred to disk.
When both buffers have completed their tasks, the modem switches back to the first buffer while
the data from the second buffer are transferred to disk.
The use of two buffers decouples the producer and the consumer of the data, thus relaxing the
timing requirements between them.
Buffering also accommodates devices that have different data-transfer sizes.

6.3.2 Types of I/O buffering techniques

Single buffer:

When a user process issues an I/O request, the O.S assigns a buffer in the system portion of main
memory to the operation.
For block-oriented devices, the technique can be used as follows: input transfers are made to the
system buffer. When the transfer is complete, the process moves the block into user space and requests
another block. This is called reading ahead; it is done in the expectation that the block will be needed


sometime in the future.
This approach will generally provide a speed-up compared to the lack of system buffering. The O.S
must keep track of the assignment of system buffers to user processes.
Similar considerations apply to block-oriented output. When data are being transmitted to a device,
they are first copied from user space into the system buffer, from which they will ultimately be written.
The requesting process is then free to continue.
Suppose T is the time required to input one block and C is the computation time between input re-
quests. Without buffering, the execution time per block is T + C. With buffering, it is max[C, T] + M,
where M is the time required to move the data from the system buffer to user memory.
In stream-oriented I/O, the technique can be used in two ways:
Line-at-a-time fashion. Line-at-a-time operation is used for scroll-mode terminals. The user inputs
one line at a time, with a carriage return signaling the end of a line.

Byte-at-a-time fashion. Byte-at-a-time operation is used on forms-mode terminals, where each
keystroke is significant.

Double buffer:

An improvement over single buffering is to assign two system buffers to the operation. A process
transfers data to one buffer while the operating system empties the other, as shown in fig. For
block-oriented transfer, the execution time per block is max[C, T].
If C <= T, i.e. the computation time is less than the time required to input one block, it is
possible to keep the block-oriented device going at full speed. If C > T, i.e. the computation time
is greater than the time required to input one block, double buffering ensures that the process will
not have to wait on I/O.
For stream-oriented input there are again two cases.
For line-at-a-time I/O, the user process need not be suspended for input or output, unless the process
runs ahead of the double buffer.
For byte-at-a-time operation, the double buffer offers no advantage over a single buffer of twice the length.
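The per-block execution times quoted for the three schemes can be checked with a short calculation. The T, C and M values below are made up purely for illustration:

```python
# Per-block execution time under each scheme, following the formulas in
# the text: T = block input time, C = computation time, M = time to move
# the block from the system buffer to user memory.

def no_buffering(T, C):
    return T + C              # input and computation strictly alternate

def single_buffering(T, C, M):
    return max(C, T) + M      # input overlaps computation, plus the move

def double_buffering(T, C):
    return max(C, T)          # no move step; device can run at full speed

T, C, M = 10, 6, 1            # illustrative times in milliseconds
print(no_buffering(T, C))         # 16
print(single_buffering(T, C, M))  # 11
print(double_buffering(T, C))     # 10
```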


Circular buffer:

Double buffering may be inadequate if the process performs rapid bursts of I/O. In that case, more
than two buffers are used.
The collection of buffers is called a circular buffer, with each individual buffer being one unit in
the circular buffer.
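A minimal sketch of a circular buffer in this spirit, using a fixed-size list with a head index and a fill count. The class and method names are ours, not from the text:

```python
class CircularBuffer:
    """Fixed-capacity ring of buffer slots: producer puts, consumer gets."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0    # index of the next slot to consume
        self.count = 0   # number of slots currently filled

    def put(self, block):
        if self.count == len(self.slots):
            raise BufferError("producer ran ahead of consumer")
        tail = (self.head + self.count) % len(self.slots)
        self.slots[tail] = block       # wrap around past the end
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("consumer ran ahead of producer")
        block = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return block

buf = CircularBuffer(3)
for block in ("b1", "b2", "b3"):
    buf.put(block)               # a device fills the ring during a burst
print(buf.get(), buf.get())      # b1 b2 (consumed in FIFO order)
```

With more than two units, the device can keep filling slots during a burst while the consumer drains them at its own pace.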

6.4 Disk scheduling, RAID
A process needs two types of time: CPU time and I/O time. For I/O, it requests the operating system
to access the disk. The operating system must be fair enough to satisfy each request and, at the same
time, must maintain the efficiency and speed of process execution. The technique that the operating
system uses to determine which request is to be satisfied next is called disk scheduling.

6.4.1 Definition

Seek time is the time taken to position the disk arm over the specified track where the read/write
request will be satisfied.
Rotational latency is the time taken by the desired sector to rotate to the position from where the
R/W heads can access it.
Transfer time is the time taken to transfer the data.
Disk access time is given as: Disk Access Time = Seek Time + Rotational Latency + Transfer Time.
Disk response time is the average time spent by each request waiting for the I/O operation.
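A quick numeric check of the access-time formula. The millisecond values are invented for illustration:

```python
def disk_access_time(seek_ms, rotational_latency_ms, transfer_ms):
    # Disk Access Time = Seek Time + Rotational Latency + Transfer Time
    return seek_ms + rotational_latency_ms + transfer_ms

# e.g. 9 ms seek, 4 ms rotational latency, 1 ms transfer
print(disk_access_time(9, 4, 1))  # 14
```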

6.4.2 Disk Scheduling Algorithms

A disk scheduling algorithm is the algorithm that allows the operating system to select a disk request
from the queue of I/O requests and decide when this request will be processed. Its different goals
are: fairness, high throughput and minimal head travel time.

FCFS scheduling algorithm

It is the simplest disk scheduling algorithm. It services the I/O requests in the order in which they
arrive. There is no starvation in this algorithm; every request is serviced.


Example: Consider the following disk request sequence for a disk with 100 tracks: 45, 21, 67, 90, 4,
50, 89, 52, 61, 87, 25. The head pointer starts at 50, moving in the left direction. Find the number
of head movements in cylinders using FCFS scheduling.


Number of cylinders moved by the head

= (50-45)+(45-21)+(67-21)+(90-67)+(90-4)+(50-4)+(89-50)+(89-52)+(61-52)+(87-61)+(87-25)

= 5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62

= 403
Disadvantages of this algorithm are: the scheme does not optimize the seek time, and since the requests
may come from different processes, there is the possibility of inappropriate movement of the head.
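The head movement under FCFS can be computed mechanically, summing the distance for each request in arrival order. The function name is ours:

```python
def fcfs_head_movement(start, requests):
    """Total cylinders moved when requests are serviced in arrival order."""
    total, position = 0, start
    for track in requests:
        total += abs(track - position)  # distance to the next request
        position = track
    return total

requests = [45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25]
print(fcfs_head_movement(50, requests))  # 403
```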

SSTF (shortest seek time first) algorithm

The shortest seek time first (SSTF) algorithm selects the disk I/O request which requires the least
disk-arm movement from the current head position, regardless of direction. It reduces the total seek
time compared to FCFS, since the head always moves to the closest track in the service queue.

Example: Consider the following disk request sequence for a disk with 100 tracks: 45, 21, 67, 90, 4,
89, 52, 61, 87, 25.


Head pointer starting at 50. Find the number of head movements in cylinders using SSTF scheduling.

Number of cylinders = 2 + 7 + 16 + 6 + 20 + 2 + 1 + 65 + 4 + 17 = 140, following the service order
52, 45, 61, 67, 87, 89, 90, 25, 21, 4.
Disadvantages of this algorithm are: it may cause starvation for some requests; switching direction
frequently slows the algorithm down; and it is not an optimal algorithm.
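SSTF can be computed greedily, always picking the closest pending track. The function name is ours:

```python
def sstf_head_movement(start, requests):
    """Service the closest pending track each time; return total movement."""
    pending = list(requests)
    total, position = 0, start
    while pending:
        # pick the request requiring the least arm movement from here
        nearest = min(pending, key=lambda track: abs(track - position))
        total += abs(nearest - position)
        position = nearest
        pending.remove(nearest)
    return total

requests = [45, 21, 67, 90, 4, 89, 52, 61, 87, 25]
print(sstf_head_movement(50, requests))  # 140
```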
-I
Conclusion

This chapter has presented how the operating system manages devices: the organization of the I/O
function through peripherals, interfaces and channels; I/O buffering with single, double and circular
buffers to smooth the speed mismatch between producer and consumer; and disk scheduling algorithms
such as FCFS and SSTF that order the requests sent to the disk.