Operating System


1. Overview

An Operating System (OS) is an interface between a computer user and the computer hardware. An operating system is software that performs all the basic tasks such as file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
An operating system is software that enables applications to interact with a
computer's hardware. The software that contains the core components of the
operating system is called the kernel.
The primary purposes of an Operating System are to enable applications (software) to interact with a computer's hardware and to manage a system's hardware and software resources.

Some popular Operating Systems include the Linux Operating System, the Windows Operating System, VMS, OS/400, AIX, z/OS, etc. Today, operating systems are found in almost every device: mobile phones, personal computers, mainframe computers, automobiles, TVs, toys, etc.

Definitions

We can have a number of definitions of an Operating System. Let's go through a few of them:

An Operating System is the low-level software that supports a computer's basic functions, such as scheduling tasks and controlling peripherals.

We can refine this definition as follows:

An operating system is a program that acts as an interface between the user and
the computer hardware and controls the execution of all kinds of programs.

Following is another definition taken from Wikipedia:

An operating system (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs.

Definition:

 An Operating System (OS) is system software that manages hardware and software resources on a computer.

Functions:

 Resource Management: Manages CPU, memory, storage, and I/O devices.
 User Interface: Provides command-line (CLI) or graphical user interfaces
(GUI).
 File System Management: Organizes and controls access to files and
directories.
 Task Management: Handles process scheduling and multitasking.

Examples:

 Windows, macOS, Linux, Android, iOS.

2. OS Principles

Abstraction:

 Hardware Abstraction: Hides the complexity of hardware from users and applications.
 System Calls: Interface for programs to request services from the OS.
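
The system-call interface can be glimpsed from Python, where functions in the standard os module are thin wrappers around the underlying calls (a minimal sketch; exact syscall availability varies by platform):

```python
import os

# os.getpid() wraps the getpid() system call: ask the kernel for our process ID.
pid = os.getpid()

# os.write() wraps the write() system call: send bytes straight to file
# descriptor 1 (stdout), bypassing Python's buffered I/O layer.
os.write(1, f"running as process {pid}\n".encode())
```

Higher-level functions like print() or open() ultimately bottom out in system calls such as these, which is what "requesting services from the OS" means in practice.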

Resource Management:

 CPU Scheduling: Allocates CPU time to processes.
 Memory Management: Manages allocation and deallocation of memory.
 I/O Management: Controls input and output operations.

Multitasking:

 Process Management: Supports multiple processes running concurrently.
 Thread Management: Enables concurrent execution within a single
process.

Security:

 Access Control: Restricts access to resources based on permissions.
 Authentication: Verifies user identity.
 Authorization: Grants or denies access based on user roles.

User Interface:

 CLI: Command-line interface allows text-based interaction.
 GUI: Graphical user interface provides visual interaction through icons and
windows.

3. Concurrency
Concurrency in operating systems refers to the ability of an operating system to
handle multiple tasks or processes at the same time. With the increasing demand
for high performance computing, concurrency has become a critical aspect of
modern computing systems. Operating systems that support concurrency can
execute multiple tasks simultaneously, leading to better resource utilization,
improved responsiveness, and enhanced user experience. Concurrency is essential
in modern operating systems due to the increasing demand for multitasking, real-
time processing, and parallel computing. It is used in a wide range of applications,
including web servers, databases, scientific simulations, and multimedia processing.
However, concurrency also introduces new challenges such as race conditions,
deadlocks, and priority inversion, which need to be managed effectively to ensure
the stability and reliability of the system.

Principles of Concurrency

The principles of concurrency in operating systems are designed to ensure that multiple processes or threads can execute efficiently and effectively, without interfering with each other or causing deadlock.

 Interleaving − Interleaving refers to the interleaved execution of multiple processes or threads. The operating system uses a scheduler to determine
which process or thread to execute at any given time. Interleaving allows for
efficient use of CPU resources and ensures that all processes or threads get a
fair share of CPU time.
 Synchronization − Synchronization refers to the coordination of multiple
processes or threads to ensure that they do not interfere with each other.
This is done through the use of synchronization primitives such as locks,
semaphores, and monitors. These primitives allow processes or threads to
coordinate access to shared resources such as memory and I/O devices.
 Mutual exclusion − Mutual exclusion refers to the principle of ensuring that
only one process or thread can access a shared resource at a time. This is
typically implemented using locks or semaphores to ensure that multiple
processes or threads do not access a shared resource simultaneously.
 Deadlock avoidance − Deadlock is a situation in which two or more processes or threads wait for each other to release a resource, so that none of them can proceed. Operating systems use various techniques such as resource allocation graphs and deadlock prevention algorithms to avoid deadlock.
 Process or thread coordination − Processes or threads may need to
coordinate their activities to achieve a common goal. This is typically
achieved using synchronization primitives such as semaphores or message
passing mechanisms such as pipes or sockets.
 Resource allocation − Operating systems must allocate resources such as
memory, CPU time, and I/O devices to multiple processes or threads in a fair
and efficient manner. This is typically achieved using scheduling algorithms
such as round-robin, priority-based, or real-time scheduling.

Processes and Threads:

 Process: An instance of a program in execution. Each process has its own memory space.
 Thread: A lightweight unit of execution within a process. Threads share the
process’s memory space.

Synchronization:

 Mutex: A mutual exclusion mechanism to avoid concurrent access to shared resources.
 Semaphore: A signaling mechanism that controls access based on resource
availability.
 Monitor: A higher-level synchronization construct that manages access to
resources.

Deadlock:

 Definition: A situation where a set of processes are blocked because each process is waiting for resources held by others.
 Prevention: Techniques to ensure that deadlock conditions cannot occur.
 Avoidance: Dynamically ensuring that resource allocation does not lead to
deadlock.
 Detection: Identifying and resolving deadlock situations after they occur.
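
One simple prevention technique is lock ordering: if every thread acquires locks in the same global order, the circular-wait condition can never arise. A sketch (the lock names a and b are illustrative):

```python
import threading

a = threading.Lock()
b = threading.Lock()

def task_1():
    # Acquires a before b.
    with a:
        with b:
            pass  # ... work with both resources ...

def task_2():
    # Same global order (a before b), NOT b-then-a, so neither thread
    # can hold b while waiting for a: circular wait is impossible.
    with a:
        with b:
            pass

t1 = threading.Thread(target=task_1)
t2 = threading.Thread(target=task_2)
t1.start(); t2.start()
t1.join(); t2.join()   # always completes; a b-then-a ordering in task_2 could deadlock
```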

4. Scheduling and Dispatch


Scheduler

The scheduler handles processes and decides how they are scheduled. Its main task is to select processes, put them in order, and decide which process runs first.

Types of Schedulers

There are three different types of schedulers, which are as follows −

Long term scheduler

Long term scheduling is performed when a new process is created. If the number of ready processes in the ready queue becomes very high, the overhead on the operating system of maintaining long lists, switching, and dispatching increases. Therefore, the long term scheduler allows only a limited number of processes into the ready queue.

The long term scheduler is given below −

Medium term scheduler

A process that starts an input/output operation leaves the running state and later returns to the ready state. The medium-term scheduler manages such processes, swapping them out of main memory and back in as needed.

The medium-term scheduler is given below −


Short term scheduler

When several processes are in the ready state, the short-term scheduler decides which of them runs first.

The short term scheduler is given below −

Dispatcher

The dispatcher runs after the scheduler. It gives control of the CPU to the process selected by the short-term scheduler: once the process is selected, the dispatcher hands the CPU to it.

Functions

The functions of the dispatcher are as follows −


 Switching context.
 Switching to user mode.

The dispatcher is given below −


Differences

The major differences between scheduler and dispatcher are as follows −

 The scheduler works first: when processes sit in the ready queue with no schedule, it applies a scheduling algorithm to order them and select one.
 The dispatcher acts after scheduling is complete: it moves the selected process from the ready queue into the running state.
 The two operate in tandem: while the scheduler schedules processes, the dispatcher dispatches the selected process to the running state.

CPU Scheduling:

 First-Come-First-Serve (FCFS): Processes are executed in the order they arrive.
 Shortest Job Next (SJN): Executes the process with the shortest execution
time first.
 Round Robin (RR): Each process receives a fixed time slice in a cyclic order.
 Priority Scheduling: Processes are executed based on priority levels.
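
Round Robin can be sketched as a ready queue plus a fixed quantum; a preempted process goes to the back of the queue. (The process names and burst times below are made up for illustration.)

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR scheduling; returns {pid: completion_time}."""
    remaining = dict(bursts)          # pid -> CPU time still needed
    queue = deque(bursts)             # ready queue, FIFO order
    clock, finish = 0, {}
    while queue:
        pid = queue.popleft()
        slice_ = min(quantum, remaining[pid])  # run for at most one time slice
        clock += slice_
        remaining[pid] -= slice_
        if remaining[pid] == 0:
            finish[pid] = clock       # process is done
        else:
            queue.append(pid)         # preempted: back of the queue
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# → {'P3': 5, 'P2': 8, 'P1': 9}
```

Note how the short job P3 finishes early even though it arrived last in the queue, which is why RR gives good responsiveness at the cost of extra context switches.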

Context Switching:
 Definition: The process of saving the state of a currently running process
and loading the state of the next process.
 Overhead: Involves saving and loading registers, program counters, and
other process-specific information.

Dispatching:

 Definition: The action of transferring control from the scheduler to the process selected for execution.
 Dispatch Latency: The time taken to switch from one process to another.

5. Memory Management

Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to processes and decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and updates the status correspondingly.

This tutorial will teach you basic concepts related to Memory Management.

Static vs Dynamic Loading

The choice between static and dynamic loading is made at the time the computer program is developed. If you load your program statically, then at compilation time the complete program is compiled and linked, leaving no external program or module dependency. The linker combines the object program with the other necessary object modules into an absolute program, which also includes logical addresses.

If you are writing a Dynamically loaded program, then your compiler will compile
the program and for all the modules which you want to include dynamically, only
references will be provided and rest of the work will be done at the time of
execution.

At the time of loading, with static loading, the absolute program (and data) is
loaded into memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are stored on a
disk in relocatable form and are loaded into memory only when they are needed by
the program.
Static vs Dynamic Linking

As explained above, when static linking is used, the linker combines all other
modules needed by a program into a single executable program to avoid any
runtime dependency.

When dynamic linking is used, it is not required to link the actual module or library
with the program, rather a reference to the dynamic module is provided at the time
of compilation and linking. Dynamic Link Libraries (DLL) in Windows and Shared
Objects in Unix are good examples of dynamic libraries.

Swapping

Swapping is a mechanism in which a process can be temporarily swapped out of main memory (moved) to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.

Though swapping usually affects performance, it helps in running multiple large processes in parallel, and for that reason swapping is also known as a technique for memory compaction.
The total time taken by the swapping process includes the time to move the entire process to secondary disk, the time to copy it back to memory, and the time the process takes to regain main memory.

Let us assume that the user process is 2048 KB in size and that the standard hard disk where swapping takes place has a data transfer rate of about 1 MB (1024 KB) per second. The actual transfer of the 2048 KB process to or from memory will take

2048 KB / 1024 KB per second
= 2 seconds
= 2000 milliseconds

Now, counting both swap-out and swap-in time, it will take a full 4000 milliseconds, plus other overhead while the process competes to regain main memory.
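
The arithmetic above can be packaged as a small helper (a sketch; the 1024 KB/s rate and 2048 KB size are the example's assumed figures):

```python
def swap_time_ms(process_kb, rate_kb_per_s=1024):
    """One-way time to move a process between memory and disk, in milliseconds."""
    return process_kb / rate_kb_per_s * 1000

one_way = swap_time_ms(2048)       # 2048 KB at 1024 KB/s
print(one_way)                     # 2000.0 ms
print(one_way * 2)                 # 4000.0 ms for swap out plus swap in
```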

Memory Allocation

Main memory usually has two partitions −

 Low Memory − The operating system resides in this memory.
 High Memory − User processes are held in high memory.

The operating system uses the following memory allocation mechanisms −

1. Single-partition allocation
In this type of allocation, a relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.

2. Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.

Fragmentation

As processes are loaded and removed from memory, the free memory space is
broken into little pieces. It happens after sometimes that processes cannot be
allocated to memory blocks considering their small size and memory blocks remains
unused. This problem is known as Fragmentation.

Fragmentation is of two types −

1. External fragmentation
Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

2. Internal fragmentation
The memory block assigned to a process is bigger than requested. Some portion of the memory is left unused, as it cannot be used by another process.

The following diagram shows how fragmentation can cause waste of memory and a
compaction technique can be used to create more free memory out of fragmented
memory −

External fragmentation can be reduced by compaction − shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.

Internal fragmentation can be reduced by assigning the smallest partition that is still large enough for the process.

Virtual Memory:

 Definition: An abstraction that allows applications to use more memory than physically available.
 Paging: Divides memory into fixed-size blocks (pages) and maps them to
physical memory.
 Segmentation: Divides memory into segments based on logical divisions.
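
Paged address translation amounts to splitting a virtual address into a page number and an offset, then replacing the page number with a frame number from the page table. A sketch (the 4 KB page size is a common choice; the page-table contents are illustrative):

```python
PAGE_SIZE = 4096                     # 4 KB pages

# Illustrative page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 9}

def translate(vaddr):
    page = vaddr // PAGE_SIZE        # which virtual page the address lies in
    offset = vaddr % PAGE_SIZE       # position within that page
    frame = page_table[page]         # page-table lookup (a miss would page-fault)
    return frame * PAGE_SIZE + offset

print(translate(4100))               # page 1, offset 4 -> frame 2 -> 8196
```

Segmentation works analogously, except the divisions are variable-sized logical segments (code, data, stack) rather than fixed-size pages.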

Memory Allocation:

 Contiguous Allocation: Allocates a single contiguous block of memory to a process.
 Paging and Segmentation: Provides non-contiguous memory allocation to
optimize usage.
 Allocation Strategies:
o First-Fit: Allocates the first available block of sufficient size.
o Best-Fit: Allocates the smallest block that is sufficient.
o Worst-Fit: Allocates the largest available block.
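
The three strategies can be sketched over a list of free-hole sizes, each returning the index of the hole it would pick (the hole sizes and request size are made up):

```python
def first_fit(holes, size):
    # First hole big enough, scanning left to right.
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # Smallest hole that still fits (minimizes the leftover fragment).
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Largest hole (leaves the biggest still-usable remainder).
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]    # free block sizes in KB
print(first_fit(holes, 212))         # → 1 (500 is the first that fits)
print(best_fit(holes, 212))          # → 3 (300 leaves the least waste)
print(worst_fit(holes, 212))         # → 4 (600 is the largest)
```

First-fit is fastest; best-fit minimizes waste per allocation but tends to create many tiny, unusable fragments over time.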

Device management
An essential part of an operating system is device management, which controls how
software applications interact with the hardware attached to the computer system.
It entails the process of locating, setting up, allocating, and managing access to
devices like printers, scanners, storage units, and network interfaces. The device
management system guarantees that the hardware devices are used effectively by
the system software and applications by providing a consistent and dependable
interface to the hardware devices. It also covers input/output control, error
handling, and interrupt management. The operating system may more effectively
utilize the resources at its disposal thanks to the device management system, which
also enhances the computer system's overall performance.

The goals of device management include:


Automatically detecting and identifying devices when connected.
Configuring optimal drivers and software for devices.
Allowing users to easily find, enable/disable, adjust settings for devices.
Monitoring device status, health, errors, and events.
Updating firmware and software for devices.
Applying security policies to devices.
Safely removing and disabling devices.
Without effective device management, operating systems would not be able to
utilize hardware resources fully or ensure devices work properly. Users would have
to manually install drivers, configure settings, and manage devices themselves.
Instead, device management makes the experience seamless.

Basic Functions of Device Management


Device management involves several core functions:

Device detection – Identifying when devices are connected or disconnected.


Device identification – Determining details like manufacturer, make, model,
capabilities, etc.
Device configuration – Installing appropriate drivers, setting optimal defaults.
Device status monitoring – Tracking health metrics, errors, battery levels, etc.
Device maintenance – Updating software/firmware, applying security patches.
Device enable/disable – Allowing devices to be turned on/off as needed.
These functions work together to make devices work smoothly from the user’s
perspective.

Key Components of Device Management Architecture


The architecture that supports device management in modern OSs like
Windows, macOS, Linux, and Unix generally consists of:

Device management in operating system

Device Drivers
Device drivers are software modules that allow the OS to interact with a
particular device. They abstract away hardware complexity so higher level
programs don’t need device specifics.

Device Manager Service


The device manager service oversees and coordinates device management. It
maintains device state, interacts with drivers, and exposes interfaces for other
programs.

Device Management User Interface


The user-facing interface for viewing and configuring devices. This includes
control panel applets, desktop context menus, system preferences panes, and
administration tools.

Device Management Frameworks / APIs


Frameworks like Microsoft’s WMI and Apple’s IOKit allow interaction with
devices in a structured way. APIs can be used by apps and tools.

Background Services & Processes


Background services monitor device events, apply policies, check status, and
perform maintenance tasks like installing updates.

Configuration Repositories
Centralized repositories store device metadata, drivers, policies, and other
configuration details that components leverage.

This architecture provides robust, layered device management capabilities while abstracting complexity from users.

Critical Importance of Device Management In Operating System


Robust device management delivers several key benefits:

Improved User Experience


Automating driver installation, configuration, and device tasks makes things
“just work” for users. They don’t have to be technical experts.
Increased Performance
Devices perform better with tailored configuration vs generic defaults. System
resources are also used efficiently.
Enhanced Reliability
Proactive health monitoring and maintenance reduces crashes and
malfunctions. Device issues can be resolved faster.
Better Security
Central management of device policies, updates, and monitoring improves
security posture. Vulnerabilities can be addressed.

Platform Stability
With unreliable devices, the entire system can suffer. Device management
maintains platform stability.
Reduced Support Costs
Automation and diagnostics resolve more device issues without user
intervention, reducing support costs.

For these reasons and more, every modern OS invests heavily in robust device
management.

Protection and Security


Protection and security require that computer resources such as the CPU, software, memory, etc. are protected. This extends to the operating system as well as the data in the system. Protection can be achieved by ensuring integrity, confidentiality, and availability in the operating system. The system must be protected against unauthorized access, viruses, worms, etc.

Threats to Protection and Security


A threat is a program that is malicious in nature and leads to harmful effects on the system. Some of the common threats that occur in a system are:
Virus
Viruses are generally small snippets of code embedded in a system. They are very
dangerous and can corrupt files, destroy data, crash systems etc. They can also
spread further by replicating themselves as required.

Trojan Horse
A trojan horse can secretly access the login details of a system. Then a malicious
user can use these to enter the system as a harmless being and wreak havoc.
Trap Door
A trap door is a security breach that may be present in a system without the
knowledge of the users. It can be exploited to harm the data or files in a system by
malicious people.
Worm
A worm can destroy a system by using its resources to extreme levels. It can
generate multiple copies which claim all the resources and don't allow any other
processes to access them. A worm can shut down a whole network in this way.
Denial of Service
These types of attacks prevent legitimate users from accessing a system. The attacker overwhelms the system with requests so that it cannot work properly for other users.

Protection and Security Methods


The different methods that may provide protection and security for different computer systems are −

Authentication
This deals with identifying each user in the system and making sure they are who
they claim to be. The operating system makes sure that all the users are
authenticated before they access the system. The different ways to make sure that
the users are authentic are:
Username/ Password
Each user has a distinct username and password combination and they need to
enter it correctly before they can access the system.
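
Systems normally store a salted hash of each password rather than the password itself, so a stolen database does not reveal credentials. A minimal sketch using Python's standard library (the iteration count and storage layout are simplified for illustration):

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                 # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                           # store both, never the password

def verify(password, salt, stored_digest):
    _, digest = hash_password(password, salt)     # re-derive with the same salt
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare

salt, stored = hash_password("s3cret")
print(verify("s3cret", salt, stored))   # True
print(verify("wrong", salt, stored))    # False
```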
User Key/ User Card
The users need to punch a card into the card slot or use their individual key on a keypad to access the system.

User Attribute Identification


Different user attribute identifications that can be used are fingerprints, retina scans, etc. These are unique for each user and are compared with the existing samples in the database. The user can only access the system if there is a match.

One Time Password


These passwords provide a lot of security for authentication purposes. A one time
password can be generated exclusively for a login every time a user wants to enter
the system. It cannot be used more than once. The various ways a one time
password can be implemented are −
Random Numbers
The system can ask for numbers that correspond to pre-arranged alphabets. This combination can be changed each time a login is required.

Secret Key
A hardware device can create a secret key related to the user id for login. This key
can change each time.
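
Counter-based one-time passwords of this kind work roughly like HOTP (RFC 4226): the device and the server share a secret key, and each login hashes the key together with an incrementing counter, so every code is single-use. A simplified sketch:

```python
import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC the 8-byte big-endian counter with the shared secret key.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the last nibble of the MAC picks a 4-byte window.
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Each counter value yields a different, single-use code.
print(hotp(b"shared-secret", 0))
print(hotp(b"shared-secret", 1))
```

Time-based OTPs (the codes in authenticator apps) use the same construction, substituting the current 30-second time step for the counter.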

Virtualization
Virtualization is the creation of a virtual version of an actual piece of technology,
such as an operating system (OS), a server, a storage device or a network resource.

Virtualization uses software that simulates hardware functionality to create a virtual system. This practice lets IT organizations run multiple OSes, more than one virtual system, and various applications on a single server. The benefits of virtualization include greater efficiency and economies of scale.

OS virtualization uses software that enables a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago to save on expensive processing power.

How virtualization works


Virtualization technology abstracts an application, guest operating system or data
storage away from its underlying hardware or software.

Organizations that divide their hard drives into different partitions already engage in
virtualization. A partition is the logical division of a hard disk drive that, in effect,
creates two separate hard drives.

Server virtualization is a key use of virtualization technology. It uses a software layer called a hypervisor to emulate the underlying hardware, including the CPU, memory, input/output, and network traffic. Hypervisors take the physical resources and separate them for the virtual environment. They can sit on top of an OS or be installed directly onto the hardware.

Xen hypervisor is an open source software program that manages the low-level
interactions that occur between virtual machines (VMs) and physical hardware. It
enables the simultaneous creation, execution and management of various VMs in
one physical environment. With the help of the hypervisor, the guest OS, which
normally interacts with true hardware, does so with a software emulation of that
hardware.

Although OSes running on true hardware often outperform those running on virtual
systems, most guest OSes and applications don't fully use the underlying hardware.
Virtualization removes dependency on a given hardware platform, creating greater
flexibility, control and isolation for environments. Plus, virtualization has spread
beyond servers to include applications, networks, data management and desktops.

Advantages of virtualization
The overall benefit of virtualization is that it helps organizations maximize output.
More specific advantages include the following:

Lower costs. Virtualization reduces the amount of hardware servers companies and
data centers require. This lowers the overall cost of buying and maintaining large
amounts of hardware.
Easier disaster recovery. DR is simple in a virtualized environment. Regular
snapshots provide up-to-date data, letting organizations easily back up and recover
VMs, avoiding unnecessary downtime. In an emergency, a virtual machine can
migrate to a new location within minutes.
Easier testing. Testing is less complicated in a virtual environment. Even in the
event of a large mistake, the test can continue without stopping and returning to
the beginning. The test simply returns to the previous snapshot and proceeds.
Faster backups. Virtualized environments take automatic snapshots throughout the
day to guarantee all data is up to date. VMs can easily migrate between host
machines and be efficiently redeployed.
Improved productivity. Virtualized environments require fewer physical resources,
which results in less time spent managing and maintaining servers. Tasks that take
days or weeks in physical environments are done in minutes. This lets staff
members spend their time on more productive tasks, such as raising revenue and facilitating business initiatives.
Single-minded servers. Virtualization provides a cost-effective way to separate
email, database and web servers, creating a more comprehensive and dependable
system.
Optimize deployment and redeployment. When a physical server crashes, the
backup server might not always be ready or up to date. If this is the case, then the
redeployment process can be time-consuming and tedious. However, in a
virtualized data center, virtual backup tools expedite the process to minutes.
Reduced heat and improved energy savings. Companies that use a lot of hardware
servers risk overheating their physical computing resources. Virtualization
decreases the number of servers used for data management.
Environmental consciousness. Companies and data centers that use lots of electricity for hardware have a large carbon footprint. Virtualization significantly decreases the amount of cooling and power required, and the overall carbon footprint.
Cloud migration. VMs can be deployed from the data center to build a cloud-based
IT infrastructure. The ability to embrace a cloud-based approach with virtualization
eases migration to the cloud.
Lack of vendor dependency. VMs are agnostic in terms of hardware configuration.
As a result, virtualizing hardware and software means that a company no longer
requires a single vendor for these physical resources.

Limitations of virtualization
Before converting to a virtualized environment, organizations should consider the
various limitations:

Costs. The investment required for virtualization software and hardware can be
expensive. If the existing infrastructure is more than five years old, organizations
should consider an initial renewal budget. Many businesses work with a managed
service provider to offset costs with monthly leasing and other purchase options.
Software licensing considerations. Vendors view software use within a virtualized
environment in different ways. It's important to understand how a specific vendor
does this.
Time and effort. Converting to virtualization takes time and has a learning curve
that requires IT staff to be trained in virtualization. Furthermore, some applications
don't adapt well to a virtual environment. IT staff must be prepared to face these
challenges and address them prior to converting.
Security. Virtualization comes with unique security risks. Data is a common target
for attacks, and the chance of a data breach increases with virtualization.
Complexity. In a virtual environment, users can lose control of what they can do
because several parts of the environment must collaborate to perform the same
task. If any part doesn't work, the entire operation can fail.
References:

https://www.tutorialspoint.com/operating_system

Operating System Tutorial - GeeksforGeeks
