Operating System Interview Questions
Processes can be in one of three states: running, ready, or waiting. The running state means
that the process has all the resources it needs for execution and it has been given permission
by the operating system to use the processor. Only one process can be in the running state at
any given time. The remaining processes are either in a waiting state (i.e., waiting for some
external event to occur such as user input or disk access) or a ready state (i.e., waiting for
permission to use the processor). In a real operating system, the waiting and ready states are
implemented as queues that hold the processes in these states.
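As a sketch, the three-state model above can be expressed as a tiny state machine in Python; the state names and the set of legal transitions below are illustrative, not taken from any real scheduler:

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"

# Legal transitions in the three-state model:
TRANSITIONS = {
    (State.READY, State.RUNNING),    # scheduler dispatches the process
    (State.RUNNING, State.READY),    # time slice expires (preemption)
    (State.RUNNING, State.WAITING),  # process blocks on I/O or an event
    (State.WAITING, State.READY),    # the awaited event completes
}

def can_transition(src, dst):
    """True if the three-state model allows moving from src to dst."""
    return (src, dst) in TRANSITIONS

print(can_transition(State.READY, State.RUNNING))    # True
print(can_transition(State.WAITING, State.RUNNING))  # False: must pass through ready
```

Note that a waiting process cannot go straight to running: it first becomes ready and then waits for the scheduler to dispatch it.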
3. What is a Thread?
A thread is a single sequence stream within a process. Because threads have some of the
properties of processes, they are sometimes called lightweight processes. Threads are a
popular way to improve the application through parallelism. For example, in a browser,
multiple tabs can be different threads. MS Word uses multiple threads, one thread to format
the text, another thread to process inputs, etc.
A thread has its own program counter (PC), register set, and stack space. Unlike processes, threads are not independent of one another: a thread shares its code section, data section, and OS resources like open files and signals with the other threads of its process.
Multithreading makes the system more responsive and enables resource sharing, and it leads to the use of multithreaded architectures. It is more economical and often preferred.
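A minimal Python sketch of threads sharing their process's data section; the `results` dict and the thread names are illustrative:

```python
import threading

# All threads of this process see the same data section; each has its own stack.
results = {}

def worker(name, n):
    results[name] = sum(range(n))  # each thread writes to the shared dict

t1 = threading.Thread(target=worker, args=("formatter", 100))
t2 = threading.Thread(target=worker, args=("input", 50))
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # both threads' results are visible: shared address space
```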
6. What is Thrashing?
7. What is Buffer?
A buffer is a memory area that stores data being transferred between two devices or between
a device and an application.
Virtual memory creates an illusion that each user has one or more contiguous address spaces,
each beginning at address zero. The sizes of such virtual address spaces are generally very large. The idea of virtual memory is to use disk space to extend the RAM. Running processes
don’t need to care whether the memory is from RAM or disk. The illusion of such a large
amount of memory is created by subdividing the virtual memory into smaller pieces, which
can be loaded into physical memory whenever they are needed by a process.
An operating system acts as an intermediary between the user of a computer and computer
hardware. The purpose of an operating system is to provide an environment in which a user
can execute programs conveniently and efficiently.
An operating system is a software that manages computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system and
to prevent user programs from interfering with the proper operation of the system.
The process of loading the page into memory on demand (whenever a page fault occurs) is
known as demand paging.
A kernel is the central component of an operating system that manages the operations of
computers and hardware. It basically manages operations of memory and CPU time. It is a
core component of an operating system. Kernel acts as a bridge between applications and
data processing performed at the hardware level using inter-process communication and
system calls.
Multi-programming increases CPU utilization by organizing jobs (code and data) so that the
CPU always has one to execute. The main objective of multi-programming is to keep
multiple jobs in the main memory. If one job is busy with I/O, the CPU can be assigned to other jobs.
A thread is a path of execution within a process. A process can contain multiple threads.
FCFS stands for First Come First Served. In the FCFS scheduling algorithm, the job that arrived first in the ready queue is allocated the CPU, then the job that came second, and so on. FCFS is a non-preemptive scheduling algorithm: a process holds the CPU until it either terminates or performs I/O. Thus, if a longer job has been assigned to the CPU, many shorter jobs after it will have to wait.
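A short, illustrative Python sketch of FCFS waiting times; the burst times are an assumed example, chosen to show how one long job delays all the short ones behind it:

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each job when served strictly in arrival order."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)  # each job waits for everything before it
        elapsed += burst
    return waits

bursts = [24, 3, 3]            # one long job arrives first
waits = fcfs_waiting_times(bursts)
print(waits)                   # [0, 24, 27]
print(sum(waits) / len(waits)) # average waiting time 17.0
```

Had the two short jobs run first, the average waiting time would have been far lower, which is exactly the weakness described above.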
A round-robin scheduling algorithm schedules processes fairly by giving each job a fixed time slot, or quantum. If a job does not complete within its quantum, it is interrupted and moved to the back of the ready queue, and the next job in the queue runs. Cycling through the jobs this way makes the scheduling fair.
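As a hedged sketch, round-robin completion times can be simulated in Python; the burst times and quantum below are assumed example values, and all jobs are taken to arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion time of each job under round-robin scheduling."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    t, done = 0, [0] * len(bursts)
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((i, remaining - run))  # unfinished: back of the queue
        else:
            done[i] = t                          # finished within this quantum
    return done

print(round_robin([24, 3, 3], quantum=4))  # [30, 7, 10]
```

With a quantum larger than every burst, the schedule degenerates into FCFS, which is why quantum choice matters.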
Level-0
Level-1
Level-2
Level-3
Level-4
Level-5
Level-6
The banker’s algorithm is a resource allocation and deadlock avoidance algorithm that tests
for safety by simulating the allocation for the predetermined maximum possible amounts of
all resources, then makes a “safe-state” check to test for possible activities, before deciding
whether allocation should be allowed to continue.
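A compact Python sketch of the safety check at the heart of the banker’s algorithm; the matrices below are a commonly used textbook example (5 processes, 3 resource types), not data from this text:

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: True if some completion order exists."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pretend process i runs to completion and returns its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # True: a safe sequence exists
```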
21. State the main difference between logical and physical address space?
22. How does dynamic loading aid in better memory space utilization?
With dynamic loading, a routine is not loaded until it is called. This method is especially
useful when large amounts of code are needed in order to handle infrequently occurring cases
such as error routines.
23. What are overlays?
The concept of overlays is that a running process does not use the complete program at the same time; it uses only some part of it. The overlay technique loads only the part currently required, and once that part is done, it is unloaded (pulled back) so that the next required part can be loaded and run. Formally, it is “the process of transferring a block of program code or other data into internal memory, replacing what is already stored”.
As processes are loaded into and removed from memory, the free memory space is broken into pieces that are too small to be used by other processes. Fragmentation occurs when a process cannot be allocated a memory block because the free blocks are too small, so those blocks remain permanently unused. This problem arises in dynamic memory allocation systems when the free blocks are too small to satisfy any request.
Paging is a memory management technique used to fetch processes from secondary memory into main memory in the form of pages. In paging, each process is split into parts, where the size of each part is the same as the page size; only the size of the last part may be less than the page size. The pages of a process are stored in the frames of main memory depending on their availability.
Bounded-buffer
Readers-writers
Dining philosophers
Sleeping barber
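As an illustration of the bounded-buffer problem above, here is a minimal Python sketch using semaphores to count free and filled slots; the buffer capacity and the item values are assumed:

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()        # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()         # signal one filled slot

def consumer(n, out):
    for _ in range(n):
        full.acquire()         # wait for a filled slot
        with mutex:
            out.append(buffer.popleft())
        empty.release()        # signal one free slot

out = []
p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start(); p.join(); c.join()
print(out)  # items arrive in order despite the tiny buffer
```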
28. What is the Direct Access Method?
The Direct Access method is based on a disk model of a file, such that it is viewed as a
numbered sequence of blocks or records. It allows arbitrary blocks to be read or written.
Direct access is advantageous when accessing large amounts of information. Direct memory
access (DMA) is a method that allows an input/output (I/O) device to send or receive data
directly to or from the main memory, bypassing the CPU to speed up memory operations.
The process is managed by a chip known as a DMA controller (DMAC).
Thrashing occurs when a system spends more time handling page faults than doing useful work, because processes frequently access pages that are not available in main memory.
30. What is the best page size when designing an operating system?
The best paging size varies from system to system, so there is no single best when it comes to
page size. There are different factors to consider in order to come up with a suitable page
size, such as page table, paging time, and its effect on the overall efficiency of the operating
system.
The cache is a smaller and faster memory that stores copies of the data from frequently used
main memory locations. There are various independent caches in a CPU, which
store instructions and data. Cache memory is used to reduce the average time to access data
from the Main memory.
Spooling stands for Simultaneous Peripheral Operations On-Line. It refers to putting jobs in a buffer, a special area in memory or on a disk, where a device can access them when it is ready. Spooling is useful because devices access data at different rates.
The Assembler is used to translate the program written in Assembly language into machine
code. The source program is an input of an assembler that contains assembly language
instructions. The output generated by the assembler is the object code or machine code
understandable by the computer.
35. What are interrupts?
An interrupt is a signal emitted by hardware or software when a process or an event needs immediate attention. It alerts the processor to a high-priority task requiring interruption of the currently running process. For I/O devices, one of the bus control lines, called the interrupt request line, is dedicated to this purpose; the routine that runs in response to an interrupt is called the Interrupt Service Routine (ISR).
GUI is short for Graphical User Interface. It provides users with an interface wherein actions
can be performed by interacting with icons and graphical symbols.
Easy to implement.
Bootstrapping is the process of loading a set of instructions when a computer is first turned
on or booted. During the startup process, diagnostic tests such as the power-on self-test (POST) are performed, which set or check configurations for devices and implement routine testing for the connection of peripherals, hardware, and external memory devices. The
bootloader or bootstrap program is then loaded to initialize the OS.
Pipes (Same Process): This allows a flow of data in one direction only, analogous to simplex systems (e.g., a keyboard). Data written to the pipe is buffered until the reading process receives it, and the two processes must have a common origin.
Named Pipes (Different Processes): This is a pipe with a specific name. It can be used by processes that do not share a common origin, e.g., a FIFO, where data written to the pipe first is read first.
Message Queuing: This allows messages to be passed between processes using either
a single queue or several message queues. This is managed by the system kernel, and the messages are coordinated using an API.
Shared Memory: This allows the interchange of data through a defined area of memory. A semaphore is typically acquired before a process accesses the shared memory.
Sockets: This method is mostly used to communicate over a network between a client
and a server. It allows for a standard connection that is computer- and OS-independent.
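The one-directional pipe described above can be demonstrated with Python's `os.pipe`, which returns a read end and a write end as file descriptors; the message itself is, of course, illustrative:

```python
import os

# os.pipe() returns (read_fd, write_fd): a unidirectional byte channel.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")
os.close(write_fd)            # close the write end: the reader will see EOF

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode())  # hello through the pipe
```

In a real program the descriptors would be shared with a child created by `fork`, which is what gives both processes the "common origin" the text mentions.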
In preemptive scheduling, the CPU is allocated to the processes for a limited time
whereas, in Non-preemptive scheduling, the CPU is allocated to the process till it
terminates or switches to waiting for the state.
In Preemptive Scheduling, there is the overhead of switching the process between the ready and running states and of maintaining the ready queue, whereas non-preemptive scheduling has no such switching overhead.
Preemptive Scheduling has to maintain the integrity of shared data, which makes it costlier; this is not the case with Non-preemptive Scheduling.
A process that has finished the execution but still has an entry in the process table to report to
its parent process is known as a zombie process. A child process always first becomes a
zombie before being removed from the process table. The parent process reads the exit status
of the child process, which removes (reaps) the child’s entry from the process table.
A process whose parent process no longer exists, i.e., has finished or terminated without waiting for its child process to terminate, is called an orphan process.
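A minimal, POSIX-only Python sketch of reaping: the child exits and remains a zombie until the parent collects its exit status with `waitpid` (the exit code 7 is an arbitrary example value):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: exit immediately; until reaped, it is a zombie in the process table.
    os._exit(7)
else:
    # Parent: wait() reads the child's exit status, removing its table entry.
    reaped_pid, status = os.waitpid(pid, 0)
    print(reaped_pid == pid, os.waitstatus_to_exitcode(status))
```

If the parent exited without calling `waitpid`, the child would instead become an orphan and be adopted (and eventually reaped) by `init`.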
Starvation: Starvation is a resource management problem where a process does not get the
resources it needs for a long time because the resources are being allocated to other
processes.
Switching the CPU to another process means saving the state of the old process and loading the saved state of the new process. In context switching, the state of the old process is stored in its Process Control Block so that the CPU can serve the new process and the old process can later be resumed from the point where it left off.
49. What is the difference between the Operating system and kernel?
The Process Control Block (PCB) is a data structure used to track a process’s execution status. A PCB contains information about the process, i.e., registers, quantum, priority, etc. The process table is an array of PCBs, which logically contains a PCB for each of the current processes in the system.
The set of dispatchable processes is in a safe state if there exists at least one temporal order in
which all processes can be run to completion without resulting in a deadlock.
Cycle stealing is a method of accessing computer memory (RAM) or a bus without interfering with the CPU. It is similar to direct memory access (DMA) in allowing I/O controllers to read or write RAM without CPU intervention.
A trap is a software interrupt, usually the result of an error condition; it is also a non-maskable interrupt and has the highest priority. A trapdoor is a secret, undocumented entry point into a program, used to grant access without the normal methods of access authentication.
The dispatcher is the module that gives a process control of the CPU after it has been selected by the short-term scheduler. This function involves the following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
Dispatch latency can be described as the amount of time it takes for a system to respond to a
request for a process to begin operation. With a scheduler written specifically to honor
application priorities, real-time applications can be developed with a bounded dispatch
latency.
Max throughput [Number of processes that complete their execution per time unit]
Min response time [Time when a process produces the first response]
When more than one process accesses the same code segment, that segment is known as the
critical section. The critical section contains shared variables or resources which are needed
to be synchronized to maintain the consistency of data variables. In simple terms, a critical
section is a group of instructions/statements or regions of code that need to be executed
atomically such as accessing a resource (file, input or output port, global data, etc.).
Mutexes
Condition variables
Semaphores
File locks
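Of the primitives above, a mutex is the simplest; a small Python sketch of one guarding a critical section follows (the counter, thread count, and iteration count are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()          # the mutex guarding the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # only one thread at a time runs this block
            counter += 1         # critical section: the shared-state update

threads = [threading.Thread(target=increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: the lock made every increment atomic
```

Without the lock, the read-modify-write of `counter` could interleave between threads and updates would be lost.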
Improved throughput. Many concurrent compute operations and I/O requests within a
single process.
Simultaneous and fully symmetric use of multiple processors for computation and
I/O.
The programmer has to keep track of all calls to wait and signal the semaphore.
With improper use, a process may block indefinitely. Such a situation is called
Deadlock.
A system satisfies the bounded waiting condition if a process that wants to enter its critical section is guaranteed to enter it within some finite time.
Software solutions
Hardware solutions
Semaphores
69. What is concurrency?
Concurrency is a state in which a process runs simultaneously with other processes; such processes are said to be concurrent.
71. What are the necessary conditions which can lead to a deadlock in a
system?
Mutual exclusion: At least one resource must be held in a non-sharable mode.
Hold and wait: A process holds at least one resource while waiting for others.
No preemption: A resource can be released only voluntarily by the process holding it.
Circular wait: A circular chain of processes exists in which each process waits for a resource held by the next.
Blocking: Processes can block waiting for resources. A process could be blocked for
a long period of time waiting for input from a terminal. If the process is required to
periodically update some data, this would be very undesirable.
Deadlock: It occurs when two processes are blocked and hence neither can proceed to execute.
A precedence graph is a directed acyclic graph that is used to show the execution level of
several processes in the operating system. It has the following properties:
Nodes of graphs correspond to individual statements of program code.
A directed edge from node A to node B shows that statement A executes first, and then statement B executes.
The resource allocation graph (RAG) shows the state of the system in terms of processes and resources. One of the advantages of having such a diagram is that sometimes a deadlock can be spotted directly by inspecting the RAG.
Deadlock is a situation when two or more processes wait for each other to finish and none of
them ever finish. Consider an example when two trains are coming toward each other on the
same track and there is only one track, none of the trains can move once they are in front of
each other. A similar situation occurs in operating systems when there are two or more
processes that hold some resources and wait for resources held by other(s).
Relocation
Protection
Sharing
Logical organization
Physical organization
The association of program instructions and data with actual physical memory locations is called address binding.
When we do not know how much amount of memory would be needed for the
program beforehand.
When we want data structures without any upper limit of memory space.
In dynamically created lists, insertions and deletions can be done easily just by manipulating addresses, whereas with statically allocated memory, insertions and deletions lead to more data movement and wastage of memory.
When you want to use the concept of structures and linked lists in programming,
dynamic memory allocation is a must
Compaction is the process of collecting fragments of available memory space into contiguous blocks by moving programs and data in a computer’s memory or disk.
Advantages
In many situations, hash tables turn out to be more efficient than search trees or any
other table lookup structure. For this reason, they are widely used in many kinds of
computer software, particularly for associative arrays, database indexing, caches, and
sets.
Disadvantages
Hash collisions are practically unavoidable when hashing a random subset of a large set of possible keys.
Hash tables become quite inefficient when there are many collisions.
A hash table does not allow null keys or values, unlike a hash map.
Define Compaction.
The performance of a virtual memory management system depends on the total number of page faults, which depends on the “paging policies” and “frame allocation”.
Effective access time = (1 - p) x Memory access time + p x Page fault service time
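Plugging assumed example numbers into the effective access time formula (200 ns memory access, 8 ms = 8,000,000 ns page fault service time, fault rate p = 0.001):

```python
def effective_access_time(p, mem_ns, fault_ns):
    """Effective access time = (1-p)*memory access + p*page fault service."""
    return (1 - p) * mem_ns + p * fault_ns

# Assumed figures, purely for illustration:
eat = effective_access_time(p=0.001, mem_ns=200, fault_ns=8_000_000)
print(eat)  # 8199.8 ns: even 1 fault in 1000 accesses dominates the cost
```

This is why keeping the page fault rate extremely low is essential for acceptable virtual memory performance.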
Operation on file:
Create
Open
Read
Write
Rename
Delete
Append
Truncate
Close
91. Define the term Bit-Vector?
A Bitmap or Bit Vector is a series or collection of bits where each bit corresponds to a disk
block. Each bit can take one of two values: 0 indicates that the block is allocated, and 1 indicates a free block.
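A toy Python sketch of a free-space bitmap using the 1 = free, 0 = allocated convention from the text (the bitmap contents are made up):

```python
# Bit i describes disk block i: 1 = free, 0 = allocated.
bitmap = [1, 0, 0, 1, 1, 0, 1, 1]

def first_free_block(bits):
    """Index of the first free block, or -1 if none is free."""
    for i, bit in enumerate(bits):
        if bit == 1:
            return i
    return -1

def allocate(bits, i):
    bits[i] = 0  # mark block i as allocated

i = first_free_block(bitmap)
print(i)                         # 0
allocate(bitmap, i)
print(first_free_block(bitmap))  # 3: the next free block
```

Real file systems pack these bits into machine words so a whole run of allocated blocks can be skipped with one comparison.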
FAT stands for File Allocation Table and this is called so because it allocates different files
and folders using tables. This was originally designed to handle small file systems and disks.
A file allocation table (FAT) is a table that an operating system maintains on a hard disk that
provides a map of the cluster (the basic units of logical storage on a hard disk) that a file has
been stored in.
Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate into a position where it can be accessed by the read/write heads. So the disk scheduling algorithm that gives minimum rotational latency is better.
Seek Time: Seek time is the time taken to locate the disk arm to a specified track where the
data is to be read or written. So the disk scheduling algorithm that gives a minimum average
seek time is better.
Bélády’s anomaly is the phenomenon in which, for some page replacement policies, increasing the number of page frames results in an increase in the number of page faults. It can occur when the First In First Out (FIFO) page replacement algorithm is used.
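Bélády’s anomaly can be reproduced with a small FIFO simulation in Python, using the classic reference string for which four frames cause more faults than three:

```python
from collections import deque

def fifo_page_faults(refs, num_frames):
    """Count page faults for FIFO replacement with num_frames frames."""
    frames = deque()               # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the page loaded earliest
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_page_faults(refs, 4))  # 10 faults with 4 frames: more frames, more faults
```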
Deadlock: if a thread that has already locked a mutex tries to lock the same mutex again, it will enter the waiting list of that mutex, which results in a deadlock, because no other thread can unlock the mutex. An operating system implementer can exercise care in identifying the owner of the mutex and returning it if it is already locked by the same thread, to prevent such deadlocks.
Enhanced performance.
Multiple applications.
A real-time system is one that is subject to real-time constraints, i.e., the response should be guaranteed within a specified timing constraint, or the system should meet a specified deadline.
Process termination
o Abort all the deadlock processes
o Abort one process at a time until the deadlock is eliminated
Resource preemption
o Rollback
o Selecting a victim
One is that it depends on how often a deadlock is likely to occur under the implementation of
this algorithm. The other has to do with how many processes will be affected by deadlock
when this algorithm is applied.
Source: www.geeksforgeeks.org