Resource Management


Academic Year 2020-21

Topic: Resource Management


Class: DP 1 Computer Science

Multi-user environments
Many computer systems, especially servers, are used by multiple users. The OS thus needs to
manage time and resources not only among processes, but also between users.

Each user’s and each process’s data and memory space needs to be separated securely in
order to prevent access by other users or processes.

OS types
Single user, single task: Early computers worked like this. You would write your program on
punch cards and book time on the computer to run it. Users would have to queue up to use the
computer. If your program generated an error, you would have to come back the next week! Modern
examples of single-user, single-tasking OSes are Palm OS and early versions of the iPhone and
iPad. Mobile phones are slowly developing multi-tasking capability, though.

Single user, multi-tasking: A basic standalone home PC has one user who can run lots of
different programs at the same time, e.g. Mac OS or Windows 7.

Multi-user: A network operating system, such as the one at school, in which multiple users can
run multiple programs simultaneously, e.g. Windows Server 2012.

Memory Management
Necessary for:

• keeping the memory spaces of processes apart
• keeping the memory spaces (both primary and secondary) of users apart
• allocating and deallocating memory for each process
• paging, which divides virtual memory up into equal-sized blocks (pages)
• paging, which allows memory to be used in a non-contiguous way (not in order),
avoiding fragmentation
Virtual memory

This technology makes use of secondary memory as if it were primary memory and is necessary
for swapping out processes, e.g. those blocked on I/O. The OS treats all virtual memory the
same, adding a layer of abstraction, so that programs referencing memory spaces do not need
to worry about whether a piece of information is in primary or secondary memory.

Paging

Paging is a technique in which physical memory is divided into blocks of the same size,
called pages. When a process or program is run, it is also divided into pages and can
be loaded into any free memory pages. The advantage of this is that the free memory blocks don't
need to be contiguous, in other words in order. This avoids external fragmentation, which occurs
when free memory can't be used for a process because it is non-contiguous, even though the total
space is big enough. In order to keep track of the pages, the OS maintains page map tables to
locate them. Paging also makes it possible to keep pages that have not been requested in virtual
memory, making memory usage more efficient.
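The page map table lookup described above can be sketched as follows. This is a minimal illustration, not a real OS implementation; the page size, page numbers and frame numbers are made-up example values.

```python
# Sketch of paging address translation. The page map table maps each
# virtual page number to a physical frame number; the frames need not
# be contiguous, which is how paging avoids external fragmentation.
PAGE_SIZE = 4096  # 4 KB pages, a common size

# Illustrative page map table: virtual page -> physical frame
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_address):
    """Split a virtual address into page number and offset, then
    look up the physical frame in the page map table."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        # In a real OS this would trigger a page fault and a load
        # from secondary storage.
        raise KeyError(f"page fault: page {page} not in memory")
    return page_table[page] * PAGE_SIZE + offset

# Virtual address 4100 is page 1, offset 4 -> frame 2, offset 4
print(translate(4100))  # 2*4096 + 4 = 8196
```

Note how pages 0, 1 and 2 map to the scattered frames 5, 2 and 9: the process sees a contiguous address space even though the physical memory behind it is not in order.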
Disadvantages of virtual memory
Although virtual memory makes more efficient use of memory possible, it tends to be slower
due to the access times of secondary memory. The system may also become busier paging memory
in and out than actually executing processes (known as thrashing), again slowing the whole
system down. Virtual memory also requires hardware support, usually from the MMU (memory
management unit). Finally, there is less secondary memory available for storage, although this
is not a great disadvantage considering common secondary storage capacities.

OS resource management techniques


Resource management is the dynamic allocation and de-allocation by an operating system of processor cores, memory pages, and various types of
bandwidth to computations that compete for those resources. The objective is to allocate resources so as to optimize responsiveness subject to the
finite resources available.

Technique: Scheduling

The aim of CPU scheduling is to make the system efficient, fast and fair.[3] Scheduling is the
method by which work is assigned to resources that complete the work.[4] There are many different
scheduling strategies. The main purposes of scheduling algorithms are to minimize resource
starvation and to ensure fairness amongst the parties utilizing the resources.[5] Different
scheduling approaches are:

• First Come First Serve (FCFS) Scheduling
• Shortest-Job-First (SJF) Scheduling
• Priority Scheduling
• Round Robin (RR) Scheduling
• Multilevel feedback queue scheduling
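Round Robin scheduling, one of the approaches listed above, can be sketched in a few lines. This is a simplified model, assuming made-up process names and burst times; a real scheduler also tracks waiting times, priorities and I/O.

```python
from collections import deque

# Minimal Round Robin scheduling sketch: each process runs for at most
# one fixed time quantum; if it still has work left, it goes to the
# back of the queue and waits for its next turn.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())   # (name, remaining time) pairs
    order = []                      # the sequence of CPU turns
    while queue:
        name, remaining = queue.popleft()
        order.append(name)          # this process gets the CPU now
        if remaining > quantum:
            # Not finished: re-queue with the time still needed
            queue.append((name, remaining - quantum))
    return order

# Three illustrative processes with burst times 5, 3 and 1
print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```

The output shows the fairness property: no process monopolises the CPU, and the short job P3 finishes after a single turn instead of waiting behind the long job P1.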
Technique: Policies

Given a particular task, policy refers to what needs to be done (i.e. activities to perform) and
mechanism refers to how to do it (i.e. the implementation that enforces the policy).[6] Put
another way, the separation of mechanism and policy is a design principle in computer science. It
states that mechanisms (those parts of a system implementation that control the authorization of
operations and the allocation of resources) should not dictate (or overly restrict) the policies
according to which decisions are made about which operations to authorize, and which resources to
allocate.[7] To distinguish between policy and mechanism: policies are ways to choose which
activities to perform; mechanisms are the implementations that enforce policies.[8]
Technique: Multitasking

In computing, multitasking is the concept of performing multiple tasks (also known as processes)
over a certain period of time by executing them concurrently.[9] Multitasking operating systems
allow more than one program to run at a time. They can support either preemptive multitasking,
where the OS doles out time to applications (virtually all modern OSes), or cooperative
multitasking, where the OS waits for the program to give back control (Windows 3.x, Mac OS 9 and
earlier).[10]
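The cooperative flavour of multitasking can be sketched with Python generators, where each task keeps control until it voluntarily yields, just as programs under Windows 3.x or classic Mac OS kept the CPU until they gave it back. Task names and step counts here are illustrative.

```python
# Cooperative multitasking sketch: the "scheduler" can only switch
# tasks when a task voluntarily yields control. A task that never
# yields would hang the whole system, which is the key weakness of
# cooperative multitasking.
def task(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        yield  # voluntarily give control back to the scheduler

def run_cooperatively(tasks):
    tasks = list(tasks)
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)          # let the task run one step
            tasks.append(current)  # re-queue it for another turn
        except StopIteration:
            pass                   # task finished; drop it

run_cooperatively([task("A", 2), task("B", 1)])
# Prints: A step 0 / B step 0 / A step 1 (turns interleave)
```

A preemptive OS, by contrast, would interrupt each task after its time slice expired whether or not it chose to yield.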
Technique: Virtual memory

In computing, virtual memory (also virtual storage) is a memory management technique that
provides an "idealized abstraction of the storage resources that are actually available on a
given machine", which "creates the illusion to users of a very large (main) memory".[11] This
video is deliciously excellent: https://www.youtube.com/watch?time_continue=1&v=qlH4-oHnBb8

The primary benefits of virtual memory include freeing applications (and programmers) from
having to manage a shared memory space, increasing security due to memory isolation, and being
able to conceptually use more memory than might be physically available, using the technique of
paging. Indeed, almost every virtual memory implementation divides a virtual address space into
blocks of contiguous virtual memory addresses, called pages, which are usually 4 KB in size.[12]

Prior to virtual memory, overlays were used.

Technique: Paging

This is related to virtual memory. In computer operating systems, paging is a memory management
scheme by which a computer stores and retrieves data from secondary storage for use in main
memory. In this scheme, the operating system retrieves data from secondary storage in same-size
blocks called pages. Paging is an important part of virtual memory implementations in modern
operating systems, using secondary storage to let programs exceed the size of available physical
memory.[13]
Technique: Interrupts

In system programming, an interrupt is a signal to the processor emitted by hardware or software
indicating an event that needs immediate attention. An interrupt alerts the processor to a
high-priority condition requiring the interruption of the code the processor is currently
executing. The processor responds by suspending its current activities, saving its state, and
executing a function called an interrupt handler (or interrupt service routine, ISR) to deal
with the event. This interruption is temporary, and after the interrupt handler finishes, the
processor resumes its normal activities. There are two types of interrupts: hardware interrupts
and software interrupts.[14]

Basically, the processor has a set of interrupt wires which are connected to a number of
devices. When one of the devices has something to say, it turns its interrupt wire on, which
triggers the processor (without the help of any software) to pause the execution of the current
instructions and start running a handler function.[15]

An interrupt is a hardware mechanism that enables the CPU to detect that a device needs its
attention. The CPU has an interrupt-request line, which it checks after executing every single
instruction. When the CPU senses a signal on the interrupt-request line, it stops its currently
executing task and responds to the interrupt sent by the I/O device by passing control to the
interrupt handler. The interrupt handler resolves the interrupt by servicing the device.

Although the CPU does not know when an interrupt will occur, as it can occur at any moment, it
has to respond to the interrupt whenever it occurs. When the interrupt handler finishes servicing
the interrupt, the CPU resumes execution of the task it stopped in order to respond. Software,
hardware, the user, or an error in a program can all generate an interrupt. The CPU's
interrupt-handling ability is what makes multitasking possible, i.e. a user can perform a number
of different tasks at the same time.

If more than one interrupt is sent to the CPU, the interrupt handler helps manage the interrupts
that are waiting to be processed. As the interrupt handler is triggered by the reception of an
interrupt, it prioritizes the interrupts waiting to be processed by the CPU and arranges them in
a queue to be serviced.[16]
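The queueing and prioritisation of pending interrupts described above can be sketched with a priority queue. This is an illustration only; the device names and priority levels are invented, and real hardware handles this in the interrupt controller rather than in software like this.

```python
import heapq

# Sketch of pending-interrupt management: interrupts that arrive while
# the CPU is busy are queued by priority (lower number = more urgent),
# and the most urgent waiting device is always serviced first.
pending = []

def raise_interrupt(priority, device):
    """A device signals that it needs attention."""
    heapq.heappush(pending, (priority, device))

def service_next():
    """The handler services the highest-priority waiting interrupt."""
    priority, device = heapq.heappop(pending)
    return device

# Three devices interrupt before the CPU can respond to any of them
raise_interrupt(2, "disk")
raise_interrupt(0, "timer")
raise_interrupt(1, "keyboard")
print(service_next())  # timer is serviced first despite arriving second
```

Even though the disk raised its interrupt first, the queue reorders by urgency, so the timer (priority 0) wins; the keyboard and disk follow in priority order.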

Technique: Polling

Polling, or polled operation, in computer science, refers to actively sampling the status of an
external device by a client program as a synchronous activity.[17]

• A polling cycle is the time in which each element is monitored once. The optimal polling
cycle will vary according to several factors, including the desired speed of response and
the overhead (e.g. processor time and bandwidth) of the polling.

• In roll call polling, the polling device or process queries each element on a list in a
fixed sequence. Because it waits for a response from each element, a timing mechanism is
necessary to prevent lock-ups caused by non-responding elements. Roll call polling can be
inefficient if the overhead of the polling messages is high, there are numerous elements
to be polled in each polling cycle, and only a few elements are active.

• In hub polling, also referred to as token polling, each element polls the next element in
some fixed sequence. This continues until the first element is reached, at which time the
polling cycle starts all over again.[18]
Key Differences Between Interrupts and Polling in an OS:[19]

1. With an interrupt, the device notifies the CPU that it needs servicing, whereas with
polling the CPU repeatedly checks whether a device needs servicing.
2. An interrupt is a hardware mechanism, as the CPU has a wire, the interrupt-request line,
which signals that an interrupt has occurred. Polling, on the other hand, is a protocol
that keeps checking control bits to see whether a device has something to execute.
3. The interrupt handler handles the interrupts generated by the devices. With polling, on
the other hand, the CPU services the devices when they require it.
4. Interrupts are signalled by the interrupt-request line, whereas with polling the
command-ready bit indicates that a device needs servicing.
5. With interrupts, the CPU is only disturbed when a device interrupts it. With polling, on
the other hand, the CPU wastes many cycles by repeatedly checking the command-ready bit
of every device.
6. An interrupt can occur at any instant of time, whereas the CPU polls devices at regular
intervals.
7. Polling becomes inefficient when the CPU keeps polling devices and rarely finds any
device ready for servicing. Interrupts, on the other hand, become inefficient when
devices keep interrupting the CPU repeatedly.
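Point 5 above, the wasted cycles of polling, can be made concrete with a tiny sketch. The devices and their "command-ready" states here are invented for illustration.

```python
# Polling sketch: the CPU checks every device's command-ready bit
# each cycle, whether or not the device has anything to say. The
# wasted checks on idle devices are the cost of polling.
def poll(devices):
    checks = 0
    serviced = []
    for name, ready in devices.items():
        checks += 1            # a check happens for every device
        if ready:
            serviced.append(name)  # only ready devices get serviced
    return checks, serviced

# One polling cycle over three illustrative devices, one of them ready
devices = {"printer": False, "keyboard": True, "disk": False}
checks, serviced = poll(devices)
print(checks, serviced)  # 3 ['keyboard']
```

Three checks produced only one useful service: with interrupts, the two idle devices would have cost the CPU nothing at all.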

Multi-tasking and the OS


Multi-tasking
A computer system has a set of limited resources, but in most situations it is desirable to run
multiple programs at once in order to multi-task. This leads to the problem of how to best share
available resources among running programs.

The most important concepts here include:

Multiple CPUs (“cores”)

For example: dual core, quad core, graphics processor, etc.

• greater processing power


• extra layer of complexity
• dedicated processing power (GPU)

Time-slicing

• processing time is divided equally among all running programs


• works well only if all programs require the same processing power, which is
seldom the case

Prioritisation

• some processes are treated as more important than others

Polling

• approach for handling I/O


• the CPU keeps asking each program or other resource whether it needs CPU time
Interrupts

• approach for handling I/O


• instead of being polled, the process sends an “interrupt” signal to the CPU to receive CPU time

Blocking

• a process can declare itself to be blocked, meaning that it is unable to proceed until
some condition is met, e.g. some input is provided
• when two processes are blocked on each other, this is called a deadlock

Swapping

• blocked processes can be “swapped” out of main memory in order not to waste
memory space

Handling I/O
Problems can occur when different resources communicate through I/O (inputs & outputs),
because they usually operate at different speeds. For instance, the CPU usually works orders
of magnitude faster than a hard drive or network resource can respond to an I/O request. It is
therefore important that the CPU doesn’t waste time waiting for an I/O response. To avoid this,
buffers are usually used, which allow the CPU to queue up a meaningful amount of data to
communicate (both in and out) before processing it.
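The buffering idea above can be sketched with a simple queue: the fast CPU drops data into the buffer and carries on, while the slow device drains it at its own pace. The drain rate and the data blocks are illustrative.

```python
from collections import deque

# Buffer sketch: the CPU never waits on the slow device; it queues
# output in the buffer and returns immediately. The device consumes
# from the buffer at its own (slower) rate.
buffer = deque()

def cpu_write(data):
    buffer.append(data)  # fast: the CPU queues data and moves on

def device_drain(n):
    """The slow device consumes up to n buffered items per cycle."""
    out = []
    while buffer and len(out) < n:
        out.append(buffer.popleft())
    return out

# The CPU produces four blocks in quick succession...
for block in ["a", "b", "c", "d"]:
    cpu_write(block)

# ...while the device only manages two per cycle
print(device_drain(2))  # ['a', 'b']
```

The same structure works in the input direction: a device fills the buffer as data arrives, and the CPU processes a meaningful batch in one go instead of being interrupted for every byte.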

Blocked on I/O

If a process is blocked waiting for an I/O response, the OS sometimes swaps it out of
memory so the CPU can get on with other tasks. In order to resume the swapped-out process, the
CPU can either keep polling or receive an interrupt.

As mentioned before, polling means repeatedly asking the I/O for a response, while an interrupt
comes from the I/O itself. Interrupts can originate either from software or directly from
hardware.

Interrupts

• Pros: save CPU time, because the CPU doesn’t have to keep checking; give the CPU more
control over what it does
• Cons: too many interrupts slow the CPU down

Polling

• Pros: easy to implement, because no special hardware is required
• Cons: polling wastes CPU time
Direct Memory Access (DMA)

This is a more recent alternative to polling and interrupts. This method bypasses the CPU, so
that data from the I/O device is passed directly to RAM by a dedicated DMA controller. This
approach still uses interrupts, but only once the data transfer is finished, so that CPU time is
saved.
