Group 7


SUBJECT: COMPUTER ARCHITECTURE

GROUP # 7

2019-EE-630
2019-EE-618
2019-EE-607

TOPIC:

VECTOR PROCESSORS, PAGING & VIRTUAL MEMORY
VECTOR PROCESSING

There is a class of computational problems that are beyond the capabilities of a conventional computer. These problems are characterized by the fact that they require a vast number of computations that will take a conventional computer days or even weeks to complete.

Vector Processor (computer)

• Ability to process vectors and matrices much faster than conventional computers
PROCESSORS
Usually there are two types of processors:
 Vector Processors
 Scalar Processors

Scalar processing has a disadvantage: it operates on one data element at a time, so a computation proceeds instruction by instruction.
Vector Processing Applications:

 Long-range weather forecasting
 Seismic data analysis
 Aerodynamics and space flight simulations
 Artificial intelligence
 Expert systems
 Image processing
 Petroleum exploration
 Medical diagnosis
Conventional scalar processor loop processing:
Initialize I = 0
20 Read A(I)
Read B(I)
Store C(I) = A(I) + B(I)
Increment I = I + 1
If I <=100 go to 20
Continue
Single Vector Instruction
C(1:100) = A(1:100) + B(1:100)
The addition is done with a pipelined floating-point adder (an arithmetic pipeline).
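
A rough C sketch of this contrast (illustrative only; the array names and the length 100 mirror the example above, nothing here comes from the slides):

/* Scalar processing: one addition per loop iteration, executed
 * instruction by instruction, like the loop above. */
void add_scalar(const float a[100], const float b[100], float c[100])
{
    for (int i = 0; i < 100; i++)
        c[i] = a[i] + b[i];
}

/* A vector processor expresses the same work as the single vector
 * instruction C(1:100) = A(1:100) + B(1:100); the pipelined
 * floating-point adder streams all 100 element pairs through its stages. */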

Vector Instruction Format


Pipelined Vector Processing
Memory Interleaving

Pipeline and vector processors often require simultaneous access to memory from two or more sources.
An instruction pipeline may require the fetching of an instruction and an operand at the same time from two different segments.
Similarly, an arithmetic pipeline usually requires two or more operands to enter the pipeline at the same time.
The memory can be partitioned into a number of modules connected to common memory address and data buses.
Memory Interleaving
 Each memory array has its own address register AR and data register DR.
 The address registers receive information from a common address bus and the data registers communicate with a bidirectional data bus.
 The two least significant bits of the address can be used to distinguish between the four modules.
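
A minimal sketch in C of the module-selection idea (assumed four-way interleaving over word addresses; the function and variable names are made up for illustration):

#include <stdint.h>

#define NUM_MODULES 4   /* four memory modules, as in the slide */

/* The two least significant address bits pick the module; the remaining
 * bits form the local address loaded into that module's address register
 * AR. Consecutive addresses therefore land in different modules, so a
 * pipeline can keep several accesses in flight at once. */
void split_interleaved(uint32_t addr, uint32_t *module, uint32_t *local)
{
    *module = addr & (NUM_MODULES - 1);
    *local  = addr >> 2;
}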
Virtual Memory
Introduction
• Virtual memory deals with the main memory size limitations
 Provides an illusion of having more memory than the system’s RAM
 Virtual memory separates logical memory from physical memory
» Logical memory: a process’s view of memory
» Physical memory: the processor’s view of memory
 Before virtual memory
» Overlaying was used
– It is a programmer-controlled technique
• Virtual memory also provides
 Relocation
» Each program can have its own virtual address space
 Protection
» Programs are isolated from each other
– A benefit of working in their own address spaces
» Protection can be easily implemented
Virtual Memory Concepts

• Implements a mapping function
 Between virtual address space and physical address space
• Examples
 PowerPC
» 48-bit virtual address
» 32-bit physical address
 Pentium
» Both are 32-bit addresses
– But uses segmentation
• Virtual address space is divided into fixed-size chunks
 These chunks are called virtual pages
 Virtual address is divided into
» Virtual page number
» Byte offset into a virtual page
 Physical memory is also divided into similar-size chunks
» These chunks are referred to as physical pages
» Physical address is divided into
– Physical page number
– Byte offset within a page
• Page size is similar to cache line size
• Typical page size
» 4 KB
• Example
 32-bit virtual address to 24-bit physical address
 If page size is 4 KB
» Page offset: 12 bits
» Virtual page number: 20 bits
» Physical page number: 12 bits
 Virtual memory maps 2^20 virtual pages to 2^12 physical pages
An example mapping of 32-bit virtual address to 24-bit physical address
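
A small C sketch of the bit arithmetic behind this mapping (assumed code, using the page size and address widths from the example above):

#include <stdint.h>

#define PAGE_SHIFT 12                      /* 4 KB page -> 12 offset bits */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

/* 32-bit virtual address: 20-bit virtual page number + 12-bit offset. */
uint32_t virtual_page_number(uint32_t vaddr) { return vaddr >> PAGE_SHIFT; }
uint32_t page_offset(uint32_t addr)          { return addr & PAGE_MASK; }

/* 24-bit physical address: 12-bit physical page number + the same offset. */
uint32_t physical_address(uint32_t ppn, uint32_t offset)
{
    return (ppn << PAGE_SHIFT) | offset;
}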
Virtual to physical address mapping
• A virtual page can be
 In main memory
 On disk
• Page fault occurs if the page is not in memory
 Like a cache miss
• OS takes control and transfers the page
 Demand paging
» Pages are transferred on demand
Page fault handling routine
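
A rough outline in C of what such a routine does (a sketch only; the page-table layout and the helper functions are hypothetical stand-ins for the OS frame allocator and disk I/O):

#include <stdint.h>

#define NUM_VPAGES 1024

struct pte { uint32_t frame; int present; };   /* one page-table entry */
struct pte page_table[NUM_VPAGES];

extern int  allocate_frame(void);                          /* hypothetical */
extern void read_page_from_disk(uint32_t vpn, int frame);  /* hypothetical */

/* Demand paging: the missing page is brought in only when it is touched,
 * the mapping is recorded, and the faulting instruction is then restarted. */
void handle_page_fault(uint32_t vpn)
{
    int frame = allocate_frame();        /* may first evict a victim page */
    read_page_from_disk(vpn, frame);
    page_table[vpn].frame   = frame;
    page_table[vpn].present = 1;
}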
Introduction to paging

Computer architecture
Introduction

Definition of paging: Paging is a memory management technique that divides the virtual memory space of a process into fixed-sized blocks called pages. These pages are then mapped to physical memory by a page table, allowing for efficient use of memory and enabling multiple processes to run simultaneously.

Importance of paging in computer architecture: Paging allows for efficient use of memory, provides memory protection, and supports multiple processes. It is a fundamental technique used in most modern operating systems and computer systems.


Paging
• Partition memory into small equal-size chunks and divide each process into the same size chunks
• The chunks of a process are called pages and the chunks of memory are called frames
• The operating system maintains a page table for each process
• It contains the frame location for each page in the process
• A memory address consists of a page number and an offset within the page
Example
For example, if the main memory size is 16 KB and the frame size is 1 KB, the main memory will be divided into a collection of 16 frames of 1 KB each. There are 4 separate processes in the system, A1, A2, A3, and A4, each of 4 KB.
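
The arithmetic behind this example, as a small C sketch (assumed code; it simply reproduces the numbers given above):

#include <stdio.h>

int main(void)
{
    int memory_kb  = 16;   /* main memory size       */
    int frame_kb   = 1;    /* frame (and page) size  */
    int process_kb = 4;    /* size of A1, A2, A3, A4 */

    int frames         = memory_kb / frame_kb;     /* 16 frames           */
    int pages_per_proc = process_kb / frame_kb;    /* 4 pages per process */
    int processes_fit  = frames / pages_per_proc;  /* all 4 processes fit */

    printf("%d frames, %d pages per process, %d processes fit\n",
           frames, pages_per_proc, processes_fit);
    return 0;
}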
Paging
• A program requires N free frames
• A page table must be set up to translate logical addresses to physical addresses
• Internal fragmentation can occur (unused space in a process's last page)
Address Translation Scheme
Address translation in paging operates on an address space: the range of valid addresses available to a program or process, i.e. the memory space accessible to it. The memory can be physical or virtual and is used for storing data and executing instructions.
Cont.
• If the number of logical address bits is m and the number of bits in the offset is n, then
• Logical address space is 2^m
• Page size (number of entries) is 2^n
• Example: m = 4, n = 2
• Number of addresses is 16
• Number of entries in a page is 4
Paging hardware

The hardware is referred to as the memory management unit (MMU).
Cont.()
• The CPU issues a logical address (remember that all addresses are binary)
• The hardware extracts the page number, p, and the page offset, d
• The page number, p, is used to index the page table
• The entry in the page table contains the frame number, f
• The actual (physical) address is the concatenation of the bits that make up f and d
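
A minimal C sketch of this translation step (assumed 4 KB pages and a simple one-level page table; names are illustrative, not a real MMU interface):

#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)
#define NUM_PAGES  1024

uint32_t page_table[NUM_PAGES];   /* page_table[p] holds the frame number f */

/* What the MMU does: split the logical address into (p, d), use p to index
 * the page table, and concatenate the resulting f with d. */
uint32_t translate(uint32_t logical)
{
    uint32_t p = logical >> PAGE_SHIFT;   /* page number  */
    uint32_t d = logical & PAGE_MASK;     /* page offset  */
    uint32_t f = page_table[p];           /* frame number */
    return (f << PAGE_SHIFT) | d;
}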
Paging algorithm
Different algorithms are used to decide which pages to move in and out of memory.
Common algorithms include FIFO, LRU, and Clock.
FIFO: the oldest page is moved out first. Simple, but not efficient.
LRU: the least recently used page is moved out first. More complex, but generally better.
Clock: a variation of FIFO that marks pages as used or not used. Simple and fair.
Each algorithm has pros and cons, depending on the system's needs and workload.
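
As one concrete illustration (a sketch, not taken from the slides), FIFO replacement can be implemented with nothing more than a rotating pointer to the oldest frame:

#include <stdint.h>

#define NUM_FRAMES 4

static uint32_t frames[NUM_FRAMES];  /* which virtual page occupies each frame */
static int      oldest = 0;          /* FIFO pointer: frame loaded earliest    */

/* Evict the page that has been resident longest and reuse its frame.
 * LRU would instead track recency of use, and Clock would skip frames
 * whose "used" bit is set, clearing the bit as it goes. */
int fifo_replace(uint32_t new_vpn)
{
    int victim = oldest;
    frames[victim] = new_vpn;
    oldest = (oldest + 1) % NUM_FRAMES;
    return victim;
}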
Implementation of Paging
• Overview of how paging is implemented in computer architecture: Paging involves dividing the virtual memory space into fixed-sized blocks called pages, which are then mapped to physical memory by a page table.
• Page table: A data structure that maps virtual memory addresses to physical memory addresses. Each process has its own page table, which is managed by the operating system.
• Page fault: When a process tries to access a page that is not currently in physical memory, a page fault occurs. The operating system then swaps the necessary pages in and out of memory as needed.
• Translation lookaside buffer (TLB): A cache for the page table that is used to speed up the translation of virtual addresses to physical addresses. The TLB is usually implemented in hardware and can greatly improve the performance of paging.
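
A simplified C sketch of the TLB fast path (assumed small direct-mapped TLB; walk_page_table is a hypothetical stand-in for the slower page-table lookup):

#include <stdint.h>

#define PAGE_SHIFT 12
#define TLB_SIZE   16

struct tlb_entry { uint32_t vpn, ppn; int valid; };
static struct tlb_entry tlb[TLB_SIZE];

extern uint32_t walk_page_table(uint32_t vpn);   /* hypothetical slow path */

/* Try the TLB first; only on a miss walk the page table, then cache the
 * translation so the next access to this page hits in the TLB. */
uint32_t translate_ppn(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    struct tlb_entry *e = &tlb[vpn % TLB_SIZE];  /* direct-mapped index */

    if (e->valid && e->vpn == vpn)
        return e->ppn;                           /* TLB hit */

    uint32_t ppn = walk_page_table(vpn);         /* TLB miss */
    e->vpn = vpn; e->ppn = ppn; e->valid = 1;
    return ppn;
}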
Observations

• The logical address space and the physical address space DO NOT have to be the same size
• The logical address space can be larger than the physical address space (more on this later)
Conclusion
 Recap of key points: Paging is a memory management technique that divides the virtual memory space into fixed-sized blocks called pages, which are then mapped to physical memory by a page table. Paging algorithms such as FIFO, LRU, and Clock are used to decide which pages to move in and out of memory.
 Importance of paging: Paging is a fundamental technique used in most modern operating systems and computer systems, allowing for efficient use of memory and enabling multiple processes to run simultaneously.
 Challenges and trade-offs: Paging can also present challenges and trade-offs, such as the choice of page size and the overhead associated with managing the page table.
 Overall, paging is a crucial aspect of computer architecture and plays a key role in enabling efficient and effective memory management.
