Memory Organization
Semiconductor memory can be divided into two main categories: volatile and
non-volatile memory. Volatile memory, such as dynamic random-access
memory (DRAM) and static random-access memory (SRAM), requires power
to retain stored data. Non-volatile memory, such as read-only memory (ROM),
flash memory, and Electrically Erasable Programmable Read-Only Memory
(EEPROM), can retain data even when power is turned off.
There are several differences between volatile and non-volatile memory. Flash memory, in particular, comes in two main varieties:
a. NOR Flash: NOR flash supports executing code directly from the memory. It has a slower write speed but a faster read speed.
b. NAND Flash: NAND flash is used mainly for data storage. It has a faster write speed but a slower read speed.
Registers are memories located within the Central Processing Unit (CPU). Various types of registers are available within the CPU. Registers are small, but the CPU can access them very quickly. Some of the registers available in a typical system are given below.
Instruction register
ALU I/O registers
Status register
Stack pointer register
Program counter, etc.
Static RAM (SRAM) and Dynamic RAM (DRAM) are both types of Random Access Memory, used for data storage. A few differences between SRAM and DRAM are discussed below.
SRAM has a lower access time, so it is the faster memory.
Since SRAM cells are built from flip-flops, no refreshing is required.
Fewer SRAM cells fit in a unit area, since each cell uses more transistors, so SRAM is less dense than DRAM.
Classification of ROM
Programmable logic devices (PLDs) are a special type of IC. Different logic functions can be implemented using a single programmed PLD chip. PLDs can be reprogrammed because they are based on rewritable memory technologies. PLDs are divided into three types: PLA, PAL, and FPGA.
Programmable Logic Array (PLA)
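A PLA realizes logic functions in sum-of-products form: a programmable AND plane generates product terms, and a programmable OR plane combines them into outputs. The sketch below simulates that structure in Python; the product terms, signal names, and example function are illustrative assumptions, not the fuse map of any real device.

```python
# Simulate a programmed PLA: an AND plane of product terms feeding an
# OR plane that combines selected terms into each output.

def pla(inputs, and_plane, or_plane):
    """inputs: dict mapping signal name -> 0/1.
    and_plane: list of product terms; each term is a dict mapping an
               input name to the value (0 or 1) it must have.
    or_plane: dict mapping output name -> indices of product terms ORed."""
    # AND plane: a product term is true when every listed input matches.
    products = [all(inputs[name] == val for name, val in term.items())
                for term in and_plane]
    # OR plane: an output is 1 when any of its product terms is true.
    return {out: int(any(products[i] for i in idxs))
            for out, idxs in or_plane.items()}

# Example programming: F = A·B + A'·C  (two product terms, one output)
and_plane = [{"A": 1, "B": 1}, {"A": 0, "C": 1}]
or_plane = {"F": [0, 1]}

print(pla({"A": 1, "B": 1, "C": 0}, and_plane, or_plane))  # {'F': 1}
print(pla({"A": 0, "B": 1, "C": 0}, and_plane, or_plane))  # {'F': 0}
```

The same programmed chip implements a different function simply by changing the `and_plane`/`or_plane` tables, which mirrors how reprogramming a PLD changes its logic without changing the hardware.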
Data in primary memory can be accessed faster than data in secondary memory, but access times of primary memory are still generally a few microseconds, whereas the CPU can perform operations in nanoseconds. Because of this time lag between accessing data and acting on it, system performance decreases: the CPU is not utilized properly and may remain idle for some time. To minimize this gap, a new segment of memory was introduced, known as cache memory.
Role of Cache Memory
The role of cache memory is explained below.
Cache memory plays a crucial role in computer systems.
It provides faster access to data.
It acts as a buffer between the CPU and main memory (RAM).
Its primary role is to reduce the average time taken to access data, thereby improving overall system performance.
Benefits of Cache Memory
Various benefits of cache memory are:
1. Faster access: Cache is faster than main memory. It resides closer to the CPU, typically on the same chip or in close proximity, and stores a subset of data and instructions.
2. Reducing memory latency: Memory access latency is the time taken for the processor to retrieve data from memory. Caches are designed to exploit the principle of locality.
3. Lowering bus traffic: Accessing data from main memory involves transferring it over the system bus. The bus is a shared resource, and excessive traffic can lead to congestion and slower data transfers. By utilizing cache memory, the processor reduces how often it accesses main memory, resulting in less bus traffic and better system efficiency.
4. Increasing effective CPU utilization: Cache memory allows the CPU to operate at a higher effective speed. The CPU can spend more time executing instructions rather than waiting for memory access, which leads to better utilization of the CPU's processing capabilities and higher overall system performance.
5. Enhancing system scalability: Cache memory helps improve system scalability by reducing the impact of memory latency on overall system performance.
In order to understand the working of the cache, we must understand a few points:
Cache memory is faster, so it can be accessed very quickly.
Cache memory is smaller, so a large amount of data cannot be stored in it.
Whenever the CPU needs data, it first searches for it in the cache (a fast process). If the data is found, the CPU processes it according to the instructions. If the data is not found in the cache, the CPU searches for it in primary memory (a slower process) and loads it into the cache. This ensures frequently accessed data is usually found in the cache, minimizing the time required to access it.
Cache Performance
If the data is found when searching the cache, a cache hit has occurred.
If the data is not found when searching the cache, a cache miss has occurred.
Cache performance is measured as the ratio of cache hits to the total number of searches. This metric is known as the hit ratio.
Hit ratio = (Number of cache hits) / (Number of searches)
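As a quick worked example of the formula (the counts below are made-up numbers):

```python
# Hit ratio = cache hits / total searches
cache_hits = 450
total_searches = 500          # hits + misses
hit_ratio = cache_hits / total_searches
print(hit_ratio)              # 0.9, i.e. 90% of lookups were served from cache
```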
Types of Cache Memory
L1 or Level 1 Cache: The first level of cache memory, present inside the processor. A small amount is present separately inside every core of the processor. Its size ranges from 2 KB to 64 KB.
L2 or Level 2 Cache: The second level of cache memory, which may be present inside or outside the core. If not present inside the core, it can be shared between two cores, depending on the architecture, and is connected to the processor by a high-speed bus. Its size ranges from 256 KB to 512 KB.
L3 or Level 3 Cache: The third level of cache memory, present outside the cores and shared by all the cores of the CPU. Only some high-end processors have this cache. It is used to back up and improve the performance of the L1 and L2 caches. Its size ranges from 1 MB to 8 MB.
Virtual Memory
Virtual memory is the partition of logical memory from physical memory. This
partition supports large virtual memory for programmers when only limited
physical memory is available.
Virtual memory can give programmers the deception that they have a very high
memory although the computer has a small main memory. It creates the function
of programming easier because the programmer no longer requires to worry
about the multiple physical memory available.
Virtual memory works similarly to caching, but at the next level of the memory hierarchy.
A memory management unit (MMU) transfers data between physical memory and a secondary storage device, generally a disk. This storage area is called a swap disk or swap file, depending on the implementation. Retrieving data from physical memory is much faster than accessing data from the swap disk.
There are two primary methods for implementing virtual memory: paging and segmentation.
Paging
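Paging splits a virtual address into a page number and an offset, and a page table maps each resident page to a physical frame. The sketch below shows that translation; the page size and page-table contents are illustrative assumptions.

```python
# Sketch of paged address translation: virtual address -> (page, offset),
# then page table maps page -> frame; a missing entry is a page fault.

PAGE_SIZE = 4096                  # 4 KB pages (illustrative)
page_table = {0: 7, 1: 3, 2: 9}   # virtual page -> physical frame

def translate_paged(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        # In a real system this traps to the OS, which loads the page
        # from the swap area and retries the access.
        raise LookupError("page fault: page %d not resident" % page)
    return page_table[page] * PAGE_SIZE + offset

print(translate_paged(4100))  # page 1, offset 4 -> frame 3 -> 3*4096 + 4 = 12292
```

Because every page is the same size, the hardware can extract the page number and offset with a simple shift and mask, which is what makes paged translation fast.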
Segmentation
The partition of memory into logical units called segments, according to the user's perspective, is called segmentation. Segmentation allows each segment to grow independently and to be shared. In other words, segmentation is a technique that partitions memory into logically related units called segments; a program is then a collection of segments.
Unlike pages, segments can vary in size. This requires the MMU to manage segmented memory somewhat differently than it would manage paged memory. A segmented MMU contains a segment table to keep track of the segments resident in memory.
Because a segment can start at one of several addresses and can be of any size, each segment table entry must contain the start address and the segment size. Some systems allow a segment to start at any address, while others limit the start address. One such limit is found in the Intel x86 architecture, which requires a segment to start at an address that has 0000 as its four low-order bits.
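The segment-table lookup described above can be sketched as follows. Each entry holds a base (start address) and a limit (segment size); an offset at or beyond the limit is a protection fault. The table contents are illustrative assumptions.

```python
# Sketch of segmented address translation: (segment, offset) -> base + offset,
# with a bounds check against the segment's limit.

segment_table = {
    0: {"base": 0x1000, "limit": 0x0400},   # e.g. a code segment
    1: {"base": 0x5000, "limit": 0x0200},   # e.g. a data segment
}

def translate_segmented(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        # Offset past the end of the segment: protection violation.
        raise MemoryError("segmentation fault: offset out of bounds")
    return entry["base"] + offset

print(hex(translate_segmented(1, 0x10)))  # 0x5010
```

Note that, unlike paging, the limit must be checked explicitly on every access, because segments have no fixed size the hardware could assume.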