MODULE V
Syllabus
I/O system – Accessing I/O devices, Modes of data transfer, Programmed I/O, Interrupt
driven I/O, Direct Memory Access, Standard I/O interfaces – Serial port, Parallel port, PCI,
SCSI, and USB.
Memory system – Hierarchy, Characteristics and Performance analysis, Semiconductor
memories (RAM, ROM, EPROM), Memory Cells – SRAM and DRAM, internal
organization of a memory chip, Organization of a memory unit.
I/O Accessing
I/O devices are connected to the computer by using a bus arrangement. The bus enables all the devices connected to it to exchange information. It consists of three sets of lines used to carry address, data, and control signals. Each I/O device is assigned a unique set of addresses. When the processor places a particular address on the address lines, the device that recognizes this address responds to the commands issued on the control lines.
From the CPU's perspective, an I/O device appears as a set of special-purpose registers, of
three general types:
• Status registers provide status information to the CPU about the I/O device.
• Configuration/control registers are used by the CPU to configure and control the device.
• Data registers are used to read data from or send data to the I/O device.
The address decoder enables the device to recognize its address when this address
appears on the address lines. The data register holds the data being transferred to or from
the processor. The status register contains information relevant to the operation of the I/O
device. Both the data and status registers are connected to the data bus and assigned unique
addresses. The address decoder, the data and status registers, and the control circuitry required to coordinate I/O transfers constitute the device's interface circuit.
Data transfer to and from peripherals may be handled in one of four possible modes:
1. Memory-mapped I/O
2. Programmed I/O
3. Interrupt-initiated I/O
4. Direct memory access (DMA)
1. Memory-mapped I/O: When using memory-mapped I/O, the same address space is shared
by memory and I/O devices. Some addresses represent memory cells, while others represent
registers in I/O devices. No separate I/O instructions are needed in a CPU that uses memory-
mapped I/O. Instead, we can perform I/O operations using any instruction that can reference
memory.
For example, if DATAIN is the address of the input buffer associated with the keyboard, the
instruction Move DATAIN, R0 reads the data from DATAIN and stores them into processor
register R0.
Similarly, the instruction Move R0, DATAOUT sends the contents of register R0 to location
DATAOUT, which may be the output data buffer of a display unit or a printer. Most
computer systems use memory-mapped I/O. Some processors have special In and Out
instructions to perform I/O transfers.
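As a small illustration, the two Move instructions above can be written in C as direct accesses through pointers to the device addresses. The addresses below are assumptions chosen only for this sketch; an actual system defines its own I/O address map.

```c
#include <stdint.h>

#define DATAIN   ((volatile uint8_t *)0x4000A000u)   /* assumed keyboard input-buffer address */
#define DATAOUT  ((volatile uint8_t *)0x4000A004u)   /* assumed display output-buffer address */

/* Equivalent of "Move DATAIN, R0": read the keyboard input buffer. */
uint8_t read_keyboard(void)
{
    return *DATAIN;
}

/* Equivalent of "Move R0, DATAOUT": send a character to the display. */
void write_display(uint8_t ch)
{
    *DATAOUT = ch;
}
```

The volatile qualifier tells the compiler that every access must actually reach the device register and must not be cached or reordered.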
2. Program-controlled I/O:
Programmed I/O operations are the result of I/O instructions written in the computer
program. Each data item transfer is initiated by an instruction in the program. Usually, the transfer is between a CPU register and the peripheral. Other instructions are needed to transfer the data between the CPU and memory. Once a data transfer is initiated, the CPU is required to monitor the interface to see when another transfer can be made.
If the speed of an I/O device is in the right range, neither too fast for the processor to
read and write the signalling bits nor too slow for the processor to wait for its activity,
this form of signalling may be sufficient.
An example of data transfer from an I/O device through an interface into the CPU is shown in
the figure below.
By device
When a byte of data is available, the device places it in the I/O bus and enables its data valid
line.
By interface
Accepts the byte into its data register and enables the data accepted line. Sets F.
By device
Can now disable the data valid line, but it will not transfer another byte until the data
accepted line is disabled by the interface. This is according to the handshaking procedure
established.
By program
1. Read the status register.
2. Check the status of the flag bit and branch to step 1 if not set or to step 3 if set.
3. Read the data register.
The flag bit is then cleared to 0 by either the CPU or the interface, depending on how the
interface circuits are designed.
By interface
Once the flag is cleared, the interface disables the data accepted line and the device can then
transfer the next data byte.
Each byte is read into a CPU register and then transferred to memory with a store instruction.
The programmed I/O method is particularly useful in small low-speed computers or in systems that are dedicated to monitoring a device continuously. The difference in information transfer rate between the CPU and the I/O device makes this type of transfer inefficient.
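The program steps listed earlier (read the status register, check the flag, read the data register) can be sketched in C as a simple polling loop. The register addresses and the position of the ready flag are assumptions for illustration; a real device defines its own register layout.

```c
#include <stdint.h>
#include <stddef.h>

#define STATUS_REG  ((volatile uint8_t *)0x4000B000u)  /* assumed status-register address */
#define DATA_REG    ((volatile uint8_t *)0x4000B004u)  /* assumed data-register address   */
#define READY_FLAG  0x01u                              /* assumed position of the flag bit */

/* Read 'count' bytes from the device into 'buf' using programmed I/O. */
void pio_read(uint8_t *buf, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        while ((*STATUS_REG & READY_FLAG) == 0) {
            /* Steps 1-2: read the status register and branch back until the flag is set. */
        }
        buf[i] = *DATA_REG;   /* Step 3: read the data register; this typically clears the flag. */
    }
}
```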
3. Interrupt Initiated I/O
An interrupt is a hardware signal sent to the processor by an I/O device over a control line called the interrupt-request line. The routine executed in response to an interrupt request is called the interrupt-service routine.
For example, assume that an interrupt request arrives during execution of instruction i in the figure below. The processor first completes execution of instruction i. Then, it loads
the program counter with the address of the first instruction of the interrupt-service
routine. After execution of the interrupt-service routine, the processor has to come back to
instruction i + 1. Therefore, when an interrupt occurs, the current contents of the PC, which
point to instruction i + 1, must be put in temporary storage in a known location. A Return
from-interrupt instruction at the end of the interrupt-service routine reloads the PC from that
temporary storage location, causing execution to resume at instruction i + 1. In many
processors, the return address is saved on the processor stack. Alternatively, it may be
saved in a special location, such as a register provided for this purpose.
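The save-and-resume sequence can be illustrated with a small software model. This is only a conceptual sketch of the mechanism, not processor hardware; the instruction count, the interrupt point, and all names are illustrative.

```c
#include <stdio.h>

static int saved_pc;   /* models the temporary storage for the program counter */

static void interrupt_service_routine(void)
{
    printf("  servicing the device request\n");
}

int main(void)
{
    for (int pc = 0; pc < 5; pc++) {      /* pc models the program counter */
        printf("executing instruction %d\n", pc);
        if (pc == 2) {                    /* interrupt request arrives during instruction i = 2 */
            saved_pc = pc + 1;            /* save the address of instruction i + 1 */
            interrupt_service_routine();  /* transfer control to the ISR */
            pc = saved_pc - 1;            /* return from interrupt: execution resumes at i + 1 */
        }
    }
    return 0;
}
```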
a) Interrupt Hardware
If all interrupt-request signals INTR1 to INTRn are inactive, that is, if all switches are open, the voltage on the interrupt-request line will be equal to Vdd. This is the inactive state of the line. When a device requests an interrupt by closing its switch, the voltage on the line drops to 0, causing the interrupt-request signal, INTR, received by the processor to go to 1. Since the closing of one or more switches will cause the line voltage to drop to 0, the value of INTR is the logical OR of the requests from individual devices, that is, INTR = INTR1 + ... + INTRn. It is customary to use the complemented form, INTR, to name the interrupt-request signal on the common line, because this signal is active when in the low-voltage state.
1. Polling
The information needed to determine whether a device is requesting an interrupt is available in its status register, in the IRQ bit. When a device raises an interrupt request, it sets this bit to 1. The processor reads the status register of each I/O device in turn; the first device found with its IRQ bit set is serviced first. This scheme is called polling. The main disadvantage of polling is the time spent checking the IRQ bits of devices that have not requested any interrupt.
2. Vectored Interrupt
To reduce the time involved in the polling process, a device requesting an interrupt may
identify itself directly to the processor. Then, the processor can immediately start
executing the corresponding interrupt service routine. The term vectored interrupts refers to
all interrupt-handling schemes based on this approach. A device requesting an interrupt can
identify itself by sending a special code to the processor over the bus. This enables the
processor to identify individual devices even if they share a single interrupt-request line. The
code supplied by the device may represent the starting address of the interrupt-service
routine for that device. The code length is typically in the range of 4 to 8 bits. The remainder
of the address is supplied by the processor based on the area in its memory where the
addresses for interrupt-service routines are located. When the processor is ready to receive
the interrupt-vector code, it activates the interrupt-acknowledge line, INTA.
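A sketch of how a vectored interrupt can be dispatched in software is given below, assuming the code supplied by the device is used as an index into a table of interrupt-service routine addresses. The table size and the vector codes are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

typedef void (*isr_t)(void);

static void keyboard_isr(void) { puts("keyboard ISR"); }
static void disk_isr(void)     { puts("disk ISR"); }

/* Vector table: starting addresses of the interrupt-service routines. */
static isr_t vector_table[16] = {
    [2] = keyboard_isr,   /* assumed vector code for the keyboard */
    [5] = disk_isr,       /* assumed vector code for the disk     */
};

/* Called when the processor receives a vector code over the bus after INTA. */
static void dispatch(uint8_t vector_code)
{
    isr_t handler = vector_table[vector_code & 0x0F];
    if (handler != NULL)
        handler();
}

int main(void) { dispatch(5); return 0; }
```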
4. Direct Memory Access (DMA)
A special control unit is provided to allow transfer of a block of data directly between an
external device and the main memory, without continuous intervention by the processor.
This approach is called direct memory access, or DMA. DMA transfers are performed by
a control circuit that is part of the I/O device interface called DMA controller.
Although a DMA controller can transfer data without intervention by the processor, its
operation must be under the control of a program executed by the processor. To initiate
the transfer of a block of words, the processor sends the starting address, the number of
words in the block, and the direction of the transfer. On receiving this information, the
DMA controller proceeds to perform the requested operation. When the entire block has
been transferred, the controller informs the processor by raising an interrupt signal.
The figure above shows an example of the DMA controller registers that are accessed by the
processor to initiate transfer operations.
Two registers are used for storing the starting address and the word count.
The third register contains status and control flags. The R/W bit determines the direction of
the transfer. When this bit is set to 1 by a program instruction, the controller performs a
read operation, that is, it transfers data from the memory to the I/O device. Otherwise, it
performs a write operation.
When the controller has completed transferring a block of data and is ready to receive
another command, it sets the Done flag to 1.
Bit 30 is the Interrupt-enable flag, IE. When this flag is set to 1, it causes the controller
to raise an interrupt after it has completed transferring a block of data. Finally, the
controller sets the IRQ bit to 1 when it has requested an interrupt.
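The register set described above can be sketched in C as a memory-mapped structure. The base address and the bit positions of R/W, IRQ and Done are assumptions for illustration; only IE = bit 30 is stated in the text.

```c
#include <stdint.h>

typedef struct {
    volatile uint32_t start_address;    /* starting memory address        */
    volatile uint32_t word_count;       /* number of words in the block   */
    volatile uint32_t status_control;   /* R/W, IE, IRQ and Done flags    */
} dma_regs_t;

#define DMA       ((dma_regs_t *)0x40010000u)  /* assumed base address */
#define DMA_RW    (1u << 31)  /* 1 = read: memory -> I/O device (assumed bit position)   */
#define DMA_IE    (1u << 30)  /* interrupt-enable flag, bit 30 as stated in the text      */
#define DMA_DONE  (1u << 0)   /* set by the controller when the block is finished (assumed) */

/* Program the controller to move 'nwords' words starting at 'addr' from
 * memory to the device and to raise an interrupt when the block is done. */
void dma_start_read(uint32_t addr, uint32_t nwords)
{
    DMA->start_address  = addr;
    DMA->word_count     = nwords;
    DMA->status_control = DMA_RW | DMA_IE;   /* writing the control word starts the transfer */
}
```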
Cycle stealing: Requests by DMA devices for using the bus are always given higher
priority than processor requests. Among different DMA devices, top priority is given to
high-speed peripherals such as a disk, a high-speed network interface, or a graphics display
device. Since the processor originates most memory access cycles, the DMA controller
can be said to "steal" memory cycles from the processor. Hence, this interweaving
technique is usually called cycle stealing. Alternatively, the DMA controller may be given
exclusive access to the main memory to transfer a block of data without interruption. This is
known as block or burst mode.
I/O Interfaces
An I/O interface consists of the circuitry required to connect an I/O device to a computer bus. On one side of the interface we have the bus signals. On the other side we have a data path with its associated controls to transfer data between the interface and the I/O device; this side is called a port. The conversion from the parallel to the serial format, and vice versa, takes place inside the interface circuit.
PARALLEL PORT
Figure shows the hardware components needed for connecting a keyboard to a processor.
SERIAL PORT
A serial port is used to connect the processor to I/O devices that require transmission of data one bit at a time.
A serial port communicates in a bit-serial fashion on the device side and in a bit-parallel fashion on the bus side.
Serial interfaces require fewer wires, and hence serial transmission is convenient for connecting devices that are physically distant from the computer.
The speed of transmission of data over a serial interface is known as the "bit rate".
• Input shift register accepts input one bit at a time from the I/O device.
• Once all the 8 bits are received, the contents of the input shift register are loaded in parallel
into DATAIN register.
• Output data in the DATAOUT register are loaded into the output shift register.
• Bits are shifted out of the output shift register and sent out to the I/O device one bit at a time.
• As soon as the data from the input shift register are loaded into DATAIN, it can start accepting another 8 bits of data.
• The input shift register and the DATAIN register are both used at the input so that the input shift register can start receiving another set of 8 bits from the input device after loading its contents into DATAIN, before the processor reads the contents of DATAIN. This is called double-buffering; a small software model is sketched below.
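A minimal software model of this input path is given below, assuming bits arrive least-significant bit first; the variable names are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

static uint8_t input_shift_reg;  /* assembles the incoming byte bit by bit       */
static int     bits_received;    /* how many bits are currently in the shift reg */
static uint8_t DATAIN;           /* byte that the processor eventually reads     */
static bool    datain_full;      /* status flag seen by the processor            */

/* Called once per incoming serial bit (LSB first in this sketch). */
void serial_rx_bit(int bit)
{
    input_shift_reg = (uint8_t)((input_shift_reg >> 1) | ((bit & 1) << 7));
    if (++bits_received == 8) {
        DATAIN        = input_shift_reg;  /* parallel load into DATAIN            */
        datain_full   = true;             /* the processor may now read DATAIN    */
        bits_received = 0;                /* shift register is immediately free
                                             for the next byte (double-buffering) */
    }
}
```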
Devices that require a high-speed connection to the processor are connected directly to the processor bus.
For electrical reasons, only a few devices can be connected directly to the processor bus.
The motherboard usually provides another bus that can support more devices.
The processor bus and the other bus (called the expansion bus) are interconnected by a circuit called a "bridge".
Devices connected to the expansion bus experience a small delay in data transfers.
Design of a processor bus is closely tied to the architecture of the processor.
No uniform standard can be defined.
Expansion bus however can have uniform standard defined.
A number of standards have been developed for the expansion bus.
Three widely used bus standards:
◦ PCI (Peripheral Component Interconnect)
◦ SCSI (Small Computer System Interface)
◦ USB (Universal Serial Bus)
PCI Bus
Devices connected to the PCI bus appear to the processor as if they were connected directly
to the processor bus. They are assigned addresses in the memory address space of the
processor.
The PCI bus is designed primarily to support burst-mode data transfer, i.e., data are transferred in blocks rather than as single words.
At any given time, one device is the bus master. It has the right to initiate data transfers by
issuing read and write commands. A master is called an initiator in PCI terminology. This is
either a processor or a DMA controller.
The addressed device that responds to read and write commands is called a target.
The main bus signals used for transferring data are listed in the following table.
Signals whose names end with the symbol # are asserted when in the low-voltage state. The main difference between the PCI protocol and others is that, in addition to a target-ready signal (TRDY#), PCI also uses an initiator-ready signal, IRDY#.
Example:
Consider a bus transaction in which the processor reads four 32-bit words from the memory.
In this case, the initiator is the processor and the target is the memory.
A complete transfer operation on the bus, involving an address and a burst of data, is called a
transaction.
Clock Cycle 1
In clock cycle 1, the processor asserts FRAME# to indicate the beginning of a transaction.
At the same time, it sends the address on the AD lines and a command on the C/BE# lines.
In this case, the command will indicate that a read operation is requested and that the memory
address space is being used.
Clock Cycle 2
The processor removes the address and disconnects its drivers from the AD lines.
The selected target enables its drivers on the AD lines and fetches the requested data, to be placed on the bus during clock cycle 3.
It asserts DEVSEL# and maintains it in the asserted state until the end of the transaction.
Clock Cycle 3
The C/BE# lines, which were used to send a bus command in clock cycle 1, are used for a
different purpose during the rest of the transaction.
During clock cycle 3, the initiator asserts the initiator ready signal, IRDY#, to indicate that it
is ready to receive data.
If the target has data ready to send at this time, it asserts target ready, TRDY#, and sends a
word of data.
The initiator loads the data into its input buffer at the end of the clock cycle.
Clock Cycles 4 to 6
The remaining words of the burst are transferred in the following clock cycles; one word is transferred in each cycle in which both IRDY# and TRDY# are asserted.
SCSI Bus
SCSI refers to a standard bus defined by the American National Standards Institute (ANSI) under the designation X3.131.
Devices connected to the SCSI bus are not part of the address space of the processor in the
same way as devices connected to the processor bus.
The SCSI bus is connected to the PCI bus through a SCSI controller. This controller uses
DMA to transfer data packets from the main memory to the device, or vice versa.
A packet may contain a block of data, commands from the processor to the device, or status
information about the device.
A controller connected to a SCSI bus is one of two types - an initiator or a target. An initiator
has the ability to select a particular target and to send commands specifying the operations to
be performed. The disk controller operates as a target. It carries out the commands it receives
from the initiator.
The processor sends a command to the SCSI controller, which causes the following sequence
of events to take place:
1. The SCSI controller, acting as an initiator, contends for control of the bus.
2. When the initiator wins the arbitration process, it selects the target controller and hands
over control of the bus to it.
3. The target starts an output operation (from initiator to target); in response to this, the
initiator sends a command specifying the required read operation.
4. The target, realizing that it first needs to perform a disk seek operation, sends a message to
the initiator indicating that it will temporarily suspend the connection between them. Then it
releases the bus.
5. The target controller sends a command to the disk drive to move the read head to the first
sector involved in the requested read operation. Then, it reads the data stored in that sector
and stores them in a data buffer.
When it is ready to begin transferring data to the initiator, the target requests control of the
bus. After it wins arbitration, it reselects the initiator controller, thus restoring the suspended
connection.
6. The target transfers the contents of the data buffer to the initiator and then suspends the
connection again. Data are transferred either 8 or 16 bits in parallel, depending on the width
of the bus.
7. The target controller sends a command to the disk drive to perform another seek operation.
Then, it transfers the contents of the second disk sector to the initiator, as before. At the end
of this transfer, the logical connection between the two controllers is terminated.
8. As the initiator controller receives the data, it stores them into the main memory using the
DMA approach.
9. The SCSI controller sends an interrupt to the processor to inform it that the requested
operation has been completed.
Universal Serial Bus (USB) is an industry standard developed through a collaborative effort
of several computer and communications companies, including Compaq, Hewlett-Packard,
Intel, Lucent, Microsoft, Nortel Networks, and Philips. USB is a simple, low-cost mechanism to connect devices such as keyboards, microphones, cameras, and speakers to the computer.
The USB supports two speeds of operation, called low-speed (1.5 megabits/s) and full-speed (12 megabits/s). The most recent revision of the bus specification (USB 2.0) introduced a third speed of operation, called high-speed (480 megabits/s).
USB Architecture:
To accommodate a large number of devices that can be added or removed at any time, the
USB has the tree structure as shown in the Figure. Each node of the tree has a device called a
hub, which acts as an intermediate control point between the host and the I/O devices. At the
root of the tree, a root hub connects the entire tree to the host computer. The leaves of the tree
are the I/O devices being served (for example, keyboard, speaker, or digital TV), which are
called functions in USB terminology.
USB Protocols:
All information transferred over the USB is organized in packets, where a packet consists of
one or more bytes of information.
The information transferred on the USB can be divided into two broad categories: control and data.
1. Control Packets
Control packets perform such tasks as addressing a device to initiate data transfer,
acknowledging that data have been received correctly, or indicating an error.
Packet Fields
The first field of any packet is called the packet identifier, PID, which identifies the type of
that packet.
There are four bits of information in this field, but they are transmitted twice. The first time
they are sent with their true values, and the second time with each bit complemented, as
shown in Figure (a). This enables the receiving device to verify that the PID byte has been
received correctly.
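The check can be sketched in C as below. The sketch assumes the true PID value occupies the low nibble of the byte and its complement the high nibble; the receiver accepts the byte only if the two nibbles are exact complements.

```c
#include <stdint.h>
#include <stdbool.h>

uint8_t pid_encode(uint8_t pid4)              /* pid4: the 4-bit PID value */
{
    pid4 &= 0x0Fu;
    return (uint8_t)(pid4 | ((~pid4 & 0x0Fu) << 4));   /* true value + complemented copy */
}

bool pid_is_valid(uint8_t pid_byte)           /* receiver-side check */
{
    uint8_t low  = pid_byte & 0x0Fu;
    uint8_t high = (pid_byte >> 4) & 0x0Fu;
    return (uint8_t)(low ^ high) == 0x0Fu;    /* the two copies must be complements */
}
```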
Control packets used for controlling data transfer operations are called token packets.
They have the format shown in Figure (b). A token packet starts with the PID field, using one
of two PID values to distinguish between an IN packet and an OUT packet, which control
input and output transfers, respectively.
The PID field is followed by the 7-bit address of a device and the 4-bit endpoint number within that device.
The packet ends with 5 bits for error checking, using a method called cyclic redundancy check (CRC). The CRC bits are computed based on the contents of the address and endpoint fields.
2. Data Packets
Data packets, which carry input and output data, have the format shown in Figure (c). The packet identifier field is followed by up to 8192 bits of data, then 16 error-checking bits. Note
that data packets do not carry a device address or an endpoint number. This information is
included in the IN or OUT token packet that initiates the transfer.
Packet Transmission
The host computer sends a token packet of type OUT to the hub, followed by a data packet
containing the output data.
The PID field of the data packet identifies it as data packet number 0.
The hub verifies that the transmission has been error free by checking the error control bits,
and then sends an acknowledgment packet (ACK) back to the host.
The hub forwards the token and data packets downstream. All I/O devices receive this
sequence of packets, but only the device that recognizes its address in the token packet
accepts the data in the packet that follows.
After verifying that transmission has been error free, it sends an ACK packet to the hub.
Successive data packets on a full-speed or low-speed pipe carry the numbers 0 and 1,
alternately. This simplifies recovery from transmission errors.
If a token, data, or acknowledgment packet is lost as a result of a transmission error, the
sender resends the entire sequence.
By checking the data packet number in the PID field, the receiver can detect and discard
duplicate packets. High-speed data packets are sequentially numbered 0, 1, 2, 0, and so on.
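A sketch of how the alternating packet numbers allow a receiver to discard duplicates after a lost acknowledgment is given below; the variable names are illustrative and only the toggling logic is modelled.

```c
#include <stdbool.h>

static int expected_toggle = 0;   /* 0 = DATA0 expected next, 1 = DATA1 expected next */

/* Returns true if the packet is new and should be accepted,
 * false if it is a duplicate (a retransmission of a packet already acknowledged). */
bool accept_data_packet(int packet_toggle)
{
    if (packet_toggle != expected_toggle)
        return false;             /* duplicate: acknowledge again but discard the data */
    expected_toggle ^= 1;         /* flip the expected number for the next packet      */
    return true;
}
```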
MEMORY SYSTEM
Computer system performance depends on the memory system as well as the processor microarchitecture.
The processor communicates with the memory system over a memory interface.
Memory Interface
Figure shows the simple memory interface used in our multicycle MIPS processor. The
processor sends an address over the Address bus to the memory system. For a read,
MemWrite is 0 and the memory returns the data on the ReadData bus.
For a write, MemWrite is 1 and the processor sends data to memory on the WriteData bus.
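A small software model of this interface is sketched below, assuming a word-addressed memory of arbitrary size; it returns ReadData when MemWrite is 0 and stores WriteData when MemWrite is 1.

```c
#include <stdint.h>

#define MEM_WORDS 1024                 /* assumed memory size for the sketch */
static uint32_t mem[MEM_WORDS];

uint32_t memory_access(uint32_t address, uint32_t write_data, int mem_write)
{
    uint32_t index = address % MEM_WORDS;   /* word-addressed for simplicity */
    if (mem_write) {
        mem[index] = write_data;            /* write: MemWrite = 1 */
        return 0;
    }
    return mem[index];                      /* read: MemWrite = 0, return ReadData */
}
```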
MEMORY HIERARCHY
The processor first seeks data in a small but fast cache that is usually located on the same
chip.
If the data is not available in the cache, the processor then looks in main memory.
If the data is not there either, the processor fetches the data from virtual memory on the large
but slow hard disk.
1. Cache
The next level of the hierarchy is the processor cache, which holds copies of instructions and data stored in the much larger memory that is provided externally.
If the processor requests data that is available in the cache, it is returned quickly. This is
called a cache hit.
Otherwise, the processor retrieves the data from main memory (DRAM). This is called a
cache miss.
If the cache hits most of the time, then the processor seldom has to wait for the slow main
memory, and the average access time is low.
2. Main Memory
The next level in the hierarchy is called the main memory. This larger memory is
implemented using dynamic memory components.
The main memory is much larger but significantly slower than the cache memory. The access
time for the main memory is about ten times longer than the access time for the cache.
3. Hard Disk, Or Hard Drive
The third level in the memory hierarchy is the hard disk, or hard drive.
Computer systems use the hard disk to store data that does not fit in main memory.
The hard disk provides an illusion of more capacity than actually exists in the main memory; it is thus called virtual memory.
Disk drives provide a huge amount of inexpensive storage. They are very slow compared to
the semiconductor devices used to implement the main memory.
There are two levels of caches. A primary cache is always located on the processor chip. This cache is small because it competes for space on the processor chip. The primary cache is referred to as the level 1 (L1) cache. A larger, secondary cache is placed between the primary cache and the rest of the memory. It is referred to as the level 2 (L2) cache. It is usually implemented using SRAM chips.
MEMORY CHARACTERISTICS
1. Location:
The term location refers to whether memory is internal or external to the computer. Internal memory is often equated with main memory, but there are other forms of internal memory; the processor also requires its own local memory, in the form of registers.
External memory consists of peripheral storage devices, such as disk and tape, that are
accessible to the processor via I/O controllers.
2. Capacity:
An obvious characteristic of memory is its capacity. For internal memory, this is typically expressed in terms of bytes (1 byte = 8 bits) or words. Common word lengths are 8, 16, and 32 bits. External memory capacity is typically expressed in terms of bytes.
3. Unit of transfer:
For internal memory, the unit of transfer is equal to the number of electrical lines into and out
of the memory module.
Word: The natural unit of organization of memory. The size of the word is typically equal to
the number of bits used to represent an integer.
For external memory, data are often transferred in much larger units than a word, and these
are referred to as blocks.
4. Performance:
Access time (latency): For random-access memory, this is the time it takes to perform a read
or write operation. For non-random-access memory, access time is the time it takes to
position the read–write mechanism at the desired location.
Memory cycle time: This concept is primarily applied to random-access memory and consists
of the access time plus any additional time required before a second access can commence.
Transfer rate: This is the rate at which data can be transferred into or out of a memory unit.
5. Physical characteristics:
1. Volatile
In a volatile memory, information decays naturally or is lost when electrical power is
switched off.
2. Nonvolatile memory
Information once recorded remains without deterioration until deliberately changed; no
electrical power is needed to retain information. Magnetic-surface memories are nonvolatile.
Semiconductor memory may be either volatile or nonvolatile.
3. Organization
For random-access memory, the organization is a key design issue. By organization is meant
the physical arrangement of bits to form words.
MEMORY PERFORMANCE ANALYSIS
Memory system performance metrics are the miss rate (or hit rate) and the average memory access time (AMAT).
Miss and hit rates are calculated as:
Miss Rate = Number of misses / Number of total memory accesses = 1 − Hit Rate
Hit Rate = Number of hits / Number of total memory accesses = 1 − Miss Rate
Average memory access time (AMAT) is the average time a processor must wait for memory per load or store instruction. In a typical computer system, the processor first looks for the data in the cache. If the cache misses, the processor then looks in main memory. If main memory misses, the processor accesses virtual memory on the hard disk. Thus, AMAT is calculated as:
AMAT = t_cache + MR_cache (t_MM + MR_MM × t_VM)
where t_cache, t_MM, and t_VM are the access times of the cache, main memory, and virtual memory, and MR_cache and MR_MM are the cache and main-memory miss rates, respectively.
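A worked example of the AMAT formula is sketched below. All access times and miss rates are assumed values chosen only to illustrate the calculation.

```c
#include <stdio.h>

int main(void)
{
    double t_cache = 1.0, t_mm = 100.0, t_vm = 1.0e6;  /* assumed access times (cycles) */
    double mr_cache = 0.10, mr_mm = 0.01;              /* assumed miss rates            */

    /* AMAT = t_cache + MR_cache * (t_MM + MR_MM * t_VM) */
    double amat = t_cache + mr_cache * (t_mm + mr_mm * t_vm);

    printf("AMAT = %.1f cycles\n", amat);  /* 1 + 0.1*(100 + 0.01*1e6) = 1011 cycles */
    return 0;
}
```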
SEMICONDUCTOR MEMORIES
Semiconductor memory is fabricated on a silicon chip. The low cost of semiconductor memory (as compared to other memory devices) is the main reason for the ready availability and low cost of microcomputers nowadays.
The main characteristics of semiconductor memory are low cost, high density (bits per chip),
and ease of use.
Apart from these characteristics, memory can be graded in terms of capacity and speed of
access.
Volatile [RAM]
Memories whose contents can be read and also written to are called volatile memories. In a volatile memory, information decays naturally or is lost when electrical power is switched off.
Examples of this type are DRAM and SRAM.
Non-Volatile [ROM]
Memories whose contents can only be read. Some memories' contents are permanent, while other memory chips may be removed from the computer and reprogrammed. Information once recorded remains without deterioration until deliberately changed; no electrical power is needed to retain information.
Examples of this type are ROM, PROM, EPROM and EEPROM.
Random Access Memory (RAM)
Characteristics
1. It is possible both to read data from the memory and to write new data into the memory
easily and rapidly. Both the reading and writing are accomplished through the use of
electrical signals.
2. It is volatile. A RAM must be provided with a constant power supply. If the power is
interrupted, then the data are lost. Thus, RAM can be used only as temporary storage. The
two traditional forms of RAM used in computers are DRAM and SRAM.
Static Random Access Memory (SRAM)
SRAM is a digital device. In an SRAM, binary values are stored using traditional flip-flop logic-gate configurations. A static RAM will hold its data as long as power is supplied to it.
SRAMs consist of circuits that are capable of retaining their state as long as power is applied; hence they are called static memories.
They are volatile memories, because their contents are lost when power is interrupted.
Access times of static RAMs are in the range of a few nanoseconds.
However, the cost is usually high.
Two inverters are cross-connected to form a latch. The latch is connected to two bit lines by
transistors T1 and T2. These transistors act as switches that can be opened or closed under the
control of the word line. When the word line is at ground level, the transistors are turned off
and the latch retains its state. For example, let us assume that the cell is in state 1 if the logic
value at point X is 1 and at point Y is 0. This state is maintained as long as the signal on the
word line is at ground level.
Read Operation
In order to read the state of the SRAM cell, the word line is activated to close switches T1
and T2.
Word line = 1, access transistors are ON.
If the cell is in state 1, the signal on bit line b is high and the signal on bit line b′ is low. The opposite is true if the cell is in state 0. Thus, b and b′ are complements of each other.
Sense/Write circuits at the end of the bit lines monitor the states of b and b′ and set the output accordingly, by comparing the two bit lines:
• if b > b′, the output is 1
• if b < b′, the output is 0
Write operation
Word line = 1, access transistors are ON.
Place the appropriate value on bit line b and its complement on b′.
This forces the cell into the corresponding state; the data in the latch are overwritten with the new value.
The required signals on the bit lines are generated by the Sense/Write circuit.
SRAMs are fast, but they come at a high cost because their cells require several transistors.
Less expensive RAM can be implemented if simpler cells are used. However, such cells
do not retain their state indefinitely; hence they are called dynamic RAMs(DRAMs).
Information is stored in a dynamic memory cell in the form of a charge on a capacitor, and this charge can be maintained for only tens of milliseconds. Since the cell is required to store information for a much longer time, its contents must be periodically refreshed by restoring the capacitor charge to its full value.
DRAM Operation
• The word line is made active when a bit is read or written; this closes the transistor switch so that current can flow.
SRAM versus DRAM
• Both are volatile: power is needed to preserve the data.
• Dynamic cell: simpler to build, smaller, denser, and less expensive, but needs periodic refresh; suited to larger memory units.
• Static cell: faster; used for cache memory.
Read Only Memory (ROM)
Figure 1: ROM cell
A logic value 0 is stored in the cell if the transistor is connected to ground at point P; otherwise, a 1 is stored. The bit line is connected through a resistor to the power supply.
Read Operation
To read the state of the cell, the word line is activated to close the transistor switch.
As a result, the voltage on the bit line drops to near zero if there is a connection
between the transistor and ground.
If there is no connection to ground, the bit line remains at the high voltage level,
indicating a 1. A sense circuit at the end of the bit line generates the proper output
value. The state of the connection to ground in each cell is determined when the chip
is manufactured, using a mask with a pattern that represents the information to be
stored.
Both static and dynamic RAM chips are volatile, which means that they retain information
only while power is turned on.
There are many applications requiring memory devices that retain the stored information
when power is turned off.
For example, a small program may need to be stored in such a memory to start the bootstrap process of loading the operating system from a hard disk into the main memory.
Many embedded applications do not use a hard disk and require non-volatile memories to
store their software.
A special writing process is needed to place the information into a nonvolatile memory. Since
its normal operation involves only reading the stored data, a memory of this type is called a
read-only memory (ROM).
Figure shows the dot notation for a 4-word × 3-bit ROM containing the data shown.
A dot at the intersection of a row (wordline) and a column (bitline) indicates that the data bit is 1. For example, the top wordline has a single dot on Data1, so the data word stored at Address 11 is 010.
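Functionally, a ROM is a fixed lookup table indexed by the address, as sketched below. Only the word at address 11 (binary), i.e. index 3, is given in the text; the other three entries are illustrative placeholders.

```c
#include <stdint.h>

static const uint8_t rom[4] = {
    0x3,  /* address 00 - assumed contents */
    0x6,  /* address 01 - assumed contents */
    0x5,  /* address 10 - assumed contents */
    0x2,  /* address 11 - 010, as stated in the text */
};

uint8_t rom_read(uint8_t address)      /* 2-bit address, 3-bit data word */
{
    return rom[address & 0x3];
}
```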
A programmable ROM places a transistor in every bit cell but provides a way to
connect or disconnect the transistor to ground.
Some ROM designs allow the data to be loaded by the user, thus providing a
programmable ROM (PROM). Programmability is achieved by inserting a fuse at
point P in the above Figure.
Advantages
The cost of preparing the masks needed for storing a particular information pattern makes
ROMs cost effective only in large volumes. The alternative technology of PROMs provides a
more convenient and considerably less expensive approach, because memory chips can be
programmed directly by the user.
Erasable PROMs [EPROM]
Another type of ROM chip provides an even higher level of convenience. It allows
the stored data to be erased and new data to be written into it. Such an erasable,
reprogrammable ROM is usually called an EPROM.
It provides considerable flexibility during the development phase of digital systems.
Since EPROMs are capable of retaining stored information for a long time, they can
be used in place of ROMs or PROMs while software is being developed.
In this way, memory changes and updates can be easily made.
An EPROM cell has a structure similar to the ROM cell in Figure 1, except that the connection to ground at point P is made through a special transistor. The transistor is normally
turned off, creating an open switch. It can be turned on by injecting charge into it that
becomes trapped inside. Thus, an EPROM cell can be used to construct a memory in
the same way as the previously discussed ROM cell. Erasure requires dissipating the
charge trapped in the transistors that form the memory cells. This can be done by
exposing the chip to ultraviolet light, which erases the entire contents of the chip. To
make this possible, EPROM chips are mounted in packages that have transparent
windows. EPROMs replace the nMOS transistor and fuse with a floating-gate transistor.
INTERNAL ORGANIZATION OF A MEMORY CHIP
Figure shows a memory array with two address bits and three data bits.
The two address bits specify one of the four rows (data words) in the array. Each data
word is three bits wide. Figure 5.39(b) shows some possible contents of the memory
array. The depth of an array is the number of rows, and the width is the number of
columns, also called the word size.
Bit Cells
Memory arrays are built as an array of bit cells, each of which stores 1 bit of data.
Figure shows that each bit cell is connected to a wordline and a bitline.
For each combination of address bits, the memory asserts a single wordline that activates the bit cells in that row.
When the wordline is HIGH, the stored bit transfers to or from the bitline.
Each row of cells constitutes a memory word, and all cells of a row are connected to a
common line referred to as the word line, which is driven by the address decoder on the
chip. The cells in each column are connected to the Sense/Write circuits by two bit lines. The Sense/Write circuits are connected to the data input/output lines of the chip.
Read Operation:
During a Read operation, these circuits sense, or read, the information stored in the cells
selected by a word line and transmit this information to the output data lines.
Write Operation:
During a Write operation, the Sense/Write circuits receive input information from the
bidirectional data lines and store it in the cells of the selected word.
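The organization described above can be modelled in C as below, with the address decoder selecting one row and the Sense/Write circuits moving a whole word between that row and the data lines; the array size is an assumption for illustration.

```c
#include <stdint.h>

#define ROWS      4      /* depth: number of words in the array */
#define WORD_BITS 8      /* width: bits per word                */

static uint8_t cell_array[ROWS];   /* each element models one row of bit cells */

/* Read operation: decode the address, drive the selected word line, sense the row. */
uint8_t mem_read(uint8_t address)
{
    uint8_t row = address & (ROWS - 1);   /* address decoder selects one row        */
    return cell_array[row];               /* Sense/Write circuits drive the data lines */
}

/* Write operation: decode the address and store the word in the selected row. */
void mem_write(uint8_t address, uint8_t data)
{
    uint8_t row = address & (ROWS - 1);
    cell_array[row] = data;
}
```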