Learning Objectives
CHAPTER-3
3. INSTRUCTION CYCLE
The instruction cycle (also known as the fetch–decode–execute cycle or the fetch-execute
cycle) is the basic operational process of a computer system. It is the process by which a
computer retrieves a program instruction from its memory, determines what actions the
instruction describes, and then carries out those actions. This cycle is repeated continuously
by a computer’s central processing unit (CPU), from boot-up until the computer has shut
down. A group of wires that carries information between different components is called a bus.
Fetch – The control unit fetches the instruction from main memory; the instruction is stored at the
memory location indicated by the program counter (also known as the instruction counter).
Decode – Received instructions are decoded in the instruction register. This involves
breaking the operand field into its components based on the instruction’s operation code
(opcode).
Execute – The instruction's opcode specifies the CPU operation required. The program counter
indicates the instruction sequence for the computer. Instructions are loaded into the instruction
register and, as each one is executed, the program counter is incremented so that it points to the
next instruction in memory. Appropriate circuitry is then activated to perform the requested task.
As soon as the instruction has been executed, the machine cycle restarts with the fetch step.
3.2 T-STATES
A T-state is defined as one subdivision of an operation performed in one clock period. These
subdivisions are internal states synchronized with the system clock, and each T-state is precisely
equal to one clock period.
QUESTIONS:
1) What is an instruction?
2) Define T-states.
CHAPTER 4
1. Immediate addressing.
2. Register addressing.
3. Direct addressing.
4. Indirect addressing.
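As an illustration of these four addressing modes, the following 8085 instructions could be used (a sketch; the data byte and addresses are chosen only for illustration):

MVI A, 32H   ; immediate addressing - the data byte 32H is part of the instruction itself
MOV A, B     ; register addressing - the operand is held in register B
LDA 2050H    ; direct addressing - the 16-bit address of the operand is given in the instruction
MOV A, M     ; indirect addressing - the address of the operand is held in the H-L register pair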
Types                                                       Examples
2. Specific data byte to a register or a memory location.  2. Load register B with the data byte 32H.
1. In data transfer (copy) operations, the contents of the source are not destroyed; only the contents of the destination
are changed. The data copy instructions do not affect the flags.
2. Arithmetic and logical operations are performed with the contents of the accumulator, and the
results are stored in the accumulator (with some exceptions). The flags are affected
according to the results.
3. Any register including the memory can be used for increment and decrement.
4. A program sequence can be changed either conditionally or by testing for a given data
condition.
1. One-word or 1-byte instructions
2. Two-word or 2-byte instructions
3. Three-word or 3-byte instructions
In the 8085, "byte" and "word" are synonymous because it is an 8-bit microprocessor. However,
instructions are commonly referred to in terms of bytes rather than words.
These instructions are 1-byte instructions performing three different tasks. In the first instruction,
both operand registers are specified. In the second instruction, the operand
B is specified and the accumulator is assumed. Similarly, in the third instruction, the accumulator is
assumed to be the implicit operand. These instructions are stored in 8-bit binary format in memory;
each requires one memory location.
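For illustration, three 1-byte instructions of this kind might be:

MOV C, A   ; copy the accumulator into register C - both operand registers are specified
ADD B      ; add register B to the accumulator - B is specified, the accumulator is assumed
CMA        ; complement the accumulator - the accumulator is the implicit operand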
MOV rd, rs
rd <-- rs copies contents of rs into rd.
Coded as 01 ddd sss, where ddd is the code for one of the seven general registers which is the destination
of the data, and sss is the code of the source register.
Example: MOV A,B
Coded as 01111000 = 78H = 170 octal (octal was used extensively in instruction design of such
processors).
ADD r
A <-- A + r
ADI data
A <-- A + data. Assume that the data byte is 32H; the assembly language instruction is then written as ADI 32H.
OUT port
(port) <-- A, where port is an 8-bit device address. Since the second byte is not the data itself but points to the port to which the data is sent, OUT is an example of direct addressing.
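A short sketch of how such two-byte instructions appear in memory (opcode followed by one byte of data; the port address 01H is assumed purely for illustration):

MVI A, 32H   ; 3E 32 - load the immediate data byte 32H into the accumulator
ADI 32H      ; C6 32 - add the immediate data byte 32H to the accumulator
OUT 01H      ; D3 01 - copy the accumulator to the output port at address 01H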
In a three-byte instruction, the first byte specifies the opcode, and the following two bytes specify the
16-bit address. Note that the second byte is the low-order address and the third byte is the high-order address.
The format is therefore: opcode + data byte + data byte. Such an instruction requires three memory locations.
Example: LXI H, 0520H, coded as 21H 20H 05H in three bytes. This is also immediate addressing.
LDA addr
A <-- (addr). addr is a 16-bit address, stored low-order byte first and high-order byte second. Example: LDA 2134H coded as 3AH 34H 21H.
This is also an example of direct addressing.
4.8 Programs
Example 1: Addition of two 8-bit numbers whose sum is 8 bits.
Explanation: This assembly language program adds two 8-bit numbers stored in two memory
locations. The sum of the two numbers is 8-bits only. The necessary algorithm and flow charts are
given below.
ALGORITHM:
Step 1: Initialize the H-L pair with the memory address XX00 (say: 9000).
Step 5: Add the contents of the memory location indicated by the memory pointer to the accumulator.
Step 6: Store the sum in memory.
Step 7: Halt.
PROGRAM:
Memory address   Hex code    Label   Mnemonics (Op-code, Operand)   Comments
8000             21,00,90            LXI H, 9000                    Initialise the memory pointer to point to the first data location 9000.
8004             00
800A             90
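A minimal complete version of such a program, assuming the two numbers are stored at 9000H and 9001H and the 8-bit sum is stored at 9002H, might look like this:

LXI H, 9000H   ; point the H-L pair at the first number
MOV A, M       ; accumulator <-- first number
INX H          ; point at the second number (9001H)
ADD M          ; accumulator <-- first number + second number
INX H          ; point at the result location (9002H)
MOV M, A       ; store the 8-bit sum
HLT            ; stop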
Further example programs follow the same pattern: one handles a sum larger than 8 bits (the carry is counted in register C and later moved to the accumulator with MOV A, C), and another adds two decimal numbers, using the DAA instruction to adjust the binary sum to a valid BCD result.
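A minimal sketch of such a decimal-addition routine, assuming the two packed-BCD operands are at 9000H and 9001H and the adjusted sum (ignoring any carry beyond 99) is stored at 9002H:

LXI H, 9000H   ; point the H-L pair at the first BCD operand
MOV A, M       ; accumulator <-- first operand
INX H
ADD M          ; binary addition of the two operands
DAA            ; decimal-adjust the accumulator so the result is valid packed BCD
INX H
MOV M, A       ; store the adjusted sum at 9002H
HLT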
QUESTIONS
A. Very short answer type questions.
1. Define opcode.
2. Define 1 byte instruction.
3. Give an example of 2 byte instruction.
4. Explain logical instructions.
5. Explain LDA 2000 instruction.
6. Explain PUSH and POP instructions.
7. Explain 4 data transfer instructions.
8. What is the function of DAA instruction?
9. Differentiate between RIM and SIM.
10. What is direct addressing?
CHAPTER 5
5. Memory mapping
In memory-mapped I/O interfacing with the 8085 microprocessor, the I/O devices are not given
separate addresses; instead, each I/O device is treated as a memory location, with an address in the range
0000H to FFFFH (64K). Some part of this address space is reserved for I/O devices. The advantage is that
any instruction that references memory can also transfer data between an I/O device and the
microprocessor, as long as the I/O port is assigned to the memory address space rather than to
the I/O address space. The register associated with the I/O port is simply treated as a memory
location.
We can discuss memory-mapped I/O interfacing with the 8085 microprocessor with an
example in which address bit A15 designates whether instructions reference memory or an I/O
device. If A15 = 0, a memory register is addressed; if A15 = 1, a memory-mapped I/O device
is addressed. This assignment allocates the first 32K bytes of the memory address space to memory and
the second 32K to memory-mapped I/O devices. External logic generates device-select pulses for
memory-mapped I/O only when IO/M = 0, the appropriate address is on the address bus, and a RD or WR
strobe occurs.
Input and output transfers using memory-mapped I/O are not limited to the accumulator. For
example, some of the 8085A instructions that can be used for input from memory-mapped I/O ports are:
MOV r, M :- Move the contents of the input port whose address is available in the (H, L) register pair to any
internal register.
LDA addr :- Load the accumulator with the contents of the input port whose address is available as the
second and third bytes of the instruction.
Other instructions, such as ANA M and ADD M, provide input data transfer and computation in a
single instruction. Some instructions that output data to memory-mapped ports are:
MOV M,r
STA addr
MVI M, data
SHLD addr
LHLD and SHLD carry out 16-bit I/O transfers with single instructions, which reduces program
execution time considerably. The price paid for this added capability is a reduction in directly
addressable main memory and the necessity of decoding a 16-bit rather than an 8-bit address.
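As a sketch of how ordinary memory-reference instructions reach memory-mapped ports (the port addresses 8000H and 8001H are assumed for illustration, with A15 = 1 as in the example above):

LXI H, 8000H   ; H-L holds the address of a memory-mapped output port
MVI A, 55H     ; data byte to be sent
MOV M, A       ; writing to this "memory location" actually transfers the byte to the port
LDA 8001H      ; reading address 8001H loads data from a memory-mapped input port into the accumulator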
Now consider how the microprocessor behaves during memory-mapped I/O interfacing. When the
microprocessor puts out an address and generates a control strobe for a memory read, it has
no way of determining whether the device that responds with data is a memory device or an I/O
device. It only requires that the device respond within the allowable access time or use the
READY line to request a sufficient number of WAIT states. For a write, it supplies an address, data and a
write strobe and continues its operations; external logic determines whether memory, I/O or
anything at all receives the data transferred.
Memory address decoding is nothing but assigning an address to each location in the memory
chip. The data stored in the memory is accessed by specifying its address. Memory address can
be decoded in two ways:
1. Absolute or Fully decoding and
2. Linear Select or Partial decoding
There are many advantages in absolute address decoding.
1. Each memory location has only one address, there is no duplication in the address
2. Memory can be placed contiguously in the address space of the microprocessor
3. Future expansion can be made easily without disturbing the existing circuitry
There are a few disadvantages of this method:
1. Extra decoders are necessary
2. Some delay will be produced by these extra decoders.
The main advantage of linear select decoding is its simplified decoding circuit. This reduces the
hardware design cost. But there are many disadvantages in this decoding.
1. Multiple addresses are provided for the same location
2. Complete memory space of the microprocessor is not efficiently used
3. Adding or interfacing ICs with already existing circuitry is difficult.
In peripheral-mapped I/O interfacing, the IN instruction is used to access an input device and the OUT
instruction is used to access an output device. Each I/O device is identified by a unique 8-bit
address assigned to it. Since the control signals used to access input and output devices are
different, and all I/O devices use 8-bit addresses, a maximum of 256 (2^8) input devices and 256
output devices can be interfaced with the 8085.
It will be clearer if we discuss the topic with an example; below, we take an example and discuss
how peripheral-mapped I/O interfacing works.
As discussed above, in peripheral-mapped I/O interfacing the IN instruction is used to get data from a
DIP switch and store it in the accumulator. The steps involved in the execution of this instruction are:
i. The address F0H is placed on lines A0 – A7, and a copy of it on lines A8 – A15.
ii. The IOR signal is activated (IOR = 0), which makes the selected input device place its data on the data bus.
iii. The data on the data bus is read and stored in the accumulator.
Fig. shows the interfacing of DIP switch.
A7 A6 A5 A4 A3 A2 A1 A0
1 1 1 1 0 0 0 0 = F0H
The A0 – A7 lines are connected to a NAND gate decoder such that the output of the NAND gate goes low only when the address on these lines is F0H.
The output of the NAND gate is ORed with the IOR signal, and the output of the OR gate is connected to
1G and 2G of the 74LS244. When the 74LS244 is enabled, data from the DIP switch is placed on the
data bus of the 8085. The 8085 reads this data and stores it in the accumulator; thus the data from the DIP
switch is transferred to the accumulator.
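The program side of this transfer is a single IN instruction; a minimal sketch (the memory address used to save the result is assumed):

IN 0F0H      ; place F0H on A0-A7, activate IOR, and read the DIP-switch data into the accumulator
STA 2050H    ; save the switch settings in memory for later use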
5.3 Difference between memory mapped I/O and I/O mapped I/O
QUESTIONS
1. Define memory.
LEARNING OBJECTIVES:
Concept of interrupt.
Concept of maskable and non-maskable interrupts.
Edge-triggered and level-triggered interrupts.
Software and hardware interrupts of 8085.
Servicing interrupts.
CHAPTER 6
6.1 INTRODUCTION
An interrupt is a signal sent to the processor that interrupts the current process. It may be
generated by a hardware device or a software program.
Software interrupts are used to handle errors and exceptions that occur while a program is
running. For example, if a program expects a variable to be a valid number, but the value is null,
an interrupt may be generated to prevent the program from crashing. It allows the program to
change course and handle the error before continuing. Similarly, an interrupt can be used to
break an infinite loop, which could create a memory leak or cause a program to be unresponsive.
Both hardware and software interrupts are processed by an interrupt handler, also called an
interrupt service routine, or ISR. When a program receives an interrupt request, the ISR handles
the event and the program resumes. Since interrupts are often as brief as a keystroke or mouse
click, they are often processed in less than a millisecond.
A level-triggered interrupt is signalled by holding the interrupt request line at its active level; the line
remains asserted as long as at least one (one or more than one) of the sharing devices is signalling an
outstanding interrupt. Level-triggered interrupts are favored by some because it is easy to share
the interrupt request line without losing interrupts when multiple shared devices interrupt
at the same time. Upon detecting assertion of the interrupt line, the CPU must search through
the devices sharing the interrupt request line until the one that triggered the interrupt is detected.
After servicing this device, the CPU may recheck the interrupt line status to determine whether
any other devices also need service. If the line is now de-asserted, the CPU avoids checking the
remaining devices on the line. Since some devices interrupt more frequently than others, and
other device interrupts are particularly expensive, a careful ordering of device checks is
employed to increase efficiency. The original PCI standard mandated level-triggered interrupts
because of this advantage of sharing interrupts.
There are also serious problems with sharing level-triggered interrupts. As long as any device
on the line has an outstanding request for service the line remains asserted, so it is not
possible to detect a change in the status of any other device. Deferring servicing a low-priority
device is not an option, because this would prevent detection of service requests from higher-
priority devices. If there is a device on the line that the CPU does not know how to service,
then any interrupt from that device permanently blocks all interrupts from the other devices.
6.5 Edge-triggered INTERRUPT
An edge-triggered interrupt is an interrupt signalled by a level transition on the interrupt line,
either a falling edge (high to low) or a rising edge (low to high). A device, wishing to signal an
interrupt, drives a pulse onto the line and then releases the line to its inactive state. If the pulse
is too short to be detected by polled I/O then special hardware may be required to detect the
edge.
Multiple devices may share an edge-triggered interrupt line if they are designed to. The
interrupt line must have a pull-down or pull-up resistor so that when not actively driven it
settles to one particular state. Devices signal an interrupt by briefly driving the line to its non-
default state, and let the line float (do not actively drive it) when not signalling an interrupt.
This type of connection is also referred to as open collector. The line then carries all the pulses
generated by all the devices. (This is analogous to the pull cord on some buses and trolleys
that any passenger can pull to signal the driver that they are requesting a stop.) However,
interrupt pulses from different devices may merge if they occur close in time. To avoid losing
interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line
is pulled up and driven low). After detecting an interrupt the CPU must check all the devices
for service requirements.
Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have
with sharing. Service of a low-priority device can be postponed arbitrarily, and interrupts
will continue to be received from the high-priority devices that are being serviced. If there
is a device that the CPU does not know how to service, it may cause a spurious interrupt, or
even periodic spurious
interrupts, but it does not interfere with the interrupt signalling of the other devices. However,
it is fairly easy for an edge triggered interrupt to be missed - for example if interrupts have to
be masked for a period - and unless there is some type of hardware latch that records the event
it is impossible to recover. Such problems caused many "lockups" in early computer hardware
because the processor did not know it was expected to do something. More modern hardware
often has one or more interrupt status registers that latch the interrupt requests; well written
edge-driven interrupt software often checks such registers to ensure events are not missed.
The elderly Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, but does
not mandate that devices be able to share them. The parallel port also uses edge-triggered
interrupts. Many older devices assume that they have exclusive use of their interrupt line,
making it electrically unsafe to share them. However, ISA motherboards include pull-up
resistors on the IRQ lines, so well-behaved devices share ISA interrupts just fine.
ISR Stands for "Interrupt Service Routine." An ISR (also called an interrupt handler) is a
software process invoked by an interrupt request from a hardware device. It handles
the request and sends it to the CPU, interrupting the active process. When the ISR is complete,
the process is resumed.
A basic example of an ISR is a routine that handles keyboard events, such as pressing or
releasing a key. Each time a key is pressed, the ISR processes the input. For example, if
you press and hold the right arrow key in a text file, the ISR will signal to the CPU that the
right arrow key is depressed. The CPU sends this information to the active word processor or
text editing program, which will move the cursor to the right. When you let go of the key, the
ISR handles the "key up" event. This interrupts the previous "key down" state, which signals to
the program to stop moving the cursor.
QUESTIONS:
MULTIPLE CHOICE QUESTIONS:
1) What is an interrupt?
LEARNING OBJECTIVES:
CHAPTER-7
7.1 INTRODUCTION
Data transfer is the process of using computing techniques and technologies to transmit or
transfer electronic or analog data from one computer node to another. Data is transferred in the
form of bits and bytes over a digital or analog medium, and the process enables digital or analog
communications and its movement between devices.
Data transfer is also known as data transmission.
Data transfer utilizes various communication medium formats to move data between one or more
nodes. Transferred data may be of any type, size and nature. Analog data transfer typically sends
data in the form of analog signals, while digital data transfer converts data into digital bit
streams. For example, data transfer from a remote server to a local computer is a type of digital
data transfer.
Moreover, data transfer also may be accomplished through the use of network-less
environments/modes, such as copying data to an external device and then copying from that
device to another.
4. The processor will periodically check the status of the I/O module until it finds that the
operation is complete.
1. Each input is read after first testing whether the device is ready with the input (a state
reflected by a bit in a status register).
2. The program waits for the ready status by repeatedly testing the status bit, until all
targeted bytes are read from the input device.
3. The program is in a busy (non-waiting) state only after the device gets ready; otherwise it is in a wait state.
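A minimal 8085 sketch of this status-polling loop (the status and data port addresses, the position of the "ready" bit, and the destination address are all assumed for illustration):

POLL:  IN 01H       ; read the device status port
       ANI 01H      ; isolate the "data ready" bit
       JZ POLL      ; not ready yet - keep testing the status bit
       IN 02H       ; device ready - read the data port
       STA 2050H    ; store the received byte in memory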
In synchronous transmission, data moves in a complete paired approach in the form of chunks or
frames. Synchronization between the source and target is required so that the source knows
where the new byte begins since there is no space between the data.
Synchronous transmission is effective, dependable and is utilized for transmitting a large amount
of data. It offers real-time communication between linked devices.
An example of synchronous transmission would be the transfer of a large text file. Before the
file is transmitted, it is first dissected into blocks of sentences. The blocks are then transferred
over the communication link to the target location. Because there are no start and stop bits, the data
transfer rate is quicker, but there is a possibility of more errors occurring. Over time the clocks will get
out of sync, and the target device would have the incorrect timing, so some bytes could become corrupted
due to lost bits. To resolve this issue, there is a need for regular re-synchronization of the clocks, as well as
the use of check digits to make sure that the bytes are correctly received and translated.
Examples of Synchronous Transmission
Video conferencing
Telephonic conversations
Face-to-face interactions
In asynchronous transmission, each character is preceded by a start bit and followed by one or more stop bits.
There may be gaps or spaces between characters.
Examples of Asynchronous Transmission
Emails
Letters
Radios
Televisions
We discussed the programmed I/O data transfer method, and we saw that in programmed I/O the
microprocessor is busy all the time checking for the availability of data from the slower I/O devices,
and also busy checking whether the I/O device is ready for the data transfer. In other words, in this
data transfer scheme some of the microprocessor's time is wasted in waiting while an I/O device is
getting ready. To overcome this problem, interrupt-driven I/O data transfer was introduced.
The interrupt-driven I/O data transfer method is very efficient because no microprocessor time is
wasted in waiting for an I/O device to be ready. In this method the I/O device informs the
microprocessor whenever it is ready for the data transfer. This is achieved by interrupting the
microprocessor; as we know, the interrupt is a hardware facility provided on the microprocessor.
Now consider the working process of interrupt-driven I/O data transfer. At the beginning, the
microprocessor initiates the data transfer by requesting the I/O device 'to get ready' and then
continues executing its original program rather than wasting its time checking the status of the
I/O device. Whenever the device is ready to accept or supply data, it informs the processor
through a control signal, known as the interrupt (INTR) signal. In response to this interrupt signal,
the microprocessor sends back an interrupt acknowledge signal to the I/O device, indicating that it
has received the request. It then suspends its job after executing the current instruction, saves the
contents and status of the program counter on the stack, and jumps to the subroutine program.
This subroutine program is called Interrupt Service Subroutine (ISS) program. The ISS
saves the processor status into stack; and after executing the instruction for the data
transfer, it restores the processor status and then returns to main program.
In both of the methods discussed above, the data transfer between I/O devices and external memory is
routed through the accumulator. Now consider bulk data transfer from I/O devices to memory or
vice versa: these two methods are time consuming and quite uneconomical, even when the speed
of the I/O devices matches the speed of the microprocessor, because the data is first transferred to the
accumulator and then to the concerned device.
To overcome these problems, the direct memory access data transfer method was introduced.
The Direct Memory Access (DMA) data transfer method is used for bulk data transfer
from I/O devices to memory or vice versa. In this method, I/O devices are allowed
to transfer the data directly to the external memory without being routed through
accumulator. For this reason the microprocessor relinquishes the control over the data
bus and address bus, so that these can be used for transfer of data between the devices.
Working principle of direct memory access data transfer
For a data transfer using the DMA process, the I/O device sends a request to the microprocessor in
the form of a HOLD signal. On receipt of such a request, the microprocessor relinquishes the address
and data buses and informs the I/O device of the situation by sending the acknowledge signal HLDA.
The I/O device withdraws the request when the data transfer between the I/O device and external
memory is complete.
Briefly, the working principle of the DMA controller is as follows. The DMA controller is used with the
microprocessor and helps to generate the addresses for the data to be transferred from the I/O devices.
The peripheral device sends the request signal (DMARQ) to the DMA controller, and the DMA controller
in turn passes it to the microprocessor (HOLD signal). On receipt of the DMA request the
microprocessor sends an acknowledge signal (HLDA) to the DMA controller. On
receipt of this signal (HLDA) the DMA controller sends a DMA acknowledge signal
(DMACK) to the I/O device. The DMA controller then takes over the control of the buses
of microprocessor and controls the data transfer between RAM and I/O device. When the
data transfer is complete, DMA controller returns the control over the buses to the
microprocessor by disabling the HOLD and DMACK signals.
7.11 SERIAL INPUT AND OUTPUT DATA
In computing, a serial port is a serial communication interface through which information
transfers in or out one bit at a time (in contrast to a parallel port). Throughout most of the history
of personal computers, data was transferred through serial ports to devices such as modems,
terminals, and various peripherals.
QUESTIONS:
4) What is DMA?
LEARNING OBJECTIVES:
CHAPTER-8
8.1. INTRODUCTION
A programmable peripheral device can perform various input/output functions. It has an internal
register called control register. Such a device can be used to perform specific functions by writing
instructions in its control register. Furthermore the functions can be changed anytime during the
execution of a program by writing a control word or instruction in its control register. These
programmable peripheral devices are flexible, versatile and economical.
8.2 8255 PPI
Programmable peripheral interface 8255
PPI 8255 is a general purpose programmable I/O device designed to interface the CPU with its
outside world such as ADC, DAC, keyboard etc. We can program it according to the given
condition. It can be used with almost any microprocessor.
It consists of three 8-bit bidirectional I/O ports i.e. PORT A, PORT B and PORT C. We can
assign different ports as input or output functions.
It consists of 40 pins and operates on a +5V regulated power supply. Port C is further divided into
two 4-bit ports, i.e. port C lower and port C upper, and port C can work either in BSR (bit set/reset)
mode or in mode 0 of the input-output mode of the 8255. Port B can work either in mode 0 or in mode 1 of
the input-output mode. Port A can work in mode 0, mode 1 or mode 2 of the input-output mode.
It has two control groups, control group A and control group B. Control group A consists of port A
and port C upper. Control group B consists of port C lower and port B.
Depending upon the values of CS', A1 and A0 we can select different ports in different modes, as
input-output function or BSR. This is done by writing a suitable word in the control register (control
word D0 – D7).
CS'  A1  A0   Selection           Address
0    0   0    PORT A              80H
0    0   1    PORT B              81H
0    1   0    PORT C              82H
0    1   1    Control Register    83H
1    X   X    No Selection        X
Pin diagram –
Operating modes –
1. Bit set/reset (BSR) mode – If the MSB of the control word (D7) is 0, the PPI works in BSR mode. In this mode only the port C bits are used for set or reset.
2. Input-Output mode – If the MSB of the control word (D7) is 1, the PPI works in input-output mode. This is further
divided into three modes:
Mode 0 –In this mode all the three ports (port A, B, C) can work as simple input
function or simple output function. In this mode there is no interrupt handling
capacity.
Mode 1 – Handshake I/O mode or strobed I/O mode. In this mode either port A or
port B can work as simple input port or simple output port, and port C bits are used
for handshake signals before actual data transmission. It has interrupt handling
capacity and input and output are latched.
Example: A CPU wants to transfer data to a printer. In this case since speed of
processor is very fast as compared to relatively slow printer, so before actual data
transfer it will send handshake signals to the printer for synchronization of the speed
of the CPU and the peripherals.
Mode 2 – Bi-directional data bus mode. In this mode only port A works in this way, while port B
can work either in mode 0 or mode 1. Six bits of port C are used as handshake signals. It
also has interrupt handling capacity.
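As a small sketch of programming the device (assuming the 8255 is decoded at the port addresses 80H – 83H shown in the table above), the control word 80H selects mode 0 with ports A, B and C all as outputs:

MVI A, 80H    ; control word 1000 0000: I/O mode, mode 0, ports A, B and C all outputs
OUT 83H       ; write the control word to the 8255 control register (address 83H)
MVI A, 0FFH   ; data pattern to output
OUT 80H       ; send it to port A, driving all eight port A lines high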
The Intel 8253 and 8254 are Programmable Interval Timers (PITs) designed for microprocessors
to perform timing and counting functions using three 16-bit registers. Each counter has 2 input
pins, i.e. Clock & Gate, and 1 pin for “OUT” output. To operate a counter, a 16-bit count is
loaded in its register. On command, it begins to decrement the count until it reaches 0, then it
generates a pulse that can be used to interrupt the CPU.
8253: Reads and writes of the same counter cannot be interleaved.
8254: Reads and writes of the same counter can be interleaved.
These three counters can be programmed for either binary or BCD count.
The 8254 has a powerful command called the READ BACK command, which allows the user to
check the count value, the programmed mode, the current mode, and the current status
of the counter.
8253 Architecture
The architecture of 8253 looks as follows −
In the above figure, there are three counters, a data bus buffer, Read/Write control logic, and a
control register. Each counter has two input signals - CLOCK & GATE, and one output signal -
OUT.
Address lines A0 & A1 of the CPU are connected to lines A0 and A1 of the
8253/54, and CS is tied to a decoded address. The control word register and
counters are selected according to the signals on lines A0 & A1.
A1 A0 Result
0 0 Counter 0
0 1 Counter 1
1 0 Counter 2
X X No Selection
A1 A0 RD WR CS Result
0 0 1 0 0 Write Counter 0
0 1 1 0 0 Write Counter 1
1 0 1 0 0 Write Counter 2
0 0 0 1 0 Read Counter 0
0 1 0 1 0 Read Counter 1
1 0 0 1 0 Read Counter 2
1 1 0 1 0 No operation
X X 1 1 0 No operation
X X X X 1 No operation
Counters
Each counter consists of a single 16-bit down counter, which can operate in either
binary or BCD. Its input and output are configured by the selection of modes stored in
the control word register. The programmer can read the contents of any of the three
counters without disturbing the actual count in progress.
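A minimal sketch of loading counter 0 (the port addresses 40H for counter 0 and 43H for the control word register are assumed for illustration; mode 3, binary count):

MVI A, 36H    ; control word 0011 0110: counter 0, load LSB then MSB, mode 3, binary counting
OUT 43H       ; write the control word to the 8253/8254 control word register
MVI A, 00H
OUT 40H       ; low byte of the initial count
MVI A, 10H
OUT 40H       ; high byte - counter 0 now counts down from 1000H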
DMA stands for Direct Memory Access. It is designed by Intel to transfer data at the fastest rate. It
allows the device to transfer the data directly to/from memory without any interference of the
CPU.
Using a DMA controller, the device requests the CPU to hold its data, address and control bus, so
the device is free to transfer data directly to/from the memory. The DMA data transfer is initiated
only after receiving HLDA signal from the CPU.
Initially, when a device has to transfer data between itself and the memory,
the device has to send a DMA request (DRQ) to the DMA controller.
The DMA controller sends Hold request (HRQ) to the CPU and waits for the
CPU to assert the HLDA.
Then the microprocessor tri-states all the data bus, address bus, and control bus.
The CPU leaves the control over bus and acknowledges the HOLD request through
HLDA signal.
Now the CPU is in HOLD state and the DMA controller has to manage the
operations over buses between the CPU, memory, and I/O devices.
Features of 8257
Here is a list of some of the prominent features of 8257 −
It has four channels which can be used over four I/O devices.
Each channel can perform read transfer, write transfer and verify transfer
operations.
It generates a MARK signal to the peripheral device to indicate that 128 bytes have been
transferred.
8257 Architecture
The following image shows the architecture of 8257 −
DRQ0−DRQ3
These are the four individual channel DMA request inputs, which are used by the
peripheral devices for using DMA services. When the fixed priority mode is selected,
then DRQ0 has the highest priority and DRQ3 has the lowest priority among them.
DACK0 − DACK3
These are the active-low DMA acknowledge lines, which updates the requesting
peripheral about the status of their request by the CPU. These lines can also act as
strobe lines for the requesting devices.
D0 − D7
These are bidirectional, data lines which are used to interface the system bus with the
internal data bus of DMA controller. In the Slave mode, it carries command words to
8257 and status word from 8257. In the master mode, these lines are used to send
higher byte of the generated address to the latch. This address is further latched using
ADSTB signal.
IOR
It is an active-low bidirectional tri-state input line, which is used by the CPU to read
internal registers of 8257 in the Slave mode. In the master mode, it is used to read data
from the peripheral devices during a memory write cycle.
IOW
It is an active-low bidirectional tri-state line, which is used to load the contents of the
data bus to the 8-bit mode register or upper/lower byte of a 16-bit DMA address
register or terminal count register. In the master mode, it is used to load the data to the
peripheral devices during DMA memory read cycle.
CLK
It is a clock frequency signal which is required for the internal operation of 8257.
RESET
This signal is used to RESET the DMA controller by disabling all the DMA channels.
A0 - A3
These are the four least significant address lines. In the slave mode, they act as an
input, which selects one of the registers to be read or written. In the master mode, they
are the four least significant memory address output lines generated by 8257.
CS
It is an active-low chip select line. In the Slave mode, it enables the read/write
operations to/from 8257. In the master mode, it disables the read/write operations
to/from 8257.
A4 - A7
These are the higher nibble of the lower byte address generated by DMA in the master
mode.
READY
It is an active-high asynchronous input signal, which makes DMA ready by inserting
wait states.
HRQ
This signal is used to receive the hold request signal from the output device. In the
slave mode, it is connected with a DRQ input line 8257. In Master mode, it is
connected with HOLD input of the CPU.
HLDA
It is the hold acknowledgement signal, which indicates to the DMA controller that the bus
has been granted to the requesting peripheral by the CPU when it is set to 1.
MEMR
It is the active-low memory read signal, which is used to read the data from the addressed
memory locations during DMA read cycles.
MEMW
It is the active-low three state signal which is used to write the data to the addressed
memory location during DMA write operation.
ADSTB
This signal is used to strobe the higher byte of the memory address generated by the
DMA controller into the latch.
AEN
This signal is used to disable the address bus/data bus.
TC
It stands for 'Terminal Count'; it indicates to the selected peripheral device that the current
DMA cycle is the last cycle for the present data block.
MARK
The mark will be activated after each 128 cycles or integral multiples of it from the
beginning. It indicates the current DMA cycle is the 128th cycle since the previous
MARK output to the selected peripheral device.
Vcc
It is the power signal which is required for the operation of the circuit.
QUESTIONS:
4) What is PIC?
Learning Outcomes
After undergoing the topic, students will be able to:
Understand the basic architecture and pin diagram of the 8086
CHAPTER 9
Stack is a set of memory locations in the Read/Write memory which is used for temporary storage of binary
information during the execution of a program. It is implemented in the last-in-first-out (LIFO) manner, i.e.,
the data written first can be accessed last; one can put the data on the top of the stack by a special operation
known as PUSH. Data can be read or taken out from the top of the stack by another special instruction known
as POP.
Stack is implemented in two ways. In the first case, a set of registers is arranged in a shift register organization.
One can PUSH or POP data from the top register. The whole block of data moves up or down as a result of
push and pop operations respectively. In the second case, a block of RAM area is allocated to the stack. A
special purpose register known as stack pointer (SP) points to the top of the stack. Whenever the stack is empty,
it points to the bottom address. If a PUSH operation is performed, the data are stored at the location pointed to
by SP and it is decremented by one. Similarly if the POP operation is performed, the data are taken out of the
location pointed at by SP and SP is incremented by one. In this case the data do not move but SP is incremented
or decremented as a result of push or pop operations respectively.
Application of Stack: The stack provides a powerful data structure which has applications in many situations. The
main advantage of the stack is that we can store data (PUSH) in it without destroying previously stored data.
This is not true in the case of other registers and memory locations. Stack operations are also very fast.
The stack may also be used for storing local variables of subroutine and for the transfer of parameter addresses
to a subroutine. This facilitates the implementation of re-entrant subroutines which is a very important software
property. The disadvantage is, as the stack has no fixed address, it is difficult to debug and document a program
that uses stack.
Stack operation: Operations on the stack are performed using two instructions, namely PUSH and POP. The
contents of registers are moved to the stack memory locations by the PUSH instruction. Similarly, the contents of
the stack memory are transferred back to the registers by the POP instruction.
For example, let us consider a stack whose stack top is 4506H. This is stored in the 16-bit stack pointer
register as shown in Fig. 29.
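Continuing this example, a short sketch of PUSH and POP with the stack top at 4506H (the register contents are chosen for illustration):

LXI SP, 4506H   ; stack pointer <-- 4506H, the stack top of the example
LXI B, 2233H    ; B = 22H, C = 33H
PUSH B          ; 22H is stored at 4505H and 33H at 4504H; SP becomes 4504H
POP D           ; D <-- 22H, E <-- 33H; SP is restored to 4506H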
Subroutine: It is a set of instructions written separately from the main program to execute a function that occurs
repeatedly in the main program.
For example, let us assume that a delay is needed three times in a program. Writing delay programs three
times in the main program is nothing but repetition. So, we can write a subroutine program called 'delay' and
call it any number of times we need.
Similarly, in 8085 microprocessor we do not find the instructions for multiplication and division. For this
purpose we write separate programs. So, in any main program if these operations are needed more than once,
the entire program will become lengthy and complex. So, we write subroutine programs MUL & DIV separately
from main program and use the instruction CALL MUL (or) CALL DIV in the main program. This can be done
any number of times. At the end of every subroutine program there must be an instruction called ‘RET’. This
will take the control back to main program.
The 8085 microprocessor has two instructions to implement the subroutines. They are CALL and RET. The
CALL instruction is used in the main program to call a subroutine and RET instruction is used at the end of the
subroutine to return to the main program. When a subroutine is called, the contents of the program counter,
which is the address of the instruction following the CALL instruction, are stored on the stack and the program
execution is transferred to the subroutine address. When the RET instruction is executed at the end of the
subroutine, the memory address stored on the stack is retrieved and the sequence of execution is resumed in the
main program.
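A minimal sketch of this mechanism (the stack address and delay count are chosen purely for illustration):

        LXI SP, 4500H   ; initialise the stack pointer
        CALL DELAY      ; the return address is pushed on the stack automatically
        HLT             ; execution resumes here after RET
DELAY:  MVI C, 0FFH     ; simple software delay: load a count
LOOP:   DCR C           ; decrement the count
        JNZ LOOP        ; repeat until C reaches zero
        RET             ; pop the return address and resume the main program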
9.1 8086 Microprocessor
It is a 16-bit microprocessor.
The 8086 has a 20-bit address bus and can access up to 2^20 memory locations (1 MB).
It has a multiplexed address and data bus: AD0 – AD15 and A16 – A19.
It requires a single-phase clock with a 33% duty cycle to provide internal timing.
It can prefetch up to 6 instruction bytes from memory and put them in an instruction queue in order to speed
up instruction execution.
The 8086 has two parts, the Bus Interface Unit (BIU) and the Execution Unit (EU).
The BIU fetches instructions, reads and writes data, and computes the 20-bit address.
The EU decodes and executes the instructions using the 16-bit ALU.
The two units function independently.
– The minimum mode is selected by applying logic 1 to the MN/MX' input pin. This is a single-microprocessor configuration.
– The maximum mode is selected by applying logic 0 to the MN/MX' input pin. This is a multi-microprocessor configuration.
The internal architecture of the 8086 is split into the BIU (segment registers CS, DS, SS and ES, the instruction pointer IP, the address adder and the instruction queue) and the EU (the general-purpose registers AX, BX, CX and DX with their 8-bit halves AH/AL, BH/BL, CH/CL and DH/DL, the pointer and index registers SP, BP, SI and DI, the ALU and the flag register).
The BIU performs all bus operations such as instruction fetching, reading and writing operands for
memory and calculating the addresses of the memory operands.
It provides a full 16 bit bidirectional data bus and 20 bit address bus.
The bus interface unit is responsible for performing all external bus operations.
Specifically it has the following functions:
Instruction fetch, instruction queuing, operand fetch and storage, address calculation and relocation,
and bus control.
The BIU uses a mechanism known as an instruction queue to implement pipeline architecture.
This queue permits prefetch of up to six bytes of instruction code. Whenever the queue of the BIU is not
full and it has room for at least two more bytes and at the same time EU is not requesting it to read or
write operands from memory, the BIU is free to look ahead in the program by prefetching the next
sequential instruction.
These prefetching instructions are held in its FIFO queue. With its 16 bit data bus, the BIU fetches
two instruction bytes in a single memory cycle.
After a byte is loaded at the input end of the queue, it automatically shifts up through the FIFO to
the empty location nearest the output.
The EU accesses the queue from the output end; it reads one instruction byte after the other from the
output of the queue. If the queue is full and the EU is not requesting access to operands in memory, the
BIU does not perform any bus cycle. These intervals of no bus activity, which may occur between bus
cycles, are known as idle states.
If the BIU is already in the process of fetching an instruction when the EU request it to read or
write operands from memory or I/O, the BIU first completes the instruction fetch bus cycle before
initiating the operand read / write cycle.
The BIU also contains a dedicated adder, which is used to generate the 20-bit physical address that
is output on the address bus. This address is formed by adding the 16-bit segment address, shifted
left by four bit positions (i.e. appended with four zeros), to a 16-bit offset address.
For example, the physical address of the next instruction to be fetched is formed by combining the
current contents of the code segment (CS) register and the current contents of the instruction pointer
(IP) register.
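As a worked example (the register values are chosen purely for illustration), if CS = 3400H and IP = 1230H, the BIU forms the 20-bit physical address as:

Physical address = (CS x 10H) + IP = 34000H + 1230H = 35230H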
9.5 EXECUTION UNIT (EU)
The Execution unit is responsible for decoding and executing all instructions.
The EU extracts instructions from the top of the queue in the BIU, decodes them, generates operands if
necessary, passes them to the BIU and requests it to perform the read or write bus cycles to memory or
I/O, and performs the operation specified by the instruction on the operands.
During the execution of the instruction, the EU tests the status and control flags and updates them
based on the results of executing the instruction.
If the queue is empty, the EU waits for the next instruction byte to be fetched and shifted to top of the
queue.
When the EU executes a branch or jump instruction, it transfers control to a location corresponding to
another set of sequential instructions.
Whenever this happens, the BIU automatically resets the queue and then begins to fetch instructions
from this new location to refill the queue
The BIU fetches instructions using the CS and IP, written CS:IP, to construct the 20-bit address. Data is
fetched using a segment register (usually the DS) and an effective address (EA) computed by the EU
depending on the addressing mode.
EU Registers:
AX (AH, AL) - Accumulator
BX (BH, BL) - Base Register
CX (CH, CL) - Count Register
DX (DH, DL) - Data Register
SP - Stack Pointer
BP - Base Pointer
SI - Source Index Register
DI - Destination Index Register
FR - Flag Register
BIU Registers:
CS - Code Segment Register
DS - Data Segment Register
SS - Stack Segment Register
ES - Extra Segment Register
IP - Instruction Pointer
The 8086 has four groups of the user accessible internal registers.
These are
Instruction pointer (IP)
Four general-purpose registers (AX, BX, CX, DX)
Four pointer and index registers (SP, BP, SI, DI)
Four segment registers (CS, DS, SS, ES)
Flag register (FR)
The 8086 has a total of fourteen 16-bit registers, including a 16-bit register called the status register (flag
register), with 9 of its bits implemented as status and control flags.
Most of the registers contain data/instruction offsets within 64 KB memory segment.
There are four different 64 KB segments for instructions, stack, data and extra data. To specify where in
1 MB of processor addressable memory these 4 segments are located the processor uses four segment
registers:
1) Code segment (CS) is a 16-bit register containing address of 64 KB segment with processor
instructions. The processor uses CS segment for all accesses to instructions referenced by
instruction pointer (IP) register.
2) Stack segment (SS) is a 16-bit register containing address of 64KB segment with program stack. By
default, the processor assumes that all data referenced by the stack pointer (SP) and base pointer
(BP) registers is located in the stack segment. SS register can be changed directly using POP
instruction.
3) Data and Extra segment registers (DS and ES) are 16-bit registers containing addresses of 64KB segments with
program data. By default, the processor assumes that all data referenced by the general registers (AX,
BX, CX, and DX) and index registers (SI, DI) is located in the data and extra segments.
1) AX(Accumulator)
It consists of two 8-bit registers AL and AH, which can be combined together and
used as a 16-bit register AX. AL in this case contains the low-order byte of the word,
and AH contains the high-order byte. Accumulator can be used for I/O operations and
string manipulation.
2) BX (Base register)
It consists of two 8-bit registers BL and BH, which can be combined together and
used as a 16-bit register BX. BL in this case contains the low-order byte of the word,
and BH contains the high-order byte.
The BX register usually contains an offset for the data segment.
3) CX (Count register)
It consists of two 8-bit registers CL and CH, which can be combined together and
used as a 16-bit register CX. When combined, CL register contains the low-order
byte of the word, and CH contains the high-order byte.
Count register can be used in Loop, shift/rotate instructions and as a counter in string
manipulation.
The 8086 has the LOOP instruction, which is used for counting purposes; each time it is executed,
CX is automatically decremented by 1.
4) DX (Data register)
It consists of two 8-bit registers DL and DH, which can be combined together and
used as a 16-bit register DX. When combined, DL register contains the low-order
byte of the word, and DH contains the high-order byte.
DX can be used as a port number in I/O operations.
In 32-bit integer multiply and divide instructions, the DX register contains the high-order
word of the initial or resulting number.
1. Stack Pointer (SP) is a 16-bit register used to hold the offset address for the stack segment.
2. Base Pointer (BP) is a 16-bit register used to hold the offset address for the stack segment.
i. BP register is usually used for based, based indexed or register indirect
addressing.
ii. The difference between SP and BP is that the SP is used internally to store the
address in the case of an interrupt or the CALL instruction.
3. Source Index (SI) and Destination Index (DI)
These two 16-bit registers are used to hold the offset addresses for DS and ES in the case of string
manipulation instructions.
i. SI is used for indexed, based indexed and register indirect addressing, as well as a
source data addresses in string manipulation instructions.
ii. DI is used for indexed, based indexed and register indirect addressing, as well as
a destination data addresses in string manipulation instructions.
9.9 Instruction Pointer (IP)
It is a 16-bit register. It acts as a program counter and is used to hold the offset address for CS.
i. Overflow Flag(OF)
This flag is set if an overflow occurs, i.e. if the result of a signed operation is too large to
be accommodated in the destination register.
ii. Direction Flag (DF)–
This is used by string manipulation instructions. If this flag bit is ‘0’, the
string is processed beginning from the lowest address to the highest address.
i.e. auto-incrementing mode.
Otherwise, the string is processed from the highest address towards the lowest
address, i.e. auto-decrementing mode.
iii. Interrupt-enable Flag (IF)–
If this flag is set, the maskable interrupts are recognized by the CPU. Otherwise they are
ignored. Setting this bit enables maskable interrupts.
iv. Single-step Flag (TF)–
If this flag is set, the processor enters the single step execution mode. In other words, a
trap interrupt is generated after execution of each instruction. The processor executes the
current instruction and the control is transferred to the Trap interrupt serviceroutine.
v. Sign Flag (SF)–
This flag is set when the result of any computation is negative. For signed
computations, the sign flag equals the MSB of the result.
vi. Zero Flag (ZF) - set if the result is zero.
vii. Auxiliary carry Flag (AF)–
set if there was a carry from or borrow to bits 0-3 in the AL register.
viii. Parity Flag (PF)–
set if the parity (the number of "1" bits) in the low-order byte of the result is even.
ix. Carry Flag (CF)–
This flag is set when there is a carry out of the MSB in the case of addition, or a borrow in the case of
subtraction. For example, when two numbers are added, a carry may be generated out of the most
significant bit position. The carry flag, in this case, will be set to '1'. If no carry is generated, it will be '0'.