STLD
An error is a condition in which the output information does not match the input
information. During transmission, digital signals suffer from noise that can
introduce errors in the binary bits travelling from one system to another. That
means a 0 bit may change to 1, or a 1 bit may change to 0.
Error-Detecting codes:
Whenever a message is transmitted, it may get scrambled by noise or the data may
get corrupted. To avoid this, we use error-detecting codes, which are additional
data added to a given digital message to help us detect whether an error occurred
during transmission of the message. A simple example of an error-detecting code is
the parity check.
Error-Correcting codes:
Along with an error-detecting code, we can also pass some extra data that lets us
recover the original message from the corrupt message we received. This type of
code is called an error-correcting code. Error-correcting codes use the same
strategy as error-detecting codes, but in addition such codes also locate the
exact position of the corrupt bit.
In error-correcting codes, the parity check provides a simple way to detect errors,
along with a more sophisticated mechanism to determine the location of the corrupt
bit. Once the corrupt bit is located, its value is inverted (from 0 to 1 or 1 to 0)
to recover the original message.
The additional bits are called parity bits. They allow detection or correction
of the errors.
The data bits along with the parity bits form a code word.
Even parity -- Even parity means the number of 1's in the given word, including
the parity bit, should be even (0, 2, 4, ...).
Odd parity -- Odd parity means the number of 1's in the given word, including
the parity bit, should be odd (1, 3, 5, ...).
For even parity, the parity bit is set to 1 or 0 such that the number of "1" bits in
the entire word is even, as shown in fig. (a).
For odd parity, the parity bit is set to 1 or 0 such that the number of "1" bits in
the entire word is odd, as shown in fig. (b).
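As a rough illustration of how a parity bit is generated, the following Python sketch
(the function names are my own, not from the text) appends an even or odd parity bit
to four data bits to form a code word:

    def parity_bit(bits, odd=False):
        # Even parity: the bit makes the total count of 1s (data + parity) even.
        # Odd parity: the bit makes the total count of 1s odd.
        p = sum(bits) % 2
        return p ^ 1 if odd else p

    def make_codeword(bits, odd=False):
        return bits + [parity_bit(bits, odd)]

    data = [1, 0, 1, 1]
    print(make_codeword(data))            # even parity -> [1, 0, 1, 1, 1]
    print(make_codeword(data, odd=True))  # odd parity  -> [1, 0, 1, 1, 0]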
DeMorgan's Theorems
The left-hand side (LHS) of the first theorem, (A·B)' = A' + B', represents a NAND
gate with inputs A and B, whereas the right-hand side (RHS) of the theorem
represents an OR gate with inverted inputs.
The LHS of the second theorem, (A + B)' = A'·B', represents a NOR gate with inputs
A and B, whereas the RHS represents an AND gate with inverted inputs.
The table below lists six theorems of Boolean algebra and four of its postulates.
These theorems and postulates are the most basic relations in Boolean algebra.
The theorems, like the postulates, are listed in pairs; each relation is the dual
of the one paired with it. The postulates are the basic axioms of the algebraic
structure and need no proof, whereas the theorems must be proven from the
postulates. The proofs of the one-variable theorems are given as examples, with
the postulate that justifies each step listed at the right.
The table below shows the postulates and theorems of Boolean algebra:
Theorem 1: a) x + x = x    b) x · x = x
Theorem 2: a) x + 1 = 1    b) x · 0 = 0
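These identities, and the DeMorgan relations above, can be sanity-checked by
exhaustive evaluation over 0 and 1; a minimal Python sketch:

    # Verify the one-variable theorems and DeMorgan's theorems by trying every input value.
    for x in (0, 1):
        assert (x | x) == x      # Theorem 1a: x + x = x
        assert (x & x) == x      # Theorem 1b: x . x = x
        assert (x | 1) == 1      # Theorem 2a: x + 1 = 1
        assert (x & 0) == 0      # Theorem 2b: x . 0 = 0

    for A in (0, 1):
        for B in (0, 1):
            assert (1 - (A & B)) == ((1 - A) | (1 - B))   # (A.B)' = A' + B'
            assert (1 - (A | B)) == ((1 - A) & (1 - B))   # (A+B)' = A'.B'
    print("All identities hold")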
We know that the excess-3 code begins with binary 0011 (decimal 3) and continues up to
binary 1100 (decimal 12); for the input 1100 (decimal 12) the output is 1001 (decimal 9).
So we need 4 variables as inputs and 4 variables as outputs. With 4 variables we can
represent 16 binary values, from 0000 to 1111. Since 0, 1, 2, 13, 14 and 15 never occur
as inputs, those terms are treated as don't-care conditions when simplifying the output
functions.
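As an illustrative sketch of that truth table (the variable names are my own), the valid
mappings and the don't-care inputs can be listed in Python as follows:

    # Excess-3 to BCD: valid inputs are 3..12 and the output is simply input - 3.
    # Inputs 0, 1, 2, 13, 14 and 15 never occur, so they become don't-care terms.
    dont_cares = {0, 1, 2, 13, 14, 15}
    for value in range(16):
        if value in dont_cares:
            print(f"{value:04b} -> XXXX (don't care)")
        else:
            print(f"{value:04b} -> {value - 3:04b}")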
1)—
We are used to using the base-10 number system, which is also called
decimal. Other common number systems include base-16 (hexadecimal),
base-8 (octal), and base-2 (binary).
Base-16 is also called hexadecimal. It’s commonly used in computer
programming, so it’s very important to understand. Let’s start with counting
in hexadecimal to make sure we can apply what we’ve learned about other
bases so far.
Understanding different number systems is extremely useful in many
computer-related fields. Binary and hexadecimal are very common, and I
encourage you to become very familiar with them.
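For instance, Python's built-in conversions can be used to move a value between these
bases (a minimal sketch, with 254 chosen arbitrarily):

    n = 254                  # decimal (base 10)
    print(bin(n))            # 0b11111110  -> binary (base 2)
    print(oct(n))            # 0o376       -> octal (base 8)
    print(hex(n))            # 0xfe        -> hexadecimal (base 16)
    print(int("fe", 16))     # 254, parsing a hexadecimal string back into decimal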
2—
The XOR (exclusive-OR) gate acts in the same way as the logical
"either/or." The output is "true" if either, but not both, of the inputs are
"true." The output is "false" if both inputs are "false" or if both inputs are
"true." Another way of looking at this circuit is to observe that the output is 1
if the inputs are different, but 0 if the inputs are the same.
Applications:
These logic gates are used in parity generation and checking units. The two
diagrams below show the even and odd parity generator circuits, respectively,
for four data bits.
With the help of these gates, the parity check operation can also be performed.
The diagrams below show even and odd parity checking.
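A hedged behavioural sketch of such a checker, written as a chain of XOR operations over
four received data bits and the parity bit (the names are illustrative, not taken from
the diagrams):

    def even_parity_check(d3, d2, d1, d0, p):
        # Chain of XOR gates: the result is 0 when the received word has an even
        # number of 1s (no single-bit error), and 1 when a single bit has flipped.
        return d3 ^ d2 ^ d1 ^ d0 ^ p

    print(even_parity_check(1, 0, 1, 1, 1))   # 0: word received correctly
    print(even_parity_check(1, 0, 0, 1, 1))   # 1: a single-bit error is detected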
5—
A signed binary number can be represented in one of three ways:
1. Signed magnitude representation
2. 1’s complement representation
3. 2’s complement representation
Signed magnitude representation :
1. If the data includes both positive and negative numbers, then signed binary numbers should be used.
2. The + and − signs are represented in binary using 0 or 1: 0 is used to represent the (+) sign and 1 is
used to represent the (−) sign.
3. The MSB of the binary number is used to represent the sign, and the remaining bits are used to
represent the magnitude.
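A rough Python sketch of the three representations for 8-bit words (the function names
are my own; shown here for −5):

    def sign_magnitude(n, bits=8):
        # MSB holds the sign (1 for negative), the remaining bits hold |n|.
        sign = 1 if n < 0 else 0
        return (sign << (bits - 1)) | abs(n)

    def ones_complement(n, bits=8):
        # Negative numbers: invert every bit of the positive pattern.
        mask = (1 << bits) - 1
        return n & mask if n >= 0 else ~abs(n) & mask

    def twos_complement(n, bits=8):
        # Negative numbers: invert every bit and add 1 (i.e. wrap modulo 2**bits).
        return n & ((1 << bits) - 1)

    for rep in (sign_magnitude, ones_complement, twos_complement):
        print(rep.__name__, format(rep(-5), "08b"))
    # sign_magnitude  10000101
    # ones_complement 11111010
    # twos_complement 11111011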
7—
The JK Flip Flop is the most widely used flip-flop. It is considered to be a
universal flip-flop circuit. The sequential operation of the JK Flip Flop is the
same as that of the RS flip-flop, with the same SET and RESET inputs. The
difference is that the JK Flip Flop does not have the invalid input state of the
RS latch (when S and R are both 1). The JK Flip Flop is often said to be named
after the inventor of the circuit, Jack Kilby.
The basic NAND-gate RS flip-flop suffers from two main problems. Firstly, the
condition S = 0 and R = 0 must be avoided. Secondly, if S or R changes state
while the enable input is high, the correct latching action does not occur. The
JK Flip Flop was designed to overcome these two problems of the RS flip-flop.
The JK Flip Flop is basically a gated RS flip-flop with the addition of clock
input circuitry. When both inputs S and R are equal to logic "1", the invalid
condition occurs, so a clock circuit is introduced to prevent it. Because of the
clocked input, the JK Flip Flop has four possible input combinations: "logic 1",
"logic 0", "no change" and "toggle".
When both the J and K inputs are at logic "1" at the same time and the clock
input is pulsed HIGH, the circuit toggles from its SET state to RESET, or vice
versa. When both terminals are HIGH, the JK flip-flop acts as a T-type toggle
flip-flop.
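The four input combinations can be summarised as a next-state function; here is a
minimal Python sketch of that behaviour (a behavioural model, not a gate-level circuit):

    def jk_next(q, j, k):
        # Next state on the active clock edge:
        # J=0, K=0 -> no change; J=1, K=0 -> set; J=0, K=1 -> reset; J=1, K=1 -> toggle.
        if j == 0 and k == 0:
            return q
        if j == 1 and k == 0:
            return 1
        if j == 0 and k == 1:
            return 0
        return 1 - q          # J = K = 1: toggle

    q = 0
    for j, k in [(1, 0), (0, 0), (1, 1), (1, 1), (0, 1)]:
        q = jk_next(q, j, k)
        print(f"J={j} K={k} -> Q={q}")   # Q follows 1, 1, 0, 1, 0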
8-
The synchronous Ring Counter example above is preset so that exactly one data bit in the
register is set to logic “1” with all the other bits reset to “0”. To achieve this, a “CLEAR”
signal is firstly applied to all the flip-flops together in order to “RESET” their outputs to a
logic “0” level and then a “PRESET” pulse is applied to the input of the first flip-flop ( FFA )
before the clock pulses are applied. This then places a single logic “1” value into the circuit
of the ring counter.
So on each successive clock pulse, the counter circulates the same data bit between the four
flip-flops, over and over again around the "ring", returning to its starting position every
fourth clock cycle. But in order to cycle the data correctly around the counter, we must
first "load" it with a suitable data pattern, as all logic "0"s or all logic "1"s output at
each clock cycle would make the ring counter invalid.
This type of data movement is called "rotation": as in the previous shift register, the data
bit moves from left to right through the ring counter, but instead of being lost out of the
final stage it is fed back to the first flip-flop.
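A minimal behavioural sketch of the 4-bit ring counter in Python (CLEAR/PRESET load a
single 1, then each clock pulse rotates it):

    # 4-bit ring counter: outputs of FFA..FFD, preset to 1000.
    state = [1, 0, 0, 0]                     # after CLEAR, then PRESET of FFA
    for clock_pulse in range(8):
        print("".join(str(bit) for bit in state))
        state = [state[-1]] + state[:-1]     # each flip-flop takes the previous stage's output
    # Prints 1000, 0100, 0010, 0001, then repeats every fourth clock pulse.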
9—
Comparison of SRAM and DRAM:
Requirement of peripheral circuitry: SRAM - comparatively less; DRAM - comparatively more.
Capacity (same technology): SRAM - less; DRAM - 5 to 10 times more than SRAM.
Applications: SRAM - generally in smaller applications such as CPU cache memory and
hard-drive buffers; DRAM - commonly used as the main memory in personal computers.
Types: SRAM - Asynchronous SRAM, Synchronous SRAM, Pipeline Burst SRAM;
DRAM - Fast Page Mode DRAM, Extended Data Out DRAM, Burst EDO DRAM, Synchronous DRAM.
Power consumption: SRAM - less; DRAM - more.
10—
19—
An FPGA consists of a large number of "configurable logic blocks" (CLBs) and routing channels.
Multiple I/O pads may fit into the height of one row or the width of one column in the array.
In general, all the routing channels have the same width.
Block diagram-
CLB: The CLB consists of an n-input look-up table (LUT), a flip-flop and a 2x1 mux. The value
of n is manufacturer specific; increasing n can increase the performance of the FPGA.
Typically n is 4. An n-input lookup table can be implemented with a multiplexer whose select
lines are the inputs of the LUT and whose inputs are constants. An n-input LUT can encode
any n-input Boolean function by modelling such functions as truth tables. This is an efficient
way of encoding Boolean logic functions, and LUTs with 4 to 6 inputs are in fact the key
component of modern FPGAs. The block diagram of a CLB is shown below.
Each CLB has n inputs and only one output, which can be either the registered or the
unregistered LUT output. The output is selected using the 2x1 mux. The LUT output is
registered using the flip-flop (generally a D flip-flop), which is driven by the clock so
that the output can be registered. In general, high-fanout signals such as clock signals are
routed via special-purpose dedicated routing networks, so they and other such signals are
managed separately.
Routing channels are programmed to connect the various CLBs. The connections are made
according to the design, and the CLBs are connected in such a way that the logic of the
design is achieved.
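To make the LUT idea concrete, here is a hedged Python sketch of a 4-input LUT: the 16
stored constants are the truth table of the chosen function, and the four inputs simply
select one of them, like the select lines of a 16-to-1 multiplexer (this models the
concept only, not any particular vendor's CLB):

    # Program a 4-input LUT with the truth table of f(a, b, c, d) = (a AND b) OR (c XOR d).
    def f(a, b, c, d):
        return (a & b) | (c ^ d)

    lut = [f((i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(16)]

    def lut_read(a, b, c, d):
        # The inputs form the index into the stored truth table.
        return lut[(a << 3) | (b << 2) | (c << 1) | d]

    print(lut_read(1, 1, 0, 0))   # 1, since (1 AND 1) OR (0 XOR 0) = 1
    print(lut_read(0, 0, 1, 0))   # 1, since (0 AND 0) OR (1 XOR 0) = 1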
Applications
ASIC prototyping: due to the high cost of ASIC chips, the logic of the application
is first verified by loading the HDL code into an FPGA. This allows faster and
cheaper testing. Once the logic is verified, it is made into an ASIC.
FPGAs are very useful in applications that can make use of the massive parallelism
offered by their architecture. Example: code breaking, in particular brute-force
attacks on cryptographic algorithms.
FPGAs are used for computational kernels such as FFT or convolution instead of a
microprocessor.
Applications include digital signal processing, software-defined radio,
aerospace and defense systems, medical imaging, computer vision, speech
recognition, cryptography, bio-informatics, computer hardware emulation and
a growing range of other areas.
Adder circuits are classified into two types, namely the half adder circuit and the full adder circuit.
The half adder circuit is used to sum two binary digits, A and B. A half adder has two o/ps,
sum and carry, where the sum is denoted by 'S' and the carry by 'C'. The carry signal indicates
an overflow into the next digit of a multi-digit addition; the value represented by the two
o/ps together is 2C + S. The simplest design of a half adder is shown below. The half adder
adds two i/p bits and generates a sum and a carry as its o/ps. The i/p variables of the half
adder are termed the augend and addend bits, whereas the o/p variables are termed sum and carry.
The truth table of the half adder is shown below; using it, we can obtain the Boolean
functions for sum and carry. Here a Karnaugh map is used to derive the Boolean equations
for the sum and carry of the half adder.
Truth Table of Half Adder
Half Adder Logic Diagram
The logic diagram of the half adder is shown below. If A and B are the binary i/ps of the
half adder, then the Boolean function for the sum is S = A ⊕ B (the XOR of inputs A and B),
and the function for the carry is C = A · B (the AND of A and B). From the half adder logic
diagram below, it is clear that it requires one AND gate and one XOR gate. The universal
gates, namely NAND and NOR gates, can be used to design any digital circuit; for example,
the figure below shows the design of a half adder using NAND gates.
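A minimal Python sketch of those two equations (the function name is my own):

    def half_adder(a, b):
        # Sum is the XOR of the inputs, carry is the AND of the inputs.
        return a ^ b, a & b     # (S, C)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, half_adder(a, b))
    # 0 0 (0, 0), 0 1 (1, 0), 1 0 (1, 0), 1 1 (0, 1) -- matching the truth table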
A full adder is used to add three input bits. Implementation of a full adder is more
difficult than that of a half adder. A full adder has three inputs and two outputs: the
i/ps are A, B and Cin and the o/ps are sum 'S' and carry 'Cout'. Of the three inputs,
A and B are the addend and augend, while the third i/p, Cin, is the carry from the
preceding digit's operation. The full adder circuit generates a two-bit o/p, denoted by
the signals S and Cout, where the value represented is 2·Cout + S.
The truth table of the full adder circuit is shown below; using it, we can obtain the
Boolean functions for sum and carry. Here a Karnaugh map is used to derive the Boolean
equations for the sum and carry of the full adder.
This full adder logic circuit adds three input bits, namely A, B and Cin, and produces
two o/ps, sum and carry. The full adder logic circuit can be implemented with two half
adder circuits. The first half adder adds the two inputs A and B to generate a partial
sum and carry. The second half adder then adds 'Cin' to the sum of the first half adder
to produce the final sum output. If either half adder generates a carry, there will be
an o/p carry, so the output carry is the OR function of the two half adders' carry o/ps.
Take a look at the full adder logic circuit shown below.
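Continuing the sketch above, a full adder built from two half adders and an OR gate can be
modelled in Python as follows (half_adder is repeated here so the snippet stands on its own):

    def half_adder(a, b):
        return a ^ b, a & b                 # (sum, carry), as in the earlier sketch

    def full_adder(a, b, cin):
        # First half adder adds A and B; the second adds Cin to that partial sum.
        s1, c1 = half_adder(a, b)
        s, c2 = half_adder(s1, cin)
        cout = c1 | c2                      # output carry is the OR of the two carries
        return s, cout

    print(full_adder(1, 1, 1))              # (1, 1): 1 + 1 + 1 = 3 = 2*Cout + S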
Ones Complement
The complement (or opposite) of +5 is −5. When representing positive
and negative numbers in 8-bit ones complement binary form, the
positive numbers are the same as in signed binary notation described
in Number Systems Module 1.4, i.e. the numbers 0 to +127 are
represented as 00000000₂ to 01111111₂. However, the complements of
these numbers, that is their negative counterparts from −127 to −1
(plus a negative zero, 11111111₂), are represented by 'complementing'
each 1 bit of the positive binary number to 0 and each 0 to 1.
For example:
+5₁₀ is 00000101₂
−5₁₀ is 11111010₂
Notice in the above example that the most significant bit (MSB) in the
negative number −5₁₀ is 1, just as in signed binary. The remaining 7
bits of the negative number, however, are not the same as in signed
binary notation. They are just the complement of the remaining 7 bits,
and these give the value or magnitude of the number.
This is better than subtraction in signed binary, but it is still not correct.
The result should be +2₁₀, but the result obtained is +1₁₀ (notice that there
has also been a carry into the non-existent 9th bit).
Fig. 1.5.2 shows another example, this time adding two negative
numbers −4 and −3.
Because both numbers are negative, they are first converted to ones
complement notation.
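A hedged Python sketch of 8-bit ones complement arithmetic, including the end-around carry
that corrects the off-by-one result mentioned above (the carry out of the 8th bit is added
back into the sum):

    BITS = 8
    MASK = (1 << BITS) - 1

    def to_ones_complement(n):
        # Positive values keep their pattern; negatives are the bitwise complement.
        return n & MASK if n >= 0 else ~abs(n) & MASK

    def add_ones_complement(x, y):
        total = x + y
        if total > MASK:                   # a carry out of the 8th bit occurred:
            total = (total & MASK) + 1     # add it back in (end-around carry)
        return total & MASK

    a = to_ones_complement(5)      # 00000101
    b = to_ones_complement(-3)     # 11111100
    print(format(add_ones_complement(a, b), "08b"))   # 00000010, i.e. +2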
15b—
Binary Multiplication
Binary multiplication follows the same process as the multiplication of decimal numbers
for producing the product of two binary numbers. Binary multiplication is much easier,
as it involves only 0s and 1s. The four fundamental rules for binary multiplication are:
0×0=0
0×1=0
1×0=0
1×1=1
The multiplication of two binary numbers can be performed using two common methods,
namely partial-product addition and shifting, and parallel multipliers. Before
discussing these types, let us look at the multiplication process for unsigned binary
numbers. Consider two 4-bit binary numbers, 1010 and 1011; their multiplication is
shown below.
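For reference, multiplying 1010 (decimal 10) by 1011 (decimal 11) works out as follows:

          1010        multiplicand (10)
        x 1011        multiplier (11)
        ------
          1010        1 x 1010
         1010         1 x 1010, shifted left one place
        0000          0 x 1010, shifted left two places
       1010           1 x 1010, shifted left three places
       -------
       1101110        final product (decimal 110)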
From the above multiplication, a partial product is generated for each digit of the
multiplier. All these partial products are then added to produce the final product.
In partial-product multiplication, when the multiplier bit is zero the partial product
is zero, and when the multiplier bit is 1 the resulting partial product is the
multiplicand.
As with decimal numbers, each successive partial product is shifted one position to the
left relative to the preceding partial product before all the partial products are summed.
Therefore, this multiplication uses n shifts and adds to multiply two n-bit binary numbers.
The combinational circuit implemented to perform such multiplication is called an array
multiplier or combinational multiplier.
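A hedged Python sketch of the shift-and-add method described above, printing each partial
product for 1010 × 1011 (the function name is my own):

    def shift_and_add_multiply(multiplicand, multiplier, bits=4):
        product = 0
        for i in range(bits):
            if (multiplier >> i) & 1:             # multiplier bit is 1:
                partial = multiplicand << i       # partial product is the shifted multiplicand
            else:                                 # multiplier bit is 0:
                partial = 0                       # partial product is zero
            print(f"partial product {i}: {partial:08b}")
            product += partial                    # sum all the shifted partial products
        return product

    p = shift_and_add_multiply(0b1010, 0b1011)
    print(format(p, "b"))                         # 1101110 (decimal 110 = 10 x 11)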
17a—
17b—