
CHAPTER 1

INTRODUCTION
1.1 INTRODUCTION
Data that can be read and understood without any special measures is
called plaintext or clear text. The method of disguising plaintext in such a way as to
hide its substance is called encryption. Encrypting plaintext results in unreadable
gibberish called cipher text. You use encryption to ensure that information is hidden
from anyone for whom it is not intended, even those who can see the encrypted data.
The process of reverting cipher text to its original plaintext is called decryption.
Figure 1.1 illustrates this process.

Figure 1.1 Encryption and decryption


1.1.1 WHAT IS CRYPTOGRAPHY
To enhance the security of data, coded languages were used for writing messages. The branch of mathematics that investigates these code languages and methods is called cryptology. Cryptology consists of two streams, namely cryptography and cryptanalysis. Cryptography is the science of coding messages secretly, while cryptanalysis is the science of breaking codes.

CRYPTOLOGY

CRYPTOGRAPHY CRYPTANALYSIS

Our project is concerned with cryptography. Cryptography is the science of using mathematics to encrypt and decrypt data. Cryptography enables us to store sensitive information or transmit it across insecure networks so that it cannot be read by anyone except the intended recipient.
The word cryptography (or cryptology) is derived from the Greek kryptos, “hidden”, and the verb grafo, “write”, or legein, “to speak”, and denotes the practice and study of hiding information. In modern times, cryptology is considered a branch of both mathematics and computer science, and is closely affiliated with information theory, computer security and engineering. Cryptography is used in applications present in technologically advanced societies; examples include the security of ATM cards, computer passwords and electronic commerce, which all depend upon cryptography.
Cryptology embraces both cryptography and cryptanalysis. While cryptography is the science of securing data, cryptanalysis is the science of analyzing and breaking secure communication. Classical cryptanalysis involves an interesting combination of analytical reasoning, application of mathematical tools, pattern finding, determination, and luck. Cryptanalysts are also called attackers.
There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files. PGP is about the latter sort of cryptography. Cryptography can be strong or weak, as explained below.
Cryptographic strength is measured in the time and resources it would require to recover the plaintext. The result of strong cryptography is cipher text that is very difficult to decipher without possession of the appropriate decoding tool. How difficult? Given all of today’s computing power and available time (even a billion computers doing a billion checks a second), it is not possible to decipher the result of strong cryptography before the end of the universe.
One would think, then, that strong cryptography would hold up rather well against even an extremely determined cryptanalyst. Who’s really to say? No one can prove that the strongest encryption obtainable today will hold up under tomorrow’s computing power. Vigilance and conservatism will protect us better, however, than claims of impenetrability.
1.1.2 HOW DOES CRYPTOGRAPHY WORK
A cryptographic algorithm, or cipher, is a mathematical function used in the
encryption and decryption process. A cryptographic algorithm works in combination
with a key—a word, number, or phrase—to encrypt the plaintext. The same plaintext
encrypts to different ciphertext with different keys.
The security of encrypted data is entirely dependent on two things: the
strength of the cryptographic algorithm and the secrecy of the key.
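Using a common textbook notation (the symbols E, D, K, P and C below are introduced here purely for illustration and are not used elsewhere in this report), this relationship can be written as:
C = E_K(P)
P = D_K(C) = D_K(E_K(P))
Two different keys K1 and K2 will, in general, map the same plaintext to different cipher texts, i.e. E_{K1}(P) ≠ E_{K2}(P), which is the behaviour described above.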

A cryptographic algorithm, plus all possible keys and all the protocols that make it work, comprise a cryptosystem. PGP is a cryptosystem. A cryptosystem can be divided into software and hardware implementations.
CRYPTOSYSTEM

SOFTWARE HARDWARE

1.1.3 THE PURPOSE OF CRYPTOGRAPHY


Cryptography is the science of writing in secret code and is an ancient art; the
first documented use of cryptography in writing dates back to circa 1900 B.C. when
an Egyptian scribe used non-standard hieroglyphs in an inscription. Some experts
argue that cryptography appeared spontaneously sometime after writing was invented,
with applications ranging from diplomatic missives to war-time battle plans. It is no
surprise, then, that new forms of cryptography came soon after the widespread
development of computer communications.
In data and telecommunications, cryptography is necessary when
communicating over any un-trusted medium, which includes just about any network,
particularly the Internet.
Within the context of any application-to-application communication, there are
some specific security requirements including:
 Authentication: The process of proving one's identity. (The primary
forms of host-to-host authentication on the Internet today are name-based or address-
based, both of which are notoriously weak.)
 Privacy/confidentiality: Ensuring that no one can read the message
except the intended receiver.
 Integrity: Assuring the receiver that the received message has not been
altered in any way from the original.
 Non-repudiation: A mechanism to prove that the sender really sent this
message.
Cryptography, then, not only protects data from theft or alteration, but can also
be used for user authentication. There are, in general, three types of cryptographic
schemes typically used to accomplish these goals: secret key (or symmetric) cryptography, public-key (or asymmetric) cryptography, and hash functions, each of
which is described below. In all cases, the initial unencrypted data is referred to as
plaintext. It is encrypted into cipher text, which will in turn (usually) be decrypted
into usable plaintext.
In many of the descriptions below, two communicating parties will be referred
to as Alice and Bob; this is the common nomenclature in the crypto field and literature
to make it easier to identify the communicating parties. If there is a third or fourth
party to the communication, they will be referred to as Carol and Dave. Mallory is a
malicious party, Eve is an eavesdropper, and Trent is a trusted third party.

1.2 METHODS OF ENCRYPTION


Although there can be several pieces to an encryption method, the two main
pieces are the algorithms and the keys. As stated earlier, algorithms are usually
complex mathematical formulas that dictate the rules of how the plaintext will be
turned into cipher text. A key is a string of random bits that will be inserted into the
algorithm. For two entities to be able to communicate via encryption, they must use
the same algorithm and, many times, the same key. In some encryption methods, the
receiver and the sender use the same key and in other encryption methods, they must
use different keys for encryption and decryption purposes. The following sections
explain the difference between these two types of encryption methods.
Symmetric versus Asymmetric Algorithms
Cryptography algorithms use either symmetric keys, also called secret keys,
or asymmetric keys, also called public keys. As if encryption were not complicated enough, the titles that are used to describe the key types only make it worse. Just pay close attention and we will get through this just fine.
1.2.1 SYMMETRIC CRYPTOGRAPHY
In a cryptosystem that uses symmetric cryptography, both parties will be using
the same key for encryption and decryption, as shown in Figure 1.2. This provides
dual functionality. As we said, symmetric keys are also called secret keys because this
type of encryption relies on each user to keep the key a secret and properly protected.
If this key got into an intruder’s hand, that intruder would have the ability to decrypt
any intercepted message encrypted with this key.

Each pair of users who want to exchange data using symmetric key encryption
must have their own set of keys. This means if Dan and Iqqi want to communicate,
both need to obtain a copy of the same key. If Dan also wants to communicate using
symmetric encryption with Norm and Dave, he now needs to have three separate keys,
one for each friend.

Figure 1.2 Using symmetric algorithms, the sender and receiver use the same key
for encryption and decryption functions.
This might not sound like a big deal until Dan realizes that he may
communicate with hundreds of people over a period of several months, and keeping
track and using the correct key that corresponds to each specific receiver can become
a very daunting task. If Dan were going to communicate with 10 other people, then he
would need to keep track of 45 different keys. If Dan were going to communicate
with 100 other people, then he would have to maintain and keep up with 4,950
symmetric keys. Dan is a pretty bright guy, but does not necessarily want to spend his
days looking for the right key to be able to communicate with Dave.
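The numbers quoted here follow from a simple count: with N users, every unordered pair of users needs its own secret key, so (this is the standard pairwise-key formula, not something specific to this report):
Number of keys = N(N - 1)/2
For N = 10 this gives 10 × 9 / 2 = 45 keys, and for N = 100 it gives 100 × 99 / 2 = 4,950 keys, matching the figures above.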
The security of the symmetric encryption method is completely dependent on
how well users protect the key. This should raise red flags to you if you have ever had
to depend on a whole staff of people to keep a secret. If a key is compromised, then
all messages encrypted with that key can be decrypted and read by an intruder.
This is complicated further by how symmetric keys are actually shared and
updated when necessary. If Dan wants to communicate to Norm for the first time, Dan has to figure out how to get Norm the right key. It is not safe to just send it in an e-
mail message because the key is not protected and it can be easily intercepted and
used by attackers. Dan has to get the key to Norm through an out-of-band method.
Dan can save the key on a floppy disk and walk over to Norm’s desk, send it to him
via snail mail, or have a secure carrier deliver it to Norm. This is a huge hassle, and
each method is very clumsy and insecure. Because both users use the same key to
encrypt and decrypt messages, symmetric cryptosystems can provide confidentiality,
but they cannot provide authentication or non-repudiation. There is no way to prove
who actually sent a message if two people are using the exact same key.
Well, if symmetric cryptosystems have so many problems and flaws, why use
them at all? They are very fast and can be hard to break. Compared to asymmetric
systems, symmetric algorithms scream in speed. They can encrypt and decrypt large
amounts of data that would take an unacceptable amount of time if an asymmetric
algorithm was used instead. It is also very difficult to uncover data that is encrypted
with a symmetric algorithm if a large key size was used.
The following list outlines the strengths and weaknesses of symmetric key systems:
 Strengths
 Much faster than asymmetric systems
 Hard to break if using a large key size
 Weaknesses
 Key distribution: It requires a secure mechanism to deliver keys properly.
 Scalability: Each pair of users needs a unique key, so the number of keys grows quadratically with the number of users.
 Limited security: It can provide confidentiality, but not authenticity or non-repudiation.

1.2.2 ASYMMETRIC CRYPTOGRAPHY


Some things you can tell the public, but some things you just want to
keep private.

In symmetric key cryptography, a single secret key is used between entities,
whereas in public key systems, each entity has different keys, or asymmetric keys.
The two different asymmetric keys are mathematically related. If a message is
encrypted by one key, the other key is required to decrypt the message.
In a public key system, the pair of keys is made up of one public key and one
private key. The public key can be known to everyone, and the private key must only
be known to the owner. Many times, public keys are listed in directories and databases
of e-mail addresses so they are available to anyone who wants to use these keys to
encrypt or decrypt data when communicating with a particular person. Figure 1.3
illustrates an asymmetric cryptosystem.

Figure 1.3 Asymmetric cryptosystem


The public and private keys are mathematically related, but the private key cannot feasibly be derived from the public key. This means that if an evildoer gets a copy of Bob’s public key, it does not mean he can now use some mathematical magic and find out Bob’s private key.
If Bob encrypts a message with his private key, the receiver must have
a copy of Bob’s public key to decrypt it. The receiver can decrypt Bob’s message and
decide to reply back to Bob in an encrypted form. All she needs to do is encrypt her
reply with Bob’s public key, and then Bob can decrypt the message with his private
key. It is not possible to encrypt and decrypt using the exact same key when using an
asymmetric key encryption technology.

Bob can encrypt a message with his private key and the receiver can then
decrypt it with Bob’s public key. By decrypting the message with Bob’s public key,
the receiver can be sure that the message really came from Bob. A message can only
be decrypted with a public key if the message was encrypted with the corresponding
private key. This provides authentication, because Bob is the only one who is
supposed to have his private key. When the receiver wants to make sure Bob is the
only one that can read her reply, she will encrypt the response with his public key.
Only Bob will be able to decrypt the message because he is the only one who has the
necessary private key.
Now the receiver can also encrypt her response with her private key instead of
using Bob’s public key. Why would she do that? She wants Bob to know that the
message came from her and no one else. If she encrypted the response with Bob’s
public key, it does not provide authenticity because anyone can get a hold of Bob’s
public key. If she uses her private key to encrypt the message, then Bob can be sure
that the message came from her and no one else. Symmetric keys do not provide
authenticity because the same key is used on both ends. Using one of the secret keys
does not ensure that the message originated from a specific entity.
If confidentiality is the most important security service to a sender, she would
encrypt the file with the receiver’s public key. This is called a secure message format
because it can only be decrypted by the person who has the corresponding private key.
If authentication is the most important security service to the sender, then she would
encrypt the message with her private key. This provides assurance to the receiver that
the only person who could have encrypted the message is the individual who has
possession of that private key. If the sender encrypted the message with the receiver’s
public key, authentication is not provided because this public key is available to
anyone.
Encrypting a message with the sender’s private key is called an open message
format because anyone with a copy of the corresponding public key can decrypt the
message; thus, confidentiality is not ensured.
For a message to be in a secure and signed format, the sender would encrypt
the message with her private key and then encrypt it again with the receiver’s public
key. The receiver would then need to decrypt the message with his own private key
and then decrypt it again with the sender’s public key. This provides confidentiality and authentication for that delivered message. The different encryption methods are
shown in Figure 1.4.
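Writing E_K(M) for encryption of a message M under key K, and letting (S_priv, S_pub) be the sender's key pair and (R_priv, R_pub) the receiver's (notation introduced only for this summary), the three formats just described are:
Secure message format: C = E_{R_pub}(M) (only the holder of R_priv can recover M, giving confidentiality)
Open message format: C = E_{S_priv}(M) (anyone with S_pub can recover M, but only the holder of S_priv could have produced it, giving authentication)
Secure and signed format: C = E_{R_pub}(E_{S_priv}(M)) (both properties; the receiver decrypts first with R_priv and then with S_pub)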

Figure 1.4 Type of security service that will be provided.


Each key type can be used to encrypt and decrypt, so do not get confused and
think the public key is only for encryption and the private key is only for decryption.
They both have the capability to encrypt and decrypt data.
An asymmetric cryptosystem works much slower than symmetric systems, but
can provide confidentiality, authentication, and non-repudiation depending on its
configuration and use. Asymmetric systems also provide for easier and more
manageable key distribution than symmetric systems and do not have the scalability
issues of symmetric systems.
The following outlines the strengths and weaknesses of asymmetric key
systems:
 Strengths

 Better key distribution than symmetric systems
 Better scalability than symmetric systems
 Can provide confidentiality, authentication, and non-repudiation
 Weaknesses
 Works much slower than symmetric systems
The following are examples of asymmetric key algorithms:
 RSA
 Elliptic Curve Cryptosystem (ECC)
 Diffie-Hellman
 El Gamal
 Digital Signature Standard (DSS)

1.3 TYPES OF CRYPTOGRAPHIC ALGORITHMS


There are several ways of classifying cryptographic algorithms. For purposes
of this paper, they will be categorized based on the number of keys that are employed
for encryption and decryption, and further defined by their application and use. The three types of algorithms are illustrated in Figure 1.5.
 Secret Key Cryptography (SKC): Uses a single key for both encryption
and decryption
 Public Key Cryptography (PKC): Uses one key for encryption and
another for decryption
 Hash Functions: Uses a mathematical transformation to irreversibly
"encrypt" information

Figure 1.5 Three types of cryptographic algorithms
1.4 INTRODUCTION TO DES
The Data Encryption Standard (DES) specifies a FIPS-approved cryptographic
algorithm that can be used to protect electronic data. DES algorithm is a symmetric
block cipher that can encrypt (encipher) and decrypt (decipher) information.
Encryption converts data to an unintelligible form called cipher-text; decrypting the
cipher-text converts the data back into its original form, called plaintext.

Figure 1.6 Overall Representation of Encryption and Decryption (64-bit original message → Encryption Algorithm → 64-bit cipher message → Decryption Algorithm → 64-bit original message, with a 64-bit secret key supplied to both algorithms)
Fifteen candidates were accepted, and based on public comments the pool was reduced to five. One of these five algorithms was selected as the forthcoming standard: a slightly modified version of Rijndael.
Rijndael, whose name is based on the names of its two Belgian inventors, Joan Daemen and Vincent Rijmen, is a block cipher, which means that it works on fixed-length groups of bits, called blocks. It takes an input block of a certain size, usually 64 bits, and produces a corresponding output block of the same size. The transformation requires a second input, which is the secret key, with a length of 64 bits. DES is based on a Feistel network built from substitution and permutation operations: a series of mathematical operations that use substitutions (also called S-boxes) and permutations (P-boxes), whose careful definition implies that each output bit depends on every input bit.

1.4.1 BLOCK CIPHER


When a block cipher algorithm is used for encryption and decryption
purposes, the message is divided into blocks of bits. These blocks are then put through
substitution, transposition, and other mathematical functions. The algorithm dictates
all the possible functions available to be used on the message, and it is the key that determines in what order these functions take place.
Strong algorithms make reverse-engineering, or trying to figure out all the functions that took place on the message, practically impossible. It has been said that the
properties of a cipher should contain confusion and diffusion. Different unknown key
values cause confusion, because the attacker does not know these values, and
diffusion is accomplished by putting the bits within the plaintext through many
different functions so that they are dispersed throughout the algorithm. Block ciphers
use diffusion and confusion in their methods.
Advantages of DES:
 With DES, an input message of length 64 bits can be encrypted using a secret key of length 64 bits.
 The cipher key is expanded into a larger key, which is later used for the actual operation.
 The expanded key shall always be derived from the cipher key and never be specified directly.
 DES is very hard to attack or crack.
 DES is faster when compared with other algorithms such as the RSA encryption algorithm.

The Data Encryption Standard (DES) has been developed as a
cryptographic standard for general use by the public. DES was designed with the
following objectives in mind.
 High level of security
 Completely specified and easy to understand
 Cryptographic security does not depend on algorithm secrecy
 Adaptable to diverse applications
 Economical hardware implementation
 High data rates
 Can be validated
 Exportable
1.5 APPLICATION
 This standard may be used by Federal departments and agencies when
an agency determines that sensitive (unclassified) information (as defined in P. L.
100-235) requires cryptographic protection
 High speed ATM/Ethernet/Fiber-Channel switches
 Secure video teleconferencing
 Routers and Remote Access Servers
In addition, this standard may be adopted and used by non-Federal Government
organizations. Such use is encouraged when it provides the desired
security for commercial and private organizations.

1.6 INTRODUCTION TO VLSI


The first digital circuits were designed using electronic components such as vacuum tubes and transistors. Later, Integrated Circuits (ICs) were invented, where a designer could place a digital circuit of fewer than 10 gates on a chip; this scale is called SSI (Small Scale Integration). With the advent of new fabrication techniques, designers could place more than 100 gates on an IC, called MSI (Medium Scale Integration). Designing at this level, one can create digital sub-blocks (adders, multiplexers, counters, registers, etc.) on an IC. The next level is LSI (Large Scale Integration); using this scale of integration, people succeeded in making digital subsystems (microprocessors, I/O peripheral devices, etc.) on a chip.

At this point the design process started getting very complicated: manual conversion from schematic level to gate level, or from gate level to layout level, was becoming a lengthy process, and verifying the functionality of digital circuits at the various levels became critical. This created new challenges for digital designers as well as circuit designers, who felt the need to automate these processes. In parallel, rapid advances in software technology and the development of new higher-level programming languages took place. People were able to develop CAD/CAE (Computer Aided Design/Computer Aided Engineering) tools for designing electronic circuits with the assistance of software programs. Functional verification and logic verification of a design can be done using CAD simulation tools with greater efficiency, and it became very easy for a designer to verify the functionality of a design at various levels.
With the advent of new technology, i.e., the CMOS (Complementary Metal Oxide Semiconductor) process technology, one can fabricate a chip containing more than a million gates. At this point the design process again became critical because of the manual conversion of the design from one level to another. Using the latest CAD tools could solve the problem: with logic synthesis tools, a design engineer can easily translate a higher-level design description into lower levels. This has led to the development of sophisticated electronic products for both consumer and business use.
1.6.1 VLSI BASICS
VLSI stands for "Very Large Scale Integration". This is the field which involves packing more and more logic devices into smaller and smaller areas.
 Simply put, an integrated circuit is many transistors on one chip.
 Design/manufacturing of extremely small, complex circuitry using modified semiconductor material
 An integrated circuit (IC) may contain millions of transistors, each a few µm in size
 Applications are wide ranging: most electronic logic devices

1.6.2 TYPICAL IC DESIGN FLOW:

Figure 1.7: IC Design Flow (Specifications → Behavioral Description → behavioral simulation → behavioral synthesis with constraints → RTL → functional simulation → logic synthesis with library and constraints → gate-level netlist → logic simulation → automatic place and route with layout management → layout → fabrication)


1.6.3 History of Scale Integration:
 Late 40s Transistor invented at Bell Labs

 Late 50s First IC (JK-FF by Jack Kilby at TI)

 Early 60s Small Scale Integration (SSI)

 10s of transistors on a chip


 Late 60s Medium Scale Integration (MSI)
 100s of transistors on a chip
 Early 70s Large Scale Integration (LSI)
 1000s of transistor on a chip

 Early 80s VLSI 10,000s of transistors on a chip (later 100,000s & now
1,000,000s)
 Ultra LSI is sometimes used for 1,000,000s
 SSI - Small-Scale Integration (up to 10^2 transistors)
 MSI - Medium-Scale Integration (10^2-10^3)
 LSI - Large-Scale Integration (10^3-10^5)
 VLSI - Very Large-Scale Integration (10^5-10^7)
 ULSI - Ultra Large-Scale Integration (>=10^7)

1.6.4 Advantages of ICs over discrete components:


While we will concentrate on integrated circuits, the properties of
integrated circuits-what we can and cannot efficiently put in an integrated circuit-
largely determine the architecture of the entire system. Integrated circuits improve
system characteristics in several critical ways.

ICs have three key advantages over digital circuits built from discrete
components:
Size: Integrated circuits are much smaller; both transistors and wires are shrunk to micrometer sizes, compared to the millimeter or centimeter scales of discrete components. Small size leads to advantages in speed and power consumption, since smaller components have smaller parasitic resistances, capacitances, and inductances.
Speed: Signals can be switched between logic 0 and logic 1 much more quickly within a chip than they can between chips. Communication within a chip can occur hundreds of times faster than communication between chips on a printed circuit board. The high speed of circuits on-chip is due to their small size; smaller components and wires have smaller parasitic capacitances to slow down the signal.
Power consumption: Logic operations within a chip also take much less power. Once again, lower power consumption is largely due to the small size of circuits on the chip; smaller parasitic capacitances and resistances require less power to drive them.
Applications
 Electronic system in cars.
 Digital electronics control VCRs
 Transaction processing system, ATM
 Personal computers and Workstations
 Medical electronic systems. Etc….

1.6.5 APPLICATIONS OF VLSI:


Electronic systems now perform a wide variety of tasks in daily life.
Electronic systems in some cases have replaced mechanisms that operated
mechanically, hydraulically, or by other means; electronics are usually smaller, more
flexible, and easier to service. In other cases electronic systems have created totally
new applications.
Electronic systems perform a variety of tasks, some of them visible,
some more hidden:
 Personal entertainment systems such as portable MP3 players and
DVD players perform sophisticated algorithms with remarkably little energy.
 Electronic systems in cars operate stereo systems and displays; they
also control fuel injection systems, adjust suspensions to varying terrain, and perform
the control functions required for anti-lock braking (ABS) systems.
 Digital electronics compress and decompress video, even at high-definition data rates, on-the-fly in consumer electronics.
 Personal computers and workstations provide word-processing,
financial analysis, and games. Computers include both central processing units
(CPUs) and special-purpose hardware for disk access, faster screen display, etc.
 Medical electronic systems measure bodily functions and perform
complex processing algorithms to warn about unusual conditions. The availability of
these complex systems, far from overwhelming consumers, only creates demand for
even more complex systems.
The growing sophistication of applications continually pushes the design and
manufacturing of integrated circuits and electronic systems to new levels of
complexity. And perhaps the most amazing characteristic of this collection of systems
is its variety: as systems become more complex, we build not a few general-purpose
computers but an ever wider range of special-purpose systems. Our ability to do so is
a testament to our growing mastery of both integrated circuit manufacturing and
design, but the increasing demands of customers continue to test the limits of design
and manufacturing.

1.7 INTRODUCTION TO FPGA
FPGA stands for Field Programmable Gate Array; it consists of an array of logic modules, I/O modules and routing tracks (programmable interconnect). An FPGA can be configured by the end user to implement specific circuitry. Speeds were once limited to about 100 MHz, but at present they reach the GHz range.
The main applications are DSP, FPGA-based computers, logic emulation, ASIC and ASSP. FPGAs are programmed mainly using SRAM (Static Random Access Memory). It is volatile, and the main advantage of using SRAM programming technology is re-configurability. Issues in FPGA technology are the complexity of the logic element, clock support, I/O support and interconnections (routing).
In this work, the design of a DES encryption algorithm written in Verilog HDL is synthesized on an FPGA family through the XILINX ISE tool. This process includes the following:
 Translate
 Map
 Place and Route
1.7.1 FPGA FLOW
The basic implementation of design on FPGA has the following steps.
 Design Entry
 Logic Optimization
 Technology Mapping
 Placement
 Routing
 Programming Unit
 Configured FPGA
The above list shows the basic steps involved in implementation. The initial design entry may be Verilog HDL, a schematic or a Boolean expression. The optimization of the Boolean expression is carried out by considering area or speed.

Figure 1.8 Logic Block
In technology mapping, the optimized Boolean expressions are transformed into FPGA logic blocks, referred to as slices; area and delay optimization take place here. During placement, algorithms are used to place each block in the FPGA array. Routing then assigns the programmable FPGA wire segments to establish connections among the FPGA blocks. The configuration of the final chip is made in the programming unit.

CHAPTER 2
DATA ENCRYPTION STANDARD ALGORITHM

2.1 INTRODUCTION
On May 15, 1973, during the reign of Richard Nixon, the National Bureau of
Standards (NBS) published a notice in the Federal Register soliciting proposals for
cryptographic algorithms to protect data during transmission and storage. The notice
explained why encryption was an important issue. Over the last decade, there has
been an accelerating increase in the accumulations and communication of digital data
by government, industry and by other organizations in the private sector. The contents
of these communicated and stored data often have very significant value and/or
sensitivity. It is now common to find data transmissions which constitute funds
transfers of several million dollars, purchase or sale of securities, warrants for arrests
or arrest and conviction records being communicated between law enforcement
agencies, airline reservations and ticketing representing investment and value both to
the airline and passengers, and health and patient care records transmitted among
physicians and treatment centers.
The increasing volume, value and confidentiality of these records regularly
transmitted and stored by commercial and government agencies has led to heightened
recognition and concern over their exposures to unauthorized access and use. This
misuse can be in the form of theft or defalcations of data records representing money,
malicious modification of business inventories or the interception and misuse of
confidential information about people. The need for protection is then apparent and
urgent. It is recognized that encryption (otherwise known as scrambling, enciphering
or privacy transformation) represents the only means of protecting such data during
transmission and a useful means of protecting the content of data stored on various
media, providing encryption of adequate strength can be devised and validated and is
inherently integrable into system architecture. The National Bureau of Standards
solicits proposed techniques and algorithms for computer data encryption. The Bureau
also solicits recommended techniques for implementing the cryptographic function:
for generating, evaluating, and protecting cryptographic keys; for maintaining files
encoded under expiring keys; for making partial updates to encrypted files; and for mixed clear and encrypted data to permit labeling, polling, routing, etc. The Bureau, in its
role for establishing standards and aiding government and industry in assessing
technology, will arrange for the evaluation of protection methods in order to prepare
guidelines.
NBS waited for the responses to come in. It received none until August 6,
1974, three days before Nixon's resignation, when IBM submitted a candidate that it
had developed internally under the name LUCIFER. After evaluating the algorithm
with the help of the National Security Agency (NSA), the NBS adopted a
modification of the LUCIFER algorithm as the new Data Encryption Standard (DES)
on July 15, 1977. DES was quickly adopted for non-digital media, such as voice-
grade public telephone lines. Within a couple of years, for example, International
Flavours and Fragrances was using DES to protect its valuable formulas transmitted
over the phone. Meanwhile, the banking industry, which is the largest user of
encryption outside government, adopted DES as a wholesale banking standard.
Standards for the wholesale banking industry are set by the American National
Standards Institute (ANSI). ANSI X3.92, adopted in 1980, specified the use of the
DES algorithm.
The Data Encryption Standard (DES) consists of the Data Encryption Algorithm described below. Devices implementing it can be designed in such a way that they may be used in a computer system or network to provide cryptographic protection to binary-coded data. The method of implementation will depend on the application and environment. The devices shall be implemented in such a way that they may be tested and validated as accurately performing the transformations specified in the algorithm.
The main objectives of DES are a high level of security, adaptability to diverse applications, efficiency and exportability. In this project work, plaintext of 64 bits is given as input to the encryption block, in which encryption of the data is performed, and the 64-bit cipher text is obtained as output. A key length of 64 bits is used in the process of encryption. The DES algorithm is a block cipher that uses the same binary key both to encrypt and decrypt data blocks, and is therefore called a symmetric key cipher. A commonly accepted definition of a good symmetric key algorithm, such as DES, is that there exists no attack better than key exhaustion to read an encrypted message.

Many times when data is exchanged electronically the privacy of the data is
important. The use of encryption prevents unwanted persons from viewing data that is considered confidential and would be dangerous if made known to them. Encryption is the process of transforming plaintext data that can be read by
anyone, to cipher text data that can be read only by a person with a secret decryption
key. A cryptosystem is the encryption scheme that converts the plaintext into cipher text. The Data Encryption Algorithm is a commonly used method to encrypt and decrypt data. It was created by IBM and announced by the federal government in 1977. The DES algorithm takes a 64-bit plaintext and converts it into a 64-bit cipher text. DES operates in four different modes of operation. They are:
 Electronic Code Book mode (ECB mode)
 Cipher Block Chaining mode (CBC mode)
 Cipher Feedback mode (CFB mode)
 Output Feedback mode (OFB mode)

ECB (Electronic Code Book)


This is the regular DES algorithm. ECB is the fastest and easiest to implement,
making it the most common mode of DES.
CBC (Cipher Block Chaining)
This mode of operation is more secure than ECB.
CFB (Cipher Feedback)
This mode of operation is similar to CBC and is very secure, but it is slower
than ECB due to the added complexity.
OFB (Output Feedback)
This operation is less secure than CFB mode.
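As a point of comparison between the first two modes, ECB encrypts every 64-bit block independently, while CBC chains each block to the previous cipher text block; the standard definitions (with C_0 an initialization vector) are:
ECB: C_i = E_K(P_i)
CBC: C_i = E_K(P_i ⊕ C_{i-1})
This chaining is why identical plaintext blocks produce identical cipher text blocks in ECB but not in CBC.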

2.2 TERMINOLOGIES
The various terminologies and their definitions used in this project are listed in Table 2.1 below.
1. DES: Data Encryption Standard.
2. Bit: A binary digit having a value of 0 or 1.
3. Block: Sequence of binary bits that comprise the input, output, State and Round Key. The length of a sequence is the number of bits it contains. Blocks are also interpreted as arrays of bytes.
4. Cipher: Series of transformations that converts plaintext to cipher text using the Cipher Key.
5. Cipher Key: Secret, cryptographic key that is used by the Key Expansion routine to generate a set of Round Keys; can be pictured as a rectangular array of bytes, having four rows and Nk columns.
6. Cipher text: Data output from the Cipher or input to the Inverse Cipher.
7. Key Schedule: Routine used to generate a series of Round Keys from the Cipher Key.
8. Plaintext: Data input to the Cipher or output from the Inverse Cipher.
9. Round Key: Round keys are values derived from the Cipher Key using the Key Expansion routine; they are applied to the State in the Cipher and Inverse Cipher.
10. S-box: Non-linear substitution table used in several bit substitution transformations.
Table 2.1 Terminologies and their Definitions


2.3 Literature Survey on DES
The Data Encryption Standard (DES) is the most well-known symmetric-key block cipher (if the same key is used for both encryption and decryption, the system is said to be symmetric; ciphers that encrypt data in blocks of a specific size are called block ciphers). DES is defined by the American Federal Information Processing Standard (FIPS) 46-2. It is also known as the Data Encryption Algorithm (DEA) by ANSI and as DEA-1 by the ISO.
Recognized world-wide, DES set a precedent for cryptography in the mid 1970s, and is still standard around the world to protect the privacy of sensitive information. It was the first commercial-grade modern algorithm with openly and fully specified implementation details. The design of the DES algorithm uses two general concepts of cryptography:
 Product Cipher: It combines two or more transformations to build a complex encryption function by mixing several simple operations. The basic operations include transpositions, translations (e.g. XOR), linear arithmetic operations and simple substitutions. The resultant cipher is more secure than the individual components.
 Feistel Cipher: It is an iterated cipher method, i.e. it repeats some steps of operations sequentially in rounds. In each round the data is processed with a different key. For such a cipher, if the same steps are followed with the keys in reverse order, the original text can be retrieved by providing the cipher text at the input.

2.4 DES ALGORITHM


The DES is an iterated symmetric block cipher, which means that,
 DES works by repeating the same defined steps multiple times.
 DES is a secret key encryption algorithm.
 DES operates on a fixed number of bits
The algorithm is designed to encipher and decipher blocks of data consisting
of 64 bits under control of a 64-bit key. Deciphering must be accomplished by using
the same key as for enciphering, but with the schedule of addressing the key bits
altered so that the deciphering process is the reverse of the enciphering process. A
block to be enciphered is subjected to an initial permutation IP, then to a complex
key-dependent computation and finally to a permutation which is the inverse of the
initial permutation IP-1. The key-dependent computation can be simply defined in
terms of function f, called the cipher function, and a function KS, called the key
schedule.
A description of the computation is given first, along with details as to how the
algorithm is used for encipherment. Next, the use of the algorithm for decipherment is
described. Finally, a definition of the cipher function f is given in terms of primitive
functions which are called the selection functions Si, the permutation function P, and the key schedule KS of the algorithm.

2.4.1 SPECIFICATION
For the DES algorithm, the length of the input block, the output block and the State is 64 bits. The length of the Cipher Key, K, is 64 bits. During the execution of the algorithm, 16 rounds are performed.

2.4.2 DESCRIPTION
The term “Data Encryption Standard” refers to the encryption of plaintext (input data) on the basis of the standard that was developed. The output of DES is encrypted data called ciphertext. Basically, encryption deals with the transformation of plaintext to ciphertext, whereas decryption deals with the transformation of ciphertext back to plaintext (the original message). The critical piece of information used for encryption and decryption is known as the key. The algorithm that is used to encrypt the data is a fixed one, which is why it is said to be a standard.
Figure 2.1 below shows the top-level blocks present in the DES algorithm, along with the basic inputs to the system and the outputs from the system. As per the standard, an initial permutation is performed, after which the 16 rounds with a 64-bit key are carried out; finally the result of the last round is given to the inverse initial permutation, which is performed separately. For the Cipher, the DES algorithm uses a round function that is composed of different transformations, which are shown in Table 2.2.

Figure 2.1 Top Level Block Diagram of DES Algorithm


IP Initial Permutation
IP-1 Inverse Permutation

PC1 Permuted Choice-1

PC2 Permuted Choice-2


E Expansion Permutation
P Permutation
Table 2.2 Description of DES algorithm Blocks
The above-mentioned functions are carried out for every individual round. Based on the key provided, a new set of round keys is generated in the key schedule block and given to each round as input.
The implementation is built from four kinds of operations:
1. Permutation
2. Shifting
3. S-box
4. Instantiations
 The first step of the implementation is the set of permutation tables. Each permutation table changes the locations of the bits and then gives the output. There are six types of permutation (see Table 2.2), and the bit relocation depends on the type, so each type gives a different form of output.
 The next step of the algorithm is the key shifting: the key bits are rotated 16 times using a left circular shift, so the 56 key bits are shifted in a continuous way.
 Each S-box contains a ROM, and each ROM location holds a value addressed by the incoming bits; the S-boxes are created individually and each box is wired into the design.
 In the last step, all the permutation tables and S-boxes are wired together, i.e. instantiated, to create the whole block diagram.

2.5 ENCRYPTION
 Substitution-permutation algorithm:
o 64-bit input and output blocks
o 56-bit key (with an additional 8 parity bits)
o Information data is cycled 16 times through a set of substitution and
permutation transformations: highly non-linear input-output relationship
 Very high throughput rates achievable (up to 100 M bits/s)
 Availability of economical hardware to implement DES
 Low to medium security applications (e.g. secure speech
communications)

The DES algorithm flow is discussed with the help of Figure 2.2, which is shown below. Figure 2.2 represents the block diagram of the DES algorithm and explains the flow in which it works. The main blocks present in a single round are as follows:
 Expansion permutation
 Permuted choice – 2
 Substitution Box
 Permutation

Figure 2.2 DES Algorithm Blocks


DES processes input data of block size 64 bits and a 64-bit key to produce a 64-bit cipher text. Out of the 64 bits of the key, every eighth bit is used as a parity checking bit. So, an effective key of length 56 bits takes part in the algorithm to encrypt the data. This key size provides 2^56 possible keys and thus flexibility for having multiple keys. For transposing bits at various stages of the encryption flow, various permutation tables are defined.
The 64-bit data is sent to the ‘initial permutation’ block, which provides a 64-bit output by re-ordering the bits. The 64-bit key fed to ‘permuted choice 1’ provides a diffused output of 56 bits by ignoring the bits with sequence numbers in multiples of 8. The two outputs from these two blocks are fed to the first round in the
sequence of 16-round blocks.

Figure 2.3 DES Algorithm with 16 Rounds


The round block treats each of its inputs as two equal parts: the data bits are split into parts Li and Ri of 32 bits each, with Li holding the left 32 bits and Ri the remaining 32 bits. Similarly, the key bits are broken into two parts Ci and Di, each of length 28 bits. Depending on the round count, Ci and Di are processed with either a 1-bit or 2-bit left circular shift to give the outputs Co and Do, respectively. These processed parts of the key are used to generate the 48-bit round key through ‘permuted choice 2’.
To get processed with the 48-bit round key, Ri is expanded to this size through the ‘expansion permutation’. The output from the expansion permutation block is XOR-ed with the round key to get the 48-bit address for the substitution box (s-box). The sequence of 48 bits to the s-box is divided into 8 sections of 6 bits each. Each set of 6 bits refers to a block of memory locations: the 6 bits serve as the address of a specific memory location and the 4-bit data stored in that location is read. The 4-bit outputs from the eight sets are concatenated to get a 32-bit output from the s-box. The substitution box thus replaces every 6 bits of data with 4 bits.
The 32-bit sequence is sent to the ‘permutation function’ to provide more
diffusion of the bits. The 32-bit output from the permutation function is XOR-ed with
the 32-bit Li. This output is connected to the Ri of the next round. The Li of the next
round is connected to the Ri of this round. Thus, the main operation of processing is
done only on 32-bits of the data and the output is reversed to ensure operation of the
other 32-bits in the next round.
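The data path just described is exactly the standard Feistel round, which can be written compactly as (f is the round function of Section 2.6 and K_i the 48-bit round key):
L_i = R_{i-1}
R_i = L_{i-1} ⊕ f(R_{i-1}, K_i)
The crossover of the two halves at the end of each round corresponds to the first equation.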
The Co and Do of the round are connected to Ci and Di of the next round,
respectively. Out of 16 rounds, 12 rounds have 2 left circular shifts while the rest 4
have only 1. This in totality gives a 28-bit left circular shift for the two data sets and
both the sequences return to their initial values after the end of the 16 rounds.
The 64 data bits from round 16 undergo a 32-bit swap and are fed to the ‘inverse initial permutation’ block. This final stage of the encryption provides another transposition before giving the final 64-bit sequence, called the cipher text.
All the rounds are clearly represented in Figures 2.2 and 2.3, together with the explanation of the algorithm given above. All the operations, such as the permutations, s-boxes, expansion permutation and the key schedule for key generation, are carried out with respect to predefined tables, which are shown in the Appendix for reference.

2.6 FUNCTIONAL (F) BLOCK

The F-function, depicted in Figure 2.4, operates on half a block (32 bits) at a
time and consists of four stages:

Fig: 2.4 Functional Block Diagram

Expansion: the 32-bit half-block is expanded to 48 bits using the expansion


permutation, denoted E in the diagram, by duplicating half of the bits. The output
consists of eight 6-bit (8 * 6 = 48 bits) pieces, each containing a copy of 4
corresponding input bits, plus a copy of the immediately adjacent bit from each of the
input pieces to either side.
Key mixing: the result is combined with a sub key using an XOR operation.
Sixteen 48-bit sub keys, one for each round, are derived from the main key using
the key schedule (described below).
Substitution: after mixing in the sub key, the block is divided into eight 6-bit
pieces before processing by the S-boxes or substitution boxes. Each of the eight S-
boxes replaces its six input bits with four output bits according to a non-linear
transformation, provided in the form of a lookup table. The S-boxes provide the core
of the security of DES; without them, the cipher would be linear and trivially breakable.
Permutation: finally, the 32 outputs from the S-boxes are rearranged
according to a fixed permutation, the P-box. This is designed so that, after
permutation, each S-box's output bits are spread across four different S-boxes in the
next round.
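The four stages above map naturally onto a structural Verilog description. The sketch below is only illustrative: it assumes sub-modules named expansion, sbox and p_perm (hypothetical names, with a BOX parameter selecting which of the eight S-box tables a ROM holds) and shows just how the stages are wired together, not the exact code used in this project.
// Sketch of the DES round function f(R, K): expand, key-mix, substitute, permute.
// Module and port names are illustrative, not the exact ones used in this project.
module f_function (
  input  [1:32] r,      // right half of the data block
  input  [1:48] k,      // 48-bit round key
  output [1:32] f_out   // 32-bit result of the round function
);
  wire [1:48] expanded, mixed;
  wire [1:32] substituted;

  expansion e_box (.r(r), .e(expanded));     // E bit-selection: 32 -> 48 bits
  assign mixed = expanded ^ k;               // key mixing: XOR with the round key

  genvar i;                                  // eight S-boxes, each 6 bits -> 4 bits
  generate
    for (i = 0; i < 8; i = i + 1) begin : sb
      sbox #(.BOX(i + 1)) s (.addr(mixed[6*i+1 : 6*i+6]),
                             .data(substituted[4*i+1 : 4*i+4]));
    end
  endgenerate

  p_perm p_box (.s(substituted), .p(f_out)); // fixed 32-bit permutation P
endmodule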
The alternation of substitution from the S-boxes, and permutation of bits from
the P-box and E-expansion provides so-called "confusion and diffusion" respectively,
a concept identified by Claude Shannon in the 1940s as a necessary condition for a
secure yet practical cipher.
2.7 EXPANSION PERMUTATION BOX

Fig: 2.5 Expansion Box

This operation expands the right half of the data from 32 to 48 bits. Because this operation changes the order of the bits as well as repeating certain bits, it is known as an expansion permutation. This operation has two purposes: it makes the right half the same size as the round key for the XOR operation, and it provides a longer result that can be compressed during the substitution operation. However, neither of those is its main cryptographic purpose. By allowing one bit to affect two substitutions, the dependency of the output bits on the input bits spreads faster. This is called an avalanche effect. DES is designed so that every bit of the cipher text depends on every bit of the plaintext and every bit of the key as quickly as possible.
Figure 2.5 defines the expansion permutation. This is sometimes called the E-box. For each 4-bit input block, the first and fourth bits each represent two bits of the output block, while the second and third bits each represent one bit of the output block. For example, the bit in position 3 of the input block moves to position 4 of the output block, and the bit in position 21 of the input block moves to positions 30 and 32 of the output block. Although the output block is larger than the input block, each input block generates a unique output block.
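Because the E-box only re-orders and duplicates bits, it reduces to pure wiring in Verilog. The sketch below follows the E bit-selection table of FIPS 46-3, using DES bit numbering 1..32 from the left; the module name is illustrative.
// Expansion permutation E: 32-bit right half -> 48 bits, built by re-ordering
// and duplicating bits according to the E bit-selection table of FIPS 46-3.
module expansion (
  input  [1:32] r,   // right half of the data, DES bit numbering 1..32
  output [1:48] e    // expanded 48-bit output
);
  assign e = { r[32], r[1:5],      // 32  1  2  3  4  5
               r[4:9],             //  4  5  6  7  8  9
               r[8:13],            //  8  9 10 11 12 13
               r[12:17],           // 12 13 14 15 16 17
               r[16:21],           // 16 17 18 19 20 21
               r[20:25],           // 20 21 22 23 24 25
               r[24:29],           // 24 25 26 27 28 29
               r[28:32], r[1] };   // 28 29 30 31 32  1
endmodule
With this ordering, input bit 3 appears at output position 4, and input bit 21 at output positions 30 and 32, matching the example given above.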
2.8 S-BOX INTERNAL STRUCTURE

Fig: 2.6 S-Box Internal Structure
After the compressed key is XORed with the expanded block, the 48-bit result moves to a substitution operation. The substitutions are performed by eight substitution boxes, or S-boxes. Each S-box has a 6-bit input and a 4-bit output, and there are eight different S-boxes. The 48 bits are divided into eight 6-bit blocks. Each block is operated on by a separate S-box: the first block is operated on by S-box 1, the second block by S-box 2, and so on.
Each S-box is a table of 4 rows and 16 columns. Each entry in the box is a 4-bit number. The 6 input bits of the S-box specify the row and column in which to look for the output.
The input bits specify an entry in the S-box in a very particular manner. Consider an S-box input of 6 bits, labeled b1, b2, b3, b4, b5, and b6. Bits b1 and b6 are combined to form a 2-bit number, from 0 to 3, which corresponds to a row in the table. The middle 4 bits, b2 through b5, are combined to form a 4-bit number, from 0 to 15, which corresponds to a column in the table.
For example, assume that the input to the sixth S-box (bits 31 through 36 of the XOR output) is 110011. The first and last bits combine to form 11, which corresponds to row 3 of the sixth S-box. The middle 4 bits combine to form 1001, which corresponds to column 9 of the same S-box. The entry under row 3, column 9 of S-box 6 is 14. The value 1110 is substituted for 110011.
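In hardware the row/column selection is just a re-grouping of the six incoming bits before the ROM lookup. A minimal sketch of that addressing step is shown below; signal and module names are illustrative.
// Forming the S-box ROM address from the 6 incoming bits b1..b6:
//   row    = {b1, b6}          (2 bits, selects one of 4 rows)
//   column = {b2, b3, b4, b5}  (4 bits, selects one of 16 columns)
module sbox_addr (
  input  [1:6] b,        // the 6 bits feeding one S-box
  output [5:0] rom_addr  // {row, column}, indexes a 64-entry ROM
);
  assign rom_addr = { b[1], b[6], b[2:5] };
endmodule
For the worked example above (input 110011), this gives row 3 and column 9, i.e. address 57 in the 64-entry table.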
It is, of course, far easier to implement the S-boxes in software as 64-entry
arrays. It takes some rearranging of the entries to do this, but that’s not hard.
However, this way of describing the S-boxes helps visualize how they work. Each S-box can be viewed as a substitution function on a 4-bit entry: b2 through b5 go in, and a 4-bit result comes out. Bits b1 and b6 come from neighboring blocks; they select one out of four substitution functions available in the particular S-box.
The S-box substitution is the critical step in DES. The algorithm’s other operations are linear and easy to analyze. The S-boxes are nonlinear and, more than anything else, give DES its security.
The result of this substitution phase is eight 4-bit blocks, which are recombined into a single 32-bit block. This block moves to the next step, the P-box permutation.
2.9 P-BOX PERMUTATION
The 32 bit output of the S-box substitution is permuted according to a P-box.
This permutation maps each input bit to an output position, no bits are used twice and
no bits are ignored. This is called a straight permutation or just a permutation. For
example bit 21 moves to bit 4, while bit 4 moves to bit 31. Finally the result of the P-
box permutation is XORed with the left half of the initial 64 bit block. Then the left
and right halves are switched and another round begins.
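Like the other permutations, the P-box is pure re-wiring in hardware; the two example bit moves mentioned above look like this in Verilog (only two of the 32 assignments are shown, and the module name is illustrative):
// Straight P-box permutation as re-wiring (two of the 32 bit moves shown).
module p_box (
  input  [1:32] s_out,  // 32-bit output of the S-box stage
  output [1:32] p_out   // permuted 32-bit result
);
  assign p_out[4]  = s_out[21];  // input bit 21 moves to output position 4
  assign p_out[31] = s_out[4];   // input bit 4 moves to output position 31
  // ...the remaining 30 assignments follow the P table of FIPS 46-3
endmodule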

2.10 CRACKING DES


Before DES was adopted as a national standard, during the period NBS was
soliciting comments on the proposed algorithm, the creators of public key
cryptography, Martin Hellman and Whitfield Diffie, registered some objections to the
use of DES as an encryption algorithm. Hellman wrote: "Whit Diffie and I have
become concerned that the proposed data encryption standard, while probably secure
against commercial assault, may be extremely vulnerable to attack by an intelligence
organization".
Diffie and Hellman then outlined a "brute force" attack on DES. (By "brute
force" is meant that you try as many of the 2^56 possible keys as you have to before
decrypting the cipher text into a sensible plaintext message.) They proposed a special
purpose "parallel computer using one million chips to try one million keys each" per
second, and estimated the cost of such a machine at $20 million.

Fast forward to 1998. Under the direction of John Gilmore of the EFF, a team
spent $220,000 and built a machine that can go through the entire 56-bit DES key
space in an average of 4.5 days. On July 17, 1998, they announced they had cracked a
56-bit key in 56 hours. The computer, called Deep Crack, uses 27 boards each
containing 64 chips, and is capable of testing 90 billion keys a second.
Despite this, as recently as June 8, 1998, Robert Litt, principal associate
deputy attorney general at the Department of Justice, denied it was possible for the
FBI to crack DES: “Let me put the technical problem in context: It took 14,000 Pentium computers working for four months to decrypt a single message. We are not just talking FBI and NSA, we are talking about every police department.”
Cryptography expert Bruce Schneier responded that the FBI is either incompetent or lying, or both. Schneier went on to say: “The only solution here is to pick an algorithm with a longer key; there isn't enough silicon in the galaxy or enough time before the sun burns out to brute-force triple-DES.”

CHAPTER 3
DES ALGORITHM IMPLEMENTATION
3.1 INTRODUCTION
The DES is a block cipher. This means that the number of bits that it encrypts is fixed. DES encrypts blocks of 64 bits at a time; no other block sizes are presently part of the DES standard. If the data being encrypted is larger than the specified block size, then DES is executed for each 64-bit block. This also means that DES has to encrypt a minimum of 64 bits: if the plain text is smaller than 64 bits, it must be padded. Simply said, the block is a reference to the bits that are processed by the algorithm.
The Data Encryption Standard algorithm, including encryption, is implemented using Verilog HDL, and its functionality is verified in the ModelSim tool with proper test cases.

3.2 IMPLEMENTATION REQUIREMENTS


During the implementation, different parameters are required, which are discussed as follows.
Input Data Length Requirements
An implementation of the DES algorithm takes input data (plain text) of length 64 bits, which acts as the primary input to the encryption block.
Key Length Requirements
In this DES implementation the input key is chosen to be 64 bits. This also acts as a primary input to the encryption block.
Keying Restrictions
A small number of weak and semi-weak keys have been identified for the DES algorithm; apart from avoiding these, there is effectively no restriction on key selection.
Parameterization of Block Size and Round Number
Here, since the input data and the input key lengths are 64 bits, the round number is 16, as per the DES algorithm standard.

3.3 NOTATION AND CONVENTIONS


The following notations and conventions are used in this implementation of the DES algorithm.
HEX
Hexadecimal denotes numbers in base 16. This simply means that
the highest number that can be represented in a single digit is 15, rather than the usual
9 of the decimal (base 10) system. All values in this document are therefore
represented in the hexadecimal number system.
Inputs and Outputs
The input and output for the DES algorithm each consist of sequences of 64
bits (digits with values of 0 or 1). These sequences will sometimes be referred to as
blocks and the number of bits they contain will be referred to as their length. The
Cipher Key for the DES algorithm is a sequence of 64 bits. Other input and output
lengths are not permitted by this standard.
The bits within such sequences are numbered starting at zero and ending at
one less than the sequence length (block length or key length). The number i attached
to a bit is known as its index and lies in the range 0 ≤ i < 64 for both the block and
the key (as specified above).

3.4 IMPLEMENTATION OF DES ALGORITHM


In DES all the operation sequences are fixed. So, the steps for encryption of a
64-bit data block can be coded in HDL and then simulated to check the results. All the
operations involved in DES are coded separately. These are connected by proper
instantiations and finally operate under the same clock. The algorithm
implementation in Verilog is divided into four major operations:
 Permutation Functions: After studying the DES algorithm, it is
found that there are many stages of permutation in the entire encryption
flow. This function re-orders the bits of the sequence at its input. The same kind of
function is used to expand the bit size by repeating some input bits at the output, or to
reduce the output bit size by ignoring some of the input bits, as required by the stage.

For each permutation function the corresponding input and output bit-sizes
are specified in the module. The output ports are connected to the corresponding
input ports, as required for the transposition of bits in that stage. This is equivalent to
re-wiring the bit connections in a different fashion.

Figure 3.1 Top level of DES Algorithm Flow (64-bit plain text → initial
permutation → cascade connection of 16 rounds → inverse initial permutation →
64-bit cipher text; the 64-bit key passes through Permutation Choice 1, and flip-flops
at the output hold the values)


 Left Shift Operations: In each round the two 28-bit halves of the key are
left circular-shifted by 1 or 2 bits. This is again a re-ordering of bits. For the
implementation, a parameter "sc" is defined in the round module to represent the shift
count. When the round is instantiated in the top module of the DES
algorithm, the parameter is overridden, so the correct shift for each round becomes
part of the implemented logic.
 Substitution Function: The s-boxes in the round operation provide the
substitution operation. Each 6-bit set is replaced by 4 bits according to the value stored
in the ROM location addressed by those 6 input bits. There are 8 s-boxes and hence a
ROM block for each is specified. Each ROM block has 64 locations, addressed by 6
bits, with 4 bits of data stored in each location.
 Instantiations: All the different sections being implemented separately
are instantiated at the top module and connected. The connection amongst the
instances also has re-wiring done to get the proper swap operation required of the bits
without coding for it separately. This is equivalent to cross-wiring in implementation
terms.
After understanding the algorithm used in DES, its implementation turns out to
be quite simple in terms of complexity. The main challenge lies in the re-wiring at
multiple places. The parameterized left-shift operation provides the flexibility to
control the number of shifts required in each round with ease. A minimal Verilog
sketch of these building blocks is given below.
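The fragment below is illustrative only: the module names, port names and the
example permutation pattern are assumptions, not the exact code of this project.

// Illustrative 8-bit permutation: the output is simply a re-wiring of the
// input bits (no logic), mirroring how IP, PC1, PC2 and E are coded.
module perm_example (
    input  wire [7:0] din,
    output wire [7:0] dout
);
    // Example pattern only; the real tables are given in the appendix.
    assign dout = {din[1], din[5], din[2], din[0],
                   din[3], din[7], din[4], din[6]};
endmodule

// Parameterized left circular shift of a 28-bit key half; the shift
// count "sc" (1 or 2) is overridden when the round is instantiated.
module key_rotate #(parameter sc = 1) (
    input  wire [27:0] c_in,
    output wire [27:0] c_out
);
    assign c_out = (c_in << sc) | (c_in >> (28 - sc));
endmodule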

Figure 3.2 Detailed Flow for Round Algorithm (a. top level for the round;
b. S-box top level; c. single S-box). The S-box top level comprises 8 individual
s-boxes, one of which is enlarged in part c. Each s-box is a ROM with a 6-bit address
and a 4-bit data output.

The round logic shown in part a has permutation choice 2, the expansion
permutation, the s-boxes and the permutation function. The S-box top level is
enlarged in part b.
Implementation of the expansion permutation and permutation choice functions
provided clarity about how a bit sequence can be expanded or reduced. The
implementation of the substitution box provided know-how about programming a
ROM and reading the entry corresponding to the address supplied.
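A single s-box can be described as a small ROM read, for example as below. This is
a minimal sketch: the module and port names are assumptions, and only the first few
entries of the standard S1 table (reproduced in the appendix) are shown.

// One DES s-box modeled as a 64 x 4 ROM: a 6-bit address selects a
// 4-bit substitution value. Entries follow the standard S-box tables
// given in the appendix (only a few are shown here).
module sbox1 (
    input  wire [5:0] addr,   // outer bits select the row, middle 4 bits the column
    output reg  [3:0] data
);
    always @(*) begin
        case ({addr[5], addr[0], addr[4:1]})   // {row, column}
            6'd0:  data = 4'd14;   // row 0, column 0
            6'd1:  data = 4'd4;    // row 0, column 1
            6'd2:  data = 4'd13;   // row 0, column 2
            6'd3:  data = 4'd1;    // row 0, column 3
            // ... the remaining 60 entries follow the appendix table ...
            default: data = 4'd0;  // placeholder for the entries omitted here
        endcase
    end
endmodule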
The functional verification was carried out for all the test cases, and the
RTL model was then taken to the synthesis process using the Xilinx tool.
Synthesis Process
 The synthesis process is carried out by giving the RTL model as
the input to the tool. The implementation targets a Virtex-2 board.
 Hence the Virtex-2 device is selected, the whole process flow
is carried out in the Xilinx tool, and finally the BIT file is generated, which is used
to program the board.
3.5 GENERAL IMPLEMENTATION FLOW
The generalized implementation flow diagram of the project is
represented in figure 3.3. Initially the market research should be carried out which
covers the previous version of the design and the current requirements on the design.
Based on this survey, the specification and the architecture must be identified. Then
the RTL modeling should be carried out in Verilog HDL with respect to the identified
architecture. Once the RTL modeling is done, it should be simulated and verified for
all the cases. The functional verification should meet the intended architecture and
should pass all the test cases.

Figure 3.3 General Implementation Flow Diagram

Once the functional verification is complete, the RTL model is taken
to the synthesis process. Three operations are carried out in the synthesis process:
 Translate
 Map
 Place and Route
The developed RTL model is first translated into a Boolean-equation
form that the tool understands. These translated equations are then mapped to the
target library, that is, mapped to the hardware primitives. Once the mapping is done,
the resulting elements are placed and routed. Before these processes, constraints can
be given in order to optimize the design.
Finally the BIT file is generated; it holds the design information in
binary format and can be downloaded to the FPGA board.

3.6 KEY GENERATION


Key generation is the process of generating keys in cryptography. A key is
used to encrypt and decrypt whatever data is being encrypted/decrypted. Modern
cryptographic systems include symmetric-key algorithms (such as DES) and public-
key algorithms (such as RSA). Symmetric-key algorithms use a single shared key;
keeping data secret requires keeping this key secret. Public-key algorithms use
a public key and a private key. The public key is made available to anyone. A sender
encrypts data with the public key, and only the holder of the private key can decrypt
this data.
Since public-key algorithms tend to be much slower than symmetric-key
algorithms, modern systems such as TLS and SSH use a combination of the two: one
party receives the other's public key and encrypts a small piece of data (either a
symmetric key or some data used to generate it). The remainder of the conversation
uses a (typically faster) symmetric-key algorithm for encryption.

Currently, key lengths of 128 bits are typical for symmetric-key algorithms. Computer
cryptography uses integers for keys. In some cases keys are randomly generated using
a random number generator (RNG) or pseudorandom number generator (PRNG). A
PRNG is a computer algorithm that produces data that appears random under analysis.
PRNGs that use system entropy to seed data generally produce better results, since
this makes the initial conditions of the PRNG much more difficult for an attacker to
guess. In other situations, the key is derived deterministically using a passphrase and a
key derivation function.
The simplest method of attacking encrypted data without knowing the key is
a brute force attack: simply attempting every possible key, up to the maximum length
of the key. It is therefore important to use a sufficiently long key; longer keys take
exponentially longer to attack, rendering a brute force attack impractical.

3.6.1 KEY LENGTH


Key length is directly related to security. In modern cryptosystems, key
length is measured in bits, and each additional bit of key increases the difficulty of a
brute-force attack exponentially. It is important to note that in addition to adding more
security, each bit also slows down the cryptosystem. Because of this, key length,
like most things in security, is a tradeoff.
Furthermore, different types of cryptosystems require vastly different key
lengths to maintain security. For instance, modulo-based public key systems such
as Diffie-Hellman and RSA require rather long keys, whereas symmetric systems,
both block and stream, are able to use shorter keys. Furthermore, elliptic curve public
key systems are capable of maintaining security at key lengths similar to those of
symmetric systems. While most block ciphers will only use one key length, most
public key systems can use any number of key lengths.
As an illustration of relying on different key lengths for the same level of
security, modern implementations of public key systems give the user a choice of key
lengths usually ranging between 768 and 4,096 bits. These implementations use the
public key system to encrypt a randomly generated block-cipher key which was used
to encrypt the actual message. Just as important as key length is information
entropy. Entropy, defined generally as "a measure of the disorder of a system", has a
similar meaning in this sense: if the bits of a key are not securely generated and
equally random, then the system is much more vulnerable to attack.
If a 128 bit key only has 64 bits of entropy, then the effective length of the key
is 64 bits. This can be seen in the DES algorithm. DES actually has a key length of 64
bits, however 8 bits are used for parity, therefore the effective key length is 56 bits.
3.6.2 SIGNIFICANCE

Keys are used to control the operation of a cipher so that only the correct key
can convert encrypted text to plaintext. Many ciphers are based on publicly
known algorithms or are open source, and so it is only the difficulty of obtaining the
key that determines the security of the system, provided that there is no analytic attack
and assuming that the key is not otherwise available. The widely accepted notion that
the security of the system should depend on the key alone was explicitly
formulated by Auguste Kerckhoffs and Claude Shannon; the statements are known
as Kerckhoffs' principle and Shannon's Maxim respectively.
A key should therefore be large enough that a brute force attack is infeasible,
that is, it would take too long to execute. Shannon's work on information theory showed that to
achieve so called perfect secrecy, the key length must be at least as large as the
message and only used once. In light of this, and the practical difficulty of managing
such long keys, modern cryptographic practice has discarded the notion of perfect
secrecy as a requirement for encryption, and instead focuses on computational
security, under which the computational requirements of breaking an encrypted text
must be infeasible for an attacker.
3.7 KEY GENERATION PROCESS
DES operates on 64-bit blocks using a key size of 56 bits. The keys are
actually stored as 64 bits long, but every 8th bit in the key is not used, i.e. bits
numbered 8, 16, 24, 32, 40, 48, 56, and 64. However, we will nevertheless number the
bits from 1 to 64, going left to right. But, as you will see, the eight bits just mentioned
get eliminated when we create the sub-keys.
The relevant 56 bits are subject to a permutation at the beginning, before any
round keys are generated. This is referred to as Permutation Choice 1 and is shown in
the appendix PERMUTED CHOICE – PC1.
At the beginning of each round, we divide the 56 relevant key bits into two 28-
bit halves; each half is thereafter treated separately and is circularly shifted to the left
by one or two bits, depending on the round.

Figure 3.4 Key Generation Block Diagram
For generating the round key, we join together the two halves and apply a 56-
bit to 48-bit contracting permutation to the joined bit pattern; this is referred to as
Permutation Choice 2 and is shown in the appendix PERMUTED CHOICE – PC2. The
resulting 48 bits constitute the round key. The contraction permutation shown in
Permutation Choice 2, along with the one-bit or two-bit rotation of the two key halves
in each round, is meant to ensure that each bit of the original encryption key is used in
roughly 14 of the 16 rounds.
The two halves of the encryption key generated in each round are fed as the
two halves going into the next round. The key schedule of left shifts, given in the
appendix, tells us how many positions to use for the left circular shift that is applied to
the two key halves at the beginning of each round.
Permutation Choice 1 tells us that the 0th bit of its output will be the 56th bit of the
input (in a 64-bit representation of the 56-bit encryption key), the 1st bit of the output
the 48th bit of the input, and so on, until finally we have for the 55th bit of the output
the 3rd bit of the input.
As with Permutation Choice 1, the permutation shown in the appendix as
PERMUTED CHOICE – PC2 is NOT a table, in the sense that the rows and the
columns do not carry any special and separate meanings. The permutation order for
the bits is given by reading the entries from the upper left corner to the lower right
corner. Since there are six rows with 8 positions in each row, the output consists of
48 bits.
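One round of this key schedule can be sketched in Verilog as follows. The module
and signal names, and the separate pc2 re-wiring module it instantiates, are
assumptions made for illustration rather than the exact code of this project.

// One key-schedule round: rotate each 28-bit half left by sc (1 or 2),
// then contract the joined 56 bits to a 48-bit round key through PC2.
module key_round #(parameter sc = 1) (
    input  wire [27:0] c_in, d_in,
    output wire [27:0] c_out, d_out,
    output wire [47:0] round_key
);
    assign c_out = (c_in << sc) | (c_in >> (28 - sc));
    assign d_out = (d_in << sc) | (d_in >> (28 - sc));

    wire [55:0] cd = {c_out, d_out};

    // pc2 is assumed to be a pure re-wiring module (56 bits in, 48 bits
    // out) built from the PERMUTED CHOICE - PC2 table in the appendix.
    pc2 u_pc2 (.din(cd), .dout(round_key));
endmodule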
The substitution step is very effective as far as diffusion is concerned. It has
been shown that if you change just one bit of the 64-bit input data block, on the
average that alters 34 bits of the ciphertext block.
The manner in which the round keys are generated from the encryption key is
also very effective as far as confusion is concerned. It has been shown that if you
change just one bit of the encryption key, on the average that changes 35 bits of the
ciphertext. Both effects mentioned above are referred to as the avalanche effect. And,
of course, the 56-bit encryption key means a key space of size 2^56 ≈ 7.2 × 10^16.
Assuming that, on average, you'd need to try half the keys in a brute-force
attack, a machine able to process 1000 keys per microsecond would need roughly 13
months to break the code. However, a parallel-processing machine trying one million
keys per microsecond would need only about 10 hours.
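Both figures can be verified with a quick calculation (half the key space,
2^55 ≈ 3.6 × 10^16 keys, is searched on average):
3.6 × 10^16 keys / 10^9 keys per second ≈ 3.6 × 10^7 seconds ≈ 13 months, and
3.6 × 10^16 keys / 10^12 keys per second ≈ 3.6 × 10^4 seconds ≈ 10 hours.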

3.8 TRIPLE DES KEYS


Triple DES simply extends the key size of DES by applying the algorithm
three times in succession with three different keys. The combined key size is thus 168
bits (3 times 56), beyond the reach of brute-force techniques such as those used by the
EFF DES Cracker. Triple DES has always been under some suspicion, as the original
algorithm was never designed to be used in this way, however, no serious flaws have
been discovered in its design, and it is today a viable and widely used cryptosystem
with usage in a number of Internet protocols.
The standards define three keying options:

 Keying option 1: All three keys are independent. Keying option 1 is the
strongest, with 3 × 56 = 168 independent key bits.
 Keying option 2: K1 and K2 are independent, and K3 = K1. Keying option 2
provides less security, with 2 × 56 = 112 key bits. This option is stronger than
simply DES encrypting twice, e.g. with K1 and K2, because it protects
against meet-in-the-middle attacks.
 Keying option 3: All three keys are identical, i.e. K1 = K2 = K3. Keying
option 3 is equivalent to DES, with only 56 key bits. This option provides
backward compatibility with DES, because the first and second DES
operations cancel out. It is no longer recommended by the National Institute
of Standards and Technology (NIST).

CHAPTER 4
TRIPLE DES
4.1 INTRODUCTION OF TRIPLE DES
Triple DES was created back when DES was getting a bit weaker than people
were comfortable with. As a result, they wanted an easy way to get more strength. In a
system dependent on DES, making a composite function out of multiple DES is likely
to be easier than bolting in a new cipher and sidesteps the political issue of arguing
that the new cipher is better than DES.
As it turns out, when you compose a cipher into a new one, you cannot simply use
double enciphering. There is a class of attacks called meet-in-the-middle attacks, in
which you encrypt from one end, decrypt from the other, and look for
collisions (things that give you the same answer). With sufficient memory, double
DES would only be twice as strong as the base cipher, or one bit more in strength.
The DES algorithm is popular and in wide use today because it is still
reasonably secure and fast. There is no practical analytic attack on DES; however,
because DES uses only a 56-bit key, an exhaustive search of 2^55 steps on average
can retrieve the key used in the encryption. For this reason, it is common practice to
protect critical data using something more powerful than DES.
A much more secure version of DES, called Triple-DES (TDES), is
essentially equivalent to using DES three times on the plaintext with three different keys.
It is considered much safer than the plain DES and like DES, TDES is a block cipher
operating on 64-bit data blocks. There are several forms, each of which use the DES
cipher three times. Some forms of TDES use two 56-bit keys, while others use three.
TDES can however work with one, two or three 56-bit keys. With one key TDES =
DES. The TDES can be implemented using three DES blocks in serial with some
combination logic or using three DES blocks in parallel. The parallel implementation
improves performance and reduces gate count.
Using standard DES encryption, TDES encrypts data three times and uses a
different key for at least one of the three passes. The DES "modes of operation" may
also be used with triple-DES. This 192-bit (24 characters) cipher uses three separate
64-bit keys and encrypts data using the DES algorithm three times. While anything
less than that can be considered reasonably secure, only the 192-bit (24 character)
encryption can provide true security. One variation takes a single 192-bit (24
character) key and then encrypts the data using the first 64 bits (eight characters),
decrypts the same data using the second 64 bits (eight characters), and encrypts the
same data again using the last 64 bits (eight characters).

Encryption can be further intensified with longer keys. Keys are usually 56
bits or 128 bits, with 56 bits generally considered the smallest size for sufficient
protection. For multinational organizations, this is a problem because the U.S. State
Department requires that exportable encryption technology use keys no longer than 40
bits. TDES has not been broken and hence its security has not been compromised.
4.2 TRIPLE DES WORKING

Figure 4.1 Triple DES Block Diagram


The main purpose behind the development of Triple DES was to address the
obvious flaws in DES without making an effort to design a whole new cryptosystem.
Triple DES simply extends the key size of DES by applying the algorithm three times
in succession with three different keys. The combined key size is thus 168 bits (3
times 56), beyond the reach of brute-force techniques such as those used by the EFF
DES Cracker. Triple DES has always been under some suspicion, as the original
algorithm was never designed to be used in this way, however, no serious flaws have
been discovered in its design, and it is today a viable and widely used cryptosystem
with usage in a number of Internet protocols.

The standards define three keying options:


Keying option 1: All three keys are independent. Keying option 1 is the
strongest, with 3 × 56 = 168 independent key bits.

Keying option 2: K1 and K2 are independent, and K3 = K1. Keying option 2
provides less security, with 2 × 56 = 112 key bits. This option is stronger than simply
DES encrypting twice, e.g. with K1 and K2, because it protects against meet-in-the-
middle attacks.
Keying option 3: All three keys are identical, i.e. K1 = K2 = K3. Keying
option 3 is equivalent to DES, with only 56 key bits. This option provides backward
compatibility with DES, because the first and second DES operations cancel out. It is
no longer recommended by the National Institute of Standards and Technology.

4.3 ALGORITHM
Triple DES uses a "key bundle" that comprises three DES keys, K1, K2 and K3,
each of 56 bits (excluding parity bits). The encryption algorithm is:
ciphertext = E_K3(D_K2(E_K1(plaintext)))
That is, DES-encrypt with K1, DES-decrypt with K2, then DES-encrypt with K3.
Decryption is the reverse:
plaintext = D_K1(E_K2(D_K3(ciphertext)))
That is, decrypt with K3, encrypt with K2, then decrypt with K1.
Each triple encryption encrypts one block of 64 bits of data.
In each case the middle operation is the reverse of the first and last. This
improves the strength of the algorithm when using keying option 2, and
provides backward compatibility with DES with keying option 3.
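Structurally, the E-D-E composition can be expressed by chaining three DES cores,
as in the minimal sketch below. The des module with a decrypt select input is an
assumption about the core's interface, not the exact interface of this project.

// Triple DES (EDE) encryption built from three DES cores. The "decrypt"
// input of each core selects between the encryption and decryption key schedules.
module tdes_encrypt (
    input  wire [63:0] plaintext,
    input  wire [63:0] k1, k2, k3,
    output wire [63:0] ciphertext
);
    wire [63:0] stage1, stage2;

    des u1 (.din(plaintext), .key(k1), .decrypt(1'b0), .dout(stage1));     // E with K1
    des u2 (.din(stage1),    .key(k2), .decrypt(1'b1), .dout(stage2));     // D with K2
    des u3 (.din(stage2),    .key(k3), .decrypt(1'b0), .dout(ciphertext)); // E with K3
endmodule

With keying option 3 (K1 = K2 = K3) the first two stages cancel and the output equals
single DES of the plaintext, which is exactly the backward-compatibility property
described above.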

4.4 APPLICATION
 Secure video teleconferencing.
 Electronic Payment Industry.
 Microsoft Outlook and Microsoft System Center Configuration
Manager use this algorithm to password-protect user content and system data.

CHAPTER 5
SOFTWARE TOOLS USED

5.1 INTRODUCTION HARDWARE DESCRIPTION LANGUAGES
Digital circuit design has evolved rapidly over the last 25 years. The earliest
digital circuits were designed with vacuum tubes and transistors. Integrated circuits
were then invented where logic gates were placed on a single chip. The first integrated
circuits (IC) were SSI (small scale integration) chips where the gate count was very
small. As technologies became sophisticated, designers were able to place circuits
with hundreds of gates on a chip. These chips were called MSI (Medium scale
integration) chips. With the advent of LSI (large scale integration), designers could
put thousands of gates on a single chip. At this point, design processes started getting
very complicated, and designers felt the need to automate these processes. Electronic
design automation (EDA) techniques began to evolve. Chip designers began to use
circuits and logic simulation techniques to verify the functionality of building blocks
of the order of about 100 transistors. The circuits were still tested on the breadboard,
and the layout was done on paper or by hand on a graphic computer terminal.
With the advent of VLSI (very large scale integration) technology, designers
could design single chips with more than 100,000 transistors. Because of the
complexity of these circuits, it was not possible to verify these circuits on a
breadboard. Computer aided techniques became critical for verification and design of
VLSI digital circuits. Computer programs to do automatic placement and routing of
circuit layouts also became popular. The designers were now building gate level
digital circuits manually on graphic terminals. They would build small building
blocks and then derive higher-level blocks from them. This process would continue
until they had built the top level block. Logic simulators came into existence to verify
the functionality of these circuits before they were fabricated on chip.
As designs got larger and more complex, logic simulation assumed an
important role in the design process. Designers could iron out functional bugs in the
architecture before the chip was designed further.

5.1.1 EMERGENCE OF HDLs


For a long time, programming languages such as FORTRAN, Pascal, and C
were used to describe computer programs that were sequential in nature.
Similarly, in the digital design field, designers felt the need for a standard language to

describe digital circuits. Thus, Hardware description languages (HDLs) came into
existence. HDLs allowed the designers to model the concurrency of processes found
in hardware elements. Hardware description languages such as Verilog HDL and
VHDL became popular. Verilog HDL originated in 1983 at Gateway Design
Automation. Later, VHDL was developed under contract from DARPA. Both
Verilog and VHDL simulators for simulating large digital circuits quickly gained
acceptance from designers.
Even though HDLs were popular for logic verification, designers had to
manually translate the HDL-based design into a schematic circuit with
interconnections between gates. The advent of logic synthesis in the late 1980s
changed the design methodology radically. Digital circuits could be described at a
register transfer level (RTL) by use of an HDL. Thus, the designer had to specify how
the data flows between registers and how the design processes the data. The details of
gates and their interconnections to implement the circuit were automatically extracted
by logic synthesis tools from the RTL description.
Thus, logic synthesis pushed the HDLs into the forefront of digital design.
Designers no longer had to manually place gates to build digital circuits. They could
describe complex circuits at an abstract level in terms of functionality and data flow
by designing those circuits in HDLs. Logic synthesis tools would implement the
specified functionality in terms of gate interconnections.
HDLs also began to be used for system-level design. HDLs were used for
simulation of system boards, interconnect buses, FPGAs (Field programmable gate
arrays), and PALs (programmable array logic). A common approach is to design each
IC chip using an HDL, and then verify system functionality via simulation. Today,
Verilog HDL is an accepted IEEE standard: the original standard, IEEE 1364-1995,
was approved in 1995, and IEEE 1364-2001 is the latest Verilog HDL standard,
which made significant improvements to the original standard.

5.2 TYPICAL DESIGN FLOW


A typical design flow for designing VLSI IC circuits is shown in Figure 5.1.
Unshaded blocks show the level of design representation; shaded blocks show
processes in the design flow.
The design flow shown in Figure 5.1 is typically used by designers who use HDLs.
In any design, specifications are written first. Specifications describe abstractly the
functionality, interface, and overall architecture of the digital circuit to be designed.
At this point, the architects do not need to think about how they will implement this
circuit. A behavioral description is then created to analyze the design in terms of
functionality, performance, compliance to standards and other high-level issues.
Behavioral descriptions are often written with HDLs.
The behavioral description is manually converted to an RTL description in an
HDL. The designer has to describe the data flow that will implement the described
digital circuit. From this point onward, the design process is done with the assistance
of EDA tools.
Logic synthesis tools convert the RTL description to a gate-level netlist. A
gate-level netlist is a description of the circuit in terms of gates and connections
between them. Logic synthesis tools ensure that the gate-level netlist meets timing,
area, and power specifications. The gate-level netlist is input to an automatic place
and route tool, which creates a layout. The layout is verified and then fabricated on a
chip.
Thus, most digital design activity is concentrated on manually optimizing the
RTL description of the circuit. After the RTL description is frozen, EDA tools are
available to assist the designer in further processes. Designing at the RTL level has
shrunk the design cycle times from years to a few months. It is also possible to do
many design iterations in a short period of time.
Behavioral synthesis tools have begun to emerge recently. These tools can
create RTL descriptions from a behavioral or algorithmic description of the circuit. As
these tools mature, designers will be able to implement an algorithm in an HDL at
a very abstract level, and EDA tools will help them convert the behavioral
description to a final IC chip.
It is important to note that, although EDA tools are available to automate the
processes and cut design cycle times, the designer is still the person who controls how
the tool will perform. EDA tools are also susceptible to the "GIGO: garbage in,
garbage out” phenomenon. If used improperly, EDA tools will lead to inefficient
designs. Thus, the designer still needs to understand the nuances of design
methodologies, using EDA tools to obtain an optimized design.

Figure 5.1 Typical design flow of HDLs.

5.3 IMPORTANCE OF HDLs

HDLs have many advantages compared to traditional schematic based design:

 Design can be described at a very abstract level by use of HDLs.
Designers can write their RTL description without choosing a specific fabrication
technology. Logic synthesis tools can automatically convert the design to any
fabrication technology. If a new technology emerges, designers do not need to
redesign their circuit. They simply input the RTL description to the logic synthesis
tool and create a new gate-level netlist, using the new fabrication technology. The
logic synthesis tool will optimize the circuit in area and timing for the new
technology.
 By describing designs in HDLs, functional verification of the design
can be done early in the design cycle. Since designers work at the RTL level, they can
optimize and modify the RTL description until it meets the desired functionality. Most
design bugs are eliminated at this point, so the probability of hitting a functional bug
at a later time in the gate-level netlist or physical layout is minimized.
 Designing with HDLs is analogous to computer programming. A
textual description with comments is an easier way to develop and debug circuits. This
also provides a concise representation of the design compared to gate-level schematics.
Gate-level schematics are almost incomprehensible for very complex designs.
HDL-based design is here to stay. With rapidly increasing complexities of
digital circuits and increasingly sophisticated EDA tools, HDLs are now the dominant
method for large digital designs. No digital circuit designer can afford to ignore
HDL-based design.

5.4 TRENDS IN HDLs


The speed and complexity of digital circuits have increased rapidly. Designers
have responded by designing at higher levels of abstraction. Designers have to think
only in terms of functionality. EDA tools take care of the implementation details. With
designer assistance, EDA tools have become sophisticated enough to achieve a
close-to-optimum implementation.
The most popular trend currently is to design in HDL at an RTL level, because
logic synthesis tools can create gate-level netlist from RTL level design. Behavioral
logic synthesis allowed engineers to design directly in terms of algorithms and the
behavior of the circuit, and then use EDA tools to do the translation and optimization
in each phase of the design. However, behavioral synthesis did not gain widespread

acceptance. Today, RTL design continues to be very popular. Verilog HDL is also
being constantly enhanced to meet the needs of new verification methodologies.
Formal verification and assertion checking techniques have emerged. Formal
verification applies formal mathematical techniques to verify the correctness of
verilog HDL descriptions and to establish equivalency between RTL and gate-level
netlists. However, the need to describe a design in verilog HDL will not go away.
Assertion checkers allow checking to be embedded in the RTL code. This is a
convenient way to do checking in the most important parts of a design.
New verification languages have also gained rapid acceptance. These
languages combine the parallelism and hardware construct from HDLs with the object
oriented nature of C++. These also provide support for automatic stimulus creation,
checking, and coverage. However, these languages do not replace verilog HDL. They
simply boost the productivity of the verification process. Verilog HDL is still needed
to describe the design.
For very high speed and timing circuits like microprocessors, the gate-level
netlist provided by logic synthesis tools is not optimal. In such cases, designers often
mix gate-level description directly into the RTL description to achieve optimum
results. This practice is opposite to the high-level design paradigm, yet it is frequently
used for high-speed designs because designers need to squeeze the last bit of timing
out of circuits, and EDA tools sometimes prove to be insufficient to achieve the
desired results.
Another technique that is used for system-level design is a mixed bottom-up
methodology where the designers use either existing verilog HDL modules, basic
building blocks, or vendor-supplied core blocks to quickly bring up their system
simulation. This is done to reduce development costs and compress design schedules.
For example, consider a system that has a CPU, graphics chip, I/O chip, and a system
bus. The CPU designers would build the next-generation CPU themselves at an RTL
level, but they would use behavioral models for the graphics chip and the I/O chip and
would buy a vendor supplied model for the system bus.

5.5 VERILOG HDL


5.5.1 OVERVIEW

Hardware description languages such as Verilog differ from
software programming languages because they include ways of describing the
propagation of time and signal dependencies (sensitivity). There are two assignment
operators, a blocking assignment (=), and a non-blocking (<=) assignment. The non-
blocking assignment allows designers to describe a state-machine update without
needing to declare and use temporary storage variables (in any general programming
language we need to define some temporary storage spaces for the operands to be
operated on subsequently; those are temporary storage variables). Since these
concepts are part of Verilog's language semantics, designers could quickly write
descriptions of large circuits in a relatively compact and concise form. At the time of
Verilog's introduction (1984), Verilog represented a tremendous productivity
improvement for circuit designers who were already using graphical schematic
capture software and specially-written software programs to document and simulate
electronic circuits.
The designers of Verilog wanted a language with syntax similar to the C
programming language, which was already widely used in engineering software
development. Verilog is case-sensitive, has a basic preprocessor (though less
sophisticated than that of ANSI C/C++), and equivalent control
flow keywords (if/else, for, while, case, etc.), and compatible operator precedence.
Syntactic differences include variable declaration (Verilog requires bit-widths on
net/reg types), demarcation of procedural blocks (begin/end instead of curly braces
{}), and many other minor differences.
A Verilog design consists of a hierarchy of modules. Modules
encapsulate design hierarchy, and communicate with other modules through a set of
declared input, output, and bidirectional ports. Internally, a module can contain any
combination of the following: net/variable declarations (wire, reg, integer, etc.),
concurrent and sequential statement blocks, and instances of other modules (sub-
hierarchies). Sequential statements are placed inside a begin/end block and executed
in sequential order within the block. But the blocks themselves are executed
concurrently, qualifying Verilog as a dataflow language.
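A small illustrative fragment (not taken from this project) shows these constructs
together: declared ports, a wire carrying combinational logic, and a non-blocking
assignment updating a register inside a clocked begin/end block.

// A 4-bit accumulator: combinational sum on a wire, state update with a
// non-blocking assignment inside a clocked always block.
module acc4 (
    input  wire       clk,
    input  wire       rst,
    input  wire [3:0] din,
    output reg  [3:0] q
);
    wire [3:0] sum = q + din;       // continuous (dataflow) assignment

    always @(posedge clk) begin
        if (rst)
            q <= 4'd0;              // non-blocking assignment
        else
            q <= sum;
    end
endmodule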
Verilog's concept of 'wire' consists of both signal values (4-state: "1, 0,
floating, undefined") and strengths (strong, weak, etc.). This system allows abstract
modeling of shared signal lines, where multiple sources drive a common net. When a

wire has multiple drivers, the wire's (readable) value is resolved by a function of the
source drivers and their strengths.
A subset of statements in the Verilog language is synthesizable. Verilog
modules that conform to a synthesizable coding style, known as RTL (register-transfer
level), can be physically realized by synthesis software. Synthesis software
algorithmically transforms the (abstract) Verilog source into a net list, a logically
equivalent description consisting only of elementary logic primitives (AND, OR,
NOT, flip-flops, etc.) that are available in a specific FPGA or VLSI technology.
Further manipulations to the net list ultimately lead to a circuit fabrication blueprint
(such as a photo mask set for an ASIC or a bit stream file for an FPGA).

5.5.2 HISTORY

Beginning
Verilog was the first modern hardware description language to be invented. It
was created by Phil Moorby and Prabhu Goel during the winter of 1983/1984 at
Automated Integrated Design Systems (later renamed
Gateway Design Automation in 1985) as a hardware modeling language. Gateway
Design Automation was purchased by Cadence Design Systems in 1990. Cadence
now has full proprietary rights to Gateway's Verilog and the Verilog-XL, the HDL-
simulator that would become the de-facto standard (of Verilog logic simulators) for
the next decade. Originally, Verilog was intended to describe and allow simulation;
only afterwards was support for synthesis added.
Verilog 2001
Extensions to Verilog-95 were submitted back to IEEE to cover the
deficiencies that users had found in the original Verilog standard. These extensions
became IEEE Standard 1364-2001 known as Verilog-2001.
Verilog-2001 is a significant upgrade from Verilog-95. First, it adds explicit
support for (2's complement) signed nets and variables. Previously, code authors had
to perform signed operations using awkward bit-level manipulations (for example, the
carry-out bit of a simple 8-bit addition required an explicit description of the Boolean
algebra to determine its correct value). The same function under Verilog-2001 can be
more succinctly described by one of the built-in operators: +, -, /, *, >>>. A

generate/end generate construct (similar to VHDL's generate/end generate) allows
Verilog-2001 to control instance and statement instantiation through normal decision
operators (case/if/else). Using generate/end generate, Verilog-2001 can instantiate an
array of instances, with control over the connectivity of the individual instances. File
I/O has been improved by several new system tasks. And finally, a few syntax
additions were introduced to improve code readability (e.g. always @*, named
parameter override, C-style function/task/module header declaration).
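A minimal Verilog-2001 fragment (illustrative only, not project code) showing signed
arithmetic and a generate-for loop:

// Verilog-2001 features: signed nets and a generate-for loop that
// instantiates an array of identical sub-blocks.
module v2001_demo (
    input  wire signed [7:0] a, b,
    output wire signed [8:0] sum,
    input  wire [3:0] din,
    output wire [3:0] dout
);
    assign sum = a + b;                 // signed addition; the 9-bit result avoids overflow

    genvar i;
    generate
        for (i = 0; i < 4; i = i + 1) begin : inv_array
            assign dout[i] = ~din[i];   // one instance of the repeated logic
        end
    endgenerate
endmodule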
Verilog-2001 is the dominant flavor of Verilog supported by the majority of
commercial EDA software packages.

Verilog 2005
Not to be confused with System Verilog, Verilog 2005 (IEEE Standard 1364-
2005) consists of minor corrections, spec clarifications, and a few new language
features (such as the uwire keyword).
A separate part of the Verilog standard, Verilog-AMS, attempts to integrate
analog and mixed signal modeling with traditional Verilog.
System Verilog
System Verilog is a superset of Verilog-2005, with many new features and
capabilities to aid design verification and design modeling. As of 2009, the System
Verilog and Verilog language standards were merged into System Verilog 2009 (IEEE
Standard 1800-2009).
The advent of hardware verification languages such as OpenVera,
and Verisity's e language encouraged the development of Superlog by Co-Design
Automation Inc. Co-Design Automation Inc was later purchased by Synopsys. The
foundations of Superlog and Vera were donated to Accellera, which later became the
IEEE standard P1800-2005: System Verilog.
In the late 1990s, the Verilog Hardware Description Language (HDL) became
the most widely used language for describing hardware for simulation and synthesis.
However, the first two versions standardized by the IEEE (1364-1995 and 1364-2001)
had only simple constructs for creating tests. As design sizes outgrew the verification
capabilities of the language, commercial Hardware Verification Languages (HVL)
such as OpenVera and e were created. Companies that did not want to pay for these
tools instead spent hundreds of man-years creating their own custom tools. This

productivity crisis (along with a similar one on the design side) led to the creation of
Accellera, a consortium of EDA companies and users who wanted to create the next
generation of Verilog. The donation of the OpenVera language formed the basis for
the HVL features of System Verilog. Accellera's goal was met in November 2005 with
the adoption of the IEEE standard P1800-2005 for System Verilog, IEEE (2005).
The most valuable benefit of System Verilog is that it allows the user to
construct reliable, repeatable verification environments, in a consistent syntax, that
can be used across multiple projects.
Some of the typical features of an HVL that distinguish it from a Hardware
Description Language such as Verilog or VHDL are
 Constrained-random stimulus generation
 Functional coverage
 Higher-level structures, especially Object Oriented Programming
 Multi-threading and interprocess communication
 Support for HDL types such as Verilog’s 4-state values
 Tight integration with event-simulator for control of the design
There are many other useful features, but these allow you to create test
benches at a higher level of abstraction than you are able to achieve with an HDL or a
programming language such as C.
System Verilog provides the best framework to achieve coverage-driven
verification (CDV). CDV combines automatic test generation, self-checking
testbenches and coverage metrics to significantly reduce the time spent verifying a
design. The purpose of CDV is to:
 Eliminate the effort and time spent creating hundreds of tests.
 Ensure thorough verification using up-front goal setting.
 Receive early error notifications and deploy run-time checking and
error analysis to simplify debugging.

5.5.3 POPULARITY OF VERILOG HDL


Verilog HDL has evolved as a standard hardware description language. Verilog
HDL offers many useful features for hardware design.

 Verilog HDL is a general-purpose hardware description language that
is easy to learn and easy to use. It is similar in syntax to the C programming language.
Designers with C programming experience will find it easy to learn verilog HDL.
 Verilog HDL allows different levels of abstraction to be mixed in the
same model. Thus, a designer can define a hardware model in terms of switches,
gates, RTL, or behavioral code. Also, a designer needs to learn only one language for
stimulus and hierarchical design.
 Most popular logic synthesis tools support verilog HDL. This makes it
the language of choice for designers.
 All fabrication vendors provide verilog HDL libraries for post logic
synthesis simulation. Thus designing a chip in verilog HDL allows the widest choice
of vendors.
 The programming language interface (PLI) is a powerful feature that
allows the user to write custom C code to interface with the internal data structures of
Verilog. Designers can customize a Verilog HDL simulator to their needs with the PLI.

5.6 XILINX
Xilinx Tools is a suite of software tools used for the design of digital circuits
implemented using Xilinx Field Programmable Gate Array (FPGA) or Complex
Programmable Logic Device (CPLD). The design procedure consists of (a) design
entry, (b) synthesis and implementation of the design, (c) functional simulation and
(d) testing and verification. Digital designs can be entered in various ways using the
above CAD tools: using a schematic entry tool, using a hardware description language
(HDL) such as Verilog or VHDL, or a combination of both. In this project we only use
the design flow that involves the use of Verilog HDL.
The CAD tools enable you to design combinational and sequential circuits
starting with Verilog HDL design specifications. The steps of this design procedure
are listed below:
1. Create Verilog design input file(s) using template driven editor.
2. Compile and implement the Verilog design file(s).
3. Create the test-vectors and simulate the design (functional simulation)
without using a PLD.
4. Assign input/output pins to implement the design on a target device.
5. Download bit stream to an FPGA or CPLD device.

6. Test design on FPGA/CPLD device
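As an illustration of step 4 above, pin assignments are written in a user constraints
file (UCF); the pin locations shown here are placeholders, not the actual pins of the
target board.

# Example .ucf fragment (hypothetical pin locations)
NET "clk"     LOC = "A8";
NET "rst"     LOC = "B7";
NET "din<0>"  LOC = "C6";
NET "dout<0>" LOC = "D5";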

5.6.1 Migrating Projects:


When you open a project file from a previous release, the ISE® software
prompts you to migrate your project. If you click Backup and Migrate or Migrate
Only, the software automatically converts your project file to the current release. If
you click Cancel, the software does not convert your project and, instead, opens
Project Navigator with no project loaded.
To Migrate a Project
1. In the ISE 12 Project Navigator, select File > Open Project.
2. In the Open Project dialog box, select the .xise file to migrate.
3. In the dialog box that appears, select Backup and Migrate or Migrate
Only.
4. The ISE software automatically converts your project to an ISE 12
project.
5. Implement the design using the new version of the software.

Obsolete Source File Types:


The ISE 12 software supports all of the source types that were supported in the
ISE 11 software.
If you are working with projects from previous releases, state diagram source
files (.dia), ABEL source files (.abl), and test bench waveform source files (.tbw) are
no longer supported. For state diagram and ABEL source files, the software finds an
associated HDL file and adds it to the project, if possible. For test bench waveform
files, the software automatically converts the TBW file to an HDL test bench and adds
it to the project. To convert a TBW file after project migration, see Converting a TBW
File to an HDL Test Bench.

5.6.2 CREATING A PROJECT


Project Navigator allows you to manage your FPGA and CPLD designs using
an ISE® project, which contains all the source files and settings specific to your
design. First, you must create a project and then, add source files, and set process
properties. After you create a project, you can run processes to implement, constrain,

and analyze your design. Project Navigator provides a wizard to help you create a
project as follows.

To Create a Project
1. Select File > New Project to launch the New Project Wizard.
2. In the Create New Project page, set the name, location, and project
type, and click Next.
3. For EDIF or NGC/NGO projects only: In the Import EDIF/NGC
Project page, select the input and constraint file for the project, and click Next.
4. In the Project Settings page, set the device and project properties, and
click Next.
5. In the Project Summary page, review the information, and click Finish
to create the project
Project Navigator creates the project file (project_name.xise) in the directory
you specified. After you add source files to the project, the files appear in the
Hierarchy pane of the Design panel.
Project Navigator manages your project based on the design properties (top-
level module type, device type, synthesis tool, and language) you selected when you
created the project. It organizes all the parts of your design and keeps track of the
processes necessary to move the design from design entry through implementation to
programming the targeted Xilinx® device.

You can now perform any of the following:

 Create new source files for your project.

 Add existing source files to your project.

 Run processes on your source files.


 Modify process properties.

5.6.3 SOFTWARE TOOL USED

 ISE XILINX 13.2 SIMULATOR

XILINX SIMULATOR
The Xilinx synthesis tool converts a schematic/HDL design entry into
functionally equivalent logic gates on a Xilinx FPGA, with optimized speed and area.
So, after specifying the behavioural description in HDL, the designer merely has to
select the library and specify the optimization criteria; the Xilinx synthesis tool then
determines the netlist to meet the specification,
which is converted into a bit file to be loaded onto the FPGA PROM. Also, the
Xilinx tool generates a post-process simulation model after every implementation
step, which is used to functionally verify the generated netlist after processes like
Map and Place & Route.

CHAPTER 6

RESULTS AND DISCUSSION

6.1 ANALYSIS OF THE IMPLEMENTATION


The DES encryption algorithm and its implementation were discussed in the
previous chapters. This chapter deals with the simulation and synthesis results of
the implemented DES algorithm. Here the Xilinx tool is used to simulate the
design and check its functionality. Once the functional verification is
done, the design is taken to the Xilinx tool for the synthesis process and the netlist
generation.
Appropriate test cases have been identified in order to test the modeled
DES encryption algorithm. With the identified values as the reference, the plain
text and the key of 64 bits are given as inputs to the design, and the obtained
cipher text should match the reference result. This shows that the modeled design
works properly as per the algorithm.
A test bench is developed in order to test the modeled design. The
test bench automatically forces the inputs, which are taken from the
reference, and drives the algorithm through its operations. The simulated
waveforms for the various cases are discussed in this section.
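A minimal self-checking test bench along these lines is sketched below. The module
name, port names and the fixed wait time are assumptions about the design's
interface; the test vector itself is the known reference pair used in Case 1 (plaintext
0123456789ABCDEF, key 133457799BBCDFF1, expected cipher text 85E813540F0AB405).

`timescale 1ns / 1ps
// Self-checking test bench sketch for the DES encryption block.
module des_tb;
    reg         clk = 1'b0;
    reg         rst;
    reg  [63:0] plaintext, key;
    wire [63:0] ciphertext;

    // Device under test (interface assumed)
    des_encrypt dut (
        .clk        (clk),
        .rst        (rst),
        .plaintext  (plaintext),
        .key        (key),
        .ciphertext (ciphertext)
    );

    always #5 clk = ~clk;                    // 10 ns clock period

    initial begin
        rst       = 1'b1;
        plaintext = 64'h0123456789ABCDEF;    // reference plain text
        key       = 64'h133457799BBCDFF1;    // reference key
        #20 rst   = 1'b0;

        #300;                                // assumed time for the 16 rounds to finish
        if (ciphertext == 64'h85E813540F0AB405)
            $display("PASS: ciphertext = %h", ciphertext);
        else
            $display("FAIL: ciphertext = %h", ciphertext);
        $finish;
    end
endmodule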
CASE-1:

Figure 6.1 Simulation Result of Set-1 (reset set to LOW, clock running; the plaintext,
key and ciphertext signals are shown in the waveform)

The DES algorithm is implemented using Verilog HDL code. The design is
analyzed step by step to check the results at each stage of the algorithm. For this, a
test case is analyzed with input data = 0123456789ABCDEFH and encryption
key = 133457799BBCDFF1H. The corresponding calculations for each step were done
by hand and the simulation results were verified against them. The details of the
analysis are provided below.

Initial Permutation
0123456789ABCDEFH → CC00CCFFF0AAF0AAH
Permutation Choice 1
133457799BBCDFF1H → F0CCAAF556678FH
Round 1 Inputs, Operation and Outputs
Inputs: Li = CC00CCFFH, Ri = F0AAF0AAH, Ci = F0CCAAFH, Di = 556678FH
Left Circular Shift (for C)
F0CCAAFH → E19955FH
Left Circular Shift (for D)
556678FH → AACCF1EH
Permutation Choice 2
E19955FAACCF1EH → 1B02EFFC7072H (round key)
Expansion Permutation
F0AAF0AAH → 7A15557A1555H
XOR 1
7A15557A1555H XOR 1B02EFFC7072H = 6117BA866527H
Substitution Box
6117BA866527H → 5C82B597H
Permutation Function
5C82B597H → 234AA9BBH
XOR 2
234AA9BBH XOR CC00CCFFH = EF4A6544H
Outputs: Lo = F0AAF0AAH, Ro = EF4A6544H, Co = E19955FH, Do = AACCF1EH

The outputs of round 1 serve as the inputs to round 2 at the respective ports.
The serial operation of the 16 rounds provides the final outputs Lo = 43423234H,
Ro = 0A4CD995H, Co = F0CCAAFH and Do = 556678FH. As expected, the values of C
and D return to their initial values after the completion of the 16 rounds. These outputs
are brought to the top level of the design and inverse permuted to get the final result.
Inverse Initial Permutation
0A4CD99543423234H → 85E813540F0AB405H

So the cipher text is obtained. Three other test cases with known results were
analyzed and the final results were found to match.
CASE-2:

Figure 6.2 Simulation Result of Set-2 (plaintext input, encryption key and encrypted
data signals are shown in the waveform)
In test case 2, the set of inputs is taken from the reference as follows.
Input = 68 65 20 74 69 6D 65 20
Cipher Key = 01 23 45 67 89 AB CD EF
The above inputs are represented in hexadecimal format and contain
8 bytes, that is, 64 bits. The output of the encryption process, that is, the cipher
text for the given set of inputs, is obtained as follows.
Cipher Text = 6A 27 17 87 AB 88 83 F9
Thus the simulation result shown in figure 6.2 gives a clear view of the DES
operation explained above.

6.2 Device utilization summary of DES:


This device utilization includes the following.
 Logic Utilization
 Logic Distribution
 Total Gate count for the Design

Figure 6.3 Summary of DES Output

The device utilization summary shown above gives the number of device
resources used out of those available, also expressed as a percentage.
It is produced as a result of the synthesis process for the selected device
and package.

6.3 DES RTL Schematic


The RTL (register transfer level) schematic can be viewed as a black box after
synthesis of the design. It shows the inputs and outputs of the system. By
double-clicking on the diagram we can see the gates, flip-flops and multiplexers inside.

Figure 6.4 DES RTL Schematic

The above figure 6.4 shows the top level block diagram that contains the
primary inputs and outputs of the design.

6.4 SIMULATION RESULT OF DES:

Figure 6.5 Simulation Result of DES
Timing Summary:
Speed Grade: -4
Minimum period: 3.545ns (Maximum Frequency: 282.048MHz)
Minimum input arrival time before clock: 2.276ns
Maximum output required time after clock: 5.446ns
Maximum combinational path delay: No path found
In the timing summary, the reported period and frequency are approximate
values from synthesis; the exact timing is known only after place and route is
complete. The maximum operating frequency of the synthesized design is
282.048 MHz and the minimum period is 3.545 ns. OFFSET IN is the minimum
input arrival time before the clock and OFFSET OUT is the maximum output
required time after the clock.
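The reported frequency follows directly from the minimum period (a simple check):
f_max = 1 / T_min = 1 / 3.545 ns ≈ 282 MHz.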

6.5 Triple DES Device utilization summary:

Figure 6.6 Summary of Triple DES Output

6.6 Triple DES RTL Schematic

Figure 6.7 Triple DES RTL Schematic

The RTL (register transfer level) schematic can be viewed as a black box after
synthesis of the design. It has three different DES keys, K1, K2 and K3, together with
the data input DESIN. This means that the actual 3TDES key has a length of
3 × 56 = 168 bits.

6.7 SIMULATION RESULT OF TRIPLE DES:

Figure 6.8 Simulation Result of Triple DES

CHAPTER 7
CONCLUSION AND FUTURE SCOPE
7.1 CONCLUSION

We successfully implemented the DES algorithm and thereby studied one of
the encryption standards available in the market. This is a 64-bit key-dependent
algorithm which operates on 64-bit input data or plaintext. The original
message is taken through 16 round operations, which produce the ciphertext. Given
the same input key and data (plaintext or ciphertext), any implementation that produces
the same output (ciphertext or plaintext) as the algorithm specified in this standard is
an acceptable implementation of the DES. The coding for the DES algorithm was done
in Verilog HDL.
DES is a symmetric Feistel block cipher, and has a high avalanche effect
(the number of bits changing in the cipher code with a change of 1 bit in the input
data) and completeness effect (each output bit is computed as a complex function of
all input data and key bits). It also addresses the cryptographic goals of confidentiality,
data integrity, authentication and non-repudiation. The plain text of 64 bits is given as
input to the encryption block, in which the encryption of the data is performed and the
cipher text of 64 bits is brought out as output. The key of 64 bits is used in the process
of encryption, in which the effective key length is 56 bits and the remaining 8 bits are
used as parity checking bits.

7.2 FUTURE SCOPE


This work on the DES Encryption Algorithm of 64 bits can be
extended in the future.
 Pipelining and parallelization can provide further performance
improvements.

BIBLIOGRAPHY

REFERENCE BOOKS:

1. Eli Biham and Adi Shamir, Differential Cryptanalysis of the Data Encryption
Standard.
2. Matt Curtin, Brute Force: Cracking the Data Encryption Standard.
3. Block Ciphers: Data Encryption Standard, Blowfish, Triple DES, Advanced
Encryption Standard, International Data Encryption Algorithm.

WEBSITES:
https://en.wikipedia.org/wiki/Data_Encryption_Standard

http://www.tutorialspoint.com/cryptography/data_encryption_standard.htm

http://www.tutorialspoint.com/cryptography/triple_des.htm
https://en.wikipedia.org/wiki/Triple_DES
http://searchsecurity.techtarget.com/definition/Data-Encryption-Standard

APPENDIX
STANDARD TABLES FOR DES ALGORITHM
INITIAL AND INVERSE PERMUTATION TABLES:

EXPANSION PERMUTATION AND PERMUTATION
TABLES:

KEY SCHEDULE OF LEFT SHIFTS:

S-BOXES (SUBSTITUTION BOXES) TABLES:

PERMUTED CHOICE – PC1& PC2 TABLES:
