Compiler Design Unit 1 Notes
o A compiler is a translator that converts a high-level language into machine language.
o The high-level language is written by a developer, while machine language can be understood by the processor.
o The compiler also reports errors to the programmer.
o The main purpose of a compiler is to translate code written in one language into another language without changing the meaning of the program.
o When you execute a program written in a high-level language (HLL), the execution happens in two parts.
o In the first part, the source program is compiled and translated into the object program (low-level language).
o In the second part, the object program is translated into the target program by the assembler.
Compiler Phases
The compilation process consists of a sequence of phases. Each phase takes the source program in one representation and produces output in another representation. Each phase takes its input from the output of the previous phase.
Lexical Analysis:
The lexical analysis phase is the first phase of the compilation process. It takes source code as input, reads the source program one character at a time, and groups the characters into meaningful lexemes. The lexical analyzer represents these lexemes in the form of tokens.
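For example, for a statement such as int x = 10; the lexical analyzer would group the characters into lexemes and emit tokens such as keyword(int), identifier(x), operator(=), constant(10) and symbol(;).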
Syntax Analysis
Syntax analysis is the second phase of the compilation process. It takes tokens as input and generates a parse tree as output. In this phase, the parser checks whether the expression formed by the tokens is syntactically correct.
Semantic Analysis
Semantic analysis is the third phase of the compilation process. It checks whether the parse tree follows the rules of the language. The semantic analyzer keeps track of identifiers, their types and expressions. The output of the semantic analysis phase is the annotated syntax tree.
Intermediate Code Generation
In the intermediate code generation phase, the compiler translates the source code into intermediate code. The intermediate code lies between the high-level language and the machine language, and it should be generated in such a way that it can be easily translated into the target machine code.
Code Optimization
Code optimization is an optional phase. It is used to improve the intermediate code so that the
output of the program could run faster and take less space. It removes the unnecessary lines of
the code and arranges the sequence of statements in order to speed up the program execution.
Code Generation
Code generation is the final stage of the compilation process. It takes the optimized intermediate
code as input and maps it to the target machine language. Code generator translates the
intermediate code into the machine code of the specified computer.
Example:
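As an illustration (the statement and the names id1, id2, id3, t1 and t2 below are chosen here for the example), consider how the phases process the assignment position = initial + rate * 60:
o Lexical analysis groups the characters into lexemes and produces the token stream id1 = id2 + id3 * 60.
o Syntax analysis builds a parse tree in which the multiplication id3 * 60 is nested below the addition.
o Semantic analysis checks the types; if the identifiers are floating point, the integer 60 is converted to 60.0.
o Intermediate code generation emits three-address code such as:
   t1 = id3 * 60.0
   t2 = id2 + t1
   id1 = t2
o Code optimization removes the unnecessary temporary:
   t1 = id3 * 60.0
   id1 = id2 + t1
o Code generation maps this to the instructions of the target machine, for example load, multiply, add and store instructions that use registers.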
Compiler Passes
A pass is a complete traversal of the source program. A compiler may traverse the source program in a single pass or in multiple passes.
Multi-pass Compiler
o A multi-pass compiler processes the source code of a program several times.
o In the first pass, the compiler reads the source program, scans it, extracts the tokens and stores the result in an output file.
o In the second pass, the compiler reads the output file produced by the first pass, builds the syntax tree and performs the syntactic analysis. The output of this phase is a file that contains the syntax tree.
o In the third pass, the compiler reads the output file produced by the second pass and checks whether the tree follows the rules of the language. The output of the semantic analysis phase is the annotated syntax tree.
o Further passes continue in this way until the target output is produced.
One-pass Compiler
o A one-pass compiler traverses the program only once. It passes only once through the parts of each compilation unit and translates each part into its final machine code.
o In a one-pass compiler, as each line of source code is processed, it is scanned and its tokens are extracted.
o Then the syntax of the line is analyzed and the tree structure is built. After the semantic checks, the code is generated.
o The same process is repeated for each line of code until the entire program is compiled.
Bootstrapping
o Bootstrapping is widely used in compiler development.
o Bootstrapping is used to produce a self-hosting compiler. A self-hosting compiler is a compiler that can compile its own source code.
o A bootstrap compiler is used to compile the compiler itself; you can then use this compiled compiler to compile everything else, as well as future versions of itself.
A compiler is characterized by three languages:
1. Source Language
2. Target Language
3. Implementation Language
The bootstrapping process proceeds as follows:
1. Create a compiler SCAA for a subset S of the desired language L, written in language A; this compiler runs on machine A.
2. Create a compiler LCSA for the full language L, written in the subset S, that produces code for machine A.
3. Compile LCSA using the compiler SCAA to obtain LCAA. LCAA is a compiler for language L, which runs on machine A and produces code for machine A.
Finite Automata
o A finite automaton takes a string of symbols as input and changes its state accordingly. When a desired symbol is found in the input, the transition occurs.
o During a transition, the automaton can either move to the next state or stay in the same state.
o An FA has two kinds of states: accept states and reject states. When the input string has been processed completely and the automaton is in a final (accept) state, the string is accepted.
Formally, an FA is described by a five-tuple (Q, Σ, δ, q0, F), where the transition function is δ: Q × Σ → Q.
DFA
DFA stands for Deterministic Finite Automata. Deterministic refers to the uniqueness of the computation: for each state and input character, the DFA moves to exactly one state. A DFA does not allow the null (ε) move, which means it cannot change state without consuming an input character.
Example
2. ∑ = {0, 1}
3. q0 = {q0}
4. F = {q3}
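Because the transition diagram for this example is not reproduced here, the sketch below uses a hypothetical transition table for states q0..q3 over {0, 1} with F = {q3}; it only illustrates how a DFA is simulated in C, making one deterministic move per input character.

#include <stdio.h>
#include <string.h>

/* Hypothetical DFA transition table: delta[state][symbol], symbols 0 and 1. */
int delta[4][2] = {
    {1, 0},   /* q0: on 0 go to q1, on 1 stay in q0 */
    {2, 0},   /* q1: on 0 go to q2, on 1 go to q0   */
    {2, 3},   /* q2: on 0 stay in q2, on 1 go to q3 */
    {3, 3}    /* q3: accepting, stays in q3         */
};

int main(void)
{
    const char *input = "0011";          /* sample input string */
    int state = 0;                       /* start state q0      */
    for (size_t i = 0; i < strlen(input); i++)
        state = delta[state][input[i] - '0'];   /* one move per symbol */
    printf("%s\n", state == 3 ? "accepted" : "rejected");   /* F = {q3} */
    return 0;
}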
NDFA
NDFA stands for Non-Deterministic Finite Automata. For a particular input, it can move to any number of states. NDFA also allows the NULL (ε) move, which means it can change state without reading a symbol.
An NDFA is also defined by a five-tuple, like the DFA, but its transition function is different:
δ: Q × Σ → 2^Q
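For example, an NDFA may have δ(q0, 0) = {q0, q1}, meaning that on reading 0 in state q0 it can either stay in q0 or move to q1.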
Example
2. ∑ = {0, 1}
3. q0 = {q0}
4. F = {q3}
Regular expression
o A regular expression is a pattern (a sequence of characters and operators) that defines a set of strings. It is used to denote regular languages.
o It is also used to match character combinations in strings. String searching algorithms use such patterns to find and operate on matching parts of strings.
o In a regular expression, x* means zero or more occurrences of x. It can generate {ε, x, xx, xxx, xxxx, .....}
o In a regular expression, x+ means one or more occurrences of x. It can generate {x, xx, xxx, xxxx, .....}
Union: If L and M are two regular languages, then their union L ∪ M is also a regular language.
1. L ∪ M = {s | s is in L or s is in M}
Intersection: If L and M are two regular languages, then their intersection L ⋂ M is also a regular language.
1. L ⋂ M = {s | s is in L and s is in M}
Kleene closure: If L is a regular language, then its Kleene closure L* is also a regular language.
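For instance, if L = {a, ab} and M = {ab, b}, then L ∪ M = {a, ab, b}, L ⋂ M = {ab}, and L* = {ε, a, ab, aa, aab, aba, abab, ...}.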
Example
Solution:
The strings of language L start with "a" followed by at least three b's, and after that contain at least one more "a" or "b", so the strings look like abbba, abbbbbba, abbbbbbbb, abbbb.....a
r = ab³b*(a + b)+
To minimize (optimize) the DFA, follow these steps:
Step 1: Remove all the states that are unreachable from the initial state via any sequence of transitions of the DFA.
Step 3: Now split the transition table into two tables T1 and T2. T1 contains all the final states and T2 contains the non-final states.
Find similar rows in T1, i.e. two states q and r such that:
1. δ (q, a) = p
2. δ (r, a) = p
That means, find two states which have the same transitions on a and b and remove one of them.
Step 5: Repeat this process until no similar rows remain in the transition table T1.
Step 7: Now combine the reduced T1 and T2 tables. The combined transition table is the transition table of the minimized DFA.
Example:
Solution:
Step 1: In the given DFA, q2 and q4 are the unreachable states so remove them.
Step 3:
1. One set contains the rows that start from non-final states:
2. The other set contains the rows that start from final states.
Step 5: In set 2, row 1 and row 2 are similar, since q3 and q5 transition to the same states on 0 and 1. So remove q5 and then replace q5 by q3 in the remaining rows.
o Lex is a program that generates a lexical analyzer. It is used together with the YACC parser generator.
o The lexical analyzer is a program that transforms an input stream into a sequence of tokens.
o Lex reads the specification of the lexical analyzer and produces C source code that implements it.
1. { definitions }
2. %%
3. { rules }
4. %%
5. { user subroutines }
In the rules section, each rule has the form pi {actioni}, where pi is a regular expression and actioni describes the action the lexical analyzer should take when pattern pi matches a lexeme.
User subroutines are auxiliary procedures needed by the actions. These subroutines can be compiled separately and loaded with the lexical analyzer.
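As a minimal sketch (this particular specification is ours, not part of the notes), a Lex file that counts the words in its input could look like the following; running it through lex produces a C file (conventionally lex.yy.c) that is then compiled with a C compiler:

%{
#include <stdio.h>
int word_count = 0;              /* definitions section: C declarations */
%}
%%
[a-zA-Z]+    { word_count++; }   /* rules section: pattern { action }   */
.|\n         { /* ignore everything else */ }
%%
int yywrap(void) { return 1; }   /* user subroutines section            */

int main(void)
{
    yylex();                     /* run the generated lexical analyzer  */
    printf("words: %d\n", word_count);
    return 0;
}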
BNF Notation
BNF stands for Backus-Naur Form. It is used to write a formal representation of a context-free
grammar. It is also used to describe the syntax of a programming language.
Each production has the form leftside → definition, where leftside ∈ (Vn ∪ Vt)+ and definition ∈ (Vn ∪ Vt)*. In BNF, the leftside contains only one non-terminal.
We can define several productions with the same leftside. The alternative definitions are separated by the vertical bar symbol "|".
1. S → aSa
2. S → bSb
3. S → c
1. S → aSa| bSb| c
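In conventional BNF notation, where productions are written with "::=" and non-terminals are enclosed in angle brackets, the same grammar could be written as <S> ::= a<S>a | b<S>b | c.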
Ambiguity
A grammar is said to be ambiguous if there exists more than one leftmost derivation, more than one rightmost derivation, or more than one parse tree for a given input string. Otherwise, the grammar is called unambiguous.
Example:
1. S → aSb | SS
2. S → ε
For the string aabb, the above grammar generates two parse trees:
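For instance, the string aabb has two distinct leftmost derivations, one starting with S → aSb and one starting with S → SS, which correspond to two different parse trees:
1. S ⇒ aSb ⇒ aaSbb ⇒ aabb
2. S ⇒ SS ⇒ aSbS ⇒ aaSbbS ⇒ aabbS ⇒ aabb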
An ambiguous grammar is not good for compiler construction. No method can automatically detect and remove the ambiguity, but you can remove it by re-writing the whole grammar without ambiguity.
YACC
o YACC stands for Yet Another Compiler Compiler.
o YACC provides a tool to produce a parser for a given grammar.
o YACC is a program designed to compile an LALR(1) grammar.
o It is used to produce the source code of the syntax analyzer for the language defined by an LALR(1) grammar.
o The input of YACC is a set of rules (a grammar) and the output is a C program.
The generated C program (conventionally y.tab.c) is then compiled by a C compiler to produce the parser executable.
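As a minimal sketch (the grammar and the hand-written yylex below are ours, chosen only to illustrate the shape of a YACC input file), a specification that accepts sums of single digits could look like this; yacc turns it into a C parser that is compiled as described above:

%{
#include <stdio.h>
#include <ctype.h>
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
%}
%token DIGIT
%%
line : expr '\n'          { printf("valid expression\n"); }
     ;
expr : expr '+' DIGIT
     | DIGIT
     ;
%%
/* A tiny hand-written scanner so the example is self-contained;
   normally this part would be generated by Lex. */
int yylex(void)
{
    int c = getchar();
    if (c == EOF) return 0;  /* 0 tells yyparse the input has ended */
    if (isdigit(c)) return DIGIT;
    return c;                /* '+', '\n' or any other character    */
}

int main(void) { return yyparse(); }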
Context Free Grammar
A context free grammar (CFG) is defined by four tuples:
1. G = (V, T, P, S)
Where V is the set of non-terminal symbols (variables), T is the set of terminal symbols, P is the set of production rules, and S is the start symbol.
In a CFG, the start symbol is used to derive the string. You can derive the string by repeatedly replacing a non-terminal by the right-hand side of a production, until all non-terminals have been replaced by terminal symbols.
Example:
Production rules:
1. S → aSa
2. S → bSb
3. S → c
Now let us check whether the string abbcbba can be derived from the given CFG.
1. S ⇒ aSa
2. S ⇒ abSba
3. S ⇒ abbSbba
4. S ⇒ abbcbba
By applying the production S → aSa, S → bSb recursively and finally applying the production
S → c, we get the string abbcbba.
Derivation
Derivation is a sequence of production rules. It is used to get the input string through these
production rules. During parsing we have to take two decisions. These are as follows:
We have two options to decide which non-terminal to be replaced with production rule.
Left-most Derivation
In the left-most derivation, the left-most non-terminal of the sentential form is replaced first at every step, so the input string is derived from left to right.
Example:
Production rules:
1. S = S + S
2. S = S - S
3. S = a | b |c
Input:
a-b+c
1. S=S+S
2. S=S-S+S
3. S=a-S+S
4. S=a-b+S
5. S=a-b+c
Right-most Derivation
In the right-most derivation, the right-most non-terminal of the sentential form is replaced first at every step, so the input string is derived from right to left.
Example:
1. S = S + S
2. S = S - S
3. S = a | b |c
Input:
a-b+c
1. S=S-S
2. S=S-S+S
3. S=S-S+c
4. S=S-b+c
5. S=a-b+c
Parse tree
o A parse tree is the graphical representation of a derivation. Each node is labelled with a symbol, which can be a terminal or a non-terminal.
o In parsing, the string is derived from the start symbol, and the root of the parse tree is that start symbol.
o A parse tree follows the precedence of operators. The deepest sub-tree is traversed first, so the operator in a parent node has lower precedence than the operator in its sub-tree.
Example:
Production rules:
1. T= T + T | T * T
2. T = a|b|c
Input:
a*b+c
Steps 1 to 5 (shown as figures) build the parse tree for a*b+c one production at a time.
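Since the step-by-step figures are not reproduced here, a brief sketch of the resulting tree (our reading of the grammar and of the precedence rule above): the root T expands to T + T; the left T expands to T * T with leaves a and b, and the right T expands to c. Because a * b forms the deeper sub-tree, the multiplication is performed before the addition.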
Capabilities of CFG
CFG has the following capabilities: