CKY (Cocke-Kasami-Younger)
and
Earley Parsing Algorithms
Dor Altshuler
Context-Free Grammar
A Context-Free Grammar (CFG) is a 4-tuple
G = (N, Σ, P, S) where:
N is a finite set of non-terminal symbols.
Σ is a finite set of terminal symbols (tokens).
P is a finite set of productions (rules).
A production (α, β) ∈ P will be written α → β,
where α is in N and β is in (N ∪ Σ)*.
S is the start symbol, S ∈ N.
Chomsky Normal Form
A context-free grammar where the right-hand side of each
production rule is restricted to be either two non-terminals
or one terminal.
Each production has one of the following forms:
A → a
A → BC
Examples: S → AB, A → a, B → b
CKY: bottom-up dynamic programming.
Earley parsing: top-down dynamic programming.
CKY (Cocke-Kasami-Younger)
One of the earliest recognition and parsing algorithms
The standard version of CKY can only recognize
languages defined by context-free grammars in
Chomsky Normal Form (CNF).
It is also possible to extend the CKY algorithm to handle
some grammars that are not in CNF, but these extensions
are harder to understand.
Based on a dynamic programming approach:
Build solutions compositionally from sub-solutions
Uses the grammar directly.
CKY Algorithm
Considers every possible consecutive subsequence of
letters and adds K to table[i, j] if the sequence of letters
from i to j can be generated from the non-terminal K.
Once it has considered sequences of length 1, it goes on
to sequences of length 2, and so on.
For subsequences of length 2 and greater, it considers
every possible partition of the subsequence into two
halves, and checks to see if there is some production
A -> BC such that B matches the first half and C matches
the second half. If so, it records A as matching the whole
subsequence.
Once this process is completed, the sentence is
recognized by the grammar if the entire string is matched
by the start symbol.
CKY Algorithm
Observation: any portion of the input string spanning i
to j can be split at k, and structure can then be built
using sub-solutions spanning i to k and sub-solutions
spanning k to j.
Meaning:
The solution to problem [i, j] can be constructed from the
solution to sub-problem [i, k] and the solution to sub-problem [k, j].
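This observation gives the CKY recurrence directly (using the table[i, j] notation of the pseudocode later in the deck):

```latex
\mathrm{table}[i,j] \;=\;
\bigl\{\, A \;\bigm|\; (A \to BC) \in P,\;
\exists k,\ i < k < j:\;
B \in \mathrm{table}[i,k],\ C \in \mathrm{table}[k,j] \,\bigr\}
```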
CKY Algorithm for Deciding CFL
Consider the grammar G given by:
S → ε | AB | XB
T → AB | XB
X → AT
A → a
B → b
1. Is w = aaabb in L(G)?
2. Is w = aaabbb in L(G)?
CKY Algorithm for Deciding CFL
The algorithm is bottom-up in that we start
with the bottom of the derivation tree.
Input: a a a b b
Length 1: A covers each "a" and B covers each "b", giving A A A B B.
Length 2: only "ab" (positions 3-4) matches a rule, giving S and T (via AB).
Length 3: "aab" (positions 2-4) gives X, since A and T combine via X → AT.
Length 4: "aabb" (positions 2-5) gives S and T, since X and B combine via XB.
Length 5: the whole string gives X, since A (position 1) and T (positions 2-5) combine via X → AT.
S does not appear in the cell for the whole string:
aaabb REJECTED!
CKY Algorithm for Deciding CFL
Now look at aaabbb :
Input: a a a b b b
1) Write variables for all length-1 substrings: A covers each "a" and B covers each "b", giving A A A B B B.
2) Write variables for all length-2 substrings: "ab" (positions 3-4) gives S and T (via AB).
3) Write variables for all length-3 substrings: "aab" (positions 2-4) gives X via X → AT.
4) Write variables for all length-4 substrings: "aabb" (positions 2-5) gives S and T via XB.
5) Write variables for all length-5 substrings: "aaabb" (positions 1-5) gives X via X → AT.
6) Write variables for all length-6 substrings: the whole string gives S and T, since X (positions 1-5) and B (position 6) combine via XB.
S is included, so
aaabbb accepted!
The CKY Algorithm
function CKY(word w, grammar P) returns table
  for i from 1 to LENGTH(w) do
    table[i-1, i] ← {A | A → wi ∈ P}
  for j from 2 to LENGTH(w) do
    for i from j-2 down to 0 do
      for k from i+1 to j-1 do
        table[i, j] ← table[i, j] ∪ {A | A → BC ∈ P,
                                     B ∈ table[i, k], C ∈ table[k, j]}
Space complexity:
A three-dimensional table of size n × n × r, or
an n × n table whose cells hold lists of up to r non-terminals.
Space complexity: O(r·n²) = O(n²), since the grammar size r is a constant.
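The pseudocode above can be sketched in Python. This is a minimal recognizer, not an optimized implementation; the ε-rule for S is omitted, since it only matters for the empty input:

```python
def cky_recognize(word, rules, start):
    """CKY recognizer for a CNF grammar.

    rules: list of (lhs, rhs) pairs, where rhs is a 1-tuple (a terminal)
    or a 2-tuple (two non-terminals).
    """
    n = len(word)
    # table[i][j] holds the non-terminals deriving word[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, ch in enumerate(word):               # length-1 substrings
        for lhs, rhs in rules:
            if rhs == (ch,):
                table[i][i + 1].add(lhs)
    for length in range(2, n + 1):              # longer substrings
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):           # every split point
                for lhs, rhs in rules:
                    if (len(rhs) == 2 and rhs[0] in table[i][k]
                            and rhs[1] in table[k][j]):
                        table[i][j].add(lhs)
    return start in table[0][n]

# The grammar from the slides (S -> e omitted, see above).
G = [("S", ("A", "B")), ("S", ("X", "B")),
     ("T", ("A", "B")), ("T", ("X", "B")),
     ("X", ("A", "T")),
     ("A", ("a",)), ("B", ("b",))]
print(cky_recognize("aaabb", G, "S"))   # False: rejected
print(cky_recognize("aaabbb", G, "S"))  # True: accepted
```

The triple loop over (length, i, k) mirrors the (j, i, k) loops of the pseudocode, and reproduces the two worked examples: aaabb is rejected and aaabbb is accepted.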
Another Parsing Algorithm
Parsing General CFLs vs. Limited Forms
Efficiency:
Deterministic (LR) languages can be parsed in linear
time.
A number of parsing algorithms for general CFLs
require O(n³) time.
The asymptotically best parsing algorithm for general
CFLs requires O(n^2.37) time, but it is not practical.
Utility - why parse general grammars and not just CNF?
Grammar intended to reflect actual structure of
language.
Conversion to CNF completely destroys the parse
structure.
The Earley Algorithm (1970)
Doesn't require the grammar to be in CNF.
Usually moves left-to-right.
Makes it faster than O(n³) for many grammars.
Earley's algorithm resembles recursive descent, but
solves the left-recursion problem.
No recursive function calls.
Uses a parse table as we did in CKY,
so we can look up anything we've
discovered so far.
The Earley Algorithm
The algorithm is a bottom-up chart parser with
top-down prediction: the algorithm builds up parse trees
bottom-up, but the incomplete edges (the predictions) are
generated top-down, starting with the start symbol.
We need a new data structure: A dotted rule stands for a
partially constructed constituent, with the dot indicating
how much has already been found and how much is still
predicted.
Dotted rules are generated from ordinary grammar rules.
The algorithm maintains sets of states, one set for each
position in the input string (starting from 0).
The Dotted Rules
With dotted rules, an entry in the chart records:
Which rule has been used in the analysis.
Which part of the rule has already been found (left of the
dot).
Which part is still predicted to be found and will combine
into a complete parse (right of the dot).
The start and end positions of the material left of the dot.
Example: A → X1 X2 ⋯ • C ⋯ Xm
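As a concrete data structure, a dotted rule can be represented as a small immutable record; a minimal sketch in Python (the field names are my own choice, not from the slides):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Item:
    """A dotted rule: lhs -> rhs[:dot] . rhs[dot:], started at `origin`."""
    lhs: str
    rhs: Tuple[str, ...]
    dot: int      # how much of the right-hand side has been found
    origin: int   # input position where this constituent started

    def next_symbol(self) -> Optional[str]:
        """The predicted symbol right of the dot, or None if complete."""
        return self.rhs[self.dot] if self.dot < len(self.rhs) else None

# E.g. an item for P -> P + . M, started at position 0,
# predicts an M next:
item = Item("P", ("P", "+", "M"), 2, 0)
print(item.next_symbol())  # "M"
```

Making the record frozen (hashable) lets items be stored in the per-position sets the algorithm maintains.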
Operation of the Algorithm
Process all hypotheses one at a time in order.
This may add new hypotheses to the end of the
to-do list, or try to add old hypotheses again.
Process a hypothesis according to what follows
the dot:
If a terminal symbol, scan the input and see if it matches.
If a non-terminal, predict ways to match it.
(well predict blindly, but could reduce # of predictions by
looking ahead k symbols in the input and only making
predictions that are compatible)
If nothing, then we have a complete constituent, so
attach it to all its customers.
Parsing Operations
The Earley algorithm has three main operations:
Predictor: an incomplete entry looks for a symbol to the right
of its dot. If there is no matching symbol in the chart, one is
predicted by adding all matching rules with an initial dot.
Scanner: an incomplete entry looks for a terminal symbol to the
right of the dot. This prediction is compared to the input, and a
complete entry is added to the chart if it matches.
Completer: a complete edge is combined with an incomplete
entry that is looking for it, to form another complete entry.
The Earley Recognition Algorithm
The Main Algorithm: parsing input w = w1 w2 ⋯ wn
1. S0 = { [S → • P, 0] }
2. For 0 ≤ i ≤ n do:
Process each item s ∈ Si in order by applying to
it a single applicable operation among:
(a) Predictor (adds new items to Si)
(b) Completer (adds new items to Si)
(c) Scanner (adds new items to Si+1)
3. If Si+1 = ∅, reject the input.
4. If i = n and [S → P •, 0] ∈ Sn, then accept the input.
Earley Algorithm Example
Consider the following grammar for arithmetic
expressions:
S → P (the start rule)
P → P + M
P → M
M → M * T
M → T
T → number
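The recognition algorithm above can be sketched in Python for this grammar. This is a minimal, unoptimized version (the representation is my own; a practical implementation would index items by the symbol they are waiting for):

```python
def earley_recognize(words, grammar, start):
    """Earley recognizer. grammar maps each non-terminal to a list of
    right-hand sides (tuples of symbols); anything that is not a key
    of `grammar` is treated as a terminal."""
    n = len(words)
    # An item is (lhs, rhs, dot, origin); chart[i] is the item set S_i.
    chart = [set() for _ in range(n + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(n + 1):
        agenda = list(chart[i])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs) and rhs[dot] in grammar:
                # Predictor: add all rules for the expected non-terminal.
                for prod in grammar[rhs[dot]]:
                    new = (rhs[dot], prod, 0, i)
                    if new not in chart[i]:
                        chart[i].add(new)
                        agenda.append(new)
            elif dot < len(rhs):
                # Scanner: advance over a matching terminal.
                if i < n and words[i] == rhs[dot]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            else:
                # Completer: advance every item waiting for lhs.
                for l2, r2, d2, o2 in list(chart[origin]):
                    if d2 < len(r2) and r2[d2] == lhs:
                        new = (l2, r2, d2 + 1, o2)
                        if new not in chart[i]:
                            chart[i].add(new)
                            agenda.append(new)
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[n])

# The arithmetic-expression grammar from the slides.
G = {"S": [("P",)],
     "P": [("P", "+", "M"), ("M",)],
     "M": [("M", "*", "T"), ("T",)],
     "T": [("number",)]}
print(earley_recognize(["number", "+", "number", "*", "number"], G, "S"))
# True
```

Note that the left-recursive rules P → P + M and M → M * T cause no trouble: the duplicate check on chart[i] stops prediction from looping, which is exactly how Earley avoids the left-recursion problem of recursive descent.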
Thank you