Equation Sheet

N-Gram Model Formulas

• Word sequences: $w_1^n = w_1 \ldots w_n$
• Chain rule of probability:
  $P(w_1^n) = P(w_1)\,P(w_2 \mid w_1)\,P(w_3 \mid w_1^2) \ldots P(w_n \mid w_1^{n-1}) = \prod_{k=1}^{n} P(w_k \mid w_1^{k-1})$
• Bigram approximation:
  $P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-1})$
• N-gram approximation:
  $P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-N+1}^{k-1})$

Estimating Probabilities

• N-gram conditional probabilities can be estimated from raw text based on the relative frequency of word sequences.
  Bigram: $P(w_n \mid w_{n-1}) = \dfrac{C(w_{n-1} w_n)}{C(w_{n-1})}$
  N-gram: $P(w_n \mid w_{n-N+1}^{n-1}) = \dfrac{C(w_{n-N+1}^{n-1}\, w_n)}{C(w_{n-N+1}^{n-1})}$
• To have a consistent probabilistic model, append a unique start (<s>) and end (</s>) symbol to every sentence and treat these as additional words.
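
As a concrete companion to the relative-frequency estimates above, here is a minimal Python sketch (not part of the original sheet); the `train_bigram_mle` helper, the toy corpus, and the plain-dictionary representation are assumptions made for illustration.

```python
from collections import defaultdict

def train_bigram_mle(sentences):
    """Bigram MLE: P(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1})."""
    history_counts = defaultdict(int)
    bigram_counts = defaultdict(int)
    for sentence in sentences:
        # Append unique start/end symbols and treat them as additional words.
        words = ["<s>"] + sentence + ["</s>"]
        for prev, curr in zip(words, words[1:]):
            history_counts[prev] += 1
            bigram_counts[(prev, curr)] += 1
    return {bg: c / history_counts[bg[0]] for bg, c in bigram_counts.items()}

# Toy usage.
probs = train_bigram_mle([["the", "cat", "sat"], ["the", "dog", "sat"]])
print(probs[("the", "cat")])  # C(the cat) / C(the) = 1/2
```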

Perplexity

• Measure of how well a model “fits” the test data.
• Uses the probability that the model assigns to the test corpus.
• Normalizes for the number of words in the test corpus and takes the inverse.
  $PP(W) = \sqrt[N]{\dfrac{1}{P(w_1 w_2 \ldots w_N)}}$
• Measures the weighted average branching factor in predicting the next word (lower is better).

Laplace (Add-One) Smoothing

• “Hallucinate” additional training data in which each possible N-gram occurs exactly once and adjust estimates accordingly.
  Bigram: $P(w_n \mid w_{n-1}) = \dfrac{C(w_{n-1} w_n) + 1}{C(w_{n-1}) + V}$
  N-gram: $P(w_n \mid w_{n-N+1}^{n-1}) = \dfrac{C(w_{n-N+1}^{n-1}\, w_n) + 1}{C(w_{n-N+1}^{n-1}) + V}$
  where V is the total number of possible (N−1)-grams (i.e. the vocabulary size for a bigram model).
• Tends to reassign too much mass to unseen events, so can be adjusted to add $0 < \delta < 1$ (normalized by $\delta V$ instead of V).
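
A rough sketch of how the perplexity and add-one formulas above fit together in code; the function names, the dictionary-based counts, and the toy vocabulary size are assumptions of this sketch, not definitions from the sheet.

```python
import math
from collections import defaultdict

def add_one_bigram_prob(prev, word, bigram_counts, history_counts, vocab_size):
    """Laplace-smoothed estimate: (C(prev word) + 1) / (C(prev) + V)."""
    return (bigram_counts[(prev, word)] + 1) / (history_counts[prev] + vocab_size)

def perplexity(test_sentences, bigram_counts, history_counts, vocab_size):
    """Inverse probability of the test corpus, normalized by its length (in log space)."""
    log_prob, num_words = 0.0, 0
    for sentence in test_sentences:
        words = ["<s>"] + sentence + ["</s>"]
        for prev, curr in zip(words, words[1:]):
            log_prob += math.log(add_one_bigram_prob(prev, curr, bigram_counts,
                                                     history_counts, vocab_size))
            num_words += 1
    return math.exp(-log_prob / num_words)

# Toy usage: counts gathered as in the bigram sketch above; V = 6 word types
# including <s> and </s>.
bigrams, histories = defaultdict(int), defaultdict(int)
for s in [["the", "cat", "sat"], ["the", "dog", "sat"]]:
    w = ["<s>"] + s + ["</s>"]
    for a, b in zip(w, w[1:]):
        histories[a] += 1
        bigrams[(a, b)] += 1
print(perplexity([["the", "dog", "sat"]], bigrams, histories, vocab_size=6))
```
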
Interpolation

• Linearly combine estimates of N-gram models of increasing order (a code sketch follows the HMM definition below).
  Interpolated Trigram Model:
  $\hat{P}(w_n \mid w_{n-2}, w_{n-1}) = \lambda_1 P(w_n \mid w_{n-2}, w_{n-1}) + \lambda_2 P(w_n \mid w_{n-1}) + \lambda_3 P(w_n)$
  Where: $\sum_i \lambda_i = 1$
• Learn proper values for the λi by training to (approximately) maximize the likelihood of an independent development (a.k.a. tuning) corpus.

Formal Definition of an HMM

• A set of N+2 states S = {s0, s1, s2, …, sN, sF}
  – Distinguished start state: s0
  – Distinguished final state: sF
• A set of M possible observations V = {v1, v2, …, vM}
• A state transition probability distribution A = {aij}
  $a_{ij} = P(q_{t+1} = s_j \mid q_t = s_i), \qquad \sum_{j=1}^{N} a_{ij} + a_{iF} = 1, \; 0 \le i \le N$
• Observation probability distribution for each state j, B = {bj(k)}
  $b_j(k) = P(v_k \text{ at } t \mid q_t = s_j), \quad 1 \le j \le N,\ 1 \le k \le M$
• Total parameter set λ = {A, B}
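
Returning to the Interpolation section above, a minimal sketch of the interpolated trigram estimate; the component models are passed in as plain probability functions, and the lambda values are illustrative placeholders rather than weights tuned on a development corpus.

```python
def interpolated_trigram_prob(w2, w1, w, p_tri, p_bi, p_uni, lambdas=(0.6, 0.3, 0.1)):
    """P_hat(w | w2, w1) = l1*P(w | w2, w1) + l2*P(w | w1) + l3*P(w), with the l_i summing to 1."""
    l1, l2, l3 = lambdas
    return l1 * p_tri(w, w2, w1) + l2 * p_bi(w, w1) + l3 * p_uni(w)

# Toy usage with uniform component models over a 10-word vocabulary.
uniform = lambda *args: 1 / 10
print(interpolated_trigram_prob("the", "cat", "sat", uniform, uniform, uniform))  # 0.1
```

In practice the λi would be tuned on a held-out development corpus, as noted above.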

Forward Probabilities

• Let αt(j) be the probability of being in state j after seeing the first t observations (by summing over all initial paths leading to j).
  $\alpha_t(j) = P(o_1, o_2, \ldots, o_t, q_t = s_j \mid \lambda)$

Computing the Forward Probabilities

• Initialization
  $\alpha_1(j) = a_{0j}\, b_j(o_1), \quad 1 \le j \le N$
• Recursion
  $\alpha_t(j) = \Bigl[\sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij}\Bigr] b_j(o_t), \quad 1 \le j \le N,\ 1 < t \le T$
• Termination
  $P(O \mid \lambda) = \alpha_{T+1}(s_F) = \sum_{i=1}^{N} \alpha_T(i)\, a_{iF}$
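
A minimal sketch of the forward recursion above, assuming the real states are 0-indexed, the start-state transitions are supplied separately as `a0`, and the transitions into the final state as `aF`; this array layout is an assumption of the sketch, not part of the sheet.

```python
def forward(obs, A, B, a0, aF):
    """Forward algorithm: P(O | lambda), summing over all state paths.

    obs : observation indices o_1..o_T
    A   : A[i][j] = P(s_j at t+1 | s_i at t) over the N "real" states (0-indexed)
    B   : B[j][k] = P(observation symbol k | state j)
    a0  : a0[j]   = transition probability from the start state s_0 to s_j
    aF  : aF[i]   = transition probability from s_i to the final state s_F
    """
    N, T = len(A), len(obs)
    # Initialization: alpha_1(j) = a_{0j} * b_j(o_1)
    alpha = [a0[j] * B[j][obs[0]] for j in range(N)]
    # Recursion: alpha_t(j) = (sum_i alpha_{t-1}(i) * a_{ij}) * b_j(o_t)
    for t in range(1, T):
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][obs[t]]
                 for j in range(N)]
    # Termination: P(O | lambda) = sum_i alpha_T(i) * a_{iF}
    return sum(alpha[i] * aF[i] for i in range(N))

# Toy usage: 2 states, 2 observation symbols; each row of [A | aF] sums to 1.
A = [[0.6, 0.3], [0.3, 0.5]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(forward([0, 1, 0], A, B, a0=[0.5, 0.5], aF=[0.1, 0.2]))
```
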
Viterbi Scores

• Recursively compute the probability of the most likely subsequence of states that accounts for the first t observations and ends in state sj.
  $v_t(j) = \max_{q_0, q_1, \ldots, q_{t-1}} P(q_0, q_1, \ldots, q_{t-1}, o_1, \ldots, o_t, q_t = s_j \mid \lambda)$
• Also record “backpointers” that subsequently allow backtracing the most probable state sequence.
  – btt(j) stores the state at time t−1 that maximizes the probability that the system was in state sj at time t (given the observed sequence).

Computing the Viterbi Scores

• Initialization
  $v_1(j) = a_{0j}\, b_j(o_1), \quad 1 \le j \le N$
• Recursion
  $v_t(j) = \max_{1 \le i \le N} v_{t-1}(i)\, a_{ij}\, b_j(o_t), \quad 1 \le j \le N,\ 1 < t \le T$
• Termination
  $P^* = v_{T+1}(s_F) = \max_{1 \le i \le N} v_T(i)\, a_{iF}$
• Analogous to the Forward algorithm except take max instead of sum.
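
The same layout can be reused for a Viterbi sketch that takes the max instead of the sum, records backpointers, and backtraces the most probable state sequence (as described in the next section); the parameter names mirror the forward sketch and are assumptions of these notes.

```python
def viterbi(obs, A, B, a0, aF):
    """Viterbi decoding: most probable state sequence and its score P*.

    Parameter layout mirrors the forward sketch above: A[i][j], B[j][k],
    a0[j] from the start state, aF[i] into the final state.
    """
    N, T = len(A), len(obs)
    # Initialization: v_1(j) = a_{0j} * b_j(o_1)
    v = [a0[j] * B[j][obs[0]] for j in range(N)]
    backpointers = []
    # Recursion: same shape as the forward recursion, but take the max over
    # predecessors and record which predecessor achieved it.
    for t in range(1, T):
        new_v, bt = [], []
        for j in range(N):
            best_i = max(range(N), key=lambda i: v[i] * A[i][j])
            bt.append(best_i)
            new_v.append(v[best_i] * A[best_i][j] * B[j][obs[t]])
        v = new_v
        backpointers.append(bt)
    # Termination: best transition into s_F, then follow backpointers backwards.
    last = max(range(N), key=lambda i: v[i] * aF[i])
    path = [last]
    for bt in reversed(backpointers):
        path.append(bt[path[-1]])
    path.reverse()
    return path, v[last] * aF[last]

# Toy usage with the same illustrative parameters as the forward sketch.
A, B = [[0.6, 0.3], [0.3, 0.5]], [[0.9, 0.1], [0.2, 0.8]]
print(viterbi([0, 1, 0], A, B, a0=[0.5, 0.5], aF=[0.1, 0.2]))
```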

Computing the Viterbi Backpointers Supervised Parameter Estimation

• Initialization • Estimate state transition probabilities based on tag


bigram and unigram statistics in the labeled data.

• Recursion
• Estimate the observation probabilities based on tag/
word co-occurrence statistics in the labeled data.
• Termination

• Use appropriate smoothing if training data is sparse.


Final state in the most probable state sequence. Follow
backpointers to initial state to construct full sequence. 11 12
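
The supervised estimates above reduce to relative-frequency counts over a tagged corpus; a minimal sketch, assuming the corpus is a list of (word, tag) sequences and returning A and B as nested dictionaries (both representation choices are assumptions of the sketch).

```python
from collections import defaultdict

def estimate_hmm(tagged_sentences):
    """Relative-frequency estimates of transition (A) and observation (B) probabilities."""
    trans_counts = defaultdict(lambda: defaultdict(int))
    emit_counts = defaultdict(lambda: defaultdict(int))
    tag_counts = defaultdict(int)
    for sentence in tagged_sentences:
        tags = ["<s>"] + [tag for _, tag in sentence] + ["</s>"]
        # Tag bigram / unigram statistics for the transition distribution A.
        for prev, curr in zip(tags, tags[1:]):
            trans_counts[prev][curr] += 1
        # Tag/word co-occurrence statistics for the observation distribution B.
        for word, tag in sentence:
            emit_counts[tag][word] += 1
            tag_counts[tag] += 1
    A = {t: {u: c / sum(nxt.values()) for u, c in nxt.items()}
         for t, nxt in trans_counts.items()}
    B = {t: {w: c / tag_counts[t] for w, c in words.items()}
         for t, words in emit_counts.items()}
    return A, B

# Toy usage on a two-sentence tagged corpus.
corpus = [[("the", "DT"), ("dog", "NN"), ("barks", "VBZ")],
          [("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")]]
A, B = estimate_hmm(corpus)
print(A["DT"]["NN"], B["NN"]["dog"])  # 1.0, 0.5
```
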
Context Free Grammars (CFG)

• N, a set of non-terminal symbols (or variables)
• Σ, a set of terminal symbols (disjoint from N)
• R, a set of productions or rules of the form A → β, where A is a non-terminal and β is a string of symbols from (Σ ∪ N)*
• S, a designated non-terminal called the start symbol

Estimating Production Probabilities

• Set of production rules can be taken directly from the set of rewrites in the treebank.
• Parameters can be directly estimated from frequency counts in the treebank.
  $P(\alpha \rightarrow \beta \mid \alpha) = \dfrac{\operatorname{count}(\alpha \rightarrow \beta)}{\sum_{\gamma} \operatorname{count}(\alpha \rightarrow \gamma)} = \dfrac{\operatorname{count}(\alpha \rightarrow \beta)}{\operatorname{count}(\alpha)}$
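
A short sketch of estimating production probabilities by relative frequency; representing treebank rewrites as (lhs, rhs) pairs is an assumption made for this example.

```python
from collections import defaultdict

def estimate_rule_probs(rewrites):
    """P(A -> beta | A) = count(A -> beta) / count(A) from treebank rewrites.

    `rewrites` is assumed to be a list of (lhs, rhs) pairs, where rhs is a tuple
    of symbols, e.g. ("S", ("NP", "VP")), a representation chosen for this sketch.
    """
    rule_counts = defaultdict(int)
    lhs_counts = defaultdict(int)
    for lhs, rhs in rewrites:
        rule_counts[(lhs, rhs)] += 1
        lhs_counts[lhs] += 1
    return {rule: c / lhs_counts[rule[0]] for rule, c in rule_counts.items()}

# Toy usage: rewrites read off a tiny treebank.
rewrites = [("S", ("NP", "VP")), ("NP", ("DT", "NN")), ("NP", ("NNP",)),
            ("VP", ("VBZ", "NP")), ("NP", ("DT", "NN"))]
probs = estimate_rule_probs(rewrites)
print(probs[("NP", ("DT", "NN"))])  # 2/3
```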
