
ECE4007
INFORMATION THEORY AND CODING

Dr. Sangeetha R.G
Associate Professor Senior
SENSE
Syllabus
Module 1: Introduction (4 hours) CO: 1
Review of Probability Theory, Introduction to Information Theory
Review
• Probability
• Types of Events
  – Mutually Exclusive Events
  – Independent Events
• Joint probability of related and independent events
• Statistical Independence
• Axioms of probability
Introduction to Probability Distributions
Random Variable
• A random variable x takes on a defined set of values
  with different probabilities.
• For example, if you roll a die, the outcome is random (not fixed)
  and there are 6 possible outcomes, each of which occurs with
  probability one-sixth.
• For example, if you poll people about their voting preferences,
  the percentage of the sample that responds “Yes on
  Proposition 100” is also a random variable (the percentage
  will be slightly different every time you poll).
• Roughly, probability is how frequently we expect
  different outcomes to occur if we repeat the
  experiment over and over (the “frequentist” view).
Random variables can be
discrete or continuous
• Discrete random variables have a countable
number of outcomes
– Examples: Dead/alive, treatment/placebo, dice, counts,
etc.
• Continuous random variables have an infinite
continuum of possible values.
– Examples: blood pressure, weight, the speed of a car,
the real numbers from 1 to 6.
Probability functions

• A probability function maps the possible values of x
  against their respective probabilities of occurrence, p(x).
• p(x) is a number from 0 to 1.0.
• The area under a probability function is always 1.
Discrete example: roll of a die

[Figure: pmf of a fair die, p(x) = 1/6 for each face x = 1, 2, 3, 4, 5, 6]

Σ over all x of P(x) = 1
Probability mass function (pmf)

x    p(x)
1    p(x=1) = 1/6
2    p(x=2) = 1/6
3    p(x=3) = 1/6
4    p(x=4) = 1/6
5    p(x=5) = 1/6
6    p(x=6) = 1/6
     Total = 1.0
Cumulative distribution function (CDF)

[Figure: step plot of P(x), rising by 1/6 at each of x = 1, 2, 3, 4, 5, 6, from 1/6 up to 1.0]
Cumulative distribution function

x    P(x≤A)
1    P(x≤1) = 1/6
2    P(x≤2) = 2/6
3    P(x≤3) = 3/6
4    P(x≤4) = 4/6
5    P(x≤5) = 5/6
6    P(x≤6) = 6/6
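A quick sanity check: the pmf and CDF above can be tabulated in a few lines of Python. This is a minimal sketch (exact fractions avoid float rounding; the variable names are illustrative):

```python
from fractions import Fraction

# pmf of a fair die: each face 1..6 has probability 1/6
pmf = {x: Fraction(1, 6) for x in range(1, 7)}
assert sum(pmf.values()) == 1   # total probability must be 1

# CDF: P(x <= A) accumulates the pmf from the left
running = Fraction(0)
for x in sorted(pmf):
    running += pmf[x]
    print(x, running)           # 1 1/6, 2 1/3, ..., 6 1
```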
Practice Problem:
• The number of patients seen in the ER in any given hour is a
  random variable represented by x. The probability distribution
  for x is:

x     10   11   12   13   14
P(x)  .4   .2   .2   .1   .1

Find the probability that in a given hour:

a. exactly 14 patients arrive: P(x=14) = .1
b. at least 12 patients arrive: P(x≥12) = (.2 + .1 + .1) = .4
c. at most 11 patients arrive: P(x≤11) = (.4 + .2) = .6
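The same arithmetic, transcribed into Python as a check (the dictionary is just the table above):

```python
# ER arrivals per hour, from the table above
p = {10: .4, 11: .2, 12: .2, 13: .1, 14: .1}

print(p[14])                                    # P(x = 14) = 0.1
print(sum(v for x, v in p.items() if x >= 12))  # P(x >= 12) = 0.4 (up to float rounding)
print(sum(v for x, v in p.items() if x <= 11))  # P(x <= 11) = 0.6 (up to float rounding)
```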
Review Question 1

If you toss a die, what’s the probability that you roll a 3 or less?

a. 1/6
b. 1/3
c. 1/2
d. 5/6
e. 1.0
Review Question 2

Two dice are rolled and the sum of the face values is six.
What is the probability that at least one of the dice came up a 3?

a. 1/5
b. 2/3
c. 1/2
d. 5/6
e. 1.0
Review Question 2

Two dice are rolled and the sum of the face values is six.
What is the probability that at least one of the dice came up a 3?

a. 1/5 ← correct
b. 2/3
c. 1/2
d. 5/6
e. 1.0

How can you get a sum of 6 on two dice? 1-5, 5-1, 2-4, 4-2, 3-3.
One of these five equally likely outcomes has a 3, so the answer is 1/5.
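A brute-force enumeration of the 36 outcomes confirms this (plain Python):

```python
from itertools import product

# conditioning event: the two faces sum to six
pairs = [(a, b) for a, b in product(range(1, 7), repeat=2) if a + b == 6]
with_a_3 = [pair for pair in pairs if 3 in pair]

print(pairs)                           # [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)]
print(len(with_a_3), "/", len(pairs))  # 1 / 5
```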
Continuous case

• The probability function that accompanies a
  continuous random variable is a continuous
  mathematical function that integrates to 1.
• For example, recall the negative exponential
  function (in probability, this is called an
  “exponential distribution”):

  f(x) = e^(-x)

• This function integrates to 1:

  ∫ from 0 to ∞ of e^(-x) dx = [-e^(-x)] from 0 to ∞ = 0 + 1 = 1
Continuous case: “probability density function” (pdf)

p(x) = e^(-x)

[Figure: exponential density starting at height 1 at x = 0 and decaying toward 0]

The probability that x is any exact particular value (such as 1.9976) is 0:
for a continuous distribution, the probability at a single point is zero,
so we can only assign probabilities to ranges of x.
For example, the probability of x falling within 1 to 2:
Clinical example: Survival times after lung transplant may
roughly follow an exponential function, p(x) = e^(-x).
Then the probability that a patient will die in the second
year after surgery (between years 1 and 2) is about 23%.

[Figure: shaded area under p(x) = e^(-x) between x = 1 and x = 2]

P(1 ≤ x ≤ 2) = ∫ from 1 to 2 of e^(-x) dx = [-e^(-x)] from 1 to 2
             = -e^(-2) + e^(-1) = -.135 + .368 = .23
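Checking the integral numerically (standard-library Python; the midpoint Riemann sum is just a sanity check of the closed form):

```python
import math

exact = math.exp(-1) - math.exp(-2)   # closed form of the integral

# midpoint Riemann sum of e^(-x) over [1, 2]
n = 100_000
width = 1 / n
approx = sum(math.exp(-(1 + (i + 0.5) * width)) * width for i in range(n))

print(round(exact, 4), round(approx, 4))   # 0.2325 0.2325
```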
Example 2: Uniform distribution

The uniform distribution: all values are equally likely.

f(x) = 1, for 0 ≤ x ≤ 1

[Figure: flat density p(x) of height 1 over the interval from 0 to 1]

We can see it’s a probability distribution because it integrates
to 1 (the area under the curve is 1):

∫ from 0 to 1 of 1 dx = [x] from 0 to 1 = 1 - 0 = 1
Example: Uniform distribution

What’s the probability that x is between 0 and ½?

[Figure: shaded area of height 1 from x = 0 to x = ½]

P(0 ≤ x ≤ ½) = ½

Clinical Research Example: When randomizing patients in
an RCT, we often use a random number generator on the
computer. These programs work by randomly generating a
number between 0 and 1 (with equal probability of every
number in between). Then a subject who gets X < .5 is control
and a subject who gets X > .5 is treatment.
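A minimal sketch of that randomization scheme, assuming the 0.5 cutoff from the slide (the function name is illustrative):

```python
import random

def assign_arm(rng: random.Random) -> str:
    """Draw X ~ Uniform(0, 1) and assign by the 0.5 cutoff."""
    return "control" if rng.random() < 0.5 else "treatment"

rng = random.Random(42)   # seeded for reproducibility
arms = [assign_arm(rng) for _ in range(1000)]
print(arms.count("control") / len(arms))   # close to 0.5
```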


Expected Value and Variance

• All probability distributions are characterized by
  an expected value (mean) and a variance
  (standard deviation squared).
Expected value of a random variable

• Expected value is just the average or mean (µ) of
  random variable x.
• It’s sometimes called a “weighted average”
  because more frequent values of X are weighted
  more highly in the average.
• It’s also how we expect X to behave on average
  over the long run (the “frequentist” view again).
Expected value, formally

Discrete case:
  E(X) = Σ over all x of x_i p(x_i)

Continuous case:
  E(X) = ∫ over all x of x p(x) dx
Symbol Interlude

• E(X) = µ
– these symbols are used
interchangeably
Example: expected value

• Recall the following probability distribution of ER arrivals:

x     10   11   12   13   14
P(x)  .4   .2   .2   .1   .1

Σ x_i p(x_i) = 10(.4) + 11(.2) + 12(.2) + 13(.1) + 14(.1) = 11.3
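The weighted sum, in code:

```python
# E(X) = sum over all x of x * p(x)
p = {10: .4, 11: .2, 12: .2, 13: .1, 14: .1}
mean = sum(x * px for x, px in p.items())
print(mean)   # 11.3 (up to float rounding)
```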
Sample Mean is a special case of
Expected Value…

Sample mean, for a sample of n subjects:

X̄ = (Σ from i=1 to n of x_i) / n = Σ from i=1 to n of x_i (1/n)

The probability (frequency) of each person in
the sample is 1/n.
Expected Value

• Expected value is an extremely useful concept
  for good decision-making!
Variance/standard deviation

σ² = Var(x) = E[(x-µ)²]

“The expected (or average) squared distance (or deviation) from the mean”

σ² = Var(x) = E[(x-µ)²] = Σ over all x of (x_i - µ)² p(x_i)
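Applying the definition to the ER-arrivals table (µ = 11.3 from the earlier example):

```python
# Var(X) = sum over all x of (x - mu)^2 * p(x)
p = {10: .4, 11: .2, 12: .2, 13: .1, 14: .1}
mu = sum(x * px for x, px in p.items())               # 11.3
var = sum((x - mu) ** 2 * px for x, px in p.items())  # about 1.81
print(mu, var, var ** 0.5)   # mean, variance, standard deviation
```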
Variance, discrete and continuous

Discrete case:
  Var(X) = Σ over all x of (x_i - µ)² p(x_i)

Continuous case:
  Var(X) = ∫ over all x of (x - µ)² p(x) dx
Symbol Interlude

• Var(X) = σ²
• SD(X) = σ
  – these symbols are used interchangeably
Review Question 3

The expected value and variance of a coin toss (H=1, T=0) are?

a. .50, .50
b. .50, .25
c. .25, .50
d. .25, .25
Important discrete probability distributions:

• Bernoulli
• Binomial
Bernoulli Distribution
• In probability theory and statistics, the Bernoulli
  distribution, named after the Swiss mathematician
  Jacob Bernoulli, is the discrete probability
  distribution of a random variable which takes the
  value 1 with probability p and the value 0 with
  probability q = 1 - p.

  P(x=1) = p
  P(x=0) = q = 1 - p   (the two outcomes are not necessarily equiprobable)
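A Bernoulli draw is a single biased coin flip. A minimal simulation sketch (the helper name is mine, and p = 0.3 is an arbitrary choice):

```python
import random

def bernoulli(p: float, rng: random.Random) -> int:
    """Return 1 with probability p, 0 with probability q = 1 - p."""
    return 1 if rng.random() < p else 0

rng = random.Random(0)
draws = [bernoulli(0.3, rng) for _ in range(10_000)]
print(sum(draws) / len(draws))   # close to p = 0.3
```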
Binomial Probability Distribution

• A fixed number of observations (trials), n
  – e.g., 15 tosses of a coin; 20 patients; 1000 people surveyed
• A binary outcome
  – e.g., head or tail in each toss of a coin; disease or no disease
  – Generally called “success” and “failure”
  – Probability of success is p, probability of failure is 1 - p
• Constant probability for each observation
  – e.g., the probability of getting a tail is the same each time we
    toss the coin
Binomial distribution
Take the example of 5 coin tosses. What’s the probability
of getting exactly 3 heads?

Solution:
One way to get exactly 3 heads: HHHTT

What’s the probability of this exact arrangement?
P(heads) × P(heads) × P(heads) × P(tails) × P(tails)
= (1/2)^3 × (1/2)^2

Another way to get exactly 3 heads: THHHT
Probability of this exact outcome = (1/2)^1 × (1/2)^3 × (1/2)^1
= (1/2)^3 × (1/2)^2
Binomial distribution

In fact, (1/2)^3 × (1/2)^2 is the probability of each
unique outcome that has exactly 3 heads and 2 tails.

So, the overall probability of 3 heads and 2 tails is:
(1/2)^3 × (1/2)^2 + (1/2)^3 × (1/2)^2 + (1/2)^3 × (1/2)^2 + …
for as many unique arrangements as there are. But how many are there?
Outcome   Probability
THHHT     (1/2)^3 × (1/2)^2
HHHTT     (1/2)^3 × (1/2)^2
TTHHH     (1/2)^3 × (1/2)^2
HTTHH     (1/2)^3 × (1/2)^2
HHTTH     (1/2)^3 × (1/2)^2
HTHHT     (1/2)^3 × (1/2)^2
THTHH     (1/2)^3 × (1/2)^2
HTHTH     (1/2)^3 × (1/2)^2
HHTHT     (1/2)^3 × (1/2)^2
THHTH     (1/2)^3 × (1/2)^2

The probability of each unique outcome is the same. The number of
ways to arrange 3 heads in 5 trials is 5C3 = 5!/(3!2!) = 10, so:

10 arrangements × (1/2)^3 × (1/2)^2

Factorial review: n! = n(n-1)(n-2)…

P(3 heads and 2 tails) = 5C3 × P(heads)^3 × P(tails)^2 = 10 × (½)^5 = 0.3125
Binomial distribution function:

X = the number of heads tossed in 5 coin tosses

[Figure: pmf p(x) over x = 0, 1, 2, 3, 4, 5 heads, symmetric and peaking at x = 2 and 3]
Binomial distribution, generally

Note the general pattern emerging: if you have only two possible
outcomes (call them 1/0 or yes/no or success/failure) in n independent
trials, then the probability of exactly X “successes” is:

P(X) = nCX × p^X × (1 - p)^(n - X)

where
  n = number of trials
  X = number of successes out of n trials
  p = probability of success
  1 - p = probability of failure
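The formula translates directly into code; Python's math.comb supplies the binomial coefficient. The numbers on the next slides can be reproduced with this helper (the function name is mine):

```python
from math import comb

def binom_pmf(x: int, n: int, p: float) -> float:
    """P(exactly x successes in n independent trials)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

print(round(binom_pmf(3, 5, 0.5), 4))    # 0.3125: 3 heads in 5 tosses
print(round(binom_pmf(10, 20, 0.5), 3))  # 0.176: 10 heads in 20 tosses
```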
Binomial distribution: example

• If I toss a coin 20 times, what’s the probability
  of getting exactly 10 heads?

20C10 × (.5)^10 × (.5)^10 = .176
Binomial distribution: example

If I toss a coin 20 times, what’s the probability
of getting 2 or fewer heads?

20C0 × (.5)^0 × (.5)^20 = (20!/(20!0!)) (.5)^20 = 9.5 × 10^-7
20C1 × (.5)^1 × (.5)^19 = (20!/(19!1!)) (.5)^20 = 20 × 9.5 × 10^-7 = 1.9 × 10^-5
20C2 × (.5)^2 × (.5)^18 = (20!/(18!2!)) (.5)^20 = 190 × 9.5 × 10^-7 = 1.8 × 10^-4

Total = 9.5 × 10^-7 + 1.9 × 10^-5 + 1.8 × 10^-4 ≈ 2.0 × 10^-4
**All probability distributions are
characterized by an expected value and
a variance:

If X follows a binomial distribution with
parameters n and p: X ~ Bin(n, p)
Then:
E(X) = np
Var(X) = np(1 - p)
SD(X) = √(np(1 - p))
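These shortcuts agree with the general definitions; a quick check for X ~ Bin(20, 0.5):

```python
from math import comb, sqrt

n, p = 20, 0.5
pmf = [comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)]

mean = sum(x * px for x, px in enumerate(pmf))
var = sum((x - mean) ** 2 * px for x, px in enumerate(pmf))

print(mean, n * p)            # both 10.0 (up to float rounding)
print(var, n * p * (1 - p))   # both 5.0 (up to float rounding)
print(sqrt(var))              # SD, about 2.236
```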
Poisson Process

Poisson Distribution

More Than One Random Variable and Independence
• Joint Distributions
• Independence

Functions of Random Variables
• Linear Combinations of Independent Random Variables
• What If the Random Variables Are Not Independent?

Random Samples, Statistics, and the Central Limit Theorem
• Sum of two Uniform RVs
• Sum of multiple Uniform RVs
• Sum of two Exponential RVs
• Sum of multiple Exponential RVs
• Sum of multiple Gaussian RVs
• Central Limit Theorem
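On the Central Limit Theorem slides above: sums of independent RVs tend toward a Gaussian shape as more terms are added. A simulation sketch with uniform summands (the helper name, sample size, and choice of k values are mine):

```python
import random
import statistics

rng = random.Random(1)

def sum_of_uniforms(k: int) -> float:
    """Sum of k independent Uniform(0, 1) draws."""
    return sum(rng.random() for _ in range(k))

for k in (1, 2, 12):
    samples = [sum_of_uniforms(k) for _ in range(100_000)]
    # CLT: mean -> k/2, variance -> k/12, histogram -> bell curve as k grows
    print(k, round(statistics.mean(samples), 3),
          round(statistics.variance(samples), 3))
```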
