Comparing Means: Samples: T-Tests For One Sample & Two Related
The t-Statistic
• What if we don’t know σ ?
– In most real-world situations in which we want to test a hypothesis, we
do not know the population standard deviation σ
– If we don’t know σ, we can still compute a test statistic using s, but this
statistic will no longer be normally distributed, so we can no longer use
the z test statistic
• Why?
– s is variable across samples and its sampling distribution is not normally
distributed
• s2 is distributed as a chi-square distribution, which we’ll talk about near the
end of the semester
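The variability of s across samples can be seen in a quick simulation (a hypothetical sketch; the population mean of 50 and σ of 10 are made-up values):

```python
import random
import statistics

# Draw many small samples (n = 5) from a normal population with a fixed
# sigma of 10, and watch the sample standard deviation s bounce around.
random.seed(1)
sigma = 10.0
sds = [statistics.stdev([random.gauss(50, sigma) for _ in range(5)])
       for _ in range(1000)]
print(min(sds), max(sds))  # s varies widely from sample to sample
```

Because s itself varies from sample to sample, dividing by s (instead of the constant σ) adds extra variability to the test statistic, which is exactly why t has heavier tails than z.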
Sample 2 (n = 5)
Student   Score
237       68
192       76
101       78
109       69
180       76
M = 73.4,  s² = 20.80,  s = 4.56

Sample 3 (n = 5)
Student   Score
221       65
85        70
223       71
48        63
40        69
M = 67.6,  s² = 11.80,  s = 3.44
The t-Statistic
• If we compute something like z, but using s instead of σ, we
get a statistic that follows the t distribution
Remember:  z = (M − μ) / σM, where σM = σ / √n

Similarly:  t = (M − μ) / sM, where sM = s / √n
• The estimation comes from the fact that we are using the sample
variance to estimate the unknown population variance.
– For large samples (large df), the estimation is very good and the t statistic
will be very similar to a z-score.
– For small samples (small df), the t statistic will provide a relatively poor
estimate of z.
– For large df, the t distribution will be nearly normal, but for small df, the t
distribution will be flatter and more spread out than a normal distribution.
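How quickly t approaches the normal distribution can be seen by comparing two-tailed α = .05 critical values (taken from a standard t table; the df = 30 entry is an added illustration) against the normal cutoff z = 1.96:

```python
# Two-tailed .05 critical values from a standard t table, showing the
# cutoff shrinking toward the normal value 1.96 as df grows.
t_crit = {4: 2.776, 9: 2.262, 30: 2.042, 200: 1.972}
z_crit = 1.960
for df in sorted(t_crit):
    print(f"df={df:>3}: t_crit={t_crit[df]:.3f} "
          f"(excess over z: {t_crit[df] - z_crit:+.3f})")
```

With only 4 degrees of freedom the cutoff is almost half a unit farther out than z; by df = 200 the difference is about 0.01.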
[Figure: t distributions for t(4) and t(200), each with α = 0.05 — the t(4) curve is flatter with heavier tails.]
2. Compute t-statistic
– For data in which I give you raw scores, you will have to compute the
sample mean and sample standard deviation
3. Make a decision: does the t-statistic for your sample fall into
the rejection region?
Compute t-Statistic
Given: μ = 70.0 (from H0),  s = 7.0,  n = 5,  M = 75.0

t(df) = (M − μ) / (s / √n),  df = n − 1

t(4) = (75 − 70) / (7 / √5) = 5 / 3.13 = 1.60

t.05 = 2.776
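The computation above can be checked in a few lines of Python:

```python
import math

# Worked example: M = 75, mu0 = 70, s = 7, n = 5.
M, mu0, s, n = 75.0, 70.0, 7.0, 5
s_M = s / math.sqrt(n)   # estimated standard error ~= 3.13
t = (M - mu0) / s_M      # t(4) ~= 1.60
print(round(s_M, 2), round(t, 2))
```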
[Figure: t(4) distribution, α = 0.05, 2-tailed test — critical regions (extreme 5%) beyond t = ±2.78, middle 95% retains H0; μ from H0 at the center.]

t = 1.60 falls in the middle 95%, so retain H0: no difference
01:830:200:01-04 Spring 2014
t-Tests for One Sample & Two Related Samples
• x = size(zenith) / size(horizon)
x        x²
1.73     2.99
1.06     1.12
2.03     4.12
1.40     1.96
0.95     0.90
1.13     1.28
1.41     1.99
1.73     2.99
1.63     2.66
1.56     2.43
sum      14.63    22.45    (n = 10)

M = Σxᵢ / n = 14.63 / 10 = 1.463

SS = Σxᵢ² − (Σxᵢ)² / n = 22.45 − 14.63² / 10 = 1.046

s = √(SS / (n − 1)) = √(1.046 / 9) = √0.116 = 0.341

t(df) = (M − μ) / (s / √n),  df = n − 1

t(9) = (1.463 − 1) / (0.341 / √10) = 0.463 / 0.108 = 4.29

t0.05 = 2.262

4.29 > 2.262; reject H0
difference score: D = X2 − X1

X1       X2       D = X2 − X1
83.80 95.20 11.40
83.30 94.30 11.00
86.00 91.50 5.50
82.50 91.90 9.40
86.70 100.30 13.60
79.60 76.70 -2.90
76.90 76.80 -0.10
94.20 101.60 7.40
73.40 94.90 21.50
80.50 75.20 -5.30
• The null hypothesis states that the population of difference scores has a
mean of zero:
H0: μD = μ2 − μ1 = 0
• The alternative hypothesis states that there is a systematic difference
between treatments that causes the difference scores to be consistently
positive (or negative) and produces a non-zero mean difference between
the treatments:
H1: μD ≠ 0
t(df) = (MD − μD) / sMD,  where sMD = sD / √nD  and  df = nD − 1
• The numerator of the t statistic measures the difference between the
sample mean and the hypothesized population mean.
X1       X2       D = X2 − X1    D²
83.80    95.20    11.40          129.96
83.30    94.30    11.00          121.00
86.00    91.50    5.50           30.25
82.50    91.90    9.40           88.36
86.70    100.30   13.60          184.96
79.60    76.70    −2.90          8.41
76.90    76.80    −0.10          0.01
94.20    101.60   7.40           54.76
73.40    94.90    21.50          462.25
80.50    75.20    −5.30          28.09
Sum               71.50          1108.05    (n = 10)

(Remember, μD = 0 under H0)

MD = ΣDᵢ / n = 71.50 / 10 = 7.15

SS = ΣDᵢ² − (ΣDᵢ)² / n = 1108.05 − 71.50² / 10 = 596.83

s = √(SS / (n − 1)) = √(596.83 / 9) = √66.31 = 8.14

t(df) = (MD − μD) / (sD / √nD),  df = n − 1

t(9) = (7.15 − 0) / (8.14 / √10) = 7.15 / 2.57 = 2.78

t0.05 = 2.262

2.78 > 2.262; reject H0
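A quick check of the related-samples computation, using the difference scores from the table:

```python
import math
import statistics

# Difference scores D = X2 - X1 from the table above; H0: mu_D = 0.
D = [11.40, 11.00, 5.50, 9.40, 13.60, -2.90, -0.10, 7.40, 21.50, -5.30]
M_D = statistics.mean(D)                   # ~7.15
s_D = statistics.stdev(D)                  # ~8.14
t = M_D / (s_D / math.sqrt(len(D)))        # t(9) ~= 2.78
print(round(M_D, 2), round(s_D, 2), round(t, 2))
```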
Cohen’s d: d = (μ1 − μ0) / σ

• For z-tests: d̂ = (M − μ0) / σ

• For one-sample t-tests: d̂ = (M − μ0) / s

• For related-samples t-tests: d̂ = (MD − μD0) / sD
Given: μ = 1.0,  M = 1.463,  s = 0.341,  n = 10

Cohen’s d: d̂ = (M − μ) / s = (1.463 − 1) / 0.341 = 1.36
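The effect-size computation can likewise be verified from the raw moon-illusion ratios:

```python
import statistics

# Cohen's d for the one-sample moon-illusion test: d = (M - mu0) / s.
x = [1.73, 1.06, 2.03, 1.40, 0.95, 1.13, 1.41, 1.73, 1.63, 1.56]
M = statistics.mean(x)
s = statistics.stdev(x)
d = (M - 1.0) / s
print(round(d, 2))  # ~1.36, a very large effect
```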