Materials SB
CHAPTER 1
❖ Descriptive statistics and inferential statistics
❖ Critical thinking
+ Conclusions from small samples
+ Conclusions from non-random samples
+ Conclusions from rare events
+ Poor survey methods
+ Post-hoc fallacy
CHAPTER 2
❖ Level of measurement
+ Nominal measurement
+ Ordinal measurement
+ Interval measurement
+ Ratio measurement
❖ Sampling method
+ Simple random sample (random sampling method)
+ Systematic sample (random sampling method)
+ Stratified sample (random sampling method)
+ Cluster sample (random sampling method)
+ Judgment sample (non-random sampling method)
+ Convenience sample (non-random sampling method)
+ Focus group (non-random sampling method)
CHAPTER 3
❖ Stem and leaf
❖ Dot plot
❖ Sturges’ Rule: $k = 1 + 3.3\log_{10}(n)$
❖ Bin width
❖ Histogram and shape
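A minimal sketch (assuming NumPy and a made-up sample) of how Sturges’ Rule and the resulting bin width could be computed before drawing a histogram:

```python
# Sturges' Rule and bin width for a hypothetical sample (values are made up).
import math
import numpy as np

data = np.random.default_rng(1).normal(loc=50, scale=10, size=80)  # hypothetical data

n = len(data)
k = math.ceil(1 + 3.3 * math.log10(n))        # Sturges' Rule, rounded up to whole bins
bin_width = (data.max() - data.min()) / k     # bin width = (x_max - x_min) / k

counts, edges = np.histogram(data, bins=k)    # frequency table behind the histogram
print(f"n={n}, k={k} bins, bin width = {bin_width:.2f}")
print(counts)
```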
CHAPTER 4: Descriptive Statistics
❖ Sample mean: $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$
❖ Geometric mean: $G = \sqrt[n]{x_1 x_2 \cdots x_n}$
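A short sketch of the two means above on a made-up list of positive values:

```python
# Sample mean and geometric mean of a small hypothetical data set.
import math

x = [4.0, 8.0, 16.0, 2.0]                  # hypothetical data (positive, so G is defined)

n = len(x)
sample_mean = sum(x) / n                   # x̄ = (1/n) Σ x_i
geometric_mean = math.prod(x) ** (1 / n)   # G = (x1 * x2 * ... * xn)^(1/n)

print(sample_mean)       # 7.5
print(geometric_mean)    # (4*8*16*2)^(1/4) ≈ 5.657
```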
CHAPTER 5: Probability
❖ Odds for A: $\frac{P(A)}{1 - P(A)}$; Odds against A: $\frac{1 - P(A)}{P(A)}$
❖ General Law of Addition: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$
❖ Conditional probability: $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$
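A quick numeric check of the odds, addition law, and conditional probability formulas, using assumed probabilities (not from the text):

```python
# Assumed probabilities for illustration only.
p_a, p_b, p_a_and_b = 0.40, 0.50, 0.20

p_a_or_b = p_a + p_b - p_a_and_b      # P(A ∪ B) = 0.40 + 0.50 - 0.20 = 0.70
p_a_given_b = p_a_and_b / p_b         # P(A | B) = 0.20 / 0.50 = 0.40
odds_for_a = p_a / (1 - p_a)          # odds for A = 0.40 / 0.60 ≈ 0.667

print(p_a_or_b, p_a_given_b, round(odds_for_a, 3))
```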
❖ Poisson approximation for the binomial: $\lambda = n\pi$; $P(X = x) = \frac{\lambda^{x} e^{-\lambda}}{x!}$ (when $n \ge 20$ and $\pi \le 0.05$)
❖ Binomial approximation to the hypergeometric: when $\frac{n}{N} < 0.05$, use $n$ as the sample size and $\pi = \frac{s}{N}$
❖ Geometric: $P(X = x) = \pi(1 - \pi)^{x-1}$; $P(X \le x) = 1 - (1 - \pi)^{x}$
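A sketch comparing the exact binomial with its Poisson approximation, assuming SciPy is available and using hypothetical values n = 200 and π = 0.02 (so n ≥ 20 and π ≤ 0.05):

```python
# Exact binomial probabilities vs. the Poisson approximation with λ = nπ.
from scipy.stats import binom, poisson

n, pi = 200, 0.02
lam = n * pi                              # λ = nπ = 4

for x in range(6):
    exact = binom.pmf(x, n, pi)           # P(X = x) from the binomial
    approx = poisson.pmf(x, lam)          # P(X = x) = λ^x e^(-λ) / x!
    print(x, round(exact, 4), round(approx, 4))
```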
❖ Standard error of the sample mean: $\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$
❖ Confidence interval for μ, known σ: $\bar{x} \pm z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}$
❖ Confidence interval for μ, unknown σ: $\bar{x} \pm t_{\alpha/2}\,\frac{s}{\sqrt{n}}$ with $d.f. = n - 1$
❖ Standard error of the sample proportion: $\sigma_p = \sqrt{\frac{\pi(1 - \pi)}{n}}$
❖ Confidence interval for π: $p \pm z_{\alpha/2}\sqrt{\frac{p(1 - p)}{n}}$
❖ Sample size to estimate μ: $n = \left(\frac{z\sigma}{E}\right)^{2}$
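A sketch of the t-based confidence interval for μ and the sample-size formula, with SciPy supplying the critical values and made-up summary statistics:

```python
# Confidence interval for μ (unknown σ) and required sample size; inputs are assumed.
import math
from scipy.stats import t, norm

# CI for μ: x̄ ± t(α/2) * s/√n, with d.f. = n - 1
x_bar, s, n, alpha = 42.0, 8.0, 25, 0.05
t_crit = t.ppf(1 - alpha / 2, df=n - 1)
half_width = t_crit * s / math.sqrt(n)
print(f"{x_bar - half_width:.2f} to {x_bar + half_width:.2f}")

# Sample size to estimate μ: n = (zσ/E)^2, rounded up to the next whole unit
z = norm.ppf(1 - alpha / 2)                   # z(α/2) ≈ 1.96
sigma, E = 8.0, 2.0                           # assumed σ and desired margin of error
n_needed = math.ceil((z * sigma / E) ** 2)    # ≈ 62
print(n_needed)
```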
❖ Interpretation of the p-value:
+ P > 0.05: no evidence against H0
+ P < 0.05: moderate evidence against H0
+ P < 0.01: strong evidence against H0
+ P < 0.001: very strong evidence against H0
❖ Test statistic for a mean (known σ): $z_{calc} = \frac{\bar{x} - \mu_0}{\sigma_{\bar{x}}} = \frac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}$
❖ Test statistic for a mean (unknown σ): $t_{calc} = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}$
❖ Test statistic for a proportion: $z_{calc} = \frac{p - \pi_0}{\sigma_p} = \frac{p - \pi_0}{\sqrt{\pi_0(1 - \pi_0)/n}}$
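A sketch of the one-sample test statistics above, with hypothetical sample results and SciPy used only for the two-tailed p-values:

```python
# One-sample t test for a mean and z test for a proportion; inputs are made up.
import math
from scipy.stats import norm, t

# Test for a mean, unknown σ (two-tailed): t = (x̄ - μ0) / (s/√n)
x_bar, mu0, s, n = 103.0, 100.0, 9.0, 36
t_calc = (x_bar - mu0) / (s / math.sqrt(n))                 # = 2.0
p_value_t = 2 * (1 - t.cdf(abs(t_calc), df=n - 1))
print(round(t_calc, 3), round(p_value_t, 4))

# Test for a proportion (two-tailed): z = (p - π0) / sqrt(π0(1 - π0)/n)
x, n2, pi0 = 56, 80, 0.60
p_hat = x / n2
z_calc = (p_hat - pi0) / math.sqrt(pi0 * (1 - pi0) / n2)
p_value_z = 2 * (1 - norm.cdf(abs(z_calc)))
print(round(z_calc, 3), round(p_value_z, 4))
```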
❖ Test statistic for equality of two proportions: $z_{calc} = \frac{(p_1 - p_2) - (\pi_1 - \pi_2)}{\sqrt{\bar{p}(1 - \bar{p})\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$, where the pooled proportion is $\bar{p} = \frac{x_1 + x_2}{n_1 + n_2}$, $p_1 = \frac{x_1}{n_1}$, $p_2 = \frac{x_2}{n_2}$
❖ Confidence interval for the difference of two proportions, $\pi_1 - \pi_2$:
$(p_1 - p_2) \pm z_{\alpha/2}\sqrt{\frac{p_1(1 - p_1)}{n_1} + \frac{p_2(1 - p_2)}{n_2}}$
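A sketch of the two-proportion z test and the companion confidence interval, with hypothetical counts x1, x2 and sample sizes n1, n2:

```python
# Two-proportion z test (pooled SE) and 95% CI (unpooled SE); data are made up.
import math
from scipy.stats import norm

x1, n1, x2, n2 = 48, 120, 36, 120
p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion p̄

# z = (p1 - p2) / sqrt( p̄(1 - p̄)(1/n1 + 1/n2) ), testing π1 - π2 = 0
se_pooled = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
z_calc = (p1 - p2) / se_pooled
p_value = 2 * (1 - norm.cdf(abs(z_calc)))

# CI: (p1 - p2) ± z(α/2) * sqrt( p1(1-p1)/n1 + p2(1-p2)/n2 )
se_unpooled = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z_crit = norm.ppf(0.975)
ci = ((p1 - p2) - z_crit * se_unpooled, (p1 - p2) + z_crit * se_unpooled)
print(round(z_calc, 3), round(p_value, 4), tuple(round(c, 3) for c in ci))
```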
❖ Test statistic for two variances: $F_{calc} = \frac{s_1^2}{s_2^2}$, with $df_1 = n_1 - 1$, $df_2 = n_2 - 1$
❖ F-test critical values: $F_R = F_{df_1,\,df_2}$; $F_L = \frac{1}{F_{df_2,\,df_1}}$
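A sketch of the two-variance F test, with made-up sample variances and SciPy's F distribution supplying the right- and left-tail critical values:

```python
# F test for equality of two variances; sample variances and sizes are assumed.
from scipy.stats import f

s1_sq, n1 = 25.0, 16            # hypothetical variance and size, group 1
s2_sq, n2 = 10.0, 21            # hypothetical variance and size, group 2
df1, df2 = n1 - 1, n2 - 1

f_calc = s1_sq / s2_sq                        # F = s1² / s2² = 2.5
alpha = 0.05                                  # two-tailed test at α = 0.05
f_right = f.ppf(1 - alpha / 2, df1, df2)      # F_R = F(df1, df2)
f_left = 1 / f.ppf(1 - alpha / 2, df2, df1)   # F_L = 1 / F(df2, df1)

print(round(f_calc, 3), round(f_left, 3), round(f_right, 3))
```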
❖ Tukey’s test: a two-tailed test for equality of paired means from c groups, compared simultaneously
H0: $\mu_j = \mu_k$
H1: $\mu_j \ne \mu_k$
$T_{calc} = \frac{|\bar{y}_j - \bar{y}_k|}{\sqrt{MSE\left(\frac{1}{n_j} + \frac{1}{n_k}\right)}}$
Reject H0 if $T_{calc} > T_{c,\,n-c}$, where $T_{c,\,n-c}$ is the critical value (p. 449).
❖ Hartley’s Test:
H0: $\sigma_1^2 = \sigma_2^2 = \dots = \sigma_c^2$
H1: not all the variances are equal
$H_{calc} = \frac{s_{max}^2}{s_{min}^2}$; reject H0 if $H_{calc} > H_{critical} = H_{c,\,n/c-1}$ (p. 451).
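A sketch, on made-up group data, of how these comparisons could be run: SciPy's f_oneway for the one-factor ANOVA, statsmodels' pairwise_tukeyhsd (the studentized-range form of Tukey's comparison) as a stand-in for the textbook's T statistic, and Hartley's ratio computed directly (its critical value $H_{c,\,n/c-1}$ would still come from the table on p. 451):

```python
# One-factor ANOVA, Tukey pairwise comparisons, and Hartley's variance ratio.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

g1 = np.array([21.0, 24.0, 22.0, 25.0, 23.0])   # hypothetical group samples
g2 = np.array([28.0, 27.0, 30.0, 26.0, 29.0])
g3 = np.array([22.0, 25.0, 24.0, 23.0, 26.0])

# One-factor ANOVA: H0 says all group means are equal
f_stat, p_value = f_oneway(g1, g2, g3)
print(round(f_stat, 3), round(p_value, 4))

# Simultaneous pairwise comparisons of means (H0: μj = μk for each pair)
values = np.concatenate([g1, g2, g3])
labels = ["g1"] * 5 + ["g2"] * 5 + ["g3"] * 5
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Hartley's statistic: H = s²max / s²min, compared with the tabled critical value
variances = [g.var(ddof=1) for g in (g1, g2, g3)]
h_calc = max(variances) / min(variances)
print(round(h_calc, 3))
```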
CHAPTER 12: Simple Regression
Commonly Used Formulas in Simple Regression
❖ Slope estimate: $b_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}$
❖ t test for zero slope: $t_{calc} = \frac{b_1 - 0}{s_{b_1}}$ with $d.f. = n - 2$
❖ Confidence interval for the conditional mean of Y: $\hat{y}_i \pm t_{\alpha/2}\, s \sqrt{\frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum (x_i - \bar{x})^2}}$
❖ Prediction interval for Y: $\hat{y}_i \pm t_{\alpha/2}\, s \sqrt{1 + \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum (x_i - \bar{x})^2}}$
❖ Excel Output:
Regression Statistics:
+ R Square: $R^2 = \frac{SSR}{SST}$
+ Standard Error: $s_e = \sqrt{\frac{SSE}{n - k - 1}}$
ANOVA table:
+ Regression: $df = k$, $SS = SSR$, $MSR = \frac{SSR}{k}$, $F = \frac{MSR}{MSE}$
+ Residual: $df = n - k - 1$, $SS = SSE$, $MSE = \frac{SSE}{n - k - 1}$
+ Total: $df = n - 1$, $SS = SST$
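A sketch that reproduces the "Excel Output" quantities above for a small made-up (x, y) data set, using only NumPy:

```python
# Simple regression by hand: slope, intercept, R², standard error, F, and the
# t test for zero slope; the (x, y) values are invented for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])
n, k = len(y), 1                                   # k = 1 predictor

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

sse = np.sum((y - y_hat) ** 2)                     # residual sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)              # regression sum of squares
sst = np.sum((y - y.mean()) ** 2)                  # total sum of squares

r_square = ssr / sst                               # R² = SSR / SST
se = np.sqrt(sse / (n - k - 1))                    # standard error of the estimate
msr, mse = ssr / k, sse / (n - k - 1)
f_stat = msr / mse                                 # F = MSR / MSE

# t test for zero slope: t = (b1 - 0) / s_b1, with s_b1 = se / sqrt(Σ(x - x̄)²)
s_b1 = se / np.sqrt(np.sum((x - x.mean()) ** 2))
t_calc = (b1 - 0) / s_b1                           # d.f. = n - 2

print(round(b0, 3), round(b1, 3), round(r_square, 4), round(se, 4),
      round(f_stat, 2), round(t_calc, 2))
```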