Chapter 5 Hypothesis Testing


CHAPTER 5

HYPOTHESIS TESTING

By: Dr. Yonas H. (Assistant Professor)

Oct 01/2021
Hypothesis (Definition)
• A statement that can be refuted by empirical
data
• An unproven proposition
• A possible solution to a problem
• Guess
Hypothesis testing
• Whether a conclusion is statistically valid depends on correctly
accepting or rejecting your hypotheses.
• In an experiment, you test the null hypothesis (H0). As a
researcher, you expect that the null hypothesis will be rejected.
• To accept the scientific hypothesis, you must collect evidence to
reject the null hypothesis. There are two types of hypotheses:
– Scientific hypothesis (or alternative hypothesis): represents the
relationship among the variables examined. For example: attitudes are
higher when mood is positive compared to negative.
– Null hypothesis: a statement of no relationship among variables. For
example: attitudes are not higher when mood is positive compared to
negative.
Types of hypothesis tests
• Tests: used either to accept or reject the null
hypothesis.
 Two-tailed test:
   Null hypothesis:        H0: X = 0
   vs.
   Alternative hypothesis: H1: X ≠ 0, i.e. X > 0 or X < 0

 One-tailed test:
   Null hypothesis:        H0: X ≥ 0
   vs.
   Alternative hypothesis: H1: X < 0 (the only alternative)
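• For illustration, a minimal Python sketch (with assumed sample data, not from
the slides) showing how the choice of alternative hypothesis changes the
p-value of the same test:

# Hypothetical sample; H0: mean = 0 (two-tailed) or H0: mean >= 0 (one-tailed)
import numpy as np
from scipy import stats

sample = np.array([-0.4, -1.2, 0.3, -0.8, -0.5, -1.1, 0.1, -0.9])

# Two-tailed test: H1: mean != 0
t_two, p_two = stats.ttest_1samp(sample, popmean=0, alternative='two-sided')

# One-tailed test: H1: mean < 0
t_one, p_one = stats.ttest_1samp(sample, popmean=0, alternative='less')

print(f"two-tailed: t = {t_two:.2f}, p = {p_two:.4f}")
print(f"one-tailed: t = {t_one:.2f}, p = {p_one:.4f}")  # half of p_two here, since t < 0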
Example: Hypothesis Testing
• Example: Consider the following research question, develop a
hypothesis, conduct a correlation analysis, and test the significance
of the correlation result (use the original data from the SPSS/Excel
file): Is there any relationship between minutes taken in the first
week (Week1) and the second week (Week2) for the 20 students?
 Answer:
• Null Hypothesis (H0): There is no relationship between minutes taken
at Week 1 and Week 2.
• Alternative Hypothesis (H1): There is a relationship between minutes
taken at Week 1 and Week 2.
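• A sketch of this analysis in Python (the file name and the column names
Week1/Week2 are assumptions; substitute the actual SPSS/Excel export):

# Pearson correlation between Week1 and Week2 minutes, plus its p-value
import pandas as pd
from scipy import stats

data = pd.read_excel("week_minutes.xlsx")        # 20 students, minutes per week
r, p_value = stats.pearsonr(data["Week1"], data["Week2"])

print(f"Pearson r = {r:.3f}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: significant relationship between Week1 and Week2.")
else:
    print("Fail to reject H0: no significant relationship detected.")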
Significance Value(p-value)
• Significance: whether the result is significant depends on the level of
significance (the degree of tolerance we select), i.e.
   1% degree of tolerance  = 99% CI (confidence interval)
   5% degree of tolerance  = 95% CI (confidence interval)
   10% degree of tolerance = 90% CI (confidence interval)
• Note: the Sig. value (p-value) is a measure of the level of significance.
 It is the probability of obtaining a result this extreme if the null
hypothesis were true. Or
 It is the probability that the observed result is due to chance.
 o If the p-value < 0.01, the result is significant at the 1% level.
 o If the p-value < 0.05, the result is significant at the 5% level.
 But, customarily, as a rule of thumb, statisticians use the 5% level of
significance [i.e. a 95% confidence interval].
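• As a small illustration, this rule of thumb can be written as a check of the
p-value against the chosen degree of tolerance (alpha):

# Compare a p-value against the selected level of significance
def significance_label(p_value, alpha=0.05):
    """Return a verdict at the given level of significance (default 5%)."""
    if p_value < alpha:
        return f"significant at the {alpha:.0%} level (reject H0)"
    return f"not significant at the {alpha:.0%} level (fail to reject H0)"

print(significance_label(0.012))          # significant at the 5% level
print(significance_label(0.012, 0.01))    # not significant at the 1% level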
Main analysis
Conduct Main Analyses
• When there’s 1 IV with only 2 conditions, a T-Test
should be used (see below).
• In a T-Test, the larger the T-value, the greater the
chance that the group differences are real and not
due to chance.
• T-test = group mean difference / average within-
group variability.
• The T-value expresses how much the between
group mean difference is greater than the average
within group variability.
T-test
• An independent-measures T-Test is a single-factor,
between-participants design.
• Example: a dieting study where participants are exposed
to normal (condition 1) or thin (condition 2) models.
• The purpose of the study is to compare the extent to which
self-esteem differs when consumers are exposed to
normal vs. thin models.
• After performing the t-test: if the p-value (sig. value)
< 0.05, reject H0; if > 0.05, do not reject H0.
T-test formula
   t = (M1 − M2) / √(s1²/n1 + s2²/n2)
• Where: M1 and M2 are the group means, s1² and s2² are the group
variances, and n1 and n2 are the group sample sizes.
T-test
• Condition 1: mean = 3, s² = 0.71; condition 2:
mean = 1, s² = 0.86. The sample size in both conditions
is 15 participants.
• Given that the t-value is 6.18, a t-table will give
the corresponding p-value.
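• The same t-value can be recomputed from these summary statistics (a sketch
using scipy, which expects standard deviations rather than variances):

# Independent-samples t-test from summary statistics on this slide
import math
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=3, std1=math.sqrt(0.71), nobs1=15,   # condition 1
    mean2=1, std2=math.sqrt(0.86), nobs2=15,   # condition 2
)
print(f"t = {t:.2f}, p = {p:.4f}")   # t is approximately 6.18, as stated above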
Analysis of variance

ANOVA
One-way independent measures ANOVA
(explained by its F-test)
• A One-way independent measures ANOVA: is an ANOVA,
between-participants design, with 1 IV of more than 2
conditions.
•  When comparing more than 2 groups, an ANOVA is done.
• When t-testing, for example, condition 1 vs. 2, 2 vs. 3, and 1 vs.
3, there will be 3 t-tests in total, each with a 0.05 level of
significance.
• This implies that the chance of not getting a Type 1 error is only
0.95³ ≈ 0.857. With more IV conditions (and thus more pairwise tests),
the chance of a Type 1 error grows even larger.
• An ANOVA compares all means in a single test, making it more
favourable, with a 0.05 chance of a Type 1 error.
ANOVA
Example between-participant: IV (strong vs. weak vs. no
arguments), DV (attitude 7-scale):
Participant   Strong     Participant   Weak     Participant   No
     1           6             2          7           3        3
     4           5             5          2           6        1
     7           7             8          4           9        2
    10           6            11          3          12        2
              M = 6                     M = 4              M = 2    Grand M = 4
Here, the scores 6, 5, 7, and 6 produce the within-group variability for the
‘strong’ condition; this should be small. The means 6, 4, and 2 produce the
between-group variability (the effect of the IV).
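• A sketch of this one-way ANOVA in Python, using the condition scores from
the table above:

# One-way independent-measures ANOVA across the three argument conditions
from scipy import stats

strong = [6, 5, 7, 6]   # M = 6
weak   = [7, 2, 4, 3]   # M = 4
no_arg = [3, 1, 2, 2]   # M = 2

f_value, p_value = stats.f_oneway(strong, weak, no_arg)
print(f"F = {f_value:.2f}, p = {p_value:.4f}")
# A small p-value (< 0.05) means at least one group mean differs.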
Simple regression
• Linear regression provides additional statistical information about the
relationship between two quantitative variables.
• The coefficient of determination, R², indicates the percentage
of variance in the dependent variable that is accounted for by variability
in the independent variable.
• The regression equation is the formula for the trend or fit line which
enables us to predict the dependent variable for any given value of
the independent variable
• The regression equation has two parts – the intercept and the slope
• The intercept is the point on the vertical axis where the regression
line crosses. It generally does not provide useful information.
Cont’d
• Simple Linear Regression is expressed in the
form of the linear equation Y = a + bX,
where X is the independent variable, Y is the dependent
variable, a is the Y-intercept, and b is the slope.
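• A minimal sketch of fitting Y = a + bX in Python (the x and y values below
are made up for illustration):

# Simple linear regression: slope, intercept, R-squared and slope p-value
from scipy import stats

x = [1, 2, 3, 4, 5, 6, 7, 8]                       # independent variable
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]       # dependent variable

result = stats.linregress(x, y)
print(f"intercept a = {result.intercept:.2f}")
print(f"slope b     = {result.slope:.2f}")
print(f"R-squared   = {result.rvalue**2:.3f}")     # variance in Y explained by X
print(f"p-value     = {result.pvalue:.4f}")        # significance of the slope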
Correlation
• Correlation: is a measure of the linear relationship
between two variables which may be used to measure
the degree of the association between the two variables.
 Types of correlation:
1. Pearson correlation (r) = for two variables measured on a
normal/scale level. r ranges between −1 and +1; if |r| is greater
than 0.6, it can be said that there is a strong correlation
between the two variables.
2. Spearman rho (rs) = for two variables measured on ordinal scales.
3. Correlation matrix = association among all the pairs of
three or more variables.
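• A sketch of these three tools in Python (the DataFrame and column names are
hypothetical):

# Pearson r, Spearman rho, and a correlation matrix over several variables
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "week1": [12, 15, 9, 20, 14, 11, 18, 16],
    "week2": [14, 16, 10, 22, 13, 12, 19, 17],
    "score": [55, 60, 40, 75, 58, 50, 70, 66],
})

r, p = stats.pearsonr(df["week1"], df["week2"])          # Pearson r (scale data)
rho, p_rho = stats.spearmanr(df["week1"], df["week2"])   # Spearman rho (ordinal data)
matrix = df.corr()                                       # correlation matrix for all pairs

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
print(matrix.round(2))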
Interpretation of Correlation
Coefficients
Chi-Square Test : defined
A chi-squared test is any statistical hypothesis
test wherein the sampling distribution of the
test statistic is a chi-squared distribution when
the null hypothesis is true.
 The purpose of the test is to evaluate how likely the
observed data would be if the null hypothesis were true.
 It is used for categorical (nominal) measures.
Chi-Square Test of Independence

• The Chi-Square test of independence is used to
determine if there is a significant relationship
between two nominal (categorical) variables.
• The frequency of each category for one nominal
variable is compared across the categories of the
second nominal variable.
 Typical questions answered with the Chi-Square Test
of Independence are as follows:
Marketing – Are women more likely than men to buy a product online?
Economy – Are white-collar workers more likely to quit their jobs than blue-
collar workers?
Chi-Square Test of Independence
Cont’d…
• For example, say a researcher wants to examine the
relationship between gender (male vs. female) and
empathy (high vs. low).
 The chi-square test of independence can be used to examine this
relationship.
 The null hypothesis for this test is that there is no relationship
between gender and empathy.
 The alternative hypothesis is that there is a relationship between
gender and empathy (e.g. there are more high-empathy females than
high-empathy males).
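• A sketch of this test in Python (the gender × empathy cell counts below are
made up for illustration):

# Chi-square test of independence on a 2x2 contingency table
import numpy as np
from scipy import stats

#                     high empathy   low empathy
observed = np.array([[40, 20],   # female
                     [25, 35]])  # male

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: gender and empathy are related.")
else:
    print("Fail to reject H0: no evidence of a relationship.")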
Chi-Square Test of Independence-
Cont’d…
• The Chi-Square Test of Independence is also
known as Pearson’s Chi-Square and has two
major applications: 1) the goodness-of-fit test (used
to test whether the data fit the
theoretical model) and 2) the test of
independence (explained in the previous
slides).
Cronbach’s Alpha = most common
measure of reliability.
 Cronbach's alpha is the most common measure of
internal consistency ("reliability").
 It is most commonly used when you have multiple
Likert questions in a survey/questionnaire that form
a scale and you wish to determine if the scale is
reliable.
 Cronbach’s alpha is not a statistical test – it is a
coefficient of reliability (or consistency).
 Note that a reliability coefficient (Cronbach’s Alpha)
of .70 or higher is considered “acceptable” in most
social science research situations.
To find Cronbach's alpha: Step one
To find Cronbach's alpha: Step two
To find Cronbach's alpha: Final step – once you get this
window, click continue
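• Since the SPSS screenshots are not reproduced here, a sketch computing
Cronbach's alpha directly from its formula (with hypothetical Likert item
scores):

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total)
import numpy as np

# rows = respondents, columns = Likert items on the same scale
items = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")   # .70 or higher is usually "acceptable"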
End
