One-way ANOVA

statistics.laerd.com/statistical-guides/one-way-anova-statistical-guide.php

What is this test for?


The one-way analysis of variance (ANOVA) is used to determine whether there are any statistically significant
differences between the means of three or more independent (unrelated) groups. This guide will provide a brief
introduction to the one-way ANOVA, including the assumptions of the test and when you should use it. If you are already familiar with the one-way ANOVA, you can skip this guide and go straight to our guide on how to run the test in SPSS Statistics.

What does this test do?


The one-way ANOVA compares the means between the groups you are interested in and determines whether
any of those means are statistically significantly different from each other. Specifically, it tests the null
hypothesis:

H0: µ1 = µ2 = µ3 = ... = µk

where µ = group mean and k = number of groups. If, however, the one-way ANOVA returns a statistically significant result, we reject the null hypothesis and accept the alternative hypothesis (HA), which is that at least two group means are statistically significantly different from each other.

At this point, it is important to realize that the one-way ANOVA is an omnibus test statistic and cannot tell you
which specific groups were statistically significantly different from each other, only that at least two groups were.
To determine which specific groups differed from each other, you need to use a post hoc test. Post hoc tests
are described later in this guide.

When might you need to use this test?


If you are dealing with individuals, you are likely to encounter this situation using two different types of study
design:

One study design is to recruit a group of individuals and then randomly split this group into three or more smaller
groups (i.e., each participant is allocated to one, and only one, group). You then get each group to undertake
different tasks (or put them under different conditions) and measure the outcome/response on the same
dependent variable. For example, a researcher wishes to know whether different pacing strategies affect the
time to complete a marathon. The researcher randomly assigns a group of volunteers to either a group that (a)
starts slow and then increases their speed, (b) starts fast and slows down or (c) runs at a steady pace
throughout. The time to complete the marathon is the outcome (dependent) variable.

A second study design is to recruit a group of individuals and then split them into groups based on some
independent variable. Again, each individual will be assigned to one group only. This independent variable is
sometimes called an attribute independent variable because you are splitting the group based on some attribute
that they possess (e.g., their level of education; every individual has a level of education, even if it is "none").
Each group is then measured on the same dependent variable having undergone the same task or condition (or
none at all). For example, a researcher is interested in determining whether there are differences in leg strength
between amateur, semi-professional and professional rugby players. The force/strength measured on an
isokinetic machine is the dependent variable.

Why not compare groups with multiple t-tests?


Every time you conduct a t-test there is a chance that you will make a Type I error. This error rate is usually set at 5%. By running two t-tests on the same data you will have increased your chance of "making a mistake" to almost 10%. The formula for determining the new error rate for multiple t-tests is not as simple as multiplying 5% by the number of tests: for n independent tests the familywise error rate is 1 - (1 - 0.05)^n. However, if you are only making a few comparisons, the results are very similar if you do multiply; as such, three t-tests would be roughly 15% (actually, 14.3%), and so on. These are unacceptable error rates. An ANOVA controls for these errors so that the Type I error rate remains at 5%, and you can be more confident that any statistically significant result you find is not simply a consequence of running lots of tests. See our guide on hypothesis testing for more information on Type I errors.
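
The familywise figures quoted above come straight from that formula. As a quick check, here is a minimal Python sketch (not part of the original guide) that evaluates it for a few numbers of tests:

    # Familywise Type I error rate for n independent tests, each at alpha = 0.05.
    alpha = 0.05
    for n in (1, 2, 3, 10):
        familywise = 1 - (1 - alpha) ** n
        print(f"{n} test(s): {familywise:.1%}")
    # Prints: 5.0%, 9.8%, 14.3%, 40.1%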

What assumptions does the test make?


There are three main assumptions, listed here (a short code sketch of how the first two might be checked follows the list):

1. The dependent variable is normally distributed in each group that is being compared in the one-way
ANOVA (technically, it is the residuals that need to be normally distributed, but the results will be the
same). So, for example, if we were comparing three groups (e.g., amateur, semi-professional and
professional rugby players) on their leg strength, their leg strength values (dependent variable) would
have to be normally distributed for the amateur group of players, normally distributed for the semi-
professionals and normally distributed for the professional players. You can test for normality in SPSS Statistics (see our guide on testing for normality).

2. There is homogeneity of variances. This means that the population variances in each group are equal. If
you use SPSS Statistics, Levene's Test for Homogeneity of Variances is included in the output when you
run a one-way ANOVA in SPSS Statistics (see our One-way ANOVA using SPSS Statistics guide).

3. Independence of observations. This is mostly a study design issue and, as such, you will need to
determine whether you believe it is possible that your observations are not independent based on your
study design (e.g., group work/families/etc).
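
The guide checks these assumptions in SPSS Statistics. Purely as an illustration, here is a minimal sketch of how the first two assumptions might be checked in Python with SciPy; the group values are hypothetical, not data from the guide:

    from scipy import stats

    # Hypothetical leg-strength scores for the three groups of rugby players.
    amateur = [155, 160, 158, 162, 149, 171, 166, 159]
    semi_pro = [172, 168, 175, 180, 169, 177, 174, 171]
    professional = [188, 179, 192, 185, 190, 183, 187, 191]

    # Assumption 1: normality within each group (Shapiro-Wilk test;
    # p > .05 gives no evidence against normality).
    for name, group in (("amateur", amateur), ("semi-pro", semi_pro),
                        ("professional", professional)):
        w, p = stats.shapiro(group)
        print(f"Shapiro-Wilk, {name}: W = {w:.3f}, p = {p:.3f}")

    # Assumption 2: homogeneity of variances (Levene's test).
    stat, p = stats.levene(amateur, semi_pro, professional)
    print(f"Levene's test: statistic = {stat:.3f}, p = {p:.3f}")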

What to do when the assumptions are not met is dealt with in the next section.


What happens if my data fail these assumptions?


Firstly, don't panic! The first two of these assumptions are easily dealt with, even if the last assumption is not. Let's go through the options in the same order as above (a brief code sketch of the alternative tests follows the list):

1. The one-way ANOVA is considered a robust test against the normality assumption. This means that it
tolerates violations to its normality assumption rather well. As regards the normality of group data, the
one-way ANOVA can tolerate data that is non-normal (skewed or kurtotic distributions) with only a small
effect on the Type I error rate. However, platykurtosis can have a profound effect when your group sizes
are small. This leaves you with two options: (1) transform your data using various algorithms so that the shape of your distributions becomes approximately normal, or (2) choose the nonparametric Kruskal-Wallis H test, which does not require the assumption of normality.

2. There are two tests that you can run that are applicable when the assumption of homogeneity of variances has been violated: (1) the Welch test or (2) the Brown-Forsythe test. Alternatively, you could run a Kruskal-Wallis H test. For most situations it has been shown that the Welch test is best. Both the Welch and Brown-Forsythe tests are available in SPSS Statistics (see our One-way ANOVA using SPSS Statistics guide).

3. A lack of independence of observations is often described as the most serious assumption to violate. Often, there is little you can do that offers a good solution to this problem.
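
If you are working in Python rather than SPSS Statistics, the Kruskal-Wallis H test mentioned in points 1 and 2 is available in SciPy; Welch's test is not in SciPy itself, although third-party packages such as pingouin offer a welch_anova function. A minimal sketch with made-up placeholder data:

    from scipy import stats

    # Hypothetical scores for three independent groups.
    group_a = [12.1, 14.3, 11.8, 13.5, 12.9, 15.0]
    group_b = [16.2, 15.8, 17.1, 14.9, 16.6, 15.4]
    group_c = [12.5, 13.0, 11.2, 12.8, 13.6, 12.2]

    # Kruskal-Wallis H test: nonparametric alternative to the one-way ANOVA,
    # usable when normality or homogeneity of variances is in doubt.
    h, p = stats.kruskal(group_a, group_b, group_c)
    print(f"Kruskal-Wallis: H = {h:.3f}, p = {p:.3f}")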

How do I run a one-way ANOVA?


There are numerous ways to run a one-way ANOVA. However, we provide a comprehensive, step-by-step guide
on how to do this using SPSS Statistics.
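
That guide covers SPSS Statistics. If you happen to work in Python instead, a one-way ANOVA can be run in a few lines with SciPy; the sketch below uses hypothetical marathon times (in minutes) for the three pacing-strategy groups described earlier, not real data:

    from scipy import stats

    # Hypothetical marathon completion times (minutes) for each pacing strategy.
    slow_start = [232, 241, 228, 250, 239, 244, 236, 247, 233, 240]
    fast_start = [255, 248, 262, 251, 259, 246, 258, 253, 249, 261]
    steady_pace = [225, 230, 219, 234, 227, 222, 231, 226, 229, 224]

    # One-way ANOVA: tests the null hypothesis that all group means are equal.
    f_value, p_value = stats.f_oneway(slow_start, fast_start, steady_pace)
    print(f"F = {f_value:.3f}, p = {p_value:.3f}")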

How do I report the results of a one-way ANOVA?


You will have calculated the following results or obtained them from SPSS Statistics:

Structure of results:

Source    SS          df     MS    F         Sig.
Between   SSb         k - 1  MSb   MSb/MSw   p value
Within    SSw         N - k  MSw
Total     SSb + SSw   N - 1

where k = number of groups and N = total number of participants.

An example:

Source    SS        df   MS       F       Sig.
Between   91.476    2    45.733   4.467   .021
Within    276.400   27   10.237
Total     367.867   29
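
To see how the columns relate: each mean square (MS) is the corresponding sum of squares divided by its degrees of freedom, F is MSb divided by MSw, and Sig. is the probability of an F value at least that large on (k - 1, N - k) degrees of freedom. The example row can be reproduced in Python (any small differences in the final decimal place are due to rounding in the published table):

    from scipy import stats

    ss_between, df_between = 91.476, 2
    ss_within, df_within = 276.400, 27

    ms_between = ss_between / df_between          # MSb = SSb / (k - 1)
    ms_within = ss_within / df_within             # MSw = SSw / (N - k)
    f_value = ms_between / ms_within              # F = MSb / MSw
    p_value = stats.f.sf(f_value, df_between, df_within)  # upper-tail probability

    print(f"MSb = {ms_between:.3f}, MSw = {ms_within:.3f}")
    print(f"F = {f_value:.3f}, p = {p_value:.3f}")   # roughly F = 4.47, p = .021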

You will want to report this as follows:

There was a statistically significant difference between groups as determined by one-way ANOVA (F(2,27) = 4.467, p = .021). This is all you need to write for the one-way ANOVA per se. However, in reality you will probably also want to report the means and standard deviations for your groups, as well as follow up a statistically significant result with a post hoc test. If you use SPSS Statistics, these descriptive statistics will be reported in the output along with the result from the one-way ANOVA. The general form of writing the result of a one-way ANOVA is as follows:

F(df between groups, df within groups) = F-value, p = p-value

where df = degrees of freedom.

You should not report the result as a "significant difference", but instead report it as a "statistically significant difference". This is because your decision as to whether the result is significant or not should not be based solely on your statistical test. Therefore, to indicate to readers that this "significance" is a statistical one, include this in your sentence.

Find out what else you have to do when you have a statistically significant ANOVA result or a non-statistically significant ANOVA result in the sections that follow.


My p-value is greater than 0.05, what do I do now?


Report the result of the one-way ANOVA (e.g., "There were no statistically significant differences between group means as determined by one-way ANOVA (F(2,27) = 1.397, p = .15)"). Not achieving a statistically significant result does not mean you should not report group means and standard deviations as well. However, running a post hoc test is usually not warranted and should not be carried out.

My p-value is less than 0.05, what do I do now?


Firstly, you need to report your results as highlighted in the "How do I report the results of a one-way ANOVA?" section above. You then need to follow up the one-way ANOVA by running a post hoc test.

Homogeneity of variances was violated. How do I continue?


You need to perform the same procedures as in the above three sections, but add into your results section that
this assumption was violated and you needed to run a Welch F test.

What are post hoc tests?


Recall from earlier that the ANOVA test tells you whether you have an overall difference between your groups, but it does not tell you which specific groups differed; post hoc tests do. Because post hoc tests are run to confirm where the differences occurred between groups, they should only be run when you have shown an overall statistically significant difference in group means (i.e., a statistically significant one-way ANOVA result). Post hoc tests attempt to control the experimentwise error rate (usually alpha = 0.05) in the same way that using a one-way ANOVA instead of multiple t-tests does. Post hoc tests are termed a posteriori tests; that is, they are performed after the event (the event in this case being a study).

Which post hoc test should I use?


There are a great number of different post hoc tests that you can use. However, you should only run one post hoc test; do not run multiple post hoc tests. For a one-way ANOVA, you will probably find that just two tests need to be considered. If your data met the assumption of homogeneity of variances, use Tukey's honestly significant difference (HSD) post hoc test (note that if you use SPSS Statistics, Tukey's HSD test is simply referred to as "Tukey" in the post hoc multiple comparisons dialogue box). If your data did not meet the homogeneity of variances assumption, you should consider running the Games-Howell post hoc test.
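
In SPSS Statistics both tests are chosen from the post hoc dialogue box. For reference, recent versions of SciPy (1.8 or later) also provide a Tukey HSD test; the Games-Howell test is not in SciPy, though third-party packages such as pingouin offer it. A minimal sketch with hypothetical data:

    from scipy import stats

    # Hypothetical leg-strength scores for the three rugby groups.
    amateur = [155, 160, 158, 162, 149, 171, 166, 159]
    semi_pro = [172, 168, 175, 180, 169, 177, 174, 171]
    professional = [188, 179, 192, 185, 190, 183, 187, 191]

    # Tukey's HSD: all pairwise comparisons while controlling the
    # familywise error rate; run only after a significant ANOVA.
    result = stats.tukey_hsd(amateur, semi_pro, professional)
    print(result)  # pairwise mean differences, p-values and confidence intervals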

How should I graphically present my results?


First off, it is not essential that you present your results in a graphical form. However, it can add a lot of clarity to
your results. There are a few key points to producing a good graph. Firstly, you need to present error bars for
each group mean. It is customary to use the standard deviation of each group, but standard error and confidence
limits are also used in the literature. You should also make sure that the scale is appropriate for what you are
measuring. Generally, if graphically presenting data from an ANOVA, we recommend using a bar chart with
standard deviation bars.
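
As one possible way of producing such a chart, here is a minimal matplotlib sketch; the means and standard deviations are made-up placeholder values, not results from the guide:

    import matplotlib.pyplot as plt

    groups = ["Amateur", "Semi-professional", "Professional"]
    means = [160.0, 173.3, 186.9]     # hypothetical group means
    sds = [6.6, 4.1, 4.4]             # hypothetical group standard deviations

    fig, ax = plt.subplots()
    ax.bar(groups, means, yerr=sds, capsize=5)   # error bars show 1 SD
    ax.set_ylabel("Leg strength")
    ax.set_title("Group means with standard deviation error bars")
    plt.show()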

What to do now?
Now that you understand the one-way ANOVA, you can go to our guide on how to run the test in SPSS Statistics.