
Appendix B: Writing APA Style Results

APA (American Psychological Association) style is often used to report statistics in the social
sciences. This appendix provides guidelines on how to write up many of the commonly used
statistical analyses in APA style, as well as examples. Note that most of the time, results are
reported in a typed report or presentation; therefore, these guidelines describe how you should
type up results. If you are writing by hand, please do your best to use the correct notation
(e.g., italics); spacing and punctuation should be easy to achieve by hand.

I. Writing About Statistical Tests in General


• Double space and use Times New Roman (12 pt). Write in a formal tone. Use
black ink.
• Capitalize abbreviations of statistical tests (e.g., ANOVA, GLM). Do not
capitalize written-out forms (e.g., t-test, chi-square analysis, general linear model).
• If your results were not significantly different, describe your results as "not
significant" -- do not say that they were "insignificant."
• Italicize statistical terms! For example, these terms should be italicized: M, SD,
F, t, MSE, p, N, χ2.
• If your p-value is less than .001, it will be displayed as .000 in some statistical
software packages (e.g., SPSS, PSPP). In such a case, do not report a p-value of 0;
instead report p < .001. Otherwise, simply report the exact p-value (e.g., p = .03).
• Make sure your conclusions match the inferential statistics! If the probability of
your sample is very small, this is a “significant” finding (i.e., your result “sticks out”). If
it is relatively large, describe your result as not significant.
• Please round all of your numbers to two decimal places (the nearest hundredth),
except for p-values, which should be rounded to three decimal places (the nearest thousandth).

II. Writing About Correlation/Regression


• For these bivariate analyses, make sure to include:
• description of the relationship
• if there is a statistically significant relationship, you can speculate about the
reasons why in the discussion section (but be careful to note that these are just
speculations – no causal statements!)
• Correlation example: This analysis attempted to examine whether the number of classes
students in Dr. Graham’s class attended was related to their score on the final exam.
Results of a correlation analysis indicated that final exam scores and number of classes
attended were strongly correlated, r(55) = .49, p < .001.
• r shows that you found Pearson’s correlation coefficient
• r(55) means that the df was 55
• .49 is the correlation (the rule of thumb is that .70+ is very strong, .50-.70 is
strong, .30-.50 is moderate, .10-.30 is weak, and anything between 0 and .10 is
negligible and probably not statistically different from 0)
• p < .001 means that the variables are significantly correlated (there is a
statistically significant linear relationship) because the resulting p-value was less
than alpha (.01); a Python sketch of this analysis follows these notes
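
If you happen to run the analysis in Python rather than SPSS/PSPP, a minimal sketch like the one
below (with made-up data and hypothetical variable names) shows where the r, df, and p in the
write-up come from; the df for Pearson's r is n - 2.

    from scipy import stats

    # Hypothetical data: classes attended and final exam scores for the same students
    classes_attended = [10, 12, 8, 15, 14, 9, 13, 11, 7, 15]
    final_exam = [72, 80, 65, 91, 88, 70, 84, 78, 60, 95]

    r, p = stats.pearsonr(classes_attended, final_exam)  # Pearson's r and two-tailed p
    df = len(classes_attended) - 2                       # df for a correlation is n - 2

    print(f"r({df}) = {r:.2f}, p = {p:.3f}")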

• Regression analyses are often best presented in a table (usually because this is the clearest
way to present multiple regression; in our class we have only covered linear regression
with one independent variable). APA doesn't say much about how to report regression
results in the text, but if you would like to report the regression in the text of your results
section, you should at least present the slope (b) along with the t-test and the
corresponding significance level.
• We learned about b1 - the unstandardized slope, where the data units might be
inches, percent correct, anxiety score, miles per hour, etc.

• Regression example: This analysis attempted to model the relationship between the
number of classes attended and final exam score. Results of a linear regression indicated
that number of classes attended significantly predicted performance on the final exam,
F(1, 53) = 4.51, p < .001, with an R2 = .18. Students’ final exam score is equal to 30.56 +
3.25(classes attended), indicating that final exam score increased by 3.25 for each class
attended.
• b = 3.25 is the unstandardized slope (the coefficient in the regression equation)
• the F-test examined whether this slope is significantly more extreme
than 0 (interpret the p-value in the same way as for correlation!)
• R2 is the measure of how much of the variability in the outcome (around its
mean) this regression model explains.
• The regression equation is provided, with an interpretation of the equation.
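
As a rough sketch (again with made-up data), the slope, intercept, R2, and the test of the slope
could be obtained in Python as shown below; with a single predictor, the F statistic is simply
the squared t statistic for the slope.

    from scipy import stats

    # Hypothetical data
    classes_attended = [10, 12, 8, 15, 14, 9, 13, 11, 7, 15]
    final_exam = [72, 80, 65, 91, 88, 70, 84, 78, 60, 95]

    result = stats.linregress(classes_attended, final_exam)

    b0, b1 = result.intercept, result.slope  # regression equation: y = b0 + b1 * x
    r_squared = result.rvalue ** 2           # proportion of variability explained
    df_error = len(final_exam) - 2           # error (residual) df is n - 2
    t = result.slope / result.stderr         # t-test of the slope against 0
    F = t ** 2                               # with one predictor, F = t squared

    print(f"y = {b0:.2f} + {b1:.2f}x, R2 = {r_squared:.2f}")
    print(f"F(1, {df_error}) = {F:.2f}, p = {result.pvalue:.3f}")
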
III. One Sample CI and T-test
Confidence Intervals
• Typically we don’t report confidence intervals by themselves. They are usually
included with a figure or appended to the end of the t-test (see the example in C. below).
For the purposes of our class, you may report it like either of the below examples:
From this random sample of Katy Perry fans, we have 95% confidence that the
population mean lies within the interval, 95% CI [0.24, 1.14].
Based on this random sample of Katy Perry fans, the 95% confidence interval is 0.24 to 1.14.
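
If you want to check a confidence interval yourself, here is a minimal Python sketch (with
hypothetical scores) that builds the 95% CI from the sample mean, the standard error, and the
t distribution.

    import numpy as np
    from scipy import stats

    # Hypothetical sample of scores
    scores = np.array([0.9, 0.4, 1.1, 0.7, 0.2, 1.3, 0.8, 0.5])

    mean = scores.mean()
    sem = stats.sem(scores)  # standard error of the mean
    df = len(scores) - 1
    lower, upper = stats.t.interval(0.95, df, loc=mean, scale=sem)

    print(f"95% CI [{lower:.2f}, {upper:.2f}]")
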
One Sample T-test
• For the one-sample t-test, make sure to include:
• Purpose of analysis
• Sample statistics
• Population parameter that the sample is being tested against
• Test statistics
• t
• df
• p (two-tailed)
• Example (significant): This analysis attempted to examine whether the average
number of hours students at Valley College sleep a night differs from the national
average of 6.5 hours. A one-sample t-test indicated that Valley College students
get significantly more sleep (M = 7.68, SD = 0.53) than the national average (μ =
6.50), t(92) = 3.28, p = .002.
• The M and SD stand for the sample mean and standard deviation
• The μ stands for the population mean from the null hypothesis
• t shows that you used the t-statistic
• t(92) means the t-statistic given that the degrees of freedom was 92 (from
this I can tell that there were 93 students in the data set)
• 3.28 is the sample t
• p = .002 means that the resulting two-tailed p-value was equal to .002.
• Example (not significant): This analysis attempted to examine whether the
average number of hours students at Valley College sleep a night differs from the
national average of 6.5 hours. A one-sample t-test indicated that Valley College
students do not differ significantly in sleep (M = 6.61, SD = 0.44) from the
national average (μ = 6.50), t(92) = 0.31, p = .521.
• Avoid saying “insignificant”; it is not the opposite of “significant” in statistics.
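
For reference, a minimal Python sketch of a one-sample t-test (with hypothetical sleep data,
tested against the value 6.5) looks like this:

    import numpy as np
    from scipy import stats

    # Hypothetical hours of sleep for a sample of students
    sleep_hours = np.array([7.1, 6.8, 7.5, 8.0, 7.2, 6.9, 7.6, 7.4])

    t, p = stats.ttest_1samp(sleep_hours, popmean=6.5)  # two-tailed by default
    df = len(sleep_hours) - 1

    print(f"M = {sleep_hours.mean():.2f}, SD = {sleep_hours.std(ddof=1):.2f}")
    print(f"t({df}) = {t:.2f}, p = {p:.3f}")
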
IV. T-tests with Two Samples
• For independent and paired samples t-tests, make sure to include:
• Purpose of analysis (independent samples t-test or paired samples t-test)
• Which test was used
• Sample statistics (for both samples)
• Test statistics
• t
• df
• p (two-tailed)

Independent Samples T-test


• Independent samples t-test example (significant): This analysis attempted to examine
whether students from Dr. Graham’s statistics course were more successful on the SAT
than students from Dr. T’s statistics course. An independent samples t-test indicated that
Dr. Graham’s students (M = 680, SD = 183) performed significantly better than Dr. T’s
students (M = 400, SD = 203), t(52.31) = 15.82, p < .001.
• When a p-value is very small, SPSS/PSPP will write .000 to represent that it is so
small that it cannot be expressed with three decimal places. In that case, write p < .001

• Independent samples t-test example (not significant) that includes CI: This analysis
attempted to examine whether students of Dr. Graham’s statistics course were taller than
students from Dr. T’s statistics course. An independent samples t-test indicated that Dr.
Graham’s students (M = 158.10 cm, SD = 5.00) were not significantly taller than Dr. T’s
students (M = 158.30 cm, SD = 4.80), t(48.41) = 0.33, p = .740, 95% CI [0.44, 1.23].
• It’s optional whether you want to include the confidence interval of the difference or
not. I include it here because more journals are starting to request confidence
intervals.
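
The fractional df in the example above (52.31) are typically what SPSS/PSPP report when equal
variances are not assumed (Welch's correction). A minimal Python sketch with placeholder scores:

    from scipy import stats

    # Hypothetical SAT scores for two independent groups of students
    graham_students = [700, 650, 720, 680, 690, 710, 640, 670]
    t_students = [400, 380, 420, 390, 410, 370, 430, 405]

    # equal_var=False requests Welch's t-test, which is why df can be fractional
    t, p = stats.ttest_ind(graham_students, t_students, equal_var=False)

    print(f"t = {t:.2f}, p = {p:.3f}")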

Paired Samples T-test


• Paired t-test example (significant): This analysis attempted to examine whether
students of Dr. Graham’s statistics course were more successful on the posttest than
on the pretest. A paired samples t-test indicated that students performed significantly
better on the posttest (M = 7.70, SD = 1.23) than on the pretest (M = 4.24, SD = 1.26),
t(19.22) = 4.11, p = .010.
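
A paired samples t-test compares two scores from the same people, so a Python sketch (with
hypothetical pretest and posttest scores) might look like this; note that the df for a paired
test is the number of pairs minus 1.

    from scipy import stats

    # Hypothetical pretest and posttest scores for the same students (paired)
    pretest = [4, 5, 3, 6, 4, 5, 3, 4]
    posttest = [8, 7, 6, 9, 8, 7, 7, 8]

    t, p = stats.ttest_rel(posttest, pretest)  # paired samples t-test
    df = len(pretest) - 1                      # df = number of pairs - 1

    print(f"t({df}) = {t:.2f}, p = {p:.3f}")
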
V. Writing About ANOVAs
• For one factor ANOVAs, make sure to include:
• Purpose of analysis
• Which test was used (independent samples or repeated measures ANOVA)
• Test statistic
• F
• df between-groups (#groups – 1)
• df within-group or df of the residual (the error df)
• p
• If there is a significant effect of the IV, then post hoc comparisons should also be included
• Sample statistics (for samples that are significantly different from one another)

• One-way ANOVA example (also called independent samples ANOVA) - F shows that
you conducted an ANOVA: This analysis attempted to examine whether students
of Dr. Graham’s statistics course outperformed students of Dr. X and Dr. Y on a test of
quantitative reasoning. A one-way ANOVA indicated that there was a significant
difference among the three classes, F(2, 19) = 12.52, p < .001. Bonferroni-corrected
pairwise comparisons revealed that Dr. Graham’s students (M = 14.59, SD = 1.33) scored
significantly better than Dr. X’s students (M = 2.12, SD = 1.70), p = .002. Dr. Graham’s
students also scored significantly better than Dr. Y’s students (M = 7.91, SD = 1.64),
p = .005. All other pairwise comparisons were not significant, ps > .05.
• F(2, 19) means the df between-groups was 2 and df residual (or within) was 19
• 12.52 is the F calculated from these samples
• p < .001 means that the resulting p-value was smaller than could be displayed
with three decimal places
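
A minimal Python sketch of a one-way ANOVA with hypothetical scores is below; df between-groups
is the number of groups minus 1, and df within-groups (the error df) is the total N minus the
number of groups. Post hoc pairwise comparisons (e.g., Bonferroni-corrected t-tests) would be a
separate step.

    from scipy import stats

    # Hypothetical quantitative reasoning scores for three independent classes
    graham = [14, 15, 13, 16, 14, 15, 14]
    dr_x = [2, 3, 1, 2, 4, 2, 1]
    dr_y = [8, 7, 9, 8, 6, 9, 8]

    F, p = stats.f_oneway(graham, dr_x, dr_y)

    k = 3                                    # number of groups
    N = len(graham) + len(dr_x) + len(dr_y)  # total sample size
    df_between = k - 1
    df_within = N - k

    print(f"F({df_between}, {df_within}) = {F:.2f}, p = {p:.3f}")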

• Repeated measures ANOVA example: This analysis attempted to examine whether
students of Dr. Graham’s statistics course scored better on their essay answers at the
beginning, middle, or end of the semester. A repeated measures ANOVA indicated that
there was a significant difference among the three time periods, F(2, 19) = 12.52, p < .001.
Bonferroni-corrected pairwise comparisons revealed that students were significantly better
at the end of the semester (M = 14.59, SD = 1.33) than at the beginning (M = 2.12, SD =
1.70, p = .004) and at the midpoint of the semester (M = 7.91, SD = 1.64, p = .036).
However, students were not significantly better at the middle than at the beginning of the
semester, p = .080.
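
If you want to run a repeated measures ANOVA outside SPSS/PSPP, one option is the AnovaRM class
in the Python statsmodels package; the sketch below uses hypothetical long-format data (one row
per student per time point).

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one essay score per student per time point
    data = pd.DataFrame({
        "student": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "time": ["begin", "middle", "end"] * 4,
        "score": [2, 7, 14, 3, 8, 15, 1, 9, 13, 4, 6, 16],
    })

    res = AnovaRM(data, depvar="score", subject="student", within=["time"]).fit()
    print(res)  # prints F, df, and p for the within-subjects factor (time)
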
VI. Writing About Chi-Square Tests
• For chi-square tests, make sure to include:
• Purpose of analysis
• Sample statistics (for all samples!) – usually shown in a table
• Which test was used (goodness-of-fit or test of homogeneity/independence)
• Test statistic
• χ2
• df
• n
• p

• Chi-square goodness-of-fit example: This analysis attempted to examine whether
students from PCC statistics courses equally preferred textbooks A, B, or C. A chi-square
goodness-of-fit test indicated that students’ actual preferences differ significantly from
the pattern predicted by equal preferences, χ2(2, N = 120) = 14.15, p = .001.
• χ2 shows that you conducted a chi-square test
• χ2(2, N = 120) means the df was 2 and the sample size (N) was 120
• 14.15 is the χ2 calculated from these samples
• p = .001 means that the resulting p-value was equal to .001.
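
A minimal Python sketch of the goodness-of-fit test with hypothetical counts is below; by default
the expected counts assume equal preference across categories, and df is the number of categories
minus 1.

    from scipy import stats

    # Hypothetical observed counts of students preferring textbooks A, B, and C
    observed = [60, 35, 25]  # N = 120

    chi2, p = stats.chisquare(observed)  # expected counts default to equal preference
    df = len(observed) - 1
    n = sum(observed)

    print(f"chi2({df}, N = {n}) = {chi2:.2f}, p = {p:.3f}")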

• Chi-square test of homogeneity example: This analysis attempted to examine whether
men prefer different textbooks (A, B, or C) than women. A chi-square test of
homogeneity indicated that the genders have significantly different preferences, χ2(2, N =
170) = 21.73, p < .001.
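
For the test of homogeneity/independence, the data form a contingency table; a Python sketch with
hypothetical counts is below (df is (rows - 1) x (columns - 1)).

    import numpy as np
    from scipy import stats

    # Hypothetical contingency table: rows = gender, columns = textbooks A, B, C
    table = np.array([
        [40, 25, 20],  # men
        [15, 30, 40],  # women
    ])

    chi2, p, df, expected = stats.chi2_contingency(table)
    n = table.sum()

    print(f"chi2({df}, N = {n}) = {chi2:.2f}, p = {p:.3f}")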
