ANOVA
Elementary Concepts provides a brief introduction to the basics of statistical significance testing. If we are only comparing two means, ANOVA will produce the same results as the t test for independent samples (if we are comparing two different groups of cases or observations) or the t test for dependent samples (if we are comparing two variables in one set of cases or observations). If you are not familiar with these tests, you may want to read Basic Statistics and Tables.
                        Group 1   Group 2
Observation 1              2         6
Observation 2              3         7
Observation 3              1         5
Mean                       2         6
Sums of Squares (SS)       2         2
Overall Mean                    4
Total Sums of Squares          28
The means for the two groups are quite different (2 and 6, respectively). The sums of squares within each group are equal to 2. Adding them together, we get 4. If we now repeat these computations ignoring group membership, that is, if we compute the total SS based on the overall mean, we get the number 28. In other words, computing the variance (sums of squares) based on the within-group variability yields a much smaller estimate of variance than computing it based on the total variability (the overall mean). The reason for this in the above example is of course that there is a large difference between means, and it is this difference that accounts for the difference in the SS. In fact, if we were to perform an ANOVA on the above data, we would get the following result:
MAIN EFFECT
            SS     df     MS      F       p
  Effect   24.0     1    24.0    24.0    .008
  Error     4.0     4     1.0
As can be seen in the above table, the total SS (28) was partitioned into the SS due to within-group variability (2+2=4) and variability due to differences between means (28-(2+2)=24).
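The partition above can be checked with a few lines of arithmetic. The following sketch (not part of the original text) recomputes the within-group, total, and between-group sums of squares and the resulting F ratio for the two-group example:

```python
# Minimal sketch verifying the sums-of-squares partition for the example above.
group1 = [2, 3, 1]
group2 = [6, 7, 5]

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs, center):
    # Sum of squared deviations around a given center.
    return sum((x - center) ** 2 for x in xs)

# Within-group SS: deviations around each group's own mean.
ss_within = ss(group1, mean(group1)) + ss(group2, mean(group2))   # 2 + 2 = 4

# Total SS: deviations around the overall (grand) mean.
all_obs = group1 + group2
ss_total = ss(all_obs, mean(all_obs))                             # 28

# Between-group (effect) SS is the remainder of the partition.
ss_between = ss_total - ss_within                                 # 24

# F = (SS_effect / df_effect) / (SS_error / df_error)
df_between, df_within = 1, 4
f_ratio = (ss_between / df_between) / (ss_within / df_within)     # 24.0

print(ss_within, ss_total, ss_between, f_ratio)
```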
A factor based on repeated measurements of the same subjects (e.g., students tested at the beginning and at the end of the semester) is called a within-subjects (repeated measures) factor. The interpretation of main effects and interactions is not affected by whether a factor is between-groups or repeated measures, and both factors may obviously interact with each other (e.g., females improve over the semester while males deteriorate).
f(x) = \frac{1}{2^{\nu/2}\,\Gamma(\nu/2)} \, x^{\nu/2 - 1} \, e^{-x/2}

where
ν is the degrees of freedom
e is the base of the natural logarithm, sometimes called Euler's e (2.71...)
Γ (gamma) is the Gamma function
The above animation shows the shape of the Chi-square distribution as the degrees of freedom increase (1, 2, 5, 10, 25 and 50).
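As a quick illustration, the density formula above can be evaluated directly; the sketch below (not from the original text) uses only the Python standard library and prints the density at one point for each of the degrees of freedom shown in the animation:

```python
# Minimal sketch implementing the Chi-square density formula above.
import math

def chi2_pdf(x, df):
    """Chi-square density f(x) for df degrees of freedom (x > 0)."""
    return x ** (df / 2 - 1) * math.exp(-x / 2) / (2 ** (df / 2) * math.gamma(df / 2))

# The peak moves to the right and the curve flattens as df increases.
for df in (1, 2, 5, 10, 25, 50):
    print(df, round(chi2_pdf(5.0, df), 4))
```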
t-Test (for Independent and Dependent Samples). The t-test is the most commonly used method to evaluate the differences in means between two groups. The groups can be independent (e.g., blood pressure of patients who were given a drug vs. a control group who received a placebo) or dependent (e.g., blood pressure of patients "before" vs. "after" they received a drug, see below). Theoretically, the t-test can be used even if the sample sizes are very small (e.g., as small as 10; some researchers claim that even smaller n's are possible), as long as the variables are approximately normally distributed and the variation of scores in the two groups is not reliably different (see also Elementary Concepts).
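A hedged sketch of an independent-samples t-test follows, using scipy.stats; the blood-pressure values are invented purely for illustration and are not data from the text:

```python
# Independent-samples t-test: drug group vs. placebo (control) group.
from scipy import stats

drug    = [118, 122, 120, 115, 121, 119]   # patients given the drug
placebo = [128, 125, 130, 127, 126, 129]   # control group given a placebo

t_stat, p_value = stats.ttest_ind(drug, placebo)   # assumes equal variances by default
print(t_stat, p_value)
```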
Dependent samples test. The t-test for dependent samples can be used to analyze designs in which the within-group variation (normally contributing to the error of the measurement) can be easily identified and excluded from the analysis. Specifically, if the two groups of measurements (that are to be compared) are based on the same sample of observation units (e.g., subjects) that were tested twice (e.g., before and after a treatment), then a considerable part of the within-group variation in both groups of scores can be attributed to the initial individual differences between the observations and thus accounted for (i.e., subtracted from the error). This, in turn, increases the sensitivity of the design.

One-sample test. In the so-called one-sample t-test, the observed mean (from a single sample) is compared to an expected (or reference) mean of the population (e.g., some theoretical mean), and the variation in the population is estimated based on the variation in the observed sample.
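The two designs just described can be illustrated with scipy.stats as well; the measurements and the reference mean below are illustrative assumptions, not data from the text:

```python
# Dependent-samples (paired) and one-sample t-tests.
from scipy import stats

before = [130, 142, 128, 150, 135]   # blood pressure before treatment
after  = [124, 137, 126, 143, 131]   # the same patients after treatment

# Paired test: within-subject differences remove initial individual differences.
t_paired, p_paired = stats.ttest_rel(before, after)

# One-sample test: compare the observed mean of `after` to a reference value.
t_one, p_one = stats.ttest_1samp(after, popmean=132)

print(t_paired, p_paired)
print(t_one, p_one)
```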