SPSS MODULE 4
Descriptors/Topics
● Means
● T-test
● One-way ANOVA
● Non-parametric tests
● Normality tests
Correlation and regression
● Linear correlation and regression
● Multiple regression (linear)
MEANS
➢ The Means procedure is useful for summarizing the central tendency of continuous or ordinal variables in
your dataset and gaining insight into their typical values.
➢ In SPSS, means are most easily obtained through the Descriptives procedure, which calculates descriptive
statistics, primarily the mean, for one or more variables in your dataset.
➢ Here's how you can use it (a syntax sketch follows these steps):
1. Select Variables: Before running the Means procedure, select the variables for which you want to compute
the mean. These variables should typically be continuous or ordinal in nature.
2. Run Means Procedure: Go to "Analyze" > "Descriptive Statistics" > "Descriptives". This will open the
Descriptives dialog box.
3. Select Variables: In the Descriptives dialog box, move the variables you want to analyze from the left-hand
box to the right-hand box. You can do this by selecting variables and clicking the arrow buttons in the
middle or by dragging and dropping variables.
4. Options: You can customize the analysis using various options in the Descriptives dialog box. Since you
specifically want means, you don't need to make any additional changes. However, you can choose to
include other statistics if desired.
5. Run: After selecting variables and customizing options, click the "OK" button to run the Descriptives
procedure.
6. Output: SPSS will generate output tables containing descriptive statistics for the selected variables. These
tables include the mean, standard deviation, minimum, and maximum values, among others.
7. Interpretation: Interpret the output to understand the mean value of each variable. The mean represents the
average value of the variable across all cases in your dataset and provides insight into the central tendency
of the data.
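The same analysis can be run from a syntax window (File > New > Syntax). A minimal sketch, assuming two
hypothetical scale variables named age and income:

* Descriptive statistics (including the mean) for two hypothetical variables.
DESCRIPTIVES VARIABLES=age income
  /STATISTICS=MEAN STDDEV MIN MAX.

Every SPSS dialog also has a "Paste" button that writes the equivalent syntax for you, which is a convenient
way to document and rerun an analysis.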
T-TEST
➢ The T-Test procedure is commonly used in research to compare means between two groups and assess
whether any observed differences are statistically significant.
➢ In SPSS, the "T-Test" procedure is used to compare the means of two groups on a continuous outcome
variable.
➢ Here's how you can use it (a syntax sketch follows these steps):
1. Select Variables: Before running the T-Test procedure, make sure you have a categorical variable (the
grouping variable) that divides your data into two groups, and a continuous variable (the outcome variable)
that you want to compare between the groups.
2. Run T-Test Procedure: Go to "Analyze" > "Compare Means" > "Independent-Samples T Test". This will
open the Independent-Samples T Test dialog box.
3. Select Variables: In the Independent-Samples T Test dialog box, move your continuous outcome variable
to the "Test Variable(s)" box and your categorical grouping variable to the "Grouping Variable" box.
4. Options: You can customize the analysis using various options in the Independent-Samples T Test dialog
box, such as the confidence interval level for the mean difference and how missing values are handled. Note
that SPSS reports the t-test both with and without the equal-variances assumption, so you do not need to
choose between them in advance.
5. Run: After selecting variables and customizing options, click the "OK" button to run the Independent-
Samples T Test procedure.
6. Output: SPSS will generate output tables containing the results of the independent-samples t-test, including
means, standard deviations, t-values, degrees of freedom, and p-values. Additionally, you'll see Levene's
test for equality of variances, which helps you decide which row of t-test results to report.
7. Interpretation: Interpret the output to determine whether there is a statistically significant difference in the
means of the two groups. Focus on the p-value associated with the t-test. If the p-value is less than your
chosen significance level (usually 0.05), you can conclude that there is a significant difference between the
means of the two groups.
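The equivalent syntax, as a minimal sketch assuming a hypothetical outcome variable score and a grouping
variable group coded 1 and 2:

* Independent-samples t-test comparing score between groups 1 and 2.
T-TEST GROUPS=group(1 2)
  /VARIABLES=score
  /CRITERIA=CI(.95).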
ONE-WAY ANOVA
➢ The One-Way ANOVA procedure is commonly used in research to compare means between three or more
groups and assess whether any observed differences are statistically significant.
➢ In SPSS, the "One-Way ANOVA" procedure is used to compare the means of three or more groups on a
continuous outcome variable.
➢ Here's how you can use it (a syntax sketch follows these steps):
1. Select Variables: Before running the One-Way ANOVA procedure, ensure you have a categorical variable
(the factor variable) that divides your data into three or more groups, and a continuous variable (the outcome
variable) that you want to compare between the groups.
2. Run One-Way ANOVA Procedure: Go to "Analyze" > "Compare Means" > "One-Way ANOVA". This
will open the One-Way ANOVA dialog box.
3. Select Variables: In the One-Way ANOVA dialog box, move your continuous outcome variable to the
"Dependent List" box and your categorical factor variable to the "Factor" box.
4. Options: You can customize the analysis using various options in the One-Way ANOVA dialog box. For
example, you can specify post hoc tests to determine which group means differ significantly from each other
after finding a significant overall ANOVA result.
5. Post Hoc Tests: If you choose to perform post hoc tests, click on the "Post Hoc" button in the One-Way
ANOVA dialog box. This will allow you to select the specific post hoc test you want to conduct (e.g.,
Tukey's HSD, Bonferroni).
6. Run: After selecting variables and customizing options, click the "OK" button to run the One-Way ANOVA
procedure.
7. Output: SPSS will generate output tables containing the results of the One-Way ANOVA, including the
overall F-statistic, degrees of freedom, and p-value; recent versions can also report effect-size estimates
such as eta-squared. If you performed post hoc tests, the results of these tests will also be displayed.
8. Interpretation: Interpret the output to determine whether there is a statistically significant difference in the
means of the groups. Focus on the p-value associated with the ANOVA test. If the p-value is less than your
chosen significance level (usually 0.05), you can conclude that there is a significant difference between at
least two group means. If post hoc tests were conducted, interpret the results to identify which specific
group means differ significantly from each other.
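A minimal syntax sketch, assuming a hypothetical outcome variable score and a factor variable group, with
Tukey's HSD requested as the post hoc test:

* One-way ANOVA with descriptives and Tukey's HSD post hoc comparisons.
ONEWAY score BY group
  /STATISTICS=DESCRIPTIVES
  /POSTHOC=TUKEY ALPHA(0.05).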
NON-PARAMETRIC TESTS
In SPSS, various non-parametric tests are available to analyze data that do not meet the assumptions of
parametric tests, such as normality or homogeneity of variance. Depending on your research question and data
characteristics, you may need to choose the appropriate test and interpret the results accordingly. Always ensure
that the assumptions of each test are met before interpreting the results.
Here are some common non-parametric tests and how to perform them (a syntax sketch follows the list):
1. Chi-Square Test: Used to determine whether there is a significant association between two categorical
variables. Go to "Analyze" > "Descriptive Statistics" > "Crosstabs", then select the variables and click
"Statistics" to choose "Chi-square".
2. Mann-Whitney U Test: Compares the median values of a continuous variable between two independent
groups. Go to "Analyze" > "Nonparametric Tests" > "Legacy Dialogs" > "2 Independent Samples", then
select the test and grouping variables and click "Define Groups" to specify the two group codes.
3. Wilcoxon Signed-Rank Test: Compares the median values of a continuous variable between two related
groups (paired samples). Go to "Analyze" > "Nonparametric Tests" > "Legacy Dialogs" > "2 Related
Samples", then move the paired variables into the "Test Pairs" list.
4. Kruskal-Wallis Test: Compares the median values of a continuous variable across three or more
independent groups. Go to "Analyze" > "Nonparametric Tests" > "Legacy Dialogs" > "K Independent
Samples", then select the test and grouping variables and click "Define Range" to specify the range of
group codes.
5. Friedman Test: Compares the median values of a continuous variable across three or more related groups.
Go to "Analyze" > "Nonparametric Tests" > "Legacy Dialogs" > "K Related Samples", then move the
related variables into the "Test Variables" list.
6. Sign Test: Tests whether the median of a single group differs from a hypothesized value. Go to "Analyze" >
"Nonparametric Tests" > "One Sample", then select the variable and specify the hypothesized median
under "Settings".
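These tests can also be run through syntax. A minimal sketch, using hypothetical variables throughout: two
categorical variables gender and preference, a test variable score, a grouping variable group coded 1 to 3,
paired measures pre and post, and repeated measures time1 to time3:

* Chi-square test of association between two categorical variables.
CROSSTABS /TABLES=gender BY preference
  /STATISTICS=CHISQ.
* Mann-Whitney U test for two independent groups.
NPAR TESTS /M-W=score BY group(1,2).
* Wilcoxon signed-rank test for two related samples.
NPAR TESTS /WILCOXON=pre WITH post (PAIRED).
* Kruskal-Wallis test for three independent groups.
NPAR TESTS /K-W=score BY group(1,3).
* Friedman test for three related samples.
NPAR TESTS /FRIEDMAN=time1 time2 time3.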
NORMALITY TESTS
In SPSS, you can conduct normality tests to assess whether a continuous variable follows a normal distribution.
Here are some common normality tests and how to perform them (a syntax sketch appears at the end of this
section):
1. Shapiro-Wilk Test: This test assesses the null hypothesis that a sample comes from a normally distributed
population. To conduct the Shapiro-Wilk test:
• Go to "Analyze" > "Descriptive Statistics" > "Explore".
• Move the variable of interest to the "Dependent List" box.
• Click on "Plots" and select "Normality plots with tests".
• Click "Continue" and then "OK" to run the analysis.
• In the output, examine the "Tests of Normality" table, which includes the Shapiro-Wilk statistic and its
associated p-value.
2. Kolmogorov-Smirnov Test: This test compares the observed cumulative distribution function of the data
with the expected cumulative distribution function of a normal distribution. To conduct the Kolmogorov-
Smirnov test:
• Go to "Analyze" > "Nonparametric Tests" > "Legacy Dialogs" > "1-Sample K-S".
• Move the variable of interest to the "Test Variable List" box.
• Under "Test Distribution", check "Normal".
• Click "OK" to run the analysis.
• In the output, examine the Kolmogorov-Smirnov Z statistic and its associated p-value.
3. Anderson-Darling Test: This test is similar to the Kolmogorov-Smirnov test but places more weight on
deviations in the tails of the distribution. SPSS does not have a built-in function for the Anderson-Darling
test, but it is available in other statistical software and in programming languages such as R or Python.
Interpretation of normality tests involves assessing the p-value associated with each test statistic. A low p-value
(typically less than 0.05) indicates evidence against the null hypothesis of normality, suggesting that the data
may not follow a normal distribution. However, it's essential to consider sample size: with large samples,
normality tests can flag small deviations from normality that are not practically important.
Additionally, visual inspection of histograms or Q-Q plots can provide further insight into the distribution of the
data.
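For the Shapiro-Wilk and Kolmogorov-Smirnov tests, a minimal syntax sketch, assuming a hypothetical
variable score:

* Explore with normality plots; output includes the Shapiro-Wilk and K-S tests.
EXAMINE VARIABLES=score
  /PLOT NPPLOT.
* One-sample Kolmogorov-Smirnov test against a normal distribution.
NPAR TESTS /K-S(NORMAL)=score.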
LINEAR CORRELATION AND REGRESSION
In SPSS, you can conduct linear correlation and regression analyses to explore the relationship between two or
more continuous variables. Here's how you can perform these analyses; a short syntax sketch follows each set
of steps:
Linear Correlation:
1. Select Variables: Before running the correlation analysis, select the continuous variables you want to
examine for correlation.
2. Run Correlation Procedure: Go to "Analyze" > "Correlate" > "Bivariate". This will open the Bivariate
Correlations dialog box.
3. Select Variables: In the Bivariate Correlations dialog box, move your variables of interest from the left-hand
box to the right-hand box.
4. Options: You can customize the analysis using various options in the Bivariate Correlations dialog box. For
example, you can choose Pearson, Kendall's tau-b, or Spearman coefficients, select one-tailed or two-tailed
significance tests, and flag statistically significant correlations.
5. Run: After selecting variables and customizing options, click the "OK" button to run the correlation
analysis.
6. Output: SPSS will generate output tables containing correlation coefficients (Pearson's r) for each pair of
variables, along with significance levels and other relevant statistics.
7. Interpretation: Interpret the output to determine the strength and direction of the linear relationship between
pairs of variables. The correlation coefficient ranges from -1 to 1, with values closer to 1 indicating a strong
positive correlation, values closer to -1 indicating a strong negative correlation, and values close to 0
indicating little or no correlation.
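A minimal syntax sketch for the correlation analysis, assuming hypothetical variables height and weight:

* Pearson correlations with two-tailed significance tests.
CORRELATIONS /VARIABLES=height weight
  /PRINT=TWOTAIL NOSIG.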
Linear Regression:
1. Select Variables: Before running the regression analysis, select the predictor (independent) and outcome
(dependent) variables.
2. Run Regression Procedure: Go to "Analyze" > "Regression" > "Linear". This will open the Linear
Regression dialog box.
3. Select Variables: In the Linear Regression dialog box, move your predictor variable(s) to the
"Independent(s)" box and your outcome variable to the "Dependent" box.
4. Options: You can customize the analysis using various options in the Linear Regression dialog box. For
example, you can request confidence intervals for the coefficients under "Statistics", or save standardized
residuals and predicted values under "Save".
5. Run: After selecting variables and customizing options, click the "OK" button to run the regression
analysis.
6. Output: SPSS will generate output tables containing regression coefficients, standard errors, significance
levels, R-squared, and other relevant statistics.
7. Interpretation: Interpret the output to understand the relationship between the predictor(s) and outcome
variable. Focus on the regression coefficients to determine the direction and strength of the relationship.
Additionally, examine the significance levels to assess the statistical significance of the coefficients.
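A minimal syntax sketch for a simple linear regression, assuming a hypothetical predictor hours and outcome
score:

* Simple linear regression of score on hours.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA
  /DEPENDENT score
  /METHOD=ENTER hours.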
MULTIPLE REGRESSION (LINEAR)
➢ Multiple regression analysis allows you to identify the most important predictors of the outcome variable
and assess the strength of the relationships while controlling for other variables in the model.
➢ Performing multiple regression analysis in SPSS allows you to examine the relationship between a single
continuous outcome variable and multiple predictor variables.
➢ Here's a step-by-step guide (a syntax sketch follows the steps):
1. Select Variables: Identify the continuous outcome variable (dependent variable) and the predictor variables
(independent variables) you want to include in the regression analysis.
2. Run Multiple Regression: Go to "Analyze" > "Regression" > "Linear". This will open the Linear
Regression dialog box.
3. Select Variables: In the Linear Regression dialog box, move the continuous outcome variable to the
"Dependent" box and the predictor variables to the "Independent(s)" box. You can do this by selecting
variables and clicking the arrow buttons.
4. Options: Customize the analysis using various options in the Linear Regression dialog box. For example:
• Categorical predictors must first be recoded into dummy (0/1) indicator variables, for example via
"Transform" > "Recode into Different Variables", before being added as independents.
• Interaction terms must likewise be created beforehand, for example by multiplying two predictors with
"Transform" > "Compute Variable".
• Save standardized residuals, predicted values, or other statistics by clicking "Save" and selecting the
appropriate options.
5. Run the Analysis: After selecting variables and customizing options, click the "OK" button to run the
multiple regression analysis.
6. Output: SPSS will generate output tables containing regression coefficients, standard errors, significance
levels, R-squared, and other relevant statistics. The output also includes the analysis of variance (ANOVA)
table for the model and, if requested, collinearity diagnostics.
7. Interpretation: Interpret the output to understand the relationship between the outcome variable and
predictor variables. Focus on the regression coefficients to determine the direction and magnitude of the
relationships. Additionally, examine the significance levels to assess the statistical significance of the
coefficients. The R-squared value indicates the proportion of variance in the outcome explained by the model.
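A minimal syntax sketch, assuming a hypothetical outcome variable outcome and predictors pred1, pred2, and
pred3:

* Multiple regression with collinearity diagnostics; saves residuals and predictions.
REGRESSION
  /STATISTICS COEFF OUTS R ANOVA COLLIN TOL
  /DEPENDENT outcome
  /METHOD=ENTER pred1 pred2 pred3
  /SAVE ZRESID PRED.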