A Review of The Application of Logistic Regression
Lian Niu
This is the author’s copy before final acceptance by the journal. For the published version of
this article, please refer to the publisher’s website at
https://doi.org/10.1080/00131911.2018.1483892
Abstract
This study reviews the international literature of empirical educational research to examine
the application of logistic regression. The aim is to examine common practices of the report
and interpretation of logistic regression results, and to discuss the implications for
educational research. A review of 130 studies suggests that: (a) the majority of studies report
statistical significance and sign of predictors but do not interpret relationship magnitude in
terms of probabilities; (b) odds ratio is the most commonly reported effect size, and it tends
to be interpreted as relative risk, which leads to exaggeration of
association magnitude and misleading conclusions; and (c) marginal effects and predicted
probabilities are reported by only 10.7% of reviewed studies, and the specification of
independent variables' values is frequently missing. It is suggested that marginal effects and
predicted probabilities be reported more frequently to fully utilize the information provided
by logistic regression models.
Keywords: logistic regression, odds ratio, relative risk, marginal effects, effect size exaggeration
Introduction
A dichotomous dependent variable violates the assumptions of homoscedasticity and normality of the error
term for linear regression model. Consequently, the estimates of the standard error will not be
consistent estimates of the true standard errors, and the coefficient estimates will no longer be
efficient. In addition, estimating a linear probability model with the ordinary least squares
technique will lead to predicted values that are outside the plausible range of the probability
(0,1). For these reasons, the logistic regression model is used when the dependent variable is
dichotomous. This model transforms probability to odds and then takes the logarithm of the
odds. By doing so, both the lower and the upper bound of the probability are removed. The logistic
regression model can be expressed as:

ln(p / (1 - p)) = α + βx                                                               (1-1)

where p represents the probability of the event occurring, 1 - p represents the probability of
the event not occurring, the ratio of the two represents the odds of the event, and the left-hand
side expression represents the log-odds, or the logit. On the right-hand side of the equation, α
represents the intercept, β represents the regression coefficient, and x represents the
independent variable (for more detailed background of logistic regression, see also Kleinbaum
et al. 1998; McCulloch and Searle 2001; Menard).
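As a numerical illustration of equation 1-1, the short sketch below (in Python; the probability value is hypothetical and not taken from any reviewed study) walks through the probability-odds-logit transformation and its inverse:

```python
import numpy as np

# Hypothetical probability of the event occurring.
p = 0.8

odds = p / (1 - p)            # odds of the event, roughly 4
logit = np.log(odds)          # log-odds (the logit), roughly 1.386, unbounded

# Inverting the transformation maps any value of alpha + beta*x back into (0, 1),
# which is why the logit removes the bounds of the probability scale.
p_back = 1 / (1 + np.exp(-logit))
print(odds, logit, p_back)    # approximately 4.0, 1.386, 0.8
```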
As can be seen from the right-hand side of equation 1-1, the specification of the
logistic regression model is very similar to that of the linear regression model in terms of
independent variables. Like linear regression, logistic regression can handle both continuous
and categorical independent variables. Although it is relatively straightforward to specify
and estimate logistic regression models, the interpretation of the results is more complicated
and less intuitive compared to linear regression. This is due to the fact that in the logistic
regression model, the relationship between the probabilities and the set of independent
variables is not linear; instead, it is the relationship between the logit and the set of
independent variables that is linear. As a result, the estimated
coefficients represent the change in the log of odds corresponding to a certain amount of
change in the independent variable. Acknowledging the difficulty of interpreting the log of
odds, the common practice is to exponentiate the estimates of the coefficients to obtain the
odds ratio. This step eliminates the complication of interpreting the log, and the odds ratio
becomes the standard effect size output by statistical software packages. Nevertheless, the concept and
proper interpretation of the odds ratio still leave many researchers confused. A browse of the
current educational research literature shows that different approaches to interpreting logistic
regression results have been adopted. For example, the odds ratio is interpreted as a relative
risk in some studies (Chiang et al. 2012; Sullivan and Cosden 2015) but not in others (Jaeger
and Eagan 2011). Some studies do not interpret the odds ratio at all but report and interpret
marginal effects and predicted probabilities instead (King 2015; Ko and Jun 2015). Which
approach is most commonly adopted in educational research? Is the most common approach also the most appropriate and informative one?
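To make the exponentiation step concrete, the sketch below (in Python with the statsmodels package; the data are simulated and the coefficient values are illustrative assumptions, not results from any reviewed study) fits a logistic regression and converts the estimated coefficients into odds ratios:

```python
import numpy as np
import statsmodels.api as sm

# Simulate a continuous predictor x and a binary outcome y.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
true_p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))   # assumed intercept -0.5, slope 0.8
y = rng.binomial(1, true_p)

X = sm.add_constant(x)                 # adds the intercept column
fit = sm.Logit(y, X).fit(disp=False)

# Coefficients are changes in the log-odds; exponentiating them gives the
# odds ratios that most statistical packages report as the default effect size.
print(fit.params)                      # estimated alpha and beta
print(np.exp(fit.params))              # corresponding odds ratios
```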
The aim of this study is to review current literature in educational research where
logistic regression is used to examine the application of logistic regression, focusing on the
reporting and interpretation of results. The intention is to provide
educational researchers and research literature readers with a better understanding of how
logistic regression results can be reported and accurately interpreted to better answer
educational research questions. Specifically, this study examines the following aspects of the
application of logistic regression: (a) the statistics being reported as logistic regression
results, and (b) the interpretation of statistical significance level, direction, and magnitude of
the association between predictor and event probability. The goals of this study are to reveal
common practices and issues in current applications, and to provide
suggestions of preferred approaches to enhance the quality of the application of this statistical
method.
Review Method
The first step of the study was to search for relevant literature, namely empirical
educational research studies in which logistic regression was used. To obtain a sample size
manageable for a review study, the search was limited to studies that were written in English,
published in peer-reviewed journals, and published from 2010 to 2016. The online databases
searched included Education Source by EBSCO, ERIC, PsycINFO, and Web of Science. The
following search terms were used to search anywhere in the document: logistic regression,
and education. The search was limited to studies published in peer-reviewed journals to include
research that had been evaluated through rigorous peer review processes. The search yielded
143 articles. These documents then were reviewed, and studies that did not include empirical
analysis using logistic regression model were excluded. This process resulted in a list of 130
empirical research articles. The articles were then coded in terms of the statistics reported and
the interpretation (or the lack thereof) of such statistics. The coding was based on contents
anywhere in the articles, including tables, figures, texts, and appendices. If the detected
statistically significant relationship between the predictor and the event occurrence was
interpreted in terms of probabilities, then an article was coded as having interpreted the
magnitude of association. If odds ratio was explicitly interpreted as relative risk, namely if
the definition of relative risk was used to explain the numeric value of reported odds ratio,
then an article was coded as having treated odds ratio as relative risk. Given that this is a
single-authored study, no additional coder was involved to check the coding of articles.
Instead, the author used Table B1 in Appendix B to double-check the coding and to present a
summary of the reviewed studies. Given the straightforward binary (yes/no) nature of all
categories of the coding, the author believes that the lack of a second coder should not impact
the accuracy of the review.
Because the inclusion criterion was the application of logistic regression, the included
studies demonstrated a wide range of research topics. The most common outcome variables
concerned students’ choice among two or more options, such as student enrollment pattern
(Campbell and Mislevy 2013), students’ intention of urban/rural placement (Jones, Bushnell,
and Humphreys 2014), college major choice (Ferguson et al. 2012; Pinxten et al. 2015),
participation in student organizations (Case 2011), pursuit of financial aid (Radey and
Cheatham 2013), and decision to pursue graduate education (d’Aguiar and Harrison 2016).
Another common type of dependent variable is student outcome such as degree attainment
(Flynn 2014), academic success (Fong, Melguizo, and Prather 2015; Stegers-Jager et al.
2012), and retention (Evans 2013; O'Neill et al. 2011; Pruett and Absher 2015; Santelices et
al. 2016). Students' health and social behaviors also were among the examined outcomes.
The manuscripts then were analyzed to determine the type of statistics being reported,
as well as the accuracy and extent of the interpretation of the significance, direction, and
magnitude of the association between predictors and event probability. The following section
reports the results of the analysis, summarizes the frequency and type of statistics being
reported as logistic regression results, determines the suitability of reported effect sizes and
the accuracy of their interpretation, and analyzes the scope of information revealed by the
reported effect sizes and the interpretation. A discussion section then follows to compare
approaches of interpreting logistic regression results, to identify common issues and their
implications, and to provide recommendations for educational researchers.
Results
This section examines the application of logistic regression in two aspects. The first
examines the statistics being reported as logistic regression results. The second examines how
significance level, direction, and magnitude of the association between predictor and event
probability are interpreted. Table 1 summarizes the statistics reported as logistic regression
results in the reviewed studies. The majority of studies (81.5%) reported the odds ratio (OR)
as effect size. The confidence interval of either the coefficient estimate or the odds ratio was
reported by less than half (43.1%) of the studies, while the p value of the significance test
was reported by most studies (86.9%). This suggests that p values are more commonly used
than confidence intervals to convey the statistical significance of predictors.
Given the fact that odds ratio, confidence interval, and p value are among the standard output
of major statistical packages such as STATA and SAS, it is unsurprising to see that these
statistics are frequently reported. In contrast, other effect size measures are
much less frequently reported. As can be seen from Table 1, marginal effects and predicted
probabilities each were reported by 5.4% of the reviewed studies. This might be because
researchers do not find it necessary to report and interpret effect sizes other than odds ratio. It
also is plausible that the additional steps such as code writing and specification of
independent variable values required to obtain these statistics affect their popularity.
Before examining how the reviewed studies interpreted these statistics, a brief
overview of the basics of logistic regression is necessary. The goal of using logistic
regression is to model the relationship between a set of independent
variables (predictors) and the likelihood of the event. As indicated in equation 1-1, the left-
hand side of logistic regression model is the log of odds, and coefficient βk is the change in
the log of odds for one-unit change in xk, holding all other independent variables constant. To
eliminate the complication of interpreting βk in terms of changes in log odds, the common
practice is to exponentiate both sides of the equation to obtain the odds ratio, namely the ratio
of the odds of an event for two individuals differing by one-unit on x. The odds ratio can be
expressed as follows:

Odds ratio = Exp(β) = [p(x+1) / (1 - p(x+1))] / [p(x) / (1 - p(x))]                    (2-1)
where p represents the probability of the event occurring, x represents independent variable,
and x+1 represents one-unit change in independent variable. It can be seen that although the
odds ratio is related to event probability and it measures a ratio between two individuals
differing by one-unit on the independent variable, it does not measure the ratio of
probabilities. Instead, it measures the ratio of odds. The ratio of probabilities, also called
relative risk or risk ratio, measures the ratio of the probability of an event occurring in one
group (exposed to certain treatment/condition) to the probability of the same event occurring
in another group (unexposed to such treatment/condition). The difference between the two
statistics can be seen when relative risk is expressed as follows:

Relative risk = p(x+1) / p(x)                                                          (2-2)

where p represents the probability of the event occurring, x represents independent variable,
and x+1 represents one-unit change in independent variable.
Comparing equations 2-1 and 2-2, the difference between the odds ratio and the
relative risk is obvious. This can be demonstrated by a simple example with hypothetical
event probabilities. Let the hypothetical probabilities of obtaining college degree be 0.6 and
0.8 for men (reference group) and women (comparison group), respectively. The relative risk
of women compared to men therefore is 0.8/0.6 = 1.33. The odds for women are 0.8/(1-0.8) =
4, and the odds for men are 0.6/(1-0.6) = 1.5. The odds ratio of women compared to men
equals (0.8/(1-0.8))/(0.6/(1-0.6)) = 4/1.5 = 2.66. It can be seen that for the hypothetical
probabilities, the odds ratio between the two genders is much larger than the gender relative
risk.
The interpretations of the odds ratio and the relative risk also are different. Relative
risk is easier and more intuitive to understand. Using the previous example, the probability of
women receiving a college degree is 1.33 times as large as the probability of men, or 33%
higher. We can say that women are 33% more likely to receive a college degree compared to
men. It is less easy to interpret the odds ratio. An odds ratio of 2.66 does not mean that
women are 2.66 times as likely as men, or 1.66 times more likely than men, to receive a
college degree. In other words, the ratio of odds does not translate into the ratio of probability
automatically. Therefore, it is important that the odds ratio is not mistaken for the ratio of
probabilities (namely relative risk). It also is important that the odds ratio not be interpreted
as relative risk in wordings such as “n percent more likely to” or “n times as likely to.” Doing
so would inflate the difference in probabilities between the comparison and reference groups.
If the odds ratio should not be interpreted as relative risk, how should it be
interpreted? Using the same example, we can say that the odds of women receiving a college
degree are 2.66 times as large as the odds for men. But what exactly does this statement
reveal? In fact, not much can be inferred directly from this statement except that women are
more likely than men to receive a college degree (2.66 > 1). It is, however, not clear how
much more likely women are to receive a college degree than men because the odds ratio
does not correspond to one probability ratio but rather to a series of probability ratios. Let the
probability of obtaining a college degree for men and women be 0.43 and 0.67, respectively.
It can be calculated that the odds ratio of women to men is still 2.66, but the relative risk is
1.56 instead of 1.33 as calculated before with the other pair of probabilities (0.6 and 0.8). The
odds ratio reveals the direction of the relationship but not the magnitude in terms of
probabilities. Knowing the value of the odds ratio does not indicate how much more or less
likely the event is to occur for two individuals differing by one-unit on the independent
variable.
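The arithmetic of the two hypothetical examples above can be reproduced with a few lines of Python (the probability pairs are the hypothetical values used in the text, which truncates 2.67 to 2.66):

```python
def odds(p):
    return p / (1 - p)

# Hypothetical (reference, comparison) degree-attainment probabilities:
# first the (0.6, 0.8) pair, then the (0.43, 0.67) pair.
pairs = [(0.6, 0.8), (0.43, 0.67)]

for p_ref, p_cmp in pairs:
    odds_ratio = odds(p_cmp) / odds(p_ref)
    relative_risk = p_cmp / p_ref
    print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}")

# Prints approximately:
#   OR = 2.67, RR = 1.33
#   OR = 2.69, RR = 1.56
# Nearly the same odds ratio corresponds to clearly different relative risks.
```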
As discussed above, p value of significance test was reported by most studies, and less
than half of the studies reported the confidence interval of the coefficient estimate or the odds
ratio. Both statistics were used to interpret whether an independent variable was statistically
significant in predicting the event likelihood. Also, the direction of the association could be
easily determined by examining the value of the effect size statistic. Table 2 presents the
frequency and percentage of studies that explicitly interpreted the significance of independent
variables and the direction of the associations.
It can be seen that the majority of the studies explicitly interpreted the significance of
predictors and the direction of the associations. Typically, these two interpretations are
combined. For example, Mendez and Mendez (2016, 11) made the following statement in
their study on college students' perceptions of faculty based on facial appearance: "As we …
approachability. First, younger faculty members were perceived to be more attractive than …".
Similarly, Pinxten et al. (2015, 1930) reported the association between students' course-related
characteristics and the outcome in terms of both significance and direction.
There are a few studies where either significance or direction was not sufficiently
interpreted. For example, in a study on a general practice (GP) specialty training
program, Aine et al. (2014, 197) reported the association between students' evaluation of
learning and overall satisfaction as follows: “In the GP group, good evaluation of diagnostic
skills learning during GP training was predictive of overall training satisfaction”. Although
this interpretation is correct, it is not complete and does not explain whether good evaluation
positively or negatively predicted overall training satisfaction. Similarly, in a study
on college students' sexual activities, Burke, Gabhainn, and Young (2014, 41) reported the
results regarding the predictor “student status,” stating: “Univariate regression models
identified student status to be a significant predictor of having four or more sexual partners in
their life”. This statement combined with the omission of predictor level specification in the
text or in the tables makes it unclear whether being a student positively or negatively
predicted certain sexual activities. These incomplete and rather vague interpretations decrease
the amount of information made available to the readers; impact the readability of the studies;
and make it difficult to understand, assess, and adopt the specific findings. Table 2 shows that
about 15% of the reviewed studies failed to provide interpretation of the association direction,
and about 9% did not discuss the significance of predictors in text. Considering that predictor
significance and direction of the association are the primary results of logistic regression and
should be discussed sufficiently in text, these percentages suggest that there is room for
improvement.
Once individual predictor significance and the direction of association are determined,
the crucial question to be answered is: how strong is the detected relationship? Using the
habit formed by interpreting linear regression results, one would want to know how much the
event probability changes for a certain amount of change in the predictor. However, unlike
linear regression, where coefficient estimate measures the change in the dependent variable
for certain change in the independent variable, logistic regression does not yield such a
measure. Logistic regression does not model the linear relationship between the predictors
and the event probability. As previously discussed, the odds ratio, which is a transformation
of the logistic regression coefficient estimate, is not a measure in terms of probability. This
does not mean, however, that the magnitude of the predictor cannot be properly interpreted to
help us better understand educational phenomena. The following sections provide a detailed
review of the included studies to examine if the magnitude of the significant predictors was
interpreted, what methods were used, and if the methods were appropriate.
Fifty-two studies, or 40% of all included studies, attempted to interpret the magnitude
of the association in terms of probabilities. This percentage is
much lower than those that included interpretation of predictor significance and direction of
association. The fact that 60% of the studies did not interpret the strength of the detected
relationship may have several explanations. Some researchers may
find it sufficient to reveal the direction of the relationship and unnecessary to quantify the
strength of the relationship. It also is plausible that some researchers may be uncertain about
how to interpret the magnitude in probabilities, and choose to only interpret the direction to
avoid errors. A typical example can be found in a study on the relationship between new
degree structure and medical students' career considerations (van den Broek et al. 2010), in
which the authors concluded:
In the third year of medical school, pre-BaMa students were significantly more likely
to consider a temporary stop than BaMa students. Apparently, those who cannot
receive a bachelor degree think more about the opportunity to interrupt the course …
Although this conclusion reveals that the new degree structure is related to lower preference
for a temporary stop during the program, it is unknown how large the difference in preference
is before and after the introduction of the new degree structure. Not only was the difference in
probabilities missing, even the meaning of odds ratio values was not interpreted.
Odds Ratio
As discussed previously, the odds ratio is easily available because it is part of the
standard output for logistic regression by major statistical software packages. Not
surprisingly, it is the most frequently reported effect size in the reviewed studies (e.g., Pruett
and Absher 2015; Radey and Cheatham 2013; Swecker, Fifolt, and Searby 2013; Wood
2013). This statistic frequently is interpreted in text similar to the following: "The odds that
[students with visual impairments received accommodations and] services at 2-year colleges
were more than 7 times those for students with learning disabilities
(OR = 7.1, p < .001)" (Newman and Madaus 2015, 213). This interpretation is not incorrect;
it explains that the odds of receiving accommodations for students with visual impairments
are seven times as large as the odds for students with learning disabilities. But what exactly
does that mean? This interpretation still does not eliminate the confusion caused by the
concept of odds, and therefore is not easy to comprehend. Explaining the odds ratio using this
approach adds little to readers' understanding of the finding.
Some studies show such confusion on the researcher's part. Besides its ready
availability, the odds ratio lends itself to misinterpretation
because it measures a ratio between the comparison and the reference group corresponding to
a certain amount of change in the predictor. It is rather intuitive to interpret this statistic as a
ratio of event probabilities. This is shown explicitly by a study on the relationship between
higher education and students’ entrepreneurial intentions, where it is concluded that “The
odds ratio of 1.573 indicates that senior students have a probability of stating an
entrepreneurial intention 1.5 times higher than the freshmen students” (Ertuna and Gurel
2011, 395). Clearly, odds ratio is incorrectly interpreted as ratio of probabilities. A review of
the included studies shows that 36 studies interpreted the value of odds ratio as relative risk
(ratio of probabilities). That is 69.2% of the 52 studies that attempted to interpret relationship
strength in terms of probabilities (see Table 2, row 3). For example, in a study on the
relationship between lecture quiz score and final exam performance, Wambuguh and Yonn-Brown (2013) concluded:
Students attaining combined quizzes scores of at least 70% are approximately 9 times
(OR value of 8.78 in Table 2) as likely to pass the final examination with the same
score or better compared to those who got scores below 70% (χ2 = 160.53, p < .001). (3)
In a study on the relationship between usage of student services and college students’
experiences of difficulty and distress, Julal (2013, 419, 421) explained the relationship
between experiencing personal difficulty and using student services as follows: “The odds
ratio of 34.34 showed that students who were faced with personal difficulty were more likely
to use one or more of the available support services. ... [They] were 34 times more likely to
use one or more support services”. Similarly, Baker-Eveleth, O’Neill, and Sisodiya (2014, 66)
described the association between students’ performance in predictor and subsequent courses
as follows: "as students perform worse in the [predictor] course [Business Law] they are …
Exp(B)=9.807".
Similar interpretations of the odds ratio are abundant (Beran et al. 2012; Lee 2014;
Lombardi et al. 2013; Luna and Fowler 2011; Lyons and Akroyd 2014; Munisamy, Jaafar,
and Nagaraj 2014). As discussed previously, the odds ratio tends to be larger than relative
risk. Interpreting odds ratio as relative risk could lead to an exaggeration of the predictor's
effect. To illustrate, Table 3 presents relative risks
calculated based on the odds ratio reported by Julal's (2013) study and hypothetical event
probabilities. Using equations 2-1 and 2-2, it can be demonstrated that relative risk can be
expressed in terms of odds ratio and event probability using the following formula:

Relative risk = Odds ratio / [1 - p(x) + p(x) × Odds ratio]                            (2-3)
where p(x) represents the event probability for the reference group, and p(x+1) represents the
event probability for the comparison group. Hypothetical probabilities of using student
services for students without personal difficulty (reference group) are shown in the second
column of Table 3. Using equation 2-1 and the odds ratio reported by Julal (2013) comparing
students with and without personal difficulty, hypothetical probabilities for students with
difficulty are calculated and shown in the first column of Table 3. Using equation 2-3 (or
equation 2-2), relative risks are calculated and shown in the third column of Table 3.
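A minimal sketch of this calculation (in Python; only the odds ratio of 34.34 comes from Julal 2013, and the reference-group probabilities are the hypothetical values used in Table 3) implements equation 2-3 and reproduces several rows of the table:

```python
def relative_risk(odds_ratio, p_ref):
    """Convert an odds ratio into a relative risk, given the event
    probability p_ref of the reference group (equation 2-3)."""
    return odds_ratio / (1 - p_ref + p_ref * odds_ratio)

odds_ratio = 34.34                       # reported by Julal (2013)

for p_ref in (0.05, 0.20, 0.40):         # hypothetical reference probabilities
    rr = relative_risk(odds_ratio, p_ref)
    print(f"p_ref={p_ref:.2f}  RR={rr:.2f}  overestimation={odds_ratio / rr:.2f}")

# Prints approximately (matching Table 3):
#   p_ref=0.05  RR=12.88  overestimation=2.67
#   p_ref=0.20  RR=4.48  overestimation=7.67
#   p_ref=0.40  RR=2.40  overestimation=14.34
```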
Comparison of the calculated relative risks and the reported odds ratio shows that the
latter overestimates the former extensively. If students without personal difficulty rarely use
student services as suggested by the hypothetical probability of 0.05, the odds ratio would
overestimate relative risk by a factor of 2.67. With the increase of the event occurrence for
the comparison group, the overestimation becomes stronger. For example, if 40% of students
without difficulty use student services, interpreting the odds ratio as relative risk would
exaggerate the relationship magnitude by a factor of 14.34. The exact extent of the
exaggeration is unknown because the exact event probabilities are unavailable, but it is
obvious that the study’s (Julal 2013) conclusion is not valid. The calculated relative risks are
still large and indicate strong predictive relationship between personal difficulty and usage of
student services, but they are not nearly as dramatic as a factor of 34 as suggested by Julal’s
(2013) conclusion.
Table 4 presents further examples of exaggerated relationship magnitude caused by
interpreting odds ratio as relative risk. Hypothetical probabilities are approximated based on
descriptive statistics reported in the studies. For example, the overall completion rate of the
Free Application for Federal Student Aid (FAFSA) among single mother students is 87%
(Radey and Cheatham 2013). Based on this percentage, when the probability of completing
FAFSA among single mother students at public universities (reference group) is set at 0.8, the
odds ratio would overestimate relative risk by a factor of 8. Single mother students at for-
profit institutions (comparison group) would be estimated to be 22% more likely (or 122% as
likely) than their counterparts at public universities to complete FAFSA instead of nearly 10
times more likely to do so as claimed by Radey and Cheatham (2013). Looking at the
claimed difference of nearly 10 times, readers would conclude that students at for-profit
institutions are dramatically more inclined to apply for financial aid, while the actual
difference could be much more modest.
Similarly, college students with a history of high risk factors are estimated to be about 41%
more likely to engage in heavy drinking and about 93% more likely to experience blackouts
compared to students with a history of low risk factors, not 6.04 times and 2.51 times more
likely to do so, respectively; and high risk factor college students are estimated to be about
15% more likely to drink heavily compared to moderate risk factor students, not 2.85 times
more likely to do so as suggested by Sullivan and Cosden’s (2015) conclusion. It can be seen
that interpreting odds ratio as relative risk has led to inaccurate understanding of educational
phenomena in some of the reviewed studies.
Marginal Effects and Predicted Probabilities
Because the odds ratio does not reveal relationship strength in terms of probability,
alternative measures are needed to quantitatively interpret logistic regression results. Agresti
(2013) acknowledged the fact that it may be difficult to understand odds or odds ratio effects,
and suggested the use of instantaneous rate of change in the probability or estimated
probabilities at representative values of the independent
variables to address such difficulty. The current review shows that marginal effects and
predicted probabilities are rarely reported as logistic
regression results. Marginal effects are partial derivatives or instantaneous rates of change for
continuous independent variables and discrete changes for categorical independent variables.
They approximate the change in predicted probability corresponding to a certain amount of
change in the independent variable of interest, holding all other independent variables
constant. The interpretation of marginal effects for categorical and continuous variables is
different. For categorical variables, marginal effects measure the change in predicted
probability as the independent variable of interest changes from 0 to 1 while holding all other
independent variables constant. For continuous variables, marginal effects provide good
approximation to the amount of change in predicted probability for a one-unit change in the
independent variable while holding all other independent variables constant. The difference
lies in the fact that marginal effects for continuous independent variables are approximations
because the relationship between the independent variable and predicted probability is not linear
(Williams 2016). Predicted probabilities are probabilities calculated using logistic regression
estimates at specified values of the independent variables.
Both marginal effects and predicted probabilities are dependent on the values of
the independent variables, which can be selected based on substantive research interest
or following common practices. For example, to avoid the impact of outliers, the upper and
lower quartiles could be used instead of the smallest and largest values of the independent
variable when calculating estimated probabilities (Agresti 2013). It is important that the
selected values of independent variables be clearly reported. It also is
important that the implications of the selected values of independent variables be reflected in
the interpretation, a point also emphasized by
Agresti (2013). Specifically, marginal effects should not be interpreted as constant effects and
should always be interpreted in relation to the selected values of independent variables that
are held constant. The same applies to the calculated values of predicted probabilities.
Table 1 shows that seven studies claimed to report marginal effects to interpret the
magnitude of relationship strength. Four of these studies used this statistic correctly. Marginal
effects at the means were reported by three studies (Fernández-Macías et al. 2013; Jaeger and
Eagan 2011; Pursel et al. 2016), where the independent variables other than the one of
interest were held constant at their means. Average marginal effect was reported by one study
(Bielby et al. 2014), where the marginal effect of the independent variable of interest is
calculated for each case holding other independent variables at their respective values, and the
resulting marginal effects are then averaged across all cases.
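For readers who want to see how these quantities can be obtained, the sketch below (in Python with statsmodels and simulated data; the variable names and values are illustrative assumptions, and the reviewed studies generally relied on Stata's margins command rather than this code) computes marginal effects at the means, average marginal effects, and predicted probabilities at analyst-chosen values:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: one continuous predictor (x1) and one binary predictor (x2).
rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = rng.binomial(1, 0.5, size=n)
true_p = 1 / (1 + np.exp(-(-1.0 + 0.6 * x1 + 0.9 * x2)))
y = rng.binomial(1, true_p)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.Logit(y, X).fit(disp=False)

# Marginal effects at the means: other predictors held at their sample means.
mem = fit.get_margeff(at="mean")
# Average marginal effects: computed for every observation at its own values,
# then averaged across the sample.
ame = fit.get_margeff(at="overall")
print(mem.summary())
print(ame.summary())
# (Passing dummy=True to get_margeff reports discrete 0-to-1 changes for
# binary predictors instead of derivatives.)

# Predicted probabilities at analyst-chosen values: x1 held at its mean,
# x2 switched from 0 to 1 (columns: constant, x1, x2).
scenarios = np.array([[1.0, x1.mean(), 0.0],
                      [1.0, x1.mean(), 1.0]])
print(fit.predict(scenarios))
```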
Two of the seven studies did not report complete information in relation to marginal
effects. Ko and Jun (2015) used a figure to report marginal effects of the independent variable
“chance to benefit society” on college students’ preference for public sector jobs. The values
of the calculated marginal effects were not reported in the figure or in text, and the
specification of independent variables’ values was not reported. As a result, the readers still
were not provided with quantitative measures of the relationship strength. It also is difficult
to adopt the finding because the values of the other independent variables are unknown to the
readers. Melguizo, Torres, and Jaime’s (2011) study also suffered from this issue by omitting
the specification of independent variables’ values. One of the seven studies used the term
Table 1 also shows that seven studies reported predicted probabilities to quantify the
relationship strength. Similar to Ko and Jun’s study (2015), four of the seven studies did not
report the exact values of predicted probabilities, but instead used figures to visually show
differences in predicted probabilities (Flynn 2014; Jowett et al. 2014; King 2015; Pleitz et al.
2015). One study omitted the calculation method of predicted probabilities and specification
of independent variables' values.
It can be seen that the reporting of marginal effects and predicted probabilities in the
quantitative interpretation of relationship magnitude suffers not only from low quantity, but
also from quality issues. The omission of specification of independent variables' values is a
serious problem, because the calculated statistics hold only under the
stated conditions. Another common issue is the omission of exact values of marginal effects
or predicted probabilities, providing the readers with only vague information about the
relationship strength. Even for the very few studies that reported the selected values of
independent variables, the interpretation of the marginal effects tends to lack emphasis on the
conditional nature of this statistic. For example, Jaeger and Eagan (2011) interpreted
marginal effects at the means as follows:
Black students (ME = 0.04, p < .001) were significantly more likely to be retained
than their White counterparts. In-state students had a significantly higher probability
of retention compared to students who resided outside the state of the institution (ME
= 0.08).
The interpretation does not make explicit that the marginal effects, or the changes in
predicted probabilities, hold only for otherwise average individuals. Marginal effects and
predicted probabilities are dependent on the selected values of independent variables. For
individuals with values much higher or lower than the average on independent variables, the
marginal effect of 0.04 for being Black and the marginal effect of 0.08 for being an in-state
student might not hold, and the conclusion might be affected. Emphasizing the conditional
nature of marginal effects and predicted probabilities could make the interpretation and
conclusion more accurate and enable readers to properly adopt the findings.
Discussion
The review of 130 empirical educational studies applying logistic regression reveals
three major issues that warrant attention: (a) a common lack of quantitative interpretation of
the magnitude of detected predictive relationship (60% of the studies did not attempt to
provide such in terms of probabilities); (b) common practice of interpreting odds ratio as
relative risk (69.2% of the studies that attempted to interpret strength of predictive
relationships did so by treating odds ratio as ratio of probabilities);
and (c) omission of specification of independent variables' values in the calculation and
interpretation of marginal effects and predicted probabilities.
The current review indicates that most educational studies using logistic regression
report significance and sign of predictors, but do not quantitatively interpret the predictive
relationship strength in terms of probabilities. This issue can be attributed in part to the
confusion caused by the concept of the odds ratio, which is the standard output by major
statistical software packages used to estimate logistic regression models. In the reviewed
studies, the odds ratio is the most commonly reported effect size, and the interpretation of its
meaning falls into three categories: no interpretation, interpreted as change in odds, and
interpreted as relative risk. It is not incorrect not to interpret odds ratio or to interpret it as the
change in odds corresponding to a certain change in the predictor, but the amount of
information revealed by these options is low. For some studies, the goal is to detect
significant predictor(s) and the direction of the relationship. The magnitude of the association
may not be of interest to the researcher. Although in such case it may not be necessary to
report quantitative measures such as marginal effects, doing so would better utilize the
information provided by logistic regression results. Reporting and correctly interpreting
these statistics also could prevent the overestimation of the relationship strength. It is
suggested that researchers go beyond the value of the odds ratio and report marginal effects
or predicted probabilities. Technically, this could be done easily with the simple commands
used by major statistical software packages to calculate such statistics. For example, marginal
effects and predicted probabilities can be easily computed using the margins command in
STATA.
Interpreting odds ratio as relative risk causes more serious concern than the two
other issues, because it directly exaggerates the detected relationship magnitude, as
demonstrated in Tables 3 and 4. Almost 70% of the studies that attempted to interpret the
results in terms of probabilities did so by interpreting the odds ratio as a ratio of probabilities;
this was the most common error found in the reviewed applications of
logistic regression. Exaggerating relationship magnitude not only undermines the validity and
rigor of research studies, but it also could mislead educators, education administrators, and
policy-makers. For example, by reading the finding of Radey and Cheatham (2013, 270) that
aid-eligible women at for-profit institutions are almost “10 times more likely” to complete an
application for financial aid than those at public institutions, a policy maker might draw the
conclusion that new policy and/or funding is needed to address this large gap and to urge
public institutions to make more efforts to assist students to seek financial aid. In fact, this
gap is overestimated extensively, and the actual gap could be in the range of about 22%
(Table 4). Empirical studies inform practice, research, and policy making of education. The
potential consequences of exaggerated relationship
magnitude should not be overlooked. The relatively common presence of this issue in
the reviewed studies suggests a lack of awareness among
researchers and peer reviewers and warrants attention. Given the similarity of model
specification between linear and logistic regression, it is not surprising that researchers tend
to interpret logistic regression results in the same way they interpret linear regression results.
Educational researchers should carefully distinguish between the two types of regression
models, update conceptual understanding of odds ratio and its limitations, and use more
informative statistics when reporting and interpreting results of logistic regression
models. Also, peer reviewers should pay more attention to the statistics used as effect size
and to their interpretation.
Marginal effects and predicted probabilities both are dependent on the values of
independent variables other than the one of interest. This means that selecting different sets
of values for the other independent variables in the model would yield different marginal
effects and predicted probabilities. As a result, the calculated values of these statistics are
conditional. This review suggests that it is relatively common that the selection of
independent variables’ values is not reported and/or not incorporated in the interpretation of
the calculated statistics. Ignoring the conditional nature of the statistics could make it difficult
for readers to understand the educational phenomena of interest, or they even could mistake
the conditional values as constant effects. It is suggested that educational researchers be more
cautious when reporting marginal effects and predicted probabilities as logistic regression
results, and to always discuss their conditions. Table 5 summarizes the approaches of
interpreting logistic regression results discussed in this review.
Recommendations
In summary, this review indicates that there is much room for improvement in the
application of logistic regression in educational research. Researchers and peer
reviewers could consider the following recommendations when using logistic regression
models.
First, researchers should quantitatively interpret the magnitude of the associations
between predictors and event probabilities to fully utilize the information available from
logistic regression results. It is suggested that researchers calculate marginal effects and/or
predicted probabilities using methods discussed above in the section Marginal Effects and
Predicted Probabilities. Marginal effects and predicted probabilities could be easily computed
by major statistical software packages. These statistics provide a good approximation of the
change in event probability at
predictor values of substantive interest. Currently, many articles only report and interpret
odds ratio and discuss the direction of the association. Doing so does not reveal changes in
event probability corresponding to certain change in predictors, and does not shed light on the
relative importance of the predictors. Even when relationship magnitude is not of primary
interest to the researchers, the readers, especially educational policy makers and practitioners,
oftentimes need such information in order to prioritize and allocate scarce resources.
Computing and reporting statistics in terms of probability could provide readers with a deeper
understanding of the phenomena under study.
Second, researchers should strengthen their conceptual understanding of logistic regression,
especially to clarify the definition of the odds ratio. One common error to avoid is that the
odds ratio should not be interpreted as relative risk or a ratio of probabilities. Although it is
very tempting to do so, treating the odds ratio as relative risk leads to exaggeration of
magnitude of the predictive relationship. Researchers should also explicitly remind readers
that the reported odds ratios are not ratios of probabilities when presenting results of logistic
regression.
Third, more attention should be paid to ensure that the interpretation of logistic
regression results is not confused with that of linear regression models. Specifically,
calculating marginal effects and/or predicted probabilities would require careful selection of
values of independent variables; such specification should always be reported, and the
conditional nature of these statistics should be emphasized in their interpretation and in the
overall conclusions.
Technical complexity may have been a factor that has kept researchers from moving
beyond odds ratio and towards calculated probabilities. However, advanced statistical
software packages have made such calculations relatively simple. The main challenge faced
by researchers may instead be the need to
revisit the key concepts of this method. Some might argue that the extra burden of calculating
and reporting probability-based statistics is not worthwhile because oftentimes knowing the
significance and direction of the association is sufficient, and even if not, that odds ratio
approximates relative risk. This review, however, suggests that odds ratio could be very
misleading as a substitute for relative risk. This review serves as a starting point for
researchers and readers to critically examine reported logistic regression results,
detect any misleading conclusions drawn, and more importantly, improve accuracy in their
own research. It would be helpful for future research to explore whether the issues identified
in this review, such as effect size exaggeration, have had any noticeable impact on
educational practice and policy.
Conclusion
The results of this review reveal several issues in the application of logistic regression
models in current educational research publications that warrant attention among researchers.
The primary concern is the weakness in conceptual understanding of the method, its
difference from linear regression model, and the meaning and limitation of the odds ratio.
Because of these weaknesses,
this review indicates that some of the results being reported by educational research based on
logistic regression results might not be accurate and should be interpreted with caution by
readers. It is important that educational researchers, as well as journal editors
and peer reviewers, properly address the issues discussed in this review to enhance the
quality of logistic regression studies. While this review has focused on the reporting and
interpretation of logistic regression results, it should be noted that successful use of logistic
regression also relies on aspects such as model selection, suitability of model for data
structure, model specification, model testing, and measurement of model fit. Due to the
limitation of the scope of this review, these aspects have not been examined or discussed.
Future review studies on these topics could further shed light on the adoption of logistic
regression in educational research.
References
Agresti, A. 2013. Categorical Data Analysis. Hoboken: John Wiley & Sons.
and K. Mattila. 2014. “Factors Associated with General Practice Specialty Training
Satisfaction - Results from the Finnish Physician Study.” Education for Primary Care
25: 194-201.
Allison, P. D. 1999. Logistic Regression Using SAS: Theory and Application. Cary, NC: SAS
Institute Inc.
* Baker-Eveleth, L. J., M. O’Neill, and S. R. Sisodiya. 2014. “The Role of Predictor Courses
and Teams on Individual Student Success.” Journal of Education for Business 89 (2):
59-70. doi:10.1080/08832323.2012.757541
* Beran, T. N., C. Rinaldi, D. S. Bickham, and M. Rich. 2012. “Evidence for the Need to
doi:10.1177/0143034312446976
* Bielby, R., J. R. Posselt, O. Jaquette, and M. N. Bastedo. 2014. “Why are Women
* Burke, L., S. N. Gabhainn, and H. Young. 2015. “Student Sex: More or Less Risky than
doi:10.1080/14681811.2014.947362.
* Campbell, C. M., and J. Mislevy. 2013. “Student Perceptions Matter: Early Signs of
10.1007/s10803-011-1297-7.
* d’Aguiar, S., and N. Harrison. 2016. “Returning from Earning: UK Graduates Returning to
10.1080/13639080.2014.1001332.
* Ertuna, Z. I., and E. Gurel. 2011. “The Moderating Role of Higher Education on
doi:10.1108/00400911111147703
* Ferguson, E., D. James, J. Yates, and C. Lawrence. 2012. “Predicting who Applies to Study
10.1007/s11162-013-9321-8.
015-9368-9.
* Jaeger, A. J., and M. K. Eagan. 2011. “Examining Retention and Contingent Faculty Use in
doi:10.1177/0895904810361723
* Jones, M. P., J. A. Bushnell, and J. S. Humphreys. 2014. “Are Rural Placements Positively
Education Students Based on the Revised New Ecological Paradigm Scale.” The
doi:10.1080/03069885.2012.741680
* King, B. 2015. “Changing College Majors: Does it Happen More in STEM and Do Grades
doi:10.2505/4/jcst15_044_03_44
Analysis and Other Multivariable Methods. 3rd ed. Pacific Grove: Duxbury Press.
* Ko, K., and K.-N. Jun. 2015. “A Comparative Analysis of Job Motivation and Career
192-213. doi:10.1177/0091026014559430
* Lee, A. 2014. “Students with Disabilities Choosing Science Technology Engineering and
* Luna, G., and M. Fowler. 2011. “Evaluation of Achieving a College Education Plus: A
* Lyons, F. W., and D. Akroyd. 2014. “The Impact of Human Capital and Selected Job
doi:10.1080/10668926.2014.851965
McCulloch, C. E., and S. R. Searle. 2001. Generalized, Linear, and Mixed models. New York:
* Melguizo, T., F. S. Torres, and H. Jaime. 2011. “The Association Between Financial Aid
Availability and the College Dropout Rates in Colombia.” Higher Education 62 (2):
231-247. doi:10.1007/s10734-010-9385-8
* Mendez, J. M., and J. P. Mendez. 2016. “Student Inferences Based on Facial Appearance.”
* Munisamy, S., N. I. M. Jaafar, and S. Nagaraj. 2014. “Does Reputation Matter? Case study
* Newman, L. A., and J. W. Madaus. 2015. “An Analysis of Factors Related to Receipt of
* O’Neill, L., J. Hartvigsen, B. Wallstedt, L. Korsholm, and B. Eika. 2011. “Medical School
Prediction. 3rd ed. Fort Worth, TX: Harcourt Brace College Publishers.
* Pinxten, M., B. De Fraine, W. Van Den Noortgate, J. Van Damme, T. Boonen, and G.
University and Successful Completion of the First Year.” Studies in Higher Education
32-40.
* Radey, M., and L. P. Cheatham. 2013. “Do Single Mothers Take Their Share?: FAFSA
and the Role of Financial Aid: Lessons from Chile.” Higher Education 71: 323-342.
doi: 10.1007/s10734-015-9906-6.
* Sullivan, K., and M. Cosden. 2015. “High School Risk Factors Associated with Alcohol
Trajectories and College Alcohol Use.” Journal of Child & Adolescent Substance
* Swecker, H. K., M. Fifolt, and L. Searby. 2013. “Academic Advising and First-Generation
(1): 46-53.
Tabachnick, B. G., and L. S. Fidell. 2001. Using Multivariate Statistics. 4th ed. Boston: Allyn
and Bacon.
10.1080/07448481.2014.953166.
10.1007/s10964-015-0256-6.
* van den Broek, S., B. Muller, N. Dekker, A. Bootsma, and O. T. Cate. 2010. “Effect of the
* Wambuguh, O., and T. Yonn-Brown. 2013. “Regular Lecture Quizzes Scores as Predictors
Analysis.” International Journal for the Scholarship of Teaching and Learning 7 (1):
https://www3.nd.edu/~rwilliam/stats3/Margins02.pdf
* Wood, J. L. 2013. “The Same ... but Different: Examining Background Characteristics
Among Black Males in Public Two-Year Colleges.” The Journal of Negro Education
82 (1): 47-61.
* Yusif, H., I. Yussof, and Z. Osman. 2013. “Public University Entry in Ghana: Is it
Table 3. Relative risk based on hypothetical probabilities and reported odds ratio

| Hypothetical probability of using student service, students with difficulty | Hypothetical probability of using student service, students without difficulty | Hypothetical relative risk, students with vs. without difficulty | Reported odds ratio, students with vs. without difficulty | Overestimation of relative risk (odds ratio / relative risk) |
| --- | --- | --- | --- | --- |
| 0.64 | 0.05 | 12.88 | 34.34 | 2.67 |
| 0.79 | 0.10 | 7.92 | 34.34 | 4.33 |
| 0.86 | 0.15 | 5.72 | 34.34 | 6.00 |
| 0.90 | 0.20 | 4.48 | 34.34 | 7.67 |
| 0.92 | 0.25 | 3.68 | 34.34 | 9.34 |
| 0.94 | 0.30 | 3.12 | 34.34 | 11.00 |
| 0.95 | 0.35 | 2.71 | 34.34 | 12.67 |
| 0.96 | 0.40 | 2.40 | 34.34 | 14.34 |

Note. Odds ratio reported by Julal (2013).
Table 4. Exaggeration of relationship magnitude caused by interpreting reported odds ratios as relative risks

| Conclusion (quoted from study) | Reported odds ratio | Hypothetical probability, comparison group | Hypothetical probability, reference group | Hypothetical relative risk | Overestimation |
| --- | --- | --- | --- | --- | --- |
| "Students attaining combined quizzes scores of at least 70% are approximately 9 times (OR value of 8.78 in Table 2) as likely to pass the final examination with the same score or better compared to those who got scores below 70% (χ2 = 160.53, p < .001)." (Wambuguh & Yonn-Brown, 2013, p. 3) | 8.78 | 0.93 | 0.6 | 1.55 | 5.67 |
| "Students in the high HSR [high school risk] group were 6.04 times more likely to engage in heavy episodic drinking (p < .001) and 2.51 times more likely to experience a blackout in college (p < .001) than were students in the low HSR group (see Table 7). Students in the high HSR group were also 2.85 times more likely to engage in heavy episodic drinking in college as compared to the moderate HSR group, p < .05." (Sullivan & Cosden, 2015, p. 24) | 6.04 | 0.92 | 0.65 | 1.41 | 4.28 |
| Same conclusion as above (blackout, high vs. low HSR group) | 2.51 | 0.39 | 0.2 | 1.93 | 1.30 |
| Same conclusion as above (heavy episodic drinking, high vs. moderate HSR group) | 2.85 | 0.92 | 0.8 | 1.15 | 2.48 |
| "Those attending for-profit institutions were nearly 10 times more likely to complete FAFSAs than aid-eligible women at public universities." (Radey & Cheatham, 2013, p. 270) | 9.75 | 0.98 | 0.8 | 1.22 | 8.00 |

Note. Hypothetical probabilities are approximated based on descriptive statistics reported in respective studies.
Aikins, R. D., A. Golub, and A. S. Bennett. 2015. “Readjustment of Urban Veterans: A Mental
Health and Substance Use Profile of Iraq and Afghanistan Veterans in Higher
doi:10.1080/07448481.2015.1068173
Andersen, L., and T. J. Ward. 2014. “Expectancy-Value Models for the STEM Persistence Plans
Angelin, M., B. Evengård, and H. Palmgren. 2015. “Illness and Risk Behaviour in Health Care
doi:10.1111/medu.12753
Barrett, K. L., W. G. Jennings, and M. J. Lynch. 2012. “The Relation Between Youth Fear and
Barron, J. S., E. Bragg, D. Cayea, S. C. Durso, and N. S. Fedarko. 2015. “The Short-Term and
Long-Term Impact of a Brief Aging Research Training Program for Medical Students.”
doi:10.2190/DE.41.4.e
Blackford, K., and J. Khojasteh. 2013. “Closing the Achievement Gap: Identifying Strand Score
Boek, S., S. Bianco-Simeral, K. Chan, and K. Goto. 2012. “Gender and Race are Significant
Bouck, E. C., and G. S. Joshi. 2015. “Does Curriculum Matter for Secondary Students with
Perceptions Among Ethnic Minority and Caucasian Medical Students Which May Affect
Their Relative Academic Performance.” Education for Primary Care 26 (1): 11-15.
Using Quality Point Status.” Journal of College Student Retention: Research, Theory and
De Backer, L., H. Van Keer, and M. Valcke. 2015a. “Socially Shared Metacognitive Regulation
During Reciprocal Peer Tutoring: Identifying its Relationship with Students’ Content
doi:10.1007/s11251-014-9335-4
De Backer, L., H. Van Keer, and M. Valcke. 2015b. “Exploring Evolutions in Reciprocal Peer
doi:0.1016/j.learninstruc.2015.04.001
Eckles, J. E., and E. G. Stradley. 2012. “A Social Network Analysis of Student Retention Using
011-9173-z
England-Siegerdt, C. 2010. “Do Loans Really Expand Opportunities for Community College
doi:10.1080/10668926.2011.525180
2013. “Academic Attainment Findings in Children with Sickle Cell Disease.” Journal of
Flannery, K. B., J. L. Frank, and M. M. Kato. 2012. “School Disciplinary Responses to Truancy:
Current Practice and Future Directions.” Journal of School Violence 11 (2): 118-137.
doi:10.1080/15388220.2011.653433
Fox, R. S., E. L. Merz, M. T. Solorzano, and S. C. Roesch. 2013. “Further Examining Berry’s
doi:10.1177/0748175613497036
Furlong, M., and M. Quirk. 2011. “The Relative Effects of Chronological Age on Hispanic
doi:10.1080/00131911.2011.571763
155. doi:10.1002/j.2161-1912.2012.00014.x
and Ideographic Perspectives.” Learning Disability Quarterly: Journal of the Division for
Haas, A. L., S. K. Smith, and K. Kagan. 2013. “Getting “Game”: Pregaming Changes During the
doi:10.1080/07448481.2012.753892
Harrell, I. L., and B. L. Bower. 2011. “Student Characteristics That Predict Persistence in
Performance, Course Completion Rates, and Student Perception of the Quality and
doi:10.1080/01587919.2013.770430
Hayes, J. B., R. A. Price, and R. P. York. 2013. “A Simple Model for Estimating Enrollment
17 (2): 61-68.
Hickman, G. P., and D. Wright. 2011. “Academic and School Behavioral Variables as Predictors
Hillman, N. W. 2013. “Cohort Default Rates: Predicting the Probability of Federal Sanctions.”
doi:10.1111/j.1743-498X.2010.00384.x
Houser, L. C.-S., and S. An. 2015. “Factors Affecting Minority Students’ College Readiness in
Hu, S., A. C. McCormick, and R. M. Gonyea. 2012. “Examining the Relationship Between
doi:10.1007/s10755-011-9209-5
Jamali, A., S. Tofangchiha, R. Jamali, S. Nedjat, D. Jan, A. Narimani, and A. Montazeri. 2013.
Jelfs, A., and J. T. E. Richardson. 2013. “The Use of Digital Technologies Across the Adult Life
doi:10.1111/j.1467-8535.2012.01308.x
Jonesa, L. Ø., T. Mangerb, O.-J. Eikeland, and A. Asbjørnsen. 2013. “Participation in Prison
Kolenovic, Z., D. Linderman, and M. M. Karp. 2013. “Improving Student Outcomes via
doi:10.1177/0091552113503709
Kovner, C. T., C. Brewer, C. Katigbak, M. Djukic, and F. Fatehi. 2012. “Charting the Course for
Education: In the Context of Japanese Higher Education Reforms Since the 1990s.”
Kushner, J., and G. M. Szirony. 2014. “School Counselor Involvement and College Degree
4 (1): 75-79.
Index and Their Subsequent Educational Achievement Level?” Journal of School Health
Lauderdale-Littin, S., E. Howell, and J. Blacher. 2013. “Educational Placement for Children with
Autism Spectrum Disorders in Public and Non-Public School Settings: The Impact of
Social Skills and Behavior Problems.” Education and Training in Autism and
Lee, M., H. Kim, and M. Kim. 2014. “The Effects of Socratic Questioning on Critical Thinking
doi:10.1080/16823206.2013.849576
Leppo, R. H. T., S. W. Cawthon, and M. P. Bond. 2014. “Including Deaf and Hard-of-Hearing
Lewis, T. F., and E. Wahesh. 2015. “Perceived Norms and Marijuana Use at Historically Black
doi:10.1002/jocc.12010
7 (2): 93-108.
Liu, Z., and Y. Gao. 2015. “Family Capital Social Stratification, and Higher Education
Long, G. L., C. Marchetti, and R. Fasse. 2011. “The Importance of Interaction for Academic
Magalhães, E., P. Costa, and M. J. Costa. 2012. “Empathy of Medical Students and Personality:
doi:10.3109/0142159X.2012.702248
Harassment During College: Influences on Alcohol and Drug use.” Journal of Youth and
McLaughlin, M. J., K. E. Speirs, and E. D. Shenassa. 2012. “Reading Disability and Adult
doi:10.1177/0022219412458323
McVie, S. 2014. “The Impact of Bullying Perpetration and Victimization on Later Violence and
Mendoza, P., and J. P. Mendez. 2012. “The Oklahoma’s Promise Program: A national model to
Novak, H., and L. McKinney. 2011. “The Consequences of Leaving Money on the Table :
Examining Persistence Among Students Who Do not File a FAFSA.” Journal of Student
Nunez, A.-M., and G. Crisp. 2012. “Ethnic Diversity and Latino/a College Access: A
Owens, J. 2010. “Foreign Students, Immigrants, Domestic Minorities and Admission to Texas’
Selective Flagship Universities Before and After the Ban on Affirmative Action.”
Measures and Attitudes Toward the H1N1 Influenza Pandemic in Schools.” Journal of
doi:10.1080/19325037.2012.749705
Prins, E., S. Monnat, C. Clymer, and B. W. Toso. 2015. “How is Health Related to Literacy,
From the Program for the International Assessment of Adult Competencies (PIAAC).”
Journal of Research and Practice for Adult Literacy, Secondary, and Basic Education 4
(3): 22-42.
Rohr, S. L. 2012. “How Well Does the SAT and GPA Predict the Retention of Science,
Student Retention: Research, Theory & Practice 14 (2): 195-208. doi: 10.2190/CS.14.2.c
Rojas, Y. 2013. “School Performance and Gender Differences in Suicidal Behaviour - A 30-year
Follow-Up of a Stockholm Cohort Born in 1953.” Gender and Education 25 (5): 578-
594. doi:10.1080/09540253.2013.797955
Rye, B. J., C. Mashinter, G. J. Meaney, E. Wood, and S. Gentile. 2015. “Satisfaction With
and Study Performance: Comparing Three Admission Processes Within One Medical
doi:10.1080/08832320903449477
Shen, F. C. 2015. “The Role of Internalized Stereotyping, Parental Pressure, and Parental
Song, Y., P. Loyalka, and J. Wei. 2013. “Determinants of Tracking Intentions, and Actual
Education Choices among Junior High School Students in Rural China.” Chinese
Stein, C. H., L. A. Osborn, and S. C. Greenberg. 2016. “Understanding Young Adults' Reports of
Contact With Their Parents in a Digital World: Psychological and Familial Relationship
016-0366-0
Strayhorn, T. L. 2010. “Money Matters: The Influence of Financial Factors on Graduate Student
Sullivan, A. L., D. A. Klingbeil, and E. R. Van Norman. 2013. “Beyond Behavior: Multilevel
“Why Some HOPE Scholarship Recipients Retain the Scholarship and Others Lose It.”
Different to Latino Students Compared to White and Black Students?” Journal of Youth
doi:10.1080/15348431.2013.734248
van Rooij, S. W. 2012. “Open-Source Learning Management Systems: A Predictive Model for
doi:10.1111/j.1365-2729.2011.00422.x
“Postsecondary Pathways and Persistence for STEM Versus Non-STEM Majors: Among
Wells, K., C. Makela, and C. Kennedy. 2014. “Co-Occurring Health-Related Behavior Pairs in
College Students: Insights for Prioritized and Targeted Interventions.” American Journal
doi:10.1108/09513541311297568
2013. “Presence of Medical Home and School Attendance: An Analysis of the 2005-2006
National Survey of Children with Special Healthcare Needs.” Journal of School Health
Wilson, I., B. Griffin, L. Lampe, D. Eley, G. Corrigan, B. Kelly, and P. Stagg. 2013. “Variation
Wolf, S., J. L. Aber, and P. A. Morris. 2015. “Patterns of Time Use Among Low-Income Urban
015-0294-0
Zhao, D., and B. Parolin. 2014. “Merged or Unmerged School? School Preferences in the