
Journal of Empirical Legal Studies

Volume 16, Issue 3, 605–629, September 2019

A Deeper Look at Bar Success: The Relationship Between Law Student Success, Academic Performance, and Student Characteristics
Amy N. Farley,* Christopher M. Swoboda, Joel Chanvisanuruk,
Keanen M. McKinley, Alicia Boards, and Courtney Gilday

In recent years, law schools have experienced a decline in enrollment and bar passage,
and legal education has been challenged to understand this new phenomenon and con-
duct research that can inform practices and policies regarding law student success. This
article presents findings from research conducted at the University of Cincinnati College
of Law, a large, midwestern public university, which aimed to investigate which factors and
student characteristics contribute to bar passage within the home jurisdiction (Ohio).
Results suggest bar passage can be predicted by a wide battery of variables, most notably
student performance during the law school course of study. Despite some prior literature
that suggests otherwise, however, LSAT and undergraduate GPA were only weakly predic-
tive of first-time bar passage among admitted students: the best prelaw model based on stu-
dent admissions and demographic data identified just over one-third of the students who
ultimately failed the bar on the first attempt. Information from the first year of law
school—even just performance in one first semester course—explained significantly more
variation in bar passage. Furthermore, data from beyond the first year of legal study,
including upper-level course taking in bar-tested subjects, enriched the predictive power
of the model, enabling us to predict 78 percent of students who failed the bar compared
to just 58 percent after the first year. These preliminary results provide important insights
into bar passage, particularly given the increased public scrutiny around incoming student
credentials, bar success, and law school performance and accreditation.

*Address correspondence to Amy N. Farley, Assistant Professor of Educational Leadership & Policy Studies, School
of Education, University of Cincinnati, Cincinnati, OH 45221; email: [email protected]. Swoboda is Associate Pro-
fessor and Associate Director of Research Methods, School of Education, University of Cincinnati; Chanvisanuruk
is Assistant Dean for Academic Success and Bar Programs, College of Law, University of Cincinnati; McKinley is
Doctoral Student, Educational Studies, University of Cincinnati; Boards is Doctoral Student, Educational Studies,
University of Cincinnati; Gilday is Doctoral Student, Educational Studies, University of Cincinnati.
This research was supported by the AccessLex Institute and Association for Institutional Research (AIR)
Research and Dissertation Fellows Program (Grant RG15240) and the University of Cincinnati Arts, Humanities &
Social Sciences (AHSS) and Integrated Research Advancement Grants Program.

605
606 Farley et al.

I. Introduction
Over the last decade, law schools across the United States have struggled with an increasingly
complex educational and political climate, impacted simultaneously by shifting enrollment
patterns, declining bar passage rates, and the consequences of policy changes that have gen-
erally drawn greater attention to the assessment of student outcomes. Between 2008 and
2017, the national percentage of first-time bar takers who passed the exam fell from 88 per-
cent to 75 percent (NCBE 2018). As a result, many law schools have found themselves and
their students’ data on the front pages of unfortunate news articles characterizing their bar
results as failures, with “plummeting” scores and “record lows.” These articles often seek
something to blame for the declining performance of students over time, and they fre-
quently point—both directly and indirectly—to similar shifts in incoming student admissions
credentials, including the LSAT, as likely culprits (see, e.g., Olson 2015; Ryman et al. 2019;
Song 2014, 2015). The National Conference of Bar Examiners (NCBE)—the nonprofit orga-
nization responsible for developing and administering many of the bar exams used across
the United States, including the Uniform Bar Examination (UBE) adopted in 35 jurisdictions
as of January 2019 (NCBE 2019)—has advanced a similar explanation for the declining pas-
sage rates, noting that “the single best indicator of the decline may be LSAT scores”
(Albanese 2018). More damningly, to explain the decline between 2013 and 2014, the then-
president of NCBE claimed that “all [factors] point to the fact that the group that sat in July
2014 was less able than the group that sat in July 2013” (Song 2014). The law school commu-
nity responded to that statement with considerable outrage, arguing that NCBE had too narrowly defined student ability in terms of the LSAT alone (Song 2014).
Regardless of the explanation for declining bar passage rates, the fact remains that
a distinctly different climate now faces legal educators, students, and policymakers, partic-
ularly as it relates to student outcomes on the bar and beyond. At the same time, the rela-
tive importance of the bar exam within legal education has increased, driven mostly by
two factors. First, in the last decade, U.S. News & World Report modified its methodology
for calculating law school rankings and began to more heavily weight postgraduation
employment in positions requiring bar passage. This shift ostensibly increased the influ-
ence of bar passage in overall rankings, even though U.S. News & World Report enacted
that reprioritization via changes to employment data calculations. While law schools have long felt compelled to increase bar passage rates to maintain or improve
their rankings (Stake & Alexeev 2015), that pressure has intensified under the new meth-
odology and has likely also been exacerbated by shrinking pools of potential students.
Second, in 2015, the American Bar Association (ABA) instituted Standard 316, formally
linking law school accreditation to bar success for the first time (“Standard 316 Bar
Passage” 2017). This change further intensified public scrutiny regarding bar passage—a
phenomenon that persisted into early 2019 when the ABA considered revisions to Stan-
dard 316 that were heavily debated by the field (Randazzo 2019; Ryman et al. 2019).
Those changes were rejected (Ward 2019), but as the pressure to meet Standard 316 has
increased, many law schools have struggled to reconcile that demand with other co-occurring challenges in legal education, including decreasing enrollment and a changing
applicant pool (Arewa et al. 2014; McEvers 2016; Sloan 2015; Taylor 2014).

The combination of these forces has created a “growing consensus that law schools
in the United States are in the midst of a ‘crisis’” (Landrum 2015:250). This crisis rhe-
toric is nothing new to the field of education, although the focus on law schools may rep-
resent a relatively new target. Several co-occurring trends in legal education compound
this so-called “crisis.” First, during the same period that bar results fell, law schools have
also faced decreasing student enrollment and an associated pressure to compete for a
shrinking pool of prospective students (Arewa et al. 2014; Sloan 2015; Taylor 2014).
While recent data suggest the worst may be over—with nearly no change in enrollment
in 2016 (Ward 2016)—a decade of declining enrollment has resulted in substantial
changes to law school policies and practice. Most notably, decreasing enrollment also
prompted changes to student composition within U.S. law schools, including a slight
reduction in mean undergraduate grade point averages (UGPAs) and LSAT scores
(Taylor 2015). As noted above, this reduction in student credentials has been the subject
of significant debate in the legal education community.
In addition to changed incoming credentials, legal education has also seen a mar-
ginal increase in student diversity (McEvers 2016; Nance & Madsen 2014; Taylor 2015),
with students of color representing 26 percent of law students in 2014 compared to just
21 percent in 2004 (Taylor 2014). Taylor (2015) is careful to note that this change is
mostly attributable to a decline in the enrollment of white students rather than an
increase in the number of underrepresented students. While it certainly represents pro-
gress, it may be more of an artifact of declining enrollment than an indicator of
increased access and opportunity, and he reminds us that students of color remain “pro-
foundly underrepresented” within law schools across the country (Taylor 2015:4).1 As a
result, law schools remain driven to increase law student diversity while also combating
enrollment and bar passage declines.
Together, these trends have sparked fierce competition to recruit students, resulting
in what some scholars have labeled a law school “arms race” for higher rankings and the
application and matriculation of prospective students. Further, given the heightened focus
on bar passage, law schools seek students who will not only successfully complete their
legal education but also ultimately pass the bar exam (Arewa et al. 2014; Wellen 2005)
and find bar-dependent employment after graduation (Yakowitz 2010). Although law
schools have always been concerned with producing competent graduates (Marks & Moss
2016; Merritt, Hargens, & Reskin 2001), the legal education field now faces intensified
efforts to raise bar passage rates and solicit, produce, and respond to research regarding
bar exam results, curriculum, and testing protocols (Goforth 2015). As a result, under-
standing how to best support incoming students and help graduates pass the bar is both
an ongoing and growing priority for legal educators, researchers, and policymakers.
To address this changing climate in legal education, this article aims to understand
more fully law student success and the various factors—including incoming credentials,

1
In fact, in 2015, the Washington Post referred to law as the “least diverse profession in the nation,” noting that
fewer people of color practice law than engage in engineering, accounting, or surgery (Rhode 2015).

academic performance during law school, and student characteristics—that contribute to positive student outcomes and bar passage at one midwestern law school. We draw on
prior literature suggesting that a variety of student factors and characteristics are predic-
tive of bar exam passage, including UGPA (Alphran et al. 2011; Austin et al. 2017;
Georgakopoulos 2013), LSAT score (Alphran et al. 2011; Austin et al. 2017;
Georgakopoulos 2013; Goforth 2015; Rosin 2008; Wightman 1998), and law school
cumulative and first year (1L) GPA (Austin et al. 2017; Christopher 2015; Goforth 2015;
Wightman 1998). Our analyses extend that current literature by examining the predictive
utility of each of those factors to understand which characteristics and elements most
contribute to student success on the bar exam. In particular, we seek to understand when
in a student’s academic career we can reasonably predict his or her performance on the
bar. That is, can we make reasonable predictions based on admissions data, data from
the first year of study, or upon graduation? We conclude with recommendations for law
schools, researchers, and the broader field of higher education with regard to bar passage
and future research.

II. Theoretical Grounding


This research is predicated on the notion that student experiences and contexts mat-
ter for student outcomes, including bar performance. As such, we draw on the frame-
work for student success in college (Kuh et al. 2006) and Astin and Antonio’s (2012)
input-environment-output (I-E-O) model. Both frameworks provide a clear way to con-
ceptualize the role of student inputs and the learning environment on student out-
comes and allow researchers to evaluate the role of environmental variables on
student outcomes (Thurmond et al. 2002). To a certain extent, these models provide
an alternative to the highly contested mismatch theory (Sander 2004), which argues
that affirmative action harms students of color in the long run by allowing candidates
with poor incoming credentials access to law schools where they cannot ultimately be
successful.

A. Complicating the Relationship Between Inputs and Outputs

Both Kuh et al. (2006) and Astin and Antonio (2012) acknowledge the importance of con-
text and student engagement in student success, disrupting notions that there exists a sim-
ple or straightforward relationship between incoming student credentials and student
outputs. According to Kuh et al. (2006), the college experience consists of two key aspects,
student behavior (e.g., time and effort put into studies, interaction with faculty, peer
involvement) and institutional conditions (e.g., resources, educational policies/practices,
programs, and structural features). Situated at the intersection of student behavior and
institutional conditions is student engagement. Educational policies and practices can
enhance student engagement through faculty-student contact, collaborative and active
learning environments, and a generally inclusive and affirming culture (Kuh et al. 2006).
As such, student experiences and academic preparation are key attributes to student

success, especially when defining success as educational attainment/persistence or a desired educational credential such as bar passage (Kuh et al. 2006).

B. A Note on Sander’s Mismatch Theory

Any discussion on the relationship between student incoming credentials and student
outcomes would be incomplete without an acknowledgment of Sander’s (2004) work
regarding mismatch in the law school context. Mismatch theory contends that affirmative
action is doing more harm than good to black law students and other students of color
by placing them in academic environments for which they are underprepared. Sander
and Taylor state that it is not necessarily a lack of talent or ability that lands black stu-
dents at the bottom, but the “unintended side effects of larger racial preferences, which
systematically put minority students in academic environments where they feel over-
whelmed” (2012:4). When students are admitted to school based on race as a way to
“boost” minority enrollment, they are subsequently placed in academic environments
where they are underprepared and underqualified, often resulting in academic
underperformance and disengagement from the learning process (Sander & Taylor
2012). Furthermore, mismatch not only causes students to fall behind academically, but
it also leads them to miss out on well-paying and highly sought jobs based on their poor
performance (Sander & Taylor 2012). Although Sander’s work (Sander 2004; Sander &
Taylor 2012) has been criticized for being racially insensitive, he argues that the differ-
ence in performance is not an intellectual one but an inequitable one due to the mis-
match in qualifications.
There have been numerous critiques of and counterresponses to Sander’s (2004) work.
These critiques range from the empirical—for example, Kidder and Onwuachi-Willig’s
claim that Sander and Taylor (2012) “cherry-pick the data to support a series of
unwarranted claims” (2013:896)—to conceptual, including Barnes’s (2007) work that
argues mismatch is not directly about race, but about any student who is underqualified
or at the bottom of his or her class. Particularly salient to the present study are critiques
that question Sander’s implicit claims regarding the relationship between incoming stu-
dent credentials and student outcomes. According to Harris and Kidder (2004), Sander
puts too much emphasis on entering credentials, including LSAT and UGPA. He also
fails to acknowledge the role institutional environment can have on academic perfor-
mance and learning (Harris & Kidder 2004).

C. Applying the Theoretical Framework to Bar Passage Research

While this article is not an investigation into mismatch—nor does it endeavor to be—it
does contribute to a body of literature regarding the complex factors that explain student
success and bar passage. We utilize a more holistic understanding of student outcomes
informed by the work of Kuh et al. (2006) and Astin and Antonio (2012), and in some
ways counter to the conceptualization undergirding Sander’s mismatch theory. Our ana-
lytical approach is informed by a belief that law school performance is based on many
complex factors—extending far beyond incoming credentials—that cannot be fully

Figure 1: I-E-O model (Astin & Antonio 2012) of student outcomes in law programs.

explained by the traditional collection of quantitative measures of student academic performance.
Toward this end, Figure 1 summarizes visually the modified version of Astin and Antonio’s (2012) I-E-O model used to shape the present study. It provides a clear way to
operationalize the framework advanced by Kuh et al. (2006), highlighting the role of
both educational inputs (I) and the learning environment (E) in determining student
outcomes (O). The I-E-O model has successfully been used as a framework for assess-
ments in higher education, although it has yet to be applied in the systematic study of
bar passage. Sesate et al. (2017) used the I-E-O model in a similar fashion within medicine; they
considered how input variables (e.g., sex, underrepresented minority status, MCAT,
GPA) and environment variables (e.g., specific courses) might be used to predict passage
of the U.S. Medical Licensing Exam Step 1 (Sesate et al. 2017). Extending this model to
the study of bar passage could be an important contribution to the legal education litera-
ture. Further, using this framework to approach data analysis holds researchers account-
able in addressing both student input and environmental factors along with student
outcomes. In return, it can create a more accurate and nuanced model. All this can help
researchers and practitioners better and more holistically understand the student charac-
teristics and factors related to bar passage.

III. Review of Prior Literature Regarding Bar Success


A. The Role of the Bar Exam in Law Student Success

Traditional postsecondary scholarship often operationalizes student success using graduation and employment attainment—in part due to the absence of other more formal mea-
sures. Like all professional programs, law schools have an obligation to adequately train,
educate, and produce individuals who can enter the profession with the knowledge and
tools necessary to succeed (Alphran et al. 2011; Gerst & Hess 2009). Unlike many pro-
grams, however, the legal profession includes a rigorous licensing exam that serves as the
ultimate measure of student success and acts as a final gatekeeper to the profession. As
such, the importance of the bar exam within legal education cannot be overstated.
Historically, the bar was designed to assure society that practitioners were compe-
tent in their practice of the law (Trujillo 2007). It does not and reasonably cannot mea-
sure all the skills necessary to practice law, but is designed to allow students to
demonstrate their ability to analyze facts, identify issues, and test general law knowledge
(Trujillo 2007). Almost all jurisdictions now require a written examination of some sort,
although there remains large variation across the composition and requirement for said
exams (Goforth 2015).2 Passage rates also vary significantly by jurisdiction, with some
places recordings passage rates below 60 percent and others over 85 percent (Goforth
2015). Regardless of these differences, bar passage remains a critical gatekeeper on the
way to practicing law: in fact, Yakowitz (2010) estimated that over time there have been
approximately 150,000 law school graduates who sat for the bar exam at least once and
were never able to pass the test. Given that the bar can prohibit students from accessing
the profession, its importance is incontestable. Furthermore, given the shifting trends in
legal education and heightened scrutiny surrounding bar passage, many argue that the
stakes are higher for students to pass the bar today than at any other point in history.
Failing the bar can have significant implications for students, including unemployment, a
sense of professional incompetence, social embarrassment, and financial insecurity
accompanied by extensive debt (Christopher 2015; Kaufman et al. 2007).

B. Factors that Predict Bar Passage

1. The Utility of Student Inputs in Predicting Bar Performance

Much of the literature to date regarding bar passage has focused on the predictive utility
of student inputs in general and students’ incoming credentials, including undergradu-
ate GPA (UGPA) and the LSAT, in particular (e.g., Marks & Moss 2016). Several
researchers have shown that there is a significant relationship between LSAT scores and
bar passage (Alphran et al. 2011; Austin et al. 2017; Georgakopoulos 2013; Goforth
2015; Rosin 2008; Wightman 1998), although those results are somewhat disputed in the
field and the literature suggests that the relationship is far from perfect. Some

2
Certainly, that variation has decreased with the rapid adoption of the UBE (NCBE 2019).

researchers have found an “irregular” relationship between LSAT and bar passage
(Cutright et al. 1975), while other researchers have suggested that LSAT scores are them-
selves statistical predictions accurate less than 50 percent of the time (Bell 1981). Other
critics, acknowledging the relationship between LSAT scores and bar passage, condemn
the “simple-minded reliance” on the LSAT (Howarth 1997:928). Indeed, the LSAC has
cautioned against relying too heavily on the LSAT to understand bar passage (Austin
et al. 2017) and notes that several factors “significantly affect bar passage rates above and
beyond LSAT scores” (Bernstine n.d.:1). The LSAC found, for example, only a modest cor-
relation of 0.30 between the LSAT and bar exam passage during its national longitudinal
bar passage study (Wightman 1998:37); to contextualize this finding, these results mean
that only 9 percent of the variation in bar performance was explained by the variation in
LSAT scores. Put more simply, while LSAT was found to be a statistically significant pre-
dictor, it explains very little in how students perform on the bar exam. Prior researchers
have also explored the relationship between LSAT scores, bar passage, and other factors
(e.g., Case 2011; Merritt 2015; Rush & Matsuo 2007). For instance, Klein (2001) goes so
far as to declare that the LSAT’s purpose is to predict a law school applicant’s first-year
grades, even though others note LSAT scores only explain 16 percent of the variation in
first-year grades (Delgado 2001).
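The variance-explained figures cited here follow from the standard relationship between a correlation coefficient and the proportion of variance explained; as a quick check (our computation, not the LSAC's):

```latex
r = 0.30 \;\Rightarrow\; r^2 = (0.30)^2 = 0.09 \approx 9\%,
\qquad
r^2 = 0.16 \;\Rightarrow\; r = \sqrt{0.16} = 0.40.
```

That is, the reported 9 percent of variation in bar performance and 16 percent of variation in first-year grades correspond to correlations of roughly 0.30 and 0.40, respectively.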
The literature is more unclear regarding the predictive power of the UGPA. Some
researchers have found evidence that UGPA can predict 1L GPA (Fordyce et al. 2017) or
bar passage (Marks & Moss 2016; Rush & Matsuo 2007; Wightman 1998), but others have
found little support for this claim (Alphran et al. 2011; Austin et al. 2017; Georgakopoulos
2013). To a certain extent, these findings are also limited because very few studies control
for data on many other important traits, such as college quality or major, work experience
type or duration, or criminal or disciplinary records (Marks & Moss 2016:214).
Prior research has also considered the relationship between bar passage and stu-
dent demographics. Subotnik argues that the bar exam has taken “an especially high toll
on minorities” (2013:368) and research has shown that African Americans pass the bar at
significantly lower rates than their white counterparts (Curcio 2002; Subotnik 2013;
Williams 2013). Nevertheless, few bar passage studies have examined student success pre-
dictors and student demographics in concert, limiting the utility of the scholarship to
date (Marks & Moss 2016).

2. The Utility of Environmental Factors in Predicting Bar Performance

While much of the literature has focused on student inputs, some studies have concen-
trated on law school experiences that might predict bar passage. Research regarding spe-
cific changes to a law school’s program is limited (see, e.g., Alphran et al. 2011; Schulze
2017). Research regarding the predictive power of law school grades is more common and
has demonstrated a strong association with bar passage (Austin et al. 2017; Goforth 2015;
Wightman 1998), with at least one study suggesting that law school grades are the strongest
overall predictor of bar passage (Christopher 2015). However, these studies are limited by
data availability as well, with many factors related to student success deemed too difficult to
measure or “too individualized to include in a statistical model” (Marks & Moss 2016:215).

IV. Research Method


The primary purpose of this study was to explore the characteristics and factors that con-
tribute to law student success, operationalized as bar passage, at the University of Cincin-
nati (UC), a public research institution. To date, the preponderance of literature has
focused on a single site, similar to the approach taken here. Single-case studies are some-
times scrutinized due to their perceived lack of generalizability and absence of breadth,
inherent researcher bias, and overreliance on theory (Flyvbjerg 2005). However,
researchers conducting a case study ought to be clear about the formal unit they are
investigating, and how the results can prove to be transferable to other, similar contexts
(Gerring 2004). Furthermore, thorough case study research has many potential strengths,
including depth, conceptual validity, contextual understanding, power to link causes and
outcomes of an issue, and an ability to foster new hypotheses (Flyvbjerg 2005). The
section that follows details the research questions, relevant data sources, and analytical
approach used in this particular investigation.

A. Research Questions

Given the mixed results in the prior literature regarding the predictive utility of various
student factors and characteristics in estimating bar exam passage, our research agenda
addresses a central overarching question: What do we know about bar success based on
a larger collection of relevant and available student data, and what remains unknown?
In this article, we explore that broad question via the following three research
questions.

Research Question 1: What are key predictors of law student success postgraduation,
operationalized as Ohio bar exam passage, at the UC College of Law, and how does this com-
pare to the extant literature?
Research Question 2: How early in a student’s course of study can we reliably predict bar suc-
cess and identify students at risk of failure?
Research Question 3: Can we use available data to learn more about student characteristics or
activities associated with success and failure, particularly for those who perform differently than
predicted by the empirical model?

While there is agreement that law school course taking, including law GPA, is generally
related to bar passage, there is less certainty regarding the effectiveness of measures col-
lected prior to law school enrollment, including UGPA and LSAT scores. Furthermore,
much of the literature to date has explored these relationships in isolation, rather than
attempting to craft a more cohesive narrative about the various influences on student
success and bar passage within a university (Marks & Moss 2016). This article seeks to
deepen the literature by doing just this.

B. Context and Student Sample: University of Cincinnati College of Law

Data for this study were drawn from five cohorts of students from the UC College of Law,
a large, urban university located in the Midwest. The UC College of Law is the fourth

oldest continuously operating law school in the country and has a storied history, having
graduated William Howard Taft, the only person in U.S. history to serve as both the Presi-
dent and Chief Justice of the United States. The UC College of Law is competitive, largely
considered one of the nation’s premier small, urban, public law schools and classified as
a “regional elite” program (Arewa et al. 2014).
The majority of UC graduates elect to sit for the Ohio bar exam. Consistent with
nationwide trends, Ohio bar passage rates have declined over time (see Figure 2),
although Ohio’s bar passage rates remain consistently higher than nationwide results. In 2017, Ohio’s first-time pass rate placed it slightly above the median state performance and well above the
lowest performing state (California), where 58 percent of first-time bar takers passed the
exam. In contrast, Oklahoma bar examinees earned the highest pass rate in 2017
(87 percent).
The UC College of Law is well-positioned as the context for this research for
several reasons. First, the college, like many other law schools across the country,
has struggled with many of the phenomena described in Section I. The composition
of the school is rapidly changing, with a significant reduction in overall enrollment
and a slight increase in the enrollment of students of color and first-generation col-
lege students. Consistent with national trends, the credentials of incoming students
have also declined slightly, largely attributable to a general reduction in the pool of
law school applicants. This reduction in qualifications—particularly those qualifica-
tions that are thought to predict bar exam passage—necessitates a more comprehen-
sive and nuanced understanding of the factors that contribute to law student
success. Finally, the college’s leadership is committed to collaborating on this
research agenda and using results to directly and immediately inform practice, some-
thing unique in educational research.

Figure 2: National and Ohio first-time bar passage rates.

NOTE: Figure 2 presents both Ohio and national first-time bar passage data from 2008 to 2017, drawn from NCBE’s
2017 Statistics (NCBE 2019).

Table 1: Law School Graduate Demographic Data for First-Time Ohio Bar Takers, by Year

Graduate  Admit  OH Bar  Percent  Percent  Mean  Mean  Mean Bar  Percent Bar
Cohort    n      n       Female   URM      UGPA  LSAT  Score     Passage
2012      138    94      43       6        3.51  159   444       89
2013      144    80      36       6        3.53  160   453       84
2014      119    82      44       6        3.56  159   446       88
2015      103    71      38       9        3.46  158   452       84
2016       99    77      40       12       3.44  158   445       90

1. Relevant Student Population

Data were collected from five cohorts of students admitted in 2009–2013 and slated to graduate between 2012 and 2016. Among the 603 students who matriculated to UC in this time-
frame, roughly 67 percent, or 404 students,3 ultimately sat for the Ohio bar, and 383 of those
students took the bar on time and executed waivers to provide bar passage data back to the
university. Those students were selected as the population of interest for this study because
of the greater data granularity available for the in-state bar examinees, as outlined below.
Table 1 presents data summarizing the number of admitted/matriculated students
and the number of graduates who took the Ohio bar from each cohort, along with basic
demographic data regarding Ohio bar takers. The incoming admission credentials
(i.e., UGPA and LSAT scores) are also presented. In general, these data mirror national
trends. The number of matriculated students decreased nearly 30 percent from 138 to
99 students between 2009 and 2013,4 and the number of Ohio bar takers decreased
nearly 20 percent (from 94 to 77 students). There was also a slight decrease in UGPA and LSAT scores, and the percentage of underrepresented minority (URM) students increased from 6 percent to 12 percent. Ohio bar passage rates were more variable year to year and did not reflect an overall downward trend.

2. Data Sources

We utilized de-identified, longitudinal administrative data and bar performance data col-
lected from the UC College of Law. One feature that differentiates this research from
prior investigations of a similar nature is the relatively unprecedented access to bar
results from the Supreme Court of Ohio, including overall passage data, data for individ-
ual subscores and essays, and overall bar exam scores, including both raw and scaled
scores. These data were combined with administrative records to develop a more com-
plete narrative of bar passage at UC, aligned to Astin and Antonio’s (2012) I-E-O model.

[3] Figure 3 presents detailed retention and graduation data for the 603 matriculated students.

[4] For clarity, this is organized by intended graduation year, not admit year.

a. I-E-O model input measures: Student input data included personal qualities and charac-
teristics related to bar success (Astin & Antonio 2012) that students bring with them to
their legal studies. We utilized two primary forms of student input data, collected at the
time of initial application to the UC College of Law, including (1) law school student
admissions data (i.e., LSAT score, UGPA, undergraduate institution, an indicator of
undergraduate institution rigor, and major), and (2) student demographic data, includ-
ing race/ethnicity, which was operationalized in the analyses as a binary indicator of
whether students identify as an underrepresented minority; gender; and age.

b. I-E-O environment measures: Environmental data included the experiences within the
classroom and beyond that lead to degree and bar performance (Astin & Antonio 2012).
Environment data enrich student input data, providing additional information about
how student outcomes develop. For the present study, environment measures include
comprehensive course-taking data from the UC College of Law administrative records
(e.g., course name, term and year of study, and student grade).

c. I-E-O output measures: Student output measures mostly focused on information from the
Ohio bar exam. Relevant bar data included a binary indicator of student passage, the total
Ohio bar scaled score, and Ohio bar subcomponent scaled scores. For the purposes of these
analyses, student success was operationalized as our primary outcome of interest: bar passage.
A sensitivity analysis was also conducted that mirrored these analyses using total exam score
as the outcome, and supplemental analyses considered this outcome as well. Future research will more carefully
examine predictors of Ohio bar exam subcomponent scaled scores and the relationship of
performance on subcomponents to overall bar success. As shown in Figure 3, however, these
students appear quite similar to bar takers in other states in terms of academic performance.

C. Analytical Approach

While the preponderance of literature around student success in law schools has focused
on bar success, it is common for more general studies of postsecondary student success
to focus on graduation and retention of students over time. To situate the present study
in the broader landscape of postsecondary student success, we first characterized the path
to the bar exam for all matriculated students across five graduating cohorts before
exploring the predictors of bar passage. We used descriptive analyses of student attrition
and retention over the entire course of study, and mapped these results in a branching
decision tree to illustrate the time period in which students were most likely to leave the program.
These analyses also clarified the population of students who ultimately graduated and sat
for the bar exam, which comprised the population included in all subsequent analyses of
bar passage. This highlights an important interpretative caution for the present study
and most studies that examine bar passage rates: These analyses by definition limit the
population of interest to students who have completed law school and decided to seek
entrance to the bar. In the current analyses, this excludes roughly 5 percent of matricu-
lated students who did not go on to graduate and roughly 7 percent of graduates who
did not sit for the bar exam in any jurisdiction.

Figure 3: Flow chart of student retention, graduation, and bar examination results.

NOTE: Ohio bar passage data were missing for 19 students; out-of-state bar passage data were missing for 27 students.

After characterizing the population of students who took the bar, we employed
logistic regression to explore predictors of bar passage as posed in Research Question
1. The binary logistic model of bar passage can be generally specified as:

P(Y_s = 1) = e^{f(X)} / (1 + e^{f(X)}),

where Y_s is a dichotomous variable that takes a value of 1 if student s passes the Ohio bar
exam and 0 otherwise, and X represents a combination of candidate variables at various
times throughout the course of study, including student input variables available at the
time of admission and environment variables produced during the program of study
(e.g., course-taking and performance data).
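The binary logistic specification above can be illustrated with a short sketch. The predictor names and coefficient values below are hypothetical placeholders for f(X), not the study's fitted estimates:

```python
import math

def passage_probability(f_x):
    """P(Y_s = 1) = e^{f(X)} / (1 + e^{f(X)}) for a linear predictor f(X)."""
    return math.exp(f_x) / (1.0 + math.exp(f_x))

# Hypothetical linear predictor combining an intercept with two candidate
# inputs (e.g., LSAT and first semester GPA); the weights are illustrative
# only and are not the coefficients estimated in this study.
def linear_predictor(lsat, sem1_gpa, intercept=-20.0, b_lsat=0.10, b_gpa=2.5):
    return intercept + b_lsat * lsat + b_gpa * sem1_gpa

p = passage_probability(linear_predictor(lsat=158, sem1_gpa=3.2))
```

Because the logistic function is monotone in f(X), any predictor with a positive coefficient raises the predicted probability of passage, which is how the model estimates below should be read.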
To address Research Question 2, logistic regression model performance was com-
pared across various temporal configurations, most notably at admission (i.e., prelaw
model), following the first year of study (i.e., post-1L model), and upon graduation
(i.e., post-3L model). To explore the relative predictive power and accuracy of predic-
tors throughout the trajectory of the law school career, we relied on three measures of
model fit and accuracy (presented in Table 2). First, we compared Nagelkerke pseudo
R-squared values for each model. Pseudo R-squared values differ from traditional OLS
R-squared values and represent more generally an empirical estimate of how well a
given model explains the data when compared to similar models. Hosmer et al. (2013)
warn that these figures can mislead readers who are more familiar with a traditional
OLS approach, and they suggest they may be more helpful when selecting among can-
didate models during model building. We also compared models’ predictive accuracy,
using two measures: (1) the accuracy of at-risk identification, or the percent of first-time
Ohio bar takers who failed the bar and were also identified as at risk by the model
(i.e., those with predicted probabilities of passage below 75 percent), and (2) the rate of
false positives, or the percent of students identified as at risk who actually passed the
Ohio bar on the first attempt.[5]
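Both accuracy measures can be computed directly from predicted probabilities and observed outcomes. A minimal sketch with illustrative (made-up) data, using the 75 percent predicted-probability cutoff described above:

```python
def risk_metrics(pred_probs, passed, threshold=0.75):
    """Return (at-risk identification accuracy, false-positive rate).

    A student is flagged at risk when the predicted probability of passage
    falls below the threshold. Identification accuracy is the share of
    actual failers who were flagged; the false-positive rate is the share
    of flagged students who in fact passed.
    """
    flagged = [p < threshold for p in pred_probs]
    n_failers = sum(1 for y in passed if not y)
    hits = sum(1 for f, y in zip(flagged, passed) if f and not y)
    false_pos = sum(1 for f, y in zip(flagged, passed) if f and y)
    return hits / n_failers, false_pos / sum(flagged)

# Illustrative data: four students, two of whom failed the bar.
probs = [0.95, 0.70, 0.60, 0.85]
passed = [True, True, False, False]
accuracy, fp_rate = risk_metrics(probs, passed)
```

In this toy example the model flags two students, catches one of the two failers (50 percent identification accuracy), and one flagged student passes anyway (a 50 percent false-positive rate).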
Finally, analyses for Research Question 3 examined data patterns and trends
related to empirical predictions of passage and misclassification (i.e., students falsely
identified as at risk or probable passers). More specifically, we employed basic descriptive
analyses to examine the complex relationships between bar passage and predicted perfor-
mance, as mediated by student characteristics and student behaviors after the first year of
study, like electing to take a greater number of bar preparation courses.

V. Results
The results that follow are organized into three sections. The first section explores law
student performance at UC in general and examines student outcomes over time, map-
ping retention and student decision making over the entire course of study
(e.g., retention after the first year), at graduation, and for the bar exam. The second

[5] In short, the most predictive models should seek to maximize accuracy of at-risk identification and minimize false positives.
Table 2: Logistic Models of Bar Passage for First-Time Ohio Bar Takers
(standard errors below coefficients)

Variable Category   Predictor Variables              Constant-Only   Prelaw Model   Post-1L Model   Post-3L Model
Empty Model         Constant                         1.88***         –23.32**       –16.80+         –33.80**
                                                     (0.15)          (8.55)         (9.63)          (11.61)
Admissions          LSAT                                             0.12*          0.02            0.01
                                                                     (0.05)         (0.06)          (0.07)
                    Undergrad GPA                                    1.40*          0.82            0.66
                                                                     (0.56)         (0.65)          (0.76)
                    Undergrad selectivity[1]                         0.62+          0.79*           0.91*
                                                                     (0.33)         (0.38)          (0.44)
                    Humanities/socsci/policy major                   –0.69+         –0.91*          –0.84+
                                                                     (0.38)         (0.42)          (0.49)
Demographics        Female                                           0.03           0.11            –0.05
                                                                     (0.34)         (0.39)          (0.47)
                    Age                                              –0.00          0.06            –0.10
                                                                     (0.044)        (0.053)         (0.06)
                    Underrepresented minority                        –0.93+         –0.98           –0.63
                                                                     (0.53)         (0.61)          (0.72)
1L Information      First semester GPA                                              2.68***         0.84
                                                                                    (0.60)          (0.80)
                    Second semester GPA                                             0.42            –2.40**
                                                                                    (0.51)          (0.84)
3L Information      Final law GPA                                                                   8.59***
                                                                                                    (1.69)
                    Upper-level bar course count                                                    0.74***
                                                                                                    (0.17)
Model Summaries     Accuracy of at-risk identification   –           36%            58%             78%
                    Percent of false positives           –           6%             9%              8%
                    Nagelkerke R-squared                 –           0.17           0.37            0.56

+p < 0.10; *p < 0.05; **p < 0.01; ***p < 0.001.
[1] Undergrad selectivity is an overall measure of selectivity based on ACT scores, coded as 1 = inclusive, 2 = selective, and 3 = more selective.
SOURCE: Indiana University Center for Postsecondary Research (n.d.) The Carnegie Classification of Institutions of Higher Education, 2015 ed. Bloomington, IN: Author.

section summarizes results of logistic models of bar passage to answer Research Questions
1 and 2. The third section presents results of descriptive analyses to deepen understand-
ing of bar passage patterns outlined in Section II.

A. Student Success Over Time and the Path Toward the Bar

Figure 3 summarizes the patterns of student enrollment and attrition across all five
cohorts, at various points as students progress toward the bar exam. Several important
patterns emerge upon close examination of these data. First, at the UC College of Law,
the vast majority of students successfully complete the law program, earning a JD. In
total, 595 of 603 matriculated students followed a traditional course of study, meaning
they continuously enrolled in courses each fall and spring throughout the program. Only
eight students in the five-year sample took a leave of absence and returned at a later date;
two of these students went on to take the Ohio bar and are included in subsequent ana-
lyses of bar passage. Among students following a traditional course of study, nearly 95 percent
of students received their JD. Among the 30 students on a traditional course of study
who did not graduate, the vast majority—26 out of 30—left before the beginning of the
second year, or 2L. While these 30 students had similar incoming credentials as gradu-
ates, their mean cumulative law school GPA at the time of attrition was considerably
lower (i.e., almost two standard deviations) than the mean cumulative GPA of their grad-
uating peers. This pattern suggests that although admission information would have been
unlikely to identify these students as at risk for noncompletion, course performance may
have been an early signal.
The trends in the data are less clear when differentiating between students who
elect to take the bar exam and those who do not. Once again, incoming credentials were
quite similar, although the differences in cumulative law school GPA were much smaller.
A myriad of reasons may inform students’ decisions to sit for the bar. Among bar takers,
402 students on the traditional course of study—and two additional students not on a tra-
ditional sequence—took the Ohio bar, and 124 took an out-of-state bar. Passage rates for
Ohio and out-of-state bar takers were similar, 87 percent and 82 percent, respectively. As
with graduation trends, there were once again marked differences in mean cumulative
law school GPA among those who pass the bar and those who do not, both within Ohio
and beyond.

B. Predicting Bar Passage

As described in Section IV, logistic regression was employed to explore predictors of bar
passage for the 385 students who graduated and sat for the Ohio bar exam. Once again,
it is worth noting that these analyses exclude students who elected not to sit for the bar
at all. It is unclear what motivated those students’ decisions or whether they would have
been equally likely to pass the bar as students who did take the bar. As such, the implica-
tions of this research apply most directly to students who intend to take the bar, and may
not be applicable to student success more broadly construed.
Figure 4 compares models across various temporal configurations, most notably at
admission, following the first year of study, and upon graduation, using pseudo R-squared
estimates.6 We present results for eight tested models—including the prelaw, post-1L, and
post-3L models (highlighted in dark gray)—ordered by the timing of the availability of
various predictors. It is notable that the Contracts-Only model, which includes only a

[6] Linear models of bar exam score were also tested as a sensitivity analysis of this decision, and the results were consistent with the findings presented here. For ease of interpretation, faithfulness to our a priori modeling choices, and to limit article length, we only present results of the logistic regression here.

Figure 4: Pseudo R-squared across various temporal models of bar passage.

NOTE: Figure 4 presents pseudo R-squared estimates, or empirical estimates, of how well a given model explains
the outcome of interest for eight models of bar passage, ordered by the timing of the availability of various predic-
tors. Three models highlighted in dark gray represent the nested prelaw, post-1L, and post-3L models. The prelaw
model includes predictors related to incoming student credentials (e.g., UGPA and LSAT), undergraduate selectiv-
ity and major, and student demographics. The post-1L model includes those same predictors, and adds information
from the first year of law school (e.g., first and second semester GPA). The post-3L model adds the final law school
GPA and the number of upper-level bar courses taken. The models represented in light gray bars are not nested
models, meaning they do not incorporate all predictors that precede them.

single predictor—student performance in the contracts course—has a higher pseudo
R-squared than the prelaw model, which utilizes all information available at the time of
admission, including incoming student credentials (e.g., UGPA and LSAT), undergraduate
selectivity and major, and student demographics. The selection of contracts versus other
first-year courses was not substantively important—except that it represented the course
with the strongest predictive power overall. Instead, it was included in Figure 4 to demon-
strate the inherent limitations in predicting passage based on incoming credentials and
the predictive power of in-school experiences during legal education. Similarly, the first
semester GPA and 1L GPA were even more predictive than the contracts course. The
post-1L model incorporates all available data at the end of the first year of study, includ-
ing all predictors included in the prelaw model and information from the first year of law
school (i.e., first and second semester GPA). The post-3L model adds the final law school
GPA and the number of upper-level bar courses taken, and is the most predictive model
overall.

Table 3: Student Characteristics and Outcomes by Semester 1 GPA

Sem1 GPA Quartile   n     UGPA   LSAT   Sem1 GPA   Final GPA   Bar Score   Bar Pass %
1                   86    3.43   157    2.44       3.01        418         65
2                   101   3.48   158    2.88       3.26        440         86
3                   99    3.47   159    3.22       3.44        454         94
4                   118   3.60   160    3.67       3.72        470         98

NOTE: There were a significant number of ties after the first semester, which explains the uneven distribution of students into each quartile.

To provide more detailed information, Table 2 summarizes model estimates for


the three cumulative models highlighted in Figure 4. Several important findings emerge
from a deeper comparison of these models and the predictors contained within each.
First, both demographic data and admission data provided limited predictive utility
among students who matriculated to the UC College of Law and sat for the Ohio bar.
While both LSAT and UGPA are statistically significant predictors of bar passage, the
pseudo R-squared and overall model performance are comparatively weak. Furthermore,
the prelaw model was only able to accurately identify 36 percent of the students who would
go on to fail the bar, whereas models that relied on data from in-school experiences yielded
considerably more accurate predictions. More specifically, the post-1L model accurately
identified 58 percent of failers, and the most comprehensive post-3L model accurately
detected nearly four out of five students who would fail the bar. Furthermore, neither
LSAT nor UGPA nor any demographic variable was a significant predictor of bar passage
once student performance in law school was included in the model, suggesting that the
relative explanatory power of those predictors diminished when controlling for in-school
experiences.[7]
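The multicollinearity check described in note 7 can be sketched generically. The simulated predictors below merely stand in for measures like first semester, second semester, and cumulative law school GPA; they are not the study's data:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of predictor matrix X:
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept)."""
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r_sq = 1.0 - resid.var() / y.var()
        factors.append(1.0 / (1.0 - r_sq))
    return factors

# Simulated predictors: x1 and x3 are nearly collinear, x2 is independent,
# so the VIFs for x1 and x3 should be large and the VIF for x2 near 1.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.1 * rng.normal(size=200)
factors = vif(np.column_stack([x1, x2, x3]))
```

Conventional rules of thumb treat VIF values above roughly 5 or 10 as potentially problematic, which is the kind of guideline the footnote's check applies.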
Second, Table 2 demonstrates that information gleaned from the first year of
study—including the first and second semester GPA—comprises some of the most signifi-
cant predictors across models. This suggests that student performance in the first year of
legal study is quite meaningful in predicting bar performance and provides considerable
insights into bar passage. To explore the relationship between first semester GPA and bar
passage more closely, we present Table 3. Table 3 displays student characteristics and out-
comes by first semester GPA quartile. In general, students’ incoming admissions

[7] One might assume this finding could be explained by multicollinearity between incoming credentials and academic performance during law school. In fact, Sander (2004) might suggest that student UGPA and performance
on the LSAT are almost directly related to student performance in the first year of law school. However, to test for
possible methodological issues related to multicollinearity, we reviewed the bivariate correlations and variance
inflation factors for our predictors. LSAT score was only weakly to moderately weakly correlated with total exam
score (0.30), first semester GPA (0.34), second semester GPA (0.27), and final law GPA (0.29). Using VIF, we
found potential issues of multicollinearity only between first semester GPA, second semester GPA, and cumulative
law school GPA. However, the VIF values were still well within the guidelines suggested by the literature, leading us
to conclude that there were no problematic multicollinearity issues with the LSAT measure.
credentials, including UGPA and LSAT, varied only slightly across quartile,[8] while bar
passage rates varied considerably. More notably, almost all students who did not pass the
bar were clustered in the bottom half of the class (Quartiles 1 and 2).
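The uneven quartile sizes noted under Table 3 follow from how tied first semester GPAs are binned. A minimal sketch with hypothetical GPAs, using one common quantile convention:

```python
from statistics import quantiles

def gpa_quartiles(gpas):
    """Assign each GPA to a quartile (1 = lowest). Students with identical
    GPAs land in the same quartile, so when many GPAs tie near a cut
    point, quartile sizes become uneven."""
    cuts = quantiles(gpas, n=4)  # three interior cut points
    return [1 + sum(g > c for c in cuts) for g in gpas]

# Hypothetical first-semester GPAs with a cluster of ties at 2.50.
gpas = [2.0, 2.5, 2.5, 2.5, 3.0, 3.2, 3.5, 3.9]
qs = gpa_quartiles(gpas)
```

Because the three tied 2.5 GPAs fall at a cut point, they are all placed in the same quartile, producing unequal group sizes like those in Table 3.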
Finally, the 3L model yields significantly more explanatory power than the preced-
ing models, including the 1L model. Not only did the 3L model more accurately predict
who failed the bar, but both cumulative law school GPA and the number of upper-level
bar courses taken appeared to be significant predictors of bar passage, even holding first-
year performance constant. We interpret this cautiously to suggest that what is happening
during law school—both within and beyond the first year of study—is strongly associated
with student bar performance.

C. A Deeper Look at Course-Taking Patterns and Bar Passage

To examine patterns and trends in the courses students took more closely, additional
analyses considered the relationship between courses addressing content tested on the
Ohio bar (e.g., upper-level bar courses) and student bar passage. Upper-level bar courses
are not required courses at the UC College of Law, but are instead post-first-year courses
aligned to tested content on the Ohio bar. The UC College of Law currently offers
10 such courses. Rush and Matsuo point out that although “law school faculties strongly
advocate that students take upper division, bar examination subject-matter courses”
(2007:227), there is little to no empirical evidence to date demonstrating such courses
can improve passage rates overall. To begin understanding this complex relationship, we
employed basic descriptive statistics to explore trends between the number of upper-level
bar courses (ULBC) students elected to take and predictions of bar passage within
our data.
Across the sample, students took anywhere from two to 10 of these courses, and
those choices were associated with their success on the bar exam. As noted in Table 2,
the number of ULBC taken was a significant predictor of bar passage in the post-3L
model. In fact, each additional ULBC was associated with a 2.1 times increased odds of
bar passage, while controlling for final GPA, the first year of study, and incoming creden-
tials and student demographics.
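The 2.1 figure is the exponentiated logit coefficient for ULBC count (0.74 in the post-3L model of Table 2), since exponentiating a logistic regression coefficient gives the multiplicative change in odds per one-unit increase in the predictor:

```python
import math

# Table 2 reports a logit coefficient of 0.74 for upper-level bar course
# count in the post-3L model; exp(0.74) is the odds ratio per course.
coef_ulbc = 0.74
odds_ratio = math.exp(coef_ulbc)  # roughly 2.1

# Odds of passage after taking additional ULBC, holding all else constant.
def updated_odds(current_odds, n_extra_courses=1):
    return current_odds * odds_ratio ** n_extra_courses
```

Note that the multiplier compounds: under this model, two additional courses are associated with roughly a fourfold change in odds, again holding the other predictors constant.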
The potential impact of the relationship between ULBC taking and bar perfor-
mance may be particularly crucial for students defined at risk after their first year (i.e., by
the post-1L model). Table 4 presents data disaggregated by identified risk from the post-
1L model and actual bar performance. Overall, regardless of risk status, students who pas-
sed the bar took more upper-level bar courses on average than those who did not pass
the bar. This relationship holds among students identified at risk by the model, even
though those students had considerably lower GPAs in their first year.

[8] In fact, the bivariate correlations between LSAT and first and second semester GPA were quite weak (0.34 and
0.27, respectively). This means that LSAT explains less than 10 percent of the variation in first and second semes-
ter GPA, which is roughly equivalent to its relationship with bar passage. The relationship between first semester
GPA and bar passage is much stronger (0.57).

Table 4: ULBC Among At-Risk Students

Predicted Risk and Actual Performance   n     UGPA   LSAT   Mean Sem1 GPA   Mean N ULBC
At risk; passed bar                     33    3.36   157    2.56            5.8
At risk; did not pass                   29    3.39   155    2.40            4.9
Not at risk; passed bar                 295   3.53   159    3.24            5.5
Not at risk; did not pass               21    3.43   159    2.94            4.5

VI. Discussion and Implications for Legal Education


With regard to the key predictors of law student success postgraduation, operationalized
as bar passage, the analyses outlined above suggest bar passage can be predicted by a bat-
tery of variables. Despite considerable attention in the existing literature, these results
suggest that student inputs from the I-E-O model—including student demographics,
LSAT, and UGPA—explained little variation in bar passage. The selectivity of undergrad-
uate institution is one exception here, with students from more selective institutions
more likely to pass the Ohio bar across all three temporal models, even when controlling
for incoming credentials and law school performance. On the other hand, as both Kuh
et al. (2006) and Astin and Antonio (2012) suggest, student experiences within the
institution—captured as the environment variables in the I-E-O model—provided much
more predictive power than data collected prior to matriculation. Both course perfor-
mance (i.e., law school GPA) and course-taking patterns (i.e., the number of upper-level
bar courses taken) predicted student bar performance. As noted above, controlling for
student law school GPA, demographics, and admissions data, each additional upper-level
bar course taken was associated with a 2.1 times increase in the odds of bar passage. Of
course, we cannot be certain whether the relationship between upper-level bar course tak-
ing and bar passage is attributable to coursework that better prepares students for the
exam or selection effects among students who opt into ostensibly more rigorous cour-
sework. Encouragingly, though, the relationship holds for students at risk of failing the
bar, which might support arguments for the former explanation. This finding may there-
fore have immediate implications for student advising and course sequencing in legal
education at UC and within the broader field.
In addition to considering which predictors are most strongly associated with bar
passage, this article also poses the question of when we can reliably predict bar success
and identify students at risk of failure. Not surprisingly, the post-3L model provided the
most accurate and robust predictions of bar passage overall, accurately identifying 78 per-
cent of the students who did not pass the bar. However, identifying students at risk of not
passing the bar after graduation may not be particularly helpful, especially if the aim is to
be able to intervene on their behalf. While overall model fit and accuracy are somewhat
diminished, insights gleaned from the post-1L model may provide more actionable infor-
mation, and that model was still able to predict 58 percent of the students who did not
pass the bar. While the post-1L model identified fewer at-risk students than the post-3L
model, we believe it is defensible to trade some predictive power for actionable results.
At the end of the first year of law school, there were enough data available to allow UC to
identify nearly three in five students at risk of failing the bar; this could enable the pro-
gram to design and implement intervention or intensive coaching and advising to
improve the likelihood of success for those students. This might also provide data that
could facilitate secondary analyses about the students currently served by academic sup-
port programs to ensure the students most in need of support are, in fact, the ones
served.
Consider, for example, the following thought experiment. Imagine the UC College
of Law was able to develop an intervention applied after the first year to support students
identified as at risk for not passing the bar. Using empirical data from the post-1L model,
62 students in the five cohorts would have received this intervention, or just over 12 each
year. Thirty-three of those students would have passed the bar anyway, so the intervention
would not likely have affected their performance. However, for the other 29 students,
even marginal gains on total bar score may have meant the difference between passing
and failing the bar exam. In fact, we have bar exam scores for 16 of these students who
took the Ohio bar and shared their data back to UC. Even a modest 10-point improve-
ment in scores would have resulted in seven of the 16 students, or 43 percent, passing
the bar when they otherwise would have failed. For those seven students, passing the bar
would have been invaluable: Not only would they have access to the legal profes-
sion and greater financial opportunities as a result, but they would also be spared some
of the deleterious effects that occur when students do not pass the bar, as summarized
above. Furthermore, at a small institution like UC, increasing bar passage for even seven
students could have substantial consequences for bar passage rates, which could also
affect national rankings and accreditation.
While the descriptive statistics deepen our understanding of bar passage and fail-
ure, it is still valuable to consider where the model is unable to accurately predict student
bar performance. Understanding the limitations of the model is an important line of
inquiry, although one that requires considerably more attention than the scope of the
present study. Consider this: despite the predictive power of the models described above,
even the best-performing model was unable to predict student performance accurately
100 percent of the time. Our best model was still only able to identify 78 percent of stu-
dents who did not pass the bar, meaning the other 22 percent of students who did not
pass were misidentified. Furthermore, that model identified 30 students as at risk who
ultimately passed the bar. The analyses of the relationship between predicted and
observed bar performance as it relates to ULBC taking helps us begin to understand the
factors related to passage beyond what is captured in the models presented in this article.
In general, however, those analyses raise more questions than answers, and additional
research is needed to more fully explore the complexity of bar passage.

VII. Conclusion and Next Steps


In total, this article provides important insights into bar passage at a major research uni-
versity. It does not only provide insights into student success at the UC College of Law,
however; it also has the potential to contribute to the growing literature base regarding
bar exam outcomes and law student success. At a minimum, we believe our findings sug-
gest that legal education matters. At the UC College of Law, once students were admitted,
what happened during their course of study contributed in meaningful ways to the varia-
tion seen in bar passage—much more so than incoming credentials or student demo-
graphics. We hope these results contribute to the ongoing discussion regarding the role
of incoming credentials and law school experiences in predicting bar success. We also
anticipate that this article will spark conversations about the identification of struggling
students early in their law program and the importance of course taking in student
outcomes.
Nevertheless, these findings are limited in scope. To grow and evolve our collective
understanding of law student success, the field of legal education needs additional
research in at least three key areas. First, we need research that relies on more and better
data regarding law student success. As noted throughout this article, the predictive
models used in the present study yield insights about bar passage only among students
who self-select to sit for the bar following graduation. A small but nonetheless important
group of students remove themselves from these analyses by electing not to take the bar,
and it is unclear that our findings hold for this unique population of students. Future
analyses may want to explore these data more closely. Additional research might also con-
sider how the inclusion of additional academic and nonacademic predictors strengthens
model predictive power and provides insights into the characteristics of students mis-
classified by existing empirical models of student success. This collection of research
would paint a much more complete picture of law student success.
Second, the field desperately needs additional research to examine these patterns
across a wider variety of contexts and institutions, including schools where significantly
fewer students pass the bar. While case studies can provide rich information about a par-
ticular phenomenon, we recognize that the generalizability of the present findings is
limited because they rely on results from a single jurisdiction and graduates from a single
institution—and one where the vast majority of students are ultimately successful on the
bar exam. The need for multisite research is particularly timely given the widespread
adoption of the Uniform Bar Examination, which for the first time will result in portable
scores and comparable results across bar exam takers. The UBE thus has the potential
to allow for sophisticated analyses across states
and jurisdictions, which will strengthen the literature regarding law student success and
allow legal educators to understand more deeply the program elements and interventions
that ensure their students are successful upon graduation.
Finally, future research should investigate whether student engagement and student
experiences in law school affect bar passage more than student admission credentials or
academic performance. Conventional wisdom suggests that in-school experiences, particu-
larly participation in academic support programs (ASPs), may improve bar passage.
Although most law schools now offer a variety of support programs (Schulze 2011), the lit-
erature regarding their effectiveness is relatively shallow. Bar success is often the focus of
support programs, but Schulze (2011) argues that ASPs do more than just improve bar pas-
sage. They also “help humanize the law school environment” and support students in
developing self-determination and autonomy (Schulze 2011:272). Research that considers
the impact of in-school experiences on both quantitative outcomes (e.g., bar passage) and
long-term qualitative outcomes (e.g., satisfaction with the profession or perceptions of the
alma mater) would add important detail to the scholarship on law student success.
The consequences of declining bar passage rates are not just a concern within the
legal profession, but also within the broader field of education and in society writ large.
Legal education institutions have a moral obligation to train and educate students ade-
quately and prepare them to be productive and competent members of the legal profes-
sion. Given the shifting enrollment in law schools, declining bar passage rates, and a
general adoption of policies and practices related to academic support for law students,
results from the present study could motivate some of the additional areas of research
outlined above. The time is ripe for additional inquiry into law student experiences, and
research like the present study has the potential to impact local and national conversa-
tions about law student success, particularly for underprepared or traditionally underrep-
resented students. We urge the field to continue to call attention to the good work
happening in law schools across the country and to highlight the need for researchers,
educators, and practitioners to more comprehensively understand law student success
before addressing the so-called crisis in legal education.
References
Albanese, M. A. (2018) “July 2018 MBE: The Storm Surge, Again,” Fall Bar Examiner 30. Retrieved
from http://www.ncbex.org/.
Alphran, D., T. Washington, & V. Eagan (2011) “Yes We Can, Pass the Bar: University of the District
of Columbia, David A. Clarke School of Law Bar Passage Initiatives and Bar Pass Rates—From
the Titanic to the Queen Mary!” 14 Univ. of the District of Columbia Law Rev. 9.
Arewa, O. B., A. P. Morriss, & W. D. Henderson (2014) “Enduring Hierarchies in American Legal
Education,” 89 Indiana Law J. 941.
Astin, A. W., & A. L. Antonio (2012) Assessment for Excellence: The Philosophy and Practice of Assessment
and Evaluation in Higher Education, 2d ed. Lanham, MD: Rowman & Littlefield Publishers.
Austin, K. A., C. M. Christopher, & D. Dickerson (2017) “Will I Pass the Bar Exam?: Predicting Student Success Using LSAT Scores and Law School Performance,” 45(3) Hofstra Law Rev. 753.
Barnes, K. Y. (2007) “Is Affirmative Action Responsible for the Achievement Gap Between Black and
White Law Students?” 101 Northwestern Univ. Law Rev. 1759.
Bell, D. A., Jr. (1981) “Law School Exams and Minority-Group Students,” 7(2) Black Law J. 304.
Bernstine, D. O. (nd) Why LSAT Scores Should Not Be Used to Label Law Schools and Their Students.
Retrieved from https://www.lsac.org.
Case, S. M. (2011) “The Testing Column: Identifying and Helping At-Risk Students,” 80(4) Bar Exam-
iner 30.
Christopher, C. M. (2015) “Eye of the Beholder: How Perception Management Can Counter Stereo-
type Threat Among Struggling Law Students,” 53 Duquesne Law Rev. 161.
Curcio, A. A. (2002) “Society of American Law Teachers Statement on the Bar Exam,” 52 J. of Legal
Education 446.
Cutright, P., K. Cutright, & D. G. Boshkoff (1975) “Course Selection, Student Characteristics and
Bar Examination Performance: The Indiana University Law School Experience,” 27(2) J. of
Legal Education 127.
Delgado, R. (2001) “Edward L. Barrett, Jr. Lecture on Constitutional Law: Official Elitism or Institu-
tional Self Interest? 10 Reasons Why UC-Davis Should Abandon the LSAT (And Why Other
Good Law Schools Should Follow Suit),” 34 UC Davis Law Rev. 593.
Georgakopoulos, N. L. (2013) “Bar Passage: GPA and LSAT, Not Bar Reviews,” Robert H. McKinney
School of Law Legal Studies Research Paper 2013-30.
Gerst, S., & G. Hess (2009) “Professional Skills and Values in Legal Education: The GPS Model,” 42
Valparaiso Univ. Law Rev. 513.
Goforth, C. (2015) “Why the Bar Examination Fails to Raise the Bar,” 42 Ohio Northern Univ. Law
Rev. 47.
Harris, C. I., & W. C. Kidder (2004) “The Black Student Mismatch Myth in Legal Education: The Sys-
temic Flaws in Richard Sander’s Affirmative Action Study,” 46 J. of Blacks in Higher Education 102.
Hosmer, D. W., S. Lemeshow, & R. X. Sturdivant (2013) Applied Logistic Regression, 3d ed. Hoboken,
NJ: Wiley.
Howarth, J. (1997) “Teaching in the Shadow of the Bar,” 31 Univ. of San Francisco Law Rev. 927.
Indiana University Center for Postsecondary Research (nd) The Carnegie Classification of Institutions of
Higher Education, 2015 ed. Bloomington, IN: Author.
Kidder, W. C., & A. Onwuachi-Willig (2013) “Still Hazy After All These Years: The Data and Theory
Behind Mismatch,” 92 Texas Law Rev. 895.
Klein, S. P. (2001) “Law School Admissions, LSATs, and the Bar,” 15(1) Academic Questions 33.
Kuh, G. D., J. Kinzie, J. A. Buckley, B. A. Bridges, & J. C. Hayek (2006) What Matters to Student Success:
A Review of the Literature. Washington, DC: U.S. Department of Education.
Landrum, S. D. (2015) “Drawing Inspiration from the Flipped Classroom Model: An Integrated
Approach to Academic Support for the Academically Underprepared Law Student,” 53
Duquesne Law Rev. 245.
Marks, A. B., & S. A. Moss (2016) “What Predicts Law Student Success? A Longitudinal Study Correlating
Law Student Applicant Data and Law School Outcomes,” 13(2) J. of Empirical Legal Studies 205.
McEvers, K., narrator (2016) “As Law School Applicant Pool Shrinks, Student Bodies Diversify,”
April 26 All Things Considered. Washington, DC: National Public Radio.
Merritt, D. J., L. L. Hargens, & B. F. Reskin (2001) “Raising the Bar: A Social Science Critique
of Recent Increases to Passing Scores on the Bar Exam,” 69(2) Univ. of Cincinnati Law
Rev. 929.
Merritt, D. J. (2015) “LSAT Scores and Eventual Bar Passage Rates,” December 15 Faculty Lounge.
Retrieved from http://www.thefacultylounge.org/.
Nance, J. P., & P. E. Madsen (2014) “An Empirical Analysis of Diversity in the Legal Profession,” 47
(2) Connecticut Law Rev. 271.
National Conference of Bar Examiners (NCBE) (2018) “2017 Statistics,” Spring Bar Examiner
9. Retrieved from http://www.ncbex.org/.
——— (2019) Adoption of the Uniform Bar Examination with NCBE Tests Administered by Non-UBE Jurisdic-
tions. Retrieved from http://www.ncbex.org/exams/ube/.
Olson, E. (2015) “Study Cites Lower Standards in Law School Admissions,” October 26 New York
Times. Retrieved from https://www.nytimes.com.
Rhode, D. L. (2015) “Law Is the Least Diverse Profession in the Nation. And Lawyers Aren’t Doing
Enough to Change That,” May 27 Washington Post. Retrieved from http://www.
washingtonpost.com.
Rosin, G. S. (2008) “Unpacking the Bar: Of Cut Scores and Competence,” 32 J. of the Legal Profession 67.
Rush, D. K., & H. Matsuo (2007) “Does Law School Curriculum Affect Bar Examination Passage? An
Empirical Analysis of Factors Related to Bar Examination Passage During the Years 2001
Through 2006 at a Midwestern Law School,” 57(2) J. of Legal Education 224.
Ryman, A., J. Ellis, R. Wolcott, & M. Payne (2019) “Law Schools Where Too Many Graduates Fail the
Bar Exam May Face Tougher Sanctions: With Tens of Thousands of Dollars in Student Debt,
Some Law School Grads Are Unable to Pass the Bar to Become Lawyers. Are Their Law
Schools to Blame?” January 24 USA Today. Retrieved from https://www.usatoday.com.
Sander, R. H. (2004) “A Systemic Analysis of Affirmative Action in American Law Schools,” 57
Stanford Law Rev. 367.
Sander, R. H., & S. Taylor, Jr. (2012) Mismatch: How Affirmative Action Hurts Students It’s Intended to
Help, and Why Universities Won’t Admit It. New York: Basic Books.
Schulze, L. N., Jr. (2011) “Alternative Justifications for Law School Academic Support Programs:
Self-Determination Theory, Autonomy Support, and Humanizing the Law School,” 5 Charleston Law Rev. 269.
——— (2017) “Using Science to Build Better Learners: One School’s Successful Efforts to Raise its
Bar Passage Rates in an Era of Decline,” Florida International Univ. Legal Studies Research Paper
17-11.
Sloan, K. (2015) “Has Law School Enrollment Hit Rock Bottom?” July 20 National Law J. Retrieved
from http://www.nationallawjournal.com/.
Song, J. (2014) “Fewer Law School Graduates Pass Bar Exam in California,” December 8 Los Angeles
Times. Retrieved from https://www.latimes.com.
——— (2015) “As Fewer Californians Pass the Bar, Are LSAT Scores an Early Indicator of Success?”
November 13 Los Angeles Times. Retrieved from https://www.latimes.com.
Stake, J. E., & M. Alexeev (2015) “Who Responds to U.S. News and World Report Rankings?” 12(3) J. of
Empirical Legal Studies 421.
Subotnik, D. (2013) “Does Testing = Race Discrimination?: Ricci, the Bar Exam, the LSAT, and the
Challenge of Learning,” 8 UMass Law Rev. 332.
Taylor, A. N. (2014) “As Law Schools Struggle, Diversity Offers Opportunities,” February 10 Chronicle
of Higher Education. Retrieved from http://www.chronicle.com/.
——— (2015) “Diversity as a Law School Survival Strategy,” 59 Saint Louis Univ. Law J. 321.
Tinto, V. (1975) “Dropout from Higher Education: A Theoretical Synthesis of Recent Research,” 45
Rev. of Educational Research 89.
Ward, S. F. (2016) “Slight Increase in 1Ls for 2016, New ABA Data Shows,” December 15 ABA J.
Retrieved from http://www.abajournal.com/.
——— (2019) “ABA House of Delegates Rejects Changes to the Bar Passage Standard for Law
Schools,” January 28 ABA J. Retrieved from http://www.abajournal.com.
Wellen, A. (2005) “The $8.78 Million Maneuver,” July 31 New York Times. Retrieved from http://www.
nytimes.com/.
Wightman, L. F. (1998) LSAC National Longitudinal Bar Passage Study. Newtown, PA: Law School
Admission Council.
Williams, D. (2013) “Do Racial Preferences Affect Minority Learning in Law Schools?” 10(2) J. of
Empirical Legal Studies 171.
Yakowitz, J. (2010) “Marooned: An Empirical Investigation of Law School Graduates Who Fail the
Bar Exam,” 60(1) J. of Legal Education 3.