
National Assessment of Title I
Interim Report

Volume I: Implementation of Title I

A Report Prepared for IES by the Office of Planning, Evaluation, and Policy Development, Policy and Program Studies Service

Institute of Education Sciences
National Center for Education Evaluation and Regional Assistance
U.S. Department of Education

February 2006

NCEE 2006-4001R
National Assessment of Title I
Interim Report

Volume I: Implementation of Title I

A Report Prepared for IES by the Policy and Program Studies Service, Office of Planning, Evaluation and Policy Development

Stephanie Stullich
Elizabeth Eisner
Joseph McCrary
Collette Roney

Institute of Education Sciences
National Center for Education Evaluation and Regional Assistance
NCEE 2006-4001R

U.S. Department of Education
February 2006
U.S. Department of Education
Margaret Spellings
Secretary

Institute of Education Sciences


Grover J. Whitehurst
Director

National Center for Education Evaluation and Regional Assistance


Phoebe Cottingham
Commissioner

February 2006; Reprinted May 2006

This report is in the public domain. Authorization to reproduce it in whole or in part is granted. While permission to reprint this publication is not necessary, the citation should be: Stullich, Stephanie, Eisner, Elizabeth, McCrary, Joseph, and Roney, Collette, Policy and Program Studies Service,
Office of Planning, Evaluation, and Policy Development. National
Assessment of Title I Interim Report: Volume I: Implementation of Title I,
Washington, DC: U.S. Department of Education, Institute of Education
Sciences, 2006.

To order copies of this report,

• Write to ED Pubs, Education Publications Center, U.S. Department of Education, P.O. Box 1398, Jessup, MD 20794-1398.

• Call in your request toll free to 1-877-4ED-Pubs. If 877 service is not yet available in your area, call 800-872-5327 (800-USA-LEARN). Those who use a telecommunications device for the deaf (TDD) or a teletypewriter (TTY) should call 800-437-0833.

• Fax your request to 301-470-1244.

• Order online at www.edpubs.org.

This report also is available on the Department’s website at http://www.ed.gov/ies.

Upon request, this report is available in alternate formats such as Braille, large print, audiotape, or computer diskette. For more information, please contact the Department's Alternate Format Center at 202-260-9895 or 202-205-8113.
Contents
Executive Summary ....................................................................... iii

I. Introduction .................................................................................. 1
   A. National Assessment of Title I .................................................. 1

II. Overview of Title I Program ......................................................... 3
   A. Key Provisions of Title I under the No Child Left Behind Act ...... 3
   B. Profile of Title I Participants and Resources .............................. 6

III. Trends in Student Achievement ................................................ 11
   A. Student Achievement on State Assessments ........................... 12
   B. Student Achievement on the National Assessment of Educational Progress ... 21
   C. Graduation Rates ................................................................... 27

IV. Implementation of State Assessment Systems .......................... 31
   A. Development of Assessments Required under No Child Left Behind ... 31
   B. Inclusion and Accommodations ............................................... 33
   C. Reporting Assessment Data for Use in School Improvement Efforts ... 35

V. Accountability and Support for School Improvement .................. 39
   A. School and District Identification for Improvement ................... 40
   B. Adequate Yearly Progress Ratings for Schools and Districts ....... 45
   C. Communication of School Performance Results ........................ 48
   D. School Improvement Efforts and Assistance for Identified Schools and Districts ... 50
   E. Accountability Under State Initiatives and Title III of NCLB ........ 58

VI. Title I School Choice and Supplemental Educational Services ..... 63
   A. Eligibility and Participation ...................................................... 63
   B. Parental Notification ............................................................... 66
   C. Monitoring and Evaluation of Supplemental Service Providers ... 68

VII. Teacher Quality and Professional Development ........................ 71
   A. State Definitions of Highly Qualified Teachers .......................... 72
   B. Teachers’ Highly Qualified Status ............................................ 75
   C. Professional Development ...................................................... 79
   D. Qualifications of Title I Paraprofessionals ................................ 82

References .................................................................................... 85

Acknowledgments .......................................................................... 87

Appendices .................................................................................... 89
   Appendix A. Descriptions of Major Data Sources Included in This Report ... 89
   Appendix B. Supplemental Exhibits .............................................. 99
   Appendix C. Standard Error Tables ............................................ 109

Endnotes ..................................................................................... 131
Executive Summary
The Title I program began in 1965 as part of the Elementary and Secondary
Education Act (ESEA) and is intended to help ensure that all children have the
opportunity to obtain a high-quality education and reach proficiency on
challenging state standards and assessments. The No Child Left Behind Act of
2001 (NCLB) built upon and expanded the assessment and accountability
provisions that had been enacted as part of the ESEA’s previous reauthorizing
legislation, the Improving America’s Schools Act (IASA), while also creating new
provisions related to parental choice and teacher quality. These and other
changes were intended to increase the quality and effectiveness not only of the
Title I program, but also of the entire elementary and secondary education
system in raising the achievement of all students, particularly those with the
lowest achievement levels.

As part of the No Child Left Behind Act, the Congress mandated a National
Assessment of Title I to evaluate the implementation and impact of the program.
The mandate specifically requires a longitudinal study of Title I schools, as well
as an Independent Review Panel composed of expert researchers and
practitioners to advise the U.S. Department of Education on the conduct of the
National Assessment. An interim report is due in 2005 and a final report is due
in 2007.

This report constitutes Volume I of the National Assessment of Title I Interim Report and focuses on implementation of key Title I provisions related to state
assessments, accountability, school choice and supplemental educational
services, and teacher quality, as well as examining trends in student
achievement. The report draws on data from two evaluations of NCLB
implementation conducted by the Department, the National Longitudinal Study
of NCLB and the Study of State Implementation of Accountability and Teacher
Quality Under NCLB, both of which collected data in the 2004-05 school year.
The report also includes data from earlier studies, state performance reports,
the National Assessment of Educational Progress, and other sources.

The final report will provide more complete data on Title I implementation and
outcomes, including information about the targeting and uses of Title I funds,
services for private school students, findings from a parent survey about
parents’ experiences with choice options, and analyses of a) student outcomes
associated with participation in the Title I choice and supplemental services
options and b) the impact on student achievement of identifying schools for
improvement.

A. Key Provisions of Title I under the No Child Left Behind Act
NCLB, which went into effect beginning with the 2002-03 school year, strengthened the assessment and accountability provisions of the law. Exhibit E-1 summarizes the key provisions.

Exhibit E-1
Key Provisions of the No Child Left Behind Act

State assessments: States must implement annual state assessments in reading and mathematics in grades 3-8 and at least once in grades 10-12, and in science at least once in each of three grade spans: 3-5, 6-9, and 10-12. Assessments must be aligned with challenging state content and academic achievement standards. States must provide for participation of all students, including students with disabilities and limited English proficient (LEP) students. States must provide for the assessment of English language proficiency of all LEP students.

Adequate yearly progress (AYP): States must set annual targets that will lead to the goal of all students’ reaching proficiency in reading and mathematics by 2013-14. For each measure of school performance, states must include absolute targets that must be met by key subgroups of students (major racial/ethnic groups, low-income students, students with disabilities, and LEP students). Schools and districts must meet annual targets for each student subgroup in the school, and must test 95% of students in each subgroup, in order to make “adequate yearly progress.” States also must define an “other academic indicator” that schools must meet in addition to proficiency targets on state assessments.

Schools identified for improvement: Schools and districts that do not make AYP for two consecutive years are identified for improvement and are to receive technical assistance to help them improve. Those that miss AYP for additional years are identified for successive stages of interventions, including corrective action and restructuring (see below). To leave “identified for improvement” status, a school or district must make AYP for two consecutive years.

Public school choice: Districts must offer all students in identified schools the option to transfer to a non-identified school, with transportation provided by the district.

Supplemental educational services: In schools that miss AYP for a third year, districts also must offer low-income students the option of supplemental educational services from a state-approved provider.

Corrective actions: In schools that miss AYP for a fourth year, districts also must implement at least one of the following corrective actions: replace school staff members who are relevant to the failure to make AYP; implement a new curriculum; decrease management authority at the school level; appoint an outside expert to advise the school; extend the school day or year; or restructure the internal organization of the school.

Restructuring: In schools that miss AYP for a fifth year, districts also must begin planning to implement at least one of the following restructuring interventions: reopen the school as a charter school; replace all or most of the school staff; contract with a private entity to manage the school; turn over operation of the school to the state; or adopt some other major restructuring of the school’s governance. Districts must spend a year planning for restructuring and implement the school restructuring plan the following year.

Highly qualified teachers: All teachers of core academic subjects must be “highly qualified” as defined by NCLB and the state. To be highly qualified, teachers must have a bachelor’s degree, full state certification, and demonstrated competence in each core academic subject that they teach. Subject-matter competency may be demonstrated by passing a rigorous state test, completing a college major or coursework equivalent, or (for veteran teachers) meeting standards established by the state under a “high, objective uniform state standard of evaluation” (HOUSSE).

Exhibit E-2
Number of States Showing an Increase in the Percentage of 4th-Grade Students Performing at or Above the State’s Proficient Level from 2000-01 to 2002-03, by Student Subgroup

Student Subgroup             Reading                Mathematics
All students                 11 out of 23 states    17 out of 23 states
Low-income                   12 out of 16 states    10 out of 10 states
Black                        5 out of 7 states      5 out of 7 states
Hispanic                     6 out of 7 states      5 out of 7 states
White                        7 out of 7 states      7 out of 7 states
LEP                          12 out of 20 states    15 out of 20 states
Migrant                      11 out of 15 states    12 out of 16 states
Students with disabilities   14 out of 20 states    16 out of 20 states

Exhibit reads: The proportion of students performing at or above states’ “proficient” levels in 4th-grade reading (or another nearby elementary grade) increased from 2000-01 to 2002-03 in 11 out of 23 states that had consistent trend data available.

Note: For states that did not consistently assess students in 4th-grade reading and mathematics from 2000-01 to 2002-03, this table is based on either 3rd-grade or 5th-grade test results.

Source: Consolidated State Performance Reports (n = 23 states).
Are achievement gaps between disadvantaged students and other
students closing over time?

State assessments and NAEP both provide some indications that achievement gaps between disadvantaged students and other students
may be narrowing, but recent changes are small. For example, state
assessments show a slight reduction in the achievement gap between low-
income students and all students in most states, typically a reduction of one to
three percentage points. On the Trend NAEP, achievement gains for black and
Hispanic students since the 1970s substantially outpaced gains made by white
students, resulting in significant declines in black-white and Hispanic-white
achievement gaps, but recent changes in achievement gaps often were not
statistically significant.

3. Graduation Rates

Are graduation rates improving over time?

Under NCLB, high schools are held accountable for graduation rates, but
methods for calculating graduation rates vary considerably across states. The
averaged freshman graduation rate (calculated by NCES based on data from the
Common Core of Data) is useful for providing a common standard against which
state-reported graduation rates may be compared. The median state
graduation rate in 2002 was 84 percent based on state reports and 75 percent
based on the averaged freshman graduation rate.5

The recent trend in the averaged freshman graduation rate has been
fairly level, and the mean graduation rate in 2002 (73 percent) was the
same as in 1996.

D. Implementation of State Assessment Systems


1. Development of Assessments Required under No Child Left
Behind

To what extent have states implemented the annual assessments in reading, mathematics, and science that will be required under NCLB?

While some states have standards and assessments in place in all of the
required grade levels, most states need to implement additional assessments to
meet the NCLB requirements by 2005-06 for reading and mathematics and by
2007-08 for science. As of March 2005, 27 states had completed their first full
administration of all required reading assessments; 26 states had done so for all
required mathematics assessments; and 22 states had done so for all required

science assessments. Most of the remaining states had at least field-tested all of
the required assessments.6

How are states developing their English language proficiency assessments?

Many state approaches to assessing English language proficiency (ELP) were still evolving as of 2004-05. All states had an assessment in place for
2004-05, but 44 states indicated that they anticipated making revisions to their
ELP assessments. Twenty states reported that they had an ELP assessment in
place that met NCLB requirements, 27 states planned to have an ELP assessment that meets NCLB requirements in place for 2005-06, and five states had not yet decided which ELP assessment instrument they would use in 2005-06.7

2. Inclusion and Accommodations

To what extent do state assessment systems include students with special needs?

Most states have met the requirement to annually assess 95 percent or more of their students, including major racial/ethnic groups, students with
disabilities, limited English proficient (LEP) students, and low-income students.
However, 14 states did not meet the minimum test participation requirement for
one or more student subgroups. Ten states assessed fewer than 95 percent of
one or more minority student groups (black, Hispanic, and/or Native American),
and nine states did not meet the test participation requirement for LEP
students.8

The lowest participation rates were for students with disabilities. While
states missing the test participation requirement for other subgroups often
missed by just one or two percentage points, states that failed to assess 95
percent of students with disabilities typically had lower participation rates for
those students (as low as 77 percent in one state).

3. Disaggregated Student Achievement Data

How fully are states meeting NCLB requirements for reporting state
assessment data?

The number of states that report student achievement data has more
than doubled since NCLB was enacted. Fifty states present data
disaggregated by race/ethnicity and gender and for limited English proficient
students, students with disabilities, and low-income students on state report
cards.9

E. Accountability and Support for School Improvement
1. School Identification for Improvement

What types of schools are identified for improvement?

States identified 13 percent of all schools for improvement for 2004-05. Of these, 9,028 were Title I schools (18 percent of Title I schools),
representing nearly a 50 percent increase over the approximately
6,000 Title I schools identified for the previous two years (see Exhibit E-
5). Most (76 percent) of the identified Title I schools were in their first year or
second year of improvement, 12 percent were in corrective action, and 12
percent were in restructuring status. The number and percentage of Title I
schools identified for improvement varied considerably across states.10

Schools in large and urban districts, and those with high concentrations of poor, minority, and LEP students, were more likely to
be identified than other schools. For example, just over one-third of all
schools with 75 percent or more of their students from low-income families or
minority groups were identified schools in 2004-05, compared with fewer than 5
percent of schools with low concentrations of these students. Middle schools
also were more likely to be identified (18 percent of middle schools) than were
elementary or high schools (11 percent at each level). Ten percent of districts
(or 1,511 districts) also were identified for 2004-05; 32 percent of these had no
identified schools.11

Exhibit E-5
Number and Percentage of Identified Title I Schools,
1996-97 to 2004-05

1996-97: 8,408 (20%)
1997-98: 8,348 (19%)
1998-99: 8,375 (18%)
1999-2000: 7,353 (17%)
2000-01: 7,021 (16%)
2001-02: 6,441 (13%)
2002-03: 6,094 (12%)
2003-04: 5,963 (12%)
2004-05: 9,028 (18%)

Exhibit reads: In 2004-05, 9,028 Title I schools had been identified for
improvement based on test scores for 2003-04 and earlier years; these identified
schools represented 18 percent of all Title I schools in that year.
Note: The first year that schools were identified for improvement based in part on NCLB AYP
definitions was 2003-04, based on assessments administered in 2002-03. However, schools are
identified when they miss AYP for two consecutive years, and 2004-05 was the first year that includes
schools identified because they missed NCLB AYP targets for two consecutive years.

Sources: Consolidated State Performance Reports (1996-97 to 2002-03); Study of State Implementation of Accountability and Teacher Quality Under NCLB (2003-04 and 2004-05) (based on data from 50 states and the District of Columbia).

2. Adequate Yearly Progress

What are the reasons schools did not make adequate yearly progress
(AYP)?

Three-fourths (75 percent) of all schools and 71 percent of districts met all applicable AYP targets in 2003-04 testing. The number of all
schools missing AYP (21,540) based on 2003-04 testing is nearly double the
number of schools identified for improvement for 2004-05 (11,530).12 If many
non-identified schools that did not make AYP in 2003-04 testing miss AYP again
the following year, the number of identified schools could rise substantially in
2005-06.

Schools most commonly missed AYP for the achievement of all students and/or multiple subgroups; only in a minority of cases did schools miss only one AYP target. Based on data from 33 states, among schools that missed AYP in 2003-04, 33 percent did not meet achievement targets for the “all students” group in reading or mathematics, and another 18 percent missed AYP for the achievement of two or more subgroups (see Exhibit E-6). Only 23 percent missed AYP solely due to the achievement of a single subgroup. Twenty percent missed AYP due to the “other academic indicator,” but only 7 percent missed for this indicator alone. More than one-fourth (29 percent) missed AYP due to insufficient test participation rates, but only 6 percent missed solely due to test participation. The remaining 13 percent of schools that missed AYP missed for other combinations of AYP targets.13

Exhibit E-6
Reasons Schools Missed AYP, 2003-04

Exhibit reads: In 2003-04 testing, 33 percent of schools missed AYP for the achievement of the all students group in reading and/or mathematics.

Source: Study of State Implementation of Accountability and Teacher Quality Under NCLB (based on data from 33 states and 15,731 schools that missed AYP in these states).

However, schools that were held accountable for more subgroups were
less likely to make AYP. Among schools for which AYP was calculated for six
or more subgroups, 39 percent did not make AYP, compared with 10 percent of
schools for which AYP was calculated based on only one subgroup. More than
one-fifth of those schools that were held accountable for the achievement of
African-American students, LEP students, or students with disabilities did not
make AYP for those subgroups in 2003-04 testing. Schools with subgroups of
students from low-income families, Hispanic students, or Native American
students were somewhat less likely to miss AYP for those subgroups (12 to 15
percent). Schools were much less likely to miss AYP due to the achievement of
white or Asian students (1 percent and 4 percent of schools with these
subgroups, respectively).14

3. School Improvement Activities

What assistance is provided to districts and schools identified for improvement? What interventions are implemented in these districts and schools?

All states notified schools about their identification status for 2004-05
based on 2003-04 testing, and a majority provided preliminary results
before September 2004, but 20 states did not, and only 15 states
provided final results by that time.15 NCLB regulations require states to
notify schools and districts of their school improvement status prior to the
beginning of the school year; this is important to enable districts with identified
schools to notify parents of eligible students about their Title I choice options in
a timely manner.

Identified schools were much more likely than non-identified schools to report needing assistance in a variety of specific areas, and they also reported receiving more days of assistance.
Identified schools were most likely to report needing assistance to improve the
quality of teachers’ professional development (80 percent), and most schools
needing this assistance reported that they received it (91 percent). The most
common improvement strategies implemented by identified schools included
developing a school improvement plan, using assessment data to inform
instruction, and providing additional instruction to low-achieving students.16

Nearly one-third (30 percent) of identified elementary schools reported increasing the amount of instructional time in reading by more than 30
minutes in 2004-05, and 17 percent reported a similar increase in
instructional time for mathematics. Non-identified schools less frequently
reported such increases. At the secondary school level, identified schools also
more commonly reported increasing instructional time for low-achieving
students in reading (55 percent).17

Almost all states had implemented a statewide system of support for identified schools by fall 2004, and these systems often involved school support teams and specialized individuals. Twenty-one states noted that an
important objective of their statewide systems of support was to build district
capacity to provide support to identified schools. Most states applied NCLB
consequences for school identification (i.e., public school choice, supplemental
services, corrective actions, and restructuring) to Title I identified schools only.18
Most states (42) reported that providing assistance to all schools identified for
improvement was a moderate or serious challenge in 2003-04.19

Large and urban districts more commonly provided assistance of various kinds to identified schools than did smaller districts. For example, in
2002-03, two-thirds of very large districts reported employing more than one
full-time equivalent (FTE) staff member per identified school to provide
assistance to those schools, compared with one-third of small districts.20

Title I schools in corrective action status nearly universally experienced
the interventions NCLB defines for schools in this stage of
improvement. Corrective actions were implemented in 95 percent of Title I
schools in corrective action status in 2004-05. The most common corrective
actions experienced by Title I schools in this status in 2003-04 and 2004-05
resembled forms of technical assistance rather than sanctions. For instance, 90
percent of Title I schools in corrective action were required to implement new
research-based curricula or instructional programs and 58 percent had an
outside expert appointed to advise the school.21

F. School Choice and Supplemental Educational Services
1. Eligibility and Participation

How many students are eligible to participate, and how many actually do so?

Although more students were eligible to participate in the Title I school choice option, a larger number actually participated in the supplemental services option. Based on district reports, twice as many students were eligible to transfer to another school under the Title I school choice option in 2003-04 (3.9 million) as were eligible to receive supplemental services (1.4 million). However, six times as many students actually participated in the supplemental services option (233,000) as participated in the school choice option (38,000) in that year (see Exhibit E-7).

Exhibit E-7
Number of Students Participating in Title I School Choice and Supplemental Services

                        2002-03     2003-04     2004-05
School Choice            18,000      38,000      45,000
Supplemental Services    42,000     233,000         n/a

Exhibit reads: The number of students participating in Title I school choice rose from 18,000 in 2002-03 to 45,000 in 2004-05.

Source: Study of Title I Accountability Systems and School Improvement Efforts (2002-03); National Longitudinal Study of NCLB and Study of State Implementation of Accountability and Teacher Quality Under NCLB (2003-04 and 2004-05).

The number of schools where supplemental services were offered tripled from 2002-03 to
2003-04 (from 800 to 2,500), while the number where Title I school
choice was offered increased from 5,100 in 2002-03 to 6,200 in 2004-
05. Title I school choice was offered in about 6,200 schools and 1,800 districts
in 2004-05, and supplemental services were offered in 2,500 schools and 500
districts in 2003-04.22

The number of state-approved supplemental service providers has tripled over the past two years, rising from 997 in May 2003 to 2,734 in May 2005. Private firms accounted for 76 percent of approved providers in May 2005 and served 59 percent of participating students in the previous school year (2003-04). A growing number and percentage of faith-based organizations have obtained state approval, rising from 18 providers (2 percent of providers) in May 2003 to 249 (9 percent) in May 2005, but they served less than one-half of one percent of student participants in 2003-04. School districts and public schools accounted for 17 percent of providers in May 2005, but served a larger proportion of participants (40 percent in 2003-04) (see Exhibit E-8).23

Exhibit E-8
Supplemental Service Providers: Share of Providers and Participants, by Provider Type, 2003-04

                               Percent of Approved      Percent of
                               Providers (May 2004)     Participating Students
Private Providers                      70%                     59%
Faith-Based                             6%                      0%
Districts and Public Schools           25%                     40%
Colleges and Universities               2%                      0%

Exhibit reads: Private providers accounted for 70 percent of state-approved providers in May 2004 and 59 percent of participating students during the 2003-04 school year.

Source: PPSS review of SEA websites, May 2004 (51 states); National Longitudinal Study of NCLB.

2. Parental Notification

How and when do districts and schools inform parents of eligible children about the Title I school choice and supplemental services options?

The timing of parental notification was often too late to enable parents
to choose a new school before the start of the 2004-05 school year.
Almost half (49 percent) of districts notified parents after the school year had
already started, and in these districts this notification occurred, on average, five
weeks after the start of the school year.24

3. Monitoring of Supplemental Service Providers

How are states monitoring and evaluating the effectiveness of supplemental service providers?

States report that they are working to develop and implement systems for monitoring and evaluating the performance of supplemental service providers, but, as of early 2005, 15 states had not established any monitoring process, 25 states had not yet established any standards for evaluating provider effectiveness, and none had finalized their evaluation standards. Seventeen states say they will evaluate student achievement on state assessments, although only one of these plans to use a matched control group. The most common approaches that states have implemented to monitor providers are surveying districts about provider effectiveness (25 states) and using providers' reports on student-level progress (18 states).25

G. Teacher Quality and Professional Development

1. State Definitions of Highly Qualified Teachers

How have states implemented the requirements to define "highly qualified teacher" and to develop a "high objective uniform state standard of evaluation" (HOUSSE)?

Most states (41) meet the requirement to test the content knowledge of new teachers through the Praxis II subject assessments developed by the Educational Testing Service. States vary considerably in the passing scores that they require teachers to obtain on the Praxis II exams in order to be certified to teach or to be deemed "highly qualified" under NCLB.26

Nearly all states (47) allowed veteran teachers to demonstrate their subject-matter competency through a high objective uniform state standard of evaluation (HOUSSE), as of the spring of 2005. The most common type of HOUSSE option involved a point system wherein teachers were allowed to accumulate a state-determined number of points in order to earn highly qualified status (29 states). Most states allowed points to be earned retroactively for such things as successful completion of certain college courses (28 states) or publishing articles and/or receiving teaching awards or honors (23 states). Four states allowed teachers to earn some points for evidence of improved student achievement. Twenty-six states allowed teachers to earn one-quarter or more of their HOUSSE points for a specified number of years of prior teaching experience in their subject(s). Eight states used their current, initial teacher certification systems as their official HOUSSE option; they reported that the certification requirements contained high standards of subject-area expertise.27
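As an illustration of how such a point system operates, a teacher's points from several sources are summed and compared with a state-set threshold. The categories, point values, and threshold below are hypothetical, not drawn from any state's actual HOUSSE.

```python
def is_highly_qualified(points_by_category, required_points):
    # points_by_category: points earned for things like coursework, awards,
    # publications, and years of prior teaching experience in the subject.
    # A teacher earns highly qualified status by reaching the state-set total.
    return sum(points_by_category.values()) >= required_points

# Hypothetical teacher: 90 points against a 100-point threshold.
points = {"college coursework": 40, "teaching award": 10,
          "subject-area teaching experience": 30, "student achievement": 10}
print(is_highly_qualified(points, required_points=100))  # prints: False
```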

2. Teachers' Highly Qualified Status

How many teachers meet the NCLB requirement to be "highly qualified"?

The large majority of teachers across the country have been designated as "highly qualified" under NCLB. According to state-reported data for 42 states, 86 percent of classes were taught by highly qualified teachers in 2003-04.28 Principal and teacher reports for 2004-05 provide somewhat lower estimates of the percentage of classes taught by highly qualified teachers, but this is because a sizeable percentage did not know their "highly qualified" status. For example, 74 percent of teachers reported that they were considered highly qualified under NCLB, but 23 percent said they did not know their status and only 2 percent said they were not highly qualified.29

Students in schools that have been identified for improvement were more likely to be taught by teachers who were not highly qualified than were students in non-identified schools. For example, only one percent of elementary teachers in non-identified schools said they were considered not highly qualified, compared with 5 percent in schools that were in the first or second year of being identified for improvement, 8 percent in schools in corrective action, and 6 percent in schools in restructuring.30

Schools with high concentrations of poor and minority students have more teachers who are considered not highly qualified than do other schools. In high-poverty schools, for example, 5 percent of elementary teachers and 12 percent of secondary English and math teachers reported in 2004-05 that they were considered not highly qualified under NCLB, compared with one percent in low-poverty elementary schools and 3 percent in low-poverty secondary schools.31

3. Professional Development

To what extent are teachers participating in professional development activities that are sustained, intensive, and focused on instruction?

Most teachers reported receiving some professional development in reading and math content and instructional strategies, but fewer than one-quarter of teachers participated in such training for more than 24 hours over the 2003-04 school year and summer. For example, 90 percent of elementary teachers participated in at least one hour of professional development focused on instructional strategies for teaching reading, but only 20 percent participated for more than 24 hours over the 2003-04 school year and summer.32

Teachers in high-poverty schools were more likely to participate in professional development focused on reading and mathematics than were teachers in low-poverty schools. For example, 53 percent of secondary English teachers in high-poverty schools reported participating in professional development focused on in-depth study of topics in reading or English, compared with 36 percent of their colleagues in low-poverty schools.

4. Qualifications of Title I Paraprofessionals

How many paraprofessionals meet the NCLB qualifications requirements?

According to principal reports, 63 percent of Title I instructional aides had been determined to meet NCLB qualification requirements as of the 2004-05 school year. However, 87 percent of Title I instructional aides indicated that they had at least two years of college (and/or an associate's degree) or had passed a paraprofessional assessment. Nearly one-quarter (23 percent) of Title I instructional aides reported that, of the time that they spent tutoring or working with students in a classroom, a teacher was present only half or less of this time.33

I. Introduction
The Title I program began in 1965 as part of the Elementary and Secondary
Education Act (ESEA) and is intended to help ensure that all children have the
opportunity to obtain a high-quality education and reach proficiency on
challenging state standards and assessments. As the largest federal program
supporting elementary and secondary education (funded at $12.7 billion in
FY 2006), Title I, Part A, targets these resources primarily to high-poverty
districts and schools, where the needs are greatest. Title I provides flexible
funding that may be used to provide additional instructional staff, professional
development, extended-time programs, and other strategies for raising student
achievement. The program focuses on promoting schoolwide reform in high-
poverty schools and on ensuring students’ access to scientifically based
instructional strategies and challenging academic content. Title I holds states,
school districts, and schools accountable for improving the academic
achievement of all students and turning around low-performing schools, while
providing alternatives to students in such schools to enable those students to
receive a high-quality education.

The No Child Left Behind Act of 2001 (NCLB), which went into effect beginning
with the 2002-03 school year, reauthorized the Title I program and made a
number of significant changes in key areas. NCLB strengthened the assessment
and accountability provisions of the law, requiring that states annually test all
students in grades 3-8 and once in grades 10-12 on assessments that are
aligned with challenging state standards. States must also set targets for school
and district performance that lead to all students achieving proficiency on state
reading and mathematics assessments by the 2013-14 school year. Schools and
districts that do not make adequate yearly progress (AYP) towards this goal are
identified as needing improvement and are subject to increasing levels of
interventions designed to improve their performance, as well as provide
additional options to their students. NCLB also required that states establish
definitions for “highly qualified” teachers and that all teachers of core academic
subjects become highly qualified. These and other changes were intended to
increase the quality and effectiveness not only of the Title I program, but also of
the entire elementary and secondary education system in raising the
achievement of all students, particularly those with the lowest achievement
levels.

A. National Assessment of Title I

As part of the No Child Left Behind Act, the Congress mandated a National
Assessment of Title I to evaluate the implementation and impact of the
program.34 This mandate specifically requires a longitudinal study of Title I
schools to examine the implementation and impact of the Title I program. In
addition, the law also requires the establishment of an Independent Review
Panel to advise the Secretary on methodological and other issues that arise in
carrying out the National Assessment and the studies that contribute to this
assessment. An interim report to Congress is due in 2005 and the final report is
due in 2007.

This report, which constitutes Volume I of the National Assessment of Title I
Interim Report, focuses on implementation of key Title I provisions and examines
achievement trend data. The report draws on data from a set of implementation
studies conducted by the U.S. Department of Education to assess the degree to
which the program is being implemented as intended, describe the problems
and challenges to implementation, and identify areas where states, districts, and
schools have made significant progress. In addition, the report also draws on
information from state performance reports, the National Assessment of
Educational Progress, and external research studies. Key data sources for this
report include the following:

 National Longitudinal Study of NCLB (NLS-NCLB). This study is
examining the implementation of NCLB provisions concerning
accountability, teacher quality, Title I school choice and supplemental
services, and targeting and resource allocation. The study is surveying
districts, principals, classroom teachers, special education teachers, and
Title I paraprofessionals in a nationally representative sample of 300
districts and 1,483 schools in the 2004-05 and 2006-07 school years. The
study is also surveying parents and supplemental service providers in a
small subsample of districts in both years, and is collecting targeting and
resource allocation data from all 300 districts in 2004-05 only. Finally, the
study includes two exploratory achievement analyses that are examining
achievement outcomes for students participating in the Title I choice and
supplemental services options (in nine districts) and the impact of
identifying schools for improvement on student achievement (in two
states).35

 Study of State Implementation of Accountability and Teacher
Quality Under NCLB (SSI-NCLB). This companion study to the NLS-
NCLB is collecting information from all states1 about their implementation
of accountability, assessment, and teacher quality provisions of the law,
as well as Title III requirements for inclusion of students with limited
English proficiency. The study is surveying state education staff members
responsible for implementing these provisions in 2004-05 and 2006-07. In
addition, the study is analyzing extant data relating to state
implementation, including state lists of schools and districts that did not
make adequate yearly progress and of those that were identified as in
need of improvement.36

1
The Elementary and Secondary Education Act defines the term “state” to include the District of
Columbia and Puerto Rico (Section 9101(40)). Accordingly, this report presents data on all 52 "states,"
except in cases where data for one or more states were not reported.
 Study of Title I Accountability Systems and School Improvement
Efforts (TASSIE). This study examines implementation of Title I
accountability provisions during the transition years from 2001-02 (prior
to implementation of NCLB) through 2003-04 (the second year of NCLB
implementation). The study surveyed a nationally representative sample
of 1,200 districts and 740 schools that had been identified for
improvement under the previous authorization of ESEA.37

 Case Studies of the Early Implementation of Supplemental
Educational Services. These case studies in nine districts examine
early experiences of districts implementing the NCLB supplemental
services provisions in 2002-03 and 2003-04.38

 Consolidated State Performance Reports. These annual state
reports, required under NCLB, provide data on student achievement on
state assessments as well as basic descriptive information, such as
numbers of identified schools and numbers of student participants.

 National Assessment of Educational Progress. The NAEP provides
information on overall trends in student achievement on a consistent
assessment for populations targeted by Title I.

References in the text to differences between groups or over time that are
based on nationally representative samples highlight only those differences that
are statistically significant using the t statistic and a significance level of 0.05.
The significance level, or alpha level, reflects the probability that a difference
between groups as large as the one observed could arise simply due to
sampling variation, if there were no true difference between groups in the
population. The tests were conducted by calculating a t value for the difference
between a pair of means and comparing that value to a published table of
critical values for t. Analyses of data on student achievement on state
assessments, percentages of schools and districts identified for improvement,
and reasons for schools not making adequate yearly progress were based on the
full population of schools as reported by each state.
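The significance test described above can be sketched directly from summary statistics. The means and standard errors below are hypothetical, not taken from the report, and the 1.96 cutoff is the familiar large-sample approximation to the critical value of t at a 0.05 significance level (two-tailed).

```python
import math

def two_sample_t(mean1, se1, mean2, se2):
    # t statistic for the difference between two independent group estimates,
    # computed from each group's mean and standard error.
    return (mean1 - mean2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical comparison: 53 percent vs. 36 percent, each estimated with a
# standard error of 4 percentage points.
t = two_sample_t(53.0, 4.0, 36.0, 4.0)

# Compare against the approximate large-sample critical value for alpha = 0.05.
significant = abs(t) > 1.96
print(round(t, 2), significant)  # prints: 3.01 True
```

In practice the report's analysts compared the computed t to a published table of critical values, which also accounts for degrees of freedom in smaller samples.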

The final report will provide more complete data on Title I implementation and
outcomes, including information about the targeting and uses of Title I funds,
services for private school students, findings from the NLS-NCLB parent survey
and supplemental service provider survey, and exploratory analyses of student
outcomes associated with participation in the Title I choice and supplemental
services options and of the impact on student achievement of identifying
schools for improvement.

II. Overview of the Title I Program

A. Key Provisions of Title I Under the No Child Left Behind
Act

The No Child Left Behind Act built upon and expanded the assessment and
accountability provisions that had been enacted as part of the ESEA’s previous
reauthorizing legislation, the Improving America’s Schools Act (IASA), while also
creating new provisions related to parental choice and teacher quality. These
changes were intended to strengthen the Title I program’s ability to leverage
systemic improvements throughout states, districts, and schools, in order to
help ensure that all children have the opportunity to obtain a high-quality
education and to reach proficiency on challenging state academic standards and
assessments.

IASA initiated the Title I requirements for states to develop and implement state
standards and aligned assessments in reading and mathematics that were to be
used for all students, not just Title I students; states were required to implement
assessments aligned with state standards at least once in each of three grade
spans: grades 3-5, 6-9, and 10-12. NCLB extended the state assessment
requirements to cover testing in additional grades, requiring that states
establish reading and mathematics assessments in each grade from 3-8 and
once in grades 10-12, as well as requiring adoption of state standards and
assessments in science. In addition, NCLB established the expectation that all
students should be included in the state assessment, including students with
disabilities or limited English proficiency (LEP). NCLB also instituted a
requirement to assess the English language proficiency of LEP students.

NCLB strengthened the accountability provisions of the law, specifying that
states must set annual targets for school and district performance that would
lead to all students achieving proficiency on state reading and mathematics
assessments by the 2013-14 school year. Schools and districts that do not make
adequate yearly progress (AYP) towards this goal for two consecutive years are
identified as needing improvement and are subject to increasing levels of
interventions designed to help improve their performance as well as to provide
additional educational options to their students. IASA had also included
provisions for measuring schools’ adequate yearly progress and identifying low-
performing schools as in need of improvement if they did not make AYP for two
consecutive years; however, the implementation of these concepts is very
different under NCLB. First, NCLB created an ambitious new goal that all
students should reach proficiency by 2013-14 and required that AYP targets
should lead to that goal. Moreover, whereas IASA allowed AYP to be calculated
based on achievement for the school as a whole, NCLB requires that AYP targets
must be met by key subgroups of students; in order to “make AYP,” a school
must reach the state’s AYP targets for each of these subgroups, if there is a
sufficient number of such students in the school to provide valid and reliable
data, as well as for the school as a whole.
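The subgroup rule can be sketched as a simple check. This is an illustrative simplification: it omits the 95 percent test-participation requirement and the "other academic indicator," and the minimum group size (min_n) is an assumed parameter here, since the actual reliability threshold is set by each state.

```python
def makes_ayp(groups, proficiency_target, min_n=30):
    # groups: list of (n_students, percent_proficient) covering the school as
    # a whole and each key subgroup. A school makes AYP only if every group
    # large enough to yield valid and reliable data meets the state's target.
    return all(pct >= proficiency_target for n, pct in groups if n >= min_n)

# The school as a whole meets a 58 percent target, but one sufficiently large
# subgroup does not; a third group is too small to count toward AYP.
groups = [(400, 62.0), (120, 55.0), (25, 10.0)]
print(makes_ayp(groups, proficiency_target=58.0))  # prints: False
```

This is what distinguishes NCLB from IASA here: under IASA the first tuple alone, the school as a whole, would have decided the outcome.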

NCLB created new educational options for students in schools that have been
identified for improvement. Allowing students to transfer to a non-identified
school, a rarely used "corrective action" under the previous law, is now an
option that districts must offer for all students in identified schools. In addition,
in identified schools that miss AYP for a third time, districts must offer students
from low-income families the opportunity to receive supplemental educational
services such as tutoring from a state-approved provider.

NCLB also established minimum qualification requirements for teachers and for
Title I paraprofessionals, provisions that were not previously part of the law.
Notably, the requirement that teachers must be “highly qualified” applies to all
teachers of core academic subjects and not just to teachers in Title I schools.

Exhibit 1 compares the No Child Left Behind Act with the Improving America’s
Schools Act on key provisions in the areas of assessments, accountability, and
teacher quality.

Exhibit 1
Comparison of Key Provisions of the No Child Left Behind Act of 2001 (NCLB)
and the Improving America's Schools Act of 1994 (IASA)

State assessments
NCLB: States must implement annual state assessments in reading and mathematics in grades 3-8 and at least once in grades 10-12, and in science at least once in each of three grade spans: 3-5, 6-9, and 10-12. Assessments must be aligned with challenging state content and academic achievement standards. States must provide for participation of all students, including students with disabilities and limited English proficient (LEP) students. States must provide for the annual assessment of English language proficiency of all LEP students.
IASA: States must implement annual state assessments in reading and mathematics at least once in each of three grade spans: 3-5, 6-9, and 10-12. Assessments must be aligned with challenging state content and performance standards. States must provide for participation of all students, including students with disabilities and LEP students.

Adequate yearly progress (AYP)
NCLB: States must set annual targets that lead to the goal of all students' achieving proficiency in reading and mathematics by 2013-14. For each measure of school performance, states must include absolute targets that must be met by key subgroups of students (major racial/ethnic groups, low-income students, students with disabilities, and LEP students). Schools and districts must meet annual targets for each student subgroup in the school, and must test 95% of students in each subgroup, in order to make "adequate yearly progress." States also must define an "other academic indicator" that schools must meet in addition to proficiency targets on state assessments.
IASA: States must set annual targets for continuous and substantial improvement sufficient to achieve the goal of all Title I students achieving proficiency in reading and mathematics, but no specific timeline is mandated. Targets for school performance may be absolute or relative and apply to the school as a whole, not to individual subgroups within a school. No minimum test participation requirement.

Schools identified for improvement
NCLB: Schools and districts that do not make AYP for two consecutive years are identified for improvement and are to receive technical assistance to help them improve. Those that miss AYP for additional years are identified for successive stages of intervention, including corrective action and restructuring. To leave "identified for improvement" status, a school or district must make AYP for two consecutive years.
IASA: Schools and districts that do not make AYP for two consecutive years are identified for improvement. When a school continues to miss AYP for three additional years, districts must take corrective action. To leave "identified for improvement" status, a school or district must make AYP for two consecutive years.

Exhibit 1 (continued)
Comparison of Key Provisions of NCLB and IASA

Public school choice
NCLB: Districts must offer all students in identified schools the option to transfer to a non-identified school, with transportation provided by the district.
IASA: Districts must offer all students in identified schools the option to transfer to a non-identified school unless a) the district is in a state receiving a minimum grant (small states), or b) the school choice option is prohibited by state or local law.

Supplemental educational services
NCLB: In schools that miss AYP for a third year, districts also must offer low-income students the option of supplemental educational services from a state-approved provider.
IASA: Not applicable.

Corrective actions
NCLB: In schools that miss AYP for a fourth year, districts also must implement at least one of the following corrective actions: replace school staff members who are relevant to the failure to make AYP; implement a new curriculum; decrease management authority at the school level; appoint an outside expert to advise the school; extend the school day or year; or restructure the internal organization of the school.
IASA: In schools that miss AYP for a fifth year, districts must implement corrective actions which may include: withhold funds; provide health, counseling, and social services; revoke authority for schoolwide program; decrease decision-making authority at the school level; create a charter school; reconstitute the school staff; authorize students to transfer to another school; or implement opportunity-to-learn standards.

Restructuring
NCLB: In schools that miss AYP for a fifth year, districts also must begin planning to implement at least one of the following restructuring interventions: reopen the school as a charter school; replace all or most of the school staff members; contract with a private entity to manage the school; turn over operation of the school to the state; or adopt some other major restructuring of the school's governance. Districts must spend a year planning for restructuring and implement the school restructuring plan the following year.
IASA: Not applicable.

Highly qualified teachers
NCLB: All teachers of core academic subjects must be "highly qualified" as defined by NCLB and the state. To be highly qualified, teachers must have a bachelor's degree, full state certification, and demonstrated competence in each core academic subject that they teach. Subject-matter competency may be demonstrated by passing a rigorous state test, completing a college major or coursework equivalent, or (for veteran teachers) meeting standards established by the state under a "high, objective uniform state standard of evaluation" (HOUSSE).
IASA: Not applicable.

Schools that have been identified for improvement under NCLB are divided
among four stages of improvement status: 1) “Year 1” of identification, when
they must make school choice available; 2) “Year 2” of identification, when they
must also offer supplemental services; 3) corrective action status; and 4)
restructuring status. Schools move to the next stage of improvement status
when they miss AYP again, and not just because they have remained in
improvement status for another year. For example, a school that missed AYP in
2002-03 and 2003-04 would be in the first stage of improvement in 2004-05. If
the school then made AYP in 2004-05 testing, it would remain in the first stage
of improvement status and would not have to offer supplemental services. In
2005-06 testing, if the school made AYP again (for a second consecutive year), it
would move out of improvement status, and if it missed AYP, it would then move
to the next stage of improvement and would have to offer supplemental
services. Note that once a school is identified, it does not need to miss AYP in
consecutive years to move to the next stage of improvement, but it does need
to make AYP in consecutive years to move out of improvement status.
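The movement rules in this paragraph amount to a small state machine, sketched below. This is an illustrative simplification of the rules as described here: the stage numbering and function name are ours, and entry into improvement status and district-level determinations are omitted.

```python
# Improvement stages: 0 = not identified, 1 = Year 1 (school choice),
# 2 = Year 2 (choice plus supplemental services), 3 = corrective action,
# 4 = restructuring.

def update_status(stage, consecutive_makes, made_ayp):
    # Advance an already-identified school's status by one year of results.
    if made_ayp:
        consecutive_makes += 1
        if consecutive_makes >= 2:
            return 0, 0  # made AYP two consecutive years: exits improvement
        return stage, consecutive_makes  # one year of AYP: stage unchanged
    # Missing AYP again moves the school to the next stage (capped at
    # restructuring); the misses need not be in consecutive years.
    return min(stage + 1, 4), 0

# The example from the text: a school in the first stage that makes AYP in
# 2004-05 testing stays in the first stage...
stage, makes = update_status(1, 0, made_ayp=True)
print(stage)  # prints: 1
# ...and if it then misses AYP in 2005-06 testing, it moves to the next
# stage and must offer supplemental services.
stage, makes = update_status(stage, makes, made_ayp=False)
print(stage)  # prints: 2
```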

The NCLB provisions went into effect beginning with the 2002-03 school year,
but a number of important provisions do not take effect until later years (see
Exhibit 2). States developed AYP definitions using the new NCLB criteria during
2002-03 and were to use these criteria for AYP determinations, beginning with
the 2002-03 state assessment data. These determinations first affected schools
that were identified for improvement for 2003-04. However, NCLB-specified
interventions for identified schools (such as school choice, supplemental
educational services, and technical assistance) were first implemented in 2002-
03, for schools that had been identified under the AYP procedures already in
place based on the IASA provisions.

Exhibit 2
Timeline for Implementation of Key NCLB Provisions

2002-03 • States use results from assessments administered in this year to make AYP
determinations under the new NCLB provisions
• Districts implement Title I school choice and supplemental services
• Newly hired teachers and paraprofessionals must meet NCLB qualification requirements
2003-04 • First year that schools are identified for improvement based on NCLB AYP definitions
2005-06 • States implement reading and mathematics assessments in additional grades
• States develop or adopt science standards
• Existing teachers and paraprofessionals must meet NCLB qualification requirements
2007-08 • States implement science assessments in three grade spans

The new state assessment requirements are not due to be implemented until
2005-06 for reading and mathematics and 2007-08 for science. Thus, AYP
determinations prior to 2005-06 are based on state assessments adopted under
the previous law. The highly qualified teacher requirements went into effect in
2002-03 for newly hired staff members in Title I schools, but existing staff
members teaching core academic subjects in all schools have until the end of
the 2005-06 school year to meet the requirements. Due to the extended
timeline for implementing a number of NCLB requirements, this report often
examines progress towards deadlines that have not yet arrived, as well as the
extent to which states, districts, and schools are implementing NCLB
requirements already in effect.

B. Profile of Title I Participants and Resources
Title I Part A funds go to nearly all (93 percent) of the nation’s school
districts and to 55 percent of all public schools. Schools may use Title I
funds for one of two approaches: schoolwide programs or targeted assistance
programs. High-poverty schools2 (those with 40 percent or more students from
low-income families) are eligible to adopt schoolwide programs to raise the
achievement of low-achieving students by improving instruction throughout the
entire school. Schools that are not eligible for (or do not choose to operate)
schoolwide programs must use Title I funds to provide targeted services to
specifically identified low-achieving students. Schoolwide programs accounted
for 54 percent of all Title I schools in 2002-03, and the use of the schoolwide
option has been growing steadily over the past decade (see Exhibit 3).39

Exhibit 3
Number of Schoolwide Programs and Targeted
Assistance Schools, 1994-95 to 2002-03
[Stacked bar chart showing schoolwide programs growing from 5,050 in 1994-95
to 28,162 in 2002-03, while targeted assistance schools declined from 46,638
to 23,977 over the same period.]
Exhibit reads: The number of schoolwide programs increased from 5,050 in
1994-95 (10 percent) to 28,162 in 2002-03 (54 percent).
Source: Consolidated State Performance Reports (for 50-52 states).

2
School poverty levels are usually based on the percentage of students eligible for the free and
reduced-price lunch program, although districts have the flexibility to use certain other measures
described in the law. In this report, survey data for “high-poverty schools” included schools where at
least 75 percent of the students were eligible for free or reduced-price lunches, and “low-poverty
schools” included schools where fewer than 35 percent were eligible for such lunches. For NAEP data,
“high-poverty schools” included schools where 76-100 percent of the students were eligible for free or
reduced-price lunches, and “low-poverty schools” were defined as those with 0-25 percent eligible for
subsidized lunches.

Fueled by the growth in schoolwide programs, the number of students
counted as Title I participants has more than doubled in recent years,
rising from 6.7 million in 1994-95 to 16.5 million in 2002-03 (a 146
percent increase). The dramatic increase in participation is due in part to the
way that students are counted: when a school converts from targeted assistance
to a schoolwide program, all students in the school are counted as Title I
participants, instead of just the lowest-achieving students who are receiving
specific targeted services. In 2002-03, 84 percent of Title I participants were in
schoolwide programs and only 15 percent were in targeted assistance schools;
the remaining participants were in private schools (1 percent) or local
institutions for neglected or delinquent children (1 percent).40

Title I funds may be used for children from preschool age to high school, but
districts and schools often choose to focus these funds on students in the early
grades. Nearly half (47 percent) of Title I participants in 2002-03 were in
pre-kindergarten through grade 3, compared with 32 percent of all public school
students in the previous school year; 75 percent of Title I participants were in
pre-kindergarten through grade 6. Relatively few high school students receive
Title I services; for example, students in grades 10 through 12 accounted for
20 percent of all public school students, but only 8 percent of Title I
participants (see Exhibit 4).41

Exhibit 4
Distribution of Title I Participants by Grade Span, 2002-03,
Compared with Total Public School Enrollment, Fall 2001
[Stacked bar chart comparing the grade-span distribution of Title I participants
with total public school enrollment, across pre-K and kindergarten, grades 1-3,
grades 4-6, grades 7-9, and grades 10-12.]
Exhibit reads: Thirteen percent of Title I participants were in pre-kindergarten
and kindergarten, compared with 9 percent of all public school students.
Source: Consolidated State Performance Reports and NCES Common Core of Data
(52 states).

Minority students account for two-thirds of Title I participants. In 2002-03,
35 percent of participants were white, 33 percent were Hispanic, 27 percent
were black, 3 percent were Asian, 2 percent were American Indian or Alaska
Native, and 1 percent were from other racial/ethnic groups. More than one in
seven Title I participants (16 percent) had limited English proficiency (2.6 million
in 2002-03), 12 percent had disabilities (1.9 million), and 2 percent (338,000)
were children of migratory workers.42

Funding for Title I Part A has increased by 61 percent over the past five
years, from $7.9 billion in FY 2000 to $12.7 billion in FY 2005. After
adjusting for inflation, Title I funding increased by 46 percent from 2000 to 2005
and by 138 percent since the program’s inception in 1965 (see Exhibit 5).

A variety of formulas used to allocate Title I funds are intended to target these
resources to high-poverty districts and schools, where the needs are greatest.
About half of all Title I Part A funds are allocated to districts in the highest-
poverty quartile, which have 25 percent of all children and 49 percent of the
nation’s poor children. The targeting of Title I funds to high-poverty districts
increased slightly from FY 1997 to FY 2004, with the share of funds for the
highest-poverty districts rising from 50 percent to 52 percent (see Exhibit 6).

The final report will provide more detailed information on the targeting and uses
of Title I funds, including the distribution of funds at the school level.

Exhibit 5
Appropriations for Title I Grants to LEAs,
FY 1966 to FY 2005
(in 2005 Constant Dollars)

[Line chart of appropriations, in billions of constant 2005 dollars, by year
from 1966 to 2005.]

Exhibit reads: Appropriations for Title I Part A, measured in constant 2005
dollars, have grown from $5.3 billion in FY 1966 to $7.9 billion in FY 2000 and
$12.7 billion in FY 2005.

Source: U.S. Department of Education, Budget Service.

Exhibit 6
Distribution of Title I Funds
by District Poverty Quartile, FY 1997 and FY 2004

                                    FY 1997    FY 2004
Highest Poverty Quartile              50%        52%
Second Highest Poverty Quartile       27%        27%
Second Lowest Poverty Quartile        15%        16%
Lowest Poverty Quartile                8%         6%

Exhibit reads: The share of Title I funds allocated to the highest-poverty
districts increased slightly from 50 percent in FY 1997 to 52 percent in FY 2004.

Note: District poverty quartiles are based on Census Bureau estimates of the number of school-
age children and poor children living in each district. The poverty quartiles were created by
ranking all districts by the percentage of poor school-age children and then dividing these
districts into quartiles that each contain 25 percent of the school-age children.

Sources: Study of Education Resources and Federal Funding (FY 1997); National Longitudinal
Study of NCLB (FY 2004) (based on data for 51 states).
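The quartile construction described in the note to Exhibit 6 can be sketched in code. This is an illustrative Python sketch, not the study's actual procedure, and `poverty_quartiles` is a hypothetical helper name: rank districts by poverty rate, then assign labels so that each quartile covers roughly 25 percent of all school-age children.

```python
# Illustrative sketch (not the study's actual procedure) of the quartile
# construction described in the note to Exhibit 6: rank districts by their
# poverty rate, then label them 1-4 so that each quartile covers roughly
# 25 percent of all school-age children.

def poverty_quartiles(districts):
    """districts: list of (poverty_rate, school_age_children) pairs.
    Returns a parallel list of quartile labels (1 = lowest poverty)."""
    order = sorted(range(len(districts)), key=lambda i: districts[i][0])
    total = sum(children for _, children in districts)
    labels = [0] * len(districts)
    covered = 0  # children already assigned in lower-poverty districts
    for i in order:
        labels[i] = min(4, covered * 4 // total + 1)
        covered += districts[i][1]
    return labels

# Four equal-sized districts fall into one quartile each:
print(poverty_quartiles([(0.40, 100), (0.05, 100), (0.20, 100), (0.10, 100)]))
# -> [4, 1, 3, 2]
```

Because quartile boundaries are set by cumulative child counts rather than by counting districts, a quartile with many small districts can contain far more districts than one with a few large districts, which is the behavior the note describes.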

Key Findings on Trends in Student Achievement

This chapter examines trends in student achievement using both state
assessment data (through the 2002-03 school year) and the National
Assessment of Educational Progress (through 2004 and 2005).
However, any changes observed here should not be viewed as a result
of NCLB, because states, districts, and schools only began to implement
the NCLB provisions in 2002-03. Rather, these data provide a baseline
indicator of achievement levels and trends that existed at the time
when NCLB was first being implemented. Moreover, even when
additional years of assessment data become available, such data will
be limited in their ability to precisely address the impact of NCLB,
because it is difficult to separate the impact of NCLB from the effects of
other state and local improvement efforts.

Are students, especially disadvantaged students, showing achievement gains
on state assessments and on the National Assessment of Educational Progress
(NAEP)?

In the 23 states that had consistent three-year trend data from 2000-01
to 2002-03, most student subgroups showed gains in the percentage of
students performing at or above the state’s proficient level in 4th- and
8th-grade reading and mathematics. The increases in student
proficiency were often small.

Recent trends on the Main NAEP assessment (from 2000 to 2005) show
gains in 4th-grade reading and mathematics for black and Hispanic
students and for students in high-poverty schools. Gains were larger
for mathematics than for reading, and achievement trends were less
positive for older students. Gains were especially large on the Long-
Term Trend NAEP, and the most recent gains for black and Hispanic 9-
year-olds from 1999 to 2004 substantially extended the gains these
groups had made since the 1970s.

Are achievement gaps between disadvantaged and other students closing over
time?

State assessments indicate a slight reduction in the achievement gap between
low-income students and all students from 2000-01 to 2002-03, typically
between one and three percentage points. On the Trend
NAEP, achievement gains for black and Hispanic students since the
1970s substantially outpaced gains made by white students, resulting
in significant declines in black-white and Hispanic-white achievement
gaps, but recent changes in achievement gaps often were not
statistically significant.

Are graduation rates improving over time?

Under NCLB, high schools are held accountable for graduation rates,
but methods for calculating graduation rates vary considerably across
states. State-reported graduation rates for 2002 ranged from a high of
97 percent in North Carolina to a low of 62 percent in Georgia.

However, a consistently defined measure, the interim averaged freshman
graduation rate, provided somewhat lower estimates of
graduation rates, and the two measures often produced different
numbers for individual states. In 2002, the median state graduation
rate was 84 percent based on state reports and 75 percent based on
the averaged freshman graduation rate. From 1996 to 2002, the
averaged freshman graduation rate was fairly stable (73 percent in both
years).

III. Trends in Student Achievement
This chapter examines trends in student achievement using both state
assessment data and the National Assessment of Educational Progress (NAEP).
Student achievement on state assessments represents the primary criterion that
the Title I legislation applies to measure school success, but these data cannot
be aggregated across states to examine national trends or used to make
comparisons among states. Because each state has developed its own
standards and assessments, the content and rigor of these assessments are not
comparable across states. In addition, many states have revised their
assessment systems in recent years, so they often do not have the trend data
needed to assess student progress. The National Assessment of Educational
Progress provides a high-quality assessment that is consistent across states,
making the data useful for examining national trends in student achievement.
However, the NAEP is not aligned with individual state content and achievement
standards, so it does not necessarily measure what students are expected to
learn in their states. This report draws on both types of assessment to examine
the best available information about the recent progress of our schools in raising
student achievement.

This interim report examines trends on the Main NAEP from the early 1990s
through 2005, with a focus on the most recent period from 2000 to 2005, in
order to show trends in NAEP results during the early years of NCLB
implementation. We also examine long-term trends on the Trend NAEP from the
1970s through 2004. For state assessments, we examine recent three-year
trends (2000-01 through 2002-03) in 23 states that had consistent assessments
in place over this period. The report focuses on presenting achievement trends
for 4th-grade reading3 and mathematics assessments (because Title I funds are
predominantly used at the elementary level), although assessments for other
grades are shown as well.

This chapter also examines trends in graduation rates since 1996, as an


important measure of outcomes for high school students.

For both state assessment and NAEP results, recent achievement trends are
positive overall and for key subgroups. At this early stage of NCLB
implementation—states, districts, and schools only began to implement the
NCLB provisions in 2002-03—it is too early to say whether these trends are
attributable to NCLB, to other improvement initiatives that preceded it, or to a
combination of both. The data presented in this chapter provide a baseline
indicator of achievement levels and trends that existed at the time that NCLB
implementation began, rather than an indicator of outcomes associated with
NCLB. They may very well reflect pre-existing state standards-based reform
efforts and accountability systems that NCLB was intended to strengthen.
Moreover, even when additional years of assessment data become available,
such data will be limited in their ability to precisely address the impact of NCLB,
because it is difficult to separate the impact of NCLB from the effects of other
state and local improvement efforts.

3
For simplicity, the term “reading” is used throughout this report to refer to the set of subjects that
may be variously known as reading, English, or language arts.

Key Evaluation Questions for Student Achievement

1. Are students whom Title I is intended to benefit (including low-income
students, racial/ethnic minorities, LEP students, migrant students, and
students with disabilities) making progress toward meeting state academic
achievement standards in reading and mathematics?

2. Are students, especially disadvantaged students, showing achievement
gains on the National Assessment of Educational Progress?

3. Are achievement gaps between disadvantaged students and other
students closing over time?

4. Are graduation rates improving over time?

A. Student Achievement on State Assessments

Limited data are available on student achievement trends on state
assessments at this time, making it difficult to use this measure to
examine changes in student achievement since the enactment of the
No Child Left Behind Act. Many states have revised their assessment
systems in recent years, and as a result there are only 23 states that have
three-year trend data available for the most recently reported three-year period
from 2000-01 to 2002-03. In addition, most of these 23 states do not have
disaggregated trend data available for all student subgroups. Although nearly
all states have by now met the requirement to disaggregate data by
race/ethnicity and other subgroups, fewer states had done so as of the 2000-01
school year, which is the starting point for this trend analysis.

The report focuses on presenting achievement trends for 4th-grade reading and
mathematics; however, many states did not administer assessments in the 4th
grade in all three years and in such cases we used state assessment data for an
adjacent grade. Assessments for other grade levels were also examined and are
presented in an appendix.

Differences across states in the percentage of students performing at
the state’s proficient level should not be viewed as an indicator of
states’ relative effectiveness in educating students. State assessments
differ both in the content and the difficulty of test items, as well as in the level
that is labeled as “proficient,” so states with higher percentages of students at
the proficient level are not necessarily “higher performing” in an absolute sense.
For example, some states that have similar proportions of students scoring at
the proficient level on the National Assessment of Educational Progress may
vary considerably in the percentage of students achieving proficiency on the
state assessment (see Exhibit 7). Consequently, while state assessments may
be used to compare achievement over time within a state, they may not be

used to make comparisons across states. In addition, caution should be used
when examining changes over time in the proportion of students performing at
or above each state’s proficiency level. The data come from the Consolidated
State Performance Reports submitted by each state to the U.S. Department of
Education, and cannot speak to the reasons for observed losses or gains over
time within each state. Observed losses or gains could reflect a number of
things, including changes in the assessment system, population changes, or
changes in the proficiency level of a stable population.

Exhibit 7 should not be viewed as recommending that state proficiency levels
should match NAEP proficiency levels. NAEP achievement levels are still being
used on a trial basis. There continue to be concerns about the procedures used
to set the achievement levels, and the Commissioner of the National Center for
Education Statistics has not determined that they are “reasonable, valid, and
informative to the public.” NAEP and current state assessments were
established at different times to meet different purposes, and there is no one
“right” level that should be defined as “proficient.” Under NCLB, each state has
been given the responsibility to establish standards and assessments and to
define a “proficient” level that all students are expected to reach by 2013-14. In
contrast, when the NAEP proficiency levels were created about 15 years ago,
there was no expectation that all students must reach the NAEP Proficient level
by a particular date. Assessment systems vary tremendously, both between
NAEP and state systems, as well as across states that are using different
approaches within the NCLB framework, and similar-sounding terms often may
not be comparable.

Exhibit 7
Percentage of 4th-Grade Students Achieving At or Above the "Proficient" Level
on NAEP and State Assessments in Reading, 2003

                         State Assessment, 2003    NAEP, 2003
Connecticut                        69                  43
Massachusetts                      56                  40
New Hampshire                      77                  40
New Jersey                         78                  39
Colorado                           87                  37
Minnesota                          76                  37
Vermont                           n/a                  37
Maine                              49                  36
Iowa                               76                  35
Montana                            77                  35
Virginia                           72                  35
Missouri                           34                  34
New York                          n/a                  34
Ohio                               66                  34
Wyoming                            41                  34
Delaware                           79                  33
Indiana                            74                  33
Kansas                             69                  33
North Carolina                     81                  33
Pennsylvania                       58                  33
South Dakota                       85                  33
Washington                         67                  33
Wisconsin                          80                  33
Florida                            61                  32
Maryland                           58                  32
Michigan                           66                  32
Nebraska                           83                  32
North Dakota                       74                  32
Utah                               78                  32
Illinois                           60                  31
Kentucky                           62                  31
Oregon                             83                  31
Idaho                              75                  30
Rhode Island                       62                  29
West Virginia                     n/a                  29
Alaska                             71                  28
Arkansas                           61                  28
Georgia                            80                  27
Texas                              86                  27
Oklahoma                           65                  26
South Carolina                     32                  26
Tennessee                          81                  26
Arizona                            64                  23
Alabama                            63                  22
California                         39                  21
Hawaii                             43                  21
Louisiana                          61                  20
Nevada                             45                  20
New Mexico                         70                  19
Mississippi                        87                  18
District of Columbia               46                  10

Note: n/a indicates the state assessment value is not available. The preferred
grade for this table was 4th grade; however, in states that did not consistently
assess students in 4th-grade reading, we used either 3rd- or 5th-grade
assessment results.43

Source: Consolidated State Performance Reports and National Center for Education Statistics, Main
NAEP.

Student achievement as measured by state assessments rose from
2000-01 to 2002-03 for most student subgroups in a majority of states
that had three-year trend data available. Exhibit 8 shows the percentage
of students achieving at or above the proficient level on state reading and
mathematics assessments in 2000-01 through 2002-03 in 4th grade or an
adjacent grade; these data show achievement gains in 11 out of 23 states in
reading and in 17 out of 23 states in mathematics.
Exhibits 9 and 10 show similar information for a variety of student subgroups; on
average, about three-quarters of the states show achievement gains from 2000-
01 to 2002-03 for each subgroup. For example, states show gains in elementary
reading for low-income students in eight out of 11 states, for black students in
five out of seven states, and for Hispanic students in six out of seven states.4
Results for LEP students, migrant students, and students with disabilities show
similar patterns, as do 8th-grade reading and mathematics (see Appendix).

4
The total number of states examined here varies because states often did not have disaggregated
assessment data available for all student subgroups for the full time period examined here (dating
back to the 2000-01 school year).

Exhibit 8
Proportion of Students Performing At or Above Their State’s Proficient Level
in Reading and Mathematics, in 4th Grade or Another Elementary Grade, 2000-01 to 2002-03

[Table: for each of 23 states, the percentage of students at or above the
proficient level in 2000-01, 2001-02, and 2002-03, and the change over the
period, in reading and in mathematics. Overall, 11 out of 23 states showed
achievement gains in reading, and 17 out of 23 states showed gains in
mathematics.]

Exhibit reads: The proportion of students performing at or above Alabama’s
“proficient” level in 4th-grade reading declined from 64 percent in 2000-01 to
63 percent in 2002-03. Overall, states that had consistent assessments during
this period showed increases in the percent proficient on these elementary
reading assessments in 11 out of 23 states.

Note: The preferred grade for this table was 4th grade; however, in states that did not
consistently assess students in 4th-grade reading and mathematics, a nearby grade was used
(3rd grade for Arizona, Delaware, Illinois, Missouri, Oregon, and Virginia and 5th grade for
Kansas (reading), Kentucky (math), Oklahoma, and Pennsylvania).

Source: Consolidated State Performance Reports (for 23 states).

Exhibits 9 and 10
Proportion of Students Performing At or Above Their State’s Proficient Level
for Various Student Subgroups, in 4th Grade or Another Elementary Grade, 2002-03,
and Change from 2000-01 (Exhibit 9: Reading; Exhibit 10: Mathematics)

[Tables: for each state with disaggregated data, the percentage of students at
or above the proficient level in 2002-03 and the change from 2000-01, for the
low-income, black, Hispanic, white, LEP, migrant, and students-with-disabilities
subgroups.]

Note: The preferred grade for these tables was 4th grade; however, in states that did not
consistently assess students in 4th-grade reading and mathematics, a nearby grade was used
(3rd grade for Arizona, Delaware, Illinois, Missouri, Oregon, and Virginia and 5th grade for
Kentucky, Oklahoma, and Pennsylvania). Gray cells indicate that the state did not report
disaggregated assessment data for that subgroup for all three years included in the analysis;
however, all of these states have since developed the capacity to report disaggregated data.

Source: Consolidated State Performance Reports (for 21 states).

In many cases, the increases in student proficiency shown in Exhibits 8 through
10 were small, but some states reported data showing substantial increases or
declines for one or more student subgroups. Although all states shown in these
tables indicated that they had a consistent assessment in place during this
period, we do not know whether the administration of those assessments,
including inclusion and accommodation practices, was also consistent. In
addition, the number of tested students for some subgroups in some states may
be small.

State assessments indicate a slight reduction in the achievement gap
between low-income students and all students (see Exhibit 11). In most
cases, the gap reduction was from one to three percentage points; however, a few
states showed larger reductions in the gap.

Exhibit 11
Change in the Achievement Gap: Difference Between the Proportion of Low-Income Students and All
Students Performing At or Above Their State’s Proficient Level, in 4th Grade or Another Elementary Grade,
2000-01 to 2002-03

                           Gap in Reading                     Gap in Mathematics
                 2000-01  2001-02  2002-03  Change    2000-01  2001-02  2002-03  Change
Delaware           14       15       11       -3        14       15       12       -2
Illinois           22       23       19       -3        21       20       17       -4
Kansas             18       17       14       -4        18       15       13       -5
Kentucky           13       12       11       -2        13       13       12       -1
Missouri           13       14       12       -1        15       15       13       -2
Montana            14       11       12       -2        11        7       11        0
North Carolina     14       12       11       -3         9        7        5       -4
Oklahoma           13        2        1      -12        12        2        2      -10
Pennsylvania       21      n/a       22       +1        21      n/a       21        0
South Carolina     16       15       14       -2        12       15       13       +1
Utah               13      n/a       13        0       n/a      n/a      n/a      n/a
Number of states with gap reduction: 9 out of 11 states (reading); 7 out of 10 states (mathematics)

Exhibit reads: State assessments showed a reduction in the achievement gap
between low-income students and all students in elementary reading in 9 out of 11
states that had consistent assessment data from 2000-01 to 2002-03; however, in
most cases the change was small (from one to three percentage points).

Note: n/a indicates the value is not available.

Source: Consolidated State Performance Reports (for 11 states).

An important question is whether these recent growth rates will be sufficient to
bring states to the goal of 100 percent of their students’ performing at or above
their state’s proficient level by the 2013-14 school year. To examine this
question, we calculated the average annual change in each state’s percent
proficient over the period from 2000-01 to 2002-03, and determined the percent
proficient that would be attained by 2013-14 if the state continued to progress at
that rate.5 Exhibit 12 shows these calculations for the low-income subgroup, and
Exhibit 13 summarizes the number of states that would be predicted to meet the
100 percent goal for six different student subgroups.
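The per-state calculation behind Exhibit 12 can be sketched as follows. This is an illustrative Python sketch of the method described above (annualize the two-year change, extend it over the 11 years remaining to 2013-14, and cap the result at 100 percent); `projected_proficiency` is a hypothetical helper name, not from the report.

```python
# A minimal sketch of the projection method described above: annualize the
# change from 2000-01 to 2002-03, extend it over the 11 school years
# remaining to 2013-14, and cap the result at 100 percent.

def projected_proficiency(pct_2001, pct_2003, years_remaining=11):
    """pct_2001, pct_2003: percent proficient in 2000-01 and 2002-03."""
    annual_change = (pct_2003 - pct_2001) / 2.0  # two-year span
    return min(100.0, pct_2003 + annual_change * years_remaining)

# Delaware's low-income elementary reading figures (61 percent in 2000-01,
# 68 percent in 2002-03) project past 100 and are capped:
print(projected_proficiency(61, 68))  # -> 100.0
# Kentucky (45 to 51 percent) projects to 84 percent:
print(projected_proficiency(45, 51))  # -> 84.0
```

As the footnote to Exhibit 12 notes, this linear extrapolation assumes no variation in the rate of change, so it is a baseline projection rather than a forecast.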

Based on trend data for 21 states, most would not meet the goal of 100
percent proficiency by 2013-14 unless the percentage of students
achieving at the proficient level increases at a faster rate. For example,
among the 11 states that had consistent elementary reading assessment data for
low-income students, four states would meet the 100 percent goal by 2013-14 for
this subgroup if they sustained the same rate of growth that they achieved from
2000-01 to 2002-03. Not surprisingly, states that began the period with a
relatively low percentage of students performing at the proficient level defined by
the state were often less likely to be predicted to meet the 100 percent goal.

Exhibit 12
Predicted Percentage of Low-Income Students That Would Reach Their State’s Proficient Level in 2013-14,
in Elementary Reading, If Achievement Trajectories from 2000-01 to 2002-03 Continued Through 2013-14

                          Actual Percent Proficient    Avg. Annual    Predicted Percent
                  Grade     2000-01       2002-03        Change       Proficient, 2013-14
Delaware            3          61            68            3.5               100
Illinois            3          40            41            0.5                47
Kansas              5          45            55            5.0               100
Kentucky            4          45            51            3.0                84
Missouri            3          19            22            1.5                39
Montana             4          65            65            0.0                65
North Carolina      4          60            70            5.0               100
Oklahoma            5          53            64            5.5               100
Pennsylvania        5          35            36            0.5                42
South Carolina      4          21            18           -1.5                 2
Utah                4          69            65           -2.0                43
Number of states predicted to reach 100% proficient by 2013-14: 4 out of 11 states (36%)

5
More specifically, we multiplied the annualized percentage-point change from 2000-01 to 2002-03 by
the number of years remaining to 2013-14 (11 years), and added that figure to the percent proficient in
2002-03. If the result was greater than 100 percent, the predicted percent proficient in 2013-14 is 100
percent (since there cannot be more than 100 percent of students reaching the proficient level). It
should be noted that this method assumes no variation in the rate of change.

Exhibit reads: The percent of low-income students reaching Delaware’s
proficient level in 3rd-grade reading rose from 61 percent in 2000-01 to 68
percent in 2002-03, an average gain of 3.5 percentage points per year. If this
rate of increase were sustained over the next 11 years from 2002-03 to 2013-14,
Delaware would succeed in having 100 percent of these students reach the
proficient level.

Source: Consolidated State Performance Reports (for 11 states).

Looking across six different student subgroups (low-income, black,
Hispanic, LEP, migrant, and students with disabilities), an average of 33
percent of the subgroups within these states would be predicted to
reach 100 percent proficiency based on current growth rates. This
percentage declines to 26 percent of subgroups in elementary mathematics and
to 13 to 18 percent in 8th-grade reading and mathematics (see Exhibit 13).

Exhibit 13
Predicted Number of States That Would Reach the Goal of 100% Proficient by 2013-14,
for Various Subgroups, If Achievement Trajectories from 2000-01 to 2002-03 Continued Through 2013-14

                                       Grade 3, 4, or 5                     Grade 6, 7, or 8
Student Subgroup                  Reading          Mathematics         Reading          Mathematics
Low-income                    4 out of 11 states  3 out of 10 states  3 out of 10 states  1 out of 9 states
Black                         3 out of 6 states   3 out of 5 states   4 out of 10 states  3 out of 10 states
Hispanic                      2 out of 6 states   1 out of 5 states   2 out of 10 states  2 out of 10 states
Limited English proficient    3 out of 19 states  3 out of 19 states  1 out of 16 states  2 out of 18 states
Migrant                       6 out of 15 states  5 out of 15 states  3 out of 13 states  2 out of 15 states
Students with disabilities    7 out of 19 states  4 out of 19 states  1 out of 17 states  0 out of 17 states
Average proportion of state
subgroups predicted to reach 100%     33%               26%                 18%               13%

Exhibit reads: For the low-income student subgroup, four out of 11 states would
have 100 percent of low-income students reach the state’s proficient level on an
elementary reading assessment by 2013-14, if the rate of change from 2000-01 to
2002-03 were to continue.

Note: The average shown at the bottom of each column is based on summing the numerators and
denominators reflected in the cells of that column, and dividing the total of the numerators by the
total of the denominators.

Source: Consolidated State Performance Reports (for 21 states).
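The pooling rule in the note to Exhibit 13 can be illustrated with a short sketch. This is a minimal Python example of the stated rule (sum numerators and denominators down a column, then divide); `pooled_share` is a hypothetical helper name.

```python
# A minimal sketch of the pooling rule in the note to Exhibit 13: sum the
# numerators and denominators down a column, then divide.

def pooled_share(cells):
    """cells: (states_predicted_to_reach_100, states_with_data) pairs,
    one per student subgroup. Returns the pooled percentage, rounded."""
    met = sum(n for n, _ in cells)
    total = sum(d for _, d in cells)
    return round(100 * met / total)

# Elementary (grade 3, 4, or 5) reading column of Exhibit 13:
reading = [(4, 11), (3, 6), (2, 6), (3, 19), (6, 15), (7, 19)]
print(pooled_share(reading))  # -> 33, matching the 33% shown in the exhibit
```

Pooling the counts this way weights each subgroup by the number of states with data for it, which is why the result differs from a simple average of the six per-subgroup fractions.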

Most state AYP targets do not project an even growth rate over the full period
from 2002-03 to 2013-14; indeed, states use a variety of growth trajectories for
their AYP targets, and many are planning for achievement growth rates to
accelerate as 2013-14 approaches. Based on recent achievement trajectories,
such acceleration will often be necessary if states are to meet the goal of 100
percent proficient by 2013-14.

B. Student Achievement on the National Assessment of Educational Progress

This report examines short-term trends on the Main NAEP as well as longer-term
trends on the Trend NAEP.44 The Main NAEP, created in the early 1990s, provides
an assessment that is more consistent with current content focuses and testing
approaches, while the Trend NAEP continues the original NAEP assessment in
order to track long-term trends since the early 1970s.

In general, the Main NAEP places greater emphasis on open-ended and extended
response items and less emphasis on multiple-choice questions. In reading, the

Trend NAEP features shorter passages and focuses on locating specific
information, making inferences, and identifying the main idea of a passage,
whereas the Main NAEP requires students to read longer passages and also asks
students to compare multiple texts on a variety of dimensions. In mathematics,
the Trend NAEP focuses on basic computational skills in four content areas—
numbers and operations, measurement, geometry, and algebra—while the Main
NAEP also includes data analysis and probability.45

Results from the Main NAEP and Trend NAEP are not comparable because they
cover different content and also different samples. Students are sampled by
grade for the Main NAEP (grades 4, 8, and 12) and by age for the Trend NAEP
(ages 9, 13, and 17). In addition, the Main NAEP reports on the percentages of
students performing at various achievement levels (Basic, Proficient, and
Advanced) as well as average scale scores, while the Trend NAEP reports only
scale scores. The National Center for Education Statistics (NCES) has stated that,
although results from these two NAEP assessments cannot be compared directly,
comparisons of the patterns they show over time, especially for student
demographic groups, may be informative.

The most recent NAEP results are from 2004 on the Trend NAEP and 2005 on the
Main NAEP. The discussion below examines both recent trends (since 1999 on the
Trend NAEP and since 2000 on the Main NAEP), in order to show trends in NAEP
results during the early years of NCLB implementation, as well as longer-term
trends on both NAEP assessments.

1. Main NAEP

Recent NAEP trends show gains in 4th-grade reading and especially in
mathematics for black and Hispanic students and for students in high-poverty
schools (see Exhibits 14 through 19). Average scale scores for all
students were significantly higher in 2005 than in 2000 in 4th-grade reading and
mathematics and 8th-grade mathematics (see Exhibits 14 and 15). For 8th-grade
reading, the average score declined slightly from 2002 to 2005 (NAEP did not
administer an 8th-grade reading assessment in 2000). Recent trend data for
12th-grade students are not available, because the 12th-grade NAEP assessment
was not administered in 2003 or 2005. Over the complete period during which
the Main NAEP assessment was administered, scores increased significantly in
mathematics at all three grade levels and in reading for 4th- and 8th-grade
students, but decreased significantly for 12th-graders.

Exhibit 14
Reading Achievement on the Main NAEP, 1992 to 2005:
Average Scale Scores by School Grade Level

Exhibit 15
Mathematics Achievement on the Main NAEP, 1990 to 2005:
Average Scale Scores by School Grade Level

[Line charts of average scale scores by year for 4th-, 8th-, and 12th-grade
students, in reading and in mathematics.]

* Indicates that the score is significantly different from the most recent score (2005 for 4th and 8th
grade, 2002 for 12th grade reading, and 2000 for 12th grade mathematics) (p<.05).

Source: National Center for Education Statistics, Main NAEP.

Looking at high-poverty schools, defined as those with 75 percent or more of their
students eligible for free or reduced-price lunches, average scale scores rose from
2000 to 2005 by 14 points in 4th-grade reading and 16 points in 4th-grade
mathematics (see Exhibits 16 and 17). Over the complete period during which
the Main NAEP assessment was administered, there was no significant change in reading, but a substantial gain in mathematics (27 points). In short,
NAEP mathematics scores in high-poverty schools rose over the period from 1990
to 2005, including the most recent five-year period, while NAEP reading trends for
these schools fluctuated up and down but were at about the same level in 2005
as in 1992.

Exhibit 16
Reading Achievement on the Main NAEP, 1992 to 2005: Average Scale Scores in 4th Grade by School Poverty Level

Exhibit 17
Mathematics Achievement on the Main NAEP, 1990 to 2005: Average Scale Scores in 4th Grade by School Poverty Level

[Line charts of average 4th-grade scale scores for low-poverty schools, all schools, and high-poverty schools; individual data points not reproduced here.]

* Indicates that the score is significantly different from the one in 2005 (p<.05).

Note: “High-poverty” was defined as schools with 76 to 100 percent of their students eligible for free or reduced-price lunches, and “low-poverty” indicates that 0 to 25 percent were eligible for subsidized lunches.

Source: National Center for Education Statistics, Main NAEP, unpublished tabulations.

Recent NAEP trends by race/ethnicity also show gains in both 4th-grade
reading and mathematics for black and Hispanic students (see Exhibits 18
and 19). From 2000 to 2005, black students gained 10 points in 4th-grade
reading and Hispanic students gained 13 points, both greater than the 5-point
gain for white students over the same time period. In 4th-grade math, black
students gained 17 points from 2000 to 2005 and Hispanic students gained 18
points, again greater than the 13-point gain for white students.

Exhibit 18
Reading Achievement on the Main NAEP, 1992 to 2005: Average Scale Scores in 4th Grade by Race/Ethnicity

Exhibit 19
Mathematics Achievement on the Main NAEP, 1990 to 2005: Average Scale Scores in 4th Grade by Race/Ethnicity

[Line charts of average 4th-grade scale scores for white, Hispanic, and black students; individual data points not reproduced here.]

* Indicates that the score is significantly different from the one in 2005 (p<.05).

Source: National Center for Education Statistics, Main NAEP.

Over the longer term, 4th-grade mathematics scores show even larger gains from
1990 to 2005 for black and Hispanic students (33 points and 26 points,
respectively), while white students gained 27 points. In 4th-grade reading, the
13-year trend from 1992 to 2005 shows somewhat smaller gains for black and
Hispanic students (eight points and seven points, respectively).

Looking at the trends on 4th-grade NAEP assessments in terms of the
percentage of students achieving at or above the proficient level,
patterns are mixed. For the recent period from 2000 to 2005, black students
show a modest increase on the reading assessment from 9 percent proficient in
2000 to 12 percent proficient in 2005, while Hispanic and white students show no
significant change (see Exhibits 20 and 21). On the mathematics assessment,
however, all three racial/ethnic groups show significant gains in the percentage
achieving at or above the proficient level, with black students rising from 4
percent proficient to 13 percent proficient, and Hispanic students rising from 7
percent to 19 percent. Over the longer period since the early 1990s, 4th-graders
in all three racial/ethnic groups show significant gains in both subjects,
particularly in mathematics.

Trends in the achievement gaps between minority and white students do not show consistent patterns. For example, the black-white achievement gap
in scale scores on the 4th-grade mathematics assessment declined from 32 points
in 1990 to 26 points in 2005; however, the black-white achievement gap in the
percent of students scoring at the proficient level on the same assessment
increased from 14 percentage points in 1990 to 34 percentage points in 2005.
The Hispanic-white gap is unchanged over this period when looking at scale
scores, but increases for the percent proficient measure. The 8th-grade
mathematics assessment shows a similar pattern. Changes in the black-white
and Hispanic-white achievement gaps on the 12th-grade mathematics
assessment and on all three reading assessments were in most cases not
statistically significant.
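One reason the two gap measures can move in opposite directions is that percent proficient is a threshold measure: a group whose average score sits near the proficiency cut score converts score gains into proficiency gains faster than a group farther below the cut. A minimal sketch of this threshold effect, assuming hypothetical normal score distributions (the cut score, standard deviation, and group means below are illustrative assumptions, not actual NAEP values):

```python
import math

def share_proficient(mean, sd, cut):
    """Share of a normal(mean, sd) score distribution at or above the cut score."""
    return 0.5 * (1 - math.erf((cut - mean) / (sd * math.sqrt(2))))

CUT, SD = 249, 30  # hypothetical proficiency cut score and within-group SD

# Hypothetical group means: the scale-score gap narrows from 32 to 26 points.
white_1990, black_1990 = 220, 188
white_2005, black_2005 = 246, 220

scale_gap_1990 = white_1990 - black_1990  # 32 points
scale_gap_2005 = white_2005 - black_2005  # 26 points: scale-score gap shrinks

prof_gap_1990 = share_proficient(white_1990, SD, CUT) - share_proficient(black_1990, SD, CUT)
prof_gap_2005 = share_proficient(white_2005, SD, CUT) - share_proficient(black_2005, SD, CUT)
# prof_gap_2005 > prof_gap_1990: the percent-proficient gap widens even as the
# scale-score gap narrows, because more of the higher-scoring group's
# distribution moves past the cut score.
```

This is only a stylized illustration of why the two metrics need not agree; the actual NAEP score distributions are not normal with a common standard deviation.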

Exhibit 20
Reading Achievement on the Main NAEP, 1992 to 2005: Percent Proficient in 4th Grade by Race/Ethnicity

Exhibit 21
Mathematics Achievement on the Main NAEP, 1990 to 2005: Percent Proficient in 4th Grade by Race/Ethnicity

[Line charts of the percentage of 4th-graders scoring at or above Proficient for white, Hispanic, and black students; individual data points not reproduced here.]

* Indicates that the score is significantly different from the one in 2005 (p<.05).

Source: National Center for Education Statistics, Main NAEP.

2. Trend NAEP

The long-term achievement trends measured by the Trend NAEP show significant gains for 9-year-olds and 13-year-olds in both reading and
mathematics, but 17-year-olds did not make significant gains in either
subject (see Exhibits 22 and 23). As was found in the Main NAEP, achievement
gains were much larger in mathematics than in reading. In mathematics, for
example, the average score for 9-year-olds rose from 219 in 1973 to 241 in 2004,
a 22-point gain, compared with a 15-point gain for 13-year-olds and no significant
change for 17-year-olds. In reading, the average score for 9-year-olds rose from 208 in 1971 to 219 in 2004, an 11-point gain, compared with a 4-point gain for 13-year-olds and no change for 17-year-olds.

Exhibit 22
Reading Achievement on the Trend NAEP, 1971 to 2004: Average Scale Scores by Student Age Group

Exhibit 23
Mathematics Achievement on the Trend NAEP, 1973 to 2004: Average Scale Scores by Student Age Group

[Line charts of average scale scores for students at ages 9, 13, and 17; individual data points not reproduced here.]

* Indicates that the score is significantly different from the one in 2004 (p<.05).

Source: National Center for Education Statistics, Trend NAEP.

Recent gains from 1999 to 2004 are significant for 9-year-olds in both mathematics and reading and for 13-year-olds in mathematics. In mathematics, 9-year-olds saw a 9-point increase over this five-year period, from 232 to 241, while 13-year-olds gained 5 points and scores for 17-year-olds were essentially unchanged. In reading, only 9-year-olds had a significant change in scores (a 7-point gain).

Black and Hispanic students show substantial gains on the Trend NAEP,
both in the most recent period as well as over the full three decades
covered by the assessment (see Exhibits 24 and 25). From 1999 to 2004,
black 9-year-olds gained 14 points in reading and 13 points in mathematics; long-
term gains were 30 points in reading (since 1971) and 34 points in mathematics
(since 1973). Similarly, Hispanic 9-year-olds gained 12 points in reading and 17
points in mathematics from 1999 to 2004, with long-term gains of 22 points in
reading and 28 points in mathematics. In reading, black and Hispanic students made strong gains in the 1970s, but the trends leveled out during the 1980s and 1990s until the most recent jump in scores from 1999 to 2004. In mathematics, black and Hispanic scores rose through the late 1970s and 1980s, were fairly flat during the 1990s, and then increased dramatically from 1999 to 2004.

Exhibit 24
Reading Achievement on the Trend NAEP, 1971 to 2004: Average Scale Scores for 9-Year-Olds by Race/Ethnicity

Exhibit 25
Mathematics Achievement on the Trend NAEP, 1978 to 2004: Average Scale Scores for 9-Year-Olds by Race/Ethnicity

[Line charts of average scale scores for white, Hispanic, and black 9-year-olds; individual data points not reproduced here.]

* Indicates that the score is significantly different from the one in 2004 (p<.05).

Source: National Center for Education Statistics, Trend NAEP.

Gains for black and Hispanic students substantially outpaced gains made by white students, resulting in significant declines in black-white
and Hispanic-white achievement gaps since the 1970s. However, the
change in achievement gaps from 1999 to 2004 in most cases was not
statistically significant. For example, the 13-point mathematics gain for black
9-year-olds from 1999 to 2004 was greater than the 8-point gain for white
students, but the 5-point reduction in the gap between their scores was not
statistically significant. However, the 12-point reduction in the black-white gap
over the long term (declining from a 35-point gap in 1973 to a 23-point gap in
2004) was statistically significant.

C. Graduation Rates
Under NCLB, in addition to reading, math, and eventually science achievement,
high schools are held accountable for graduation rates. In AYP determinations for
2003-04, one-third of high schools did not meet their state’s graduation rate
target.

States have differing methods for calculating and reporting graduation rates, so
they are not consistent across the country and cannot provide a national picture
of progress on this indicator. To provide more consistent data, the Task Force on
Graduation, Completion, and Dropout Indicators recommended the use of a
graduation rate measure called the Exclusion-Adjusted Cohort Graduation
Indicator (EACGI), which is also referred to as a true cohort graduation rate.46 In
order to calculate this indicator, longitudinal individual student-level data systems
are needed, and most states do not yet have such data systems in place.

The Averaged Freshman Graduation Rate (AFGR) is an alternative measure that can be calculated in the absence of a longitudinal individual student record
system. The National Center for Education Statistics (NCES) has recently reported
data on this measure for all states, using state-reported enrollment and diploma
data from the NCES Common Core of Data (CCD). This interim graduation rate
uses the state’s report of diploma recipients as the numerator for regular
graduates; the denominator is the average of the number of 8th graders five
years earlier, 9th graders four years earlier, and 10th graders three years earlier.
This measure provides a common standard against which state-reported
graduation rates may be compared.6 We present here both the state-reported rates and the averaged freshman graduation rates for each state because the state-reported rates are what each state uses for making AYP determinations, while the
averaged freshman graduation rate provides data for all states using a consistent
measure.
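The AFGR calculation described above can be sketched as follows; the diploma and enrollment figures are hypothetical, for illustration only:

```python
# Sketch of the Averaged Freshman Graduation Rate (AFGR): diplomas awarded
# in year t, divided by the average of grade-8 enrollment in t-5, grade-9
# enrollment in t-4, and grade-10 enrollment in t-3.
def afgr(diplomas, grade8_enroll, grade9_enroll, grade10_enroll):
    estimated_freshman_class = (grade8_enroll + grade9_enroll + grade10_enroll) / 3
    return diplomas / estimated_freshman_class

# Hypothetical state: 73,000 diplomas in 2002, with cohort enrollments of
# 98,000 (grade 8 in 1997), 102,000 (grade 9 in 1998), and 100,000
# (grade 10 in 1999).
rate = afgr(73_000, 98_000, 102_000, 100_000)  # -> 0.73
```

Averaging three grade-level counts smooths out the grade-9 enrollment "bulge" caused by students retained in ninth grade, which is why the denominator is not simply the grade-9 count.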

Based on the state-reported data, state average graduation rates in 2002 ranged from a high of 97 percent in North Carolina to a low of 62
percent in Georgia.47 The range of state graduation rates based on the
averaged freshman graduation rate was somewhat lower—ranging from a high of
86 percent in New Jersey to a low of 58 percent in South Carolina—and the two
measures often produced different numbers for individual states (see Exhibit 26).
The median state graduation rate was 84 percent based on state reports and 75
percent based on the averaged freshman graduation rate.

The recent trend in the averaged freshman graduation rate, from 1996
to 2002, has been fairly level, and the graduation rate in 2002 (73
percent) was the same as in 1996. Graduation rates calculated for the
preceding five years were slightly higher (e.g., 76 percent in 1991), but these data may not be strictly comparable because of improvements in reporting over time.

6 For more information about various graduation indicators, see Marilyn Seastrom, Chris Chapman, Robert Stillwell, Daniel McGrath, Pia Peltola, Rachel Dinkes, and Zeyu Xu (forthcoming), A Review and Analysis of Alternative High School Graduation Rates (NCES 2006-602), Volume I: User’s Guide to Computing High School Graduation Rates, and Volume 2: An Analysis of Alternative High School Graduation Rates. Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Exhibit 26
Comparison of Averaged Freshman Graduation Rates and State-Reported Graduation Rates, 2002

[Horizontal bar chart comparing the averaged freshman graduation rate with the state-reported graduation rate for each state, ordered from the highest averaged freshman rate (New Jersey, 86 percent) to the lowest (South Carolina, 58 percent); individual state values not reproduced here.]

Sources: U.S. Department of Education, National Center for Education Statistics, averaged freshman graduation rates calculated from data in the Common Core of Data.48 U.S. Department of Education, Policy and Program Studies Service, analysis of state-reported graduation rates from Consolidated State Performance Reports and State Education Agency Web sites; state-reported rates for 2003 or 2004 were used for 16 states where 2002 rates were not available.49

Conclusions

NCLB established the ambitious goal that, by 2013-14, all children will achieve proficiency in reading and math according to state standards. Recent data from
both state assessments and the National Assessment of Educational Progress
show promising trends. Although the NAEP and state assessment data were not
designed to address questions about the causal impact of NCLB, they can still be
informative for examining changes over time in student achievement. Student
achievement as measured by state assessments rose from 2000-01 to 2002-03
for most student subgroups—such as low-income students, blacks, Hispanics,
migrants, and those with limited English proficiency or with disabilities—in a
majority of the states where consistent assessment practices make it possible to
track trends from 2001 to 2003. Similarly, recent trends on the Main NAEP
assessment show gains in 4th-grade reading and mathematics for black and
Hispanic students and for students in high-poverty schools; however, recent
trends are mixed for 8th-grade students and not available for 12th-grade students.
The long-term achievement trends measured by the Trend NAEP show significant
gains for 9-year-olds and 13-year-olds in both reading and mathematics, although
both recent and long-term trends on the Trend NAEP are flat for 17-year-olds. It
remains to be seen whether the current trajectories will remain steady or
accelerate in the years to come; the latter will be required if all states are to reach
the 100 percent proficient target within the next decade.

Key Findings on Implementation of State Assessment Systems

To what extent have states implemented the annual assessments in reading, mathematics, and science that will be required under NCLB?

Many states have put in place all of the assessments they intend to use
to meet NCLB requirements. As of March 2005, 27 states had completed
their first full administration of all required reading assessments; 26
states had done so in all required mathematics assessments; and 22
states had done so in all required science assessments. Most of the
remaining states had at least field-tested all of the required assessments.

How are states developing their English language proficiency assessments?

Many state approaches to assessing English language proficiency (ELP) were still evolving as of 2004-05. All states had some kind of ELP
assessment in place, but 44 states reported that they anticipate making
revisions to their ELP assessments.

To what extent do state assessment systems include students with special needs?

As of 2003-04, most states were meeting the requirement to annually assess at least 95 percent of their students, including students from
major racial/ethnic groups, students with disabilities, limited English
proficient students, students from low-income families, and migrant
students. However, 14 states did not meet the test participation
requirement for one or more student subgroups.

The lowest test participation rates were for students with disabilities. While states missing the test participation requirement for other subgroups often missed by just one or two percentage points,
participation rates for students with disabilities were often well below the
95 percent threshold—as low as 77 percent in Texas and 84 percent in
the District of Columbia.

How fully are states meeting NCLB requirements for reporting state assessment data?

The number of states that report student achievement data disaggregated for individual student subgroups has more than doubled
since NCLB was enacted. In 2004-05, 50 states released state report
cards that presented state assessment results disaggregated by
race/ethnicity and gender and for limited English proficient students, students with disabilities, and students from low-income families. In
contrast, in 2002-03, only 20 states disaggregated data by race/ethnicity
on report cards. Most states are also providing assessment data to
districts and schools, including individual student-level data as well as
longitudinal data.

IV. Implementation of State Assessment Systems

A central feature of the No Child Left Behind Act is its emphasis on high
expectations for all students, schools, and districts. To support this goal, Title I
requires states (including the District of Columbia and Puerto Rico) to develop or
adopt challenging state content and achievement standards as well as
assessments that are aligned with them. By 2005-06, all states are to assess all
students in grades 3-8 and once in grades 10-12 in reading and mathematics. By
2007-08, states also must administer annual science assessments at least once in
grades 3-5, 6-9, and 10-12.

NCLB establishes a high standard for inclusion of all students in the state
assessment system, requiring that each state, district, and school ensure that at
least 95 percent of students participate in the state assessment system, both
overall and within specific subgroups, including the major racial/ethnic categories
as well as economically disadvantaged students, students with disabilities, and
limited English proficient students. State assessment systems must produce results for individual students, and results must be reported at the school, district, and state levels. When reporting assessment data, states, districts, and schools must
disaggregate for each of the above subgroups as well as by gender and migrant
status.

NCLB also requires states to provide for the assessment of English language
proficiency of all limited English proficient students, beginning in 2002-03. The
assessments, aligned to state English language proficiency standards, must
include the four domains of reading, writing, speaking, and listening.

Key Evaluation Questions for State Assessments

1. To what extent have states implemented the annual assessments in reading, mathematics, and science that will be required under NCLB?

2. How are states developing their English language proficiency assessments?

3. To what extent do state assessment systems include students with special needs?

4. How fully are states meeting NCLB requirements for reporting state
assessment data?

A. Development of Assessments Required under No Child Left Behind
All states had adopted academic content standards in reading and mathematics
as of March 2005; 51 states had adopted science standards by then (Iowa had
not).50 Many states have been active in developing or revising their content standards since the passage of NCLB; since 2002, 29 states have adopted or
revised reading content standards and 28 states have adopted or revised
mathematics standards.51

States have made substantial progress toward the adoption of required reading, mathematics, and science assessments. As of March 2005, 27
states had completed their first full administration of all required reading
assessments; 26 states had done so in all required mathematics assessments;
and 22 states had done so in all required science assessments. In addition, all but
four states (the District of Columbia, Iowa, Illinois, and Nebraska) had at least
field-tested all of the required reading and mathematics assessments. Twenty-
five states had field-tested all of the required science assessments.52

Because NCLB expanded the Title I state assessment requirements to additional grades, states needed to implement additional assessments in a number of grades. For about one-third of all of the required reading and mathematics assessments in grades 3 through 8, states are using existing assessments that were already in place by 2004-05 (see Exhibit 27). For the remaining required assessments in these grades, states are adopting new assessments, typically by developing a new assessment (45 to 46 percent of the necessary reading and mathematics assessments), although in some cases states are augmenting an existing “off-the-shelf” assessment published by a test developer (12 percent of the needed assessments). Individual states may have used different approaches for different grade levels and/or subjects.

Exhibit 27
State Approaches to Developing Assessments Required by 2005-06

                                             Percentage of Grade 3-8 Assessments Across All States
                                             Reading (N=312)    Math (N=312)
Kept existing assessment                     31%                30%
Modified existing assessment                 5%                 5%
Adopted new assessment:
  Augmented existing off-the-shelf test      12%                12%
  Developed new assessment                   45%                46%
  Other approach                             4%                 4%
Data not available                           3%                 3%

Exhibit reads: In order to meet the NCLB requirements for state assessments in grades 3 through 8 in reading, states used existing assessments for 31 percent of the required assessments and modified existing assessments in an additional 5 percent of the cases for 2004-05.

Source: Study of State Implementation of Accountability and Teacher Quality Under No Child Left Behind.

1. Alternate Assessments for Students with Disabilities

Nearly all states have developed and administered alternate assessments for students with disabilities. In 2004-05, 48 states
administered alternate assessments for students with disabilities in both reading and mathematics. In 45 of these states, at least one of these assessments was
based on alternate achievement standards. Twenty-five states also had alternate
assessments based on grade-level achievement standards.53

2. English Language Proficiency Assessments

Many state approaches to assessing English language proficiency (ELP) were still evolving as of 2004-05. All 52 states had some kind of ELP
assessment in place in 2004-05, but these assessments did not necessarily meet
NCLB requirements, and 44 states indicated that they anticipated making
revisions to their ELP assessments. Twenty states reported in 2004-05 that they
had an ELP assessment in place that met NCLB requirements, 27 states planned to have an ELP assessment that meets NCLB requirements in place for 2005-06, and
five states had not made a decision as to which ELP assessment instrument they
would use in 2004-05 to meet NCLB requirements.54

Under Title III requirements, states must develop ELP standards (which must be
linked to state academic content standards); states are required to provide for the
assessment of their limited English proficient (LEP) students on ELP assessments
aligned with their ELP standards.55 Half of the states (25) indicated that they had
linked their ELP assessment to ELP standards and 22 states either have not made
that linkage or have linked their ELP standards with the ELP assessment that will
be used in 2005-06.

States were asked if they used any of four approaches in developing their ELP
assessments that were administered in 2004-05, and several reported taking
more than one approach. Six states modified an out-of-state source, such as an existing test published by a test developer, for their ELP assessment. Twenty-nine
states adopted their entire ELP assessment from an out-of-state source by, for
example, purchasing a test from a testing company. Twelve states developed
their ELP assessment as part of a multi-state consortium. Eight states developed
their own ELP assessment in-house or had it developed specifically for their state.

B. Inclusion and Accommodations


As of 2003-04, most states were meeting the requirement to annually assess at least 95 percent of their students, including students from all
major racial/ethnic groups, students with disabilities, limited English proficient
students, and students from low-income families. However, 14 states did not
meet the minimum test participation requirement for one or more student
subgroups (see Exhibit 28). For example, 10 states assessed less than 95 percent
of one or more minority student groups (black, Hispanic, and/or Native American),
and 10 states did not meet the test participation requirement for LEP students.
(Note that, in their calculation of participation rates, some states include students
who do not actually sit for the state assessment by assigning such students the
lowest obtainable score on the assessment. Thus, participation rates of 100
percent may be reported when in fact fewer students actually sat for the state’s
assessment.)
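The caveat in the parenthetical note above can be made concrete with a small sketch; all of the figures below are hypothetical:

```python
# Some states count students who never sat for the test as "participants"
# by assigning them the lowest obtainable score, which inflates the
# reported participation rate relative to the share actually tested.
def participation_rate(tested, assigned_lowest_score, enrolled):
    return (tested + assigned_lowest_score) / enrolled

# Hypothetical subgroup of 1,000 enrolled students: 940 actually sat for
# the test; 60 non-sitters were assigned the lowest obtainable score.
reported = participation_rate(940, 60, 1000)  # reported as 100 percent
actual = participation_rate(940, 0, 1000)     # 94 percent actually tested
```

Under this practice a state can report 100 percent participation while falling short of the 95 percent threshold in terms of students actually tested.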

Most of the states that did not assess at least 95 percent of their black, Hispanic, or Native American students missed the mark by a small margin, typically less than two percentage points. Five states—Alabama, Connecticut, the District of Columbia,
Kentucky, and Georgia—missed by a wider margin for at least one of the
racial/ethnic groups reported.

Three states—the District of Columbia, Georgia, and Texas—failed to meet the participation requirement for most, if not all, subgroups as well as for the “all
students” group.

1. Inclusion of Students with Disabilities in the State Assessment System

Most states assessed at least 95 percent of their students with disabilities on the state assessment system in 2003-04 (see Exhibit 28).
Forty-seven states assessed at least 95 percent of their students with disabilities
in reading and 46 states assessed at least 95 percent in mathematics. The
number of states that assessed fewer than 95 percent of their students with
disabilities (four states in reading, six states in mathematics) was similar to or
lower than the number of states that did not meet the test participation
requirement for other subgroups. However, states that failed to assess 95
percent of students with disabilities had lower participation rates for those
students than for other groups that missed the test participation requirement. For
instance, Texas assessed only 77 percent of its students with disabilities,
compared with 94 percent of its black students.

Many students with disabilities participated in their state’s assessments with accommodations. Overall, schools reported that 96 percent of students
with disabilities participated in their state reading assessment in 2001-02. More
than one-fourth (28 percent) of all students with disabilities participated in the
regular state assessment with no accommodations, 60 percent participated with
accommodations, and 9 percent took an alternate reading assessment.56 One
study of 14- to 17-year-old students with disabilities found that the most common
accommodations on mandated state assessments in 2002 were additional time
(57 percent of those assessed with accommodations), taking the test in an
alternate setting (45 percent), or having a reader who delivered instructions
and/or test items (33 percent). Almost 30 percent of these students received no
accommodations.57

Exhibit 28
Participation of Selected Student Subgroups in State Assessment Systems, 2003-04

                           Black           Hispanic        Native American  Students with    LEP              Low-Income
                                                                            Disabilities     Students         Students
                           Reading  Math   Reading  Math   Reading  Math    Reading  Math    Reading  Math    Reading  Math
Number of states
assessing at least
95% of students            46       44     43       45     44       44      47       46      41       45      42       42

Percent of students assessed, in states assessing less than 95%:
Alabama                    93.8%    94.5%  91.6%    *      *        *       86.1%    87.8%   **       **      90.5%    91.6%
Arkansas                   *        93.4%  *        *      **       **      90.0%    91.0%   *        *       **       **
California                 *        93.0%  *        *      *        92.0%   *        91.0%   *        *       *        *
Connecticut                92.2%    93.6%  91.7%    92.6%  93.0%    *       *        *       88.2%    91.9%   93.2%    93.7%
Delaware                   *        *      *        *      93.8%    92.7%   *        *       94.6%    *       *        *
District of Columbia       92.7%    92.4%  94.8%    94.8%  90.0%    90.0%   85.1%    84.2%   *        *       94.4%    94.0%
Georgia                    92.9%    92.8%  90.0%    90.0%  88.6%    88.5%   *        *       90.4%    90.5%   94.6%    94.6%
Kansas                     *        *      *        *      *        *       *        *       94.9%    *       *        *
Kentucky                   *        *      92.0%    94.0%  *        *       *        *       83.0%    89.0%   *        *
Maine                      *        *      *        *      *        *       *        *       91.0%    *       *        *
Missouri                   *        *      *        *      *        *       *        *       92.5%    *       *        *
New York                   *        *      *        *      *        *       *        91.0%   *        94.0%   *        *
Tennessee                  *        *      93.3%    *      *        *       *        *       85.2%    *       *        *
Texas                      94.1%    94.2%  93.0%    93.2%  94.6%    94.8%   77.4%    77.4%   83.3%    83.8%   93.0%    93.2%

Exhibit reads: Forty-six states assessed at least 95 percent of their black students in reading in 2003-04. Alabama, Connecticut, the District of Columbia, Georgia, and Texas assessed less than 95 percent of their black students in reading.

* State assessed at least 95 percent of this subgroup in this subject.
** Data not available.

Notes: Six states did not report data for the participation of at least one subgroup.58 Data from the 2003-04 Consolidated State Performance Reports have not been verified by states and should be considered preliminary.

Source: Consolidated State Performance Reports, 2003-04.

2. Inclusion of Limited English Proficient Students in the State Assessment System

Most states also met the 95 percent assessment criterion for limited English
proficient (LEP) students in 2003-04. In reading, 41 states assessed at least 95
percent of their LEP students, while 45 did so for mathematics (see Exhibit 28). In
their assessment of LEP students, most states provided some sort of
accommodation, such as modifying the presentation (47 states), timing or
scheduling (46), or setting (46). The most frequent presentation accommodations
included the use of dictionaries, reading aloud the questions in English, and
reading aloud or explaining the directions. Most states gave timing or scheduling
accommodations in the form of extra assessment time. Forty-four states made
the setting accommodations of small-group or individual or separate room
administration available to LEP students. Nearly half of the states (23) made
response accommodations—such as allowing responses in the student’s native
language and writing answers directly in the test booklet—available to their LEP
students.59

C. Reporting Assessment Data for Use in School Improvement Efforts

The number of states that report student achievement data disaggregated for individual student subgroups has more than doubled since NCLB was enacted. Fifty states presented data disaggregated by
race/ethnicity and gender and for limited English proficient students, students
with disabilities, and low-income students on state report cards released in 2004-
05 (presenting assessment data for 2003-04).60 In contrast, in 2002-03 only 20
states disaggregated data by race/ethnicity on report cards.61 Fewer states (37)
include migrant students on their report cards; however, those states that do not
report migrant students may have too few to report.

Most states are also providing data to districts and schools. Forty-three
states (of the 45 for which there were data) were providing individual student
data to school districts as of 2003-04 (see Exhibit 29). Most states were also
showing school-level and subgroup-level results over time. However, states
provided individual student data showing change over time less frequently (18
states).

Exhibit 29
Reporting of State Assessment Results to Districts or Schools
in Various Formats and by Various Groups, 2003-04

                                                          Number of States
                                                          (Out of 45 Responding)
School or district results showing…
   Percentage scoring at or above proficient              45
   Percentage scoring at each achievement level           43
   Scale score or other similar score                     41
Results for…
   School as a whole                                      45
   Subgroups                                              45
   Each grade level                                       42
   Each classroom                                         29
   Individual students                                    43
Trends in…
   School results                                         37
   Subgroups within the school                            34
   Individual student results                             18

Exhibit reads: Forty-five states reported assessment results from their 2003-04 assessments for the percentage of students scoring at or above the proficient level.62

Source: Study of State Implementation of Accountability and Teacher Quality Under No Child Left Behind.

Conclusions
NCLB requires states to assess annually all students in reading and mathematics
in grades 3-8, and at least once in grades 10-12, beginning with the 2005-06
school year. By 2007-08, annual science assessments must also be in place.
Most states have already administered or field-tested all the assessments needed
to meet the law’s requirements, although many of these assessments are still
subject to review by the U.S. Department of Education. NCLB also requires states
to provide for the assessment of English language proficiency of all limited English
proficient students, beginning in 2002-03; all states had some kind of ELP
assessment in place in 2004-05, but these assessments did not necessarily meet
NCLB requirements, and most states indicated that they expect to revise their ELP
assessments.

Most states are also meeting the law’s requirement to assess at least 95 percent
of their students and a similar percent of key subgroups. A few states have not
met this inclusion standard, and those states often fall short by a substantial
margin for students with disabilities or students with limited English proficiency.

NCLB has led to increased reporting on student achievement to the public as well
as to states and districts. As required, state report cards now disaggregate
achievement data by race/ethnicity, for limited English proficient students, and by
disability status and poverty. Almost all states now also publish report cards that
include assessment results by school and district. Furthermore, states are
reporting results back to districts and schools, most commonly annual assessment
results for the school, subgroups, and individual students. Trend data showing
improvement from year to year is less common, particularly trend data for
individual students.

Key Findings on Accountability and Support for School
Improvement

What types of schools and districts are identified for improvement?

States identified 11,530 schools, or 13 percent of all schools, for improvement for 2004-05, and 9,028 were Title I schools. One-fourth of the identified Title I schools had not made adequate yearly progress (AYP) for four or more years and were identified for corrective actions or restructuring.

Schools in large and urban districts, and with high concentrations of poor
and minority students, were much more likely to be identified than other
schools. More diverse schools that were held accountable for more
student subgroups were more likely to be identified; for example, 37
percent of Title I schools with six or more subgroups were identified,
compared with 8 percent of those with only one subgroup.

What are the reasons schools do not make adequate yearly progress (AYP)?

Schools most commonly missed AYP for the achievement of all students
in reading and/or mathematics (38 percent of schools that missed AYP
based on 2003-04 testing). Smaller percentages of schools missed AYP
for only one student subgroup or test participation rates.

What assistance is provided to districts and schools identified for improvement? What interventions are implemented in these districts and schools?

Identified schools reported significantly greater needs for assistance than non-identified schools, and they also received more days of assistance.
Identified schools often reported receiving support in the areas NCLB
mandates. The most common improvement strategies implemented by
identified schools included developing a school improvement plan, using
assessment data to inform instruction, and providing additional
instruction to low-achieving students. Nearly all Title I schools in
corrective action status experienced interventions that NCLB delineates
for schools in this status.

Almost all states had implemented a statewide system of support for identified schools by fall 2004, and districts reported providing several
kinds of assistance to both identified and non-identified schools, with
large districts more likely than small districts to provide assistance. Most
states reported that providing assistance to all schools identified for
improvement was a moderate or serious challenge in 2003-04.

How fully have states and districts implemented other key accountability provisions (such as unitary accountability systems, reporting, accountability under Title III, etc.)?

Most states implemented NCLB consequences for school identification only for Title I schools, though districts commonly provided assistance to all types of schools.

V. Accountability and Support for School Improvement
The intent of NCLB is to promote improved achievement for all students by
requiring states to establish accountability systems that hold all schools, including
Title I schools and non-Title I schools, to the same academic standards. Under
Title I, states must assess all students and use the results to determine whether
schools and districts make adequate yearly progress. States have developed
definitions for the adequate yearly progress (AYP) expected of schools and
districts, with annual targets leading to the ultimate goal: namely, that students
from all groups—including students from low-income families and each major
racial and ethnic group, students with disabilities, and limited English proficient
(LEP) students—reach the proficient level on state assessments by 2013-14.
Required state and district report cards must present the assessment results and
other information related to school performance to parents and the public.

Schools and districts become identified for improvement when they miss AYP for
two consecutive years. Title I prescribes specific consequences for identified Title
I schools, and states may choose to apply the same consequences to non-Title I
schools. Districts must provide identified Title I schools with technical assistance
in developing or revising school improvement plans, analyzing assessment data,
identifying and implementing proven professional development and instructional
strategies, and developing budgets. Identified Title I schools also must reserve 10
percent of their Title I allocations for professional development. States, in turn,
must provide assistance to identified Title I schools through statewide systems of
support, including school support teams and distinguished educators.

When a Title I school becomes identified for improvement, the district also must
provide parents of each student at the school the option to transfer their child to a
non-identified school in the district. If the school misses AYP again after being
identified, the district must give students from low-income families the option to
receive supplemental educational services (e.g., tutoring) from state-approved
providers. If such schools miss AYP for another year after identification, districts
must take at least one of a series of “corrective actions” at the school, such as
requiring a new curriculum or replacing school staff members. If a school does
not make AYP after one year of corrective action, NCLB calls for major
restructuring of the school, beginning with a year of planning for restructuring
followed by actual restructuring the next year. Identified schools and districts exit
improvement status when they make AYP for two consecutive years.
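The escalating timeline described above can be sketched as a simple illustrative function (the function name and status labels are this sketch's own shorthand, not statutory terms; real determinations also involve appeals and exit rules):

```python
def nclb_school_status(consecutive_years_missing_ayp):
    """Illustrative sketch of the NCLB consequence sequence for an
    identified Title I school, keyed to consecutive years of missing
    AYP. A school exits improvement status by making AYP for two
    consecutive years (not modeled here)."""
    if consecutive_years_missing_ayp < 2:
        return "not identified"
    if consecutive_years_missing_ayp == 2:
        # Identification triggers the public school choice option.
        return "identified: year 1 (offer public school choice)"
    if consecutive_years_missing_ayp == 3:
        # A further missed year adds supplemental educational services.
        return "identified: year 2 (add supplemental educational services)"
    if consecutive_years_missing_ayp == 4:
        # The district must take at least one corrective action.
        return "corrective action"
    if consecutive_years_missing_ayp == 5:
        # A planning year precedes actual restructuring.
        return "restructuring: planning"
    return "restructuring: implementation"
```

Under this sketch, for example, a school that has missed AYP for four consecutive years would be in corrective action.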

Title III, Language Instruction for Limited English Proficient and Immigrant
Students, along with Title I, outlines additional accountability requirements
related to LEP students. Under Title III, states implement English language
proficiency (ELP) standards and assessments aligned with those standards. For
Title III subgrantees, states also define annual measurable achievement
objectives (AMAOs), which include targets for student progress in gaining and
attaining English language proficiency and AYP under Title I, as well as
consequences for subgrantees that repeatedly do not meet these targets.

Key Evaluation Questions for Accountability

1. What types of schools and districts are identified for improvement and thus
subject to NCLB accountability requirements?

2. What are the reasons schools do not make adequate yearly progress (AYP)?

3. What assistance is provided to districts and schools identified for improvement? What interventions are implemented in these districts and schools?

4. How fully have states and districts implemented other key accountability
provisions (such as unitary accountability systems, reporting, accountability
under Title III, etc.)?

A. School and District Identification for Improvement


Schools identified for improvement are the subject of many NCLB accountability
provisions, which makes understanding their characteristics and the reasons they
miss AYP vital to improvement efforts. Findings in this section pertain to all
schools and districts unless a specific focus on Title I schools is noted.

1. School Identification for Improvement

States had identified 11,530 schools for improvement in 2004-05, or 13 percent of the nation’s schools in the 50 states and the District of
Columbia. Title I schools accounted for more than three-fourths of all identified
schools, and the 9,028 identified Title I schools represented 18 percent of all Title
I schools. About three-quarters of the identified Title I schools were in their first
year or second year of improvement, with another 12 percent in corrective action
and 12 percent in restructuring status.63

The 9,028 identified Title I schools in 2004-05 represented a nearly 50 percent increase over the approximately 6,000 identified Title I schools
for 2002-03 and 2003-04 (see Exhibit 30). The number of Title I schools in
corrective action rose from 926 in 2003-04 to 1,047 in 2004-05, while the number
in restructuring status rose from 838 to 1,065. There was considerable change in
which schools were identified: Almost half of the Title I identified schools in 2004-
05 had not been identified the previous year, and one-fourth (23 percent) of Title I
identified schools in 2003-04 were no longer identified in 2004-05. Most of the
identified Title I schools in 2004-05 were in districts that had few identified Title I
schools; 72 percent of districts with identified schools had only one or two
identified schools.64

The numbers and percentages of schools identified for improvement
varied considerably across states (see Exhibit 31). States differ in the
content and rigor of their assessments and academic achievement standards as
well as other features of their accountability systems. As a result, variation across
states in the numbers and percentages of identified schools likely reflects
differences in state accountability systems as well as differences in student
achievement; states with more identified schools are not necessarily lower
performing than states with fewer identified schools. Seven states had identified
5 percent or fewer of their Title I schools, while six states had identified more than
one-third of their Title I schools in 2004-05. Similarly, the numbers of Title I schools
in corrective action or restructuring status varied by state, from none in several
states to more than 100 in a few states.65

Exhibit 30
Number and Percentage of Identified Title I Schools,
1996-97 to 2004-05

Year        Number of Identified     Percent of All
            Title I Schools          Title I Schools
1996-97     8,408                    20%
1997-98     8,348                    19%
1998-99     8,375                    18%
1999-00     7,353                    17%
2000-01     7,021                    16%
2001-02     6,441                    13%
2002-03     6,094                    12%
2003-04     5,963                    12%
2004-05     9,028                    18%

Exhibit reads: In 2004-05, 9,028 Title I schools had been identified for
improvement based on test scores for 2003-04 and earlier years; these identified
schools represented 18 percent of all Title I schools in that year.
Note: The first year that schools were identified for improvement based in part on NCLB AYP
definitions was 2003-04, based on assessments administered in 2002-03. However, schools are
identified when they miss AYP for two consecutive years, and 2004-05 was the first year that includes
schools identified because they missed NCLB AYP targets for two consecutive years.

Sources: Consolidated State Performance Reports (1996-97 to 2002-03); Study of State Implementation of Accountability and Teacher Quality Under NCLB (2003-04 and 2004-05) (based on data from 50 states and the District of Columbia).

Non-Title I identified schools represented 22 percent of all identified schools nationwide, and they accounted for more than half of all identified schools in 10 states. Thirty-four states reported that they would identify for improvement non-Title I schools that failed to make AYP based on 2003-04 testing, and these states identified 2,503 non-Title I schools for 2004-05.
Few states required NCLB consequences of public school choice and supplemental
services for identified non-Title I schools (three states each). Similarly, only eight
states had assigned non-Title I schools to corrective action status, and only eight
states had put non-Title I schools in restructuring status in 2004-05. Overall,
states had placed fewer than 300 non-Title I schools in corrective action or
restructuring.66

Most districts with identified schools had only a few, though a smaller number of districts had large numbers of identified schools. Of the 2,912 districts that had
one or more identified schools in 2004-05, nearly three-fourths (73 percent) of
these districts had only one or two identified schools. However, 4 percent of
districts with identified schools (129 districts) contained 13 or more identified
schools each.67

Middle schools were more commonly identified than either elementary schools or high schools. Eighteen percent of middle schools
were identified schools in 2004-05, compared with 11 percent each of
elementary and high schools. However, because elementary schools account
for a majority of all schools, they also account for a larger absolute number of
identified schools (5,498) compared with middle schools (2,877) and high
schools (1,879).68

Exhibit 31
Number and Percentage of Identified Schools, by State, 2004-05*

                     All Schools          Title I Schools      Title I Schools by Improvement Status
                     Number    Percent    Number    Percent    Year 1 or Year 2   Corrective Action   Restructuring
Total                11,530    13%        9,028     18%        6,916              1,047               1,065
Alabama 80 6% 80 9% 35 7 38
Alaska 179 36% 128 40% 112 8 8
Arizona 135 7% 135 13% 87 37 11
Arkansas 300 27% 195 24% 190 4 1
California 1,618 18% 1,618 29% 1,167 173 278
Colorado 87 7% 87 10% 57 27 3
Connecticut 134 12% 79 17% 71 0 8
Delaware 44 21% 16 15% 13 3 0
District of Columbia 96 47% 96 58% 82 14 0
Florida 964 29% 964 68% 964 0 0
Georgia 413 20% 249 26% 118 27 104
Hawaii 138 49% 88 62% 28 6 54
Idaho 71 10% 28 6% 28 0 0
Illinois 655 15% 655 27% 395 238 22
Indiana 77 4% 77 7% 49 18 10
Iowa 66 4% 13 2% 13 0 0
Kansas 21 1% 21 3% 17 3 1
Kentucky 134 10% 134 13% 128 6 0
Louisiana 570 37% 432 46% 389 27 16
Maine 51 7% 29 5% 29 0 0
Maryland 255 19% 113 24% 49 7 57
Massachusetts 391 20% 277 24% 233 20 24
Michigan 511 13% 126 18% 56 25 45
Minnesota 48 2% 43 4% 35 8 0
Mississippi 71 8% 71 10% 67 2 2
Missouri 130 6% 130 10% 122 8 0
Montana 69 8% 67 10% 30 4 33
Nebraska 46 4% 9 2% 8 1 0
Nevada 111 21% 46 20% 44 2 0
New Hampshire 61 13% 23 9% 22 1 0
New Jersey 520 22% 384 28% 287 97 0
New Mexico 182 23% 114 20% 50 35 29
New York 508 11% 508 19% 272 53 183
North Carolina 160 7% 160 14% 154 6 0
North Dakota 21 4% 21 5% 8 6 7
Ohio 487 13% 390 15% 300 31 59
Oklahoma 142 8% 113 9% 98 4 11
Oregon 214 17% 35 6% 31 2 2
Pennsylvania 629 20% 377 17% 301 76 0
Rhode Island 61 19% 32 21% 27 5 0
South Carolina 207 19% 207 39% 186 10 11
South Dakota 59 8% 57 16% 53 2 2
Tennessee 207 13% 108 13% 66 0 42
Texas* 198 3% 198 4% 196 2 0
Utah 16 2% 16 7% 14 2 0
Vermont 25 7% 17 8% 14 3 0
Virginia 111 6% 111 14% 103 8 0
Washington 156 7% 72 8% 57 15 0
West Virginia 37 5% 37 9% 36 0 1
Wisconsin 51 2% 35 3% 18 14 3
Wyoming 15 4% 7 4% 7 0 0

Note: This table shows data reported by 51 states from October 2004 to April 2005. Some states decided
appeals prior to this data collection, and others made appeal decisions later; for example, Texas later
approved more than 100 appeals, resulting in a final count of 91 identified schools. This chapter uses the
numbers that states reported for this data collection.

Source: Study of State Implementation of Accountability and Teacher Quality Under NCLB.

Schools with high concentrations of poor and minority students were much more likely to be identified than other schools. Just over one-third of schools with high percentages of students from low-income families or minority groups were identified schools in 2004-05, compared with under 5 percent of schools with low concentrations of these students (see Exhibit 32). Schools in urban areas and in large districts, along with schools with high concentrations of LEP students, also were identified at higher rates than other schools. About 20 percent of these schools were identified, compared with 10 percent or fewer of the schools in suburban or rural areas, in medium or small districts, or with medium or smaller concentrations of LEP students.69

Exhibit 32
Percentage of Identified Schools with Selected Characteristics, 2004-05

School Poverty:    >=75%: 36%     <35%: 4%
% Minority:        >=75%: 34%     <25%: 3%
% LEP:             >=5%:  18%     0%:   5%

Exhibit reads: In 2004-05, 36 percent of schools with poverty rates equal to or above 75 percent were identified for improvement, compared with 4 percent of schools with poverty rates below 35 percent.

Source: Study of State Implementation of Accountability and Teacher Quality Under NCLB (based on data reported by 51 states for 88,160 schools).

Minority students and students from low-income families were more likely to attend schools identified for improvement than were other
students. For example, 32 percent of African-American students, 28 percent of
Hispanic students, and 21 percent of Native American students attended schools
identified for improvement in 2004-05, compared with 9 percent of white
students. Similarly, 26 percent of students from low-income families attended
schools identified for improvement, compared with 17 percent of all students.70 In
absolute terms, the largest group of students in identified schools was students
from low-income families (4.4 million), followed by African-American students (2.5
million), white students (2.4 million), and Hispanic students (2.3 million). Overall,
7.8 million students attended identified schools in 2004-05.71

2. District Identification for Improvement

Ten percent of districts had been identified for improvement in 2004-05 (see Exhibit 33). Across the 40 states plus the District of Columbia that reported
they had identified districts, 1,511 districts had been identified for improvement
in 2004-05. Nineteen states had identified 10 percent or fewer of their districts,
and 12 states had identified a third or more of their districts. Among the
identified districts, 49 districts in 11 states were identified for corrective action.
Twenty-six percent of all students, or about 12.6 million students, attend schools
in identified districts across 48 states with available data.72

Large and urban districts with high concentrations of poor, minority, and
LEP students were more likely to be identified than other districts.
District size mattered most, with one-third of large districts identified in 2004-05,
compared with 17 percent of medium districts and 5 percent of small districts.
Nineteen percent of districts with high concentrations of minority students were
identified, compared with 5 percent of districts with low concentrations of
minority students; distributions were similar for other district characteristics, with
greater likelihood of identification for districts that were more urban and served
high concentrations of students from low-income families and LEP students.73

Approximately 32 percent of identified districts in 2004-05 (477 districts) contained no schools identified for improvement.74 Under NCLB,
schools and districts are held accountable for AYP targets only when they have at
least a minimum number of students in the subgroup categories. Because
district-level AYP calculations include students from all schools, districts may meet
the minimum subgroup sizes for certain groups of students even if none of their
schools do. If such groups do not make AYP at the district level while not counted
at the school level, the result will be that districts may be identified for
improvement when none of their schools are. Because assistance commonly
focuses on schools, this situation raises questions about how to provide support to
identified districts where no particular school has been designated as low-
performing under NCLB.
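This aggregation effect can be illustrated with invented numbers (a hypothetical minimum subgroup size of 40 is assumed; actual thresholds vary by state):

```python
MIN_SUBGROUP_SIZE = 40  # hypothetical minimum-n; actual values vary by state

def counted_for_ayp(subgroup_enrollment):
    """A subgroup's achievement counts toward AYP only when it meets
    the state's minimum subgroup size."""
    return subgroup_enrollment >= MIN_SUBGROUP_SIZE

# 25 LEP students at each of two hypothetical schools: below the
# threshold at each school, but above it when pooled district-wide.
school_a, school_b = 25, 25
district_total = school_a + school_b  # 50

# Neither school is accountable for the subgroup, yet the district is,
# so a district can miss AYP for a subgroup no school is rated on.
```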

Exhibit 33
Number and Percentage of Identified Districts, by State, 2004-05*

State                     Number   Percent      State                     Number   Percent

Total                     1,511    10%
Alabama 0 0% Montana 56 12%
Alaska 31 58% Nebraska 4 1%
Arizona 74 23% Nevada 9 53%
Arkansas 0 0% New Hampshire 15 8%
California* 14 <1% New Jersey 28 5%
Colorado 57 32% New Mexico 0 0%
Connecticut 39 23% New York 60 9%
Delaware 0 0% North Carolina 41 35%
District of Columbia 1 100% North Dakota 13 6%
Florida 67 100% Ohio 49 8%
Georgia 12 7% Oklahoma 22 4%
Hawaii 0 0% Oregon 15 8%
Idaho 44 39% Pennsylvania 175 35%
Illinois 248 28% Rhode Island 6 17%
Indiana 22 7% South Carolina 68 76%
Iowa 9 2% South Dakota 5 3%
Kansas 7 2% Tennessee 25 18%
Kentucky 53 30% Texas 0 0%
Louisiana 0 0% Utah 21 53%
Maine 0 0% Vermont 7 2%
Maryland 9 38% Virginia 80 59%
Massachusetts 14 4% Washington 29 10%
Michigan 0 0% West Virginia 27 49%
Minnesota 17 4% Wisconsin 1 <1%
Mississippi 36 24% Wyoming 1 2%
Missouri 0 0%

Exhibit reads: In 2004-05, 1,511 districts were identified for improvement, representing 10 percent of all districts.

Note: This table shows data reported by 51 states from October 2004 to April 2005. Some states
decided appeals prior to this data collection, and others made appeal decisions later; for example,
California later increased its number of identified districts to 58. This chapter uses the numbers that
states reported for this data collection.

Source: Study of State Implementation of Accountability and Teacher Quality Under NCLB.

B. Adequate Yearly Progress Ratings for Schools and
Districts
In determining whether a school or district makes adequate yearly progress (AYP),
NCLB requires states to consider state assessment results, student participation
rates in assessments, and an “other academic indicator.” For state assessment
results, states must set absolute annual targets that lead to the goal of all
students achieving proficiency in reading and mathematics by 2013-14. For test
participation, schools and districts must assess at least 95 percent of their
students in order to make AYP. For the other academic indicator, states must use
graduation rates for high schools, but they have flexibility in selecting a measure
for elementary and middle schools.
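A minimal sketch of these three components for a single group in a single subject (illustrative only; the names are invented, and real AYP definitions add safe harbor, confidence intervals, and other state-specific provisions):

```python
def makes_ayp_component(pct_proficient, annual_target, pct_tested,
                        other_indicator_met):
    """Illustrative AYP check for one group in one subject: the
    achievement target, the 95 percent test-participation requirement,
    and the other academic indicator must all be met."""
    meets_achievement = pct_proficient >= annual_target
    meets_participation = pct_tested >= 95.0
    return meets_achievement and meets_participation and other_indicator_met
```

For instance, a group that exceeds its achievement target but tests only 94 percent of its students would still miss AYP under this sketch.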

Calculating AYP separately for key subgroups of students is a key feature of the
NCLB accountability system. AYP must be calculated for up to nine student
groups in a school or district: all students, five major racial and ethnic groups,
economically disadvantaged students, students with disabilities, and LEP
students. To enhance validity and reliability, states also define a minimum
subgroup size that must be met before AYP is calculated for that subgroup for a
school or district. The number of AYP targets a school must meet will vary by
state definitions of AYP, enrollment size, and the demographic composition of the
school. Schools that serve diverse populations often must meet more AYP targets
than those whose enrollments are homogeneous.

NCLB includes a “safe harbor” provision: Schools may make AYP if the percentage
of students in a subgroup that did not meet the AYP target decreases by 10
percent from the preceding school year, and if the school makes AYP for the
relevant subgroup for the other academic indicator and participation rate. In
addition, regulations allow states and districts to count as proficient scores of
students with disabilities who take alternate assessments based on alternate
achievement standards, as long as the number of those proficient scores does not
exceed one percent of all students in the grades assessed.7
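Under safe harbor, for example, a subgroup that was 40 percent proficient in the prior year (60 percent not proficient) would qualify if the non-proficient share fell by at least 10 percent of itself, to 54 percent or less, i.e., if proficiency rose to at least 46 percent. A sketch of that arithmetic (function name invented; the accompanying participation and other-indicator conditions are omitted):

```python
def meets_safe_harbor(prior_pct_proficient, current_pct_proficient):
    """Safe harbor: the percentage of students NOT proficient must
    fall by at least 10 percent (of itself) from the prior year."""
    prior_not_proficient = 100.0 - prior_pct_proficient
    current_not_proficient = 100.0 - current_pct_proficient
    # A 10 percent relative reduction leaves at most 90 percent of
    # last year's non-proficient share.
    return current_not_proficient <= 0.90 * prior_not_proficient
```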

States first applied NCLB AYP definitions to state assessment results from 2002-
03; these determinations first affected schools that were identified for
improvement for the following year, 2003-04. Findings below include all schools
(both Title I and non-Title I schools).

7 The U.S. Department of Education has provided additional flexibility concerning AYP determinations for
LEP students and students with disabilities. Flexibility for LEP students, which became effective in
February 2004, includes permitting LEP students in their first year of enrollment in U.S. schools to take
an English language proficiency assessment instead of the state reading assessment; permitting states
to exclude those students’ reading and mathematics scores from AYP calculations; and permitting states
to retain formerly LEP students in the LEP subgroup for AYP calculations for up to two years after they
attain English proficiency. New flexibility for students with disabilities, announced in May 2005, will allow
eligible states to make adjustments to their AYP decisions based on 2004-05 assessment results to
reflect the need for modified achievement standards for students with disabilities who may not be able
to reach grade-level standards in the same timeframe as other students, while the Department proposes
new regulations to address this matter.

1. Schools and Districts Making Adequate Yearly Progress

Three-fourths (75 percent) of all schools and 71 percent of districts met all applicable AYP targets in 2003-04 testing. The percentage of schools
that made AYP varied greatly among states, ranging from 23 percent to 95
percent of schools in a state. The number of schools missing AYP (21,540) based
on 2003-04 testing is nearly double the number of schools identified for
improvement for 2004-05 (11,530). If many non-identified schools that did not
make AYP in 2003-04 testing miss AYP again the following year, the number of
identified schools could rise substantially in 2005-06.75

2. Reasons Schools Do Not Make Adequate Yearly Progress

Schools most commonly missed AYP for the achievement of all students and/or multiple subgroups in 2003-04; only in a minority of cases did schools miss just one AYP target. Failure to meet AYP for the all students group or for multiple subgroups suggests systemic low performance of a school, especially when it misses AYP in multiple subjects.

Based on data from 33 states, among schools that missed AYP in 2003-04, 33 percent did not meet achievement targets for the “all students” group in reading and/or mathematics; 15 percent missed for all students in both subjects.76 An additional 18 percent of these schools missed AYP for the achievement of two or more subgroups although they made AYP for the all students group (see Exhibit 34).

Exhibit 34
Reasons Schools Missed AYP, 2003-04

Achievement of the “all students” group          33%
Achievement of two or more subgroups             18%
Achievement of a single subgroup only            23%
Other academic indicator alone                    7%
Test participation alone                          6%
Other                                            13%

Exhibit reads: In 2003-04 testing, 33 percent of schools that missed AYP missed for the achievement of the all students group in reading and/or mathematics.

Notes: Schools included in the “Achievement of the ‘All Students’ Group” and the “Achievement of Two or More Subgroups” segments may have also missed AYP for test participation or the other academic indicator. However, schools included in the “Achievement of a Single Subgroup Only” segment are those that missed AYP for that factor alone and did not miss any other AYP targets. “Other” includes schools that missed AYP for combinations of the achievement of a single subgroup, test participation, and/or the other academic indicator (8 percent), or through a small school analysis (5 percent).

Source: Study of State Implementation of Accountability and Teacher Quality Under NCLB (based on data reported by 33 states for 15,731 schools that missed AYP in these states).

Less than one-fourth (23 percent) missed AYP solely due to the
achievement of a single subgroup. One-fifth (20 percent) missed AYP due to the
“other academic indicator” (and 33 percent of high schools missed AYP for their
other academic indicator, graduation rate), but only 7 percent missed AYP for the
other academic indicator alone. More than one-fourth (29 percent) missed AYP for
test participation, but only 6 percent missed AYP solely because of their test
participation rates. The remaining 13 percent of schools that missed AYP missed
for other combinations of AYP targets.77

Overall, by subject area, 64 percent of schools that missed AYP missed for a
reading achievement target and 58 percent missed for a target in mathematics,
either for all students or a subgroup, while 42 percent missed AYP in both
subjects.78
A majority of students from most racial and ethnic groups and from low-
income families attended schools held accountable for the performance
of their subgroups. Across 34 states with available data, nearly 80 percent or
more of all white, African-American, and Hispanic students, as well as students
from low-income families, attended schools where AYP was calculated for these
subgroups based on 2003-04 test results. Percentages were smaller for Native
American and Asian students (25 percent and 45 percent, respectively).79
The numbers of subgroups for which AYP was calculated and the numbers of grades tested both appear related to the likelihood of a school making AYP. In schools where AYP was calculated for African-American students, LEP students, or students with disabilities, more than one-fifth did not make AYP for those subgroups in 2003-04 testing (see Exhibit 35). For example, in schools for which AYP was calculated for students with disabilities, 37 percent missed AYP for that subgroup. Schools with LEP and African-American subgroups missed AYP for those subgroups in 26 and 22 percent of the cases, respectively. Schools with subgroups of students from low-income families, Hispanic students, or Native American students were somewhat less likely to miss AYP for those subgroups (12 to 15 percent). Schools were much less likely to miss AYP due to low achievement of white or Asian students (1 percent and 4 percent, respectively).80

Exhibit 35
Percentage of Schools Held Accountable for Subgroup That Missed AYP for Achievement for That Subgroup, 2003-04

Low-Income Students: 14%
LEP Students: 26%
Students with Disabilities: 37%
African-American: 22%
Native American/Alaska Native: 15%
Asian: 4%
Hispanic: 12%
White: 1%

Exhibit reads: In 2003-04 testing, 14 percent of schools that had the minimum number of students from low-income families necessary for AYP to be calculated for this subgroup at the school missed AYP for this subgroup.

Source: Study of State Implementation of Accountability and Teacher Quality Under NCLB (based on data reported by 34 states for 68,638 schools in these states).
Schools that were held accountable for more subgroups were less likely
to make AYP. Among schools for which AYP was calculated for six or more
subgroups, 39 percent did not make AYP, compared with 10 percent of schools for
which AYP was calculated based on only one subgroup. In high-poverty schools
that had six or more subgroups, 66 percent missed AYP, compared with 35
percent of low-poverty schools that had six or more subgroups. Similarly, the
numbers of grades tested also appeared related to the likelihood of a school
making AYP. In states where the scores of students in grade 3 through grade 8
and at least one high school grade were used to determine AYP for 2003-04,
schools were less likely to make AYP (71 percent) than were schools in states that
used test results from fewer grades (82 percent).81
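Mechanically, these findings follow from how AYP aggregates targets: a school makes AYP only if every subgroup large enough to be counted meets every target, so each additional accountable subgroup adds another way to miss. The sketch below illustrates that conjunction logic; the minimum-n value, proficiency target, and school data are all hypothetical, not figures from the report or from any state's actual AYP rules.

```python
# Illustrative sketch (hypothetical values): AYP as a conjunction of targets.
# A subgroup counts toward AYP only if it meets a minimum-size rule; the
# school makes AYP only if every accountable subgroup meets the target.

MIN_SUBGROUP_SIZE = 30      # hypothetical minimum n; actual values varied by state
PROFICIENCY_TARGET = 0.40   # hypothetical annual measurable objective

def makes_ayp(subgroups):
    """subgroups maps a name -> (students_tested, share_proficient)."""
    accountable = {name: share
                   for name, (n, share) in subgroups.items()
                   if n >= MIN_SUBGROUP_SIZE}
    # Every accountable subgroup must meet the target for the school to make AYP.
    return all(share >= PROFICIENCY_TARGET for share in accountable.values())

school = {
    "all_students": (400, 0.55),
    "low_income": (120, 0.45),
    "students_with_disabilities": (35, 0.30),  # accountable, misses the target
    "asian": (12, 0.80),                       # below minimum n, not accountable
}
print(makes_ayp(school))  # False: one accountable subgroup missed its target
```

With six or more accountable subgroups, the `all(...)` check must clear six or more hurdles at once, which is consistent with the finding that schools held accountable for more subgroups were less likely to make AYP.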
3. Other Academic Indicators in Adequate Yearly Progress

States commonly selected attendance as the other academic indicator
for elementary and middle schools. Thirty-eight states used attendance rate
as the other academic indicator for elementary and middle schools in 2003-04,
and 14 states selected other indicators, such as other measures of academic
performance (e.g., assessment results in writing or science).82

Twenty percent of all schools that did not make AYP missed for the other
academic indicator in 2003-04, and one-third of all high schools missed
AYP for their other academic indicator, graduation rate. States varied
considerably in the percentage of high schools that missed AYP for graduation
rate targets, from zero to 82 percent of high schools across states. Elementary
and middle schools less frequently missed AYP for this reason (10 percent). In
many states (22), fewer than 10 percent of elementary or middle schools missed
AYP for the other academic indicator, though percentages in other states ranged
from 11 percent to 64 percent.83

4. Other Factors Affecting AYP Designations: Alternate Assessment Scores and Appeals

For limited numbers of students with disabilities, regular assessments, even with
accommodations, are not appropriate. Title I regulations allow for students with
the most significant cognitive disabilities to be assessed on alternate assessments
based on alternate achievement standards; states and districts may count as
proficient for AYP the scores of such students as long as those proficient scores
do not exceed 1.0 percent of all students tested. The regulations also allow
states and districts to receive exceptions to exceed the 1.0 percent cap.
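The 1.0 percent cap works as a simple threshold against the count of all students tested. Below is a minimal sketch of that rule; the function name, numbers, and the boolean exception flag are illustrative, not drawn from the Title I regulations or the report.

```python
# Illustrative sketch (hypothetical names and numbers): the 1.0 percent cap
# on alternate-assessment scores counted as proficient for AYP. Proficient
# scores above the cap are not counted unless an exception has been granted.

CAP_SHARE = 0.01  # 1.0 percent of all students tested

def countable_alt_proficient(total_tested, alt_proficient, has_exception=False):
    """Alternate-assessment proficient scores that may count toward AYP."""
    if has_exception:
        return alt_proficient            # exception granted: cap may be exceeded
    cap = int(total_tested * CAP_SHARE)  # cap measured against all students tested
    return min(alt_proficient, cap)

# A district with 10,000 students tested and 130 proficient alternate scores:
print(countable_alt_proficient(10_000, 130))                      # 100 (capped)
print(countable_alt_proficient(10_000, 130, has_exception=True))  # 130
```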

Nearly all states included the scores of students assessed using
alternate assessments based on alternate achievement standards in
their AYP calculations, but few states or districts used waivers to exceed
the 1.0 percent cap. For AYP determinations for 2003-04 testing, 49 states
included the scores of students assessed using alternate assessments for their
AYP calculations. For the 1.0 percent cap, three states were granted exceptions for
2003-04 testing by the U.S. Department of Education, and 18 states reported
granting such exceptions to districts. Among the states that granted this
flexibility to districts, only six could report on the number of exceptions they
granted (a total of approximately 134 exceptions).84

Appeals of AYP designations were common in fall 2004. Under NCLB,
schools and districts identified for improvement are allowed to appeal the
determinations if they believe the identification was made in error. Several states
could not easily provide data on school and district appeals of AYP designations.
Among the 38 states that reported on appeals of school AYP designations based
on 2003-04 testing, approximately 2,580 schools appealed their designations, and
states approved 44 percent of these appeals. The numbers of appeals from
schools by state ranged from zero to more than 300, with the percentages of
approvals ranging from zero to 100 percent. At least four states approved more
than 100 appeals for schools. Thirteen of 32 states also said that one or more
districts had appealed their AYP designations based on 2003-04 testing, and these
states approved 50 percent of the 236 district appeals. Most appeals involved
either errors in data or the misclassification of students by subgroup.85

C. Communication of School Performance Results
Many of the accountability provisions for identified schools in NCLB hinge on local
understanding of school performance, both for motivating school staff members
to pursue improvement efforts and for providing information on the effectiveness
of their children’s schools that parents can use. Communication about school,
district, and state performance generally appears to be occurring as outlined in
NCLB, with shortcomings mainly in the areas of timeliness, data newly required
for report cards, and preparation of district report cards.

1. Timing of School Identification

All states notified schools about their identification status for 2004-05
based on 2003-04 testing, and a majority of states provided preliminary
results before September 2004, but 20 states did not, and only 15 states
provided final results by that time. In order to be able to plan for the
upcoming school year, including school improvement planning and notifying
parents about choice options for children in identified schools, school districts and
schools need to receive notification of their school improvement status prior to
the beginning of the school year. This is also a requirement under Title I
regulations. To enable such planning at the local level, many states first notified
schools whether they had been identified for improvement based on preliminary
data, then followed up with final data later. Thirty-one states reported releasing
preliminary data on whether schools were identified for improvement based on
2003-04 testing before September 2004, though six did not release preliminary
data until November 2004 or later. Fifteen states released final data on school
identification status prior to September 2004, whereas 21 states did not notify
schools of their final designation for 2004-05 until November 2004 or later.86
Correspondingly, among principals who indicated their school was identified, 57
percent reported learning of their improvement status before September 2004,
with 35 percent of identified schools being notified in August. Only 9 percent of
identified schools reported they did not learn of their identification status until
November 2004 or later.87

Consistent with findings prior to NCLB, principals of identified schools
sometimes were not aware their school had been identified for
improvement. Much of school improvement under NCLB hinges on principals in
identified schools pursuing school improvement efforts in response to being
identified, and this requires principals’ awareness of their schools’ status. In
2004-05, only 78 percent of principals of identified Title I schools correctly
reported that their school had been identified for improvement, although this is
up from 59 percent in 2001-02.88
2. Report Cards
State report cards released in 2004-05 (presenting data from 2003-04)
generally included the percentages of all students and student
subgroups scoring at proficient levels in reading and mathematics, as
well as the percentage tested, but did not universally include other
required elements. Only for the migrant student subgroup did states less
commonly report the percentage of students scoring at proficient levels (37
states). About two-thirds of state report cards included data on other academic
indicators used for AYP, and under half of state report cards contained
comparisons of student achievement to AYP targets, graduation rates
disaggregated for student subgroups, and required data on the professional
qualifications of teachers.89

About a third of Title I districts did not prepare district report cards in
2003-04. District report cards commonly included assessment data along with
attendance and graduation rates, but half or fewer of district report cards
included data on teacher quality, a new requirement under NCLB, and on the
number and percentage of district Title I schools identified for improvement.90

Though not required under NCLB, most districts also reported preparing
school report cards. Eighty-seven percent of Title I districts reported preparing
school report cards in 2003-04, up from 81 percent in 2002-03, but down from 93
percent in 2001-02. School report cards for 2003-04 more commonly included
data on whether a school was identified for improvement than they had in 2002-
03 (69 percent vs. 35 percent of Title I districts with school report cards).91

D. School Improvement Efforts and Assistance for Identified Schools and Districts
Critical to success in improving schools through the accountability provisions in
No Child Left Behind is ensuring that schools and districts identified for
improvement have the resources and support necessary for them to improve
instruction and make adequate yearly progress for all of their student groups.
This report examines the implementation of systems of support designed to bring
about such school improvement as well as school-level improvement efforts.
General findings regarding identified schools in this section refer to all schools
identified for improvement, both Title I schools and non-Title I schools, whereas
findings presented specifically for identified Title I schools relate to particular
NCLB requirements that may not apply to non-identified schools.

1. State and District Assistance for Identified Schools

Almost all states had implemented a statewide system of support for
identified schools by fall 2004, as required under NCLB. Overall, 37 states
provided support to identified schools through some version of a school support
team. Another 29 states included individual educational professionals in their
statewide systems of support. While these individuals may serve the roles NCLB
outlines for distinguished principals and distinguished teachers, their work was
often defined in more general terms (e.g., school improvement specialists,
principal mentors, coaches). School support teams and specialized individuals in many of
these states serve non-Title I identified schools as well as Title I identified schools
(19 states and 17 states, respectively). Twenty states incorporated a triage
approach in which the level of support they provided to identified schools was
attuned to the severity of individual schools’ needs.92

Though states often used multiple strategies to improve identified schools, most
states focused their efforts on one of five primary support strategies: school
support teams (19 states); specialized individuals (13 states); regional centers,
area educational agencies, or county offices (9 states); providing resources or
hosting statewide meetings (6 states); or depending on districts to provide
support (3 states).93

Many states reported providing assistance to large proportions of their
identified schools, and many reported this was a challenge. Thirty-nine
states reported they were able to provide some level of support to all of their
identified schools in 2004-05, and 24 states provided support to Title I and non-
Title I identified schools.94 However, most states (42) reported that providing
assistance to all schools identified for improvement was a moderate or serious
challenge in 2003-04. Most states also reported related resource limitations as
moderate or serious challenges to implementing NCLB, including adequacy of
state education agency staff size (45 states), adequacy of state funds (40 states),
adequacy of federal funds allocated for state-level implementation (39 states),
and the adequacy of state education agency staff expertise (30 states).95 Twenty-
one states noted that an important objective of their statewide system of support
involved building district capacity so that districts would be better able to provide
support to identified schools in the future.96

Schools identified for improvement more commonly experienced a
higher intensity of support than other schools, although many districts
with identified schools provided a variety of assistance to both
identified and non-identified schools. Among districts with identified schools
in 2003-04 or 2004-05, a majority reported providing assistance to some or all of
their identified schools in such areas as school planning (87 percent), analyzing
assessment data (83 percent), and identifying effective curricula, instructional
strategies, or school reform models (65 percent). Districts were no more likely to
provide such assistance to identified schools than to other schools in their
districts that were not low-performing. For each type of support studied, at least
some districts with identified schools reported not providing the assistance. For
example, 30 percent provided no assistance with identifying effective curricula,
instructional strategies, or school reform models, and 13 percent did not provide
assistance in analyzing assessment results, even though NCLB requires districts
to provide identified schools with assistance in these areas.97
Identified schools, however, reported receiving more days of assistance from their
districts than non-identified schools. For 2004-05 and the previous school year,
75 percent of identified schools reported six or more days of assistance from their
districts, compared with 56 percent of non-identified schools. Forty-eight percent
of identified schools received at least 11 days of assistance, and 25 percent
received more than 25 days of assistance.98

Larger districts more commonly provided assistance of various kinds to
identified schools than smaller districts. For example, in 2003-04, larger
districts more commonly provided identified schools with an extensive range of
assistance on topics related to planning and data use than other districts, and
they were more likely to sponsor professional development on an extensive range
of topics.99 Similarly, school support teams sponsored by larger districts spent
more days in identified schools than those sponsored by smaller districts. In
2002-03, two-thirds of very large districts reported employing more than one full-
time equivalent (FTE) staff member per identified school to provide assistance to
those schools, compared with one-third of small districts.100

Less than three-quarters of districts with identified schools reported
having the staff, expertise, time, or money to improve identified schools.
Among high-poverty districts (those with 50 percent or more of their students
eligible for free or reduced-price lunches), only 51 percent reported that they had
expertise available (either “somewhat” or “to a great extent”) to improve
identified schools. Similarly, 54 percent of high-poverty districts reported having
the staff needed to improve identified schools, while 44 percent reported
sufficient time and 20 percent reported sufficient money. High-minority districts
showed similar patterns.101

Identified schools most frequently reported needing assistance to
improve the quality of teachers’ professional development (80 percent)
(see Exhibit 36). Most identified schools needing this assistance reported that
they received it (91 percent), and about three-fourths of the schools receiving this
assistance reported that it was sufficient. Other areas where schools frequently
reported needing assistance included getting parents more engaged in their
child’s education (74 percent), addressing the instructional needs of students with
disabilities (71 percent), identifying effective curricula and instructional strategies
(70 percent), and improving students’ test taking skills (70 percent). Most
identified schools reported that they received the assistance they needed in these
areas, and about 70 percent or more thought the assistance they received was
sufficient. However, only about half of identified schools (51 percent) reported
receiving needed technical assistance on parent involvement, and only 53 percent
of those that needed and received such assistance reported that the assistance
they received was sufficient.102

Identified schools were much more likely to report needing assistance in
a variety of specific areas compared with non-identified schools. For
example, 80 percent of identified schools reported needing technical assistance to
improve the quality of professional development, compared with 53 percent of
non-identified schools. Similarly, 74 percent of identified schools reported
needing assistance to get parents more engaged in their child’s education,
compared with 46 percent of non-identified schools.103 Across all types of
assistance shown in Exhibit 36, about half or less of non-identified schools
reported needing assistance.
Exhibit 36
Percentage of Non-Identified and Identified Schools That Reported Needing Various Types
of Technical Assistance and Whether Identified Schools Received Assistance, 2003-04 to 2004-05

Columns: (1) percent of non-identified schools that needed assistance (n = 881); (2) percent of identified schools that needed assistance (n = 430); (3) percent of identified schools needing assistance that received it (n = 212 to 343); (4) percent of identified schools reporting that assistance received when needed was sufficient (n = 147 to 313).

Improve quality of teachers’ professional development: 53%, 80%*, 91%, 74%
Get parents more engaged in their child’s education: 46%, 74%*, 51%, 53%
Address instructional needs of students with IEPs: 49%, 71%*, 72%, 69%
Identify effective curricula, instructional strategies, or school reform models: 54%, 70%*, 92%, 72%
Improve students’ test taking skills: 32%, 70%*, 71%, 71%
Analyze assessment results to understand students’ strengths and weaknesses: 41%, 68%*, 92%, 94%
Identify or develop detailed curriculum guides, frameworks, pacing sequences, and/or model lessons aligned with state standards: 49%, 62%*, 93%, 67%
Develop or revise school improvement plan: 28%, 62%*, 89%, 89%
Recruit, retain, or assign teachers in order to staff all classes with a teacher who is “highly qualified”: 28%, 62%*, 76%, 80%
Address problems of student truancy, tardiness, discipline, and dropouts: 37%, 57%*, 68%, 42%
Implement the provisions of NCLB relating to “qualified” paraprofessionals: 38%, 52%*, 86%, 95%
Address instructional needs of LEP students: 37%, 49%*, 69%, 71%

Exhibit reads: Fifty-three percent of non-identified schools reported needing technical
assistance to improve the quality of teachers’ professional development, compared with
80 percent of identified schools. Among identified schools that reported needing such
technical assistance, most schools (91 percent) reported receiving assistance in this
area, and about three-quarters (76 percent) of the schools that received this assistance
reported that the assistance was sufficient.
*Indicates that identified schools were significantly more likely to report needing assistance than non-
identified schools (p<.05).
Source: National Longitudinal Study of NCLB, Principal Survey.
2. School Improvement Strategies

The most common improvement strategies reported by identified
schools involved placing a major focus on using achievement data to
inform instruction (82 percent) and providing additional instruction to
low-achieving students (78 percent). Other common strategies included a
major focus on aligning curricula and instruction with standards and assessments
(72 percent), new instructional approaches or curricula in reading and
mathematics (61 percent and 59 percent, respectively), and increasing the
intensity, focus, and effectiveness of professional development (60 percent).
Eighty-three percent of identified schools also reported developing a school
improvement plan. Reports from identified schools showed technical assistance
often being provided in these areas. Though many non-identified schools
reported similar school improvement strategies, they were less likely to report a
major focus on most activities (see Exhibit 37).104

Exhibit 37
Percentage of Schools Reporting Major Focus
on Various School Improvement Strategies, 2004-05

Columns: (1) identified schools (n = 430); (2) non-identified schools (n = 881).

Using student achievement data to inform instruction and school improvement: 82%*, 67%
Providing additional instruction to low-achieving students: 78%*, 60%
Aligning curriculum and instruction with standards and/or assessments: 72%, 70%
Implementing new instructional approaches or curricula in reading/language arts/English: 61%*, 49%
Increasing the intensity, focus, and effectiveness of professional development: 60%*, 42%
Implementing new instructional approaches or curricula in mathematics: 59%*, 41%
Restructuring the school day to teach core content areas in greater depth: 52%*, 31%
Providing extended-time instructional programs: 51%*, 31%
Implementing strategies for increasing parents’ involvement in their children’s education: 32%*, 13%
Increasing instructional time for all students: 26%*, 13%

Exhibit reads: In 2004-05, 82 percent of identified schools reported a major
focus on using student achievement data to inform instruction and school
improvement, compared with 67 percent of non-identified schools.
*Indicates statistically significant difference between identified and non-identified schools (p<.05).

Source: National Longitudinal Study of NCLB, Principal Survey.
Teacher reports also indicate widespread use of state assessment
results to support instruction and school improvement. For example,
about 70 percent of reading and mathematics teachers reported using 2003-04
state assessment data moderately or extensively for identifying and correcting
gaps in the curriculum and for identifying areas where they needed to strengthen
their knowledge or skills. In 2004-05, teachers in identified schools were more
likely than teachers in non-identified schools to report using the previous year’s
state assessment data in several ways. For example, among teachers who teach
mathematics at either the elementary or secondary level, 74 percent of those in
identified schools reported using the state assessment data to identify students
who need remedial assistance, compared with 60 percent in non-identified
schools. Similarly, mathematics teachers in identified schools were more likely to
report using the information to tailor instruction to individual student needs (75
percent vs. 61 percent) and to recommend tutoring or other educational services
for students (60 percent vs. 45 percent). Similar patterns were reported for these
uses of state reading assessments.105

Identified schools at both the elementary and secondary levels reported
increasing the amounts of instructional time devoted to reading and
mathematics. About half (51 percent) of identified schools reported a major
focus on using extended time instructional programs (such as after-school
programs) and 26 percent reported an increase in instructional time for all
students. Nearly one-third (30 percent) of identified elementary schools reported
increasing the amount of instructional time devoted to reading by more than 30
minutes in 2004-05, and 17 percent reported a similar increase in instructional
time for mathematics (see Exhibit 38). Non-identified schools less commonly
reported such increases in instructional time (13 percent for reading and 8
percent for mathematics).106

Exhibit 38
Change in Instructional Time Per Day at Elementary Schools,
by Subject Area, 2003-04 to 2004-05

Columns: (1) identified schools (n = 430), decreased more than 30 minutes; (2) identified schools, increased more than 30 minutes; (3) non-identified schools (n = 881), decreased more than 30 minutes; (4) non-identified schools, increased more than 30 minutes.

Reading: 0%, 30%*, 0%, 13%*
Mathematics: 0%, 17%*, 0%, 8%*
Science: 1%, 5%, 0%, 4%
Social studies: 3%, 1%, 1%, 1%
Art/music: 3%, 1%, 1%, 0%
Physical education/health: 2%, 2%, 1%, 0%
Other: 1%, 4%, 3%, 0%
Exhibit reads: Thirty percent of identified schools reported that instructional
time spent per day on reading in their schools increased more than 30 minutes
from 2003-04 to 2004-05, compared with 13 percent of non-identified schools.

*Indicates significant difference between identified and non-identified schools (p<.05).

Source: National Longitudinal Study of NCLB, Principal Survey.
At the secondary school level, about two-fifths of all schools reported increasing
instructional time for low-achieving students in reading (40 percent) and
mathematics (42 percent), with identified secondary schools reporting increasing
time for reading at significantly higher rates than non-identified schools
(55 percent vs. 36 percent). Smaller percentages of secondary schools reported
increasing instructional time in science (13 percent) and social studies
(11 percent) for low-achieving students. Few elementary or secondary schools
reported decreasing instructional time in any subject.107

Some identified schools have not followed NCLB requirements for school
improvement planning and professional development for identified
schools. Since they were first identified, only 82 percent of identified Title I
schools in 2004-05 had developed a joint school improvement plan with their
district or state, despite the requirement that all identified Title I schools do so.108
Similarly, only 89 percent of districts required identified Title I schools to spend at
least 10 percent of their Title I allocation on professional development in 2003-04,
though this represents an increase over the 79 percent of districts that
implemented this requirement in 2002-03.109

3. Corrective Actions and Restructuring for Identified Schools

States placed 1,047 Title I schools in corrective action and 1,065 Title I schools in
restructuring for 2004-05, making these schools subject to the particular menus
of interventions outlined in NCLB.

Title I schools in corrective action status almost universally experienced
the interventions NCLB defines for schools in this stage of improvement.
One or more corrective actions were implemented in 95 percent of Title I schools
in corrective action status in 2004-05. The most common corrective actions
experienced by Title I schools in this status resembled forms of technical
assistance rather than sanctions. For example, 90 percent of Title I schools in
corrective action reported that they were required to implement new research-
based curricula or instructional programs and 58 percent had an outside expert
appointed to advise the school, while 27 percent reported that management
authority at the school level had been significantly reduced and 5 percent
reported replacement of school staff members relevant to the school’s low
performance (see Exhibit 39).110

Many of the interventions that NCLB defines as corrective actions were also
implemented in schools in earlier stages of identification for improvement. For
example, 66 percent of schools in their second year of improvement were
required to implement new research-based curricula or instructional programs.111

Very few Title I schools in restructuring status in 2004-05 reported
experiencing any of the mandated interventions (see Exhibit 39). This may
in part reflect the two stages of restructuring status, where schools in this status
first spend a year planning for restructuring and then implement the restructuring
the following year. Few principals of schools in the first or second year of

74
restructuring status reported state take-over of the school (7 percent), re-opening
of the school as a public charter school (2 percent), contracting with a private
entity to manage the school (2 percent), or replacement of all of the school staff
(2 percent).112 About one-fourth (24 percent) of schools in restructuring status
reported that a new principal had been appointed, although this may partly reflect
normal principal turnover, as similar percentages of schools in other stages of
improvement status also reported this.8

8 The NLS-NCLB survey question did not exactly parallel the law on one intervention: the law gives the
option of “replacing all or most of the school staff (which may include the principal) who are relevant to
the failure to make adequate yearly progress,” while the survey asked if the state or district had
“replaced all of the school staff” or “appointed a new principal.”
Exhibit 39
Percentage of Identified Title I Schools Experiencing Various Types of Interventions
Since Identification for Improvement, 2004-05

Columns: (1) percent of schools in Year 1 of improvement (n = 199); (2) percent of schools in Year 2 of improvement (n = 74); (3) percent of schools in corrective action (n = 50); (4) percent of schools in restructuring (n = 76).

Actions Required for All Identified Schools
Parents were notified of schools’ improvement status: 89%, 96%, 96%, 100%
District or state developed a joint improvement plan with the school: 81%, 73%, 92%, 83%
Students were offered the option to transfer to a higher-performing school, with transportation provided: 82%, 75%, 88%, 98%

Action Required for Identified Schools That Miss AYP After Identification
Eligible students were offered supplemental educational services from a state-approved provider: 46%, 90%, 94%, 100%

Corrective Actions
Implemented a new research-based curriculum or instructional program: 48%, 66%, 90%, 70%
Significantly decreased management authority at the school level: 4%, 5%, 27%, 26%
Appointed outside expert to advise the school: 30%, 34%, 58%, 57%
Extended length of school day: 24%, 29%, 42%, 31%
Extended length of school year: 9%, 15%, 32%, 16%
Restructured internal organization of the school: 12%, 22%, 21%, 33%
Replaced school staff members relevant to school’s low performance: 2%, 17%, 5%, 18%

Restructuring Interventions
Reopened the school as a public charter school: 0%, 0%, 0%, 2%
Entered into a contract with a private entity to manage the school: 0%, 1%, 0%, 2%
Operation of school turned over to state: 2%, 0%, 1%, 7%
Replaced all of the school staff: 0%, 1%, 0%, 2%
Appointed new principal: 21%, 20%, 18%, 24%

Exhibit reads: In 2004-05, 89 percent of schools in their first year of being


identified for improvement reported that parents had been notified of the
school’s improvement status.

Source: National Longitudinal Study of NCLB, Principal Survey.

Schools in restructuring status frequently reported experiencing actions
that NCLB specifies for the "corrective action" stage of school
improvement, such as implementing a new research-based curriculum or
instructional program (70 percent), appointing an outside expert to advise the
school (57 percent), restructuring the internal organization of the school (33
percent), extending the length of the school day (31 percent), significantly
decreasing management authority at the school level (26 percent), and replacing
school staff members relevant to the school's low performance (18 percent).

4. Assistance and Sanctions for Identified Districts


Districts form a central part of NCLB accountability, both because they must meet
AYP targets or face identification for improvement and because NCLB identifies
them as the primary providers of assistance to all identified schools within their
jurisdictions. NCLB requires states to provide certain kinds of technical assistance
to all districts and to make available other assistance to identified districts.
During 2004-05, 41 states had at least one identified district, with a total of 1,511
identified districts (10 percent of all districts) across the states.113

Three-quarters of all districts reported needing assistance in one or
more of the 10 areas where NCLB requires states to provide support,
and most of these districts reported receiving such assistance and that
it met their needs. For the 2003-04 and 2004-05 school years, districts most
commonly reported that they needed assistance in clarifying accountability-system
rules and requirements (50 percent); analyzing student assessment data
to understand program strengths and weaknesses (42 percent); identifying and
implementing effective curricula, instructional strategies, or school reform models
(41 percent); and identifying and implementing strategies to address the
instructional needs of students with disabilities (40 percent). One-quarter of all
districts reported needing technical assistance in five or more of the ten
mandated areas, and from 7 percent to more than a third of districts reported
they had not received assistance in mandated areas where they perceived
needs.114 States reported providing support for identified districts in a variety of
ways. Fourteen states focused support specifically at the district level. Other states
integrated support for identified districts with support for identified schools or
general support for all districts (15 and 9 states, respectively).115

Districts took multiple actions in response to being identified, most
commonly offering or requiring specific professional development for
teachers (80 percent of identified districts). Other actions frequently taken
by identified districts included distribution of test preparation materials to schools
(67 percent), increased district monitoring of instruction and student performance
(61 percent), and professional development for principals (59 percent) (see
Exhibit 40). Just over half of identified districts in 2004-05 (54 percent) reported
that they had taken four or more of the actions listed in Exhibit 40 following their
identification for improvement.116

Exhibit 40
Percentage of Districts Taking Various Actions in Response to
Being Identified for Improvement, 2004-05

Offered/required specific professional development for teachers 80%
Distributed test preparation materials to some or all schools 67%
Increased district monitoring of instruction and student performance at school sites 61%
Offered/required specific professional development for principals 59%
Reallocated fiscal resources to target specific needs (e.g., particular groups of students, subjects, or schools) 51%
Implemented a district-wide curriculum in reading 39%
Developed or revised district content standards 24%
Reorganized the district office staff to increase efficiency or focus on instruction 23%
Implemented a district-wide curriculum in mathematics 17%
Hired a consultant to advise district administrators on effective strategies 11%
Created smaller schools, or schools-within-schools 11%
Changed the budget allocation formula for schools 10%
Implemented new personnel procedures for hiring or assigning principals and teachers 8%

Exhibit reads: In 2004-05, 80 percent of identified districts reported that they had
offered or required specific professional development for teachers in response to
being identified for improvement.

Source: National Longitudinal Study of NCLB, District Survey (n=75 districts).

E. Accountability Under State Initiatives and Title III of NCLB
1. Other State Accountability Initiatives
State accountability systems under NCLB must hold all students and schools (Title
I and non-Title I) to the same academic standards. However, schools in some
states still may experience different forms of accountability because some states
apply NCLB consequences for identification to Title I schools only and because
some states include components in their accountability systems that go beyond
those required under Title I.

About half of the states (24) implemented accountability initiatives that
went beyond those required in NCLB in 2004-05. A key difference between
these other state initiatives and NCLB is that many state initiatives relied upon
growth measures to track progress towards accountability targets. For example,
17 states included a growth measure in their separate initiatives. Eight states
used different measures of student achievement (i.e., norm-referenced tests,
locally determined tests, or tests in subjects other than reading and
mathematics), and two states used different rules for how to include students.
Additionally, some of these other state initiatives used different designations of
school performance (such as using letter grades or identifying "high-improving"
schools) or reported the results of the state initiatives separately from reporting
for NCLB.117

NCLB and other state or district accountability initiatives did not
commonly generate conflicting ratings of school performance for 2004-05,
according to principal reports. Among principals who said their school
was identified for improvement under NCLB, only 2 percent reported that the
school had been designated as high-performing under a state or district
accountability initiative (and 38 percent said the school had been designated as
low-performing under the other accountability initiative). Similarly, among
principals who said their school was not identified for improvement under Title I,
only 3 percent reported that their school had been designated as low-performing
under a state or district accountability initiative. Principals operating schools
under state and district accountability initiatives as well as NCLB gave mixed
reports about the benefits and drawbacks of multiple approaches to
accountability. Just over half of all principals (58 percent) said that the multiple
initiatives gave them a more complete picture of their school's effectiveness,
while just under half (44 percent) said the multiple initiatives resulted in staff
confusion about targets for student achievement.118

2. Accountability Under Title III


As noted earlier, Title III of NCLB, Language Instruction for Limited English
Proficient and Immigrant Students, along with Title I, outlines additional
accountability requirements related to LEP students. Many of these accountability
provisions related to English language proficiency (ELP) among LEP students are
new additions under NCLB, unlike accountability provisions under Title I that build
on similar requirements in the previous reauthorization. Most of these NCLB
accountability requirements apply to Title III subgrantees, which may be districts
or consortia of districts, though states may choose to implement any of the
requirements for other districts in their states as well. The English Language
Proficiency assessment requirements under Title I apply to all districts.

For accountability related to LEP students, states have developed and
implemented ELP standards and assessments. States also have set Annual
Measurable Achievement Objectives (AMAOs), which include targets for student
progress in gaining English language proficiency and targets for student
attainment of English language proficiency, as well as AYP targets for LEP
students under Title I. States began calculating whether districts met AMAO
targets based on 2003-04 testing; that year, more than 4,900 Title III subgrantees
across the 50 states and the District of Columbia served more than four million
LEP students, or about 80 percent of LEP students nationwide.119

Among Title III subgrantees in the 36 states that reported data on
whether their subgrantees had met AMAO targets, 63 percent met AMAO
targets based on 2003-04 testing (1,898 subgrantees). States varied
considerably in the proportion of their subgrantees that met AMAO targets; nine
states had 25 percent or fewer of their subgrantees make AMAO targets, while 13
states had more than 75 percent of subgrantees meet targets. At the state level,
33 states (out of 42 responding) reported meeting their AMAO targets for students
making progress in learning English, while 41 out of 45 met some or all of their
AMAO targets for students' attainment of English language proficiency.9 120

About two-thirds of the states (35 states) calculated AMAOs for 2003-04
testing for Title III districts only, while another 13 states reported
calculating AMAO performance data for all districts with LEP students.
Most of these states also reported the results to their districts (34 and 11 states,
respectively).121 Though NCLB requires that states assess the English language
proficiency of all LEP students, Title III requires states only to calculate whether
Title III districts meet the AMAOs and to report the results to those districts;
states also may choose to calculate and report AMAO results for non-Title III
districts.

Fewer than half of the states had articulated a specific strategy for
providing technical assistance to subgrantees that missed AMAO targets
in 2003-04 testing. Such assistance was not required in 2004-05; instead, it
must be provided beginning in 2005-06 for subgrantees that miss AMAO targets
for a second consecutive year. Twelve states reported they planned to conduct
needs assessments among subgrantees that miss AMAO targets twice, and 10
states reported they would provide technical assistance through existing technical
assistance frameworks in the state.122

Conclusions
Under NCLB, state accountability systems are intended to drive substantial
improvement in student achievement. In the latest school year for which data is
available, 2004-05, 11,530 schools were identified for improvement (13 percent of
all schools); nearly 80 percent were Title I schools. The proportion of schools
identified varied widely across states, but to a large extent this reflects
differences in state accountability systems and not necessarily differences in
school performance. Schools serving large numbers of poor, minority and LEP
students were most likely to be identified. African-American students and
Hispanic students were three times more likely to attend identified schools than
were white students. Although schools can miss making adequate yearly
progress because of the performance of a single subgroup, schools most
commonly missed AYP for the achievement of all students and/or multiple
subgroups.

9
For more information on student achievement on English language proficiency assessments and states’
AMAOs, see the Biennial Evaluation Report to Congress on the Implementation of the State Formula
Grant Program, 2002-2004, English Language Acquisition, Language Enhancement and Academic
Achievement Act (ESEA, Title III, Part A).

States, districts, and schools generally reported pursuing improvement efforts for
identified schools and districts consistent with NCLB requirements, and identified
schools reported receiving more hours of assistance than did non-identified
schools. However, states and districts indicated limited capacity to assist all
identified schools.

Overall, this report’s findings point to widespread implementation of
accountability systems under NCLB and also to limitations in the extent to which
these may reach all low-performing schools. Future reports will examine the
effectiveness of these accountability systems and school improvement efforts in
raising the academic performance of identified schools and their students.

Key Findings on School Choice and Supplemental Educational
Services

How many students are eligible to participate, and how many actually do so?

Nearly three times as many students were eligible to transfer to another
school under the Title I choice option in 2003-04 (3.9 million) as were
eligible to receive supplemental services (1.4 million). However, six times
as many students actually participated in the supplemental services
option (233,000) as participated in the school choice option (38,000).

The number of students participating in the Title I school choice option
more than doubled over the three-year period from 2002-03 to 2004-05,
rising from 18,000 to 45,000 participants, while student participation in
the supplemental services option increased more than five-fold over the
two-year period from 2002-03 to 2003-04.

The number of state-approved supplemental service providers has nearly
tripled over the past two years, rising from 997 in May 2003 to 2,734 in
May 2005. Private firms served the majority of participating students (59
percent) in 2003-04, while school districts and public schools served 40
percent. A growing number and percentage of faith-based organizations
have obtained state approval, rising from 16 providers (2 percent of all
providers) in March 2003 to 250 providers (9 percent) in March 2005, but
they served less than one-half of one percent of student participants in
2003-04.

How and when do districts and schools inform parents of eligible
children about the Title I school choice and supplemental services options?

The timing of parental notification was often too late to enable parents to
choose a new school before the start of the 2004-05 school year. Almost
half of districts notified parents after the school year had already started,
and in these districts this notification occurred, on average, five weeks
after the start of the school year.

How are states monitoring and evaluating the effectiveness of
supplemental service providers?

States report that they are working to develop and implement systems
for monitoring and evaluating the performance of supplemental service
providers, but as of early 2005, 15 states had not established any
monitoring processes and 21 states had not finalized their monitoring
processes; 25 states had not yet established any standards for evaluating
provider effectiveness and none had finalized their evaluation standards.

The most common approaches that states have implemented to monitor
providers are surveying the districts about provider effectiveness (25
states) and using providers’ reports on student progress (18 states). The
most common standard that states have adopted to evaluate the
effectiveness of providers is student achievement on state assessments,
although only one state plans to use a matched control group.

VI. School Choice and Supplemental Educational Services
In Title I schools that have been identified as in need of improvement, NCLB
provides parents with new options for their children, including the option to
transfer to another public school or to receive supplemental educational services
(most commonly, after-school tutoring). The choice and supplemental services
provisions are designed not only to improve educational opportunities for
individual students, but also to provide an incentive for low-performing schools to
improve.

Districts are required to offer students the option to transfer to another school in
the first year that a school is identified for improvement; all students in the school
are eligible for this option, and the district must provide transportation for
participating students. Supplemental educational services are not required until
an identified school misses AYP again (for a third time), and only low-income
students in these schools are eligible to receive the services; the district is not
required to provide transportation.

States must develop criteria for approving supplemental service providers and
must provide school districts with a list of available approved providers in their
area. States also have the responsibility for monitoring the performance of
participating providers.

Districts must notify parents of their school choice and supplemental service
options and disseminate information about school performance and provider
qualifications and effectiveness that parents need to make informed decisions.
Each district that must offer these options must allocate an amount equal to
20 percent of its Title I Part A allocation to provide supplemental services and
transportation for students using the school choice option, unless a lesser amount
is needed to satisfy all requests. In addition, each such district must make
available, for each child receiving supplemental services, an amount equal to the
district’s Title I Part A allocation per low-income student, unless the actual cost of
such services is less than that amount.
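The funding rules in the preceding paragraph amount to two simple "lesser of" calculations. A minimal sketch of that arithmetic follows; the function names and dollar figures are hypothetical illustrations, not drawn from the report or from any actual district data system:

```python
# Hypothetical sketch of the Title I, Part A funding rules described above.
# Both rules are "lesser of" comparisons; all names and figures are invented.

def choice_and_ses_setaside(title_i_allocation, amount_needed):
    """Amount a district must reserve for supplemental services and
    choice-related transportation: 20 percent of its Title I, Part A
    allocation, unless a lesser amount satisfies all requests."""
    return min(0.20 * title_i_allocation, amount_needed)

def per_child_ses_cap(title_i_allocation, n_low_income, actual_cost):
    """Maximum amount made available per child receiving supplemental
    services: the district's Title I, Part A allocation per low-income
    student, unless the actual cost of services is less."""
    per_pupil = title_i_allocation / n_low_income
    return min(per_pupil, actual_cost)

# Example: a district with a $2,000,000 allocation, 1,500 low-income
# students, and $500,000 of estimated demand for the two options.
reserve = choice_and_ses_setaside(2_000_000, amount_needed=500_000)
cap = per_child_ses_cap(2_000_000, 1_500, actual_cost=1_200)
print(reserve)  # 400000.0 -> the full 20 percent set-aside applies
print(cap)      # 1200 -> actual cost is below the ~$1,333 per-pupil cap
```

This mirrors the statute's structure as summarized in the text: the set-aside is a ceiling that shrinks if demand is lower, and the per-child amount is a cap that shrinks if services cost less.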

Key Evaluation Questions for School Choice and Supplemental Educational Services

1. How many students are eligible to participate, and how many actually do so?

2. How and when do districts and schools inform parents of eligible children
about the Title I school choice and supplemental services options?

3. How are states monitoring and evaluating the effectiveness of supplemental
service providers?

A. Eligibility and Participation
Although more students were eligible to participate in the Title I school
choice option, a larger number actually participated in the supplemental
educational services option. Nearly three times as many students were
eligible to transfer to another school under the Title I choice option in 2003-04
(3.9 million) as were eligible to receive supplemental services (1.4 million). More
students are eligible for the choice option because it applies to all identified
schools and all students in those schools are eligible, whereas the supplemental
services option only applies to identified schools that have missed AYP for a third
year and only low-income students in those schools are eligible. Nevertheless, six
times as many students actually participated in the supplemental services option
(233,000) as participated in the school choice option (38,000) in that year (see
Exhibit 41).123
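The eligibility rules just described can be summarized as a small decision sketch. This is illustrative only: the function and field names are invented, not drawn from any actual eligibility system, and the final lines simply reproduce the participation-rate arithmetic reported in Exhibit 41:

```python
# Illustrative sketch of the eligibility rules summarized above.
# All names are hypothetical; this is not an actual NCLB data system.

def eligible_for_choice(school_identified: bool) -> bool:
    """Every student in a school identified for improvement may transfer."""
    return school_identified

def eligible_for_supplemental_services(school_identified: bool,
                                       years_missing_ayp: int,
                                       student_low_income: bool) -> bool:
    """Supplemental services apply only in identified schools that have
    missed AYP for a third year, and only to low-income students there."""
    return school_identified and years_missing_ayp >= 3 and student_low_income

# Participation rates implied by the Exhibit 41 figures for 2003-04:
print(round(38_000 / 3_946_000 * 100))   # 1 (percent, school choice)
print(round(233_000 / 1_377_000 * 100))  # 17 (percent, supplemental services)
```

The narrower eligibility pool for supplemental services, combined with its much higher take-up rate, is what produces the pattern the paragraph describes.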

Exhibit 41
Student Eligibility and Participation for
Title I School Choice and Supplemental Services, 2003-04

School Choice Supplemental Services


Number eligible 3,946,000 1,377,000
Number participating 38,000 233,000
Percent participating 1% 17%

Exhibit reads: More than 3.9 million students were eligible for Title I school choice
in 2003-04, while 1.4 million were eligible for supplemental services.
Sources: Study of State Implementation of Accountability and Teacher Quality Under NCLB; National
Longitudinal Study of NCLB, District and Principal Surveys. Estimates of eligible students are based on
data reported by 51 states. Estimates of participating students are based on an n of 109 responding
schools for school choice and 92 districts for supplemental services.

The number of students participating in the Title I school choice option
more than doubled over the three-year period from 2002-03 to 2004-05,
rising from 18,000 to 45,000 participants (see Exhibit 42). Student
participation in the supplemental services option increased more than
five-fold over the two-year period from 2002-03 to 2003-04, rising from
42,000 to 233,000 participants. Data on supplemental services participants is
available only through 2003-04 because the NLS-NCLB survey was administered
in fall 2004 and the total number of supplemental services participants is usually
not known until later in the school year (because students may begin
supplemental services as late as the spring, whereas school choice transfers
typically occur before or near the start of the school year).

Exhibit 42
Number of Students Participating in
Title I School Choice and Supplemental Services

                           2002-03     2003-04     2004-05
School Choice               18,000      38,000      45,000
Supplemental Services       42,000     233,000         --

Exhibit reads: The number of students participating in Title I school choice rose
from 18,000 in 2002-03 to 45,000 in 2004-05.

Sources: Study of Title I Accountability Systems and School Improvement Efforts, District Survey
(2002-03); National Longitudinal Study of NCLB, District and Principal Surveys (2003-04 and 2004-05).
School choice estimates are based on an n of 247 districts in 2002-03, 109 schools in 2003-04, and
121 schools in 2004-05. Supplemental services estimates are based on an n of 90 districts in 2002-03
and 92 districts in 2003-04.

It should be noted that these longitudinal comparisons of participation trends are
based on two different studies that collected data from different samples using
different methods (for example, the earlier TASSIE study collected data on
choice participation from school districts, while the NLS-NCLB study collected this
data from school principals).

The number of schools where supplemental services were offered tripled
from 2002-03 to 2003-04, while the number where Title I school choice
was offered increased from 5,100 in 2002-03 to 6,200 in 2004-05 (see
Exhibit 43). Title I school choice was offered in about 6,200 schools and 1,800
districts in 2004-05, and supplemental services were offered in 2,500 schools
and 500 districts in 2003-04. Most districts required to offer supplemental
services reported that they did offer such services (89 percent in 2003-04).

Exhibit 43
Number of Schools Where Title I School Choice
and Supplemental Services Were Offered

                           2002-03     2003-04     2004-05
School Choice                5,100       4,600       6,200
Supplemental Services          800       2,500         --

Exhibit reads: The number of schools where supplemental services were offered
rose to 2,500 in 2003-04, while the number of schools where choice was offered
grew to 6,200 in 2004-05.

Sources: Study of Title I Accountability Systems and School Improvement Efforts, District Survey
(2002-03 and 2003-04); National Longitudinal Study of NCLB, Principal Survey (2004-05). School
choice estimates are based on an n of 314 districts in 2002-03, 327 districts in 2003-04, and
308 schools in 2004-05. Supplemental services estimates are based on an n of 71 districts in
2002-03 and 206 districts in 2003-04.
More than one-third (39 percent) of districts required to offer the school
choice option in 2004-05 did not do so, but often such districts had no
non-identified schools in the district to which students could transfer.
Among districts that were required to offer school choice in 2004-05, 20 percent
reported that having no non-identified schools within the district, either because
there was only one school per grade level or because all schools in the district
were identified for improvement, was a major challenge to implementing Title I
school choice. Some districts pointed to a lack of space in non-identified schools
(25 percent) or an inability to negotiate agreements with other districts to
receive students who wished to transfer (16 percent) as major challenges.124

Fifty-eight percent of districts with high schools identified for improvement
reported that they were not offering the school choice option at the high school
level, as did 46 percent at the middle school level and 30 percent at the
elementary level.125 Over three-fourths (77 percent) of the nation’s school
districts with high schools have only one high school, while 67 percent of districts
with middle schools have only one middle school and 53 percent of districts with
elementary schools have only one elementary school.126

Nationwide, states reported approving a total of 2,734 supplemental
service providers as of May 2005, almost three times as many as had
been approved two years earlier, in May 2003, when the number was 997.127
Private non-profit and for-profit organizations accounted for 76 percent of
approved providers, up from 60 percent of all providers in May 2003. A growing
number and percentage of faith-based organizations have obtained state
approval, rising from 18 providers (2 percent of all providers) in May 2003 to 249
(9 percent) in May 2005. School districts and public schools accounted for 17
percent of providers, down from 33 percent two years earlier.128 However, state
approval does not guarantee that a provider will actually serve students. Faith-based
providers served less than one-half of one percent of student participants in
2003-04, although they accounted for 6 percent of approved providers in May 2004
(see Exhibit 44). In contrast, the “market share” garnered by districts and public
schools (40 percent of participating students) was much higher than their share of
state-approved providers (25 percent) might suggest. Nevertheless, private
organizations served a majority of participating students (59 percent) in 2003-04;
about one-third of participants were served by national for-profit companies
(34 percent), while 12 percent were served by other for-profit companies and
13 percent by community-based organizations. Colleges and universities accounted
for a small proportion of approved providers (2 percent) and an even smaller share
of participants (less than one percent). Charter schools also served less than one
percent of participants.

Exhibit 44
Supplemental Service Providers:
Share of Providers and Participants, by Provider Type, 2003-04

                                Percent of            Percent of
                                Approved Providers    Participating Students
All Private                          70%                   59%
Faith-Based                           6%                    0%
Districts and Public Schools         25%                   40%
Colleges and Universities             2%                    0%

Exhibit reads: Private providers accounted for 70 percent of state-approved
providers in May 2004 and 59 percent of participating students during the
2003-04 school year.

Sources: Policy and Program Studies Service review of State Education Agency Web sites, May 2004;
National Longitudinal Study of NCLB, Principal Survey (2003-04). Percentages of providers are based
on data reported by 51 states. Percentages of participating students are based on an n of 71 districts.

The amount of funding that participating districts reported allocating as
the maximum amount for each child receiving supplemental services
was $1,434 in 2004-05. This amount did not vary significantly by district
urbanicity or size. In case studies of nine districts implementing supplemental
services, the districts varied widely in the percent of the Title I, Part A, allocation
that they opted to set aside for supplemental services: five districts reserved an
amount equal to 15 percent or more of their Title I allocation, two districts
reserved 10 percent, and the remaining two districts reserved only 2 to 6
percent.129

B. Parental Notification
Districts that were required to offer Title I school choice and
supplemental services in identified schools most frequently reported
notifying parents about their choice options through written notification
materials (60 percent and 88 percent, respectively), but districts also
reported using other strategies to communicate with parents (see Exhibit
45). The most common of these other approaches was to hold meetings with
interested parents about their school choice (37 percent) and supplemental
services options (72 percent). Notices in district or school newsletters, enrollment fairs or open
houses, and notices in public newspapers were other approaches used. The
percentage of students in districts using each notification strategy was always
higher, sometimes considerably higher, than the percentage of districts would
suggest, indicating that large districts were more likely to use each type of
notification strategy. For example, districts providing written notification enrolled
80 percent of the students in districts required to provide the school choice
option, although they accounted for only 60 percent of such districts.

Exhibit 45
District Strategies for Communicating with Parents
About Title I School Choice and Supplemental Services Options, 2004-05 130

                                                      School Choice         Supplemental Services
                                                      (n=170 districts)     (n=101 districts)
                                                      Percent of  Percent   Percent of  Percent
                                                      districts   students  districts   students
Written notification                                     60%        80%        88%        92%
Individual meetings with interested parents              37%        62%        72%        75%
Notices in district or school newsletters                28%        53%        61%        66%
Enrollment fairs or open houses to provide
  information about alternate schools and providers      14%        38%        48%        70%
Notices in public newspapers                             18%        39%        20%        42%
Public service announcements                              7%        29%        17%        37%
Outreach through a local community partner
  (e.g., Parent Information & Resource Center)            7%        18%        17%        38%
Other                                                    19%        21%        24%        35%

Exhibit reads: Districts that were required to offer Title I school choice most
frequently reported notifying parents about their choice options through written
materials (60 percent). Districts providing written notification about the school
choice option enrolled 80 percent of all students in districts required to offer this
choice option.

Source: National Longitudinal Study of NCLB, District Survey.

However, the timing of parental notification was often too late to enable
parents to choose a new school before the start of the school year. Only
29 percent of affected districts notified parents about the school choice option
before the beginning of the 2004-05 school year. Another 21 percent notified
parents at the beginning of the school year, which would have given parents very
little time to make important decisions about which school their child should
attend. The remaining 50 percent of districts notified parents after the school
year had already started; in these districts, notification occurred, on average,
five weeks after the start of the school year.131 Districts that notified parents
before the start of the school year accounted for 52 percent of the students in
districts offering Title I school choice.

One reason for the delay in notifying parents about their choice options may be
that some states did not provide final determinations about schools’ AYP and
identification status until late in the summer or, in some cases, after the school
year had begun (see Chapter V).

In interviews with a small sample of parents10 in schools where supplemental
services were being offered in 2003-04, about half of the parents interviewed said
they had received enough information to choose good providers for their children,
while nearly as many reported that they knew little or were confused about the
services available to them. The parents often indicated that teacher and principal
recommendations were important factors in their decision whether to enroll their
child in supplemental services and in choosing a provider. Parents interviewed
also said that location and the availability of transportation were critical issues in
selecting a provider.132

C. Monitoring and Evaluation of Supplemental Service Providers

States report that they are working to develop and implement systems
for monitoring the performance of supplemental service providers, but
as of early 2005, 15 states had not established any monitoring
processes and 21 states had not finalized their monitoring processes.
The most common approaches that states have implemented to monitor
providers are surveying the districts about provider effectiveness (25 states) and
using providers’ reports on student progress (18 states). Fewer states reported
conducting on-site evaluations (14 states) or having districts report student-level
data to the state (9 states). Three states were maintaining databases of student-
level achievement data to monitor provider effectiveness (Louisiana, Maryland,
and New Jersey); four states were planning to do so (Colorado, Florida, Oklahoma,
and Tennessee).133

As of early 2005, half of the states had not yet established any standards for
evaluating provider effectiveness and none had finalized their evaluation
standards.

In states with evaluation standards, a variety of measures would be used.
Seventeen states say they will evaluate provider effectiveness based on student
achievement on state assessments, although only one of these plans to use a
matched control group. Thirteen states plan to allow the use of provider-
developed tests, and 10 states will use other measures, such as student grades,
homework completion, or school- or teacher-administered tests. Seventeen
states plan to measure parent or student satisfaction with the services.134

In interviews with a small sample of parents whose children received
supplemental services in 2003-04, many of the parents interviewed reported that
they were satisfied with the services their children had received and believed that
after-school tutoring had helped their children. Some parents said that their
children's grades had improved, while others pointed to improved mathematics or
reading skills. However, some parents reported that they were disappointed with
the services and saw no improvement in their children's reading or mathematics
skills.135

10
Information from these case studies is not nationally representative and these findings cannot be
generalized to all parents nationwide. Rather, they provide indications of issues and experiences
reported by some parents in order to supplement data reported by districts and schools. A subsequent
parent survey conducted through the National Longitudinal Study of NCLB in spring and summer 2005
will provide similar information for a much larger sample of parents, although that survey also will not be
nationally representative; the NLS-NCLB parent survey will be included in the final report on the National
Assessment of Title I.

Conclusions
NCLB requires Title I schools that have been identified for improvement to offer
options for parents to transfer their children to another public school or to obtain
supplemental educational services, most typically after-school tutoring. Although
many more students are eligible to use the school choice option, the early
experience with these provisions indicates that after-school tutoring is by far the
more popular option. In the 2003-04 school year, six times as many students
participated in the supplemental services option (233,000) as participated in the
school choice option (38,000). Stated differently, only one percent of eligible
students changed schools under the NCLB provision, and 17 percent of eligible
students enrolled to receive supplemental services.
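A quick back-of-the-envelope check ties these figures together; the participant counts and take-up rates are from the text above, while the eligible-population totals are derived from them rather than reported:

```python
# Back-of-the-envelope check of the participation figures cited above.
choice_participants = 38_000       # students who changed schools, 2003-04
services_participants = 233_000    # students receiving supplemental services

# Ratio of supplemental-services to school-choice participants
ratio = services_participants / choice_participants
print(f"services/choice participation ratio: {ratio:.1f}")  # ~6.1, i.e. "six times as many"

# Implied eligible populations, derived (not reported) from the stated take-up rates
choice_takeup = 0.01     # 1 percent of eligible students changed schools
services_takeup = 0.17   # 17 percent of eligible students enrolled in services
print(f"implied choice-eligible students:   {choice_participants / choice_takeup:,.0f}")
print(f"implied services-eligible students: {services_participants / services_takeup:,.0f}")
```

The derived totals also show why the two percentages describe different bases: far more students were eligible for the transfer option than for supplemental services.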

In addition to written notification, affected districts report using a variety of other
strategies to inform parents about their Title I school choice and supplemental
services options. However, this notification often occurred after the school year
had begun, which may be too late for parents to choose a new school.

Future reports will examine additional issues related to choice and supplemental
services—what factors parents consider when making decisions about whether to
choose and what to choose, the characteristics of participating students, and
student achievement outcomes associated with participation in the Title I school
choice or supplemental services options.

Key Findings on Teacher Quality and Professional Development

How have states implemented the requirements to define
"highly qualified teacher" and develop a "high objective uniform
state standard of evaluation" (HOUSSE)?

In all but two states, teachers may take exams to demonstrate their
subject-matter competency for the purposes of meeting the NCLB highly
qualified teacher requirement, frequently one of the Praxis II subject
assessments (41 states). States vary considerably in the scores that they
require teachers to obtain in order to be certified to teach and/or to be
deemed highly qualified under NCLB.

Most states (47) allowed veteran teachers to demonstrate their subject-
matter competency through a HOUSSE as of spring 2005. The most
common type of HOUSSE offered involved a system wherein teachers
were allowed to accumulate a state-determined number of points in order
to earn highly qualified status (29 states).

How many teachers meet the NCLB requirement to be "highly
qualified"?

The large majority of teachers across the country have been designated
by their states as “highly qualified” under NCLB. According to state-
reported data for 42 states, 86 percent of classes were taught by highly
qualified teachers in 2003-04. Principal and teacher reports for 2004-05
provide somewhat lower estimates of the percentage of teachers who are
highly qualified. For example, 74 percent of all teachers reported that
they were considered highly qualified under NCLB, although 23 percent
responded that they did not know if they were considered highly
qualified.

To what extent are teachers participating in professional
development activities that are sustained, intensive, and focused
on instruction?

Nearly all elementary and secondary teachers of reading and
mathematics participated in some professional development that focused
on strategies for teaching reading or math, but fewer than one-quarter
participated in such training for more than 24 hours over the entire 2003-
04 school year and summer.

How many paraprofessionals meet the NCLB qualification
requirements?

According to principal reports, only 63 percent of Title I instructional aides
were "qualified" under NCLB during the 2004-05 school year; for most of
the remaining aides (26 percent), principals reported that they did not
know their qualifications status. However, 87 percent of Title I
instructional aides indicated that they had at least two years of college
(and/or an associate's degree) or had passed a paraprofessional
assessment.

VII. Teacher Quality and Professional Development
Although there has not been an extensive examination of the relationship
between instructors’ content knowledge and students’ achievement, some
research suggests that teachers who have strong preparation in the subjects they
teach are more effective than teachers without strong subject-area preparation.136
Because many policymakers have been concerned that some teachers graduate
from their teacher preparation programs without adequate subject-matter
preparation and that other teachers are assigned to teach subjects for which they
have not been certified to teach, NCLB requires all teachers of core academic
subjects to be “highly qualified” by the end of the 2005-06 school year. NCLB
specifies the core academic subjects to be English, reading or language arts,
mathematics, science, foreign languages, civics and government, economics,
arts, history, and geography.137

To be “highly qualified” each teacher must have: (1) a bachelor’s degree; (2) full
state certification; and
(3) demonstrated competency, as defined by the state, in each core academic
subject he or she teaches. The law requires new elementary teachers to
demonstrate subject-matter competency by passing a rigorous state test; new
secondary teachers must either pass a subject-matter test or have a college
major (or coursework equivalent), advanced degree, or advanced certification in
the subject(s) they plan to teach. For veteran teachers, the law allows each state
to create its own “high, objective, uniform state standard of evaluation” (HOUSSE)
to measure subject-matter competency.

NCLB makes professional development a key strategy for improving teachers'
skills and effectiveness. Each district that receives Title I funds must spend at
least 5 percent of its Title I allocation on professional development. The quality of
that professional development will be critically important if it is to have the
intended effects of improving instruction and student learning.

NCLB also increased the minimum qualification requirements for Title I-funded
paraprofessionals who provide instructional services. Specifically, NCLB requires
that aides providing instructional services must have at least two years of college
or an associate’s degree, or they must meet a rigorous standard of quality
through a formal state or local assessment. All new Title I instructional aides
must be qualified upon hire, and all existing Title I instructional aides must
become qualified by the end of the 2005-06 school year.

Key Evaluation Questions for Teacher Quality and Professional Development

1. How have states implemented the requirements to define "highly qualified
teacher" and to develop a "high objective uniform state standard of
evaluation" (HOUSSE)?

2. How many elementary teachers meet the NCLB requirement to be “highly
qualified” as defined by their state? What percentage of secondary classes
in core academic subjects are taught by teachers who are highly qualified?
How does this vary across states, grade levels, and school poverty levels?

3. To what extent are teachers participating in professional development
activities that are sustained, intensive, and focused on instruction?

4. How many paraprofessionals meet the NCLB qualifications requirements?
What are states, districts, and schools doing to help paraprofessionals meet
these requirements?

A. State Definitions of Highly Qualified Teachers


By December 2004, all states had drafted definitions of highly qualified
teachers and had either adopted a HOUSSE alternative for existing
teachers (that is, those who were not new to the profession) or made
the decision to not offer such an alternative at that time. On state Web
sites, all but three states discuss the application of the “highly qualified”
requirement to teachers of the “core academic subjects.” Twenty-nine states
gave more specific definitions of the core subject areas than those provided in the
NCLB statute. For example, six states provided more detail on the specific
“science” fields for which teachers must meet the highly qualified requirement,
and 25 states outlined which disciplines within the “arts” are considered core
academic subjects.138

Most states meet the requirement to test the content knowledge of new
teachers through the Praxis II subject assessments developed by the
Educational Testing Service (ETS).139 Based on an analysis of state Web sites
and the ETS Web site in early 2005, of the 41 states that use one or more of the
various Praxis II examinations, 25 use the Praxis II exams alone and 16 list the
Praxis II exams as well as other exams. Ten states do not list the Praxis II exams
but list other exams, such as tests developed for use in specific states (e.g., the
Massachusetts Test for Educator Licensure).140

The two states (Iowa and Montana) that do not identify any exams that
elementary or secondary teachers can take in order to demonstrate subject-
matter competency also do not require teachers to pass any subject-matter
exams in order to become initially certified to teach.141

States vary considerably in the qualifying scores that they use on Praxis
II subject assessments for the purposes of initial teacher certification
and for determining whether teachers are “highly qualified” under NCLB
(see Exhibit 46). States set different qualifying scores (often called “cut scores”
or “passing scores”) for reasons involving each state’s individual context and
challenges; each state assembles a panel of experts that reviews the test and
recommends a cut score to the state licensing board or state department of
education.142 Twenty-nine of the 35 states that use the Praxis II Mathematics
Content Knowledge exam set their cut scores below the national median and nine
states set theirs below the 25th percentile (ranging from the 14th to the 22nd
percentile).143 In contrast, four states set the cut score above the national
median, and one of those four states set its cut score at the 74th percentile.
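The percentile comparisons above can be illustrated by interpolating between the percentile landmarks published in Exhibit 46 for the Praxis II Mathematics Content Knowledge exam (10th percentile 111, 25th 127, median 143, 75th 156, 90th 171). The linear interpolation below is a rough illustrative approximation, not the method used by ETS or the states:

```python
# Rough percentile lookup for a Praxis II Mathematics Content Knowledge cut
# score, linearly interpolating between the percentile landmarks reported in
# Exhibit 46. Illustrative approximation only; not an official ETS conversion.
landmarks = [(10, 111), (25, 127), (50, 143), (75, 156), (90, 171)]

def approx_percentile(score):
    """Estimate the national percentile rank of a given scaled score."""
    for (p_lo, s_lo), (p_hi, s_hi) in zip(landmarks, landmarks[1:]):
        if s_lo <= score <= s_hi:
            # Linear interpolation within the bracketing percentile band
            return p_lo + (p_hi - p_lo) * (score - s_lo) / (s_hi - s_lo)
    return None  # score falls outside the published 10th-90th percentile range

print(approx_percentile(118))  # ~16.6: a cut score of 118 sits well below the 25th percentile
print(approx_percentile(147))  # ~57.7: a cut score of 147 sits just above the national median
```

This kind of interpolation is only as fine-grained as the published landmarks, but it makes clear how far apart the states' qualifying scores fall on the national distribution.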

As far as could be determined from extant sources (state Web sites and the ETS
Web site), states are, with the exception of Alabama, using the same cut scores
for both highly qualified determinations and initial teacher certification
requirements. (The scores listed for Alabama are used for the highly qualified
designation only; the state is in the process of determining the cut scores for
initial certification.) Note that this analysis did not distinguish between the use of
exams for teachers at different grade levels; in particular, states may vary in
whether middle school teachers take a general elementary examination or a
specific subject-matter examination.144

Nearly all states (47) officially offered a HOUSSE option to their veteran
teachers as of spring 2005 (see Exhibit 47).145 The five states that did not
offer a HOUSSE option at that time were Colorado, the District of Columbia,
Mississippi, Missouri, and Puerto Rico.

Exhibit 46
State Definitions of "Highly Qualified Teacher": Use of Praxis II Exams and Cut Scores, 2004-05

Columns: State | Uses At Least One Exam from Praxis II Series for Some/All Teachers | Praxis II: Elementary Education Content Knowledge | Praxis II: English Language, Literature and Composition: Content Knowledge | Praxis II: Mathematics Content Knowledge

Total Number of States Using Praxis II Subject Assessments: 41 | 20 | 36 | 36
Alabama X 137 151 118
Alaska X 143 158 146
Arkansas X 159 116
California X
Colorado X 147 162 156
Connecticut X 172 137
Delaware X 159 121
District of Columbia X 142 141
Georgia X 168 136
Hawaii X 164 136
Idaho X 143 158 119
Indiana X 153 136
Kansas X 165 137
Kentucky X 148 160 125
Louisiana X 150 160 125
Maine X 145 169 126
Maryland X 142 164 141
Minnesota X 140 148 124
Mississippi X 153 157 123
Missouri X 158 137
Nevada X 150 144
New Hampshire X 164 127
New Jersey X 141 162 137
New Mexico X
North Carolina X Composite with other tests
North Dakota X 151 139
Ohio X 143 167 139
Oklahoma X
Oregon X 159 138
Pennsylvania X 160 136
Rhode Island X 145
South Carolina X 162 131
South Dakota X 137 154 124
Tennessee X 140 157 136
Utah X 150 168 138
Vermont X 148 172 141
Virginia X 143 172 147
Washington X 141 158 134
West Virginia X 155 133
Wisconsin X 147 160 135
Wyoming X
National Median Score 163 178 143
Range from 25th to 75th Percentile 150-175 166-188 127-156
Range from 10th to 90th Percentile** 139-185 156-196 111-171

Source: Educational Testing Service (n=41 states).

The most common type of HOUSSE option offered in spring 2005
involved a system wherein teachers were allowed to accumulate a state-
determined number of points in order to earn a highly qualified status
(29 states). Most states allowed points to be earned retroactively for such
things as successful completion of certain college courses (28 states) or
publishing articles and/or receiving teaching awards or honors (23 states). If
teachers could not document successful completion of college courses or
professional development in their specific content(s), they were required to earn
the points going forward by successfully completing college courses or
professional development activities. Four states (Florida, Georgia, Minnesota,
and Oklahoma) allowed teachers to earn some points for evidence of improved
student achievement.146

Fifteen states allowed teachers to earn up to 50 percent of their HOUSSE
points for a specified number of years of prior teaching experience in
their subject area(s). This is the maximum weight that states are permitted to
give to prior teaching experience under HOUSSE. Eleven additional states
allowed teachers to earn from 24 to 49 percent of points for number of prior
years of teaching experience.
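A point-system HOUSSE of the kind described above can be sketched as a simple tally. The 100-point target and per-item point values below are hypothetical; only the ceiling on experience points (at most 50 percent of the required total) comes from the text:

```python
# Illustrative sketch of a point-system HOUSSE tally. The 100-point target and
# the per-year point value are hypothetical; the cap on experience points
# (half of the target) mirrors the maximum weight states may give to prior
# teaching experience under HOUSSE.
POINTS_REQUIRED = 100
EXPERIENCE_CAP = POINTS_REQUIRED // 2  # experience may supply at most half

def housse_points(years_teaching, course_points, award_points,
                  points_per_year=10):
    """Total HOUSSE points, with prior-experience points capped at 50 percent."""
    experience = min(years_teaching * points_per_year, EXPERIENCE_CAP)
    return experience + course_points + award_points

def is_highly_qualified(total_points):
    return total_points >= POINTS_REQUIRED

# A veteran with 8 years of experience: only 50 of the 80 experience points
# count, so the remaining 50 must come from coursework, professional
# development, publications, or awards.
total = housse_points(years_teaching=8, course_points=40, award_points=15)
print(total, is_highly_qualified(total))  # 105 True
```

The cap is the design point worth noting: however long a teacher has taught, at least half of the required evidence must come from sources other than years of experience.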

Exhibit 47
Number of States Offering Various Types of HOUSSE Options
for Determining Whether Veteran Teachers Are "Highly Qualified" Under NCLB

State offers a HOUSSE option: 47
 Uses a point system for HOUSSE: 29
 Uses teacher performance evaluation as a HOUSSE: 7
 Uses teacher certification systems (or the on-going evaluation components of those systems) as an official HOUSSE: 8
 Uses a HOUSSE that provides teachers a menu of options for demonstrating "highly qualified" status: 5
State does not offer a HOUSSE option: 5

Exhibit reads: Of the 47 states offering a HOUSSE option, 29 use a point system.

Note: Two states (Pennsylvania and Tennessee) are counted twice because they use more than one
of these approaches.

Source: Study of State Implementation of Accountability and Teacher Quality Under NCLB (n=52
states).

Seven states used a performance evaluation as their HOUSSE option.
South Carolina’s performance evaluation appeared to be extensive, taking place
over the course of an entire semester. The evaluation assessed content
knowledge, effective use of instructional strategies, and the monitoring of student
performance. Two-member teams of highly qualified teachers conducted the
evaluations. In contrast, at least two states appeared to use performance
evaluations for HOUSSE that may have been in place prior to the passage of
NCLB. These consisted of one or two classroom observations conducted by a
supervisory teacher or principal over the course of one school year.147

Tennessee stood out among the five states that provided teachers a menu of
options for demonstrating their highly qualified status. Tennessee allowed a
teacher to be deemed highly qualified if the teacher had demonstrated improved
student achievement on state tests of reading and mathematics over three
consecutive years.

Eight states—Idaho, Montana, Nebraska, Oregon, Pennsylvania, South Dakota,
Washington, and Wisconsin—used their current, initial teacher certification
systems as their official HOUSSE options as of spring 2005.148 These states
reported that their certification requirements currently contain high standards of
subject-area expertise.149

B. Teachers’ Highly Qualified Status


The large majority of teachers across the country have been designated
as “highly qualified” under NCLB. According to state-reported data for 42
states, 86 percent of elementary and secondary classes were taught by highly
qualified teachers in 2003-04 (see Exhibit 48). Most states reported that the large
majority (90 percent or more) of classes were taught by highly qualified teachers;
only eight states reported that this percentage was below 75 percent; and only
California and Tennessee reported that it was below 60 percent.

Exhibit 48
Percentage of Classes Taught by Teachers Who Are Highly Qualified Under NCLB,
as Reported by States, 2003-04

Total: 86%

Alabama 77%               Montana 99%
Alaska --                 Nebraska 91%
Arizona 96%               Nevada 64%
Arkansas --               New Hampshire 73%
California 52%            New Jersey 94%
Colorado 91%              New Mexico 67%
Connecticut 99%           New York 92%
Delaware 73%              North Carolina 85%
District of Columbia --   North Dakota 77%
Florida 89%               Ohio 93%
Georgia 97%               Oklahoma 98%
Hawaii 73%                Oregon 87%
Idaho 97%                 Pennsylvania 97%
Illinois 98%              Puerto Rico --
Indiana 96%               Rhode Island 76%
Iowa 95%                  South Carolina 75%
Kansas 95%                South Dakota 93%
Kentucky 95%              Tennessee 58%
Louisiana 90%             Texas 92%
Maine 90%                 Utah 69%
Maryland 67%              Vermont --
Massachusetts 94%         Virginia 95%
Michigan 92%              Washington 99%
Minnesota 99%             West Virginia 96%
Mississippi 93%           Wisconsin 98%
Missouri 96%              Wyoming 99%

Note: Forty-seven states provided data for this table, but the national estimate is based on 42 states
that reported both a numerator and a denominator for calculating the percentage of classes taught
by highly qualified teachers.

Source: Consolidated State Performance Reports.

Exhibit 49
Percentage of Teachers Reporting that They Are Considered
Highly Qualified under NCLB, 2004-05

                                       Highly     Not Highly   Don't
                                       Qualified  Qualified    Know
Elementary Teachers (n=4,059)            75%          2%        23%
Secondary English Teachers (n=1,787)     74%          6%        20%
Secondary Math Teachers (n=1,627)        68%          8%        24%
Special Education Teachers (n=1,158)     52%         15%        29%

Exhibit reads: Seventy-five percent of elementary teachers reported that they were
considered highly qualified under NCLB, while 2 percent said they were considered not highly
qualified, and 23 percent said they did not know their highly qualified status.

Note: The percentages for "special education teachers" do not total 100 because special educators
were offered a fourth response category – "do not need to meet highly qualified requirement."
Four percent of special educators gave this response.

Source: National Longitudinal Study of NCLB, Teacher Survey.

Compared with the state-reported data, principal and teacher reports provide
somewhat lower estimates of the percentage of teachers who are highly
qualified; however, this appears to be because they often did not know teachers' highly
qualified status. For example, 74 percent of regular classroom teachers11
reported in 2004-05 that they were considered highly qualified under NCLB, while
principals reported that 82 percent of elementary teachers were highly qualified
and that 76 percent of secondary classes were taught by highly qualified
teachers.150 However, 23 percent of classroom teachers responded that they did
not know their highly qualified status, and principals often chose to skip a similar
survey item, particularly for special education teachers and ESL and bilingual
teachers, which may suggest that they too are often unsure about their teachers’
status. A statistical analysis of the background characteristics of the teachers
who did not know their highly qualified status found that 92 percent of such
teachers were very similar in their educational and professional qualifications to
those teachers who reported that they were indeed highly qualified.151

Middle school teachers and special education teachers were more likely
to report that they were considered not highly qualified under NCLB
than were elementary teachers or high school teachers. For example,
although 6 percent of secondary English teachers reported in 2004-05 that they
were not highly qualified (see Exhibit 49), middle school English teachers were
twice as likely as high school English teachers to say they were not highly
qualified (8 percent vs. 4 percent). Similarly, 12 percent of middle school
mathematics teachers said they were not highly qualified, compared with 5
percent of high school mathematics teachers.152 These findings are not
surprising, since middle school teachers are less likely to have majors in English
or mathematics than their high school counterparts. For example, in a 1999-2000
survey, only 28 percent of middle school mathematics teachers reported that they
had a major in mathematics, compared with 79 percent of high school
mathematics teachers.153 Few elementary teachers (2 percent) reported that they
were not highly qualified. However, 15 percent of special education teachers said
they were not highly qualified.154

Principals reported similar but slightly higher rates of highly qualified teachers for
all categories of teachers. For example, according to principals, 82 percent of
elementary teachers were highly qualified and 77 percent of secondary
mathematics classes were taught by highly qualified teachers.

Teachers in schools with high concentrations of poor and minority
students were somewhat more likely to report that they were considered
not highly qualified under NCLB.155 In high-poverty schools, 5 percent of
elementary teachers and 12 percent of secondary teachers reported in 2004-05
that they were considered not highly qualified under NCLB, compared with one
percent in low-poverty elementary schools and 3 percent in low-poverty
secondary schools. In high-minority schools, 5 percent of elementary teachers
reported that they were not highly qualified, as did 9 percent of secondary
teachers.156
11
Teacher survey data used in this report are from the National Longitudinal Study of NCLB, which is not
representative of all teachers; rather, the study sampled elementary classroom teachers, secondary
English teachers, and secondary math teachers. For simplicity, we use the term “teachers” to refer to
these data. The study also surveyed a sample of special education teachers (both elementary and
secondary), and data for these teachers are reported separately.

Students in schools that were identified for improvement for 2004-05
were more likely to be taught by teachers who were not highly qualified
under NCLB than were students in non-identified schools (see Exhibit 50).
For example, only one percent of elementary teachers in non-identified schools
said they were considered not highly qualified, compared with 5 percent in
schools that were in the first or second year of being identified for improvement,
8 percent in schools in corrective action, and 6 percent in schools in restructuring.
At the secondary level, 15 percent of teachers in schools identified for
restructuring said they were considered not highly qualified, as did 12 percent of
teachers in schools in the first or second year of improvement status.157

Exhibit 50
Percentage of Teachers Reporting That They Are
Considered Not Highly Qualified Under NCLB,
by School Improvement Status, 2004-05

                                               Elementary   Secondary
                                               Teachers     Teachers
Not Identified for Improvement                     1%           4%
Identified for Improvement (Year 1 or Year 2)      5%*         12%*
Identified for Corrective Action                   8%*          8%
Identified for Restructuring                       6%*         15%*

Exhibit reads: In schools that were not identified for improvement, one percent of
elementary teachers reported that they were considered to not be highly qualified
under NCLB.

* Indicates that percentage was significantly different from percentage for non-identified schools
(p<.05).

Source: National Longitudinal Study of NCLB, Teacher Survey (n=4,051 elementary teachers and
3,218 secondary teachers).

Reasons for teachers being considered not highly qualified under NCLB
differed by school grade level. Elementary teachers most commonly
reported that the reason was lack of full certification, while secondary
teachers were more likely to report that they had not demonstrated
subject-matter competency (see Exhibit 51). About one-third (35 percent) of
elementary teachers who said that they were not highly qualified reported that
this was because they lacked full certification, compared with only 16 percent of
secondary English teachers and 19 percent of secondary mathematics teachers.
Over half (59 percent) of secondary mathematics teachers who were not highly
qualified indicated that lack of subject-matter competency in mathematics was
the reason, while only 18 percent of secondary English teachers who were not
highly qualified indicated that lack of subject-matter competency in English was
the reason.

Exhibit 51
Reasons Why Teachers Were Considered
Not Highly Qualified Under NCLB, 2004-05

                                               Elementary   Secondary English   Secondary Math
                                               Teachers     Teachers            Teachers
                                               (n=135)      (n=152)             (n=243)
Lack full certification                           35%           16%                 19%
Have not demonstrated subject-matter
competency in elementary curriculum,
in English, or in math                            31%           18%                 59%
Have not demonstrated subject-matter
competency in another subject they teach          14%           13%                 --

Exhibit reads: Thirty-five percent of elementary teachers who said they
were considered not highly qualified under NCLB did not have full
certification.

Note: Elementary teachers who reported that they were not highly qualified due to "lack of full
certification" represented fewer than one percent of all elementary teachers nationally.

Source: National Longitudinal Study of NCLB, Teacher Survey.

Teachers who reported that they had not met the NCLB highly qualified
requirement also appeared less qualified on other measures; for example, they
were more likely to lack a college major in the subjects they taught or three or
more years of teaching experience (see Exhibit 52). Among secondary English
teachers, 75 percent of those who reported that they were not highly qualified
under NCLB did not have a major in English, compared with 46 percent of those
who said they were highly qualified. Similarly, 18 percent of English teachers
who were not highly qualified had fewer than 3 years of experience, compared
with 7 percent of highly qualified English teachers.

Exhibit 52
Percentage of Secondary Teachers Who Are Novice
Teachers or Lack a College Major in the Subject That
They Teach, by Highly Qualified Status, 2004-05

                                                                  Highly      Not Highly
                                                                  Qualified   Qualified
English teachers with fewer than 3 years of teaching experience      7%          18%
Math teachers with fewer than 3 years of teaching experience         9%          11%
English teachers who do not have a major in English                 46%          75%
Math teachers who do not have a major in math                       59%          85%

Exhibit reads: Secondary English teachers who said they were not highly
qualified under NCLB were more likely to be novice teachers with fewer than
three years of teaching experience (18 percent) than those who were
considered highly qualified (7 percent).

Source: National Longitudinal Study of NCLB, Teacher Survey (n=1,075 to 1,255 for teachers who
are highly qualified; n=138 to 152 for teachers who are not highly qualified).
Many districts and schools reported that they did not notify parents
about whether their child’s teacher was highly qualified, as required
under NCLB. High-poverty schools with teachers who did not meet the “highly
qualified” requirement were much more likely to report having notified parents of
the highly qualified status of their child’s teacher (76 percent) than were low-
poverty schools (31 percent).158

C. Professional Development
Research indicates that professional development that places a strong
emphasis on academic content, and on how students learn specific
content, is associated with gains in student achievement.159 Research also
indicates that teachers reported that professional development enhanced their
knowledge and skills when it was sustained and intensive; connected to state
standards and to teachers' goals or other learning experiences; involved teams of
teachers from the same grade levels, departments, or schools; and allowed
teachers to observe and practice the skills and techniques being introduced or to
actively engage in conversations about teaching and learning.160

NCLB requires states to report on the percentage of teachers who
participated in "high quality" professional development, but the validity
of these data is questionable. It is not clear that states have rigorous,
consistent definitions of "high quality" or accurate mechanisms for collecting such
data. In addition, 14 states did not submit these data in their September 2003
Consolidated Application as required.161 Based on the 38 states that did report
these data for 2002-03, the reported percentage of teachers participating in high-
quality professional development varied widely. Eleven states reported that 90
percent or more of their teachers had participated; 10 other states reported that
fewer than 50 percent of their teachers had participated.

1. Content Focus and Intensity of Professional Development

Most teachers reported that they participated in some professional
development that had a focus on instructional strategies for teaching
reading or mathematics, but fewer than one-quarter of teachers
participated in such training for more than 24 hours over the 2003-04
school year and summer (see Exhibit 53). For example, 90 percent of
elementary teachers participated in at least one hour of professional development
focused on instructional strategies for teaching reading, but only 20 percent
participated for more than 24 hours over the 2003-04 school year and summer.162
Although there is no hard evidence on the minimum number of contact hours or
duration necessary for professional development to have an impact on teaching
practice and student achievement, researchers argue that professional
development is more likely to have an impact if it involves many contact hours
over a long time period.163

Exhibit 53
Percentage of Teachers Participating in Professional Development
Focused on Instructional Strategies for Reading and Mathematics,
2003-04

[Stacked bar chart showing the percentage of teachers who received none, 1 to 5 hours, 6 to 24 hours, or more than 24 hours of such professional development: for reading, elementary teachers (n=4,007) and secondary English teachers (n=1,740); for mathematics, elementary teachers (n=3,994) and secondary mathematics teachers (n=1,580).]

Exhibit reads: Twenty percent of elementary teachers reported that they
received more than 24 hours of professional development focused on
instructional strategies for teaching reading during the 2003-04 school year.
Source: National Longitudinal Study of NCLB, Teacher Survey.

Exhibit 54
Percentage of Teachers Participating in Professional Development
Focused on In-Depth Study of Topics in Reading and Mathematics,
2003-04

[Stacked bar chart showing the percentage of teachers who received none, 1 to 5 hours, 6 to 24 hours, or more than 24 hours of such professional development: for reading, elementary teachers (n=3,982) and secondary English teachers (n=1,719); for mathematics, elementary teachers (n=3,950) and secondary mathematics teachers (n=1,565).]

Exhibit reads: Thirteen percent of elementary teachers reported that they
received more than 24 hours of professional development focused on in-depth
study of reading topics during the 2003-04 school year.
Source: National Longitudinal Study of NCLB, Teacher Survey.

109

Teachers were unlikely to report that they participated in professional
development focused on “in-depth study” of reading and mathematics topics
for more than 24 hours over the 2003-04 school year (see Exhibit 54). Only
13 percent of elementary teachers and 16 percent of
In addition, about half of all general elementary teachers (51 percent) and
secondary mathematics teachers (49 percent) did not participate in any
professional development focused on the in-depth study of mathematics during
the 2003-04 school year and summer.164

Special education teachers were less likely than general education
teachers to report that they participated in professional development
focused on reading and mathematics. For example, while 72 percent of
general elementary teachers received training on instructional strategies for
teaching mathematics during the 2003-04 school year, only 48 percent of special
education teachers received such training. However, 88 percent of special
educators participated in professional development focused on strategies for
teaching students with disabilities, while only 50 percent of general elementary
teachers participated in such training.165

Teachers in high-poverty schools were more likely to report that they
participated in professional development focused on reading and
mathematics than were teachers in low-poverty schools. For example,
elementary teachers in high-poverty schools (49 percent) were more likely than
their counterparts in low-poverty schools (36 percent) to participate in
professional development focused on the in-depth study of topics in reading
during the 2003-04 school year. Likewise, 49 percent of secondary English
teachers in high-poverty schools reported participating in professional
development focused on in-depth study of topics in reading or English,
compared with 36 percent of their colleagues
in low-poverty schools.166

2. Other Characteristics of Teachers’ Professional Development

The majority of teachers (63 percent) reported that their professional
development activities during the 2003-04 school year were often
designed to support state or district standards or assessments. In
addition, more than two-thirds of teachers reported that at least some of their
professional development was based explicitly on what they had learned in earlier
professional development experiences. However, only 17 percent of these
teachers said that this was often the case.

Eighty percent of teachers reported that they participated in some
professional development with other teachers from their school.
Elementary teachers (89 percent) and secondary English teachers (90 percent)
were more likely to participate in some professional development with other
teachers from their same schools, departments or grade levels than were their
peers who teach secondary mathematics (83 percent) or special education (78
percent).167

At least two-thirds of teachers reported that they participated in some
professional development that provided opportunities for active learning
during the 2003-04 school year. Approximately two-thirds of elementary and

110
special educators reported that they participated in at least some professional
development that provided them with the opportunity to practice what they had
learned and receive feedback; more than half of secondary English and
mathematics teachers participated in training that involved this kind of activity.
In addition, more than 50 percent of these same groups of teachers reported that
they reviewed student work or scored assessments as part of some of their 2003-
04 professional development activities. Elementary teachers were more likely
than secondary mathematics and English teachers to report that they engaged in
each of these activities.168

3. Professional Development for Teachers Who Are Not Highly Qualified

Teachers who reported that they were not considered highly qualified
were no more likely to report that they participated in content-focused
professional development than were highly qualified teachers. However,
elementary teachers who said they were not highly qualified under NCLB were
more likely to report participating in a sustained mentoring or new-teacher
induction program (47 percent, compared with 26 percent of highly qualified
elementary teachers) during the 2003-04 school year. However, no significant
differences were found for secondary teachers or for other types of support, such
as peer coaching or release time for course preparation or college courses.169

D. Qualifications of Title I Paraprofessionals


Paraprofessionals account for more than one-third of Title I-funded instructional
staff members, and they spend over half of their time tutoring students one-on-
one or working with students in groups. Due to concerns about the quality of the
instructional support provided by these staff members, NCLB strengthened
requirements for their qualifications. In order to be considered qualified, Title I
instructional aides must have passed a state-endorsed or state-required
paraprofessional assessment or must have either two years of college or an
associate’s degree.

According to principal reports, 63 percent of Title I instructional aides
had been identified as “qualified” under NCLB as of the 2004-05 school
year, and 11 percent were not qualified.170 For the remaining 26 percent of
Title I aides, principals either indicated that they did not know the aides’ status or
skipped the question entirely.171 (By the end of the 2005-06 school year, all Title I
instructional aides must be “qualified” as defined in NCLB.) A survey of the aides
themselves suggests that a higher percentage may meet the NCLB requirement
when final determinations are made; 87 percent of Title I instructional aides
indicated that they either had passed a state or district paraprofessional
assessment (55 percent) or had two years of college or an associate’s degree (56
percent).172

More than three-quarters of Title I instructional aides reported that they spent at
least some of their work day tutoring students one-on-one (79 percent) or working

111
with students in groups (87 percent); on average these aides reported spending
about 57 percent of their time on these two activities. Nearly one-quarter (23
percent) reported that, of the time that they spent tutoring or working
with students in a classroom, a teacher was present for half or less of
this time.173

Among Title I instructional aides who said they were not qualified under
NCLB, 30 percent reported “not enough money or funding to become
qualified” as a major challenge and 21 percent reported “not enough
time to get qualified.” Other major challenges reported by aides were
insufficient encouragement from school and district (17 percent), level of difficulty
of the test (13 percent), and insufficient information about what they needed to
do (8 percent).174

The majority of states, districts, and schools reported that they had
adopted at least one strategy to help Title I aides comply with the NCLB
“qualified” requirements as of fall 2004. At the state level, the most
common strategies were working with local colleges and universities to design
needed courses or offering evening and weekend courses to Title I aides (21
states) and offering test preparation courses for aides wishing to take the state
competency exam (13 states). Other common strategies included offering
funding for course tuition (10 states) and paying the state test fee for interested
aides (six states).175

Nearly three-quarters (74 percent) of principals reported that their
district or school was providing non-qualified paraprofessionals with
training related to their classroom duties. Other strategies included the
creation of school-level liaisons to work with paraprofessionals on their
qualifications (56 percent) and providing incentives for paraprofessionals to
increase their qualifications and become “qualified” under NCLB (36 percent).176

Conclusions
Due to concern that too many teachers, particularly those in low-performing
schools, had not met state certification requirements or lacked expertise in the
subjects they were teaching, NCLB requires that all teachers be “highly qualified”
by 2005-06. Although most states are well on the way to meeting the law’s
requirements, we do not have evidence about whether the qualifications of the
teaching workforce have actually changed.

The large majority of teachers (86 percent) have been designated as “highly
qualified” according to state-reported data for 2003-04. However, teachers in
schools with high concentrations of low-income students or minority students
were more likely to be considered not highly qualified under NCLB. In addition,
almost one-fourth of teachers surveyed said they did not know their highly
qualified status.

112
The way in which states are implementing the HOUSSE option for veteran
teachers has been the subject of considerable debate. Nearly all states (47) offer
a HOUSSE option, with the most common type involving a point system. Most
states allow points to be earned for completion of college courses, published
articles, and teaching awards or honors. Four states recognize improved student
achievement. Twenty-six states allow a substantial percentage of required points
to be earned for prior teaching experience.

Professional development has been and remains a key strategy for improving
teacher effectiveness. Most teachers reported receiving some professional
development in reading and math, but a relatively small proportion participated in
such training for an extended period of time. For example, only 20 percent of
elementary school teachers reported receiving more than 24 hours of training in
reading instruction in 2003-04. Teachers were less likely to receive training in
instructional strategies for teaching mathematics or in-depth study of topics in
reading or mathematics. Special education teachers were less likely than general
education teachers to receive training focused on reading and mathematics.
Classroom teachers in high-poverty schools received more training in both
reading and mathematics.

113
References

Anderson, Leslie M., and Lisa Weiner (2004). Early Implementation of
Supplemental Educational Services Under the No Child Left Behind Act: Year One
Report. Washington, D.C.: U.S. Department of Education, Office of the Under
Secretary, Policy and Program Studies Service.

Anderson, Leslie M., and Katrina G. Laguarda (2005). Case Studies of
Supplemental Services Under the No Child Left Behind Act: Findings From
2003-04. Washington, D.C.: U.S. Department of Education, Office of Planning,
Evaluation, and Policy Development, Policy and Program Studies Service.

Kennedy, Mary (1998). Research Monograph No. 13: Form and Substance in
Inservice Teacher Education. Madison: National Institute for Science Education,
University of Wisconsin.

Center on Education Policy (2005). From the Capital to the Classroom: Year 3 of
the No Child Left Behind Act. Washington, D.C.: Center on Education Policy.

Chaney, Bradford (1995). Student Outcomes and the Professional Preparation of
8th Grade Teachers. NSF/NELS:88 Teacher Transcript Analysis. Rockville, MD:
Westat.

Cohen, David K., Milbrey W. McLaughlin, and Heather C. Hill (1998). Instructional
Policy and Classroom Performance: The Mathematics Reform in California (RR-39).
Philadelphia: Consortium for Policy Research in Education.

Garet, Michael S., Beatrice F. Birman, Andrew C. Porter, Laura Desimone, Rebecca
Herman, and Kwang Suk Yoon (1999). Designing Effective Professional
Development: Lessons From the Eisenhower Professional Development Program.
Washington, D.C.: U.S. Department of Education, Office of the Under Secretary,
Planning and Evaluation Service.

Goldhaber, Dan D., and Dominic J. Brewer (2000). “Does Teacher Certification
Matter? High School Certification Status and Student Achievement” in
Educational Evaluation and Policy Analysis, 22: 129-145.

Padilla, Christine, Heidi Skolnik, Alejandra Lopez-Torkos, Katrina Woodworth,
Andrea Lash, Patrick M. Shields, Katrina G. Laguarda, and Jane L. David
(forthcoming). Title I Accountability and School Improvement Efforts From 2001 to
2004. Washington, DC: U.S. Department of Education, Office of Planning,
Evaluation, and Policy Development, Policy and Program Studies Service.

Padilla, Christine, Katrina Woodworth, Andrea Lash, Patrick M. Shields, and Katrina
G. Laguarda (2005). Evaluation of Title I Accountability Systems and School
Improvement Efforts: Findings From 2002-03. Washington, DC: U.S. Department

114
of Education, Office of Planning, Evaluation, and Policy Development, Policy and
Program Studies Service.

Perie, Marianne, Wendy S. Grigg, and Patricia L. Donahue (2005). The Nation's
Report Card: Reading 2005 (NCES 2006-451). Washington, DC: U.S. Department
of Education, National Center for Education Statistics.

Perie, Marianne, Wendy S. Grigg, and Gloria S. Dion (2005). The Nation's Report
Card: Mathematics 2005 (NCES 2006-453). Washington, DC: U.S. Department of
Education, National Center for Education Statistics.

Schiller, Ellen, Ellen Bobronnikov, Fran O’Reilly, Cristofer Price, and Robert
St.Pierre (2005). The Study of State and Local Implementation and Impact of the
Individuals with Disabilities Education Act: Final 2nd Interim Report (2002-2003
School Year). Bethesda, MD: Abt Associates.

Seastrom, Marilyn, Lee Hoffman, Chris Chapman, and Robert Stillwell (2005). The
Averaged Freshman Graduation Rate for Public High Schools From the Common
Core of Data: School Years 2001-02 and 2002-03 (NCES 2006-601). Washington,
DC: U.S. Department of Education, National Center for Education Statistics.

Seastrom, Marilyn, Chris Chapman, Robert Stillwell, Daniel McGrath, Pia Peltola,
Rachel Dinkes, and Zeyu Xu (forthcoming). A Review and Analysis of Alternative
High School Graduation Rates, Volume I. User’s Guide to Computing High School
Graduation Rates, Volume 2, An Analysis of Alternative High School Graduation
Rates (NCES 2006-602). Washington, DC: U.S. Department of Education, National
Center for Education Statistics.

Seastrom, Marilyn, Kerry Gruber, Robin Henke, Daniel McGrath, and Benjamin
Cohen (2004). Qualifications of the Public School Teacher Workforce: Prevalence
of Out-of-Field Teaching: 1987-88 to 1999-2000 (NCES 2002-603). Washington,
DC: U.S. Department of Education, National Center for Education Statistics.

Shields, Patrick M., Camille Esch, Andrea Lash, Christine Padilla, Katrina
Woodworth, Katrina G. Laguarda, and Nicholas Winter (2004). Evaluation of Title I
Accountability Systems and School Improvement Efforts (TASSIE): First-Year
Findings. Washington, DC: U.S. Department of Education, Office of the Under
Secretary, Policy and Program Studies Service.

Sinclair, Beth (forthcoming). State ESEA Title I Participation Information for
2002-03: Final Summary Report. Washington, DC: U.S. Department of Education,
Office of Planning, Evaluation, and Policy Development, Policy and Program
Studies Service.

U.S. Department of Education, Office of English Language Acquisition (2005).
Biennial Evaluation Report to Congress on the Implementation of the State
Formula Grant Program, 2002-2004, English Language Acquisition, Language

115
Enhancement and Academic Achievement Act (ESEA, Title III, Part A). Washington,
D.C: U.S. Department of Education.

U.S. Department of Education, Office of Planning, Evaluation, and Policy
Development, Policy and Program Studies Service, unpublished data from the
Study of State Implementation of Accountability and Teacher Quality Under No
Child Left Behind.

U.S. Department of Education, Office of Planning, Evaluation, and Policy
Development, Policy and Program Studies Service, unpublished data from the
National Longitudinal Study of No Child Left Behind.

Wagner, Mary, Lynn Newman, Renee Cameto, and Phyllis Levine (2005). Changes
Over Time in the Early Postschool Outcomes of Youth with Disabilities: A Report of
Findings from the National Longitudinal Transition Study (NLTS) and the National
Longitudinal Transition Study-2 (NLTS2). Menlo Park, CA: SRI International.

Williams, Andra, Rolf K. Blank, and Carla Toye (forthcoming). State Education
Indicators With a Focus on Title I: 2002-03. Washington, DC: U.S. Department of
Education, Office of Planning, Evaluation, and Policy Development, Policy and
Program Studies Service.

116
Acknowledgments

This report benefited from the contributions of many individuals and organizations
that provided valuable information and advice. Although they are too numerous
to mention each by name, we appreciate their support, and would like to
specifically acknowledge the contributions of the following individuals.

The National Assessment of Title I is being conducted under the guidance of an
Independent Review Panel that provided ideas and feedback through both
meetings and reviews of the data and draft reports; their insights and suggestions
made this a better report. The panel members are: Kaleem Caire, Thomas Cook,
Christopher Cross, Gayle Fallon, David Francis, Norma Garza, Eric Hanushek,
Sharon Johnson, Paul Peterson, Stephen Raudenbush, John Stevens, Eric Smith,
Patricia Supple, Tasha Tillman, Thomas Trabasso, Maris Vinovskis, and Rodney
Watson.

The report was prepared under the leadership and direction provided by Alan
Ginsburg, director of the Policy and Program Studies Service (PPSS), David
Goodwin, director of program and analytic studies in PPSS, Daphne Kaplan, PPSS
team leader, Ricky Takai, associate commissioner for the evaluation division of the
National Center for Education Evaluation and Regional Assistance (NCEE), and
Audrey Pendleton, senior education researcher at NCEE.

Studies conducted by independent research firms under contract to the U.S.
Department of Education provided most of the information presented in this
report. Important contributions were made by:

• The National Longitudinal Study of No Child Left Behind (NLS-NCLB), led by
Georges Vernez of the RAND Corporation and Michael Garet and Beatrice
Birman of the American Institutes for Research, assisted by Brian Stecher
(accountability team leader), Brian Gill (choice), and Meredith Ludwig
(teacher quality). Other NLS-NCLB team members who provided special
assistance for this report include Charles Blankenship, Hiro Hikawa, Felipe
Martinez, Jennifer McCombs, Scott Naftel, and Kwang Suk Yoon.
• The Study of State Implementation of Accountability and Teacher Quality
Under No Child Left Behind (SSI-NCLB), led by Jennifer O’Day and Kerstin
Carlson LeFloch of the American Institutes for Research. Other SSI-NCLB
team members who provided important assistance for this report are
Andrea Cook, Laura Hoard, Lori Nathanson, and James Taylor.
• The Evaluation of Title I Accountability Systems and School Improvement
Efforts (TASSIE), led by Patrick Shields and Christine Padilla of SRI
International.

Other researchers who provided useful assistance for this report include Brian
Gong of the Center for Assessment, Allison Henderson and Beth Sinclair of Westat,
and Andra Williams of the Council of Chief State School Officers.

117
Teachers, principals, school district staff, and state education agency
representatives across the country took time out of their busy schedules to
respond to our surveys, interviews, and requests for information. Without their
efforts, this report would not have been possible, and we greatly appreciate their
support for this national assessment as well as their core work of educating
America’s children.

Many Department staff reviewed drafts of this report and provided useful
comments and suggestions as well as, in some cases, providing data for the
report. We would like to acknowledge the assistance of: Andrew Abrams, Millicent
Bentley-Memon, Kerri Briggs, Chris Chapman, William Cordes, Thomas Corwin,
Laurette Crum, Sarah Dillard, Kathryn Doherty, Brian Fu, Arnold Goldstein, Patricia
Gonzalez, Lee Hoffman, Rene Islas, Jacquelyn Jackson, Stacy Kreppel, Milagros
Lanauze, Kathleen Leos, Jeannette Lim, David Malouf, Darla Marburger, Carlos
Martinez, Richard Mellman, Meredith Miller, Michael Petrilli, Anne Ricciuti, Kay
Rigling, Krista Ritacco, Patricia O’Connell Ross, Ross Santy, Zollie Stevenson, Bob
Stonehill, and Elizabeth Witt.

While we appreciate the assistance and support of all of the above individuals,
any errors in judgment or fact are of course the responsibility of the authors.

118
Appendix A: Description of Major Data Sources Included in This Report

119
National Longitudinal Study of NCLB (NLS-NCLB)
Purpose

The National Longitudinal Study of No Child Left Behind (NLS-NCLB), which is
Congressionally-mandated under Section 1501(c) of the Elementary and
Secondary Education Act, is examining the implementation of the No Child Left
Behind provisions for the Title I and Title II programs in a nationally-representative
sample of schools and districts. The study includes four components focused on
particular provisions of the law: 1) accountability; 2) teacher quality; 3) Title I
school choice and supplemental services; and 4) targeting and resource
allocation. The study is collecting data in the 2004-05 and 2006-07 school years.

The NLS-NCLB study is being conducted by the RAND Corporation in collaboration
with the American Institutes for Research and the National Opinion Research
Center.

Sample Design

The nationally representative sample includes 300 districts and 1,483 schools
within those districts, including both Title I and non-Title I schools. In order to
ensure sufficient sample sizes of schools identified for improvement under Title I,
the study oversampled high-poverty districts and schools, as well as oversampling
Title I schools. The distribution of sample schools by grade level is similar to the
distribution of all schools. The original sample included 1,502 schools, but 19
were determined to be out-of-scope and the net sample was 1,483 schools.

Exhibit A.1
Characteristics of NLS-NCLB District and School Sample,
Compared with the Universe of Districts and Schools
Sample Universe
Number Percent Number Percent
Districts, by Poverty Quartile (Census poverty) 300 14,972
Highest poverty quartile 163 54% 3,743 25%
Second highest poverty quartile 41 14% 3,743 25%
Second lowest poverty quartile 50 17% 3,743 25%
Lowest poverty quartile 46 15% 3,743 25%
Schools, By Poverty Level 1,502 83,298
75-100% eligible for free or reduced price lunch 596 40% 11,282 13%
50-74% eligible for free or reduced price lunch 363 24% 15,461 19%
35-49% eligible for free or reduced price lunch 106 7% 12,844 15%
<35% eligible for free or reduced price lunch 291 19% 33,884 41%
Missing 146 10% 9,827 12%
Schools, by Title I Status 1,502 83,298
Title I 1,163 77% 46,048 55%
Non Title I 259 17% 31,312 38%
Missing 80 5% 5,938 7%
Schools, by Grade Level 1,502 83,298
Elementary 906 60% 50,597 61%
Middle 298 20% 15,700 19%
High 298 20% 17,001 20%

120

District poverty quartiles were based on Census Bureau estimates of the number
of school-age children and poor children living in each district (2002 Small Area
Income and Poverty Estimates). The poverty quartiles were created by ranking all
districts by the percentage of poor school-age children and then dividing these
districts into quartiles that each contain 25 percent of the school-age children.
School poverty levels were based on the percentage of students eligible for free
or reduced-price lunches. The eligibility threshold for the subsidized lunch
program is looser than the official poverty definition (eligibility for reduced-price
lunches is set at 185 percent of the official poverty definition), so school poverty
rates are generally higher than district poverty rates.
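The quartile construction described above, in which cut points are chosen so that each quartile contains 25 percent of the school-age children rather than 25 percent of the districts, can be sketched as follows; the function name and district data are illustrative, not drawn from the study:

```python
def poverty_quartiles(districts):
    """Assign each district to a poverty quartile, where `districts` is a
    list of (district_id, pct_poor_children, n_school_age_children).
    Quartile 1 holds the highest-poverty districts, and each quartile
    accounts for roughly 25 percent of the school-age children."""
    ordered = sorted(districts, key=lambda d: d[1], reverse=True)
    total_children = sum(d[2] for d in ordered)
    quartile, cumulative = {}, 0
    for district_id, _, n_children in ordered:
        # Assign by the share of children accumulated so far.
        quartile[district_id] = min(4, int(cumulative * 4 / total_children) + 1)
        cumulative += n_children
    return quartile
```

Note that a quartile defined this way need not contain one-quarter of the districts, since districts vary widely in enrollment.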

The teacher sample includes approximately seven teachers per school (six
classroom teachers and one special education teacher). School staff rosters were
collected and divided into teacher strata by grade level; a stratum of Title I
paraprofessionals was also created. After school rosters were stratified,
independent random sampling took place within each stratum. At the elementary
level, one teacher was selected per grade. At the secondary level, about three
math teachers and three English teachers were selected per school. One Title I
paraprofessional was selected from each Title I school that uses such
paraprofessionals. The resulting sample included a total of 8,791 classroom
teachers (including 4,772 elementary teachers, 2,081 secondary English teachers,
and 1,938 secondary mathematics teachers), 1,408 special education teachers,
and 950 paraprofessionals.

The parent sample consists of a maximum of 400 parents in each of eight districts
for a total of 3,094 parents. In each district, the 400 parents were selected
randomly from four groups: 100 parents of students receiving supplemental
services in schools identified for improvement; 100 parents of students not
receiving supplemental services in schools identified for improvement; 100
parents of students who moved from an identified to a non-identified school; and
100 parents of students in non-identified schools. Some districts had fewer than
100 students who moved from an identified to a non-identified school. The eight
districts were selected based on availability of the necessary longitudinal
individual student achievement data and sufficient numbers of students
participating in the Title I school choice and supplemental services options to
support the above target sample sizes.

Finally, a sample of 125 supplemental educational services providers was drawn
from all such providers in a subset of 15 of the 300 districts, including the eight
districts that were selected for the parent surveys.

Data Collection

121
Data collection instruments for this study that were used in this report include
mail surveys of district federal program coordinators, school principals, classroom
teachers, and Title I paraprofessionals; survey administration for the 2004-05
school year began in October 2004 and was completed in March 2005. Topics
covered in the survey questionnaires included accountability systems, AYP and
identification for improvement, technical assistance, improvement strategies, use
of assessment results, Title I school choice and supplemental educational
services, teacher quality, and professional development.

Surveys of parents and supplemental service providers were also conducted but
were not completed in time for this report. Other components of the data collection are
ongoing, including review of extant data such as state report cards and school
improvement plans, analyses of state assessment data, and targeting and
resource allocation data.

The study includes two exploratory achievement analyses that are examining
achievement outcomes for students participating in the Title I choice and
supplemental services options (in nine districts) and the impact of identifying
schools for improvement on student achievement (in two states). Both analyses
are using quasi-experimental designs.

122
For the targeting and resource allocation component, the study is collecting data
from each of the 50 states on state suballocations of federal program funds to
school districts for the six programs included in this component: Title I Part A, Title
II Part A, Title III, Reading First, Comprehensive School Reform (CSR), and Perkins
Vocational Education. Districts in the 300-district sample were asked to provide
budget, expenditure, and administrative records, including personnel and payroll
records, for these six programs. The sample districts were also asked to provide
their allocations to schools for Title I, Reading First, and CSR, as well as school-
level budgets and plans for the uses of these funds in the sample schools within
their district. For schools operating schoolwide programs under Title I, the study
is collecting schoolwide plans and budgets if applicable. The information on
targeting and resource allocation will be collected one time only, for the 2004-05
school year.

Response Rates, Weighting, and Handling of Missing Data

Survey response rates for 2004-05 were 96 percent for the school district survey,
89 percent for the principal survey, 84 percent for the teacher surveys, and 87
percent for the Title I paraprofessional survey.

Survey data were weighted in order to produce national estimates. At the school
level, for example, the base weight for each school is the reciprocal of the
school's two-stage selection probability, equal to the product of the probability of
selecting the district and the conditional probability of selecting the school, given
the district. In addition, the weights were adjusted, controlling for covariates, to
handle instances of total school non-response. School weights were raked to
population counts of schools in four dimensions: school size, region by poverty
stratum, metro status, and school type. Two sets of weights were then produced
for schools: (1) a set for estimating the proportion of schools with a defined
attribute, and (2) a set for estimating the proportion of students attending schools
with a defined attribute. Similar weighting procedures were employed for the
district, teacher, and paraprofessional survey data.
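The base-weight computation described above is inverse-probability weighting; a minimal sketch, using hypothetical selection probabilities rather than values from the study:

```python
def base_weight(p_district, p_school_given_district):
    """Reciprocal of the school's two-stage selection probability: the
    product of the probability of selecting the district and the
    conditional probability of selecting the school within it."""
    return 1.0 / (p_district * p_school_given_district)

# A district drawn with probability 0.02 and a school then drawn with
# conditional probability 0.25 gives a base weight of 200; in national
# estimates that school stands in for about 200 similar schools, before
# the non-response and raking adjustments described above.
w = base_weight(0.02, 0.25)
```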

Missing data were imputed for principal survey data on the total number of
elementary classroom teachers and secondary classes, which were used as
denominators for calculating the percentage of elementary teachers who were
considered highly qualified under NCLB and the percentage of secondary classes
that were taught by highly qualified teachers (reported on page 78). Of the
930 elementary school principals, 18 did not answer the survey item asking for
the total number of classroom teachers at their schools, and 36 of the 385
secondary school principals did not answer the item asking for the total
number of class sections. Data for elementary classroom teachers were imputed
by taking the student-teacher ratios for the principals who answered the item
and fitting a regression model with this ratio as the outcome and total
enrollment and school poverty level as the predictors. Using the regression
coefficients, the predicted student-teacher ratio was computed for
each of the 18 schools and then converted to the estimated number of classroom
teachers in the school. Data on the total number of secondary class sections

123
were imputed in a similar manner. There were two elementary school principals
and five secondary school principals whose values could not be imputed due to
missing values in the predictor variables.
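
The regression-based imputation described above can be sketched as follows. The
variable names and toy data are illustrative assumptions, not the study's data; the
sketch mirrors the described steps: fit an ordinary least squares model of the
student-teacher ratio on enrollment and poverty among respondents, predict the ratio
for a non-responding school, and convert it back to a teacher count.

```python
import numpy as np

# Responding schools (toy data): enrollment, poverty level, teacher count.
enroll = np.array([300.0, 450.0, 600.0, 520.0, 380.0])
poverty = np.array([80.0, 35.0, 60.0, 90.0, 20.0])   # percent low-income
teachers = np.array([15.0, 25.0, 30.0, 25.0, 21.0])
ratio = enroll / teachers  # students per teacher

# Ordinary least squares: ratio ~ intercept + enrollment + poverty.
X = np.column_stack([np.ones_like(enroll), enroll, poverty])
coef, *_ = np.linalg.lstsq(X, ratio, rcond=None)

# Non-responding school: predict its ratio, then back out a teacher count.
new_enroll, new_poverty = 500.0, 70.0
pred_ratio = coef @ np.array([1.0, new_enroll, new_poverty])
imputed_teachers = new_enroll / pred_ratio
print(round(imputed_teachers, 1))
```

A school missing either predictor (enrollment or poverty level) cannot be scored by
the model, which is why seven schools in the study could not be imputed.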

Reporting

This study will issue a series of reports in collaboration with the Study of State
Implementation of Accountability and Teacher Quality Under NCLB. Interim
reports on accountability and teacher quality are due in Spring 2006, an interim
report on public school choice and supplemental services under Title I is due in
Summer 2006, and a report on targeting and resource allocation is due in Fall
2006. Reports from the second wave of the data collection are due in late 2007.
The reports will be available at:
www.ed.gov/about/offices/list/opepd/ppss/reports.html.

Study of State Implementation of Accountability and
Teacher Quality Under NCLB (SSI-NCLB)
Purpose

This companion study to the National Longitudinal Study of NCLB is collecting
information from all states about their implementation of accountability and
teacher quality provisions under Titles I, II, and III of NCLB.

The SSI-NCLB study is being conducted by the American Institutes for Research in
collaboration with the Council of Chief State School Officers and REDA
International.

Study Design

The study is surveying administrators at state education agencies responsible for
implementing NCLB accountability, assessment, and teacher quality provisions in
2004-05 and 2006-07. The study is also analyzing extant data, including state
lists of schools and districts that did and did not make adequate yearly progress
(AYP) and of those that have been identified for improvement.

Data Collection

The study has conducted telephone interviews with state-level personnel with
responsibilities in the key areas of this evaluation, such as state federal program
coordinators responsible for administering Title I, Title II, and Title III, as well as
state assessment directors. The interviews began in September 2004 and were
completed in March 2005. Topics covered in the interviews included state
assessment and accountability systems, state implementation of supplemental
educational services, state teacher quality and professional development
initiatives, and accountability and teacher quality under Title III.

The study also collected extant data from a range of sources including
consolidated state applications and consolidated state performance reports, state
report cards, and state educational agency websites. In particular, the study
compiled a detailed school-level database on the identification status of schools
and whether the schools met or missed AYP targets. The database contains the
identification status of 88,160 schools (Title I and non-Title I) in 50 states and the
District of Columbia. The database also contains the AYP status of 87,892 schools
located in approximately 15,000 districts across 50 states and the District of
Columbia. Some states did not report data on certain AYP targets; as a result, the
number of states and schools for which data are available on individual AYP targets
varies from 33 states (including 15,731 schools missing AYP and 61,868 schools
overall) to the full dataset.

Response Rates

Interviews for 2004-05 were completed for all 50 states plus the District of
Columbia and Puerto Rico.

Reporting

This study will issue a series of reports in collaboration with the National
Longitudinal Study of NCLB. Interim reports on accountability and teacher quality
are due in Spring 2006, and an interim report on public school choice and
supplemental services under Title I is due in Summer 2006. Reports from the
second wave of the data collection are due in late 2007. The reports will be
available at: www.ed.gov/about/offices/list/opepd/ppss/reports.html.

Study of Title I Accountability Systems and School
Improvement Efforts (TASSIE)
Purpose

This study focuses on the implementation of Title I accountability provisions from
2001-02, the year before NCLB went into effect, through 2003-04, the second
year of implementation of NCLB. Based on surveys of all states, a nationally
representative sample of districts, and a sample of schools, this study examines
the demographic characteristics of schools identified for improvement, school
improvement activities in identified schools, corrective actions and restructuring
activities for identified schools, and the implementation of public school choice
and supplemental services under Title I.

The TASSIE study is being conducted by SRI International.

Study Design

The study includes surveys of all states, a nationally representative sample of
1,300 districts, and 740 schools that had been identified for improvement in 2001-02
under the previous reauthorization of ESEA. The study also includes case
studies of 20 schools identified for improvement under Title I in 15 districts in five
states.

Data Collection

Data collection instruments for this study include mail surveys of district Title I
administrators and school principals, a telephone survey of state Title I
administrators, and site visit protocols for the case studies. The district and
school surveys, along with the case studies, were conducted in 2001-02, 2002-03
and 2003-04. The state survey was conducted twice, in 2002-03 and 2003-04.
Topics covered in the surveys included school and district identification for
improvement, school improvement activities in identified schools, corrective
actions and restructuring activities for identified schools, and the implementation
of public school choice and supplemental services under Title I.

Response Rates, Weighting, and Handling of Missing Data

TASSIE district and school samples are both stratified random samples in which
the probability of selection into the sample varies across strata. To estimate
population parameters, the sampled districts or schools are weighted so that the
total of the weights within a stratum equals the number of districts or schools in
that stratum in the sampling frame.

Survey response rates ranged from 88 to 90 percent for the district survey and
from 83 to 85 percent for the principal survey. To estimate population parameters
from the survey respondents, the weights assigned to respondents within any
stratum were modified to absorb the weights that would otherwise accrue to
non-responding schools in the stratum (thus, respondents' weights were adjusted to
sum to the total number in the stratum). A new set of weights was derived for
each year of the survey, since the set of respondents varied from one year to
another. The longitudinal estimates presented in this report use the analysis
weights assigned for the 2001-02, 2002-03, and 2003-04 respondent pools,
respectively.
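
The stratum-level nonresponse adjustment described above amounts to scaling each
respondent's weight by the ratio of the stratum's full weighted total to its
respondent weighted total. A minimal sketch, with invented strata and weights:

```python
def adjust_for_nonresponse(schools):
    """schools: list of dicts with 'stratum', 'weight', and 'responded' keys.
    Returns adjusted weights for respondents only: each respondent's weight
    is scaled up to absorb the weight of non-respondents in its stratum."""
    # Full weighted total and responding weighted total, per stratum.
    full, resp = {}, {}
    for s in schools:
        full[s["stratum"]] = full.get(s["stratum"], 0.0) + s["weight"]
        if s["responded"]:
            resp[s["stratum"]] = resp.get(s["stratum"], 0.0) + s["weight"]
    # Scale each respondent's weight by the stratum adjustment factor.
    return [
        s["weight"] * full[s["stratum"]] / resp[s["stratum"]]
        for s in schools if s["responded"]
    ]

sample = [
    {"stratum": "high-poverty", "weight": 10.0, "responded": True},
    {"stratum": "high-poverty", "weight": 10.0, "responded": False},
    {"stratum": "low-poverty", "weight": 5.0, "responded": True},
]
print(adjust_for_nonresponse(sample))  # [20.0, 5.0]
```

After adjustment, respondents' weights in each stratum sum to the stratum's
original total, as the text describes.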

There were limited cases of missing data, and there was no imputation for missing
data.

Reporting

Reports on the 2001-02 and 2002-03 data collections have been released and are
available at www.ed.gov/about/offices/list/opepd/ppss/reports.html#title. A final
report that includes data from the third and final year of the study (2003-04) is
forthcoming.

Shields, Patrick M., Camille Esch, Andrea Lash, Christine Padilla, Katrina
Woodworth, Katrina G. Laguarda, and Nicholas Winter (2004). Evaluation of
Title I Accountability Systems and School Improvement Efforts (TASSIE):
First-Year Findings. Washington, DC: U.S. Department of Education, Office of
the Under Secretary, Policy and Program Studies Service.

Padilla, Christine, Katrina Woodworth, Andrea Lash, Patrick M. Shields, and
Katrina G. Laguarda (2005). Evaluation of Title I Accountability Systems
and School Improvement Efforts: Findings From 2002-03. Washington, DC:
U.S. Department of Education, Office of Planning, Evaluation, and Policy
Development, Policy and Program Studies Service.

Padilla, Christine, Heidi Skolnik, Alejandra Lopez-Torkos, Katrina Woodworth,
Andrea Lash, Patrick M. Shields, Katrina G. Laguarda, and Jane L. David
(forthcoming). Title I Accountability and School Improvement Efforts From
2001 to 2004. Washington, DC: U.S. Department of Education, Office of
Planning, Evaluation, and Policy Development, Policy and Program Studies
Service.

Case Studies of the Early Implementation of Supplemental
Educational Services
Purpose

This study examined how states and districts were implementing the
supplemental educational services provisions of NCLB during the first two years
they were in effect, the 2002-03 and 2003-04 school years.

The study was conducted by Policy Studies Associates under subcontract to SRI
International.

Study Design and Data Collection

The case studies focused on nine school districts in six states that were
implementing NCLB supplemental services during the 2002-03 and 2003-04 school
years. In each district, case studies included visits to approximately three
schools and three supplemental services providers. Case studies also included
telephone interviews with state personnel; in-person interviews with district
administrators, school principals, and providers; and focus groups with teachers
and parents.

Reporting

The interim and final reports from this study have been released and are available
at www.ed.gov/about/offices/list/opepd/ppss/reports.html#title.

Anderson, Leslie M., and Lisa Weiner (2004). Early Implementation of
Supplemental Educational Services Under the No Child Left Behind Act:
Year One Report. Washington, DC: U.S. Department of Education, Office of
the Under Secretary, Policy and Program Studies Service.

Anderson, Leslie M., and Katrina G. Laguarda (2005). Case Studies of
Supplemental Services Under the No Child Left Behind Act: Findings From
2003-04. Washington, DC: U.S. Department of Education, Office of
Planning, Evaluation, and Policy Development, Policy and Program Studies
Service.

Consolidated State Performance Reports
Purpose

Section 1111 of the No Child Left Behind Act (NCLB) requires states to provide an
annual report to the Secretary that includes data on student achievement on
state assessments, disaggregated for various student subgroups specified in the
law, as well as the number and names of schools identified for improvement
under Title I, the reasons why each school was so identified, the percentage of
classes taught by teachers who are highly qualified under NCLB, and other
information. Section 9303 gives states the option of reporting on multiple ESEA
programs through a single consolidated report, and all states do in fact use the
consolidated reporting option. The Consolidated State Performance Reports also
collect basic descriptive information about programs, such as numbers of
participating schools and students and numbers of schools identified for
improvement.

Study Design and Data Collection

The Consolidated State Performance Reports are divided into Part I, which
includes achievement data on state assessments, implementation of Title I
accountability requirements, and other information considered high-priority, and
Part II, which includes the remaining required information and has a later due
date. For 2002-03, Part I reports were due to the U.S. Department of Education in
December 2003 and Part II reports were due in June 2004. State reports for 2003-
04 were due in early 2005 (January 2005 for Part I and April 2005 for Part II);
however, these data were not available in time for inclusion in this report due to
delays in state submissions and the need for data cleaning, which has not yet
been completed.

Further information about the Consolidated State Performance Reports, including
the data collection forms and instructions, is available at
www.ed.gov/admins/lead/account/consolidated/index.html.

Reporting

Two annual reports summarize data from the Title I Part A portion of the
Consolidated State Performance Reports. Reports presenting the 2001-02 data
have been released and are available at
www.ed.gov/about/offices/list/opepd/ppss/reports.html#title. Reports presenting
the 2002-03 data used in this report are forthcoming:

Williams, Andra, Rolf K. Blank, and Carla Toye (forthcoming). State Education
Indicators With a Focus on Title I: 2002-03. Washington, DC: U.S. Department
of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service.

Sinclair, Beth (forthcoming). State ESEA Title I Participation Information for
2002-03: Final Summary Report. Washington, DC: U.S. Department of
Education, Office of Planning, Evaluation, and Policy Development, Policy and
Program Studies Service.

Appendix B: Supplemental Exhibits

Exhibit B-1
Percentage of 8th-Grade Students Achieving At or Above the "Proficient" Level
on NAEP and State Assessments in Mathematics, 2003

                          State Assessment, 2003    NAEP, 2003
Minnesota                           77                  44
Massachusetts                       37                  38
North Dakota                        44                  36
Connecticut                         77                  35
Montana                             70                  35
New Hampshire                       74                  35
South Dakota                        55                  35
Vermont                             --                  35
Wisconsin                           73                  35
Colorado                            69                  34
Kansas                              60                  34
Iowa                                72                  33
New Jersey                          57                  33
Nebraska                            75                  32
New York                            --                  32
North Carolina                      82                  32
Oregon                              59                  32
Washington                          37                  32
Wyoming                             35                  32
Indiana                             71                  31
Utah                                70                  31
Virginia                            75                  31
Alaska                              64                  30
Maryland                            40                  30
Ohio                                53                  30
Pennsylvania                        51                  30
Illinois                            52                  29
Maine                               18                  29
Idaho                               53                  28
Michigan                            40                  28
Missouri                            14                  28
Delaware                            47                  26
South Carolina                      19                  26
Texas                               73                  25
Kentucky                            31                  24
Rhode Island                        34                  24
Florida                             57                  23
California                          29                  22
Georgia                             67                  22
Arizona                             18                  21
Tennessee                           79                  21
Nevada                              38                  20
Oklahoma                            65                  20
West Virginia                       --                  20
Arkansas                            22                  19
Hawaii                              17                  17
Louisiana                           52                  17
Alabama                             56                  16
New Mexico                          64                  15
Mississippi                         48                  12
District of Columbia                40                   6

Source: Consolidated State Performance Reports and National Center for Education Statistics, Main
NAEP. Note: The preferred grade for this table was 8th grade; however, in states
that did not consistently assess students in 8th-grade mathematics, we used either 7th- or 6th-grade
assessment results. Dashes indicate that a state assessment value could not be recovered from the
source chart.

Exhibit B-2
Proportion of Students Performing At or Above Their State’s Proficient Level
in Reading and Mathematics, in 8th Grade or Another Middle School Grade, 2000-01 to 2002-03

Reading Mathematics
2000-01 2001-02 2002-03 Change 2000-01 2001-02 2002-03 Change
Alabama 64 58 -6 71 54 -17
Arizona 42 46 4 18 18 0
Colorado 64 65 89 25 37 39 69 32
Connecticut 77 78 78 1 76 77 77 1
Delaware 72 72 70 -2 43 48 47 4
Illinois 66 68 63 -3 50 52 52 2
Kansas 66 67 71 5 57 56 60 3
Kentucky 54 56 57 3 27 26 31 4
Louisiana 51 48 55 4 46 41 52 6
Maine 41 43 45 4 20 21 18 -2
Massachusetts 67 64 34 34 37 3
Mississippi 49 48 57 8 40 45 48 8
Missouri 34 32 32 -2 14 14 14 0
Montana 73 71 71 -2 69 68 70 1
New Jersey 73 73 74 1 62 58 57 -5
North Carolina 83 85 26 -57 80 83 82 2
Ohio 58 56 65 7 61 59 53 -8
Oklahoma 70 70 71 1 63 64 65 2
Oregon 62 64 60 -2 55 58 59 4
Pennsylvania 60 58 64 4 51 52 51 0
South Carolina 24 27 20 -4 18 19 19 1
Utah 36 78 67 31 66 40 70 4
Virginia 73 70 70 -3 68 70 75 7
Washington 40 44 48 8 27 30 37 10
Number of states with achievement gains: 14 out of 23 (reading); 17 out of 24 (mathematics).

Note: The preferred grade for this table was 8th grade; however, in states that did not
consistently assess students in 8th-grade reading and mathematics, a nearby grade was used
(7th grade for Kansas (math), Kentucky (reading), Missouri (reading), and Washington, and 6th
grade for Alabama and Ohio).

Source: Consolidated State Performance Reports (for 23 states).

Exhibit B-3
Proportion of Students Performing At or Above Their State’s Proficient Level for Reading in 2002-03,
and Change from 2000-01, in 8th Grade or Another Middle School Grade, for Various Student Subgroups

            Low-Income      Black      Hispanic      White      LEP      Migrant      Disabilities

For each subgroup, the first column shows the percent proficient in 2002-03 and the second column
shows the change from 2000-01.
Arizona 34 -9 27 -7 62 -8 15 13 20 16 17 15
Connecticut 20 2 38 0
Delaware 54 7 16 -8 25 6
Illinois 45 1 45 4 45 -3 73 -3 25 5 14 -39 21 -2
Kansas 55 10 47 17 53 22 75 8 53 33 50 23 39 10
Kentucky 43 6 35 3 51 1 60 3 29 1 40 5 19 6
Maine 26 4 41 12 45 3 18 3 25 11 7 2
Massachusetts 39 12 31 9 75 11 16 -14 29 5
Mississippi 40 9 56 1 73 7
Missouri 18 3 9 -2 7 -3 6 1
Montana 55 0 18 -2 26 0
North Carolina 74 10 76 7 65 -6 92 2 41 -8 57 -13 50 2
Ohio 32 12 26 -8 30 -1
Oklahoma 68 11 57 9 63 8 84 8 41 5 74 28 22 -1
Oregon 40 0 32 -1 65 -1 22 -3 22 -2 17 -4
Pennsylvania 39 1 33 8 33 4 71 3 18 6 22 3 21 6
South Carolina 9 -3 8 -2 13 -6 29 -4
Virginia 35 -7 37 1
Washington 7 3 13 4 10 3
Number of states with achievement gains: 8 out of 10 (low-income); 9 out of 12 (Black); 7 out of 12
(Hispanic); 8 out of 12 (White); 10 out of 16 (LEP); 7 out of 13 (migrant); 11 out of 17 (disabilities).

Note: The preferred grade for this table was 8th grade; however, in states that did not consistently assess students in 8th-grade reading, a nearby
grade was used (6th grade for Ohio and 7th grade for Kentucky, Massachusetts, Missouri, and Washington). Blank cells indicate that the state did not
report disaggregated assessment data for that subgroup for all three years included in the analysis; however, all of these states have since developed
the capacity to report disaggregated data.

Source: Consolidated State Performance Reports (for 19 states).

Exhibit B-4
Proportion of Students Performing At or Above Their State’s Proficient Level for Mathematics in 2002-03,
and Change from 2000-01, in 8th Grade or Another Middle School Grade, for Various Student Subgroups

            Low-Income      Black      Hispanic      White      LEP      Migrant      Disabilities

For each subgroup, the first column shows the percent proficient in 2002-03 and the second column
shows the change from 2000-01.
Arizona 8 2 8 2 27 2 7 7 5 3
Connecticut 31 4 36 -3
Delaware 27 6 24 -2 12 5
Illinois 30 5 23 4 33 4 66 2 20 2 18 -9 14 1
Kansas 41 9 28 15 33 13 67 9 22 8 26 6 34 6
Kentucky 10 2 23 6 34 4 14 -5 16 3 9 6
Maine 6 3 10 -4 18 -3 12 3 7 0
Massachusetts 11 1 11 2 44 4 11 4 9 -1 8 1
Mississippi 31 10 49 7 65 9
Missouri 6 1 13 5 6 5
Montana 52 1 17 -7 23 3
North Carolina 70 5 69 6 68 0 90 3 52 -2 64 -8 46 2
Ohio 36 -3 24 -17 25 -10
Oklahoma 61 14 48 11 59 13 78 8 43 13 61 25 18 2
Oregon 63 24 31 -3 63 5 28 -32 24 -11 17 0
Pennsylvania 26 -3 19 4 23 1 59 0 23 2 20 -4 14 2
South Carolina 8 2 6 0 14 0 28 1 8 4 12 12 3 1
Utah 53 2 19 -10 44 13 27 -1
Virginia 65 8 39 10
Washington 6 2 8 4 5 2
Number of states with achievement gains: 9 out of 10 (low-income); 11 out of 12 (Black); 8 out of 12
(Hispanic); 10 out of 12 (White); 11 out of 18 (LEP); 8 out of 15 (migrant); 13 out of 17 (disabilities).

Note: The preferred grade for this table was 8th grade; however, in states that did not consistently assess students in 8th-grade mathematics, a
nearby grade was used (6th grade for Ohio and 7th grade for Kansas and Washington).

Source: Consolidated State Performance Reports (for 20 states).

Exhibit B-5
Change in the Achievement Gap: Difference Between the Proportion of Low-Income Students and
All Students Performing At or Above Their State’s Proficient Level, in 8th Grade or
Another Middle School Grade, 2000-01 to 2002-03

                          Gap in Reading                               Gap in Mathematics
                2000-01  2001-02  2002-03  Change in Gap    2000-01  2001-02  2002-03  Change in Gap
Delaware 22 18 16 -6 22 21 20 -2
Illinois 22 18 18 -4 25 23 22 -3
Kansas 20 16 16 -4 25 21 19 -6
Kentucky 16 16 14 -5
Missouri 17 15 14 -3 9 8 8 -1
Montana 16 16 16 0 18 18 18 0
North Carolina 14 13 -48 -62 15 14 12 -3
Oklahoma 14 4 3 -11 16 7 4 -12
Pennsylvania 22 25 +3 22 25 +3
South Carolina 15 15 11 -4 12 12 11 -1
Utah 15 17 +2
Number of states with gap reduction: 8 out of 10 (reading); 7 out of 10 (mathematics).

Source: Consolidated State Performance Reports (for 11 states).

Exhibit B-6
Reading Achievement on the Main NAEP, 1992 to 2005:
Average Scale Scores in 8th Grade by School Poverty Level

Exhibit B-7
Mathematics Achievement on the Main NAEP, 1990 to 2005:
Average Scale Scores in 8th Grade by School Poverty Level

[Line charts plotting average 8th-grade scale scores for low-poverty schools, all schools, and
high-poverty schools; the underlying scale scores and standard errors appear in Exhibits C-1 and C-7.]

* Indicates that the score is significantly different from the one in 2005 (p<.05).

Note: “High-poverty” was defined as schools with 76 to 100 percent of their students eligible for free or
reduced-price lunches, and “low-poverty” indicates that 0 to 25 percent were eligible for subsidized
lunches.

Source: National Center for Education Statistics, Main NAEP, unpublished tabulations.

Exhibit B-8
Reading Achievement on the Main NAEP, 1992 to 2005:
Average Scale Scores in 8th Grade by Race/Ethnicity

Exhibit B-9
Mathematics Achievement on the Main NAEP, 1990 to 2005:
Average Scale Scores in 8th Grade by Race/Ethnicity

[Line charts plotting average 8th-grade scale scores for white, Hispanic, and black students; the
underlying scale scores and standard errors appear in Exhibit C-8.]

* Indicates that the score is significantly different from the one in 2005 (p<.05).

Source: National Center for Education Statistics, Main NAEP.

Exhibit B-10
Reading Achievement on the Main NAEP, 1992 to 2005:
Percent Proficient in 8th Grade by Race/Ethnicity

Exhibit B-11
Mathematics Achievement on the Main NAEP, 1990 to 2005:
Percent Proficient in 8th Grade by Race/Ethnicity

[Line charts plotting the percentage of 8th-grade students at or above Proficient for white,
Hispanic, and black students; the underlying percentages and standard errors appear in Exhibit C-9.]

* Indicates that the score is significantly different from the one in 2005 (p<.05).

Source: National Center for Education Statistics, Main NAEP.

Exhibit B-12
Reading Achievement on the Trend NAEP, 1971 to 2004:
Average Scale Scores for 13-Year-Olds by Race/Ethnicity

Exhibit B-13
Mathematics Achievement on the Trend NAEP, 1973 to 2004:
Average Scale Scores for 13-Year-Olds by Race/Ethnicity

[Line charts plotting average scale scores for white, Hispanic, and black 13-year-olds; the
underlying scale scores and standard errors appear in Exhibit C-10.]

* Indicates that the score is significantly different from the one in 2004 (p<.05).

Source: National Center for Education Statistics, Trend NAEP.

Appendix C: Standard Error Tables

Exhibit C-1
Reading and Mathematics Achievement on the Main NAEP, 1990 to 2005:
Average Scale Scores by School Grade Level

4th Grade 8th Grade 12th Grade


Reading
1992 215 (1.0) 258 (1.0) 290 (0.7)
1994 212 (1.1) 257 (0.8) 286 (0.7)
1998 213 (1.2) 261 (0.8) 289 (0.7)
2000 211 (1.4)
2002 217 (0.5) 263 (0.5) 285 (0.7)
2003 216 (0.3) 261 (0.2)
2005 217 (0.2) 260 (0.2)
Mathematics
1990 212 (1.1) 262 (1.4) 294 (1.2)
1992 219 (0.8) 267 (1.0) 297 (1.0)
1996 222 (1.1) 269 (1.0) 303 (0.9)
2000 224 (1.0) 272 (0.9) 300 (1.1)
2003 234 (0.2) 276 (0.3)
2005 237 (0.2) 278 (0.2)
Source: National Center for Education Statistics, Main NAEP.

Exhibit C-2
Reading and Mathematics Achievement on the Main NAEP, 1990 to 2005:
Average Scale Scores in 4th Grade by School Poverty Level

High-Poverty Schools Low-Poverty Schools


(76-100% Eligible for Free (0-25% Eligible for Free
or Reduced-Price Lunches) or Reduced-Price Lunches)
Reading
1992 192 (3.0) 225 (1.7)
1994 182 (3.2) 225 (1.7)
1998 187 (3.1) 230 (1.5)
2000 183 (2.8) 230 (1.7)
2002 196 (0.7) 233 (0.5)
2003 194 (0.5) 232 (0.5)
2005 197 (0.4) 233 (0.3)
Mathematics
1990 194 (4.2) 218 (2.0)
1992 195 (2.8) 230 (1.4)
1996 209 (2.7) 235 (1.5)
2000 205 (1.2) 239 (1.4)
2003 216 (0.5) 247 (0.3)
2005 221 (0.3) 251 (0.3)
Source: National Center for Education Statistics, Main NAEP.

Exhibit C-3
Reading and Mathematics Achievement on the Main NAEP, 1990 to 2005:
Average Scale Scores in 4th Grade by Race/Ethnicity

Black Hispanic White


Reading
1992 191 (1.7) 194 (2.7) 223 (1.4)
1994 184 (1.8) 186 (3.6) 222 (1.3)
1998 192 (2.1) 192 (3.2) 223 (1.1)
2000 189 (1.9) 188 (3.1) 223 (1.2)
2002 198 (0.6) 199 (1.4) 227 (0.3)
2003 197 (0.4) 199 (0.6) 227 (0.2)
2005 199 (0.3) 201 (0.5) 228 (0.2)
Mathematics
1990 187 (1.9) 199 (2.4) 219 (1.1)
1992 192 (1.4) 201 (1.7) 227 (0.9)
1996 198 (1.6) 207 (1.9) 231 (1.1)
2000 203 (1.2) 207 (1.5) 233 (0.9)
2003 216 (0.4) 221 (0.4) 243 (0.2)
2005 220 (0.3) 225 (0.3) 246 (0.2)
Source: National Center for Education Statistics, Main NAEP.

Exhibit C-4
Reading and Mathematics Achievement on the Main NAEP, 1990 to 2005:
Percent Proficient in 4th Grade by Race/Ethnicity
Black Hispanic White
Reading
1992 8 (1.4) 10 (1.7) 33 (1.8)
1994 8 (0.9) 11 (2.1) 35 (1.5)
1998 10 (1.0) 12 (1.7) 36 (1.2)
2000 9 (1.0) 12 (1.9) 36 (1.4)
2002 12 (0.5) 14 (0.8) 39 (0.5)
2003 12 (0.4) 14 (0.5) 39 (0.3)
2005 12 (0.3) 15 (0.5) 39 (0.3)
Mathematics
1990 1 (0.5) 4 (1.6) 15 (1.7)
1992 2 (0.6) 5 (1.2) 22 (1.5)
1996 3 (0.7) 7 (1.4) 26 (1.5)
2000 4 (0.8) 7 (1.0) 30 (1.4)
2003 10 (0.3) 15 (0.5) 42 (0.3)
2005 13 (0.3) 19 (0.3) 47 (0.3)
Source: National Center for Education Statistics, Main NAEP.

Exhibit C-5
Reading and Mathematics Achievement on the Trend NAEP, 1971 to 2004:
Average Scale Scores by Student Age Group

9-Year-Olds 13-Year-Olds 17-Year-Olds


Reading
1971 208 (1.0) 255 (0.9) 285 (1.2)
1975 210 (0.7) 256 (0.8) 286 (0.8)
1980 215 (1.0) 258 (0.9) 285 (1.2)
1984 211 (0.7) 257 (0.5) 289 (0.6)
1988 212 (1.1) 257 (1.0) 290 (1.0)
1990 209 (1.2) 257 (0.8) 290 (1.1)
1992 211 (0.9) 260 (1.2) 290 (1.1)
1994 211 (1.2) 258 (0.9) 288 (1.3)
1996 212 (1.0) 258 (1.0) 288 (1.1)
1999 212 (1.3) 259 (1.0) 288 (1.3)
2004 219 (1.1) 259 (1.0) 285 (1.2)
Mathematics
1973 219 (0.8) 266 (1.1) 304 (1.1)
1978 219 (0.8) 264 (1.1) 300 (1.0)
1982 219 (1.1) 269 (1.1) 298 (0.9)
1986 222 (1.0) 269 (1.2) 302 (0.9)
1990 230 (0.8) 270 (0.9) 305 (0.9)
1992 230 (0.8) 273 (0.9) 307 (0.9)
1994 231 (0.8) 274 (1.0) 306 (1.0)
1996 231 (0.8) 274 (0.8) 307 (1.2)
1999 232 (0.8) 276 (0.8) 308 (1.0)
2004 241 (0.9) 281 (1.0) 307 (0.8)
Source: National Center for Education Statistics, Trend NAEP.

Exhibit C-6
Reading and Mathematics Achievement on the Trend NAEP, 1971 to 2004:
Average Scale Scores for 9-Year-Olds by Race/Ethnicity

Black Hispanic White


Reading
1971 170 (1.7) 214 (0.9)
1975 181 (1.2) 183 (2.2) 217 (0.7)
1980 189 (1.8) 190 (2.3) 221 (0.8)
1984 186 (1.1) 187 (2.1) 218 (0.8)
1988 189 (2.4) 194 (3.5) 218 (1.4)
1990 182 (2.9) 189 (2.3) 217 (1.3)
1992 185 (2.2) 192 (3.1) 218 (1.0)
1994 185 (2.3) 186 (3.9) 218 (1.3)
1996 191 (2.6) 195 (3.4) 220 (1.2)
1999 186 (2.3) 193 (2.7) 221 (1.6)
2004 200 (1.7) 205 (2.2) 226 (1.1)
Mathematics
1973 190 (1.8) 202 (2.4) 225 (1.0)
1978 192 (1.1) 203 (2.2) 224 (0.9)
1982 195 (1.6) 204 (1.3) 224 (1.1)
1986 202 (1.6) 205 (2.1) 227 (1.1)
1990 208 (2.2) 214 (2.1) 235 (0.8)
1992 208 (2.0) 212 (2.3) 235 (0.8)
1994 212 (1.6) 210 (2.3) 237 (1.0)
1996 212 (1.4) 215 (1.7) 237 (1.0)
1999 211 (1.6) 213 (1.9) 239 (0.9)
2004 224 (2.1) 230 (2.0) 247 (0.9)
Source: National Center for Education Statistics, Trend NAEP.

Exhibit C-7
Reading and Mathematics Achievement on the Main NAEP, 1990 to 2005:
Average Scale Scores in 8th Grade by School Poverty Level

High-Poverty Schools Low-Poverty Schools


(76-100% Eligible for Free (0-25% Eligible for Free
or Reduced-Price Lunches) or Reduced-Price Lunches)
Reading
1992 237 (3.5) 267 (1.5)
1994 233 (3.5) 264 (1.1)
1998 240 (1.8) 270 (1.3)
2002 240 (1.1) 274 (0.7)
2003 239 (0.9) 273 (0.4)
2005 240 (0.6) 273 (0.3)
Mathematics
1990 251 (7.9) 270 (2.4)
1992 240 (3.7) 276 (1.4)
1996 255 (6.8) 279 (1.3)
2000 246 (2.3) 287 (1.3)
2003 251 (0.7) 290 (0.5)
2005 254 (0.6) 293 (0.4)
Source: National Center for Education Statistics, Main NAEP.

Exhibit C-8
Reading and Mathematics Achievement on the Main NAEP, 1990 to 2005:
Average Scale Scores in 8th Grade by Race/Ethnicity

Black Hispanic White


Reading
1992 236 (1.8) 238 (1.7) 265 (1.2)
1994 235 (1.8) 239 (1.6) 265 (1.0)
1998 242 (1.2) 241 (1.7) 268 (1.0)
2002 244 (0.8) 245 (0.8) 271 (0.5)
2003 244 (0.5) 244 (0.7) 270 (0.2)
2005 242 (0.4) 245 (0.4) 269 (0.2)
Mathematics
1990 236 (2.8) 245 (4.4) 269 (1.4)
1992 236 (1.3) 247 (1.2) 276 (1.1)
1996 239 (1.9) 249 (1.9) 279 (1.2)
2000 243 (1.3) 252 (1.4) 283 (0.9)
2003 252 (0.5) 258 (0.6) 287 (0.3)
2005 254 (0.4) 261 (0.4) 288 (0.2)
Source: National Center for Education Statistics, Main NAEP.

Exhibit C-9
Reading and Mathematics Achievement on the Main NAEP, 1990 to 2005:
Percent Proficient in 8th Grade by Race/Ethnicity
Black Hispanic White
Reading
1992 8 (1.1) 11 (1.3) 33 (1.4)
1994 9 (1.2) 12 (1.3) 33 (1.2)
1998 11 (1.6) 13 (1.0) 37 (1.3)
2002 13 (0.7) 14 (0.8) 39 (0.7)
2003 12 (0.4) 14 (0.6) 39 (0.3)
2005 11 (0.4) 14 (0.4) 37 (0.3)
Mathematics
1990 5 (1.1) 7 (2.1) 18 (1.4)
1992 2 (0.7) 6 (1.0) 25 (1.2)
1996 4 (0.7) 7 (1.2) 29 (1.4)
2000 5 (0.7) 8 (1.0) 33 (1.1)
2003 7 (0.3) 11 (0.5) 36 (0.4)
2005 8 (0.3) 13 (0.4) 37 (0.3)
Source: National Center for Education Statistics, Main NAEP.

Exhibit C-10
Reading and Mathematics Achievement on the Trend NAEP, 1971 to 2004:
Average Scale Scores for 13-Year-Olds by Race/Ethnicity

Black Hispanic White


Reading
1971 222 (1.2) 261 (0.7)
1975 226 (1.2) 232 (3.0) 262 (0.7)
1980 233 (1.5) 237 (2.0) 264 (0.7)
1984 236 (1.2) 240 (2.0) 263 (0.6)
1988 243 (2.4) 240 (3.5) 261 (1.1)
1990 241 (2.2) 238 (2.3) 262 (0.9)
1992 238 (2.3) 239 (3.5) 266 (1.2)
1994 234 (2.4) 235 (1.9) 265 (1.1)
1996 234 (2.6) 238 (2.9) 266 (1.0)
1999 238 (2.4) 244 (2.9) 267 (1.2)
2004 244 (2.0) 242 (1.6) 266 (1.0)
Mathematics
1973 228 (1.9) 239 (2.2) 274 (0.9)
1978 230 (1.9) 238 (2.0) 272 (0.8)
1982 240 (1.6) 252 (1.7) 274 (1.0)
1986 249 (2.3) 254 (2.9) 274 (1.3)
1990 249 (2.3) 255 (1.8) 276 (1.1)
1992 250 (1.9) 259 (1.8) 279 (0.9)
1994 252 (3.5) 256 (1.9) 281 (0.9)
1996 252 (1.3) 256 (1.6) 281 (0.9)
1999 251 (2.6) 259 (1.7) 283 (0.8)
2004 262 (1.6) 265 (2.0) 288 (0.9)
Source: National Center for Education Statistics, Trend NAEP.

Exhibit C-11
Percentage of Identified Schools That Reported Needing and Receiving Various Types
of Technical Assistance, 2003-04 to 2004-05

Columns: (1) percent of non-identified schools that needed assistance (n = 881); (2) percent of
identified schools that needed assistance (n = 430); (3) percent of identified schools needing
assistance that received it (n = 212 to 343); (4) percent of identified schools reporting that the
assistance received, when needed, was sufficient (n = 147 to 313). Entries are percentages with
standard errors in parentheses.
Improve quality of teachers’ professional development: 52.6 (3.2) | 79.7 (3.5)* | 91.4 (2.7) | 73.6 (8.4)
Get parents more engaged in their child’s education: 46.1 (3.1) | 74.2 (3.8)* | 51.2 (6.8) | 53.0 (7.8)
Address instructional needs of students with IEPs: 49.4 (3.0) | 70.8 (4.0)* | 72.3 (7.7) | 69.2 (6.5)
Identify effective curricula, instructional strategies, or school reform models: 54.3 (3.1) | 69.6 (5.1)* | 92.5 (1.9) | 72.5 (8.4)
Improve students’ test-taking skills: 32.0 (2.6) | 69.9 (4.4)* | 70.9 (6.0) | 70.7 (9.4)
Analyze assessment results to understand students’ strengths and weaknesses: 40.8 (3.1) | 67.7 (4.8)* | 92.4 (3.0) | 93.8 (1.7)
Identify or develop detailed curriculum guides, frameworks, pacing sequences, and/or model lessons aligned with state standards: 49.3 (2.7) | 62.2 (5.5)* | 92.6 (2.0) | 66.6 (8.0)
Develop or revise school improvement plan: 27.5 (3.0) | 61.6 (5.3)* | 89.5 (5.2) | 89.1 (4.7)
Recruit, retain, or assign teachers in order to staff all classes with a teacher who is “highly qualified”: 27.6 (2.3) | 62.1 (5.4)* | 76.3 (5.3) | 79.7 (6.1)
Address problems of student truancy, tardiness, and discipline, and of dropouts: 36.4 (2.7) | 56.7 (5.0)* | 68.2 (6.0) | 41.9 (8.5)
Implement the provisions of NCLB relating to “qualified” paraprofessionals: 37.8 (2.8) | 52.4 (5.7)* | 85.8 (3.9) | 95.0 (1.5)
Address instructional needs of LEP students: 36.6 (3.0) | 49.3 (5.4)* | 69.2 (9.9) | 70.8 (8.0)
*Indicates statistically significant difference between identified and non-identified schools (p<.05).
Source: National Longitudinal Study of NCLB, Principal Survey.
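Throughout these exhibits, values in parentheses are standard errors, and an asterisk flags a difference between identified and non-identified schools that is significant at p < .05. A minimal sketch of how such a comparison can be made from two percentage estimates and their standard errors (an illustrative two-sample z-test; the study's actual procedure may differ, for example by adjusting for the complex survey design):

```python
import math

def z_test(p1, se1, p2, se2):
    """Two-sample z-test for the difference between two independent
    percentage estimates, each reported with its standard error."""
    z = (p1 - p2) / math.sqrt(se1 ** 2 + se2 ** 2)
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Example using the first row of Exhibit C-11:
# identified 79.7 (3.5) vs. non-identified 52.6 (3.2).
z, p = z_test(79.7, 3.5, 52.6, 3.2)
print(round(z, 2), p < 0.05)  # prints: 5.71 True
```

With a z statistic this large, the difference would be flagged with an asterisk under the p < .05 criterion used in the exhibits.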
Exhibit C-12
Percentage of Schools Reporting Major Focus
on Various School Improvement Strategies, 2004-05

Identified Schools (n=430) | Non-Identified Schools (n=881)
Using student achievement data to inform instruction and school improvement: 82.4 (3.5) | 66.7 (2.8)*
Providing additional instruction to low-achieving students: 77.6 (3.9) | 59.7 (2.7)*
Aligning curriculum and instruction with standards and/or assessments: 72.2 (4.5) | 70.0 (2.6)
Implementing new instructional approaches or curricula in reading/language arts/English: 61.1 (4.4) | 49.0 (2.6)*
Increasing the intensity, focus, and effectiveness of professional development: 59.8 (5.1) | 41.8 (2.6)*
Implementing new instructional approaches or curricula in mathematics: 59.4 (4.8) | 40.8 (2.6)*
Restructuring the school day to teach core content areas in greater depth: 51.9 (4.1) | 31.4 (2.4)*
Providing extended-time instructional programs: 51.4 (4.7) | 30.8 (2.6)*
Implementing strategies for increasing parents’ involvement in their children’s education: 32.1 (4.4) | 13.4 (1.6)*
Increasing instructional time for all students: 26.0 (3.9) | 12.9 (1.8)*
*Indicates statistically significant difference between identified and non-identified schools (p<.05).
Source: National Longitudinal Study of NCLB, Principal Survey.
Exhibit C-13
Change in Instructional Time Per Day at Elementary Schools,
by Subject Area, 2003-04 to 2004-05

Columns, left to right: Identified Schools (n=430): Decreased More Than 30 Minutes, Increased More Than 30 Minutes; Non-Identified Schools (n=881): Decreased More Than 30 Minutes, Increased More Than 30 Minutes
Reading 0.2 (0.2) 29.7 (4.9)* 0.0 (0.0) 13.1 (2.3)*
Mathematics 0.1 (0.1) 16.7 (3.1)* 0.0 (0.0) 8.3 (1.9)*
Science 1.3 (0.6) 4.8 (2.4) 0.4 (0.2) 3.6 (1.2)
Social studies 2.7 (1.2) 1.4 (0.7) 0.6 (0.2) 0.6 (0.4)
Art/music 3.1 (1.2) 1.3 (0.8) 1.2 (0.4) 0.1 (0.1)
Physical education/health 2.4 (1.2) 1.8 (1.0) 1.1 (0.5) 0.3 (0.1)
Other 0.6 (0.6) 3.5 (2.6) 2.9 (1.1) 0.0 (0.0)
*Indicates significant difference between identified and non-identified schools (p<.05).
Source: National Longitudinal Study of NCLB, Principal Survey.
Exhibit C-14
Percentage of Identified Title I Schools Experiencing Various Types of Interventions
Since Identification for Improvement, 2004-05

Percent of Percent of Percent of Percent of


Schools in Schools in Schools in Schools in
Year 1 of Year 2 of Corrective Restructuring
Improvement Improvement Action
(n=199) (n=74) (n=50) (n=76)

Actions Required for All Identified Schools


Parents were notified of schools’ improvement 88.6 (9.7) 95.8 (6.3) 95.7 (4.1) 100.0 (0.0)
status
District or state developed a joint improvement 80.8 (6.4) 73.2 (8.8) 92.0 (4.8) 83.1 (4.3)
plan with the school
Students were offered the option to transfer to a 81.7 (4.9) 74.7 (10.9) 87.6 (7.6) 97.6 (2.1)
higher-performing school, with transportation
provided
Action Required for Identified Schools That Miss AYP After Identification
Eligible students were offered supplemental 45.7 (7.2) 90.1 (5.7) 93.8 (3.2) 100.0 (0.0)
educational services from a state-approved
provider
Corrective Actions
Implemented a new research-based curriculum or 48.2 (7.0) 65.8 (9.5) 90.0 (3.3) 70.4 (8.3)
instructional program
Significantly decreased management authority at 3.6 (1.4) 4.7 (2.3) 27.4 (11.7) 25.8 (7.5)
the school level
Appointed outside expert to advise the school 30.2 (6.8) 34.2 (9.5) 58.0 (11.0) 57.2 (7.6)
Extended length of school day 24.0 (6.7) 28.7 (7.7) 41.6 (11.0) 31.1 (8.2)
Extended length of school year 9.0 (3.2) 15.4 (6.5) 32.0 (11.6) 16.2 (5.7)
Restructured internal organization of the school 11.6 (5.2) 22.5 (9.9) 21.1 (6.1) 33.0 (7.4)
Replaced school staff members relevant to 1.6 (0.7) 16.7 (9.7) 5.1 (2.8) 18.5 (6.8)
school’s low performance
Restructuring Interventions
Replaced the entire staff 0.1 (0.1) 1.3 (1.0) 0.0 (0.0) 1.6 (1.5)
Reopened the school as a public charter school 0.0 (0.0) 0.0 (0.0) 0.0 (0.0) 1.9 (1.6)
Entered into a contract with a private entity to 0.4 (0.2) 0.7 (0.7) 0.0 (0.0) 1.7 (1.5)
manage the school
Operation of school turned over to state 1.7 (1.2) 0.0 (0.0) 1.4 (1.4) 6.2 (4.5)
Appointed new principal 21.5 (7.1) 20.5 (5.8) 18.2 (4.8) 24.1 (6.6)
Source: National Longitudinal Study of NCLB, Principal Survey.
Exhibit C-15
Percentage of Districts Taking Various Actions in Response to
Being Identified for Improvement, 2004-05

Offered/required specific professional development for teachers 79.8 (11.4)
Distributed test preparation materials to some or all schools 67.2 (11.8)
Increased district monitoring of instruction and student performance at school sites 61.4 (15.6)
Offered/required specific professional development for principals 58.5 (15.5)
Reallocated fiscal resources to target specific needs (e.g., particular groups of students, subjects, or schools) 51.1 (14.6)
Implemented a district-wide curriculum in reading 39.1 (13.8)
Developed or revised district content standards 23.9 (9.5)
Reorganized the district office staff to increase efficiency or focus on instruction 22.6 (9.2)
Implemented a district-wide curriculum in mathematics 17.4 (6.8)
Hired a consultant to advise district administrators on effective strategies 10.9 (4.5)
Created smaller schools, or schools-within-schools 11.1 (4.8)
Changed the budget allocation formula for schools 10.4 (5.1)
Implemented new personnel procedures for hiring or assigning principals and teachers 7.8 (3.4)
Source: National Longitudinal Study of NCLB, District Survey (n=75 districts).
Exhibit C-16
Number of Schools Where Title I School Choice and Supplemental Services Were Offered,
and Number of Participating Students, 2002-03 to 2004-05

Columns, left to right: Number of Schools (School Choice, Supplemental Services); Number of Participating Students (School Choice, Supplemental Services)
2002-03 5,066 (439) 750 (170) 18,039 (4,680) 41,819 (9,411)
2003-04 4,624 (878) 2,529 (426) 37,599 (6,248) 233,164 (15,515)
2004-05 6,216 (NA) 45,398 (8,869)
Sources: Study of Title I Accountability Systems and School Improvement Efforts, District Survey
(estimates of participating schools in 2002-03 and 2003-04, estimates of participating students
in 2002-03); National Longitudinal Study of NCLB, District and Principal Surveys (estimates of
participating schools in 2004-05, estimates of participating students in 2003-04 and 2004-05).

Note: The estimated number of participating schools in 2004-05 is based on two data sources: a
count from the SSI-NCLB study of the number of Title I schools identified for improvement in
2004-05, and an estimate from the NLS-NCLB of the proportion of Title I identified schools that
reported that they were required to offer Title I school choice. Because this estimate is based on
a combination of two data sources, a standard error cannot be calculated.
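The combined estimate described in the note above reduces to simple arithmetic: a school count from one study multiplied by a proportion from another. A sketch follows; the input values are placeholders for illustration, not the actual SSI-NCLB or NLS-NCLB figures.

```python
# Hypothetical inputs -- placeholders, not the report's actual values.
identified_title1_schools = 9000        # count of identified Title I schools (study A)
share_required_to_offer_choice = 0.70   # share required to offer choice (study B)

# Multiplying a count from one data source by a proportion from another
# yields a point estimate; because the estimate combines two independent
# data sources, a standard error cannot be calculated for it.
estimated_schools = round(identified_title1_schools * share_required_to_offer_choice)
print(estimated_schools)  # prints: 6300 with these placeholder inputs
```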
Exhibit C-17
Share of Students Receiving Supplemental Services, by Type of Provider, 2003-04

Private providers 59.1 (4.1)
Faith-based providers 0.5 (0.1)
Community-based providers 12.6 (2.7)
National for-profit companies 33.8 (3.3)
Other for-profit companies 12.3 (3.3)
Districts and public schools 39.4 (4.2)
Charter schools 0.5 (0.4)
Colleges and universities 0.5 (0.0)
Other 0.5 (0.4)
Source: National Longitudinal Study of NCLB, District Survey (n=71 districts).
Exhibit C-18
District Strategies for Communicating with Parents
About Title I School Choice and Supplemental Services Options, 2004-05

Columns, left to right: School Choice (n=170 districts): percent of districts, percent of students; Supplemental Services (n=101 districts): percent of districts, percent of students
Written notification 59.6 (9.9) 79.6 (3.7) 88.5 (5.9) 91.9 (3.8)
Individual meetings with interested parents 36.9 (8.3) 61.7 (3.8) 72.3 (8.6) 74.9 (5.5)
Notices in district or school newsletters 28.3 (7.4) 53.4 (4.3) 60.6 (9.9) 66.3 (5.7)
Enrollment fairs or open houses to provide information about alternate schools and providers 13.8 (4.0) 38.0 (4.5) 48.5 (11.1) 69.8 (5.6)
Notices in public newspapers 18.2 (6.1) 39.3 (4.4) 20.1 (9.0) 41.6 (6.2)
Public service announcements 7.2 (2.5) 29.0 (4.5) 16.7 (5.6) 36.6 (5.9)
Outreach through a local community partner (e.g., Parent Information & Resource Center) 7.2 (3.0) 18.2 (2.6) 16.6 (7.4) 37.8 (5.9)
Other 19.3 (9.7) 20.7 (3.2) 24.0 (8.2) 35.2 (5.6)
Source: National Longitudinal Study of NCLB, District Survey.
Exhibit C-19
Percentage of Teachers Reporting that They Are Considered
Highly Qualified Under NCLB, 2004-05

n Highly Qualified Not Highly Qualified Don’t Know
Elementary teachers 4,059 75.1 (1.8) 2.1 (0.3) 22.9 (1.8)
Secondary English teachers 1,787 73.7 (2.2) 5.8 (0.9) 20.4 (2.2)
Secondary math teachers 1,627 67.9 (2.6) 8.0 (1.2) 24.1 (2.5)
Special education teachers 1,158 52.3 (2.4) 14.5 (2.2) 29.2 (2.3)
Source: National Longitudinal Study of NCLB, Teacher Survey.
Exhibit C-20
Percentage of Teachers Reporting that They Are Considered Highly Qualified
under NCLB, 2004-05, by School Improvement Status, 2004-05

Highly Qualified Not Highly Qualified Don’t Know
Elementary teachers (n=4,051)
School not identified for improvement 75.4 (2.0) 1.5 (0.3) 23.1 (2.0)
School identified for improvement (Year 1 or Year 2) 70.8 (4.3) 4.7 (1.3) 24.5 (4.3)
School identified for corrective action 77.0 (4.3) 7.6 (3.0) 15.4 (4.6)
School identified for restructuring 76.7 (3.9) 6.4 (2.4) 16.9 (2.8)
Secondary classes (n=3,218)
School not identified for improvement 72.4 (2.1) 4.4 (0.9) 23.2 (2.3)
School identified for improvement (Year 1 or Year 2) 69.1 (3.2) 11.7 (2.1) 19.2 (2.4)
School identified for corrective action 71.0 (4.8) 7.6 (2.4) 21.4 (4.6)
School identified for restructuring 62.3 (4.3) 14.6 (3.3) 23.1 (3.4)
Source: National Longitudinal Study of NCLB, Teacher Survey.
Exhibit C-21
Reasons Why Teachers Were Designated as Not Highly Qualified Under NCLB, 2004-05

Columns, left to right: Elementary Teachers (n=135); Secondary English Teachers (n=152); Secondary Mathematics Teachers (n=243); Special Education Teachers (n=125)
No bachelor’s degree: 0.0 (0.0) | 0.2 (0.2) | 1.9 (1.0) | 0.0 (0.0)
Lack full certification or licensure: 35.3 (6.4) | 15.7 (3.7) | 23.2 (4.2) | 31.4 (7.7)
Have not demonstrated subject knowledge and teaching skills in the basic elementary curriculum: 13.6 (5.3) | n/a | n/a | 1.0 (0.8)
Have not demonstrated subject matter competency in English: n/a | 17.5 (3.8) | n/a | 25.8 (7.0)
Have not demonstrated subject matter competency in mathematics: n/a | n/a | 27.2 (5.8) | 31.3 (7.1)
Have not demonstrated subject matter competency in another subject that they teach: n/a | 30.5 (7.5) | 26.7 (5.6) | 22.1 (7.0)
Other: 42.3 (6.9) | 16.6 (6.4) | 16.2 (4.7) | 24.5 (6.7)
Don’t know: 16.3 (5.0) | 5.0 (1.9) | 3.9 (1.2) | 3.5 (2.3)
Source: National Longitudinal Study of NCLB, Teacher Survey.
Exhibit C-22
Percentage of Secondary Teachers Who Are Novice Teachers or Lack a College Major in the Subject
That They Teach, by Self-Reported Highly Qualified Status, 2004-05

Highly Qualified (n=1,075 to 1,255) | Not Highly Qualified (n=138 to 152) | Don’t Know (n=313 to 350)
English teachers with fewer than 3 years of teaching experience 7.2 (1.2) 17.7 (4.8) 16.1 (3.4)
Math teachers with fewer than 3 years of teaching experience 8.8 (1.5) 10.7 (3.3) 16.4 (3.8)
English teachers who do not have a major in English 45.8 (2.7) 74.9 (7.6) 48.1 (4.5)
Math teachers who do not have a major in mathematics 59.1 (2.7) 85.2 (4.7) 59.5 (5.6)
Source: National Longitudinal Study of NCLB, Teacher Survey.
Exhibit C-23
Percentage of Teachers Participating in Professional Development Focused on
Instructional Strategies for Reading and Mathematics, 2003-04

Columns, left to right: Professional Development in Teaching Reading: Elementary Teachers (n=4,007), Secondary English Teachers (n=1,740); Professional Development in Teaching Mathematics: Elementary Teachers (n=3,994), Secondary Mathematics Teachers (n=1,580)
More than 24 hours 19.6 (1.3) 21.9 (1.8) 9.1 (0.9) 16.1 (1.6)
6 to 24 hours 38.9 (1.3) 35.5 (1.8) 25.6 (1.2) 30.4 (2.1)
1 to 5 hours 31.2 (1.9) 30.3 (2.0) 36.7 (1.6) 30.9 (2.5)
None 10.4 (1.3) 12.2 (1.3) 28.6 (1.9) 22.6 (2.1)
Source: National Longitudinal Study of NCLB, Teacher Survey.
Exhibit C-24
Percentage of Teachers Participating in Professional Development Focused on
In-Depth Study of Topics in Reading and Mathematics, 2003-04

Columns, left to right: In-Depth Study of Reading Topics: Elementary Teachers (n=3,982), Secondary English Teachers (n=1,719); In-Depth Study of Mathematics Topics: Elementary Teachers (n=3,950), Secondary Mathematics Teachers (n=1,565)
More than 24 hours 12.8 (1.0) 15.9 (1.8) 6.2 (0.8) 10.4 (1.2)
6 to 24 hours 28.0 (1.3) 23.6 (1.6) 13.6 (1.1) 15.4 (1.7)
1 to 5 hours 32.4 (1.2) 30.4 (2.0) 29.1 (1.3) 25.5 (1.8)
None 26.8 (1.3) 30.1 (2.2) 51.0 (1.7) 48.7 (2.4)
Source: National Longitudinal Study of NCLB, Teacher Survey.
Endnotes
1
U.S. Department of Education, Budget Service.
2
Jay Chambers, Joanne Lieberman, Tom Parrish, Daniel Kaleba, James Van Campen, and Stephanie Stullich
(2000). Study of Education Resources and Federal Funding: Final Report. Washington, DC: U.S. Department
of Education, Office of the Under Secretary, Planning and Evaluation Service. The estimate of the percentage
of public schools receiving Title I funds was updated based on the number of Title I schools reported on
Consolidated State Performance Reports for 2002-03 divided by the total number of public elementary and
secondary schools in 2001-02 from the NCES Common Core of Data.
3
Beth Sinclair (forthcoming). State ESEA Title I Participation Information for 2002-03: Final Summary Report.
Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service.
4
Beth Sinclair (forthcoming). State ESEA Title I Participation Information for 2002-03: Final Summary Report.
Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service.
5
U.S. Department of Education, National Center for Education Statistics, unpublished data on averaged
freshman graduation rates. U.S. Department of Education, Policy and Program Studies Service analysis of
state-reported graduation rates from Consolidated State Performance Reports and State Education Agency
websites. State-reported rates for 2003 or 2004 were used for 16 states where 2002 rates were not available.
6
Consolidated State Performance Reports, 2003-04.
7
Source: U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study
of State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
8
Consolidated State Performance Reports, 2003-04.
9
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
10
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
11
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
12
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
13
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
14
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
15
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality under No Child Left Behind.
16
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
17
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
18
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
19
Center on Education Policy (2005). From the Capital to the Classroom: Year 3 of the No Child Left Behind
Act. Washington, DC: Center on Education Policy.
20
Christine Padilla, Katrina Woodworth, Andrea Lash, Patrick M. Shields, and Katrina G. Laguarda (2005).
Evaluation of Title I Accountability Systems and School Improvement Efforts: Findings From 2002-03.
Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service.
21
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
22
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
23
Policy and Program Studies Service monthly reviews of State Education Agency Web sites, conducted by
Westat from May 2003 through May 2005.
24
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
25
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
26
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
27
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
28
Consolidated State Performance Reports, 2003-04.
29
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
30
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
31
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
32
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
33
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
34
Section 1501 of the Elementary and Secondary Education Act, as reauthorized by the No Child Left Behind
Act.
35
The National Longitudinal Study of No Child Left Behind is being conducted by the RAND Corporation in
collaboration with the American Institutes for Research and the National Opinion Research Center. Response
rates for the 2004-05 surveys that have been completed are 96 percent for the school district survey,
89 percent for the principal survey, 84 percent for the teacher surveys, and 87 percent for the Title I
paraprofessional survey. Other components of the study, including surveys of parents and supplemental
service providers, are still in progress.
36
The Study of State Implementation of Accountability and Teacher Quality Under No Child Left Behind is
being conducted by the American Institutes for Research in collaboration with the Council of Chief State
School Officers and REDA International. Interviews were completed for all 50 states plus the District of
Columbia and Puerto Rico.
37
The Study of Title I Accountability Systems and School Improvement Efforts is being conducted by SRI
International. Survey response rates ranged from 88 to 90 percent for the school district survey and from 83
to 85 percent for the principal survey.
38
The Case Studies of the Early Implementation of Supplemental Educational Services are being conducted
by Policy Studies Associates.
39
Beth Sinclair (forthcoming). State ESEA Title I Participation Information for 2002-03: Final Summary Report.
Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service.
40
A separate program, Title I Part D, serves students in state institutions for neglected and delinquent
children and youth.
41
Beth Sinclair (forthcoming). State ESEA Title I Participation Information for 2002-03: Final Summary Report.
Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service. U.S. Department of Education, National Center for Education Statistics.
Common Core of Data (CCD), “State Nonfiscal Survey of Public Elementary/Secondary Education, 2001-02.”
Ungraded students are not included in these calculations; they account for one percent of both Title I
participants and all public school students.
42
Beth Sinclair (forthcoming). State ESEA Title I Participation Information for 2002-03: Final Summary Report.
Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service.
43
In states that did not assess students in 4th-grade reading, a nearby grade was used (3rd grade for Arizona,
Delaware, the District of Columbia, Hawaii, Illinois, Indiana, Maryland, Minnesota, Missouri, New Hampshire,
Oregon, Tennessee, and Virginia, and 5th grade for Kansas, Oklahoma, and Pennsylvania).
44
State participation in NAEP was not required prior to NCLB, and as a result, NAEP results for years prior to
2003 are based on a subset of the states. For example, for the 4th-grade NAEP reading assessment, 39 states
participated in 1998 and 44 states participated in 2002, compared with 51 states in 2003.
45
NAEP changed its approach to testing accommodations for students with disabilities and LEP students during
the period examined in this report. Before 1996, no testing accommodations were provided to such students
participating in NAEP assessments. Beginning in 1996 for the mathematics assessment and 1998 for the
reading assessment, the Main NAEP was administered to two reporting samples—“accommodations
permitted” and “accommodations not permitted.” Beginning in 2002 for reading and 2003 for mathematics,
NAEP administered the Main NAEP test with “accommodations permitted” as its only administration
procedure. For the National Assessment of Title I interim report, we report Main NAEP results with no
accommodations up through 1994 and with accommodations permitted thereafter. For the Trend NAEP, 2004
was the first year that accommodations were permitted, but a sample was also assessed with “no
accommodations permitted”; we present Trend NAEP results with no accommodations for the full time period
examined in this report.
46
National Institute of Statistical Sciences, Education Statistics Services Institute (2005). Task Force on
Graduation, Completion, and Dropout Indicators (NCES 2005-105). Washington, DC: U.S. Department of
Education, National Center for Education Statistics.
47
U.S. Department of Education, Policy and Program Studies Service analysis of state-reported graduation
rates from Consolidated State Performance Reports (based on reports that have not yet been verified) and
State Education Agency Web sites.
48
Marilyn Seastrom, Lee Hoffman, Chris Chapman, and Robert Stillwell (2005). The Averaged Freshman
Graduation Rate for Public High Schools From the Common Core of Data: School Years 2001-02 and 2002-03
(NCES 2006-601). Washington, DC: U.S. Department of Education, National Center for Education Statistics.
49
For 16 states, state-reported graduation rates for 2002 were not available and 2003 or 2004 graduation
rates were used instead; these 16 states are: Alaska, California, Connecticut, Idaho, Massachusetts,
Minnesota, Mississippi, Nevada, New Hampshire, North Carolina, Oregon, Rhode Island, South Carolina, South
Dakota, Tennessee, and West Virginia.
50
OESE review of Consolidated State Performance Reports and State Education Agency Web sites.
51
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
52
U.S. Department of Education, Office of Elementary and Secondary Education, review of Consolidated State
Performance Reports and State Education Agency Web sites.
53
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
54
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
55
U.S. Department of Education, Office of English Language Acquisition, Language Enhancement, and
Academic Achievement for Limited English Proficient Students (2003). Part II: Final Non-Regulatory Guidance
on the Title III State Formula Grant Program - Standards, Assessments, and Accountability. Available online at
www.ed.gov/programs/nfdp/NRG1.2.25.03.doc.
56
Ellen Schiller, Ellen Bobronnikov, Fran O’Reilly, Cristofer Price, and Robert St. Pierre (2005), The Study of
State and Local Implementation and Impact of the Individuals with Disabilities Education Act: Final 2nd
Interim Report (2002-2003 School Year), Bethesda, MD: Abt Associates, available at
www.abt.sliidea.org/Reports/complete%20Interim%20Report%2003-05%20for%20web.pdf (see Exhibit 6.2 on
page 105). The Study of State and Local Implementation and Impact of the Individuals with Disabilities
Education Act (SLIIDEA) focuses on policies, practices and resources used to implement the goals set forth in
IDEA. The SLIIDEA study is collecting data over a 6-year period through mail surveys at the state, district, and
school levels as well as a set of case studies. The study sample includes all 50 states and the District of
Columbia and a nationally representative sample of 959 school districts and 4,448 schools within those
districts. The school response rate was 74 percent.
57
SRI International (2004). Facts from OSEP's National Longitudinal Studies: Standardized Testing Among
Secondary School Students with Disabilities. Menlo Park, CA: Author. Available at
www.nlts2.org/pdfs/fact_sheet4%20_05_04.pdf. These data are from the National Longitudinal Transition
Study 2 (NLTS2), a study of the experiences of a national sample of students receiving special education who
were 13 to 16 years of age in 2000 as they move from secondary school into adult roles. The NLTS2 sample
draws from a nationally representative sample of LEAs and a sample of state-supported special schools. The
initial sample was approximately 11,500 students. For the data reported here, almost 6,000 parents and
guardians completed phone interviews in 2005, for a 70 percent response rate. Almost 3,000 youth interviews also
took place. More information on the methodology for this study is presented in Mary Wagner, Camille Marder,
Jose Blackorby, Renee Cameto, Lynn Newman, Phyllis Levine, Elizabeth Davies-Mercier, et al. (2003), The
Achievements of Youth With Disabilities During Secondary School: A Report From the National Longitudinal
Transition Study-2, Menlo Park, CA: SRI International.
58
Data for all subgroups were not available for Louisiana and Utah. Arkansas and Kentucky did not report
participation rates for Native American/Alaska Native students. Alabama did not report on the participation of
limited English proficient students. Vermont did not report on the participation of Hispanic students. Arizona,
Arkansas, and Indiana did not report on the participation of low-income students. Minnesota and New Mexico
reported 100 percent participation for all students, but did not report participation rates for subgroups. The
subgroup participation rates in Minnesota and New Mexico were assumed to be 100 percent, since the
participation rate of all students was 100 percent in both states.
59
U.S. Department of Education, Office of English Language Acquisition (2005). Biennial Evaluation Report to
Congress on the Implementation of the State Formula Grant Program, 2002-2004, English Language
Acquisition, Language Enhancement and Academic Achievement Act (ESEA, Title III, Part A), Table 2.3.
Washington, D.C: U.S. Department of Education.
60
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
61
Education Week, Quality Counts, 2003 through 2005 annual reports, Standards and accountability tables.
62
Data were unavailable for seven states. Additionally, one state did not report on all three trend indicators,
one state did not report on presenting results for each classroom, and one state did not report on presenting
results for trends in subgroups within a school.
63
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
64
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
65
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
66
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
67
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
68
Another 503 identified schools were of other types. The sum across levels is somewhat less than the total
number of identified schools reported earlier due to some missing data on school grade level. U.S.
Department of Education, Policy and Program Studies Service, unpublished data from the Study of State
Implementation of Accountability and Teacher Quality Under No Child Left Behind.
69
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
70
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
71
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
72
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
73
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
74
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
75
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
76
Data in Exhibit 34 are based on data provided by 33 states. Data on the percentage of schools missing AYP
that did not meet achievement targets for all students in reading or math are also available for a larger set of
states (48), and that figure is consistent with the Exhibit 34 figure for this group (34 percent based on the 48
states vs. 38 percent based on the 33 states).
77
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind. Findings involving
subgroups are based on data that were available from 33 states; because the number of reporting states
differs across indicators, findings vary somewhat.
78
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind. Findings are based
on data from 48 states.
79
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind. Data for calculating
percentages for limited English proficient students and students with disabilities were not available.
80
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
81
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
82
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
83
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind. Findings on
elementary, middle, and high schools meeting AYP for the other academic indicator are based on data that
were available from 36 states.
84
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
85
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
86
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
87
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
88
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind. Patrick M. Shields, Camille Esch, Andrea Lash, Christine Padilla,
Katrina Woodworth, Katrina G. Laguarda, and Nicholas Winter (2004). Evaluation of Title I Accountability
Systems and School Improvement Efforts (TASSIE): First-Year Findings. Washington, DC: U.S. Department of
Education, Office of the Under Secretary, Policy and Program Studies Service.
89
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
90
Christine Padilla, Katrina Woodworth, Andrea Lash, Patrick M. Shields, and Katrina G. Laguarda (2005).
Evaluation of Title I Accountability Systems and School Improvement Efforts: Findings From 2002-03.
Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service.
91
Christine Padilla, Heidi Skolnik, Alejandra Lopez-Torkos, Katrina Woodworth, Andrea Lash, Patrick M. Shields,
Katrina G. Laguarda, and Jane L. David (forthcoming). Title I Accountability and School Improvement Efforts
From 2001 to 2004. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy
Development, Policy and Program Studies Service. Christine Padilla, Katrina Woodworth, Andrea Lash, Patrick
M. Shields, and Katrina G. Laguarda (2005). Evaluation of Title I Accountability Systems and School
Improvement Efforts: Findings From 2002-03. Washington, DC: U.S. Department of Education, Office of
Planning, Evaluation, and Policy Development, Policy and Program Studies Service.
92
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
93
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind. Data are not
included for two states.
94
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
95
Center on Education Policy (2005). From the Capital to the Classroom: Year 3 of the No Child Left Behind Act.
Washington, DC: Center on Education Policy.
96
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
97
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
98
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
99
Christine Padilla, Heidi Skolnik, Alejandra Lopez-Torkos, Katrina Woodworth, Andrea Lash, Patrick M. Shields,
Katrina G. Laguarda, and Jane L. David (forthcoming). Title I Accountability and School Improvement Efforts
From 2001 to 2004. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy
Development, Policy and Program Studies Service.
100
Christine Padilla, Katrina Woodworth, Andrea Lash, Patrick M. Shields, and Katrina G. Laguarda (2005).
Evaluation of Title I Accountability Systems and School Improvement Efforts: Findings From 2002-03.
Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy
and Program Studies Service.
101
Center on Education Policy (2005). From the Capital to the Classroom: Year 3 of the No Child Left Behind
Act. Washington, DC: Center on Education Policy.
102
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
103
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
104
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
105
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
106
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
107
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
108
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
109
Christine Padilla, Heidi Skolnik, Alejandra Lopez-Torkos, Katrina Woodworth, Andrea Lash, Patrick M. Shields,
Katrina G. Laguarda, and Jane L. David (forthcoming). Title I Accountability and School Improvement Efforts
From 2001 to 2004. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy
Development, Policy and Program Studies Service.
110
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
111
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
112
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
113
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
114
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
115
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
116
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
117
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
118
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
119
U.S. Department of Education, Office of English Language Acquisition (2005). Biennial Evaluation Report
to Congress on the Implementation of the State Formula Grant Program, 2002-2004, English Language
Acquisition, Language Enhancement and Academic Achievement Act (ESEA, Title III, Part A). Washington,
D.C.: Author.
120
U.S. Department of Education, Office of English Language Acquisition (2005). Biennial Evaluation Report
to Congress on the Implementation of the State Formula Grant Program, 2002-2004, English Language
Acquisition, Language Enhancement and Academic Achievement Act (ESEA, Title III, Part A). Washington,
D.C.: Author.
121
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
122
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
123
The eligibility counts are based on state-reported data, while the participation numbers are estimated from
district and principal survey responses.
124
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
125
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
126
U.S. Department of Education, National Center for Education Statistics, Common Core of Data, 2002-03.
Special tabulation conducted by RAND Corporation for the National Longitudinal Study of NCLB.
127
Policy and Program Studies Service monthly reviews of State Education Agency Web sites, conducted by
Westat from May 2003 through May 2005.
128
Policy and Program Studies Service, review of State Education Agency Web sites.
129
Leslie M. Anderson and Katrina G. Laguarda (2005). Case Studies of Supplemental Services Under the No
Child Left Behind Act: Findings From 2003-04. Washington, DC: U.S. Department of Education, Office of
Planning, Evaluation, and Policy Development, Policy and Program Studies Service.
130
For supplemental services, districts were asked if they used the strategy in either 2003-04 or 2004-05,
while for school choice, districts were asked separately about strategies used during each school year.
131
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
132
Leslie M. Anderson and Katrina G. Laguarda (2005). Case Studies of Supplemental Services Under the No
Child Left Behind Act: Findings From 2003-04. Washington, DC: U.S. Department of Education, Office of
Planning, Evaluation, and Policy Development, Policy and Program Studies Service.
133
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
134
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
135
Leslie M. Anderson and Katrina G. Laguarda (2005). Case Studies of Supplemental Services Under the No
Child Left Behind Act: Findings From 2003-04. Washington, DC: U.S. Department of Education, Office of
Planning, Evaluation, and Policy Development, Policy and Program Studies Service.
136
Bradford Chaney (1995). Student Outcomes and the Professional Preparation of 8th Grade Teachers.
NSF/NELS:88 Teacher Transcript Analysis. Rockville, MD: Westat. Dan D. Goldhaber and Dominic J. Brewer
(2000). “Does Teacher Certification Matter? High School Certification Status and Student Achievement” in
Educational Evaluation and Policy Analysis, 22: 129-145.
137
Section 9101(11) of the No Child Left Behind Act.
138
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
139
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
140
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
141
However, Montana is considering adoption of the Praxis II series; the state is currently piloting some of the
Praxis II tests.
142
Personal communication on November 29, 2005 with Rick Tannenbaum, Director of Assessment Design and
Scoring, Educational Testing Service.
143
Educational Testing Service, unpublished data provided on August 19, 2005. The national median scores
are based on scores of all individuals who took these tests from October 1, 2001, to July 31, 2004.
144
Educational Testing Service Web site, state requirements for Praxis II exams, accessed on September 8,
2005.
145
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
146
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
147
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
148
Pennsylvania’s HOUSSE for elementary teachers relies on the initial certification process, but the state’s
HOUSSE for secondary teachers is a point system.
149
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
150
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
151
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
152
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
153
Similarly, only 38 percent of middle school English teachers reported that they had a major in English,
compared with 81 percent of high school English teachers. Marilyn Seastrom, Kerry Gruber, Robin Henke,
Daniel McGrath, and Benjamin Cohen (2004). Qualifications of the Public School Teacher Workforce:
Prevalence of Out-of-Field Teaching: 1987-88 to 1999-2000 (NCES 2002-603). Washington, DC: U.S.
Department of Education, National Center for Education Statistics. The sample for this survey included
56,354 teachers, with a response rate of 83 percent. More information on the methodology for this study can
be found at http://nces.ed.gov/surveys/SASS/methods9900.asp.
154
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
155
The term “classes” is used here because the data come from a survey of principals that asked secondary
principals about the number of secondary “class sections” taught by highly qualified teachers. The survey
asked elementary school principals about the number of teachers who were highly qualified; however, since
most elementary teachers teach only one class of students, we use the term “classes” here to describe both
the elementary and secondary data collected from principals.
156
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind. “High minority” schools are those in which 75 percent or more of
the students are minorities and “low minority” schools are those in which fewer than 35 percent of the
students are minorities.
157
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
158
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
159
Mary Kennedy (1998). Research Monograph No. 13: Form and Substance in Inservice Teacher Education.
Madison: National Institute for Science Education, University of Wisconsin. David K. Cohen, Milbrey W.
McLaughlin, and Heather C. Hill (1998). Instructional Policy and Classroom Performance: The Mathematics
Reform in California (RR-39). Philadelphia: Consortium for Policy Research in Education.
160
Michael S. Garet, Beatrice F. Birman, Andrew C. Porter, Laura Desimone, Rebecca Herman, and Kwang Suk
Yoon (1999). Designing Effective Professional Development: Lessons From the Eisenhower Professional
Development Program. Washington, DC: U.S. Department of Education, Office of the Under Secretary,
Planning and Evaluation Service.
161
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
162
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
163
Michael S. Garet, Beatrice F. Birman, Andrew C. Porter, Laura Desimone, Rebecca Herman, and Kwang Suk
Yoon (1999). Designing Effective Professional Development: Lessons From the Eisenhower Professional
Development Program. Washington, DC: U.S. Department of Education, Office of the Under Secretary,
Planning and Evaluation Service.
164
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
165
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
166
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
167
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
168
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
169
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
170
These findings are consistent with research by the Center on Education Policy (2005). From the Capital to
the Classroom: Year 3 of the No Child Left Behind Act. Washington, DC: Center on Education Policy.
171
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
172
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
173
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
174
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.
175
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the Study of
State Implementation of Accountability and Teacher Quality Under No Child Left Behind.
176
U.S. Department of Education, Policy and Program Studies Service, unpublished data from the National
Longitudinal Study of No Child Left Behind.