
Organizational Turnaround and Educational
Performance: The Impact of Performance-Based
Monitoring Analysis Systems (PBMAS)

Amanda Rutherford
Department of Political Science
Texas A&M University

Abstract
How do accountability policies affect failing organizations? Are additional interventions
used to turn around underperforming agencies effective in raising performance outputs? This
paper investigates the effectiveness of turnaround policies in organizations that persistently fail
to meet accountability standards. Using Performance-Based Monitoring Analysis System
(PBMAS) data from 169 school districts in Texas, this paper shows that turnaround interventions
have only limited success. While monitoring strategies work for the most salient performance
indicator in the short term, improvements quickly dissipate following an intervention.
Supporting the notion that management matters, results also show that the type of monitor
assigned to a failing school can affect the extent of improvement in performance.


Organizational Turnaround and Educational Performance:
The Impact of Performance-Based Monitoring Analysis Systems (PBMAS)
The performance of public organizations has been the subject of much attention
following recent demands for a more efficient and effective system of governance. In response,
elected officials have adopted a range of incentive policies aimed at increasing the performance of
the bureaucracy. These policies assume that performance can be improved through changes in
management strategy, so that one-size-fits-all rewards and penalties will serve as adequate best
practices across all types of organizations. In practice, these efforts have produced performance
improvements in some organizations and little change in others. Many scholars have examined
whether these broad-based accountability policies have
produced performance gains, and others have documented evidence of unintended consequences
and organizational dysfunction stemming from pressures to raise performance (Rainey 2003,
Radin 2006, Moynihan 2008).
While this literature has largely established that performance incentives do not always
contribute positively to organizational performance, little attention has been given to
how these accountability efforts affect underperforming organizations. In these organizations,
penalties for failing to meet performance standards include additional interventions, such as site
visits, audits, or the replacement of management, aimed at turning around the organization.
Although these interventions may provide additional resources, they also impose substantial costs
on failing organizations (e.g., payment for services, goal displacement, decreased morale).
Empirical work has yet to determine whether turnaround strategies lead to higher performance or
if failing organizations continue to perform below expectations despite interventions.

Existing research on the effectiveness of turnaround consists largely of small-n case
studies for which mixed findings are incapable of leading to broader generalizations (Turner et al.
2004, Eitel 2004, Beeri 2009). This study aims to expand this research through the use of a
large-N dataset to examine both short and long term effects of monitoring interventions in the
context of K-12 education, as public education has been the subject of highly salient
accountability policies and turnaround strategies that have received mixed reviews.
I begin by considering how elected officials have responded to calls for greater
accountability in the public sector. I then connect this discussion of policy implementation to
theoretical propositions of whether we should expect accountability policies and turnaround
mechanisms to be effective in organizations that consistently underperform. Next, I introduce
turnaround policies in the realm of public education and focus on the strategy of performance
monitoring. The core of the analysis focuses on the performance impact of monitors in both the
short and long term. Findings indicate substantively interesting relationships between
monitoring and performance over time that have important implications for the development of
turnaround policies in the public sector.
Public Sector Performance
Existing research on public organization performance often grapples with the question
“Why do government organizations seem to constantly underperform?” (Rainey 2003, Moynihan
Unlike private organizations, where performance and survival are generally tied to
measurable profits (Cameron et al. 1988, Arogyaswamy et al. 1995, Mellahi and Wilkinson 2004),
public organizations may be evaluated on multiple dimensions. Public organizations are often
expected to pursue a number of goals for different stakeholder groups, and each goal may be met
with a different degree of success. However, with a safety net of public funding, specialized
policy expertise, and near monopoly status, public agencies are often irreplaceable even if they
fail to adequately meet performance goals (Meier and Bohte 2003). With no easy replacement
available to supply a public good or service, sub-optimal levels of performance by agencies are
generally tolerated for substantial periods of time (Paton and Mordaunt 2004).
Despite common perceptions that public organizations are immortal (Kaufman 1979),
they are not immune to decline and failure (Lewis 2002). Threats to agency life generally
include policy changes, mission completion, and market competition (Jas and Skelcher 2005). In
the presence of policy change, political leaders may have priorities vastly different from those of
previous administrations. Thus, political power may be used to change the purpose of an agency
so that it becomes ineffective by design. Following mission completion, an organization may no
longer be seen as necessary for providing a public good. For instance, though the Works
Progress Administration (WPA) was once viewed as vital for reinvigorating the economy, it
eventually lost relevance and was ended. Finally, for organizations such as the postal service or
public education, increased competition has threatened agency life. However, these
organizations are often permitted to continue with few consequences for low levels of
performance, contributing to the view of bureaucracy as inefficient.
With the spreading popularity of accountability mechanisms, public organizations have
faced greater penalties for persistently underperforming. Performance initiatives - often
identified as performance management, pay-for-performance, performance planning, managing
for results, total quality management, or contracting out - have challenged public organizations
to account for organizational outputs and outcomes through a variety of reporting standards. In
some cases, these efforts have produced improvements in performance of organizations, but in
others, these incentives fail to result in positive performance gains. Many scholars have
examined the inability of these broad-based accountability policies to produce performance
gains, and others have documented evidence of unintended consequences and organizational
dysfunction stemming from pressures to raise performance (Meyer and Zucker 2001, Radin
2006, Moynihan 2008). Yet few have moved beyond general outcomes of accountability
policies to focus on the additional interventions needed for poorly performing organizations. Do
failing organizations perform better or worse following the implementation of accountability
policies as compared to more successful organizations? Are additional interventions effective in
improving performance of these organizations? In other words, can failing public organizations
be turned around successfully, or are they continuing to fail despite additional intervention
mechanisms?
Responding to Turnaround
While discussion of organizational turnaround is largely absent from public
organization theory, scholars of private management have devoted considerably more attention to
developing stage models that describe turnaround processes. In this literature, turnaround
mechanisms are traditionally categorized as either strategic or operating (Hofer 1980, Hambrick
1985, Chowdhury 2002). Strategic turnarounds emphasize changing the business the firm is
engaged in and include actions such as developing new markets, divestment, or vertical
integration. Operating turnarounds reassess the way the organization conducts business and
include short-run tactics such as revenue generation and cost cutting. These scholars generally
agree that declines caused by the external environment should be addressed with strategic
turnaround strategies while internal threats should be addressed with operating turnaround
mechanisms (Chowdhury 2002).
As the costs associated with turnaround are generally believed to be less than those
incurred through agency closure (e.g., finding a suitable replacement, training new personnel),
underperforming public organizations are often exposed to turnaround strategies.
Analyzing whether strategic or operating strategies for turnaround are feasible for
underperforming public organizations, Boyne (2003) categorizes turnaround policies as
retrenchment, repositioning, or reorganization. Retrenchment consists of focused downsizing of
the scope or size of an organization in efforts to increase efficiency. Though this drastic form of
turnaround may be feasible for private firms, it is less feasible for public organizations due to
legal constraints and obligations. Still, possibilities for retrenchment may lie in outsourcing non-
primary duties to third parties, allowing organizations to cut costs and apply extra resources to
core responsibilities (Meier and O'Toole 2010).
Under repositioning, new efforts towards growth and innovation are expected to jump-start
organizations with new target audiences. Similar to retrenchment, this strategy can prove
quite difficult for public organizations due to statutory constraints. For example, K-12 schools
cannot provide services for college students. Nevertheless, instances of repositioning can be
found in public agencies. While schools may not be able to provide services for college students,
they can provide college-level courses for current students. As another example, the post office
has expanded its service options and now delivers mail across a far larger geographic area than
the Pony Express once covered. Repositioning strategies may also include improving the
internal and external reputation of the organization (Boyne 2003).
Third, reorganization is similar to the concept of operating strategies in private
management literature in that it focuses on internal changes. Boyne argues that this approach is
most similar to the replacement of personnel in struggling public organizations, though it may
also include developing new budgetary or planning processes. Leadership change as a
turnaround strategy has been increasingly used among public agencies, but strong empirical
evidence is lacking as to whether this change results in improved performance in the short and
long term (Hill 2005).
Testing the Theory: Performance Monitoring
This study will seek to contribute to existing knowledge of turnaround by providing a
large-N analysis of the effect of monitoring on performance. Monitoring, a technique similar to
inspections and audits, is used by a superior organization to regulate smaller entities. Monitoring
includes site visits and face-to-face meetings that are used to complement other sources of
performance reporting. While monitoring, as defined here, may relate to audits through a review
of the financial health of an organization, it also entails a review of the competence of
personnel, compliance with standards, and success in meeting outcome goals (Boyne, Walker,
and Day 2002). For the context of this analysis, monitoring is most closely related to
reorganization strategies that attempt to correct for deficiencies internal to the organization by
either influencing managerial decisions or replacing top level managers. Boyne, Walker, and
Day (2002) provide theoretical explorations concerning the potential for inspection to improve
the performance of an organization. They argue that inspections are associated with both costs
and benefits for public organizations. Benefits include the provision of a safety-net to help
organizations cope with failure, an increase in across-the-board standards among agencies, and a
symbolic gesture that provides assurance to multiple groups of agency stakeholders. These
benefits, however, do not come without substantial costs. Costs to the organization include the
direct costs of funding and operating an inspection system, indirect compliance costs, and goal
displacement costs as organizations attempt to meet multiple, and sometimes competing,
standards. Boyne, Walker, and Day argue that both benefits and costs are largely dependent on
the expertise and judgment of the inspector. An individual inspector must possess greater
knowledge than the manager of an organization in order to add value to an organization's
outputs. The inspector must also be able to apply the interpretation of standards evenly and
consistently across organizations.
This theoretical discussion of inspection provides multiple testable hypotheses, two of
which can be tested here for failing organizations. Given the benefits that may be associated
with monitoring by an expert, the first hypothesis can be stated as: Monitoring interventions will
lead to an increase in performance for failing organizations. However, given the costs
associated with monitoring and the problems associated with one-size-fits-all policies, a second
hypothesis is warranted: Monitoring interventions will have no effect on the performance of
failing organizations.[1]

[1] Because a negative association between performance and monitoring cannot be ruled out with
certainty, two-tailed tests will be used instead of testing a directional hypothesis.

Turnaround Mechanisms in Public Education
School districts provide an ideal setting to test the effectiveness of turnaround strategies,
as schools have been the center of much discussion of performance accountability and policy
interventions for turnaround throughout the last decade. With the rise in demands for greater
accountability and an increase in high-stakes testing, many schools have been widely criticized
for producing consistently poor results. As schools traditionally have operated as virtual
monopolies (Chubb and Moe 1990, Meier and Bohte 2003), recent policy changes through
initiatives like No Child Left Behind (NCLB) have focused on setting new performance and
accountability standards for education across all states. Implementation of these policies,
however, makes multiple assumptions that may not always hold. First, policies assume that all
schools can succeed, but that certain elements for success are missing (Brady 2003). Thus,
policymakers believe the solution to failing schools can generally be addressed by applying a set
of standards with proper management skills. This implies that all schools are capable of
improving but that some are simply choosing not to due to a lack of will or misplaced priorities
by those at the top of the organization (Brady 2003, Hicklin Fryar and Rabovsky unpublished).
Little consideration is given to why schools may not be able to adjust to new standards quickly
and easily or what turnaround mechanisms may be best suited for different types of school
districts. For instance, some schools may be facing issues of financial mismanagement while
others are combating dropout rates and still others are just learning how to comply with new
special education rules.
Following the assumption that performance can improve for all schools, elected officials
often presume that new policies can be implemented through a one-size-fits-all approach. As
performance levels are considered to be linked to internal mechanisms, larger environmental
factors that may limit desired performance outcomes are often ignored. Thus, the question of
relative starting points for different organizations is ignored (Jas and Skelcher 2005). In addition
to these assumptions regarding performance, the measurement of performance in schools is
complex and needs to include multiple goals of education. The failure to consider multiple
performance indicators can lead to negative unintended consequences. Often the most salient
performance outputs, state standardized test scores, may be achieved at the detriment of larger,
more important outcomes such as learning and college readiness that are more difficult to
quantify (McNeil 2000, McDermott 2011).
As dissatisfaction with school performance continues to grow, a variety of turnaround
strategies have been developed at the federal, state, and local levels. Though not an exhaustive
list, these interventions include school improvement plans (SIP), the provision of choice, the
provision of supplemental services, reconstitution, and monitoring by outside experts (Berry
2003). School improvement plans are mandated by No Child Left Behind for Title I schools
failing to make adequate yearly progress in two consecutive years (Mazzeo and Berman 2003).
SIPs are intended to bring teams together to create unified strategies to raise performance.
Provision of choice mechanisms allow guardians of students in schools identified for
improvement to transfer their student(s) to another public school that is not underperforming.
This strategy was available in thirteen states prior to NCLB and is now required by federal
mandate (McDermott 2011). Likewise, schools that fail to meet adequate yearly progress for
three consecutive years are now required to offer supplemental educational services, often in the
form of free tutoring. The number of students eligible for these services has increased on a
yearly basis, signaling that schools are still struggling to meet performance criteria (Peterson
2005). Under more extreme circumstances, school reconstitution may involve removing a large
portion of school administrators and teachers and replacing them with individuals deemed to be
more qualified (Rice and Malen 2003). This type of reorganization can be controversial and, to
date, has produced mixed anecdotal evidence of success (Rudo 2001).
The turnaround intervention analyzed in this study is monitoring by third party experts.
Used at varying levels across states, monitoring techniques consist of assigning former
educators, often former superintendents with a high level of experience, to school districts that
have been identified as failing in regard to at least one performance standard. Individual
monitors not only observe actions of school administrators, but they often assume management
of the school district. Monitoring may also be used as a threat, encouraging schools to improve
performance before a monitor assumes leadership in the district. Though school monitoring has
been available as a turnaround mechanism in multiple states since the early 1980s, some states
use this intervention more than others. Further, very little is known about the extent to which
monitoring is successful as a turnaround strategy for failing schools.
Data and Measures
Data on monitoring in failing organizations come from a set of Texas school districts.
Between 1993 and 2011, data were collected for 169 school districts that were subject to
monitoring by former educators hired by the Texas Education Agency (TEA). Monitoring data
are combined with seventeen years (1993-2010) of pooled data on school performance collected
by the TEA. Texas schools are evaluated yearly on a range of both absolute and relative
standards as defined by the TEA. Absolute standards include set passage rates for each portion
of the state standardized test (65 percent on math and 70 percent on reading), as well as for
graduation rates (75 percent). Performance levels (PLs) are assigned yearly to each district, and
an increase in PL assignment for a given performance indicator is possible through “adequate
yearly progress” for the given indicator. If a school fails to meet set performance requirements in
one or more reporting years, the district is subject to monitoring intervention.
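
To make the eligibility rule concrete, the following is a minimal sketch of how a district-year
might be flagged for monitoring under the absolute standards described above. The column names,
data layout, and threshold handling are illustrative assumptions, not the TEA's actual
implementation.

    import pandas as pd

    # Absolute standards noted in the text (percent thresholds).
    STANDARDS = {
        "math_pass_rate": 65.0,      # percent passing the math portion
        "reading_pass_rate": 70.0,   # percent passing the reading portion
        "graduation_rate": 75.0,     # percent graduating
    }

    def flag_for_monitoring(panel: pd.DataFrame) -> pd.Series:
        """True for district-years failing one or more absolute standards."""
        failed = pd.Series(False, index=panel.index)
        for column, threshold in STANDARDS.items():
            failed |= panel[column] < threshold
        return failed

    # panel["monitoring_eligible"] = flag_for_monitoring(panel)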
While monitoring may bring the benefits of expert advice to failing school districts, costs
associated with this intervention strategy may offset any gains. For underperforming districts in
Texas, direct costs largely consist of payment to the monitor, conservator, or management team
acting on behalf of the state agency. Indirect costs may be even more threatening and include the
reallocation of resources to comply with monitor recommendations, the embarrassment of being
identified as a failure by local media, and goal displacement to meet performance directives from
the monitors. Though costs can be identified, they can be difficult to measure. Additionally,
little is known regarding whether this type of intervention provides any payoff in improved
performance for failing schools. An analysis of district performance data across time
should provide substantial evidence regarding the success of this type of turnaround strategy.
Findings of positive or negative impacts may have important implications not only for the
development of a more generalizable turnaround theory but for the decision making process of
policymakers.
Dependent Variables
Definitions of school success are likely to vary across stakeholder groups (politicians,
parents, students, local community members) and across different types of environments. While
more affluent schools may prioritize college readiness, inner-city schools may be focused on
increasing attendance rates. Accordingly, five outcome measures will be used in this analysis:
standardized test passage rates (for Texas, this is the Texas Assessment of Knowledge and Skills,
or TAKS), college readiness (percent of students scoring at least 1110 on the SAT or an equivalent
ACT score), graduation rates, dropout rates, and attendance rates. Using multiple dependent
variables will test whether monitors influence some types of performance more than others. The
use of multiple dependent variables will also affect the number of cases reported across model
estimations, as not all schools have reported each outcome across time (graduation rates, for
example, are only relevant for high schools). For the present analysis, determining effects across
outcomes takes priority over dropping cases that may not report all five measures.[2]

[2] Models with consistent case sizes are available upon request; findings are no different from
those reported here.

Many studies assess absolute gains or losses of similar performance indicators in school
districts. However, this measurement approach will not capture state level trends that are
important for identifying low performing schools. For instance, if the state changes the structure
of a standardized test or adjusts the calculation for dropout rate, performance levels may shift for
all districts in the state. To account for these state level trends, performance indicators will be
measured as the difference in value of the performance indicator for each monitored school
compared to the overall state average for each year. Mathematically, the dependent variable
tested here is calculated by the equation

$\text{District Performance Difference}_{it} = \text{District Performance}_{it} - \text{Average District Performance}_{t}$

For underperforming schools, the dependent
variable is generally negative, as the state average is greater than performance in failing schools.
If monitors lead to improvements in school performance, the difference between
underperforming schools and the schools at the mean should become more positive following the
intervention. As modeled, a positive coefficient indicates an increase in performance relative to
the state average, while a negative coefficient indicates a decrease.[3]

[3] This dependent variable, as a form of differencing, is likely to be more robust to threats of
non-stationarity. Results using absolute gains for school districts are available on request.
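
As a sketch under the same assumptions, the differenced dependent variable can be constructed by
subtracting the yearly state mean from each district's raw indicator; the panel layout and column
names are hypothetical.

    # `panel` is assumed to be a pandas DataFrame of district-year observations.
    def performance_difference(panel, indicator):
        """District performance minus the average district performance that year."""
        state_mean = panel.groupby("year")[indicator].transform("mean")
        return panel[indicator] - state_mean

    # panel["taks_diff"] = performance_difference(panel, "taks_pass_rate")
    # Negative values indicate performance below the state average for that year.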

Control Variables
Though not reflected in the assumptions legislators make when developing the performance
expectations placed on public agencies such as school districts, the external environment is
believed by scholars to have a substantial effect on organizational outcomes. For education,
a well-developed set of education production functions controls for the resources and
constraints that vary by organization (Hanushek 1996).
Empirical research on school resources provides substantial evidence that schools with
greater resources face a less challenging task in educating students. Measures of resources
include expenditure per pupil, revenue per pupil, school enrollment, student-teacher ratio, and
percent central administration. With the exception of student-teacher ratio, each should be
positively correlated with school performance indicators.
Though the availability of resources may decrease school task difficulty, the presence of
a variety of constraints may also limit a school's ability to educate students. As both poverty and
race are correlated with constraints such as family income and education (Jencks and Phillips
1998), measures of constraints include the percentage of students who are eligible for free or
reduced price school lunches, the percentage of students classified as special education, the
percentage of African-American students, and the percentage of Latino students. The greater
the population of these student groups, the more difficult it may be for schools to meet
performance expectations.
Of 169 districts included in this dataset, 78 (46 percent) are charter schools. As public
schools and charter schools are likely to differ in age, resources, and task difficulty levels (Sass
2006, Hanushek et al. 2007), a dummy variable is included in the model to test for differences
between these two types of underperforming schools. Scholarship presents mixed findings on
the quality of charter schools, as nonrandom selection of students into charters presents
methodological challenges (Hoxby and Rockoff 2004, Booker et al. 2007, Sass 2005, Zimmer
and Buddin 2006). Previous research indicates that charter schools may experience difficult
start-up periods in a struggle to attract and retain students (Hanushek et al. 2007), and this
perceived failure may become apparent for charter schools in this dataset.
Finally, controls must be included for previous performance as well as for effects of
the monitor after the intervention. A lagged dependent variable is included in each model, as
performance is likely to be autoregressive; including this lag corrects for threats of
autocorrelation in the model.[4] Further, errors are clustered by school district or charter
code to correct for variance in error across groups.[5] Additionally, school administrators should
develop better strategies for improving performance over time through experience in the school
or district. While controls for both previous performance and post-monitor effects should be
positively related to performance, the structure of the latter makes assumptions about the
functional form of the model as following a pattern of trend improvement or shift improvement.
Trend improvement assumes a positively sloped linear relationship between performance and
effects of the intervention over time. Shift improvement assumes a shift from one performance
level to the next as a result of an intervention, with a general slope of zero over time. In testing
the overall model with each assumption, trend improvement is insignificant and adds little to the
model (not shown). Including a dummy variable to control for a shift change, however, proves
significant in explaining the relationship between monitoring and performance.

[4] The Arellano-Bond GMM estimator, the Cochrane-Orcutt transformation, and Prais-Winsten with
robust standard errors all provide similar corrections for autocorrelation. Each of these models
generates results largely similar to, and supportive of, the findings presented here; the models
are available upon request.
[5] The Cook-Weisberg test statistic detects heteroscedasticity prior to clustering by district.
Models with robust standard errors or GLS approaches yield findings similar to those of the
clustered models presented here; these models are available upon request.
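
A minimal sketch of this specification follows, assuming the hypothetical panel and column names
used above: a lagged dependent variable, a monitor-presence indicator, a post-intervention shift
dummy, the production-function controls, and standard errors clustered by district. This is an
illustration of the modeling strategy described in the text, not the author's exact estimation code.

    import statsmodels.formula.api as smf

    # Construct the one-year lag of the differenced outcome within each district.
    panel = panel.sort_values(["district_id", "year"])
    panel["taks_diff_lag"] = panel.groupby("district_id")["taks_diff"].shift(1)

    # Keep complete cases so the cluster groups align with the estimation sample.
    est_data = panel.dropna()

    model = smf.ols(
        "taks_diff ~ taks_diff_lag + monitor_present + post_shift + charter"
        " + exp_per_pupil + rev_per_pupil + pct_black + pct_hispanic"
        " + pct_low_income + pct_special_ed + log_enrollment"
        " + student_teacher_ratio + pct_central_admin + year",
        data=est_data,
    )
    results = model.fit(cov_type="cluster",
                        cov_kwds={"groups": est_data["district_id"]})
    print(results.summary())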
Table 1 provides a description of variables for all schools in Texas as compared to failing
schools. From these data, it is clear that monitored schools perform well below the state
average on all dependent variables in this study. Notably, underperforming schools also appear
to face greater levels of task difficulty due to an increased presence of low income and minority
students. School enrollment is noticeably higher for monitored school districts, though this is
largely driven by the monitoring of the Houston and Dallas ISDs. Finally, student-teacher ratios
and the percent of central administration are largely similar for monitored schools and state
averages. These factors may place greater constraints on monitored districts' ability to meet
performance expectations than those faced by more advantaged districts, making them targets for
intervention as a consequence of underperformance.

Table 1: Mean Variable Comparisons (standard deviations in parentheses)

Variable                          All Schools         Monitored, All      Monitored,           Monitored,
                                  (n=1303)            (n=169)             >5 yrs ago (n=107)   ≤5 yrs ago (n=82)
TAKS Passage Rate                   71.03 (14.56)       62.05 (17.50)       62.23 (16.62)        59.37 (18.50)
Dropout Rate                         1.45 (3.37)         2.56 (5.93)         2.21 (3.74)          3.13 (7.72)
Graduation Rate                     86.84 (12.03)       78.41 (19.52)       80.58 (15.30)        74.09 (23.07)
1110 SAT Percent                    19.13 (12.10)       13.57 (10.79)       12.26 (10.34)        15.15 (11.39)
Attendance Rate                     95.80 (1.51)        94.98 (2.75)        95.12 (1.87)         94.59 (3.76)
Operating Expenditure/Pupil       7202.46 (2658.56)   7275.82 (2958.35)   7065.60 (2397.04)    7525.55 (3428.32)
Revenue per Pupil                 8136.55 (3246.90)   8099.95 (3298.51)   7913.62 (2749.13)    8303.20 (4098.65)
Percent Black                        9.06 (13.50)       16.25 (21.51)       14.14 (20.62)        24.12 (24.86)
Percent Hispanic                    34.32 (27.61)       46.86 (34.30)       51.96 (35.46)        41.16 (30.60)
Percent Low Income                  51.71 (19.92)       63.89 (21.72)       66.67 (21.66)        63.50 (21.56)
Percent Special Education           12.78 (4.36)        12.29 (5.68)        11.57 (3.83)         12.66 (7.16)
School Enrollment                 4418.69 (12486.70)  8816.57 (28190.48) 11608.65 (34549.03)  15333.22 (40136.86)
Student-Teacher Ratio               13.08 (2.71)        13.74 (3.20)        13.68 (2.65)         14.15 (3.94)
Percent Central Administration       1.87 (1.54)         1.87 (1.99)         1.67 (1.50)          2.06 (2.43)
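
The grouped means and standard deviations in Table 1 follow a standard descriptive pattern; as a
sketch, such a comparison could be computed with a grouped aggregation, assuming a hypothetical
group label on the same panel.

    indicators = ["taks_pass_rate", "dropout_rate", "graduation_rate",
                  "sat_1110_pct", "attendance_rate"]
    # `group` is a hypothetical label: "all", "monitored_all",
    # "monitored_gt5", or "monitored_le5".
    table1 = panel.groupby("group")[indicators].agg(["mean", "std"]).round(2)
    print(table1.T)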

Findings
To compare the short term and long term effects of monitor interventions, more recent
data (2007-2011) are analyzed first and then compared to schools assigned a monitor prior to the
start of the 2005 school year, testing for both short and long term impacts of the monitoring
intervention.[6] The data are then combined to examine the overall effect of monitors on school
performance across seventeen years.

[6] This split is a function of the data provided by the Texas Education Agency. The TEA provided
data on monitors since the program began, but the organization reported that data for the
2005-2006 and 2006-2007 school years could not be located. Further, the format of the data
changed slightly between the two periods, so more information is available in the newer format.
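
As a brief sketch of this sample split (the year and monitor-assignment fields are hypothetical):

    # Short-term analysis: monitor cases observed in the newer data format.
    recent = panel[panel["year"].between(2007, 2011)]
    # Long-term analysis: districts assigned a monitor before the 2005 school year.
    early = panel[panel["monitor_start_year"] < 2005]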
The results linking monitors to short term school performance are shown in Table 2.
Though the data include monitors present in schools in 2011, most of these cases are removed
because performance data are not yet available for such recent interventions. Although monitors
have no effect on attendance rates, graduation rates, or college readiness, the intervention
shows a strong positive relationship with the percent of students passing the TAKS exam while
the monitor is assigned to the school. Under the direction of the monitor, the TAKS passage rate
for a district generally increases by 4.21 percentage points. This improvement, while positive,
constitutes roughly half of the difference between the average failing school passage rate
(62.05 percent) and that of the average school in Texas (71.03 percent). As test passage rates
are the most salient of school performance indicators and often create front-page news, it is
rational to expect both school districts and monitors to prioritize this outcome over other
performance indicators. However, while this improvement is important, it may be short-lived.
As the coefficients for the shift improvement and time are negative, the model indicates that the
monitor's positive effect on performance will decline over time. In addition to the decline
following the exit of the monitor, passage rates steadily decrease each year, so that the impact
of the monitor on performance disappears within four to five years. Thus, schools will return to
performance levels previously identified as failing in the long term.
Monitoring is also correlated with an increase in dropout rates for monitored schools
relative to the state average. Though this finding runs counter to expectations, it may be
explained by the low validity of the variable, as dropout rates are notoriously miscalculated
and underreported. Monitors likely force schools to document actual dropout rates in
state-required reports, so the apparent increase may reflect more accurate reporting rather than
a real change. This interpretation is further supported by the large decrease in reported dropout
rates once the monitor leaves the school district.
Results also indicate stark differences between public school districts and charter schools.
For three of the five performance variables, failing charter schools perform far worse than failing
public schools. As charter schools are a new addition in a market dominated by school districts
that have existed for much longer periods of time, these schools may be facing a number of
challenges in recruiting and retaining students (Hanushek et al. 2005). Additionally, charter
schools often take advantage of financial incentives to recruit at-risk students, further increasing
the level of task difficulty faced in meeting performance goals.

Table 2: Impact of Monitor Presence, 2007-2010 (standard errors in parentheses)

                                  TAKS Passage      Dropout           Graduation       1110 SAT          Attendance
                                  Rate              Rate              Rate             Percent           Rate
Presence of Monitor                4.21** (1.71)     2.26** (0.91)    -0.41 (1.80)     -3.73 (3.72)       0.18 (0.18)
Charter School                    -6.93*** (1.93)    2.69** (1.07)    -7.93** (3.21)   -0.18 (1.73)      -0.27 (0.17)
Expenditure/Pupil ($1000)          0.53 (0.37)       0.30 (0.30)      -0.28 (0.79)     -0.61 (0.58)      -0.01 (0.07)
Revenue/Pupil ($1000)              0.10 (0.25)      -0.21 (0.14)       1.39 (0.87)      1.00** (0.48)     0.05 (0.03)
Percent Black                     -0.28*** (0.03)    0.05*** (0.02)   -0.05 (0.04)     -0.11*** (0.03)   -0.00 (0.01)
Percent Hispanic                  -0.20*** (0.04)    0.03 (0.02)      -0.05 (0.04)     -0.08*** (0.03)   -0.00 (0.01)
Percent Low Income                -0.01 (0.05)      -0.01 (0.03)      -0.05 (0.05)     -0.08** (0.04)    -0.00 (0.01)
Percent Special Education         -0.11 (0.13)       0.03 (0.07)       0.11 (0.18)     -0.02 (0.16)      -0.04* (0.02)
School Enrollment (Logged)         0.99** (0.49)    -0.57*** (0.21)    0.70 (0.50)      1.65*** (0.45)    0.01 (0.05)
Student-Teacher Ratio              0.19 (0.23)       0.15 (0.21)      -0.46* (0.28)    -0.35 (0.26)      -0.01 (0.06)
Percent Central Administration    -0.56** (0.26)    -0.21 (0.18)      -0.86** (0.47)    0.10 (0.47)      -0.04 (0.07)
Post-Intervention Shift           -2.42 (2.07)      -3.67*** (1.12)    3.95** (0.35)     5.48 (3.42)     -0.10 (0.30)
Year                              -0.21* (0.12)      0.24** (0.08)    -0.84 (0.35)     -0.52*** (0.18)   -0.03** (0.01)
One Year Performance Lag           0.37*** (0.04)    0.84*** (0.25)    0.71*** (0.07)    0.52*** (0.08)   0.90*** (0.07)
R²                                 0.73              0.49              0.82             0.66              0.77
Number of Observations             634               595               367              529               596
***p
 
