Implementing performance measurement innovations: evidence from government

Ken S. Cavalluzzo ᵃ, Christopher D. Ittner ᵇ,*

ᵃ McDonough School of Business, Georgetown University, 37th and O Streets NW, Washington, DC 20057, USA
ᵇ The Wharton School, University of Pennsylvania, Steinberg Hall—Dietrich Hall, 3620 Locust Walk, Philadelphia, PA 19104-6365, USA

* Corresponding author. Tel.: +1-215-898-7786; fax: +1-215-573-2054. E-mail address: [email protected] (C.D. Ittner).
Abstract

Using data from a government-wide survey administered by the US General Accounting Office, we examine some of the factors influencing the development, use, and perceived benefits of results-oriented performance measures in government activities. We find that organizational factors such as top management commitment to the use of performance information, decision-making authority, and training in performance measurement techniques have a significant positive influence on measurement system development and use. We also find that technical issues, such as information system problems and difficulties selecting and interpreting appropriate performance metrics in hard-to-measure activities, play an important role in system implementation and use. The extent of performance measurement and accountability are positively associated with greater use of performance information for various purposes. However, we find relatively little evidence that the perceived benefits from recent mandated performance measurement initiatives in the US government increase with greater measurement and accountability. Finally, we provide exploratory evidence that some of the technical and organizational factors interact to influence measurement system implementation and outcomes, often in a complex manner.

© 2003 Elsevier Ltd. All rights reserved.
Introduction

Performance measurement issues are receiving increasing attention as organizations attempt to implement new measurement systems that better support organizational objectives. While many of these initiatives are in the private sector, recent efforts to improve governmental performance have also placed considerable emphasis on performance measurement as a means to increase accountability and improve decision-making (Ittner & Larcker, 1998). Indeed, Atkinson, Waterhouse, and Wells (1997) note that government agencies are at the forefront of efforts to implement new, more strategic performance measurement systems. The Government Performance and Results Act of 1993, for example, requires United States executive branch agencies to clarify their strategic objectives and develop results-oriented measures of progress towards these objectives. Similar initiatives have been launched in Australia, Canada, New Zealand, the United Kingdom, and other countries (Atkinson & McCrindell, 1997; Hood, 1995; Smith, 1993).
This study draws upon the information systems
change, management accounting innovation, and
public sector reform literatures to examine some of the factors influencing the implementation, use, and perceived benefits of results-oriented performance measurement systems in the US government. Small-sample studies in both the public and private sectors identify a number of potential impediments to the successful implementation of performance measurement innovations (e.g. GAO, 1997a; Gates, 1999). These impediments include identifying appropriate goals in environments characterized by multiple and conflicting objectives, measuring performance on hard-to-evaluate or subjective goals, overcoming deficiencies in information systems, providing incentives for employees to use the information to improve performance, and achieving management commitment to the new systems. Because many of these problems are present across the public and private sectors, the broad-scale implementation of new performance measures in the US government provides an attractive setting to examine some of the factors influencing the success or failure of measurement system innovations.
Consistent with information system and management accounting change models (e.g. Kwon & Zmud, 1987; Shields & Young, 1989), we find that organizational factors such as top management commitment to the use of performance information, the extent of decision-making authority delegated to users of performance information, and training in performance measurement techniques have significant positive influences on measurement system development and use. However, we also find that technical issues play an important role in performance measurement system implementation and use. In particular, difficulties selecting and interpreting appropriate performance metrics in hard-to-measure activities are a major impediment to measurement system innovation. Data limitations, such as the inability of existing information systems to provide necessary data in a valid, reliable, timely, and cost effective manner, also deter the use of performance information for accountability and performance evaluation. Technical issues such as these appear to play a much more important role in the implementation of performance measurement systems than they do in cost system implementation (e.g. Anderson & Young, 1999; Krumwiede, 1998; Shields, 1995).

The extent of performance measurement and accountability are positively associated with the use of performance information for various purposes, consistent with claims that improved performance information and incentives for achieving results can support governmental decision-making. However, we find relatively little evidence that the perceived benefits from recent mandated performance measurement initiatives in the US government increase with greater measurement and accountability. The latter results support institutional theories that claim systems implemented to satisfy external requirements are less likely to influence internal behavior than are those implemented to satisfy the organization's own needs.

The remainder of the paper contains five sections. 'Background and hypotheses' provides an overview of recent performance measurement initiatives in the US government and develops our hypotheses. 'Research design' discusses our sample, followed by descriptive statistics on the variables used in our study in 'Descriptive statistics'. Results and conclusions are presented in the final two sections.
Background and hypotheses

Performance measurement initiatives in the US government

During the 1990s, the US government began enacting several major initiatives to promote a performance-based approach to the management and accountability of federal activities, including the Chief Financial Officers Act, the National Performance Review, and the Government Performance and Results Act. The stated goals of these initiatives are twofold: (1) to increase Congressional oversight and foster greater accountability for achieving results, and (2) to enhance "performance-based" decision-making by implementing information systems that supplement traditional input-oriented performance measures (e.g. expenditures and staffing levels) with measures focused on results (e.g. output quantity, quality, and timeliness) and the achievement of strategic objectives.
The most important initiative is the Government Performance and Results Act of 1993 (hereafter, GPRA). The GPRA requires managers of each government activity (i.e. project, program, or operation) to clarify their missions and strategic objectives and to measure relevant outputs, service levels, and outcomes for each activity in order to evaluate performance toward these objectives (GAO, 1997b; US Senate, 1992). Pilot GPRA implementations began in fiscal 1994, with all major agencies required to submit performance goals and indicators for each of their individual activities by fiscal 1997.

The GPRA and related initiatives in other countries are based on the assumption that mandated reporting of results-oriented, strategic performance indicators can improve governmental efficiency and effectiveness by increasing the accountability of public managers (Atkinson & McCrindell, 1997; Jones & McCaffery, 1997; Osborne & Gaebler, 1993). According to the Governmental Accounting Standards Board's Concept Statement No. 2, public sector accountability represents the duty for public managers to answer for the execution of their assigned responsibilities, and for citizens and their elected or appointed representatives to assess performance and take actions by allocating resources, providing recognition or rewards, or imposing sanctions based on the managers' results. By making public officials, legislative bodies, and the public more informed about the behavior of government managers and the results of their actions, the performance measurement initiatives are intended to improve the allocation of government resources and promote governmental efficiency and effectiveness through improved performance-based decision-making (Flynn, 1986; Scott, 1987).¹

¹ Many observers argue that the government performance measurement initiatives are emulating the private sector by adopting similar mechanisms for controlling principal-agent problems (Mayston, 1993; Smith, 1990, 1993). See Rose-Ackerman (1986), Tirole (1994), and Dixit (1997) for theoretical studies focused on the applicability of principal-agent models of management control practices in the public sector.
Determinants of measurement system implementation and success

Prior studies on information system change, management accounting innovation, and public sector reform have identified a number of factors that are expected to influence the implementation and success of performance measurement initiatives such as the GPRA. These factors include technical issues, such as the ability of existing information systems to provide required data and the extent to which organizations can define and develop appropriate measures, and organizational issues, including management commitment, decision-making authority, training, and legislative mandates (e.g. Kwon & Zmud, 1987; Shields & Young, 1989).

Drawing upon this literature, we employ the conceptual model in Fig. 1 to investigate the relations among these factors, the extent of measurement system development, and the stated objectives of governmental performance measurement initiatives (i.e. greater accountability for achieving results, enhanced decision-making, and, ultimately, improved government efficiency and effectiveness). The following sections develop our hypotheses regarding the expected relations between the various technical and organizational factors and the extent of measurement system implementation and outcomes.

Information system capabilities

Kwon and Zmud's (1987) review of the information technology (IT) implementation literature indicates that some of the key factors influencing implementation success are technological issues. These issues include the compatibility of the new system with existing systems, system complexity, and the system's relative improvement over existing systems (e.g. accuracy and timeliness). Accounting researchers have drawn upon this literature to argue that the success of management accounting innovations should also be a function of the current information system's capabilities. Krumwiede (1998), for example, suggests that organizations with higher quality information systems may be able to implement new measurement systems more easily than organizations with less sophisticated information systems because measurement costs are lower, leading to a positive relation between current information system capabilities and implementation success. Conversely, managers who are generally satisfied with the information from the existing system may be reluctant to invest the necessary resources in the new system, leading to a negative relation.
Academic studies provide mixed evidence on the influence of information system issues on accounting system innovations. Shields (1995) finds no association between successful implementation of activity-based costing (ABC) and technology (i.e. type of software or stand-alone vs. integrated system). Anderson and Young (1999) find that the perceived quality of the existing information system is negatively related to management's evaluation of ABC success. Krumwiede (1998) reports a positive association between the strength of the existing information system and an organization's decision to undertake more advanced stages of ABC adoption, but not with earlier stages.

Surveys of performance measurement innovations in the private sector, on the other hand, indicate that information system problems represent a major impediment to implementation success. Many of these problems relate to the ability of existing information systems to provide required data in a reliable, timely, and cost effective manner. Gates' (1999) study of strategic performance measurement (SPM) systems concludes that most companies' information technologies (IT) are limited in their ability to deliver rapid and consolidated results for analysis. In addition, nearly 60% of his respondents avoid using certain strategic performance measures due to limitations in their IT systems, 22% do not believe their IT systems capture data sufficiently, and 57% are forced to capture at least some SPM information manually. A survey of balanced scorecard users by Towers Perrin also finds that the lack of highly-developed information systems is a problem or major problem in 44% of scorecard implementations (Ittner & Larcker, 1998).

Small-sample field studies in the public sector report similar results (GAO, 1997a; Jones, 1993). These studies suggest that information system problems in government organizations are compounded by the need to use data collected by other organizations (e.g. other federal organizations, state and local agencies, and non-government recipients of federal funds) and difficulties ascertaining the accuracy and quality of this data. Kravchuk and Schack (1996) conclude that the intergovernmental structure of many programs results in serious measurement problems when the information systems used by different organizations vary in terms of data definitions, technology, ease of accessibility, and amount of data retained.

Fig. 1. Hypothesized conceptual model linking implementation factors, measurement system development, and system outcomes.
If these information system limitations prevent managers from receiving timely and reliable data, the performance measurement system's use for accountability and decision-making purposes is likely to be limited (Jones, 1993; Kravchuk and Schack, 1996).

These issues prompt our first hypothesis:

H1. Performance measurement development and outcomes are negatively associated with problems obtaining necessary data in a reliable, timely, and cost effective manner.

Selecting and interpreting performance metrics

A second technical issue highlighted in the performance measurement literature is the ability to define and assess metrics that capture desired actions and outcomes.² In many public and private sector settings, employees carry out many tasks that are difficult to accurately evaluate using objective, quantifiable performance metrics (e.g. basic research and development activities). In these settings, theoretical studies indicate that the implementation and effectiveness of performance measurement systems are likely to be low (e.g. Holmstrom & Milgrom, 1991), with greater emphasis placed on subjective, qualitative judgments when evaluating performance than on quantitative performance metrics (e.g. Prendergast, 1999).

² The terms performance metric and performance measure are interchangeable. We refer to performance metrics when discussing the identification, development, and interpretation of specific performance measures for evaluating managerial performance or aiding decision-making. We refer to performance measure development or performance measurement systems more generally as a collection of performance metrics that are reported on a regular basis through the organization's information systems.
Surveys of private sector measurement practices indicate that problems identifying and measuring appropriate performance metrics represent significant impediments to system success. Gates (1999) finds that the leading roadblocks to implementing strategic performance measurement systems are avoiding the measurement of "difficult-to-measure" activities (55% of respondents), measuring "the right things wrong" (29%), and measuring "the wrong things right" (29%). Similarly, the Towers Perrin survey of balanced scorecard users finds that 45% of respondents view the need to quantify qualitative results to be a major implementation problem (Ittner & Larcker, 1998).

In the public sector, empirical and theoretical studies indicate that problems selecting appropriate metrics and interpreting results often stem from four features common to many federal programs (as well as many activities in the private sector): (1) the complicated interplay of federal, state, and local government activities and objectives, (2) the aim to influence complex systems or phenomena whose outcomes are largely outside government control (e.g. programs that attempt to intervene in ecosystems, year-to-year weather, or the global economy), (3) missions that make it hard to develop measurable outcomes (e.g. prevention of a rare event such as a presidential assassination), to attribute results to a particular function (e.g. reductions in unemployment), or to observe results in a given year (e.g. basic scientific research), and (4) difficulties measuring many dimensions of social welfare or other governmental goals (e.g. Dixit, 1997; GAO, 1997a; Tirole, 1994). The GAO (1997a) argues that problems such as these can force organizations to develop performance metrics that are incomplete or uninformative in order to meet the GPRA's reporting requirements, with limited use of the resulting metrics for decision-making and accountability purposes.

These issues lead to our second hypothesis:

H2. Performance measurement development and outcomes are negatively associated with difficulties selecting and interpreting appropriate performance metrics.
Management commitment

While technical factors are expected to significantly influence the implementation of performance measurement innovations, their impact may be secondary to that of organizational factors (Shields & Young, 1989). Shields (1995), for example, argues that top management support for the innovation is crucial to implementation success because these managers can focus resources, goals, and strategies on initiatives they deem worthwhile, deny resources to innovations they do not support, and provide the political help needed to motivate or push aside individuals or coalitions who resist the innovation.
The information system change literature also highlights the role of top management support in creating a suitable environment for change, influencing users' personal stakes in the system, and increasing the appreciation of others for the potential contribution of the system to meeting organizational objectives (e.g. Doll, 1985; Manley, 1975; Schultz and Ginzberg, 1984). Consequently, employees who perceive strong support for the system by top management are more likely to view the change favorably (McGowan & Klammer, 1997).³ Top management commitment is therefore expected to influence both the extent to which employees feel accountable for results and their use of the information for decision-making.

³ A positive relation between top management's commitment to using new performance measures and their use by lower-level managers can also be explained by contagion effects, which represent the spread of a particular process or paradigm from one level of the management hierarchy to the next (Macintosh, 1985). Contagion effects can occur when lower-level managers evaluate subordinates using the same criteria used by upper-level managers to evaluate their performance (Hopwood, 1974).
The need for strong top management commitment to performance measurement is recognized in the government reform literature. The GAO (1997b) argues that results-oriented performance measurement initiatives will not succeed without the strong commitment of the US federal government's political and senior career leadership. However, Flynn (1986) notes that performance measurement initiatives are part of government efforts to cut expenditures. The implication is that efficiency improvements will lead to lower budgets, reducing incentives for top management to support performance measurement efforts. Jones (1993) adds that US executive branch officials do not want to aid Congressional oversight committees in the micro-management of executive agencies, or to assist Congress in gaining leverage over the president and his cabinet appointees. Consequently, there may be little reason for top agency management to support performance measurement efforts. Jones and McCaffery (1997) also find that Congressional knowledge of and interest in performance measurement initiatives are low, and argue that Congress, which is motivated by short-term re-election concerns, is institutionally incapable of making long-range decisions based on the performance measures mandated by the GPRA. As a result, legislators' commitment to the development and use of performance information to improve governmental accountability, efficiency, and effectiveness is also likely to be low. Thus, our third hypothesis:

H3. Performance measurement development and outcomes are positively associated with management commitment to the implementation and use of performance information.
Decision-making authority

Kwon and Zmud's (1987) review indicates that a second major organizational factor in IT implementation success is the level of worker responsibility. Anderson (1995) builds on their definition of worker responsibility to argue that individuals' reactions to management accounting change are positively related to the workers' role involvement, which she defines as "the centrality of the proposed solution to the individuals' jobs, their authority and responsibilities." Consistent with this claim, a subsequent review of ABC implementation studies identifies consistent evidence that implementation success is positively related to the relevance of the information for managers' decisions (Anderson & Young, 1999). These results suggest that managers who believe the innovation can support their decision-making activities are more likely to implement and use the measures. Conversely, managers who lack the authority to make decisions based on the new information will have little reason to embrace the innovation. These results suggest a positive relation between the level of decision-making authority, the extent of system development, and the use of performance information for decision-making.

The hypothesized link between decision-making authority and system implementation and results is also supported by economic theories, which suggest that the level of accountability must be aligned with the decision-rights granted to managers (e.g. Brickley, Smith, & Zimmerman, 1997). This requirement is recognized by government reform advocates, who argue that greater accountability can only be achieved when managers have expanded authority over spending, human resources, and other management functions. As a result, the level of accountability is expected to be positively associated with decision-making authority. However, the requirement for greater authority creates a potential impediment to increased accountability in government organizations, where laws, bureaucratic rules, and the separation of powers among different branches of government can place severe constraints on managers' decision-making authority, and thereby the extent to which they can be held accountable for results.⁴ Thus, our fourth hypothesis:

H4. Performance measurement development and outcomes are positively associated with the extent to which managers have the authority to make decisions based on the performance information.

⁴ The GPRA allows managers to propose, and the Office of Management and Budget to approve, waivers of certain non-statutory administrative requirements and controls (e.g. procurement authority or greater control over employee compensation). However, the GPRA does not provide agencies with authority to waive requirements for activities within their organizations, and does not allow any waiver of statutory requirements.
Training

A third organizational factor that is expected to influence the implementation and results of performance measurement innovations is the extent to which resources and training are provided to support the implementation (Kwon & Zmud, 1987; Shields & Young, 1989). Shields (1995) argues that training in the design, implementation, and use of a management accounting innovation allows organizations to articulate the link between the new practices and organizational objectives, provides a mechanism for employees to understand, accept, and feel comfortable with the innovation, and prevents employees from feeling pressured or overwhelmed by the implementation process. The provision of training resources also provides an indication that the organization is providing adequate resources to support the implementation, and signals management support for the innovation (Shields, 1995). If training resources are insufficient, then normal development procedures may not be undertaken, increasing the risk of failure (McGowan & Klammer, 1997).

Studies of information technology and activity-based costing implementations support these claims, finding positive associations between training investments and implementation success (Anderson & Young, 1999; Kwon & Zmud, 1987). Accordingly, our fifth hypothesis is:

H5. Performance measurement development and outcomes are positively associated with the extent of related training provided to the manager.
Legislative mandates

Institutional theory suggests a fourth organizational factor that may be particularly relevant to implementation success in government organizations: whether or not the performance measurement innovation is being implemented in response to legislative mandates or requirements (e.g. Brignall & Modell, 2000; Covaleski & Dirsmith, 1991; Gupta, Dirsmith, & Fogarty, 1994; Scott, 1987). Institutional theory argues that organizations gain legitimacy by conforming to external expectations regarding appropriate management control systems in order to appear modern, rational, and efficient to external observers, but tend to separate their internal activities from the externally-focused symbolic systems. In particular, Scott (1987) claims that in institutional environments such as government organizations, where survival depends primarily on the support of external constituents and only secondarily on actual performance, external bodies have the authority to impose organizational practices on subordinate units or to specify conditions for remaining eligible for funding. As a result, subordinate organizations will implement the required practices, but the changes will tend to be superficial and loosely tied to employees' actions.
A number of empirical studies support these theories, finding that government organizations that implement management accounting systems to satisfy legislative requirements make little use of the systems for internal purposes (Ansari & Euske, 1987; Brignall & Modell, 2000; Geiger & Ittner, 1996). Studies of previous management control initiatives in the US government (i.e. Planning, Programming, and Budgeting, Management-by-Objectives, and Zero-Base Budgeting) also indicate that these practices were used more as political strategies for controlling and directing controversy than as tools for improving accountability or decision-making (e.g. Dirsmith, Jablonsky, & Luzi, 1980). These studies suggest that the recent performance measurement mandates in the US government may increase the development of results-oriented performance measures but have little effect on accountability, use, or performance, leading to our sixth hypothesis:

H6. Performance measurement systems that are implemented to comply with the GPRA's requirements are positively associated with performance measurement development, but are not associated with greater accountability or use of performance data, or with the perceived benefits from GPRA implementation.
Measurement system development and system outcomes

Many government reform advocates contend that the mere availability and reporting of results-oriented performance information fosters improved decision-making by government managers. Consistent with our previous hypotheses, these claims imply a direct relation between measurement system development and system outcomes. Others, however, argue that these improvements only occur when the performance measures are used to increase managers' accountability for achieving objectives (e.g. Dixit, 1997; Mayston, 1993; Smith, 1990, 1993; Tirole, 1994; Whynes, 1993), thereby increasing the managers' incentives to use the information for decision-making. Taken together, these arguments prompt our final hypothesis:

H7. Performance measurement system development has positive direct effects on system outcomes, as well as indirect effects through the level of accountability for results.
Research design

Sample

We test our hypotheses using data collected by the United States General Accounting Office (GAO). The GAO survey targeted a random sample of 1300 middle- and upper-level civilian managers working in the 24 largest executive branch agencies. These agencies represented 97% of the executive branch's full-time workforce and over 99% of the federal government's net outlay in fiscal 1996. The sample was stratified by whether the manager was a member of the Senior Executive Service (SES) and whether the manager worked in an agency or agency component designated as a GPRA pilot.⁵ The questionnaire was pretested using 32 managers from four agencies and revised based on their feedback.

The survey was distributed between 27 November 1996 and 3 January 1997. Managers who did not respond to the initial mailing were sent a follow-up questionnaire. Analysis of responses to the second request revealed no significant differences from earlier responses. Usable surveys were received from 69% of the original sample.⁶ Of the 905 respondents, 108 stated that they did not have performance measures for their activities and are excluded from our tests.⁷

Our initial sample consists of the 797 remaining managers with usable responses. Final sample sizes in our tests range from 380 to 528 due to missing data.⁸

⁵ Members of the Senior Executive Service represent 44.2% of the sample and GPRA pilot sites represent 65.4%. The senior executive stratification was used to control for potential differences in responses by senior managers and lower-level managers by ensuring representative sampling of each group. Stratified sampling of GPRA pilot and non-pilot activities was used because pilot sites were expected to be further along in implementing performance measures than other agencies. The GAO excluded pilots that were designated in fiscal year 1996 because any significant initiatives would have been fairly recent and may not have been sufficiently implemented for any effects to be reflected in questionnaire responses. Most selected pilots were designated in fiscal 1994 and encompassed the entire agency or a major agency component.

⁶ Of the original sample of 1300 managers, 47 were eliminated because the individuals had retired, died, left the agency or had some other reason that excluded them from the population of interest, 22 could not be located, 23 refused to participate, 299 questionnaires were not returned, and four were returned unusable.

⁷ We exclude managers without performance measures because these managers were not required to answer many of the questions used to develop the constructs used in our analyses. A multivariate logit analysis examining whether a manager had performance measures of any kind found no differences with respect to the type of activity, number of employees, or the percentage of other activities in the same major program that had measures. Senior executives were more likely to have measures for their activities than lower-level managers. Managers with measures also reported greater accountability for achieving results than those without measures. Finally, the presence of performance measures was more likely when the manager belonged to a GPRA pilot site.

⁸ The majority of missing data relates to "no basis to judge" responses to questions. Most of the survey response scales range from 1="to no extent" to 5="to a very great extent." All of the questions offer a "no basis to judge" response. When this response relates to the respondent's own activities, we code the answer "to no extent," assuming that these topics have little or no impact on an activity if the manager has no basis to respond. In all other cases (e.g. use of performance information for decisions above the respondent's level or perceived results from performance measurement initiatives), "no basis to judge" responses are omitted from the analyses. Final sample sizes for each of the variables used in our tests are provided in Table 1.
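To make the coding rule in footnote 8 concrete, the sketch below shows one way to recode "no basis to judge" answers in a survey response table. It is a minimal illustration under assumed conventions, not the GAO's actual processing: the DataFrame, its column names, and the use of the code 9 for "no basis to judge" are hypothetical.

import numpy as np
import pandas as pd

NO_BASIS = 9  # hypothetical code for a "no basis to judge" response

def recode_no_basis(df: pd.DataFrame, own_activity_items: list) -> pd.DataFrame:
    """Apply the footnote-8 rule: for questions about the respondent's own
    activities, treat "no basis to judge" as 1 ("to no extent"); for all
    other questions, treat it as missing so the observation drops out."""
    out = df.copy()
    for col in out.columns:
        if col in own_activity_items:
            out[col] = out[col].replace(NO_BASIS, 1)
        else:
            out[col] = out[col].replace(NO_BASIS, np.nan)
    return out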
We use the manager of an individual program, project, or operation (henceforth an activity) as our unit of analysis rather than some higher unit (e.g. average responses by all managers within a major program or entire agency) for several reasons. First, many of the survey questions address individual managers' own activities, such as the extent to which respondents have performance measures for the individual programs, projects, or operations they are responsible for, the extent to which they feel accountable for results, and the extent to which they use performance information to manage their activities. Second, field research by the GAO (1997b) finds that the development of performance measures varies significantly within a given program or agency, and indicates that managers of some activities have made greater progress implementing measurement systems than others in the same organization. Finally, organizational theory suggests that individual managers are the appropriate unit of analysis because the beliefs and behaviors of individuals toward a particular innovation are shaped by their unique, individual circumstances within the organization (Anderson & Young, 1999).
Variables

The GAO survey provides substantial information on performance measurement practices and their hypothesized determinants in US government activities. Where possible, we employ multiple indicators for each construct. Factor analysis is used to reduce the dimensionality of the individual questions and minimize measurement error. The resulting multi-indicator constructs are computed using mean standardized responses to the survey questions loading greater than 0.50 on the respective factors. We assess construct reliability for the multi-item variables using factor analysis and Cronbach coefficient alphas. All of the indicator variables pertaining to a given construct load on a single factor, with coefficient alphas above the minimum level suggested by Nunnally (1967) for adequate construct reliability. Specific questions, response scales, and descriptive statistics for the constructs used in our analyses are provided in Table 1.
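The following sketch illustrates the construct-building procedure just described: average the standardized responses to the items loading on a factor, and check internal consistency with Cronbach's coefficient alpha. It is a schematic reconstruction with made-up item columns, not the authors' code.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's coefficient alpha:
    alpha = (k/(k-1)) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def construct_score(items: pd.DataFrame, reverse=()) -> pd.Series:
    """Mean standardized response across the items loading on one factor.
    Items in `reverse` are reverse-coded on the 1-5 scale first, mirroring
    the treatment of negatively worded questions (e.g. the incentives item
    in ACCOUNTABILITY)."""
    items = items.copy()
    for col in reverse:
        items[col] = 6 - items[col]                    # flip a 1-5 scale
    z = (items - items.mean()) / items.std(ddof=1)     # standardize items
    return z.mean(axis=1)                              # average into score

# Random demo data (alpha on random answers is meaningless; real items
# would be the survey responses): five questions q1..q5 on 1-5 scales.
df = pd.DataFrame(np.random.randint(1, 6, size=(200, 5)),
                  columns=[f"q{i}" for i in range(1, 6)])
measurement = construct_score(df)
print(f"alpha = {cronbach_alpha(df):.2f}")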
Measurement system development

System development is assessed using the variable MEASUREMENT, which captures the extent to which respondents have developed different types of results-oriented performance measures (where 1=to no extent and 5=to a very great extent) for the activities they are involved with, from the following list: quantity of products or services, operating efficiency, customer satisfaction, product or service quality, and measures that demonstrate to someone outside the agency whether the organization is achieving its intended results.⁹

⁹ The fact that all of the performance measure categories load on a single factor indicates that managers of activities tend to implement all of these measures together. This is consistent with theories calling for greater measurement diversity in strategic performance measurement systems, but is inconsistent with theories stating that the types of measures should be tailored to reflect the organization's strategies or the specific actions desired of agents in multitasking environments. See Ittner, Larcker, and Randall (2002) for a discussion of these theories. Additional analysis by type of activity and other contingency variables provided no additional insight into the greater combined use of all these variables. However, the performance measurement categories in the survey are consistent with the GPRA's requirements for output, service level, and outcome measures for each activity. Consequently, the greater implementation of measures related to each of these categories may reflect efforts to meet the Act's requirements.
Table 1
Summary statistics for the survey questions used to develop the measurement system development, system outcome, and implementation factor variables

Construct and survey items | Mean | Std. Dev. | % Great or Very Great Extent ᵇ

MEASUREMENT (n=757; coefficient α=0.87)
To what extent do you have the following performance measures for your activities? ᵃ
1. Quantity of products or services provided | 3.63 | 1.15 | 60.8
2. Operating efficiency | 3.25 | 1.16 | 44.7
3. Customer satisfaction | 3.22 | 1.20 | 45.2
4. Quality of products or services provided | 3.25 | 1.16 | 46.6
5. Measures demonstrating to external parties whether or not you are achieving intended results | 3.36 | 1.14 | 51.2

ACCOUNTABILITY (n=744; coefficient α=0.70)
To what extent do you agree with the following statements? ᵃ
1. Managers at my level are held accountable for the results of their activities | 3.59 | 1.02 | 59.8
2. Employees in my agency receive positive recognition for helping the agency accomplish strategic goals | 3.07 | 1.05 | 36.1
3. The individual I report to periodically reviews my activity's results with me | 3.26 | 1.20 | 47.6
4. Lack of incentives (e.g. rewards, positive recognition) has hindered using performance information (reverse coded in the construct) | 2.61 | 1.23 | 24.7

MGR USE (n=738; coefficient α=0.93)
To what extent do you use performance measurement information for the following activities? ᵃ
1. Setting program priorities | 3.82 | 1.03 | 68.8
2. Allocating resources | 3.75 | 1.07 | 66.0
3. Adopting new program approaches or changing work processes | 3.78 | 1.04 | 66.9
4. Coordinating program efforts with other internal or external organizations | 3.59 | 1.08 | 59.6
5. Refining program performance measures | 3.67 | 1.12 | 61.9
6. Setting new or revising existing performance goals | 3.74 | 1.09 | 65.6
7. Setting individual job expectations for government employees I manage or supervise | 3.68 | 1.09 | 64.5
8. Rewarding government employees I manage or supervise | 3.62 | 1.12 | 60.1

HIGHER USE (n=624; coefficient α=0.87)
To what extent do you agree with the following statements? ᵃ
1. Results-oriented performance information from my activities is used to develop my agency's budget | 2.92 | 1.15 | 28.9
2. Funding decisions for my activities are based on results-oriented performance information | 2.78 | 1.12 | 23.5
3. Changes by management above my level are based on results-oriented performance information | 2.68 | 1.14 | 23.1

RESULTS TO DATE (n=501)
1. To what extent do you believe that your agency's efforts to implement GPRA to date have improved your agency's programs/operations/projects? ᵃ | 2.45 | 1.03 | 13.7

FUTURE RESULTS (n=596)
1. To what extent do you believe that implementing GPRA can improve your agency's programs/operations/projects in the future? ᵃ | 3.08 | 1.10 | 34.7

DATA LIMITATIONS (n=685; coefficient α=0.84)
To what extent have the following factors hindered measuring performance or using performance information? ᵃ
1. Difficulty obtaining valid or reliable data | 3.00 | 1.23 | 38.1
2. Difficulty obtaining data in time to be useful | 2.80 | 1.23 | 29.6
3. High cost of collecting data | 2.60 | 1.26 | 25.0
4. Existing information technology not capable of providing needed data | 2.61 | 1.26 | 26.6

METRIC DIFFICULTIES (n=701; coefficient α=0.81)
To what extent have the following factors hindered measuring performance or using performance information? ᵃ
1. Difficulty determining meaningful measures | 3.36 | 1.21 | 48.1
2. Results of our program(s)/operation(s)/project(s) occurring too far in the future to be measured | 2.39 | 1.24 | 19.6
3. Difficulty distinguishing between the results produced by the program and results caused by other factors | 2.68 | 1.17 | 23.3
4. Difficulty determining how to use performance information to improve the program | 2.48 | 1.12 | 18.5
5. Difficulty determining how to use performance information to set new or revise existing performance goals | 2.45 | 1.13 | 19.0

COMMITMENT (n=611; coefficient α=0.65)
1. To what extent does your agency's top leadership demonstrate a strong commitment to achieving results? ᵃ | 3.61 | 1.19 | 62.8
2. To what extent has the lack of ongoing top executive commitment or support for using performance information to make program/funding decisions hindered measuring performance or using performance information? ᵃ (reverse coded in the construct) | 2.30 | 1.25 | 18.9
3. To what extent has the lack of ongoing congressional commitment or support for using performance information to make program/funding decisions hindered measuring performance or using performance information? ᵃ (reverse coded in the construct) | 2.66 | 1.41 | 31.7

AUTHORITY (n=765)
1. Agency managers at my level have the decision making authority needed to help the agency accomplish its strategic goals ᵃ | 3.07 | 1.07 | 37.3

TRAINING (n=747)
During the past 3 years, has your agency provided, arranged, or paid for training that would help you to accomplish the following tasks? (1=yes, 0=no)
1. Conduct strategic planning | 0.50 | 0.50 | n/a
2. Set program performance goals | 0.46 | 0.50 | n/a
3. Develop program performance measures | 0.42 | 0.49 | n/a
4. Use program performance information to make decisions | 0.38 | 0.48 | n/a
5. Link the performance of program(s)/operation(s)/project(s) to the achievement of agency strategic goals | 0.40 | 0.49 | n/a

GPRA INVOLVEMENT (n=756; coefficient α=0.91)
To what extent have you and your staff been involved in your agency's efforts in implementing GPRA? ᵃ
1. Your involvement | 2.48 | 1.31 | 23.5
2. Your staff's involvement | 2.19 | 1.28 | 17.3

ᵃ Scale: 1=no extent, 2=small extent, 3=moderate extent, 4=great extent, 5=very great extent. Reported sample sizes and coefficient alphas are for observations with responses to all of the questions used to compute the respective constructs.
ᵇ The percentage of respondents answering "to a great extent" or "to a very great extent".
System outcomes

We evaluate system outcomes using three constructs capturing the stated objectives of governmental performance measurement efforts: greater accountability, enhanced decision-making, and improved governmental performance.¹⁰

¹⁰ Our outcome variables are similar to those used to evaluate the success of activity-based costing implementations. See, for example, Foster and Swenson (1997) and Anderson and Young (1999).

Four questions measure the extent to which managers feel they are held accountable for results. Respondents were asked to rate the following
statements on a five-point scale (where 1=to no extent and 5=to a very great extent): (1) managers at my level are accountable for the results of the program(s)/project(s)/operation(s) they are responsible for, (2) employees in my agency receive positive recognition for helping the agency accomplish its strategic goals, (3) the individual I report to periodically reviews with me the results or outcomes of the program(s)/project(s)/operation(s) I am responsible for, and (4) the lack of incentives (e.g. rewards or positive recognition) has hindered using performance information. The last question is reverse-coded when developing the construct.
Eleven questions address the use of performance measures. Factor analysis with oblique rotation indicates that these questions represent two underlying constructs. Eight questions loading greater than 0.50 on the first factor reflect lower-level uses related to the managers' own activities (denoted MGR USE). These questions ask the extent to which respondents use performance information for the activities they are involved with when: (1) setting program priorities, (2) allocating resources, (3) adopting new program approaches or changing work processes, (4) coordinating program efforts with other internal or external organizations, (5) refining program performance measures, (6) setting new or revising existing performance goals, (7) setting individual job expectations for subordinates, and (8) rewarding subordinate government employees.

Three questions loading greater than 0.50 on the second factor emphasize higher-level uses of performance information (denoted HIGHER USE). These questions address the extent to which performance information is used to develop the agency's budget, make funding decisions, and make management changes above the respondent's organizational level.
Finally, we examine the benefits from the US government's recent performance measurement mandates using two questions on the perceived results from the Government Performance and Results Act. While government reform advocates contend that the GPRA's externally-imposed reporting practices will improve governmental performance (particularly in the presence of greater accountability), institutional theory argues that mandated practices will have little effect on governmental performance regardless of the extent of system implementation. The two questions ask the extent to which respondents believe that efforts to implement the GPRA have improved their organizations' activities to date (denoted RESULTS TO DATE), or will improve them in the future (denoted FUTURE RESULTS). Since many respondents were not sufficiently involved in GPRA efforts to have an opinion on its current effects, we treat each question separately.
Implementation factors

Following Kwon and Zmud (1987), Shields and Young (1989), and others, we examine both technical and organizational influences on the measurement system outcome variables. The variables used to measure the hypothesized implementation factors are discussed below.

Data limitations and metric difficulties. The survey contains 11 questions on potential factors hindering performance measurement and management. Consistent with discussions in the performance measurement literature, factor analysis with oblique rotation reveals two underlying dimensions with eigenvalues greater than one.¹¹ Four questions loading greater than 0.50 on the first factor (denoted DATA LIMITATIONS) emphasize limitations in existing information systems' ability to provide required data. These questions address difficulties obtaining valid or reliable data, difficulties obtaining data in time to be useful, the high cost of collecting data, and the inability of existing information systems to provide the needed data.

¹¹ Questions concerning implementation problems were only asked of respondents who had performance measures for their activities. Two questions about (1) different parties using different definitions to measure performance, and (2) difficulty resolving conflicting interests of internal and/or external stakeholders did not load 0.50 or greater on any factor. These questions are not included in our analyses.
Five questions loading greater than 0.50 on the second factor (denoted METRIC DIFFICULTIES) relate to problems defining and interpreting performance metrics. The questions ask managers the extent to which they have experienced difficulties determining meaningful measures, associating their activities with future results, distinguishing results due to their activities from other factors, and determining how to use performance information to improve activities or set goals.
Management commitment. We develop the construct COMMITMENT to measure the extent to which top leadership is committed to achieving results via performance measurement. COMMITMENT is based on three questions: (1) to what extent does the agency's top leadership demonstrate a strong commitment to achieving results, (2) to what extent has the lack of ongoing top executive commitment to using performance information to make program/funding decisions hindered measuring performance or using performance information, and (3) to what extent has the lack of ongoing congressional commitment to using performance information to make program/funding decisions hindered measuring performance or using performance information. The latter two questions are reverse-coded when computing the construct.
Decision-making authority. The level of decision-making authority (denoted AUTHORITY) is assessed using responses to a single question asking whether managers at the respondent's level have the decision-making authority they need to help the agency accomplish its strategic goals.
Training. Respondents were asked whether they have received training to accomplish the following measurement-related tasks: (1) conduct strategic planning, (2) set program performance goals, (3) develop program performance measures, (4) use program performance information to make decisions, and (5) link the performance of program(s)/operation(s)/project(s) to the achievement of agency strategic goals. We code each response one if the agency provided training in that task, and zero otherwise. The construct TRAINING represents the sum of the individual responses.
Legislative mandates. We proxy for the effects of legislative mandates on performance measurement implementation using an indicator variable for GPRA pilot sites. The GAO (1997b) argues that pilot sites are likely to have more highly developed measurement systems than other sites due to their earlier efforts to meet the GPRA's mandate for results-oriented performance measures. However, the GAO makes no assessment of whether this information is actually used to improve accountability or decision-making. The variable PILOT is coded one if the activity was part of a GPRA pilot, and zero otherwise.
Control variables

We include two control variables in our tests. Our first control is an indicator variable for members of the Senior Executive Service (denoted SES). This variable is included to control for potential differences in responses between senior and lower-level managers. We also include a second control variable in models examining perceived GPRA benefits to account for potential biases in responses by those participating in the implementation process. GPRA INVOLVEMENT represents the average standardized response to two questions on the involvement of managers and their staff in GPRA implementation efforts.¹²

¹² To examine the robustness of our results to model specification, we repeated the analyses using a number of other control variables, including the natural logarithm of the number of employees in the activity (a size control), the type of activity managed by the respondent (internal agency efforts, federal government-wide support, research and development, service delivery, and other), and a program control for organizational effects on the managers' responses (measured using the average response by other managers in the same program). These controls had virtually no effect on our results and are excluded from the reported models.
Descriptive statistics

Descriptive statistics are provided in Table 1.¹³ The most highly-developed measures are volume indicators, with 60.8% of managers having these measures to a great or very great extent. The least developed measures relate to operating efficiency, with only 44.7% of managers having these measures to a great or very great extent.

¹³ Although average standardized responses are used to compute some of the constructs, we report unstandardized responses in Table 1 to provide insight into the performance measurement practices in our sample. Means (standard deviations) for the standardized constructs are −0.002 (0.182) for MEASUREMENT, 0.048 (0.700) for ACCOUNTABILITY, 0.005 (0.830) for MGR USE, 0.100 (0.873) for HIGHER USE, 0.006 (0.821) for DATA LIMITATIONS, 0.007 (0.753) for METRIC DIFFICULTIES, 0.021 (0.764) for COMMITMENT, and 0.461 (0.498) for GPRA INVOLVEMENT.
Almost 60% of respondents feel that managers at their level are held accountable for results to a great or very great extent. However, fewer than half (47.6%) note that their superior extensively reviews their results with them on a periodic basis. Less than a quarter believe that the lack of incentives has severely hindered using performance information.

Between 59.6 and 68.8% of the respondents report using performance measures extensively for managerial purposes, depending upon the type of use. There is considerably lower perceived use of performance measures for higher-level decisions. Only 28.9% believe that results-oriented performance information has a major influence on budgets, the most extensive higher-level use. The least common use of performance information is for program, operation, or project changes by upper-level management, with only 23.1% of managers believing that upper-level management extensively uses the performance information for these purposes.

Most managers rate the benefits from GPRA implementation relatively low. Only 13.7% feel that the GPRA has improved agency performance to a great or very great extent to date, with 34.7% feeling it will have a great or very great impact in the future. In contrast, 52.3% believe the GPRA has had little or no impact to date, while 29.9% believe its impact will be small to nonexistent in the future (not shown in the table).
Correlations

Table 2 provides Spearman correlations among the variables used in our study. More than 75% of the associations are significant at the 5% level or better (two-tailed).¹⁴ Performance measure development, accountability, and uses are positively related to each other, negatively related to data and metric problems, and positively related to the extent of management commitment, decision-making authority, and training. These variables are also positively related to whether the manager is a senior executive (SES) and the extent of GPRA involvement.

The perceived benefits of GPRA-related activities (both to date and in the future) are positively associated with performance measure development, accountability, and use. Organizations that demonstrate a strong commitment to results are also more likely to allow greater decision-making authority, to provide more training, to have a greater proportion of senior executive respondents, and to have greater GPRA involvement.

¹⁴ Pearson correlations are virtually identical and are available from the authors upon request. Despite the significant correlations, all Variance Inflation Factor (VIF) scores are below 2.5, indicating no serious problems with multicollinearity in subsequent regression models.
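A correlation matrix in the style of Table 2 can be reproduced with pairwise Spearman correlations and two-tailed P-values. The sketch below is illustrative only; the DataFrame columns are assumed to be the construct scores defined earlier.

import pandas as pd
from scipy import stats

def spearman_table(df: pd.DataFrame) -> pd.DataFrame:
    """Lower-triangular Spearman correlations with two-tailed significance
    stars, roughly in the format of Table 2."""
    cols = df.columns
    table = pd.DataFrame("", index=cols, columns=cols)
    for i, a in enumerate(cols):
        for b in cols[:i + 1]:
            pair = df[[a, b]].dropna()                 # pairwise deletion
            rho, p = stats.spearmanr(pair[a], pair[b])
            stars = ("***" if p < .01 else "**" if p < .05
                     else "*" if p < .10 else "")
            table.loc[a, b] = f"{rho:.2f}{stars}"
    return table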
Results

Performance measure development

Table 3 provides evidence on the determinants of results-oriented performance measure development. Due to missing responses for some of the variables, the sample size is 528 in this analysis. The resulting regression is highly significant, with an adjusted R² of 30%.
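Although the paper does not print the estimating equation, the text implies an OLS regression of MEASUREMENT on the two technical factors, the three organizational factors, PILOT, and SES. The sketch below shows that specification in statsmodels, together with the VIF diagnostics mentioned in footnote 14; the column names are ours and the code is a reconstruction, not the authors' model.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

PREDICTORS = ["DATA_LIMITATIONS", "METRIC_DIFFICULTIES", "COMMITMENT",
              "AUTHORITY", "TRAINING", "PILOT", "SES"]

def fit_measurement_model(data: pd.DataFrame):
    """OLS of MEASUREMENT on the technical, organizational, and control
    variables, with VIFs for the predictors as a multicollinearity check."""
    d = data[["MEASUREMENT"] + PREDICTORS].dropna()
    X = sm.add_constant(d[PREDICTORS])
    model = sm.OLS(d["MEASUREMENT"], X).fit()
    vifs = pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
        index=X.columns[1:], name="VIF")
    return model, vifs  # model.rsquared_adj gives the adjusted R-squared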
Most of the results support our hypotheses.¹⁵
Metric difficulties (i.e. difficulties determining meaningful measures, results occurring too far into the future to be measured, difficulties distinguishing between results produced by the program and results caused by other factors, and difficulties determining how to use performance information to improve the program or to set new or revise existing performance goals) significantly dampen the extent of performance measure development. Top management commitment, decision-making authority, and the level of training provided to managers all exhibit significant positive associations with performance measure development.

¹⁵ One-tailed tests are used for all of the variables with hypothesized signs, and two-tailed tests are used for control variables. Variables with P-values of 0.05 or less are considered statistically significant.
Table 2
Spearman correlations among the implementation factor, measurement system development, and system outcome variables

                          1        2        3        4        5        6        7        8        9        10       11      12      13      14
1. MEASUREMENT            1.00
2. ACCOUNTABILITY         0.47***  1.00
3. MGR USE                0.54***  0.40***  1.00
4. HIGHER USE             0.47***  0.47***  0.39***  1.00
5. RESULTS TO DATE        0.39***  0.29***  0.39***  0.47***  1.00
6. FUTURE RESULTS         0.14***  0.09**   0.25***  0.24***  0.60***  1.00
7. DATA LIMITATIONS      −0.21*** −0.24*** −0.09**  −0.07*   −0.01     0.14***  1.00
8. METRIC DIFFICULTIES   −0.41*** −0.37*** −0.32*** −0.23*** −0.24*** −0.06     0.57***  1.00
9. COMMITMENT             0.38***  0.58***  0.29***  0.41***  0.30***  0.11**  −0.28*** −0.44***  1.00
10. AUTHORITY             0.39***  0.58***  0.33***  0.46***  0.37***  0.17*** −0.10*** −0.22***  0.44***  1.00
11. TRAINING              0.29***  0.24***  0.23***  0.30***  0.31***  0.14***  0.02    −0.12***  0.24***  0.25***  1.00
12. PILOT                 0.09*    0.01     0.02     0.06*    0.07*    0.01     0.004   −0.01    −0.03    −0.03     0.03    1.00
13. SES                   0.18***  0.14***  0.16***  0.13***  0.06    −0.02     0.01    −0.06*    0.23***  0.21***  0.24*** 0.02    1.00
14. GPRA INVOLVEMENT      0.35***  0.24***  0.27***  0.29***  0.42***  0.22***  0.07*   −0.06     0.25***  0.30***  0.39*** 0.14*** 0.46*** 1.00

MEASUREMENT=the extent to which results-oriented performance measures have been developed and implemented; ACCOUNTABILITY=the extent to which managers are held accountable for achieving results; MGR USE=the use of performance data by managers for decision-making; HIGHER USE=the use of performance information for higher-level agency or funding decisions; RESULTS TO DATE=the perceived extent the Government Performance and Results Act (GPRA) has positively influenced agency performance; FUTURE RESULTS=the perceived extent the GPRA will positively influence agency performance in the future; DATA LIMITATIONS=the extent information system or data problems hinder performance measurement; METRIC DIFFICULTIES=the extent problems identifying, developing, and assessing appropriate performance metrics hinder performance measurement; COMMITMENT=management commitment to performance measurement; AUTHORITY=respondents' decision-making authority; TRAINING=training in performance measurement and use of performance information; PILOT=GPRA pilot site; SES=member of the Senior Executive Service; and GPRA INVOLVEMENT=the extent the respondent or staff is involved in implementing the GPRA's requirements. ***, **, * indicate statistical significance at the 1, 5, and 10% levels (two-tailed), respectively.
Moreover, GPRA pilot sites have performance measures to a greater extent than non-pilots, indicating that efforts to meet the Act's requirements have increased measurement system development.

One result that differs from our hypotheses is the insignificant relation between data limitations (i.e. difficulties obtaining valid or reliable data, difficulties obtaining data in time to be useful, and the high cost of data collection) and the development of performance measurement systems. Contrary to Hypothesis H1, data limitations do not appear to affect measurement system development. The coefficient on SES is also statistically insignificant, indicating that measurement system development is no higher for senior executives' activities than for lower-level activities.

One limitation to the preceding analysis is the assumption that the various technical and organizational factors independently influence the extent of performance measurement development. However, these factors may interact to impact the development of results-oriented performance measures. Given the large number of potential interactions and limited theory on how these factors interrelate, we employ an exploratory technique called CHAID (Chi-squared Automatic Interaction Detection) to examine whether interactions among the predictor variables have significant effects on measurement development. CHAID modeling selects a set of predictors and their interactions that optimally predict the dependent variable. The technique assesses whether sequentially splitting the sample based on the predictor variables leads to a statistically significant discrimination (P
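To convey the flavor of the CHAID procedure, the sketch below implements one step of a chi-squared split search: each candidate predictor is cross-tabulated against the (binned) dependent variable, and the most significant split is retained. Real CHAID implementations also merge categories and apply Bonferroni corrections; this simplified, hypothetical version only illustrates the core idea.

import pandas as pd
from scipy.stats import chi2_contingency

def best_chaid_split(y: pd.Series, X: pd.DataFrame, alpha: float = 0.05):
    """One CHAID-style step: find the predictor whose split of the sample
    discriminates the (binned) outcome with the smallest significant
    chi-squared P-value. Returns (predictor, P-value, chi2) or None."""
    best = None
    y_bins = pd.qcut(y, 3, labels=False, duplicates="drop")  # bin outcome
    for col in X.columns:
        x = X[col]
        # Bin continuous predictors; keep coarse categoricals as-is.
        x_bins = (pd.qcut(x, 3, labels=False, duplicates="drop")
                  if x.nunique() > 5 else x)
        table = pd.crosstab(x_bins, y_bins)
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue                          # no usable split
        chi2, p, dof, _ = chi2_contingency(table)
        if p < alpha and (best is None or p < best[1]):
            best = (col, p, chi2)
    return best
# Recursing on each resulting subsample yields the interaction tree.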