Developing performance-measurement systems as enabling formalization: A longitudinal field study of a logistics department

Marc Wouters *, Celeste Wilderom

University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
Abstract

This paper reports on a developmental approach to performance-measurement systems (PMS). In particular, we look at characteristics of a development process that result in the PMS being perceived by employees as enabling of their work, rather than as primarily a control device for use by senior management. We will refer to such a PMS as "enabling PMS". The theoretical part of the study builds on ideas of enabling versus coercive formalization [Adler, P. S., & Borys, B. (1996). Two types of bureaucracy: Enabling and coercive. Administrative Science Quarterly 41 (March), 61–89]; on notions of organizational learning (e.g., [Zollo, M., & Winter, S. G. (2002). Deliberate learning and the evolution of dynamic capabilities. Organization Science 13(3), 339–351]); and on awareness of the incompleteness of performance measures (e.g., [Chapman, C. S. (1997). Reflections on a contingent view of accounting. Accounting, Organizations and Society 22, 189–205; Lillis, A. M. (2002). Managing multiple dimensions of manufacturing performance—An exploratory study. Accounting, Organizations and Society 27, 497–529]). The empirical context entails a mixed-method, 3-year longitudinal study of the logistics department of a medium-sized company in the beverage manufacturing industry. Qualitative data were gathered through interviews, participation in meetings, action research, and review of company documents. We also analyzed two waves of quantitative survey data, gathered from a panel of 42 employees. We find that a development process that is experience-based contributes to the enabling nature of the PMS, as it builds on existing skills, local practices, and know-how on performance measurement to enrich the PMS step-by-step over time. Also, experimentation with specific performance measures was found to enhance the enabling nature of the PMS: testing, reviewing, and refinement of conceptualizations, definitions, data, and presentations of new performance measures. Professionalism was significantly related to positive attitude toward performance measures in our survey data. The results also illustrate that transparency of the PMS itself is key to enabling PMS.

© 2007 Elsevier Ltd. All rights reserved.
0361-3682/$ - see front matter © 2007 Elsevier Ltd. All rights reserved.
doi:10.1016/j.aos.2007.05.002
* Corresponding author. Tel.: +31 53 4894498; fax: +31 53 4892159. E-mail address: [email protected] (M. Wouters).
Available online at www.sciencedirect.com
Accounting, Organizations and Society 33 (2008) 488–516
www.elsevier.com/locate/aos
Introduction
Performance-measurement systems (PMS) are mostly studied from the perspective of top-management: how they allow top managers to monitor whether given objectives have been achieved. In addition, a PMS may also help top managers to formulate strategy, specify operational actions needed for implementation, set targets in relation to current performance (so as to reveal priorities for operational improvement), and clarify mutual expectations (Abernathy & Brownell, 1999; Bisbe & Otley, 2004; de Haas & Algera, 2002; Simons, 1990, 1991, 1994, 1995). However, what about the managers who are the subject of PMS—whose performance is being measured? There are few accounts in the PMS literature where lower- and middle-level employees and managers consider a PMS as something that supports them, that they can use for their own purposes to assess how things are going, identify problems, prioritize issues, develop ideas for improvement, engineer solutions for concrete problems, or make decisions (Jönsson & Grönlund, 1988). We refer to an "enabling PMS" when it is perceived by employees as enabling of their work, rather than as primarily a control device for use by senior management.
This study investigates performance-measurement systems in operations, closely connected to the specifics of particular operational processes. Building on Adler and Borys (1996), we conceive of a PMS as a form of formalization. Coercive formalization aims to force employee compliance, while enabling formalization makes employees feel facilitated or motivated by the rules and the systems in place. Adler and Borys (1996) contrast enabling and coercive types of formalization along three dimensions: (1) characteristics of the system, (2) the process of designing the system, and (3) the implementation of the system. These dimensions are relevant for understanding the role of management control in organizations, as demonstrated by Ahrens and Chapman (2004). While their study primarily focused on (1) characteristics of the system—in terms of repair, transparency and flexibility—we focus on points (2) and (3): the development process for designing and implementing the performance-measurement system. We expect that the manner by which the development process is carried out affects the extent to which the PMS will be perceived by employees as enabling. In this paper we address the question: Which characteristics of a PMS development process enhance the enabling nature of the PMS?
Previous research has shown that developing an enabling PMS is a delicate process. Townley, Cooper, and Oakes (2003), for example, demonstrated that while the introduction of performance measures may begin as an initiative considered by various levels in the organization to be nuanced, supporting and constructive, the process may easily derail: "From an initial discourse that emphasized a potential for reasoned justification, debate and dialogue quickly collapsed into a standard template" (Townley et al., 2003, p. 1058). Qu (2006) found that consultants considered the incorporation of client input—and especially information on existing reports and specific measures already in use—crucial for the production of a usable PMS. Failure to include such input was a major source of frustration for participants in the development process (Qu, 2006). Yet, there is little empirical knowledge about what kind of a development process fosters an enabling PMS. "The balanced scorecard literature also indicates that it [is] as much the process of establishing a scorecard that yields benefit as the resultant measurement schema. However, the literature is remarkably silent on this point" (Otley, 1999, p. 377, emphasis added).
This study aims to contribute to the literature by theoretically and empirically investigating characteristics of a PMS development process that enhance the enabling nature of the PMS. We build on the framework of Adler and Borys (1996), who propose that user involvement and professionalism contribute to enabling formalization. We develop these ideas further in the context of PMS. We consider the inherent incompleteness of PMS, in terms of the inability to reflect the various dimensions of operational performance and the tradeoffs among these (Lillis, 2002); therefore, user involvement needs to be mobilized, both in terms of existing experience with quantification of performance and throughout the design and implementation process of new measures. Design and implementation include several activities, such as shaping and further improving the best-fitting definitions of useful performance measures; finding or creating measurement data for determining the actual values of these performance measures; building information systems for reporting performance-measurement results; setting target levels for performance measures; and periodically reviewing, revising and refining both single measures and the overall PMS. We look at such activities from the perspective of how organizations can learn by carefully building on and reusing existing experiences (cf. Zollo & Winter, 2002), and experimenting and prototyping with new practices (cf. Carlile, 2002). "Design" and "implementation" are hard to distinguish (Adler & Borys, 1996), and we prefer to combine them and use the phrase "development process". This reflects that design and implementation activities are conducted in a mutually constitutive, iterative fashion: employees learn through implementation, on the basis of which they adjust the design of the PMS, which leads to new implementation activities, etc. Such an approach assumes a considerable level of professionalism.
The empirical findings are based on a 3-year longitudinal case study within the logistics department of a medium-sized company in the beverage manufacturing industry. We gathered both survey and qualitative data. The research includes not only observation of the company's activities, but also elements of action research, since we were involved in the development of a departmental PMS.
The structure of this paper is as follows. Performance measures as either coercive or enabling formalization are introduced in the next section. Our propositions regarding the characteristics of a developmental process that contributes to an enabling PMS are then put forward, followed by a description of the research methods. Empirical results are presented and discussed next, and the conclusion is given in the final section.
Performance measures as enabling or coercive formalization
Traditionally, performance measures in operations put a one-sided emphasis on minimizing direct costs through low material costs, high capacity utilization, and high direct labor efficiency. However, early research identified the need to broaden performance-measurement systems to support new operations practices, and advocated the use of measures for quality, throughput times, flexibility, etc. (Beamon, 1999; Eccles, 1991; Hall, Johnson, & Turney, 1990; Kaplan, 1983, 1990; Maskell, 1991; Nanni, Dixon, & Vollmann, 1992). Empirical studies have supported relationships between the pursuit of specific operational strategies and the expansion of traditional efficiency-focused PMS to include new performance measures (e.g., Abernathy & Lillis, 1995; Baines & Langfield-Smith, 2003; Banker, Potter, & Schroeder, 1993; Fullerton & McWatters, 2002; Perera, Harrison, & Poole, 1997; Maiga & Jacobs, 2005).
But despite the broadening of PMS—in both research and practice—to embrace a wider portfolio of measures, the approach to developing the PMS has received far less attention in empirical studies. In this section, we will first discuss issues in regard to the incompleteness of performance-measurement systems, and thereafter we will introduce more specifically the control ideas laid out by Adler and Borys (1996).
Incompleteness of a PMS
Incompleteness of PMS arises when strategic performance measures are disaggregated into different performance dimensions, separate periods and organizational sub-units, and the dependencies between disaggregated measures are not reflected in the PMS (Lillis, 2002). For example, attempts to improve responsiveness may lead to more frequent changeovers, demands for shorter lead times, and higher inventories, and when such tradeoffs are inadequately reflected in the PMS there is a likely "friction created by the failure to determine and adjust for the implications of profit centre strategy on the manufacturing cost function" (Lillis, 2002, p. 510). Designing a perfectly complete PMS remains challenging, if not impossible, and would require nothing less than the expression of all relevant aspects of performance in quantitative terms (financial and non-financial), estimation of the tradeoffs among such dimensions of performance in the setting of targets for financial and non-financial performance measures, and consideration of interdependencies between different organizational units (and different time periods) in the PMS (see, e.g., Lillis, 2002).
And the greater the incompleteness, the more the PMS may be perceived by functional sub-units as a "negative", "unfair", "threatening", or "coercive" instrument of management control. Malina and Selto (2001) found that perceptions of PMS were more negative if measures were inaccurate or subjective, and if benchmarks were considered inappropriate but nevertheless used for evaluation. In other words, employees may feel that their performance "as measured" (by the metrics) does not truthfully reflect what they see as their "real" contribution to the organization. For example, they may find it unfair that contingencies (uncontrollable circumstances), materializing after targets have been set, are not considered for adjusting those targets; employees may not believe their supervisors use the PMS in a fair way for evaluating their performance; employees may regard target levels as overly ambitious and unrealistic; or they may feel their personal risk has increased too much because of consequences that are tied to PMS results.
Several studies have found evidence of the relationship between the use of controls and defensive behavior—such as negotiating targets towards more easily achievable levels, obtaining surplus resources for completing tasks, concealing windfalls that have made tasks easier than anticipated, or even taking operational decisions just to make the results "as measured" look good at the expense of negative long-term effects—sometimes moderated by variables such as measurability of outputs, the extent to which input–output relationships of processes are understood, and the style in which the controls are used (e.g., Carmona & Grönlund, 2003; Chow, Kato, & Merchant, 1996; Jaworski & Young, 1992; Ramaswami, 1996, 2002; Van der Stede, 2000).
Several studies have identified ways in which firms manage incompleteness of PMS. Lillis (2002) found that firms sometimes loosened control reactions to variances, implemented more innovative PMS, integrated the PMS with other management systems, or used measurement weightings. Davila and Wouters (2005) described a firm that designed a budgeting system that reduced emphasis on cost targets and provided budgetary slack when performance attributes other than costs required attention. Van der Stede (2000) found that firms balanced the strictness of controls with a business unit's strategy. Business units following a differentiation strategy implemented less rigid budgetary control, which allowed for some budgetary slack and stimulated managers to think long term.
Enabling formalization
Incompleteness explains why designing and implementing PMS in operations is difficult and requires a deliberate and careful approach. For developing our propositions about a development process that is likely to enhance the enabling nature of the PMS, we build on the framework of Adler and Borys (1996). First, because this framework conceptualizes the issue that is central to our accounting study: the distinction between performance-measurement systems that only serve higher-management needs and control employees' behavior (coercive formalization), versus systems that support employees to do their work better (by providing feedback, identifying problems, revealing improvement opportunities, helping to prioritize action, etc.): enabling formalization. Second, because this framework helps to articulate that characteristics of the system itself, as well as processes for design and implementation of the system, may contribute to the coercive or enabling nature of formalization. Third, because Adler and Borys (1996) offer initial suggestions about what kind of design and implementation process is likely to foster the enabling nature of formalization, and so it helps to delineate our intended contribution: to draw on the organizational literature as well as the empirical material to further develop the understanding of enabling PMS development.
Adler and Borys (1996, p. 66) propose that "employees' attitudes to formalization depend on the type of formalization with which they are confronted". They suggest that employee attitudes are more positive when formalization enables them to better master their tasks, and more negative when it "functions as a means by which management attempts to coerce employees' effort and compliance". Enabling formalization mobilizes rather than replaces employees' intelligence, and acts to "help users form a mental model of the system they are using" (p. 70). As such, these kinds of "procedures provide organizational memory that captures lessons learned from experience" (p. 69).

It is thus relevant to better understand how organizations may achieve enabling formalization. Adler and Borys (1996) suggest that whether formalization has an enabling or coercive character depends on characteristics of the formalization as well as on the process of designing and implementing the system. These characteristics of formalization are internal and global transparency, and flexibility and repair. We will discuss these first before we begin our analysis of a development process conducive to enabling formalization. Internal transparency means that users have a good understanding of the logic of a system's internal functioning and they have information on its status. Enabling formalization provides users with a clear understanding of the underlying rationale for why certain control mechanisms are in place. Such formalization also codifies best-practice experiences, and users are provided with feedback on their performance. Global transparency refers to the intelligibility for employees of the broader system and context within which they do their work. Controls are designed to afford employees an understanding of where their own tasks fit into the whole. Information from beyond one's specific domain is available. Flexibility means that users can make controlling decisions after enabling systems have provided information. "Flexible systems encourage users to modify the interface and add functionality to suit their specific work demands" (p. 74). Repair means that users can mend and improve the work process themselves rather than allowing breakdowns and other non-programmable events to force the work processes to a halt. We refer to Ahrens and Chapman (2004), who discuss these characteristics of enabling formalization in the context of management control systems.
As mentioned above, Adler and Borys (1996) also contrast enabling and coercive types of formalization in terms of the processes of designing and implementing the system. They discuss some of the characteristics of these processes that are likely to lead to enabling formalization, such as employee voice, employee skills, process control, and flexibility in changing controls. They propose that "employee involvement in the formulation of procedures is likely to have a positive effect on both attitudinal and technical outcomes" (p. 75). Principles for the design of equipment technology, they suggest—such as a focus on users and usability, early and continual user testing, and iterative design processes—carry over to the development of formalization as "organizational technology". However, the design and implementation of formalization are typically intertwined: while equipment may be bought "off the shelf", customized from existing modules, or designed-to-specification outside the client organization, "organizational technology" takes shape within the specific implementation context. Adler and Borys (1996) call for more research to explore whether and how organizations can introduce enabling types of formalization. We will build on their framework and develop their ideas further specifically in the context of PMS development.
Propositions about a developmental approach for enabling PMS
This section sets out three propositions in response to our research question: "Which characteristics of a PMS development process enhance the enabling nature of the PMS?" We propose that a development process that is characterized by (1) being experience-based, (2) allowing experimentation, and (3) building on employees' professionalism is likely to result in an enabling PMS. An experience-based process involves the identification, appreciation, documentation, evaluation, and consolidation of existing local knowledge and experience with respect to quantitatively capturing and reporting relevant aspects of performance. Experimentation involves the first development of a performance measure and the subsequent testing and refinement (in several rounds) of its conceptualization, definition, required data, IT tools, and presentation, together with employees (whose performance is going to be measured), to arrive at a measure that is a valid, reliable, and understandable indicator of performance in a specific local context. Professionalism of employees denotes an orientation toward learning for the purpose of improving work practices. We underpin these propositions in the remainder of this section; in the section with empirical results we will discuss and illustrate them further.
We feel that presenting propositions before the empirical study helps to better discuss our theoretical ideas in relation to the literature, and empirical findings in relation to the theory; it is not to suggest that theory and findings were developed sequentially. Rather, the nature of the research process was as discussed by Ahrens and Chapman (2006, p. 836): "Problem, theory, and data influence each other throughout the research process. The process is one of iteratively seeking to generate a plausible fit between problem, theory, and data". Before the study started, we explicitly intended to explore an experience-based development process and what we then called continuous revision of the PMS (later formulated as "experimentation"). These ideas took further shape during the course of the study through going back-and-forth between the fieldwork and the literature. Furthermore, the development of the survey instrument, which started about 15 months into the study, involved an extensive process of focusing and making connections between the field and the literature, and in this stage the role of "professionalism" was highlighted and subsequently focused upon in the fieldwork. Later in the research project, we became familiar with the framework of Adler and Borys (1996), which we found to be a very powerful way of organizing the theoretical discussion and empirical results.
Experience-based development process
Organizational change processes may take advantage of local knowledge, which can be defined as "the very mundane, yet expert understanding of and practical reasoning about local conditions derived from lived experience" (Yanow, 2004, p. S12). Organizational change processes that utilize local knowledge are more likely to lead to sustainable changes and improvements (Abrahamson, 2000; Lowe & Jones, 2004; Zollo & Winter, 2002). In the context of PMS, we propose that a development process that is experience-based is likely to have a positive effect on the enabling nature of the PMS. An experience-based development process involves the identification, appreciation, documentation, evaluation, and consolidation of existing local knowledge and experience with respect to quantitatively capturing and reporting relevant aspects of performance. We will elaborate on the idea of an experience-based development process in this section.
Many of the proposed approaches for design and implementation in the literature, however, seem to pay little attention to either experience or user involvement. Most approaches to PMS design and implementation (see Bourne, Neely, Mills, & Platts (2003a) for a review of the PMS development processes literature) focus on how the goals set at the top of the organization can better guide actions taken lower in the organization. First steps in the typical development process are to clearly define the overall (i.e., corporate-level) strategic objectives and then the local operations' specific contribution toward achieving these overall strategic objectives. Thus, the organization's global performance measures and functional measures are derived. The PMS is typically designed from the perspective of top-management, as is apparent in the following representative characteristics: (1) explicit reflection of the firm's strategic objectives and subsequent break-down of those objectives to more specific objectives at lower managerial levels, (2) the signaling of performance levels that are below targets, (3) the ability to "drill down" and get more details when needed, (4) striving for transparency, consistency, and uniformity regarding definitions of performance measures, presentation formats, etc., and (5) one information system that contains all data and reports. External experts may be involved, who often bring in a standardized way of designing and implementing the system, with examples (or templates), complete with performance measures, presentation formats, and a set consulting approach for designing the system, software tools, etc.
However, top-down, mandated performance-measurement initiatives are less likely to be successful (Cavalluzzo & Ittner, 2004; Scott & Tiessen, 1999; de Haas & Algera, 2002). These well-intentioned, standardized methods carry the danger of insufficiently reflecting the local organizational contexts or the available experience and unique expertise of employees. Furthermore, even before such measurement systems are initiated, a number of informal performance measures, at various levels within the organization, are already in use by managers, complementing the information they get from other sources, such as observations, or conversations with people individually or in group meetings, as well as non-face-to-face communication through phone calls or emails (McKinnon & Bruns, 1992). These informal measurement reports are often developed locally, contain a mix of local and centralized data, report operating information over a very short period of time (weeks, days, or less), provide status information (up-to-date accumulations of bits of operating data, e.g., inventory-level reports and backlog reports), and enable performance comparisons between, for example, budgeted vs. actual performance, one time period vs. another, etc. (McKinnon & Bruns, 1992). Such informal reports use a variety of presentation formats, performance measure definitions, data, and information systems. The existence of such reports is often unknown outside the organizational unit where they are produced and used, to the extent that, from the perspective of top-management, a coherent PMS does not appear to exist at all within the organization! Although employees may have considerable experience with performance measures, and may have already established context-specific practices, from the perspective of top-management these do not constitute a PMS.
Typically, expert-led approaches initiated by top-management are not likely to expend the effort necessary to build an in-depth understanding of locally developed existing reporting practices, in particular about the detailed definition, data, motivation for, and experiences with existing measures and information systems (Qu, 2006). The consultants are also more likely to address problems from the perspective of top-management (or whoever hires them), and they may seek to focus on concepts that are fashionable in the business literature, and to attempt to transfer their earlier experience to the project at hand (see, e.g., Sorge & Witteloostuijn, 2004). Based on previous successes or an awareness of the amount of effort involved with design and implementation of a PMS, the temptation is strong to simply start PMS design from scratch (Greenfield), to copy from previous outside assignments or other departments in the organization, or to employ a standardized consulting approach for the design and implementation of performance-measurement systems (Blueprint) (Townley et al., 2003). Such standard consulting approaches tend to focus on strategy clarification and the creation and design of new performance measures, without detailed regard for what is already in place. Existing informal reports typically come into view only after the "ideal" PMS has been designed and set, as part of an assessment of the "gap" between that ideal PMS and already existing performance measures (Medori & Steeple, 2000).
Organizational change is more likely to be successful when it is a process of relatively small change efforts that involve the reconfiguration of existing practices and systems that are successfully in use elsewhere in the organization, rather than the creation of new practices and systems (Abrahamson, 2000). Building organizational capabilities requires adaptation of work processes, reflection upon experiences, and codification of knowledge gained (Zollo & Winter, 2002). In other words, organizational learning is based on experience accumulation, and empirical studies have demonstrated the importance of knowledge accumulation for performance (e.g., Reagans, Argote, & Brooks, 2005; West & Iansiti, 2003). Similarly, we propose that building on existing, local experience is an important characteristic of enabling PMS development as well. We expect a development process to successfully stimulate enabling formalization when it fully acknowledges, respects, and utilizes the intellectual capital of lower-level employees' existing practices of, and insights into, performance measurement.
Experimentation
Experimentation in the context of PMS development involves the first development of a new performance measure and subsequently allowing time to test and refine (in several rounds) its conceptualization, definition, required data, IT tools, and presentation, together with employees (whose performance is going to be measured), to arrive at a measure that is a valid, reliable, and understandable indicator of performance in a specific local context. We propose that a development process that involves much experimentation with new performance measures is more likely to lead to enabling formalization. Fleshing out general goals—the usual suspects of efficiency, productivity, customer satisfaction, etc.—and making them specific and measurable is a "messy" process (Lowe & Jones, 2004). It involves defining measures that reflect strategic goals, that are closely related to the specific operating conditions in a particular setting, that are actually measurable (i.e., the required data are available), and that are presented in a way that employees find understandable. This requires a meticulous, in-depth process of creating a fit between the PMS and the idiosyncratic local operational conditions. The development process requires close involvement of and cooperation with employees. This is not to say that employees would be the only ones who use the data, but rather that they are the ones who are best placed to judge whether their work efforts are validly or invalidly reflected in the performance measures. The making of a performance measure is not likely to be "right" after just one round; it is more likely to be successful if the development engages employees in a process of experimentation, e.g., tinkering with qualitative descriptions, quantitative definitions of measures, the scope of measures, data used, procedures for data gathering, representation in tables and graphs, etc., as well as actual testing to identify unanticipated and often undesirable effects or behaviors that occur in response to the PMS. Even though we emphasize that involving employees through experimentation and building on previous experiences is relevant for improving the content of a PMS, this may also contribute to an effective organizational change process (Bourne, Neely, Mills, & Platts, 2003b).
Professionalism
Professionalism denotes an orientation toward learning for the purpose of improving work practices. Such an orientation makes it possible to rely on experience and to conduct experiments within a PMS development process. A higher score on professionalism makes it more likely that employees express satisfaction with earnest improvement efforts carried out within their immediate work environment. Professionalism may be especially stimulated if employees are given the opportunity to involve themselves in departmental improvement efforts. Caldwell, Herold, and Fedor (2004) conclude that employees' motivational orientation, and particularly their "achievement predisposition" (p. 879), predicts satisfaction with perceptions of organizational change. In other words, if an employee is more inclined to improve her work practice, then performance measures are more likely to be seen as positive, stimulating, challenging, and helpful. In sum, we propose that an employee's level of professionalism is associated with a positive attitude towards performance measurement, especially if a carefully evolving developmental approach is taken, aimed at refining and extending a departmental PMS as an instance of enabling formalization.
Research method
This study has been designed as action research.
We cooperated with the logistics department of a
company in the beverage manufacturing industry,
in the period August 2002 through June 2005. We
examined in detail the evolution of the depart-
ment’s PMS and the employees’ experiences with
performance measurement over a relatively long
period of time. In this section, we will further
introduce the research site, describe how we gath-
ered and analyzed the qualitative data, and outline
the survey conducted among a representative
panel of the employees of the case department.
Research site
The company has a strong brand name and sells its beverages both to the hospitality industry (such as bars, restaurants, and hotels) and to retail customers that sell to consumers. Customers are both domestic and international. Important conditions for success, according to the company's annual report, are brand strength, product innovation, excellence in production, quality of marketing, balancing stakeholders' interests (shareholders, employees, and environmental concerns), and financial performance. While these factors center on revenue enhancement, cost management is also increasingly important. Competition among supermarket chains has intensified, leading to lower prices for consumers and increased price pressure on suppliers. The profitability of the company has suffered as a result, and profits, revenues, and sales in 2005 were all below their 2004 levels. Furthermore, the company recently made very significant investments in a new manufacturing site, which called for considerable operational cost savings, because the investment had increased fixed depreciation costs significantly in all departments of the company.
The approximately 150 employees in the logis-
tics department are spread among four sub-depart-
ments: purchasing, physical distribution, materials
management, and packaging development. The
director of logistics and the four heads of the
sub-departments form the management team of
the logistics department (‘‘logistics management
team’’, LMT). The team also includes the control-
ler for logistics and production, the logistics man-
ager of the hospitality market, and the logistics
manager of the international department of the
company. The director of logistics reports to the
CEO of the company. An organization chart is
shown in Fig. 1. The logistics department had been
recognized—internally and externally—for its per-
formance, including a prestigious national prize
for its customer service and supply chain
management.
An overview of some main events investigated
during this longitudinal case study is depicted in
Table 1. When this study began, the logistics
[Fig. 1. Organization chart. The board of directors oversees Logistics, Marketing and sales, Production, HRM, and Finance. Logistics comprises the sub-departments Materials Management, Physical Distribution, Purchasing, and Packaging Development, covering planning, central warehouses, the marketing warehouse, customer service, transportation and distribution, internal transportation and warehouses, and transportation planning.]
department had recently begun to expand its performance-measurement system. It mainly used an indicator called "delivery reliability", but felt that additional measures were required to provide a more comprehensive picture of the performance of the logistics department in relation to its objectives. Previously, the mission of the logistics department had been "to coordinate the supply chain in an effective, efficient, and innovative way for providing optimal service to our customers". This had also been reformulated more concretely as four objectives for logistics: number one in customer satisfaction, excellence in supply chain efficiency, continuous
Table 1
Time line of the case study

| Date | Company events | Research |
|---|---|---|
| August 2002 | Logistics department formulates the need to have more extensive performance measurement | Making contacts with the company and initial discussions about research cooperation |
| August 2002 | Director of logistics voices strong concerns about employee ownership * | |
| January 2003 | | Start developing and implementing new measures with researchers |
| August 2003 | Positive evaluation of first results and developmental approach | Agreement on longitudinal case study |
| September 2003 | Continuation of design and implementation of performance measures | |
| December 2003 | | Start developing survey instrument |
| January–May 2004 | | Developing and reviewing survey instrument |
| February–March 2004 | Logistics department moves to new site | |
| April 2004 | Appointment of new CEO and start of companywide Balanced Scorecard project | |
| May 2004 | Appointment of project leader for Balanced Scorecard project | |
| June 2004 | | Pilot of survey |
| July 2004 | Tension from the central Balanced Scorecard initiative * | First survey |
| August 2004 | Start of champions meetings (from all departments) | |
| October 2004 | | Discussion of results with LMT (b) |
| November 2004 | Experimenting with a new performance measure for internal transportation and warehouses (continued until April 2005) * | |
| December 2004 | First official scorecards for all departments defined | |
| January 2005 | | Second survey |
| March 2005 | | Discussion of results with LMT |
| March 2005 | LMT and middle managers discuss the PMS (new measures and implementation support) * | |
| May 2005 | LMT prioritizes the proposed new measures * | |
| June 2005 | Evaluation of first six months of the official balanced scorecards | |

(a) Events marked with a * are discussed in some detail in the text with a separate heading labeled "Illustration".
(b) LMT: Logistics Management Team.
supply chain innovations, and to be a professional and learning organization. Explicating these goals stimulated the implementation of performance measures. There was also another reason: in 2002 the company had to impair inventories for about half a million euros, and it was therefore concluded that inventory risk should be measured regularly. This situation was the basis for the beginning of our cooperation with the logistics department, which provided an opportunity to study in detail the evolution of, and actual experiences with, performance measurement over a longer period.
In the period between January 2003 and June 2005 the logistics department gradually expanded the PMS: it incorporated additional performance measures, reviewed or deleted other measures, and implemented procedures and information systems for producing periodic reports (see Table 2). The development process was strongly influenced by two events: early in 2004 the company moved to a new site and at the same time implemented new information systems that provided new technical opportunities for developing new performance measures; and in April 2004 a new CEO was appointed who initiated a companywide performance-measurement initiative.
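The gradual expansion can be read as simple bookkeeping: for each period, the number of measures in use at the end equals the number at the start, plus those implemented, minus those deleted. A minimal sketch verifying this against the counts in Table 2 (illustrative only, not part of the original study; the July–December 2003 row is omitted because change data for that period were unavailable):

```python
# Each complete Table 2 row satisfies: in_use_end = in_use_start + implemented - deleted.
# Rows: (period, in_use_start, implemented, deleted, in_use_end).
rows = [
    ("January 2003-June 2003",   19, 6, 7, 18),
    ("January 2004-July 2004",   33, 8, 6, 35),
    ("August 2004-October 2004", 35, 9, 3, 41),
    ("November 2004-April 2005", 41, 6, 2, 45),
    ("April 2005-June 2005",     45, 2, 0, 47),
]

def check(rows):
    """Return True if every period's counts are internally consistent."""
    return all(start + impl - deleted == end
               for _, start, impl, deleted, end in rows)
```

Running the check confirms that the reported counts are mutually consistent across all periods with complete data.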
We worked especially with three members of
the LMT: (1) the director of logistics who reports
to the board of directors, (2) the management con-
troller assigned to logistics, and (3) the so-called
PMS-champion, i.e., the one sub-department head
on the LMT with whom we started this liaison on
the basis of her own deeply held professional interest in applying PMS to the entire logistics department within this firm. At the outset we sensed that
these leading ?gures were authentic in their desire
to establish a PMS in the form of enabling formal-
ization. They showed keen interest in developing
PMS themselves, in cooperation with us as exter-
nal, university-based experts on both PMS and
the human side of organizational change.
Qualitative data gathering and action research
We obtained data through various methods, in the context of action research. Over a period of almost three years we frequently visited the company or met company employees at the university, and qualitative data were obtained through interviews, participation in management meetings, company documents, as well as field notes made by research assistants (see Table 3). While gathering these data, we did not act as neutral observers. The project aimed to assist the company as well as contribute to science. The company participated in this study because it welcomed the unpaid assistance with its development of performance measurement, in exchange for which it offered us research access. Researchers, research assistants, line employees, supervisors, middle managers, as well as the LMT members were all involved in developing the new performance measures and providing feedback. Over the course of
Table 2
Number of performance measures over time

| Period | In use at start of the period | Implemented | Deleted | In use at end of the period | Under construction (a) | Under review (b) |
|---|---|---|---|---|---|---|
| January 2003–June 2003 | 19 | 6 | 7 | 18 | 3 | 1 |
| July 2003–December 2003 (c) | 18 | n/a | n/a | 33 | n/a | n/a |
| January 2004–July 2004 | 33 | 8 | 6 | 35 | 3 | 5 |
| August 2004–October 2004 | 35 | 9 | 3 | 41 | 1 | 13 |
| November 2004–April 2005 | 41 | 6 | 2 | 45 | 1 | 4 |
| April 2005–June 2005 | 45 | 2 | 0 | 47 | 6 | 15 |

(a) New indicator being developed but not implemented at the end of the period.
(b) Existing indicator being revised during this period.
(c) Data on changes of performance measures during this period were not available.
this study, seven master students in industrial engineering or business administration each worked full time for a period of six to eight months as an intern at the company, in partial fulfillment of their MSc. They produced monthly reports of actual outcomes of performance measures, they carried out the survey, and they worked with employees in developing, evaluating, or refining various measures.
The qualitative data have been analyzed through a process of reflection, going back and forth between the data, the literature, and the company. The research file was organized on the basis of a table that listed the interactions with the company and that contained about 275 rows. On each row the following data were recorded (when applicable): date, who participated, sub-departments involved, topic, duration, reference to meeting notes, description of (and reference to) company documents received, reference to researchers' input for the meeting, code for meeting in person, code for researcher or assistant, and code for diversity of meeting participants. This table allowed easy retrieval of specific data when certain questions or ideas emerged in discussing or writing about the research. It was also the basis for summarizing the time line in Table 1 and the data gathering in Table 3. The data were used to write summaries that pulled together different events and different kinds of data, to start reflecting on what happened, and to focus on the events that seemed most interesting. Parallel to gathering the data and writing the summaries, we reviewed more literature, discussed the study with other researchers (informally, as well as through presentations in workshops and conferences), and wrote (and rewrote) the paper. This connection with theory guided not only the analysis of the data but also the gathering of data, and it led to follow-up discussion or clarification with the company. Also,
Table 3
Qualitative data gathered

| Meetings with (a) | Number of meetings | Time (h) |
|---|---|---|
| Employees of Logistics only | 8 | 12 |
| Employees outside Logistics | 13 | 16 |
| Employees from Logistics together with other areas | 20 | 29 |
| Total | 41 | 57 |

In addition, research assistants had meetings with 71 different people in 189 meetings, which took over 200 h (b).

| Number of different employees interacted with | |
|---|---|
| Logistics | 7 |
| Finance | 3 |
| Production | 1 |
| Marketing and Sales | 2 |
| Other functional areas | 7 |
| Total | 20 |

| Sample company documents | Number of documents |
|---|---|
| Documents about performance measures in use in Logistics | 13 |
| Documents about performance measures in use outside Logistics | 4 |
| Presentations and notes about developments in performance measurement in the company | 8 |
| Minutes of meetings about developments in performance measurement in the company | 18 |
| General documents about the Logistics department | 11 |
| General company documents | 8 |
| Response to panel survey study | 4 |
| Total | 66 |

(a) "Meetings" indicates face-to-face engagements of researchers with members of the case-study organization, either as interviews with one or a few employees, or as active participation in meetings with a larger number of employees. Meetings took place at the research site, with a few exceptions of meetings at the university. Not included are emails and phone calls.
(b) Research assistants also accounted for their meetings. Mentioned are those interactions where they took notes of which the researchers have copies; not included are short informal discussions (the research assistants worked on-site), emails, and phone calls.
a draft version of the paper was discussed with managers of the case-study organization. Vice versa, interaction with the company guided the search for new literature and informed discussions with academics.
A potential issue of action research is that the researcher may selectively look for empirical evidence and guide the research process with a bias towards the expected findings (Atkinson & Shaffir, 1998). However, there are several countervailing effects that limit such a bias, and these were also prominent in this study (Labro & Tuomela, 2003). The length of the research process and access to all kinds of data provide many different "pieces of the puzzle", different kinds of empirical evidence that need to be understood as a whole. Furthermore, members of the case study organization expect results that are of practical relevance, and this provides an incentive for them to be involved and to spend time with the researchers. Also, because of the potential impact on their work, organizational members are engaged, challenge ideas, and provide feedback on results.
Furthermore, this type of action research allows
an empirical test of ideas implemented in an actual
organization. Organizational members are likely to
be cautious about trying interventions they deem
unsuccessful or otherwise undesirable. Researchers
cannot easily persuade people to implement what
they consider to be a bad idea; and when an idea
that seemed good actually works out poorly, that
will become obvious in the empirical data. In
short, the objective of making actual changes
in real organizations counters researchers’ biases,
because of the active involvement of organizational
members and the empirical facts resulting from
implementation.
Panel survey
A survey was conducted twice, once in July 2004 and once in February 2005. On both occasions we approached the same respondents and assessed the same variables, with the same or a slightly improved version of the questionnaire. Hence this part of the study is labeled the "panel survey". The timing of the panel survey within Logistics coincided with the start of the companywide initiative to implement a balanced scorecard, so the two surveys provided information on the initial attitude towards performance measures and on the situation about six months into the initiative.
We first asked the four sub-department heads within logistics to come up with a list of potential respondents who would be representative of their sub-department in terms of their attitudes towards PMS. The LMT reviewed the lists and made a few changes of prospective respondents, which were subsequently approved by the nominating sub-department heads. Members of the panel had to have been employed in their sub-department for at least one year and not temporarily employed. In addition, all sub-department heads were included, since they were crucial in the PMS process. Moreover, the number of panel respondents per sub-department had to be proportional to the size of each of the four sub-departments.
In the first data wave, we received completed questionnaires from all of the 42 selected respondents. In the second data wave, we received data from 39 of the same 42 respondents plus one new participant. This attrition was due to illness, and one employee on the panel had left the company.
For the process of data collection, research assistants requested the participation of panel members with a letter (signed by the director of logistics) and verbally during several team meetings, in which they explained the purposes of the survey and the role of the panel and allayed participants' concerns about the confidentiality of the data. To ensure a high response rate, the research assistants made appointments with all members of the panel to have them fill out the questionnaire during an on-site interview. The assistants also wrote down other PMS-related comments respondents made during the meetings. In the second survey, appointments were made only with those respondents on the panel who were expected to be uncomfortable with completing the questionnaires by themselves.
Confidentiality was a key consideration during the panel study. Given that some sub-departments were rather small and that panel membership was known by the LMT, we promised the respondents explicitly and repeatedly that results would never be reported at the sub-departmental level, but only at the aggregate level (i.e., for the whole logistics department). Furthermore, the completed questionnaires were filed outside the company (at the university), and no one at the company possessed the list that linked respondent numbers to respondent names. Procedures to guarantee confidentiality of the data were emphasized in all communications with research participants, and thus participants appeared comfortable enough to provide frank answers and comments.
The measurement of the panel survey data will be discussed in the remaining part of this section.1 The measurement scale for the dependent variable Attitude toward performance measures was developed expressly for this study. Its items are described in the Appendix. The variable reflects the perceived usefulness of the performance measures that are reported concerning the respondent's sub-department within the logistics department. In the second administration we added the variable Ambition level in two years: using the same items, participants were asked what the situation with respect to performance measurement should be two years into the future.
The measurement of Professionalism was also developed expressly for this study; its items are also described in the Appendix. We refer to this new construct informally as being improvement-oriented on the job. Formally, professionalism refers to the degree to which individual employees behave in a way that shows commitment to both their profession and their current organization, through efforts aiming explicitly to upgrade or improve the quality of the work carried out. Sample items are: "I learn every day at work"; "I always contribute to new ideas at work". The response options for these Likert items range from 1 (very much disagree) to 7 (very much agree). The origin of the professional attitude questionnaire lies in the efforts of Swailes (2003), who in turn relied explicitly on the measurement efforts of Hall (1968) and Snizek (1972). In our study we defined the questionnaire items entirely at the individual employee level. Deviating from these previous professionalism scaling efforts, we made all items refer solely to one's own current job and not also to one's profession, other professions, or professional colleagues. Because of a lack of validity of broad measurement scales, Swailes (2003) called for "reconceptualising professionalism in terms of process rather than structure" (p. 103), to which we contributed in this study through the formulation of the survey questions.
The survey also included a number of variables regarding the task environment. This made it possible to investigate the association between professionalism and the attitude towards performance measures while controlling for other variables that could also affect that attitude. Leadership style was measured using a subset of 10 MLQ 8Y items of transformational leadership style (Bass & Avolio, 2000); the MLQ is currently the most widely used validated questionnaire for assessing leadership. For Team trust, the scale comprised the seven items employed in a German study by Baer and Frese (2003) and based on the work of Edmondson (1999). The scale for measuring Work pressure comprised 14 items and was taken from Stanton, Balzer, Smith, Parra, and Ironson (2001). Work satisfaction was measured by having respondents write down three numbers totaling 100%: they were asked to note the percentage of time they felt, on average, "satisfied", "unsatisfied", and "neutral" about their current job.
Results: developing an enabling PMS
In this section, we present the empirical material to explore the developmental approach that fostered the enabling nature of the PMS in the case study company. First, we present results, based on both the survey and the qualitative data, which suggest that employee attitude toward performance measures in the logistics department was quite positive. Then we explain this positive attitude through the propositions outlined in the earlier section on a developmental approach for enabling PMS.
1 Please contact the first author for more details about the research instrument.
A positive attitude toward performance measures in
the logistics department
Employee attitudes toward the performance measures used in their sub-departments were quite positive; the means from the two waves of questionnaire deployment are in Table 4. Note that the reliabilities of the survey scales range from satisfactory to good, as measured by Cronbach's alpha. The average scores on the variable Attitude toward performance measures were 5.2 and 5.4 (for the first and second data wave, on a seven-point scale). In the second administration we also asked participants what the situation with respect to performance measurement should be two years later, using the same seven items and the same seven-point answering scale (Ambition level in two years). The average score shown in Table 4 is 6.2, which is higher than their assessment of the current situation. This suggests that, on average, employees were ambitious in terms of performance measurement, which may also be taken as an indication of a positive attitude toward performance measurement.
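Cronbach's alpha, the reliability coefficient reported in Table 4, can be computed directly from a respondents-by-items matrix of Likert scores. A minimal sketch (illustrative only; the function name and sample responses below are hypothetical, not data from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' scale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert answers from four respondents to three items
responses = [[5, 6, 5], [4, 4, 5], [6, 6, 6], [3, 4, 3]]
alpha = cronbach_alpha(responses)
```

Values of roughly .7 and above are conventionally read as satisfactory, which is how the reliabilities in Table 4 are judged.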
The qualitative data provide further evidence of positive attitudes toward performance measurement. Particularly significant was a meeting of the logistics management team (LMT) and middle managers in logistics (i.e., managers who reported to the members of the LMT, planners, and shift leaders) in March 2005 (described in detail below). We will describe that there were tensions: the middle managers wanted more performance measures to support them in their work. To the extent that they could not develop and implement these themselves, they needed resources outside their teams (such as time from experts in the controller's office), and the prioritization of such resources was debated. We take this as another indication of positive attitudes toward performance measures, as the PMS within logistics was clearly being perceived as enabling formalization.
Illustration: LMT and middle managers discussed
the current status and priorities for further
development of the PMS
The PMS-champion and the controller pre-
sented the history and current state of performance
Table 4
Survey constructs (reliabilities, descriptives, correlations)

First data wave, N = 42

| Variable | Cronbach's Alpha | Mean | SD | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|---|
| 1. Attitude toward performance measures | .816 | 5.169 | .818 | 1 | | | | |
| 2. Professionalism | .813 | 5.304 | .606 | .433** | 1 | | | |
| 3. Leadership style | .862 | 4.848 | .891 | .137 | .275 | 1 | | |
| 4. Team trust | .729 | 5.762 | 1.153 | .520** | .208 | .302 | 1 | |
| 5. Work pressure | .803 | 3.628 | .741 | -.224 | .110 | .041 | -.409** | 1 |
| 6. Work satisfaction | .868 | 6.000 | 1.653 | .316* | .238 | .175 | .543** | -.421** |

Second data wave, N = 40

| Variable | Cronbach's Alpha | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|---|---|
| 1. Attitude toward performance measures | .906 | 5.430 | .797 | 1 | | | | | |
| 2. Ambition level in two years | .881 | 6.186 | .627 | .681** | 1 | | | | |
| 3. Professionalism | .878 | 5.317 | .587 | .385* | .439** | 1 | | | |
| 4. Leadership style | .914 | 4.463 | 1.050 | .219 | .311 | .365* | 1 | | |
| 5. Team trust | .655 | 5.500 | .973 | .205 | .194 | .351* | .326* | 1 | |
| 6. Work pressure | .870 | 3.586 | .852 | -.281 | -.105 | .019 | .072 | -.363* | 1 |
| 7. Work satisfaction | .909 | 5.783 | 1.924 | .177 | .124 | .242 | -.026 | .490** | -.485** |

For completeness we also conducted a principal components analysis. All measurement items loaded on their expected factors for the independent variables Leadership style and Team trust, but not always for Professionalism, Work pressure, and Work satisfaction. However, because of the very small number of observations, we maintained all items for further analyses.
* Correlation is significant at the 0.05 level (two-tailed).
** Correlation is significant at the 0.01 level (two-tailed).
measures in a meeting of the LMT together with
middle managers in logistics, in March 2005 (see
Table 1). They contrasted performance measures
that provide insights into whether the logistics
department achieves its medium- and long-term
objectives versus performance measures that an
employee in logistics may need to be successful
in his or her daily work. The PMS-champion
and controller described their ‘‘dream’’ situation:
‘‘middle managers can and want to develop perfor-
mance measures and produce the reports on
these themselves in order to better manage their
processes’’. The director of logistics explicated
this further as ‘‘inventing it yourself; getting the
data out of systems or recording these oneself. Is
this a dream or a nightmare? Is this something we
share? Or do you just always want to call the con-
troller, who should realize this for you?’’ This
remark stimulated a lot of discussion. It seemed
that the idea of developing their own measures
and reporting on these was supported. ‘‘There are
enough opportunities with the new systems, and
sometimes you discover only after a while what
you can get out of these’’ one of the participants
commented. Middle managers further remarked
that they felt they needed to be fully involved in
the development process of new measures, and it
should be made easy for them to actually generate
the reports.
The discussion centered on the way in which resources for PMS development were allocated between performance measures for the LMT and those for use by middle managers in logistics. At first, practical issues with performance measures were mentioned, such as that, for many performance measures, the reporting currently involved manual activities that required too much time and could cause errors. The middle managers said better information systems and tools were needed. They also expressed a desire for specific new performance measures to be implemented. During this discussion, the middle managers brought forward that support from people in the controller's office and from the research assistants was required. It became clear that the resources for developing and implementing all the desired new performance measures would be severely overstretched, and thus not all wishes for new performance measures could be supported. The controller stated that he wanted to allocate the resources to performance measures that involved significant financial risk, which was difficult to control without such measures. Hence, employees in logistics had to develop and implement some measures by themselves. While there seemed to be much support for the idea of developing their own measures, it was also discussed that in some situations this was considered too difficult, and specialized involvement from the controller's office was needed. The middle managers argued that their voices should be heard and their requirements should be supported.
The PMS-champion mentioned a particular
measure and said that she had a real dilemma
about it: ‘‘I agree with [the controller] that we
should focus on the strategic measures for logis-
tics, but I also feel that this is a really important
measure within our department’’.
2
One of the
managers said: ‘‘If I as a middle manager ask for
a particular performance measure, I think you
should say ‘yes’ right away, because then I really
need it, and otherwise you will not get any support
[from us middle managers for performance mea-
surement initiatives]’’. The controller responded
by saying ‘‘that is simply not always feasible, our
time is limited’’. To this the manager responded
provocatively: ‘‘so if I am held accountable for
something, I get no support, but if the logistics
management team is held accountable for some-
thing, then there is support?’’ Clearly, people held
different opinions about how dependent they were
on specialized support. Another middle manager
commented: ‘‘But these practical issues have never
stopped us from going forward with implementing
performance measures. . . .. And if you look at our
performance measures in [the sub-department], we
designed and implemented these almost com-
pletely by ourselves’’.
Later in the meeting, four groups discussed ideas
for new performance measures and presented these.
2 The word "department" was also used in the company to refer to the four sub-departments within logistics. "Sub-department" is a term we use in the paper for clarity, but we write "department" in quotations and in the questionnaire in the Appendix, as this term was actually used in the company.
M. Wouters, C. Wilderom / Accounting, Organizations and Society 33 (2008) 488–516 503
As a follow-up, the sub-departments within logistics were asked to think about these measures further and come up with proposals. After this meeting, the sub-departments proposed 16 new performance measures in total. This list was input to a prioritization meeting of the logistics management team in May 2005. The team prioritized the new measures. Moreover, it was concluded that in the future middle managers should be involved more in further developing the PMS. One of the members of the LMT reflected: ''we have been very much top down, authoritarian with our kpi process''.
Just to illustrate the dilemma for prioritization further, it is helpful to look in more detail at one of the performance measures that the PMS-champion referred to above. It shows that middle managers within logistics had specific ideas about new performance measures. Toward the end of 2004, one of the warehouses was nearing capacity, running the risk of an overflow situation. A group of people in logistics had developed solutions for this problem and subsequently wanted to monitor the effect, to see how the utilization of the warehouse was developing over time. This was a simple graph showing on a daily basis how many pallets were stored. This total should be around 600 pallets maximum, and serious storage problems would result if it rose above 800. This inventory monitoring report had been made available with the help of both the controller's office and the research assistants. It may appear very simple and easy to set up, but developing and implementing this report took a couple of days of preparing the SAP downloads and setting up the Excel sheet.
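The daily check behind this monitoring graph can be sketched as follows. This is a hypothetical illustration, not the company's actual report: the 600 and 800 pallet thresholds come from the text, but the daily counts are invented (in practice they came from SAP downloads into Excel).

```python
# Thresholds described in the text: around 600 pallets at most, serious
# storage problems above 800.
TARGET_MAX = 600
CRITICAL = 800

# Invented daily pallet counts standing in for an SAP download.
daily_pallets = {"Mon": 540, "Tue": 610, "Wed": 790, "Thu": 820, "Fri": 560}

# Classify each day the way the monitoring graph made visible at a glance.
status_by_day = {}
for day, pallets in daily_pallets.items():
    if pallets > CRITICAL:
        status_by_day[day] = "overflow risk"
    elif pallets > TARGET_MAX:
        status_by_day[day] = "above target"
    else:
        status_by_day[day] = "ok"

for day, status in status_by_day.items():
    print(day, daily_pallets[day], status)
```

The logic is trivial, which is exactly the paper's point: the difficulty lay not in the calculation but in preparing the data feeds behind it.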
A similar type of performance measure was now (in March 2005) needed in another warehouse, where all carton products (labels, boxes, etc.) were stored. Note that outside storage was not an option for these items. Again, a graph would be needed showing the warehouse utilization on a daily basis, but the new graph was a bit more complex. It needed to indicate warehouse utilization disaggregated into different types of storage. Setting this up would take several days of effort by the controller's department, which claimed that this time was not available for designing and implementing this performance measure. This is an example of a new performance measure that middle managers wanted and that stirred debate on the prioritization and resource allocation for the development of performance measures.
These examples of refinements to the PMS that were initiated by employees to better support their work practices suggest that the nature of the PMS in logistics was predominantly enabling rather than coercive. Can the enabling nature of the PMS be understood based on the development process that had shaped this PMS? In the following sections, we will report on three characteristics of the process: (a) professionalism, (b) experience-based PMS development, and (c) experimentation with new performance measures. We will also explore how (d) internal transparency was important for encouraging enabling formalization.
Professionalism of employees established the basis
for PMS development
This beverage manufacturing company was rec-
ognized for its professionalism by winning a pres-
tigious national prize in the retail beverages
category. In this annual contest, supermarket
chains assessed their 90 largest suppliers in terms
of three criteria: account management (the quality
of the sales team), trade marketing (the quality of
the sales support for the supermarkets), and sup-
ply chain management (the quality of the logistical
processes). For these criteria, the company had
won the highest score of all beverage suppliers
assessed (water, soft drinks, beer, wine and spirits)
for three years in succession (2002, 2003, and
2004). This suggests that the logistics department
operated at a high professional level.
We proposed that professionalism contributed to a positive attitude towards the PMS. Regression results are presented in Table 5. The dependent variables are Attitude toward performance measures (both data waves) and Ambition level in two years (second data wave) with respect to performance measurement. As shown in the table, the coefficients for the variable Professionalism are statistically significant and considerable (.521, .518, and .420), and they are larger than the coefficients for all other independent variables.
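The specification behind Table 5 can be illustrated with a short sketch. This is not the authors' code and does not use their data: it estimates the same kind of model (attitude regressed on Professionalism plus the four controls, unstandardized coefficients) on synthetic survey-like scores for n = 42 respondents, with an assumed true coefficient structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 42  # panel size in the first data wave

# Synthetic predictor columns: Professionalism, Leadership style, Team trust,
# Work pressure, Work satisfaction (1-5 scale scores, all invented).
X = rng.uniform(1.0, 5.0, size=(n, 5))

# Assumed coefficients for the simulation, intercept first.
true_beta = np.array([1.5, 0.5, -0.1, 0.3, -0.1, 0.0])
attitude = true_beta[0] + X @ true_beta[1:] + rng.normal(0.0, 0.5, n)

# Unstandardized OLS coefficients, as tabulated in the paper, via least
# squares on a design matrix with an intercept column.
design = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(design, attitude, rcond=None)
print(np.round(coef, 3))
```

With real data one would also report standard errors and two-tailed significance tests, as the table does.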
These results show that a high level of professionalism is a key characteristic of a development process with high employee involvement. It enabled employees, together with professionals from the controller's office, to experiment with performance measures and to gradually expand and refine the PMS on the basis of learning from experiences. We will now elaborate on the other two characteristics of the PMS development process.
Experience-based PMS development process built
on existing measurement practices
Experience-based PMS development refers to identifying and building on local experiences with performance measures during further rounds of refinement of the PMS. Note that ''experience-based'' points to the capturing of experience for guiding development at the level of the performance measurement system; in the following section ''experimentation'' will be discussed at the level of single measures.
Our qualitative results indicate that the logistics department had been following an experience-based development process. Table 2 shows the development of performance measures in logistics between January 2003 and June 2005. The total number of performance measures increased from 19 to 47 measures. New measures were being added constantly, while other measures were removed. Still other measures were reviewed, updated, and re-implemented. This situation is not reflective of a PMS initiative that is first designed, separately implemented, and then reviewed, for example annually. Rather, the image is that of a more ''organic'' PMS that is constantly growing, being reviewed, and being pruned: a continuous tinkering to make it better. It is consistent with processes of incremental improvement, based on experience gained (Abrahamson, 2000; Zollo & Winter, 2002).
We will illustrate below that the LMT in particular attached importance to an experience-based process for taking their PMS further. In sum: the
company’s top-management wanted a common
format and approach to performance measurement
in all departments of the organization. They
labeled this ‘‘the balanced scorecard’’. Top-man-
agement initiated such a performance measure-
ment project during the period that we studied.
Tensions between the central initiative and the
local experiences in logistics could be observed.
While logistics was already the most active depart-
ment in terms of performance measures, the central
initiative was at several times perceived by the
LMT members as something that could disrupt
rather than foster their ongoing PMS activities.
Table 5
Regression results

                      Attitude toward         Attitude toward         Ambition level
                      performance measures    performance measures    in two years
Dependent variable    (first data wave)       (second data wave)      (second data wave)

Intercept             1.498 (1.206)           3.843 (1.322)*          4.061 (1.049)*
Professionalism       .521 (.191)*            .518 (.231)**           .420 (.184)**
Leadership style      −.086 (.129)            .097 (.129)             .118 (.103)
Team trust            .321 (.118)*            −.062 (.158)            −.038 (.125)
Work pressure         −.108 (.170)            −.324 (.166)a           −.113 (.131)
Work satisfaction     −.023 (.081)            −.018 (.079)            −.004 (.063)
# observations        42                      40                      40
R²                    .397                    .250                    .238

Unstandardized coefficients and (standard errors) tabulated.
We also estimated eight alternative specifications of these regression models, always including Professionalism plus various combinations of a number of the other independent variables. The coefficient for Professionalism was positive and significant at least at the .05 level in eight cases (first model), in seven cases (second model), and in eight cases (third model) (results not tabulated).
a Coefficient significant at .059.
* Coefficient is significant at the .05 level (2-tailed).
** Coefficient is significant at the .01 level (2-tailed).
The central initiative placed performance measures
high on the agenda, and this priority status could
have possibly provided momentum to help the
LMT to move their PMS initiative forward. How-
ever, by July 2004 the LMT members were con-
cerned about what the central initiative would
mean for the PMS they had so carefully developed
with their employees over the last years. We under-
stand this as another indication of the importance
of an experience-based development process. There
was tension, because the people in logistics worried
that this top-down initiative would not reflect their experiences and would not allow time to experiment with and adjust performance measures.
Illustration: tension from the central balanced scorecard initiative vis-à-vis the PMS in logistics
The new CEO was appointed in April 2004, but
he had already been a member of the board for
several months to get to know the company. He
conveyed his emphasis on performance measures
right from the start: he attended a meeting with
the LMT in October 2003, during which perfor-
mance measurement was a main point on the
agenda. He made it clear that he considered per-
formance measurement to be very important and
he wanted more of it, throughout the company.
He wanted a system to be implemented quickly: defining the measures, setting (and ''freezing'') the targets and tolerances. In that meeting, he also said he wanted to show performance as a traffic light that would show red when measures slipped below their target, in which case the manager responsible for a particular performance measure would need to prepare an action plan for presentation to the management team. He spoke of this in terms of management by exception, whereby the performance report should be used to select issues that needed to be discussed. It was clear that once he assumed the chairmanship, performance measurement and reporting were going to be matters of high priority and a focal point of top-management attention.
In June 2004, the new CEO announced the new balanced scorecard project in the company. A balanced scorecard at the board level, for the firm as a whole, was being formulated, and each of the different departments in the firm, of which logistics was one, had to devise a business balanced scorecard for their area. A companywide project leader was appointed: an experienced internal manager at the director level, reporting directly to the CEO. This initiative soon created anxiety within the logistics department. The manager in logistics who had become the main proponent of performance measurement (the PMS-champion, although that title was not yet used at that point in time) called a meeting with the researchers, also on behalf of the controller. She informed the researchers about the central initiative and explained that the project leader was talking with other companies and consulting firms. She expressed concern that new performance measures would be determined in a top-down fashion by the central initiative, led by consultants. The concern of the
logistics group was that their long-standing and
ongoing PMS work would now be disturbed by
the top-down mandated initiative. They feared
that their system would have to be changed to
comply with the new, companywide balanced
scorecard framework. They were not against more
performance measurement—on the contrary, they
had already implemented performance-measure-
ment initiatives in logistics—but they feared that
a consultants-led project would be started with
top-down proposals for new performance mea-
sures, and they expected that this would leave less
room for what they had developed so far, which
had a close fit to local work practices.
During the second half of 2004, the project lea-
der of the central balanced scorecard project vis-
ited several other companies to discuss their
experiences with the implementation of a PMS.
He concluded that although these experiences were very diverse, it was clear that effective PMS development takes several years and is more successful if carried out bottom-up and from within the organization, and that an organization should simply get started and develop things further as it goes.
He also had gained the impression that when performance measures were part of the incentive system, there was a significant risk of manipulation.
He wanted to use these insights in the scorecard project for which he was now responsible. However, there were tensions, and he said in a consultation with us that in the eyes of the new CEO ''things are going far too slowly''. The project leader established a group of ''champions'' in August
2004. There was one person from each department
within the company who was the most enthusiastic
proponent of performance measurement and who
was leading departmental initiatives to develop it
further. With one exception, these were not con-
trollers, but functional managers.
The deadline for the first version of all departmental balanced scorecards was November 1, 2004, and it was postponed until December 1, 2004. However, not all departments met the second deadline, whereupon the CEO put a traffic light in the central entrance hall of the company's premises, with the signal showing red. It was there for a couple of days. There was no sign or other explanation of why it was there, but managers soon found out it was there to signal that the scorecards really should have been completed. The intervention provoked quite some discussion, some of which we witnessed in a meeting with the LMT. Members of the team acknowledged that it was unfortunate that the deadline was not met, but they did not like the traffic-light intervention. They complained that the efforts that other departments were undertaking were not facilitated, in the sense that practical support for implementation was lacking. The logistics' PMS-champions objected: ''We have said to the project leader on several occasions that this can only be done if it is facilitated in a practical way, but nothing has happened''. The logistics director commented that such comments did not reach the company's directors meeting when the balanced scorecard initiative was discussed.
The balanced scorecards were implemented in the middle of December 2004; they were evaluated internally, and changes were proposed on the basis of the first six months of experience. The balanced scorecard initiative was perceived positively, according to the company project leader. In a recent strategic planning meeting with the top-35 managers of the company, the balanced scorecard was often mentioned as a positive development. However, the companywide project leader also felt the process at the time was fragile: ''If I would stop [leading this project] now, then it would collapse. So, apparently it's not yet deeply ingrained''. Balancing the top-down pressure from the chairman ''who simply wanted to have this'' and letting the balanced scorecard be developed bottom-up was very important (and sometimes very difficult), according to the project leader.
Yet, an event in the middle of 2005 again pointed to the importance of allowing an experience-based process. A consulting firm that the company had engaged on another project had devised another, different balanced scorecard proposal for the entire firm. It did not build on what had been developed thus far (i.e., a Greenfield approach) and it was based on what the consulting firm considered to be best practice in other companies (i.e., a Blueprint). The balanced scorecard project leader considered it a very serious mistake if the firm were to adopt the blueprint proposed by the consulting firm and to present it at the next top-35 meeting: ''What are people such as [the logistics PMS-champion] supposed to think if this hangs on the wall in our next top-35 meeting?'' He emphasized that the proposal did not do justice at all to what the company had developed by now, ''which is so specifically modeled to our situation''.
Experimentation with new measures
Experimentation refers to the process of tinkering with a single performance measure while designing and implementing it. This means that design and implementation are interconnected, because the design (from conceptualization to fine-tuning the presentation) is partly done with real data, after measurement and reporting on the new measure have already begun. New performance measures are hardly ever ''right'' straight away, and by allowing adjustments the reliability and validity of the measure can be improved, taking into close consideration the context in which the new performance measure is actually in use. In other words: both conceptual and detailed implementation issues of performance measures are crucial for their effectiveness. Employees typically possess key, yet tacit, knowledge that is required to further refine performance measures. In performance-measurement development ''the devil is in the detail'', as illustrated below with respect to a new efficiency performance measure.
Illustration: experimenting with a new performance
measure for internal transportation and warehouses
We will illustrate experimentation with a new performance measure for the sub-department ''internal transportation and warehouses''. The activities took place from November 2004 until April 2005. The main activity of this sub-department was to store finished goods in the finished goods warehouse (the goods were brought from production to the warehouse by an automatic transportation system) and to load and unload delivery trucks, using forklift trucks. The workload for this activity was unevenly spread throughout the day. For managing efficiency, planning the number of operators per shift was important, as was making sure that the operators were carrying out the work quickly and that certain preparatory activities were done during idle time. A new performance measure was thus needed for efficiency purposes. (Observations of forklift-truck drivers gave a rough indication of efficiency, but the managers and first-line supervisors wanted more factual data to complement these.) Based on existing ideas within the sub-department and the controllers' office, and on discussions with employees in the transportation sub-department, the performance measure was defined as the number of ''transports'' carried out per labor hour. For example, one transport could be to pick up a pallet from the automatic conveyor belt and bring it to a particular location in the warehouse.
Measuring the number of transports was feasible, because each transport was issued by the warehouse management system to terminals on the forklift trucks. Measuring the number of labor hours spent was also possible using the warehouse management system, and so the ratio of the two was readily available. However, not all activities that needed to be carried out were issued by the warehouse management system. For example: particular types of pallets needed to be rotated 90° before they could be transported; the forklift-truck drivers sometimes needed to move lorries for loading and unloading; containers for international destinations needed to be closed and sealed. An initial list of more than 30 such side activities was prepared, and a copy was given to each forklift-truck driver to estimate the time spent on these activities and to add new activities to the list. The final list of side activities was compiled, and the workload for these activities was also expressed as a number of transports (the numerator of the performance measure). A manager in the sub-department ''internal transportation and warehouses'' calculated the performance measure by downloading data from the warehouse management system and preparing the reports using Excel.
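The weekly calculation just described can be sketched as follows. All numbers here are invented for illustration (the paper does not report the actual volumes or the standard time per transport): side activities not captured by the warehouse management system are converted into transport equivalents and added to the numerator.

```python
# Invented weekly figures standing in for the warehouse-management-system download.
system_transports = 4200      # transports issued via the forklift terminals this week
labor_hours = 260.0           # operator hours logged for the week

# Assumed standard time per transport, used to express side-activity workload
# (rotating pallets, moving lorries, sealing containers, ...) in transports.
minutes_per_transport = 1.5
side_activity_minutes = {
    "rotate pallets 90 degrees": 180,
    "move lorries for (un)loading": 120,
    "close and seal export containers": 90,
}
side_equivalents = sum(side_activity_minutes.values()) / minutes_per_transport

# The reported measure: total transports (including side-activity equivalents)
# per labor hour.
transports_per_hour = (system_transports + side_equivalents) / labor_hours
print(round(transports_per_hour, 2))  # 17.15 with these illustrative numbers
```

The point of the side-activity list was precisely this kind of detail: without it, weeks heavy in unrecorded work would look spuriously inefficient.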
The target for the new performance measure
was an engineered target, as this was based on
the detailed design of the processes in the ware-
house. However, not all assumptions underlying
this design were initially met, as some processes
were carried out differently, and this led to an
adjustment of the target. The presentation format
was designed such that it included, in weekly num-
bers, the absolute number of transports and the
number of transports per labor hour (both per
week and cumulative).
After the initial measure and report had been
implemented, weekly evaluations were conducted
with the sub-department’s manager and the team
leaders. They provided relevant and detailed feed-
back, in particular regarding the way in which side
activities were included in the calculation of the
performance measure. For example, for some side
activities it was decided not to estimate the amount
of work involved every week, but rather include a
more general estimation. It was also discussed
whether the performance as measured made sense
and appeared valid from the team leaders’ perspec-
tive, which was found to be the case. The weekly
frequency of reporting the measure was found use-
ful. Also the presentation format was discussed, because it was quite a complex chart. Adjustments to the chart were made; moreover, the term ''overcapacity'' that was initially used was replaced by ''theoretical utilization'', and the chart's original label ''efficiency [sub-department]'' was changed to ''transportations per labor hour in [warehouse name]''.
Similar experimentation activities were conducted for other performance measures. The effect of experimentation was not only that behavioral effects such as commitment were improved. More to our point, experimentation played a vital role in arriving at a performance measure that was more reliable, valid, and understandable in the context. The people who were responsible for the PMS (from the controllers' office and the research assistants) obtained an in-depth understanding of the operational processes the PMS was supposed to capture. These specialists needed to obtain an intimate familiarity with the operational processes, and the operational managers needed to understand the details involved in actually translating these processes into quantitative performance numbers.
In sum, in the above and previous sections we reported findings showing that a PMS is more likely to be seen as a constructive, enabling type of formalization, rather than a negative, coercive form of control, if it is developed incrementally such that the members of the organization can gain actual experience with using performance measures, reflect on this, and draw conclusions to develop the system further. We observed ongoing activities such as reviewing and revising existing performance measures, brainstorming about possible new measures, experimenting with new measures, adding some new measures to the PMS, and dropping some existing measures. There was not a specific point in time when the performance-measurement system was ''ready''. It was also found that the throughput time for actually implementing a new measure was considerable and could easily take half a year to one year. Furthermore, it was crucial to look in detail at existing measures at the start of each measure development. New measures could only be developed after understanding and using as much as possible of what was already in place, such as the precise definitions of existing measures; the various rationales behind these; the data used; the limitations that people experienced with the existing measures; the ideas that people were working on to improve the existing measures; and information system changes that could impact existing reports. Hence, neither a Greenfield nor a Blueprint approach was taken. The developmental approach stimulated the inter-functional exchange of knowledge, which was required for a reliable, valid, and understandable PMS. It also created transparency of the system from the employees' perspective (and transparency of operational processes for the PMS specialists). Therefore, we will explore PMS transparency in the next section.
Internal transparency was emphasized throughout
the development process
Transparency, in combination with flexibility, in the context of performance measures, means that employees (whose performance is going to be measured) are highly involved in operating and managing the PMS as organizational technology. Transparency and flexibility imply that the performance measures are understandable to employees, something they have hands-on experience with, and something they can influence to make it workable for them. In the studied company, performance measures were not owned by, nor understood solely by, the technical specialists in the finance and accounting function. Instead, employees had been an integral part of the development of the measures from the outset. In some cases they were even managing the system after it had been implemented. The operational managers themselves were trained with the information system tools to record data, to pull together data, to create performance reports, to review and revise definitions of performance measures, to change graphical representations of performance reports, etc. The director of logistics was a strong proponent of non-accounting ownership of the PMS, and we will elaborate on this below.
Illustration: director of logistics voices strong
concerns about employee ownership of the PMS
throughout the study
The director of logistics was very outspoken on matters of what could be considered internal transparency. Already in the first meeting during this study, in August 2002 (see Table 1), he emphasized that he wanted the employees rather than the controllers to be responsible for reporting performance. ''If people are not going to take the effort to do the measurements and make the reports, it probably means it's not essential to do them''. Whether new measures would actually be implemented was a kind of relevance test in his mind. In his view, when confronted with performance measurement, employees would ask themselves ''Do I feel responsible for this?'' and ''Does it help me?'', and the answers to these questions would determine their attitudes and level of cooperation.
The director of logistics expressed these con-
cerns at the beginning of the study, when he stated
that the emphasis should be on creating new per-
formance measures for use within the four logistics
sub-departments, and that less emphasis should be
given to measures for logistics as a whole (for use
in the LMT) or for reporting the performance of
logistics to the board. And towards the end of
the project (in May 2005), when discussing the pri-
ority for further development (after the meeting
with middle managers in logistics, March 2005,
described above), he made forceful comments
reflecting concerns for empowering middle managers. As described above, in March 2005 the four
sub-departments within logistics had proposed a
list of 16 performance measures they wanted to
implement, and there was a meeting of the LMT
about prioritizing those measures and discussing
more generally how logistics wanted to move for-
ward with their performance-measurement system.
In that meeting, the question was raised whether, if
needed, the LMT was willing to allocate some of
the resources to support performance measures
for lower-level managers within logistics, and if
they wanted to invest in enabling these managers
to implement and generate performance measures
themselves. Such investment would include, for
example, buying and implementing additional IT
tools, providing training, and allocating time of people in the controller's office. Some members of the
LMT supported this, but also had some reserva-
tions: e.g., ‘‘Only if I also think it is relevant for
our department’’. The logistic director intervened:
‘‘You cannot give a conditional ‘yes’, no ‘yes, but’.
You cannot say ‘no’ to this’’. He stated that, as a
principle, he wanted to support the information
requests from lower-level managers, and subse-
quently there could be a need to prioritize. ‘‘And
those that are selected, we will certainly enable’’.
‘‘If we then say ‘these [particular performance
measures] are the most important ones and these
we will facilitate’, then managers should de?ne
what it entails and come up with a project plan
for each [performance measure]’’.
In sum, during various discussions (the meeting in August 2002 with the middle managers, and later in the LMT) it became clear that different PMS requirements existed for the LMT members and for middle managers in the logistics department. There were no inherent conflicts between these different requirements, because the PMS was conceived of and implemented as something that supported different managerial levels; the enabling intent and nature of the PMS was unmistakable. However, in practical terms, there was a conflict in the sense that resources for PMS development were limited, and choices had to be made regarding whose PMS requirements were going to be implemented first. This observation illustrates a key advantage of transparency due to heavy involvement of employees throughout the development process: as employees were more involved and better enabled (for example, provided with IT systems for PMS development), internal transparency increased and dependency on specialized resources for further PMS development was reduced.
Discussion
This study of performance-measurement sys-
tems (PMS) in operations focused on identifying
a development process that is likely to lead to a
PMS that employees regard as useful for them;
something they want to help develop, and not
exclusively as a control device for senior manage-
ment. Which characteristics of the development
process contribute to an enabling PMS? How can
a PMS be developed as enabling formalization
and not as coercive formalization? Our research
was conducted as a longitudinal case study of the
logistics department in a medium-sized beverage
manufacturing company, from August 2002
through June 2005. Qualitative data were gath-
ered, as well as two waves of survey data.
We found that Professionalism was significantly related to positive attitudes toward performance measures, based on the survey data. The qualitative findings point to professionalism as a force that can be mobilized through a development process that is experience-based and allows for experimentation. Experience-based characterizes a development process that builds on existing skills, practices, and know-how of involved employees, in order to enrich the PMS step-by-step over time. Key qualitative findings supporting the importance of this experience-based characteristic were revealed when a centrally initiated balanced scorecard initiative threatened to overrule the experience-based approach that had been followed thus far within logistics. Experimentation with PMS improvements concerned deliberate employee efforts to test, review, and refine conceptualizations, definitions, data, and presentations of new performance measures. Key findings supporting the importance of experimentation pertained to the way in which specific new measures (such as for warehousing and internal transportation) were developed. Furthermore, we found that transparency contributed to an enabling PMS. This became apparent from the internal discussions on ownership of the performance measures. Local transparency in the context of performance measures was stimulated by deeply involving operational managers in the conceptual and practical development of such measures, but also by making them, rather than people in the controller's office, responsible for periodically ''calculating'' and reporting these measures.³
The theoretical framework developed by Adler and Borys (1996) was key to our study of effective organizational change towards an enabling PMS. We developed their framework further, based on other literatures and the empirical data, in the specific context of PMS for operations. We proposed three characteristics of a development process that is likely to result in an enabling PMS, and we showed a departmental episode in which enabling formalization took place in the form of a much-expanded PMS. Its development process was characterized by experiential inputs, experimentation, and a high degree of professionalism on the part of individual employees. Furthermore, we found that the "norm" of transparency repeatedly and consistently voiced by the departmental director was a force that contributed to the enabling PMS we witnessed in this setting of a logistics department.
Conclusions
This study provides several main contributions.
First, it increases our understanding and apprecia-
tion for a developmental approach leading to an
enabling PMS. We demonstrate that building on
existing performance-measurement experience of
employees, as well as their professionalism, and
allowing experimentation with measures contrib-
ute to the enabling nature of the PMS. Design
and implementation appear interrelated, because
design is partly conducted while obtaining an
empirical understanding of how performance mea-
sures are being used within their actual operational
context. An experience-based process and experimentation are not used to manage resistance to organizational change, such as to create commitment, or to make people feel that they are taken seriously (Piderit, 2000). Rather, an experience-based process and experimentation serve to involve employees in such a way that their knowledge is mobilized to design a more valid, reliable, and understandable PMS in their specific local context. We pointed to the importance of professionalism of employees as a condition for a development process characterized by being experience-based and containing experimentation.
Second, this study provides a possible explana-
tion for why a developmental PMS approach as
described in our case study may establish enabling
formalization. Because a developmental approach to PMS evolution engages all personnel whose performance is being measured, it may compensate for the inherent incompleteness of performance measures. This study connects accounting
considerations about the completeness of perfor-
mance measures in operations (Chapman, 1997;
Lillis, 2002) with the ideas proposed by Adler
and Borys (1996) regarding design principles for
[3] After this study, around January 2006, it was formally decided that the reporting of most performance measures would be the responsibility of particular managers in logistics. The controller's office would conduct audits on these measures, and together with the IT department it had to provide skills and tools to managers in logistics. The controller's office was responsible for reporting on the measures that were included on the scorecards for the LMT and the company's top management.
enabling formalization. A PMS reflects performance on a variety of dimensions, such as efficiency, productivity, quality, and responsiveness. However, it remains difficult to develop a technically complete PMS that fully reflects the dimensions of operational performance, that contains valid measures on all these dimensions, and that includes targets that reliably capture the tradeoffs between opposing performance measures (Chapman, 1997; Lillis, 2002). An experience-based development process that includes experimentation and builds on professionalism of employees (whose performance is being measured) enhances both the validity and acceptance of the PMS.
The study shows, furthermore, that local PMS initiatives have a high chance of being successfully implemented, despite top management's efforts to coerce the local unit into a much faster and consequently less-developmental mode. The action research described here stimulated learning, both on the part of the organization's department and by the university-based participants. After the study, similar PMS activities continued, carried out by members of the logistics department with less active engagement of the university research partners. When reflecting on the project (in July 2006), members of the LMT emphasized the importance of management support for PMS development, because the development process requires significant time of employees at various levels, spent on activities that may not be seen as particularly "strategic" or "glamorous". Employees' individual expertise, insights, and skills, as well as their enthusiasm for performance measurement, were utilized in the process; and they received credit for the results achieved, in terms of an enriched PMS. According to these LMT members, the development process had also benefited from the stimulating and challenging interaction with outsiders—researchers and students in this case.
Although the study is based on a multitude of observations, an obvious but important limitation is that it is based on a single case study. Results may be difficult to generalize to other empirical settings, also because the researchers and research assistants were not only neutral observers; they were also involved in helping to expand and refine the departmental PMS. Against these weaknesses, however, stands the advantage that detailed observations could be made. Discussions with members of the organization were always lively, detailed, and involved. Our ideas were critically challenged, because the ideas pertained to "their" PMS and its development and actual usage. These interactions were not discussions about abstract ideas in the interest of the researchers' project or theory. Rather, they dealt with what made sense to organizational members in the context and language of their own work. We feel that our study, while acknowledging the limitations in terms of possible biases (for example, selective perception and interpretation), validly captures the departmental members' attitude toward performance measurement and the development process that had contributed to that attitude. Furthermore, the insights based on qualitative data have been complemented with representative quantitative data gathered within the department at two points in time.
An important field for future research remains the dualistic role of performance-measurement systems—to provide some of the knowledge necessary for planning and decision making, but also to motivate and monitor people in organizations (Zimmerman, 1997, p. 5)—and the effects of the incompleteness of such systems. We concur with Ahrens and Chapman (2004, p. 298) that "the concept of enabling control presents a clearly defined framework within which future research . . . might further develop our understanding of the ways in which management control systems can simultaneously support the objectives of efficiency and flexibility". Future research could expand our focus to the conditions under which a developmental, enabling PMS approach is most feasible. Feasibility is an issue, because this approach is demanding on employees, senior management, and support functions. A developmental approach assumes informal local experience with quantitative performance measurement, and that employees are quite willing and able to build on that experience in order to further develop the PMS. Beyond that, we expect that other requirements also play a role. For example, time is needed to really understand in detail what is already in place, and to evaluate what will be reused and what will not. Time and local autonomy are needed so as not to "fix" the PMS too soon, so that improvements and adjustments to local conditions can be made. Furthermore, senior management needs to have a clear understanding of their objective for developing a PMS: is it to monitor and report upward in the hierarchy, or is it also (or even primarily) intended to support lower-level employees in their work? Senior management also needs to behave in accordance with an enabling PMS: balancing between recognizing the incompleteness of the PMS (so there is a story next to measured outcomes) and demanding certain performance. And a developmental approach needs to be facilitated in terms of resources and rewards, such as time to work on it, bestowing prestige upon PMS developers, support from experts, and development of IT tools with which non-specialists can work. Such facilitation requires high-level support from IT, and cross-functional cooperation of finance and accounting professionals. In sum, the developmental approach reported in this paper is demanding and may not be feasible in every organization.
The developmental approach may also not be equally relevant to every organization. While the incompleteness of PMS helps to explain why a developmental approach affects the enabling nature of PMS, in certain organizations the PMS may be well developed and stable. A developmental approach to shaping the PMS may also seem less relevant if operations managers have other kinds of information that are more informative than formal performance measures, such as direct observations of processes. So, future studies may focus on the question: what are the antecedents of an effective developmental PMS approach? Furthermore, investigating the benefits to the organization (such as employee learning, or financial benefits), as well as assessing other consequences of a developmental approach, is an intriguing line of future research.
To conclude, this study analyzed and illustrated a developmental approach to PMS development, which harvests existing informal measurement practices, lets new measurement experiments blossom, and at times prunes the extant measurement system. This developmental approach works through employees' local measurement experiences and experimentation with refined and new measures, and it mobilizes employees' professionalism. Future research could help to better understand the antecedents and consequences of this developmental approach toward performance-measurement systems.
Acknowledgements
The authors thank Thomas Ahrens, Chris Chapman, Kim Langfield-Smith, Jeff Hicks, the journal's Editor and Reviewers, company employees, and workshop participants at the Academy of Management 2005, the Warwick Business School, the Global Management Accounting Research Symposium 2006, and the New Directions in Management Accounting Conference 2006 for their comments and suggestions.
Appendix
This appendix contains the questionnaire items
for the newly developed constructs Attitude toward
performance measures and Professionalism.
Attitude toward performance measures
In your department a number of performance measures (or KPIs: "Key Performance Indicators") are used, as shown in the appendix to this questionnaire. We ask your opinion about the KPIs within your department. Please give a score from 1 to 7.
1. How familiar are you with the KPIs of your
department?
2. How understandable do you find the KPIs of your department?
3. How reliable do you consider the KPIs of
your department?
4. How validly do the KPIs reflect the performance of your department?
5. How extensively are the measurements of the
KPIs used in your department?
6. How involved are you within your depart-
ment in the development of better KPIs?
7. How do you experience the process of devel-
oping better KPIs within your department?
8. How useful do you consider the present
departmental KPIs for the Logistics
Department?
9. How useful do you consider the present
departmental KPIs for your department?
10. How useful do you consider the present departmental KPIs for you personally?
These questions were answered on seven-point
Likert scales anchored to the key concept in each
question. For example, ‘‘How reliable do you con-
sider the KPIs of your department?’’ was anchored
on ‘‘very unreliable’’, ‘‘unreliable’’, ‘‘somewhat
unreliable’’, ‘‘neutral’’, ‘‘somewhat reliable’’, ‘‘reli-
able’’, ‘‘very reliable’’.
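As a hypothetical illustration of how responses to such anchored seven-point items can be combined into a single construct score, consider the sketch below. The scoring procedure (an equal-weight mean of the ten items), the function name, and the example responses are our own assumptions; the paper does not prescribe a scoring formula.

```python
# Hypothetical sketch (not from the paper): combining the ten 7-point
# Likert items into one "Attitude toward performance measures" score.
# The equal-weight mean and the function name are assumptions.

def attitude_score(responses):
    """Return the mean of ten 1..7 Likert responses (items 1-10)."""
    if len(responses) != 10:
        raise ValueError("expected answers to all 10 items")
    if any(not 1 <= r <= 7 for r in responses):
        raise ValueError("Likert responses must lie in 1..7")
    return sum(responses) / len(responses)

# One invented respondent's answers to items 1-10:
print(attitude_score([6, 5, 5, 4, 6, 3, 4, 5, 5, 4]))  # 4.7
```

Weighted or factor-based scoring would be an alternative design choice; the simple mean is shown only because it is the most common convention for multi-item Likert scales.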
Professionalism
We ask your opinion about the following statements. You can indicate the extent to which you agree with each statement with a number from 1 through 7:
1. I always contribute to new ideas at work.
2. At work, I like to be active improving things.
3. I like to do things well in my work.
4. The way I conduct my activities is very consistent with what is recommended by professionals.
5. I obey the rules at work.
6. I adhere to standards of integrity that pertain
to my work.
7. I sometimes act in ways I should not, because
it will not be noticed anyway (reversely
coded).
8. I consider the manner of my daily work "professional".
9. The way in which my work is organized is professional.
10. I am busy with my profession or work also
outside working hours.
11. I can demonstrate to other people that my
work is important.
12. I learn every day at work.
13. I have colleagues at work from whom I
learn.
14. I enjoy reading about my profession or work.
15. I take part in activities outside working
hours that improve my professionalism.
16. I am always keen to follow suitable external
courses.
17. I am always keen to follow suitable internal
courses.
18. I learn from problems I encounter at work.
19. I am an active member of an organization (or network) that helps advance my profession.
20. I get sufficient autonomy to direct my work.
21. I take my personal professional development
seriously.
22. I keep myself informed about new develop-
ments in my profession or work.
23. I am actively improving at work.[4]
24. I would like to pursue more external training.[4]
25. I would like to pursue more internal training.[4]
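Item 7 of this scale is reversely coded: on a seven-point scale a raw response x is conventionally recoded as 8 - x before averaging, so that strong agreement with the negative statement lowers the score. The sketch below illustrates that recoding; the function name, the equal-weight mean, and the example responses are our own assumptions, not taken from the paper.

```python
# Hypothetical sketch (not from the paper): scoring Professionalism
# items, recoding the reversely coded item 7 as 8 - x on the 1..7 scale.

REVERSED_ITEMS = {7}  # item numbers that are reversely coded

def professionalism_score(item_responses):
    """item_responses maps item number -> raw 1..7 response.

    Returns the mean response after recoding reversely coded items."""
    recoded = []
    for item, raw in item_responses.items():
        if not 1 <= raw <= 7:
            raise ValueError(f"item {item}: response must lie in 1..7")
        recoded.append(8 - raw if item in REVERSED_ITEMS else raw)
    return sum(recoded) / len(recoded)

# A few invented responses; answering item 7 ("I sometimes act in ways
# I should not ...") with 2 is recoded to 6:
print(professionalism_score({1: 6, 2: 6, 3: 7, 7: 2}))  # 6.25
```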
References
Abernethy, M. A., & Brownell, P. (1999). The role of budgets in organizations facing strategic change: An exploratory study. Accounting, Organizations and Society, 24(3), 189–204.
Abernethy, M. A., & Lillis, A. M. (1995). The impact of manufacturing flexibility on management control system design. Accounting, Organizations and Society, 20(4), 241–258.
Abrahamson, E. (2000). Change without pain. Harvard
Business Review, 78(4), 75–79.
[Response scale for the Professionalism items: 1 = very much disagree, 2 = disagree, 3 = moderately disagree, 4 = neutral, 5 = moderately agree, 6 = agree, 7 = very much agree.]
[4] These items were added only after the first administration of the questionnaire.
Adler, P. S., & Borys, B. (1996). Two types of bureaucracy:
Enabling and coercive. Administrative Science Quarterly,
41(March), 61–89.
Ahrens, T. A., & Chapman, C. S. (2004). Accounting for flexibility and efficiency: A field study of management control systems in a restaurant chain. Contemporary Accounting Research, 21(2), 271–301.
Ahrens, T. A., & Chapman, C. S. (2006). Doing qualitative field research in management accounting: Positioning data to contribute to theory. Accounting, Organizations and Society, 31, 819–841.
Atkinson, A. A., & Shafir, W. (1998). Standards for field research in management accounting. Journal of Management Accounting Research, 10, 41–68.
Baer, M., & Frese, M. (2003). Innovation is not enough: Climates for initiative and psychological safety, process innovations, and firm performance. Journal of Organizational Behavior, 24, 45–68.
Baines, A., & Langfield-Smith, K. (2003). Antecedents to management accounting change: A structural equation approach. Accounting, Organizations and Society, 28, 675–698.
Banker, R. D., Potter, G., & Schroeder, R. G. (1993).
Reporting manufacturing performance measures to work-
ers: An empirical study. Journal of Management Accounting
Research, 5, 33–55.
Bass, B. M., & Avolio, B. J. (2000). MLQ multifactor leadership
questionnaire (2nd ed.). Redwood City: Mind Garden
(Technical Report).
Beamon, B. M. (1999). Measuring supply chain performance.
International Journal of Operations & Production Manage-
ment, 19(3), 275–292.
Bisbe, J., & Otley, D. (2004). The effects of the interactive use of management control systems on product innovation. Accounting, Organizations and Society, 29, 709–737.
Bourne, M., Neely, A., Mills, J., & Platts, K. (2003a).
Implementing performance measurement systems: A litera-
ture review. International Journal of Business Performance
Management, 5(1), 1–24.
Bourne, M., Neely, A., Mills, J., & Platts, K. (2003b). Why
some performance measurement initiatives fail: Lessons
from the change management literature. International Jour-
nal of Business Performance Management, 5(2–3), 245–269.
Caldwell, S. D., Herold, D. M., & Fedor, D. B. (2004). Toward an understanding of the relationship among organizational change, individual differences, and changes in person-environment fit: A cross-level study. Journal of Applied Psychology, 89(5), 868–882.
Carlile, P. R. (2002). A pragmatic view of knowledge and
boundaries: Boundary objects in new product development.
Organization Science, 13(4), 442–455.
Carmona, S., & Grönlund, A. (2003). Measures vs actions: The balanced scorecard in Swedish law enforcement. International Journal of Operations & Production Management, 23(11), 1475–1496.
Cavalluzzo, K. S., & Ittner, C. D. (2004). Implementing
performance measurement innovations: Evidence from
government. Accounting, Organizations and Society, 29,
243–267.
Chapman, C. S. (1997). Reflections on a contingent view of accounting. Accounting, Organizations and Society, 22, 189–205.
Chow, C. W., Kato, Y., & Merchant, K. A. (1996). The use of organizational controls and their effects on data manipulation and management myopia: A Japan vs U.S. comparison. Accounting, Organizations and Society, 21(2–3), 175–192.
Davila, A., & Wouters, M. (2005). Managing budget emphasis
through the explicit design of conditional budgetary slack.
Accounting, Organizations and Society, 30, 587–608.
de Haas, M., & Algera, J. A. (2002). Demonstrating the effect of the strategic dialogue: Participation in designing the management control system. Management Accounting Research, 13, 41–69.
Eccles, R. G. (1991). The performance measurement manifesto.
Harvard Business Review, 69(1), 131–137.
Edmondson, A. (1999). Psychological safety and learning
behavior in work teams. Administrative Science Quarterly,
44, 350–383.
Fullerton, R. R., & McWatters, C. S. (2002). The role of
performance measures and incentive systems in relation to
the degree of JIT implementation. Accounting, Organiza-
tions and Society, 27, 711–735.
Hall, R. H. (1968). Professionalization and bureaucratization. American Sociological Review, 33(1), 92–104.
Hall, R. W., Johnson, H. T., & Turney, P. B. B. (1990).
Measuring up: Charting pathways to manufacturing excel-
lence. Homewood, IL: Business One Irwin.
Jaworski, B. J., & Young, S. M. (1992). Dysfunctional behavior
and management control: An empirical study of marketing
managers. Accounting, Organizations and Society, 17(1),
17–35.
Jönsson, S., & Grönlund, A. (1988). Life with a sub-contractor: New technology and management accounting. Accounting, Organizations and Society, 13(5), 512–532.
Kaplan, R. S. (1983). Measuring manufacturing performance:
A new challenge for managerial accounting research. The
Accounting Review, 58(4), 686–705.
Kaplan, R. S. (Ed.). (1990). Measures for manufacturing
excellence. Boston, MA: Harvard Business School Press.
Labro, E., & Tuomela, T.-S. (2003). On bringing more action
into management accounting research: Process consider-
ations based on two constructive case studies. European
Accounting Review, 12(3), 409–442.
Lillis, A. M. (2002). Managing multiple dimensions of manu-
facturing performance—An exploratory study. Accounting,
Organizations and Society, 27, 497–529.
Lowe, A., & Jones, A. (2004). Emergent strategy and the
measurement of performance: The formulation of perfor-
mance indicators at the microlevel. Organization Studies,
25(8), 1313–1337.
Maiga, A. S., & Jacobs, F. A. (2005). Antecedents and
consequences of quality performance. Behavioral Research
in Accounting, 17, 111–131.
Malina, M. A., & Selto, F. H. (2001). Communicating and controlling strategy: An empirical study of the effectiveness of the balanced scorecard. Journal of Management Accounting Research, 13, 47–90.
Maskell, B. H. (1991). Performance measurement for world class manufacturing—A model for American companies. Cambridge, MA: Productivity Press.
McKinnon, S. M., & Bruns, W. J. (1992). The information
mosaic. Boston, MA: Harvard Business School Press.
Medori, D., & Steeple, D. (2000). A framework for auditing
and enhancing performance measurement systems. Interna-
tional Journal of Operations & Production Management,
20(5), 520–533.
Nanni, A. J., Dixon, J. R., & Vollmann, T. E. (1992). Integrated
performance measurement: Management accounting to
support the new manufacturing realities. Journal of
Management Accounting Research, 4, 1–19.
Otley, D. (1999). Performance management: A framework for
management control systems research. Management
Accounting Research, 10, 363–382.
Perera, S., Harrison, G., & Poole, M. (1997). Customer-focused manufacturing strategy and the use of operations-based non-financial performance measures: A research note. Accounting, Organizations and Society, 22, 557–572.
Piderit, S. K. (2000). Rethinking resistance and recognizing
ambivalence: A multidimensional view of attitudes toward
an organizational change. Academy of Management Review,
25(4), 783–794.
Qu, S. Q. (2006). Translating popular accounting ideas into
action: The role of inscriptions in customizing the balanced
scorecard. University of Alberta, School of Business.
Ramaswami, S. N. (1996). Marketing controls and dysfunc-
tional employee behaviors: A test of traditional and
contingency theory postulates. Journal of Marketing,
60(2), 105–120.
Ramaswami, S. N. (2002). Influence of control systems on opportunistic behaviors of salespeople: A test of gender differences. Journal of Personal Selling & Sales Management, 22(3), 173–188.
Reagans, R., Argote, L., & Brooks, D. (2005). Individual
experience and experience working together: Predicting
learning rates from knowing who knows what and knowing
how to work together. Management Science, 51(6), 869–881.
Scott, T. W., & Tiessen, P. (1999). Performance measurement and managerial teams. Accounting, Organizations and Society, 24, 263–285.
Simons, R. (1990). The role of management control systems in
creating competitive advantage: New perspectives. Account-
ing, Organizations and Society, 15, 127–143.
Simons, R. (1991). Strategic orientation and top management
attention to control systems. Strategic Management Journal,
12(1), 49–62.
Simons, R. (1994). How new top managers use control systems
as levers of strategic renewal. Strategic Management Jour-
nal, 15(3), 169–189.
Simons, R. (1995). Control in an age of empowerment. Harvard
Business Review, 73(2), 80–88.
Snizek, W. (1972). Hall's Professionalism scale: An empirical reassessment. American Sociological Review, 37(1), 109–114.
Sorge, A., & Witteloostuijn, A. van. (2004). The (non)sense of organizational change: An essay about universal management hypes, sick consultancy metaphors, and healthy organization theories. Organization Studies, 25(7), 1205–1231.
Stanton, J. M., Balzer, W. K., Smith, P. C., Parra, L. F., & Ironson, G. (2001). A general measure of work stress: The stress in general scale. Educational and Psychological Measurement, 61(5), 866–888.
Swailes, S. (2003). Professionalism: Evolution and measure-
ment. The Service Industries Journal, 23(2), 130–149.
Townley, B., Cooper, D. J., & Oakes, L. (2003). Performance
measures and the rationalization of organizations. Organi-
zation Studies, 24(7), 1045–1071.
Van der Stede, W. A. (2000). The relationship between two
consequences of budgetary controls: Budgetary slack crea-
tion and managerial short-term orientation. Accounting,
Organizations and Society, 25, 609–622.
West, J., & Iansiti, M. (2003). Experience, experimentation, and
the accumulation of knowledge: The evolution of R&D in
the semiconductor industry. Research Policy, 32(5),
809–825.
Yanow, D. (2004). Translating local knowledge at organiza-
tional peripheries. British Journal of Management, 15,
S9–S25.
Zollo, M., & Winter, S. G. (2002). Deliberate learning and the
evolution of dynamic capabilities. Organization Science,
13(3), 339–351.
Zimmerman, J. L. (1997). Accounting for decision making and
control (2nd ed.). Chicago, IL: Irwin.
Developing performance-measurement systems as enabling formalization: A longitudinal field study of a logistics department
Marc Wouters*, Celeste Wilderom
University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
Abstract
This paper reports on a developmental approach to performance-measurement systems (PMS). In particular, we look at characteristics of a development process that result in the PMS being perceived by employees as enabling of their work, rather than as primarily a control device for use by senior management. We will refer to such a PMS as an "enabling PMS". The theoretical part of the study builds on ideas of enabling versus coercive formalization [Adler, P. S., & Borys, B. (1996). Two types of bureaucracy: Enabling and coercive. Administrative Science Quarterly 41 (March), 61–89]; on notions of organizational learning (e.g., [Zollo, M., & Winter, S. G. (2002). Deliberate learning and the evolution of dynamic capabilities. Organization Science 13(3), 339–351]); and on awareness of the incompleteness of performance measures (e.g., [Chapman, C. S. (1997). Reflections on a contingent view of accounting. Accounting, Organizations and Society 22, 189–205; Lillis, A. M. (2002). Managing multiple dimensions of manufacturing performance—An exploratory study. Accounting, Organizations and Society 27, 497–529]). The empirical context entails a mixed-method, 3-year longitudinal study of the logistics department of a medium-sized company in the beverage manufacturing industry. Qualitative data were gathered through interviews, participation in meetings, action research, and review of company documents. We also analyzed two waves of quantitative survey data, gathered from a panel of 42 employees. We find that a development process that is experience-based contributes to the enabling nature of the PMS, as it builds on existing skills, local practices, and know-how on performance measurement to enrich the PMS step-by-step over time. Also, experimentation with specific performance measures was found to enhance the enabling nature of the PMS: testing, reviewing, and refinement of conceptualizations, definitions, data, and presentations of new performance measures. Professionalism was significantly related to positive attitude toward performance measures in our survey data. The results also illustrate that transparency of the PMS itself is key to an enabling PMS.
© 2007 Elsevier Ltd. All rights reserved.
doi:10.1016/j.aos.2007.05.002
* Corresponding author. Tel.: +31 53 4894498; fax: +31 53 4892159. E-mail address: [email protected] (M. Wouters).
Introduction
Performance-measurement systems (PMS) are mostly studied from the perspective of top management: how such systems allow top managers to monitor whether given objectives have been achieved. In addition, a PMS may also help top managers to formulate strategy, specify the operational actions needed for implementation, set targets in relation to current performance (so as to reveal priorities for operational improvement), and clarify mutual expectations (Abernethy & Brownell, 1999; Bisbe & Otley, 2004; de Haas & Algera, 2002; Simons, 1990, 1991, 1994, 1995).
However, what about the managers who are the
subject of PMS—whose performance is being mea-
sured? There are few accounts in the PMS literature
where lower and middle-level employees and man-
agers consider a PMS as something that supports
them, that they can use for their own purposes to
assess how things are going, identify problems,
prioritize issues, develop ideas for improvement,
engineer solutions for concrete problems, or make
decisions (Jönsson & Grönlund, 1988). We refer to an "enabling PMS" when it is perceived by employees as enabling of their work, rather than as primarily a control device for use by senior management.
This study investigates performance-measurement systems in operations, closely connected to the specifics of particular operational processes.
Building on Adler and Borys (1996), we conceive
of a PMS as a form of formalization. Coercive for-
malization aims to force employee compliance,
while enabling formalization makes employees feel
facilitated or motivated by the rules and the sys-
tems in place. Adler and Borys (1996) contrast
enabling and coercive types of formalization along
three dimensions: (1) characteristics of the system,
(2) the process of designing the system, and (3)
the implementation of the system. These dimen-
sions are relevant for understanding the role of
management control in organizations, as demon-
strated by Ahrens and Chapman (2004). While
their study primarily focused on (1) the characteristics of the system—in terms of repair, transparency, and flexibility—we focus on points (2) and (3): the development process for designing and implementing the performance-measurement system. We expect that the manner in which the development process is carried out affects the extent to which the PMS will be perceived by employees as enabling. In this paper we address the question: Which characteristics of a PMS development process enhance the enabling nature of the PMS?
Previous research has shown that developing an
enabling PMS is a delicate process. Townley, Coo-
per, and Oakes (2003), for example, demonstrated
that while the introduction of performance mea-
sures may begin as an initiative considered by var-
ious levels in the organization to be nuanced,
supporting and constructive, the process may eas-
ily derail: ‘‘From an initial discourse that empha-
sized a potential for reasoned justi?cation, debate
and dialogue quickly collapsed into a standard
template’’ (Townley et al., 2003, p. 1058). Qu
(2006) found that consultants considered the
incorporation of client input—and especially
information on existing reports and speci?c mea-
sures already in use—crucial for the production
of a usable PMS. Failure to include such input
was a major source of frustration for participants
in the development process (Qu, 2006). Yet, there
is little empirical knowledge about what kind of
a development process fosters an enabling PMS.
‘‘The balanced scorecard literature also indicates
that it [is] as much the process of establishing a
scorecard that yields benefit as the resultant
measurement schema. However, the literature is
remarkably silent on this point’’ (Otley, 1999,
p. 377, emphasis added).
This study aims to contribute to the literature by
theoretically and empirically investigating charac-
teristics of a PMS development process that
enhance the enabling nature of the PMS. We build
on the framework of Adler and Borys (1996), who
propose that user involvement and professionalism
contribute to enabling formalization. We develop
these ideas further in the context of PMS. We con-
sider the inherent incompleteness of PMS in terms
of the inability to reflect the various dimensions of operational performance and tradeoffs among these
(Lillis, 2002), and therefore user involvement needs
to be mobilized, both in terms of existing experi-
ence with quanti?cation of performance, and also
throughout the design and implementation process
M. Wouters, C. Wilderom / Accounting, Organizations and Society 33 (2008) 488–516 489
of new measures. Design and implementation
include several activities, such as shaping and fur-
ther improving the best fitting definitions of useful performance measures; finding or creating mea-
surement data for determining the actual values of
these performance measures; building information
systems for reporting performance-measurement
results; setting performance level targets for perfor-
mance measures; and periodically reviewing, revis-
ing and re?ning both single measures and the
overall PMS. We look at such activities from the
perspective of how organizations can learn by care-
fully building on and reusing existing experiences
(cf. Zollo & Winter, 2002), and experimenting and
prototyping with new practices (cf. Carlile, 2002).
‘‘Design’’ and ‘‘implementation’’ are hard to distin-
guish (Adler & Borys, 1996), and we prefer to com-
bine them and use the phrase ‘‘development
process’’. This reflects that design and implementa-
tion activities are conducted in a mutually constitu-
tive, iterative fashion: employees learn through
implementation, on the basis of which they adjust
the design of the PMS, which leads to new imple-
mentation activities, etc. Such an approach assumes
a considerable level of professionalism.
The empirical ?ndings are based on a 3-year
longitudinal case study within the logistics depart-
ment of a medium-sized company in the beverage
manufacturing industry. We gathered both survey
and qualitative data. The research includes not
only observation of the company’s activities, but
also elements of action research, since we were
involved in the development of a departmental
PMS.
The structure of this paper is as follows. The next section introduces performance measures as either coercive or enabling formalization. The subsequent section puts forward our propositions regarding the characteristics of a developmental process that contributes to an enabling PMS. The research methods are then described. Empirical results are presented and discussed in the following two sections, and the conclusion is given in the final section.
Performance measures as enabling or coercive
formalization
Traditionally, performance measures in opera-
tions put a one-sided emphasis on minimizing direct
costs through low material costs, high capacity uti-
lization, and high direct labor e?ciency. However,
early research identi?ed the need to broaden perfor-
mance-measurement systems to support new opera-
tions practices, and advocated the use of measures
for quality, throughput times, flexibility, etc. (Bea-
mon, 1999; Eccles, 1991; Hall, Johnson, & Turney,
1990; Kaplan, 1983, 1990; Maskell, 1991; Nanni,
Dixon, & Vollmann, 1992). Empirical studies have
supported relationships between the pursuit of
specific operational strategies and the expansion of traditional efficiency-focused PMS to include
new performance measures (e.g., Abernathy & Lil-
lis, 1995; Baines &amp; Langfield-Smith, 2003; Banker,
Potter, & Schroeder, 1993; Fullerton & McWatters,
2002; Perera, Harrison, & Poole, 1997; Maiga &
Jacobs, 2005).
But despite the broadening of PMS—in both research and practice—to embrace a wider portfolio of measures, the approach to developing the PMS has received far less attention in empirical studies. In this section, we will first discuss issues
in regard to the incompleteness of performance-
measurement systems, and thereafter we will intro-
duce more speci?cally the control ideas laid out by
Adler and Borys (1996).
Incompleteness of a PMS
Incompleteness of PMS arises when strategic
performance measures are disaggregated into dif-
ferent performance dimensions, separate periods
and organizational sub-units, and the dependen-
cies between disaggregated measures are not
reflected in the PMS (Lillis, 2002). For example,
attempts to improve responsiveness may lead to
more frequent changeovers, demands for shorter
lead times, and higher inventories, and when such
tradeoffs are inadequately reflected in the PMS
there is a likely ‘‘friction created by the failure
to determine and adjust for the implications of
profit centre strategy on the manufacturing cost
function’’ (Lillis, 2002, p. 510). Designing a per-
fectly complete PMS remains challenging, if not
impossible, and would require nothing less than
the expression of all relevant aspects of perfor-
mance in quantitative terms (financial and non-financial), estimation of the tradeoffs among such dimensions of performance in the setting of targets for financial and non-financial performance measures, and consideration of interdependencies between different organizational units (and differ-
ent time periods) in the PMS (see, e.g., Lillis,
2002).
And the greater the incompleteness, the more
the PMS may be perceived by functional sub-units
as a ‘‘negative’’, ‘‘unfair’’, ‘‘threatening’’, or ‘‘coer-
cive’’ instrument of management control. Malina
and Selto (2001) found that perceptions of PMS
were more negative if measures were inaccurate
or subjective, and if benchmarks were considered
inappropriate but nevertheless used for evaluation.
In other words, employees may feel that their per-
formance ‘‘as measured’’ (by the metrics) does not
truthfully reflect what they see as their ‘‘real’’ con-
tribution to the organization. For example, they
may find it unfair that contingencies (uncontrolla-
ble circumstances), materializing after targets have
been set, are not considered for adjusting those
targets; employees may not believe their supervi-
sors use the PMS in a fair way for evaluating their
performance; employees may regard target levels
as overly ambitious and unrealistic; or they may
feel their personal risk has increased too much
because of consequences that are tied to PMS
results.
Several studies have found evidence of the rela-
tionship between the use of controls and defensive
behavior—such as negotiating targets towards
more easily achievable levels, obtaining surplus
resources for completing tasks, concealing wind-
falls that have made tasks easier than anticipated,
or even taking operational decisions just to make
the results ‘‘as measured’’ look good at the expense
of negative long-term e?ects—sometimes moder-
ated by variables such as measurability of outputs,
the extent to which input–output relationships of
processes are understood, and the style in which
the controls are used (e.g., Carmona &amp; Grönlund,
2003; Chow, Kato, & Merchant, 1996; Jaworski &
Young, 1992; Ramaswami, 1996, 2002; Van der
Stede, 2000).
Several studies have identi?ed ways in which
?rms manage incompleteness of PMS. Lillis
(2002) found that firms sometimes loosened con-
trol reactions to variances, implemented more
innovative PMS, integrated the PMS with other
management systems, or used measurement weigh-
tings. Davila and Wouters (2005) described a firm
that designed a budgeting system that reduced
emphasis on cost targets and provided budgetary
slack when performance attributes other than
costs required attention. Van der Stede (2000)
found that firms balanced the strictness of controls
with a business unit’s strategy. Business units fol-
lowing a differentiation strategy implemented less
rigid budgetary control, which allowed for some
budgetary slack and stimulated managers to think
long term.
Enabling formalization
Incompleteness explains why designing and implementing PMS in operations is difficult and requires a deliberate and careful approach. For
developing our propositions about a development
process that is likely to enhance the enabling nat-
ure of the PMS, we build on the framework of
Adler and Borys (1996). First, because this frame-
work conceptualizes the issue that is central to our
accounting study: the distinction between perfor-
mance-measurement systems that only serve
higher-management needs and control employees’
behavior (coercive formalization), versus systems
that support employees to do their work better
(by providing feedback, identifying problems, revealing improvement opportunities, helping to prioritize action, etc.): enabling formalization. Second, because this framework helps to articulate
that characteristics of the system itself, as well as
processes for design and implementation of the
system may contribute to the coercive or enabling
nature of formalization. Third, because Adler and
Borys (1996) offer initial suggestions about what
kind of design and implementation process is likely
to foster the enabling nature of formalization, and
so it helps to delineate our intended contribution:
to draw on organizational literature as well as
the empirical material to further develop the
understanding of enabling PMS development.
Adler and Borys (1996, p. 66) propose that
‘‘employees’ attitudes to formalization depend on
the type of formalization with which they are con-
fronted’’. They suggest that employee attitudes are
more positive when formalization enables them to
better master their tasks, and will be more negative
when it ‘‘functions as a means by which manage-
ment attempts to coerce employees’ effort and
compliance’’. Enabling formalization mobilizes
rather than replaces employees’ intelligence, and
acts to ‘‘help users form a mental model of the sys-
tem they are using’’ (p. 70). As such, these kinds of
‘‘procedures provide organizational memory that
captures lessons learned from experience’’ (p. 69).
It is thus relevant to better understand how
organizations may achieve enabling formalization.
Adler and Borys (1996) suggest that whether formalization has an enabling or coercive character depends on characteristics of the formalization as well as on the process of designing and implementing the system. These characteristics of formalization are internal and global transparency, and flexibility and repair. We will discuss these first
before we begin our analysis of a development pro-
cess conducive to enabling formalization. Internal
transparency means that users have a good under-
standing of the logic of a system’s internal function
and they have information on its status. Enabling
formalization provides users with a clear under-
standing of the underlying rationale for why
certain control mechanisms are in place. Such
formalization also codi?es best-practice experi-
ences, and users are provided with feedback on
their performance. Global transparency refers to
the intelligibility for employees of the broader
system and context within which they do their
work. Controls are designed to afford employees an understanding of where their own tasks fit into the whole. Information from beyond one’s specific domain is available. Flexibility means that
users can make controlling decisions after enabling
systems have provided information. ‘‘Flexible
systems encourage users to modify the interface
and add functionality to suit their specific work
demands’’ (p. 74). Repair means that users can
mend and improve the work process themselves
rather than allowing breakdowns and other
non-programmable events to force the work pro-
cesses to a halt. We refer to Ahrens and Chap-
man (2004) who discuss these characteristics of
enabling formalization in the context of manage-
ment control systems.
As mentioned above, Adler and Borys (1996)
also contrast enabling and coercive types of for-
malization in terms of the processes of designing
and implementing the system. They discuss some
of the characteristics of these processes that are
likely to lead to enabling formalization, such as
employee voice, employee skills, process control,
and flexibility in changing controls. They propose
that ‘‘employee involvement in the formulation of
procedures is likely to have a positive effect on
both attitudinal and technical outcomes’’ (p. 75).
Principles for the design of equipment technology,
they suggest—such as a focus on users and usabil-
ity, early and continual user testing, and iterative
design processes—carry over to the development
of formalization as ‘‘organizational technology’’.
However, the design and implementation of for-
malization are typically intertwined: while equip-
ment may be bought ‘‘off the shelf’’, customized from existing modules, or designed-to-specification outside the client organization, ‘‘organizational technology’’ takes shape within the specific
implementation context. Adler and Borys (1996)
call for more research to explore whether and
how organizations can introduce enabling types
of formalization. We will build on their framework
and develop their ideas further specifically in the
context of PMS development.
Propositions about a developmental approach for
enabling PMS
This section sets out three propositions in
response to our research question: ‘‘What charac-
teristics of a PMS development process enhance
the enabling nature of the PMS?’’ We propose that a development process that is characterized by (1) being experience-based, (2) allowing experimentation, and (3) building on employees’ professionalism is likely to result in an enabling PMS.
Experience-based development involves the identification, appreciation, documentation, evaluation, and consolidation of existing local knowledge and experience with respect to quantitatively capturing and reporting relevant aspects of performance. Experimentation involves the first development of a performance measure and the subsequent testing and refinement (in several rounds) of its conceptualization, definition, required data, IT tools, and presentation, together with employees (whose performance is going to be measured), to arrive at a measure that is a valid, reliable, and understandable indicator of performance in a specific local context. Professionalism of employees denotes an
orientation toward learning for the purpose of
improving work practices. We underpin these
propositions in the remainder of this section; and
in the section with empirical results we will discuss
and illustrate them further.
We feel that presenting propositions before the
empirical study helps to better discuss our theoret-
ical ideas in relation to the literature, and empirical
findings in relation to the theory; it is not to suggest that theory and findings have been developed sequentially. Rather, the nature of the research
process was as discussed by Ahrens and Chapman
(2006, p. 836): ‘‘Problem, theory, and data influ-
ence each other throughout the research process.
The process is one of iteratively seeking to gener-
ate a plausible fit between problem, theory, and
data’’. Before the study started, we explicitly
intended to explore an experience-based develop-
ment process and what we then called continu-
ous revision of the PMS (later formulated as
‘‘experimentation’’). These ideas took further
shape during the course of the study through going
back-and-forth between the fieldwork and the
literature. Furthermore, the development of the
survey instrument, which started about 15 months
into the study, involved an extensive process of
focusing and making connections between the
field and the literature, and in this stage the
role of ‘‘professionalism’’ was highlighted and sub-
sequently focused upon in the fieldwork. Later
in the research project, we became familiar with
the framework of Adler and Borys (1996), and
this was found to be a very powerful way for orga-
nizing the theoretical discussion and empirical
results.
Experience-based development process
Organizational change processes may take
advantage of local knowledge, which can be defined
as ‘‘the very mundane, yet expert understanding of
and practical reasoning about local conditions
derived from lived experience’’ (Yanow, 2004, p.
S12). Organizational change processes that utilize
local knowledge are more likely to lead to sustain-
able changes and improvements (Abrahamson,
2000; Lowe & Jones, 2004; Zollo & Winter, 2002).
In the context of PMS, we propose that a develop-
ment process that is experience-based is likely to
have a positive effect on the enabling nature of the
PMS. An experience-based development process
involves the identification, appreciation, documentation, evaluation, and consolidation of existing local knowledge and experience with respect to quantitatively capturing and reporting relevant aspects of performance. We will elaborate on the idea of an experience-based development process
in this section.
Many of the proposed approaches for design
and implementation in the literature, however,
seem to pay little attention to either experience
or user involvement. Most approaches to PMS
design and implementation (see Bourne, Neely,
Mills, & Platts (2003a) for a review of the PMS
development processes literature) focus on how
the goals set at the top of the organization can bet-
ter guide actions taken lower in the organization.
First steps in the typical development process are
to clearly define the overall (i.e., corporate-level) strategic objectives and then the local operations’ specific contribution toward achieving these over-
all strategic objectives. Thus, the organization’s
global performance measures and functional mea-
sures are derived. The PMS is typically designed
from the perspective of top-management, as is
apparent in the following representative character-
istics: (1) explicit reflection of the firm’s strategic objectives and subsequent break-down of those objectives to more specific objectives at lower
managerial levels, (2) the signaling of performance
levels that are below targets, (3) the ability to
‘‘drill-down’’ and get more details when needed,
(4) striving for transparency, consistency, and
uniformity regarding definitions of performance
measures, presentation formats, etc., and (5) one
information system that contains all data and
reports. External experts may be involved, who
often bring in a standardized way of designing and
implementing the system, with examples (or tem-
plates), complete with performance measures, pre-
sentation formats, and a set consulting approach
for designing the system, software tools, etc.
However, top-down, mandated performance-
measurement initiatives are less likely to be success-
ful (Cavalluzzo & Ittner, 2004; Scott & Tiessen,
1999; de Haas & Algera, 2002). These well-inten-
tioned, standardized methods carry the danger of
insufficiently reflecting the local organizational
contexts or the available experience and unique
expertise of employees. Furthermore, even before
such measurement systems are initiated, a number of informal performance measures, at various levels within the organization, are already in use
by managers, complementing the information they
get from other sources, such as observations, or
conversations with people individually or in group
meetings, as well as non-face-to-face communica-
tion through phone calls or emails (McKinnon &
Bruns, 1992). These informal measurement reports
are often developed locally, contain a mix of local
and centralized data, report operating information
over a very short period of time (weeks, days,
or less), provide status information (up-to-date
accumulations of bits of operating data, e.g., inven-
tory-level reports and backlog reports), and enable
performance comparisons between, for example,
budgeted vs. actual performance, one time period
vs. another, etc. (McKinnon & Bruns, 1992). Such
informal reports use a variety of presentation for-
mats, performance measure definitions, data, and
information systems. The existence of such reports
is often unknown outside the organizational unit
where they are produced and used, to the extent
that, from the perspective of top-management,
a coherent PMS does not appear to exist at all
within the organization! Although employees may
have considerable experience with performance
measures, and may have already established con-
text-speci?c practices, from the perspective of top-
management these do not constitute a PMS.
Typically, expert-led approaches initiated by
top-management are not likely to expend the effort necessary to build an in-depth understanding
of locally-developed existing reporting practices, in
particular about the detailed definition, data,
motivation for, and experiences with existing mea-
sures and information systems (Qu, 2006). The
consultants are also more likely to address prob-
lems from the perspective of top-management (or
whoever hires them), and they may seek to focus
on concepts that are fashionable in the business lit-
erature, and to attempt to transfer their earlier
experience to the project at hand (see, e.g., Sorge
& Witteloostuijn, 2004). Based on previous suc-
cesses or an awareness of the amount of effort
involved with design and implementation of a
PMS, the temptation is strong to simply start
PMS design from scratch (Greenfield), to copy
from previous outside assignments or other
departments in the organization, or to employ a
standardized consulting approach for the design
and implementation of performance-measurement
systems (Blueprint) (Townley et al., 2003). Such
standard consulting approaches tend to focus on
strategy clari?cation and the creation and design
of new performance measures, without detailed
regard for what is already in place. Existing infor-
mal reports typically come into view only after the
‘‘ideal’’ PMS has been designed and set, as part of
an assessment of the ‘‘gap’’ between that ideal
PMS and already existing performance measures
(Medori & Steeple, 2000).
Organizational change is more likely to be suc-
cessful when it is a process of relatively small change
efforts that involve the reconfiguration of existing practices and systems that are successfully in use elsewhere in the organization, rather than the creation
of new practices and systems (Abrahamson,
2000). Building organizational capabilities requires
adaptation of work processes, reflection upon experiences, and codification of knowledge gained
(Zollo & Winter, 2002). In other words, organiza-
tional learning is based on experience accumulation,
and empirical studies have demonstrated the
importance of knowledge accumulation for perfor-
mance (e.g., Reagans, Argote, & Brooks, 2005;
West & Iansiti, 2003). Similarly, we propose that
building on existing, local experience is an impor-
tant characteristic of enabling PMS development
as well. We expect a development process to
successfully stimulate enabling formalization when
it fully acknowledges, respects, and utilizes the
intellectual capital of lower-level employees’ exist-
ing practices of and insights in performance
measurement.
Experimentation
Experimentation in the context of PMS development involves the first development of a new performance measure and subsequently allowing time to test and refine (in several rounds) its conceptualization, definition, required data, IT tools, and presentation, together with employees (whose performance is going to be measured), to arrive at a measure that is a valid, reliable, and understandable indicator of performance in a specific
local context. We propose that a development
process that involves much experimentation with
new performance measures is more likely to lead
to enabling formalization. Fleshing out general
goals—the usual suspects of efficiency, productivity, customer satisfaction, etc.—and making them specific and measurable is a ‘‘messy’’ process
(Lowe &amp; Jones, 2004). It involves defining measures that reflect strategic goals, that are closely related to the specific operating conditions in a
particular setting, that are actually measurable
(i.e., the required data are available), and that
are presented in a way that employees find under-
standable. This requires a meticulous, in-depth
process of creating a fit between the PMS and the idiosyncratic local operational conditions.
The development process requires a close involve-
ment of and cooperation with employees. This is
not to say that employees would be the only ones
who use the data, but rather that they are the ones
who are best placed to judge whether their work efforts are validly or invalidly reflected in the performance
measures. The making of a performance measure
is not likely to be ‘‘right’’ after just one round; it
is more likely to be successful if the development
engages employees in a process of experimenta-
tion, e.g. tinkering with qualitative descriptions,
quantitative definitions of measures, the scope of
measures, data used, procedures for data gather-
ing, representation in tables and graphs, etc., as
well as actual testing to identify unanticipated
and often undesirable effects or behaviors that
occur in response to the PMS. Even though we
emphasize that involving employees through
experimentation and building on previous experi-
ences is relevant for improving the content of a
PMS, this may also contribute to an effective orga-
nizational change process (Bourne, Neely, Mills, &
Platts, 2003b).
Professionalism
Professionalism denotes an orientation toward
learning for the purpose of improving work prac-
tices. Such an orientation makes it possible to rely
on experience and to conduct experiments within a
PMS development process. A higher score on pro-
fessionalism makes it more likely that employees
express satisfaction with earnest improvement
efforts carried out within their immediate work environment. Professionalism may be especially stimulated if self-involvement in departmental improvement efforts is made possible. Caldwell,
Herold, and Fedor (2004) conclude that employees’
motivational orientation, and particularly their
‘‘achievement predisposition’’ (p. 879) predicts
satisfaction with perceptions of organizational
change. In other words, if an employee is more
inclined to improve her work practice, then perfor-
mance measures are more likely to be seen as posi-
tive, stimulating, challenging, and helpful. In sum,
we propose that an employee’s level of profession-
alism is associated with a positive attitude towards
performance measurement, especially if a carefully
evolving developmental approach is taken, aimed
at refining and extending a departmental PMS as
an instance of enabling formalization.
Research method
This study has been designed as action research.
We cooperated with the logistics department of a
company in the beverage manufacturing industry,
in the period August 2002 through June 2005. We
examined in detail the evolution of the depart-
ment’s PMS and the employees’ experiences with
performance measurement over a relatively long
period of time. In this section, we will further
introduce the research site, describe how we gath-
ered and analyzed the qualitative data, and outline
the survey conducted among a representative
panel of the employees of the case department.
Research site
The company has a strong brand name and sells
its beverages to both the hospitality industry (such
as bars, restaurants, and hotels) and to retail customers that sell to consumers. Customers are both
domestic and international. Important conditions
for success, according to the company’s annual
report, are brand strength, product innovation,
excellence in production, quality of marketing,
balancing stakeholders’ interests (shareholders,
employees, and environmental concerns), and
financial performance. While these factors center
on revenue enhancement, cost management is also
increasingly important. Competition among super-
market chains has intensi?ed, leading to lower
prices for consumers, and increased price pressure
on suppliers. The profitability of the company has suffered as a result, and profits, revenues
and sales in 2005 were all below their 2004 levels.
Furthermore, the company recently made very sig-
nificant investments in a new manufacturing site,
which called for considerable operational cost
savings, because it had increased fixed deprecia-
tion costs signi?cantly in all departments of the
company.
The approximately 150 employees in the logis-
tics department are spread among four sub-depart-
ments: purchasing, physical distribution, materials
management, and packaging development. The
director of logistics and the four heads of the
sub-departments form the management team of
the logistics department (‘‘logistics management
team’’, LMT). The team also includes the control-
ler for logistics and production, the logistics man-
ager of the hospitality market, and the logistics
manager of the international department of the
company. The director of logistics reports to the
CEO of the company. An organization chart is
shown in Fig. 1. The logistics department had been
recognized—internally and externally—for its per-
formance, including a prestigious national prize
for its customer service and supply chain
management.
An overview of some main events investigated
during this longitudinal case study is depicted in
Table 1. When this study began, the logistics
[Fig. 1. Organization chart: the board of directors oversees Logistics, Marketing and sales, Production, HRM, and Finance. Logistics comprises four sub-departments (Materials Management, Physical Distribution, Purchasing, and Packaging Development), with units including planning, central warehouses, the marketing warehouse, customer service, transportation and distribution, internal transportation and warehouses, and transportation planning.]
department had recently begun to expand their
performance-measurement system. They mainly
used an indicator called ‘‘delivery reliability’’, but
they felt that additional measures were required
to provide a more comprehensive picture of the
performance of the logistics department in rela-
tionship to its objectives. Previously, the mission
of the logistics department had been ‘‘to coordi-
nate the supply chain in an effective, efficient,
and innovative way for providing optimal service
to our customers’’. This had also been reformu-
lated more concretely as four objectives for
logistics: number one in customer satisfaction,
excellence in supply chain efficiency, continuous
Table 1
Time line of the case study

August 2002      Company: Logistics department formulates the need to have more extensive performance measurement.
                 Research: Making contacts with the company and initial discussions about research cooperation.
August 2002      Company: Director of logistics voices strong concerns about employee ownership.*

2003
January          Company: Start developing and implementing new measures with researchers.
August           Company: Positive evaluation of first results and developmental approach.
                 Research: Agreement on longitudinal case study.
September        Company: Continuation of design and implementation of performance measures.
December         Research: Start developing survey instrument.

2004
January–May      Research: Developing and reviewing survey instrument.
February–March   Company: Logistics department moves to new site.
April            Company: Appointment of new CEO and start of companywide Balanced Scorecard project.
May              Company: Appointment of project leader for Balanced Scorecard project.
June             Research: Pilot of survey.
July             Company: Tension from the central Balanced Scorecard initiative.*
                 Research: First survey.
August           Company: Start of champions meetings (from all departments).
October          Research: Discussion of results with LMT.(b)
November         Company: Experimenting with a new performance measure for internal transportation and warehouses (continued until April 2005).*
December         Company: First official scorecards for all departments defined.

2005
January          Research: Second survey.
March            Research: Discussion of results with LMT.
March            Company: LMT and middle managers discuss the PMS (new measures and implementation support).*
May              Company: LMT prioritizes the proposed new measures.*
June             Company: Evaluation of first six months of the official balanced scorecards.

(a) Events marked with a * are discussed in some detail in the text with a separate heading labeled "Illustration".
(b) LMT: Logistics Management Team.
supply chain innovations, and to be a professional
and learning organization. Explicating these goals
stimulated the implementation of performance
measures. There was also another reason. In
2002 the company had to impair inventories for
about half a million Euros, and therefore it was
concluded that inventory risk should be measured
regularly. This situation was the basis for the beginning of our cooperation with the logistics department, which provided an opportunity to study in detail the evolution of, and actual experiences with, performance measurement over an extended period.
In the period between January 2003 and June
2005 the logistics department gradually expanded
the PMS to incorporate additional performance
measures, to review or delete other measures,
and to implement procedures and information sys-
tems for producing periodic reports (see Table 2).
The development process was strongly influenced
by two events: early in 2004 the company moved
to a new site and at the same time implemented
new information systems that provided new tech-
nical opportunities for developing new perfor-
mance measures. And in April 2004 a new CEO
was appointed who initiated a companywide per-
formance-measurement initiative.
We worked especially with three members of
the LMT: (1) the director of logistics who reports
to the board of directors, (2) the management con-
troller assigned to logistics, and (3) the so-called
PMS-champion, i.e., the one sub-department head
on the LMT with whom we started this liaison on
the basis of her own deeply held professional inter-
est in applying PMS to the entire logistics depart-
ment within this firm. At the outset we sensed that
these leading figures were authentic in their desire
to establish a PMS in the form of enabling formal-
ization. They showed keen interest in developing
PMS themselves, in cooperation with us as exter-
nal, university-based experts on both PMS and
the human side of organizational change.
Qualitative data gathering and action research
We obtained data through the use of various
methods, in the context of action research. Over
a period of almost three years we frequently visited
the company or met company employees at the
university, and qualitative data were obtained
through interviews, participation in management
meetings, company documents, as well as field
notes made by research assistants (see Table 3).
While gathering these data, we did not act as neu-
tral observers. The project aimed to assist the com-
pany as well as contribute to science. The company
participated in this study because they welcomed
the unpaid assistance with their development of
performance measurement in exchange for offering
research access to us. Researchers, research assis-
tants, line employees, supervisors, middle manag-
ers, as well as the LMT members were all
involved in developing the new performance mea-
sures and providing feedback. Over the course of
Table 2
Number of performance measures over time

Period                   In use at start  Implemented  Deleted  In use at end  Under construction(a)  Under review(b)
January 2003–June 2003   19               6            7        18             3                      1
July 2003–Dec 2003(c)    18               –            –        33             –                      –
January 2004–July 2004   33               8            6        35             3                      5
August 2004–October 2004 35               9            3        41             1                      13
Nov 2004–April 2005      41               6            2        45             1                      4
April 2005–June 2005     45               2            0        47             6                      15

(a) New indicator being developed but not implemented at the end of the period.
(b) Existing indicator being revised during this period.
(c) Data on changes of performance measures during this period were not available.
this study, seven master's students in industrial engineering or business administration each worked full time for six to eight months as an intern at the company, in partial fulfillment of their MSc. They produced monthly reports of
actual outcomes of performance measures, they
carried out the survey, and they worked with
employees in developing, evaluating, or refining
various measures.
The qualitative data have been analyzed
through a process of reflection and of going back and forth between the data, the literature, and the company. The research file was organized on
the basis of a table that listed the interactions
with the company and that contained about 275
rows. On each row the following data were
recorded (when applicable): date, who partici-
pated, sub-departments involved, topic, duration,
reference to meeting notes, description of (and ref-
erence to) company documents received, reference
to researchers’ input for the meeting, code for
meeting in person, code for researcher or assistant,
code for diversity of meeting participants. This
table allowed easy retrieval of specific data when
certain questions or ideas emerged in discussing
or writing about the research. It was also the basis
for summarizing the time line in Table 1 and the
data gathering in Table 3. The data were used to write summaries that pulled together different events and different kinds of data, to start reflecting on what happened, and to focus on events that
seemed most interesting. Parallel to gathering the
data and writing the summaries, we reviewed more
literature, discussed the study with other research-
ers (informally, as well as through presentations in
workshops and conferences), and wrote (and
rewrote) the paper. This connection with theory
guided not only the analysis of the data, but also
the gathering of data, and it led to follow-up dis-
cussion or clarification with the company. Also,
Table 3
Qualitative data gathered

Meetings with(a)                                      Number of meetings  Time (h)
Employees of Logistics only                           8                   12
Employees outside Logistics                           13                  16
Employees from Logistics together with other areas    20                  29
Total                                                 41                  57

Number of different employees interacted with
Logistics                 7
Finance                   3
Production                1
Marketing and Sales       2
Other functional areas    7
Total                     20

Research assistants had meetings with 71 different people in 189 meetings, which took over 200 h.(b)

Sample company documents                                                              Number of documents
Documents about performance measures in-use in Logistics                              13
Documents about performance measures in-use outside Logistics                         4
Presentations and notes about developments in performance measurement in the company  8
Minutes of meetings about developments in performance measurement in the company      18
General documents about the Logistics department                                      11
General company documents                                                             8
Response to panel survey study                                                        4
Total                                                                                 66

(a) "Meetings" indicates face-to-face engagements of researchers with members of the case-study organization, either as interviews with one or a few employees, or as active participation in meetings with a larger number of employees. Meetings took place at the research site, with a few exceptions of meetings at the university. Not included are emails and phone calls.
(b) Research assistants also accounted for their meetings. Mentioned are those interactions where they took notes of which the researchers have copies; not included are short informal discussions (the research assistants worked on-site), emails, and phone calls.
a draft version of the paper was discussed with
managers of the case study organization. And vice
versa, interaction with the company guided the
search for new literature, or informed discussions
with academics.
A potential issue of action research is that the
researcher may selectively look for empirical evi-
dence and guide the research process with a bias
towards the expected findings (Atkinson & Shafir,
1998). However, there are several countervailing
effects that limit such a bias, and these were also prominent in this study (Labro & Tuomela, 2003). The length of the research process and access to all kinds of data provide many different "pieces of the puzzle", different kinds of empirical evidence that need to be understood as a whole. Furthermore, members of the case study
organization expect results that are of practical rel-
evance and this provides an incentive for them to
be involved and to spend time with the researchers.
Also because of the potential impact on their
work, organizational members are engaged, chal-
lenge ideas, and provide feedback on results.
Furthermore, this type of action research allows
an empirical test of ideas implemented in an actual
organization. Organizational members are likely to
be cautious about trying interventions they deem
unsuccessful or otherwise undesirable. Researchers
cannot easily persuade people to implement what
they consider to be a bad idea; and when an idea
that seemed good actually works out poorly, that
will become obvious in the empirical data. In
short, the objective of making actual changes
in real organizations counters researchers’ biases,
because of the active involvement of organizational
members and the empirical facts resulting from
implementation.
Panel survey
A survey was conducted twice, once in July 2004
and once in February 2005. On both occasions we
approached the same respondents, and we assessed
also the same variables, with the same or a slightly
improved version of the questionnaire. Hence this
part of the study is labeled the ‘‘panel survey’’.
The timing of the panel survey within Logistics
coincided with the start of the companywide initia-
tive to implement a balanced scorecard, and the
two surveys provided information on the initial
attitude towards performance measures and the sit-
uation about six months into the initiative.
We first asked the four sub-department heads
within logistics to come up with a list of potential
respondents who would be representative of their
sub-department in terms of their attitudes towards
PMS. The LMT reviewed the lists and made a few
changes of prospective respondents, which were
subsequently approved by the nominating sub-
department heads. Members of the panel had to
have been employed in their sub-department for
at least one year and could not be temporarily employed.
In addition, all sub-department heads were
included since they were crucial in the PMS pro-
cess. Moreover, the number of panel respondents
per sub-department had to be proportional to
the size of each of the four sub-departments.
In the first data wave, we received the com-
pleted questionnaires from all of the 42 selected
respondents. In the second data wave, we got the
data from 39 of the same 42 respondents plus
one new participant. This attrition was due to ill-
nesses, and one employee on the panel had left
the company.
For the process of data collection, research
assistants requested the participation of panel
members with a letter (signed by the director of
logistics) and verbally during several team meet-
ings in which they explained the purposes of the
survey and the role of the panel and allayed partic-
ipants’ concerns about the confidentiality of the
data. To ensure a high response rate, the research
assistants made appointments with all members of
the panel to have them fill out the questionnaire
during an on-site interview. The assistants also
wrote down other PMS-related comments respon-
dents made during the meetings. In the second survey, appointments were made only with those
respondents on the panel who were expected to
be uncomfortable with completing the question-
naires by themselves.
Confidentiality was a key consideration during
the panel study. Given that some sub-departments
were rather small and that panel membership was
known by the LMT, we promised the respondents
explicitly and repeatedly that results would never
be reported at the sub-departmental level, but only
at the aggregate level (i.e., for the whole logistics
department). Furthermore, the completed ques-
tionnaires were filed outside the company (at the
university) and no one at the company possessed
the list that linked respondent numbers to respon-
dent names. Procedures to guarantee confidentiality
of the data were emphasized in all communications
with research participants, and thus participants
appeared comfortable enough to provide frank
answers and comments.
The measurement of the panel survey data will be discussed in the remaining part of this section.1
The measurement scale for the dependent variable
Attitude toward performance measures was devel-
oped expressly for this study. Its items are
described in the Appendix. The variable reflects the
perceived usefulness of performance measures that
are reported concerning the respondent’s sub-
department within the logistics department. In
the second administration we added the variable
Ambition level in two years. Using the same items,
participants were asked what the situation with
respect to performance measurement should be
two years into the future.
The measurement of Professionalism was also
developed expressly for this study. Its items are
also described in the Appendix. We refer to this new
construct informally as being improvement-ori-
ented on the job. Formally, professionalism refers
to the degree to which individual employees
behave in a way that shows commitment to both
their profession and their current organization,
through efforts aiming explicitly to upgrade or
improve the quality of the work carried out. Sam-
ple items are: ‘‘I learn every day at work’’; ‘‘I
always contribute to new ideas at work". The response options for these Likert items range from 1 (very much disagree) to 7 (very much agree). The origin of the professional attitude
questionnaire lies in efforts carried out by Swailes (2003), who in turn relied explicitly on the measurement efforts of Hall (1968) and Snizek (1972). In our study we defined the questionnaire items entirely at the individual employee level. Deviating from these previous professionalism scaling efforts,
we made all items refer to solely one’s own current
job and not also to one’s profession, other profes-
sions or professional colleagues. Because of a lack
of validity of broad measurement scales, Swailes
(2003) called for ‘‘reconceptualising professional-
ism in terms of process rather than structure’’
(p. 103), to which we contributed in this study
through the formulation of the survey ques-
tions.
In the survey, a number of variables regarding
the task environment were also included. This made
it possible to investigate the association between
professionalism and the attitude towards perfor-
mance measures while controlling for these other
variables that could also affect the attitude towards
performance measures. Leadership style was mea-
sured using a subset of 10 MLQ 8Y items of trans-
formational leadership style (Bass & Avolio, 2000).
The MLQ is currently the most widely used validated questionnaire for assessing leadership. For Team trust
the scale was comprised of the seven items
employed in a German study by Baer and Frese
(2003), and based on the work of Edmondson
(1999). The scale for measuring Work pressure
was comprised of 14 items and is taken from Stan-
ton, Balzer, Smith, Parra, and Ironson (2001).
Work satisfaction was measured by asking respondents to write down three numbers totaling 100%: the percentage of time they felt, on average, "satisfied", "unsatisfied", and "neutral" about their current job.
Results: developing an enabling PMS
In this section, we will present the empirical
material to explore the developmental approach
that fostered the enabling nature of the PMS in
the case study company. First, we will present
results that suggest that employee attitude toward
performance measures in the logistics department
was quite positive, based on both the survey and the qualitative data. Then we will
explain this positive attitude through the propositions outlined above, in the section on a developmental approach for enabling PMS.
1 Please contact the first author for more details about the research instrument.
A positive attitude toward performance measures in
the logistics department
Employee attitudes toward the performance
measures used in their sub-departments were quite
positive; the means from two waves of question-
naire deployment are in Table 4. Note that the reli-
abilities of the survey questionnaires range from
satisfactory to good, as measured by Cronbach’s
Alphas. The average scores on the variable Atti-
tude toward performance measures were 5.2 and
5.4 (for the first and second data wave, on a
seven-point scale). In the second administration
we also asked participants what the situation with
respect to performance measurement should be
two years later, using the same seven items and
the same seven-point answering scale (Ambition
level in two years). The average score shown in
Table 4 is 6.2, which is higher than their assess-
ment of the current situation. This suggests that
on average employees were ambitious in terms of
performance measurement, and this may also be
taken as an indication of a positive attitude toward
performance measurement.
The qualitative data provide further evidence of
positive attitudes toward performance measure-
ment. Particularly significant was a meeting with
the logistics management team (LMT) and middle
managers in logistics (i.e., managers who reported
to the members of the LMT, planners, and shift
leaders) in March 2005 (to be described in detail
below). As we will describe, there were tensions:
the middle managers wanted more performance
measures to support them in their work. To the
extent that they could not develop and implement
these themselves, they needed resources outside
their teams (such as time from experts in the con-
troller’s office), and the prioritization of such
resources was debated. We take this as another
indication of positive attitudes toward perfor-
mance measures, as the PMS within logistics was
clearly being perceived as enabling formalization.
Illustration: LMT and middle managers discussed
the current status and priorities for further
development of the PMS
The PMS-champion and the controller pre-
sented the history and current state of performance
Table 4
Survey constructs (reliabilities, descriptives, correlations)

First data wave, N = 42                    Cronbach's Alpha  Mean   SD     1        2       3      4        5
1. Attitude toward performance measures    .816              5.169  .818   1
2. Professionalism                         .813              5.304  .606   .433**   1
3. Leadership style                        .862              4.848  .891   .137     .275    1
4. Team trust                              .729              5.762  1.153  .520**   .208    .302   1
5. Work pressure                           .803              3.628  .741   −.224    .110    .041   −.409**  1
6. Work satisfaction                       .868              6.000  1.653  .316*    .238    .175   .543**   −.421**

Second data wave, N = 40                   Cronbach's Alpha  Mean   SD     1        2       3      4       5       6
1. Attitude toward performance measures    .906              5.430  .797   1
2. Ambition level in two years             .881              6.186  .627   .681**   1
3. Professionalism                         .878              5.317  .587   .385*    .439**  1
4. Leadership style                        .914              4.463  1.050  .219     .311    .365*  1
5. Team trust                              .655              5.500  .973   .205     .194    .351*  .326*   1
6. Work pressure                           .870              3.586  .852   −.281    −.105   .019   .072    −.363*  1
7. Work satisfaction                       .909              5.783  1.924  .177     .124    .242   −.026   .490**  −.485**

For completeness we also conducted a principal components analysis. All measurement items loaded on their expected factors for the independent variables Leadership style and Team trust, but not always for Professionalism, Work pressure, and Work satisfaction. However, because of the very small number of observations, we maintained all items for further analyses.
* Correlation is significant at the 0.05 level (two-tailed).
** Correlation is significant at the 0.01 level (two-tailed).
measures in a meeting of the LMT together with
middle managers in logistics, in March 2005 (see
Table 1). They contrasted performance measures
that provide insights into whether the logistics
department achieves its medium- and long-term
objectives versus performance measures that an
employee in logistics may need to be successful
in his or her daily work. The PMS-champion
and controller described their ‘‘dream’’ situation:
‘‘middle managers can and want to develop perfor-
mance measures and produce the reports on
these themselves in order to better manage their
processes’’. The director of logistics explicated
this further as ‘‘inventing it yourself; getting the
data out of systems or recording these oneself. Is
this a dream or a nightmare? Is this something we
share? Or do you just always want to call the con-
troller, who should realize this for you?’’ This
remark stimulated a lot of discussion. It seemed
that the idea of developing their own measures
and reporting on these was supported. ‘‘There are
enough opportunities with the new systems, and
sometimes you discover only after a while what
you can get out of these’’ one of the participants
commented. Middle managers further remarked
that they felt they needed to be fully involved in
the development process of new measures, and it
should be made easy for them to actually generate
the reports.
The discussion centered on the way in which
resources for PMS development were allocated
between performance measures for the LMT and
those for use by middle managers in logistics. At
first, practical issues with performance measures
were mentioned, such as that currently, for many
performance measures, the reporting involved
manual activities that required too much time
and that could cause errors. The middle managers
said better information systems and tools were
needed. They also expressed a desire for specific
new performance measures to be implemented.
During this discussion, the middle managers brought forward that support from people in the controller's office and from the research assistants was required. It became clear that the
resources for developing and implementing all
the desired new performance measures would have
been severely overstretched, and thus not all
wishes for new performance measures could be
supported. The controller stated that he wanted
to allocate the resources to performance measures
that involved signi?cant ?nancial risk, which was
difficult to control without such measures. Hence,
employees in logistics had to develop and imple-
ment some measures by themselves. While there
seemed to be much support for the idea of devel-
oping their own measures, it was also discussed
that in some situations this was considered too difficult, and specialized involvement from the controller's office was needed. The middle managers
argued that their employee voices should be heard
and their requirements should be supported.
The PMS-champion mentioned a particular
measure and said that she had a real dilemma
about it: ‘‘I agree with [the controller] that we
should focus on the strategic measures for logis-
tics, but I also feel that this is a really important
measure within our department".2 One of the
managers said: ‘‘If I as a middle manager ask for
a particular performance measure, I think you
should say ‘yes’ right away, because then I really
need it, and otherwise you will not get any support
[from us middle managers for performance mea-
surement initiatives]’’. The controller responded
by saying ‘‘that is simply not always feasible, our
time is limited’’. To this the manager responded
provocatively: ‘‘so if I am held accountable for
something, I get no support, but if the logistics
management team is held accountable for some-
thing, then there is support?’’ Clearly, people held
different opinions about how dependent they were
on specialized support. Another middle manager
commented: ‘‘But these practical issues have never
stopped us from going forward with implementing
performance measures. . . . And if you look at our
performance measures in [the sub-department], we
designed and implemented these almost com-
pletely by ourselves’’.
Later in the meeting, four groups discussed ideas
for new performance measures and presented these.
2 The word "department" was used in the company also to refer to the four sub-departments within logistics. "Sub-departments" is a term we use in the paper for clarity, but we write "department" in quotations and in the questionnaire in the Appendix, as this term was actually used in the company.
As a follow-up, the sub-departments within logis-
tics were asked to think about these measures fur-
ther and come up with proposals. After this
meeting, the sub-departments proposed 16 new performance measures in total. This list was input to a
prioritization meeting of the logistics management
team in May 2005. The team prioritized the new
measures. Moreover, it was concluded that in the
future middle managers should be involved more
in further developing the PMS. One of the members
of the LMT reflected: "we have been very much top down, authoritarian with our KPI process".
Just to illustrate the dilemma for prioritization
further, it is helpful to look in more detail at one
of the performance measures that the PMS-cham-
pion referred to above. It shows that middle man-
agers within logistics had specific ideas about new
performance measures. Toward the end of 2004,
one of the warehouses was nearing capacity, run-
ning the risk of an overflow situation. A group
of people in logistics had developed solutions for
this problem and subsequently wanted to monitor
the effect, to see how the utilization of the ware-
house was developing over time. This was a simple
graph showing on a daily basis how many pallets
were stored. This total should be around 600 pal-
lets maximum, and serious storage problems
would result if it rose above 800. This inventory
monitoring report had been made available with the help of both the controller's office and the research assistants. It may appear very simple and easy to set up, but developing and implementing this report took a couple of days of preparing the SAP downloads and setting up the Excel sheet.
A similar type of performance measure was now
(in March 2005) needed in another warehouse,
where all carton products (labels, boxes, etc.) were
stored. Note that outside storage was not an
option for these items. Again, a graph would be
needed showing the warehouse utilization on a
daily basis, but the new graph was a bit more com-
plex. It needed to indicate warehouse utilization
disaggregated into different types of storage. Setting this up would take several days of effort by
the controller’s department, who claimed that time
was not available for designing and implementing
this performance measure. This is an example of a
new performance measure that middle managers
wanted and that stirred debate on the prioritiza-
tion and resource allocation for the development
of performance measures.
These examples of refinements to the PMS that
were initiated by employees to better support their
work practices suggest that the nature of the PMS
in logistics was predominantly enabling rather
than coercive. Can the enabling nature of the
PMS be understood based on the development
process that had shaped this PMS? In the follow-
ing sections, we will report on three characteristics
of the process: (a) professionalism, (b) experience-
based PMS development, and (c) experimentation
with new performance measures. We will also
explore how internal transparency (d) was impor-
tant for encouraging enabling formalization.
Professionalism of employees established the basis
for PMS development
This beverage manufacturing company was rec-
ognized for its professionalism by winning a pres-
tigious national prize in the retail beverages
category. In this annual contest, supermarket
chains assessed their 90 largest suppliers in terms
of three criteria: account management (the quality
of the sales team), trade marketing (the quality of
the sales support for the supermarkets), and sup-
ply chain management (the quality of the logistical
processes). For these criteria, the company had
won the highest score of all beverage suppliers
assessed (water, soft drinks, beer, wine and spirits)
for three years in succession (2002, 2003, and
2004). This suggests that the logistics department
operated at a high professional level.
We proposed that professionalism contributed
to a positive attitude towards PMS. Regression
results are presented in Table 5. The dependent
variables are Attitude toward performance mea-
sures (both data waves) and Ambition level in two
years (second data wave) with respect to perfor-
mance measurement. As shown in the table, the
coefficients for the variable Professionalism are statistically significant, and they are considerable (.521, .518, and .420) and larger than the coefficients for all other independent variables.
These results show that a high level of profes-
sionalism is a key characteristic of a development
process with high employee involvement. It
enabled employees, together with professionals
from the controller’s office, to experiment with
performance measures and to gradually expand
and refine the PMS on the basis of learning from
experiences. We will elaborate now on the other
two characteristics of the PMS development
process.
Experience-based PMS development process built
on existing measurement practices
Experience-based PMS development refers to
identifying and building on local experiences with
performance measures during further rounds of
refinement of the PMS. Note that "experience-based" points to the capturing of experience for
guiding development at the level of the perfor-
mance measurement system; in the following sec-
tion ‘‘experimentation’’ will be discussed at the
level of single measures.
Our qualitative results indicate that the logistics
department has been following an experience-
based development process. Table 2 shows the
development of performance measures in logistics
between January 2003 and June 2005. The total
number of performance measures increased from
19 to 47 measures. New measures were being
added constantly, while other measures were
removed. Still other measures were reviewed, updated, and re-implemented. This situation is not reflective of a PMS initiative that is first
designed, separately implemented, and then
reviewed, for example annually. Rather, the image
is that of a more ‘‘organic’’ PMS that is constantly
growing, being reviewed, and being pruned—a
continuous tinkering to make it better. It is consis-
tent with processes of incremental improvement,
based on experience gained (Abrahamson, 2000;
Zollo & Winter, 2002).
We will illustrate below that especially the LMT attached importance to an experience-based process for advancing their PMS. In sum: the company's top-management wanted a common format and approach to performance measurement in all departments of the organization. They labeled this "the balanced scorecard". Top-management initiated such a performance-measurement project during the period that we studied. Tensions between the central initiative and the local experiences in logistics could be observed. While logistics was already the most active department in terms of performance measures, the central initiative was at times perceived by the LMT members as something that could disrupt rather than foster their ongoing PMS activities.
Table 5
Regression results

                     Attitude toward           Attitude toward           Ambition level in
                     performance measures      performance measures      two years
Dependent variable   (first data wave)         (second data wave)        (second data wave)

Intercept            1.498 (1.206)             3.843 (1.322)*            4.061 (1.049)*
Professionalism      .521 (.191)*              .518 (.231)**             .420 (.184)**
Leadership style     −.086 (.129)              .097 (.129)               .118 (.103)
Team trust           .321 (.118)*              −.062 (.158)              −.038 (.125)
Work pressure        −.108 (.170)              −.324 (.166)^a            −.113 (.131)
Work satisfaction    −.023 (.081)              −.018 (.079)              −.004 (.063)
# observations       42                        40                        40
R²                   .397                      .250                      .238

Unstandardized coefficients and (standard errors) tabulated.
We also estimated eight alternative specifications of these regression models, always including Professionalism plus various combinations of a number of the other independent variables. The coefficient for Professionalism was positive and significant at least at the .05 level in eight cases (first model), in seven cases (second model), and in eight cases (third model) (results not tabulated).
^a Coefficient significant at .059.
* Coefficient is significant at the .05 level (2-tailed).
** Coefficient is significant at the .01 level (2-tailed).
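The specification behind Table 5 can be sketched as an ordinary least squares regression of an attitude score on the five survey predictors. The survey data themselves are not available, so the block below simulates illustrative 5-point-scale scores (all numbers invented) purely to show the form of the estimation, not to reproduce the paper's results.

```python
import numpy as np

# Hypothetical sketch of the Table 5 regression; the panel data are not
# public, so we simulate survey-scale scores for n = 42 respondents (the
# first-wave panel size reported in the paper). All values are invented.
rng = np.random.default_rng(7)
n = 42
professionalism = rng.uniform(1, 5, n)
leadership_style = rng.uniform(1, 5, n)
team_trust = rng.uniform(1, 5, n)
work_pressure = rng.uniform(1, 5, n)
work_satisfaction = rng.uniform(1, 5, n)

# Simulated dependent variable, loosely echoing the first-wave coefficients
attitude = (1.5 + 0.52 * professionalism - 0.09 * leadership_style
            + 0.32 * team_trust - 0.11 * work_pressure
            - 0.02 * work_satisfaction + rng.normal(0.0, 0.5, n))

# Design matrix with an intercept column, then OLS via least squares
X = np.column_stack([np.ones(n), professionalism, leadership_style,
                     team_trust, work_pressure, work_satisfaction])
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)

# Standard errors of the unstandardized coefficients and R-squared,
# the quantities tabulated in the paper
resid = attitude - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
r2 = 1.0 - (resid @ resid) / np.sum((attitude - attitude.mean()) ** 2)
```

With independent simulated predictors, the estimated slope on professionalism recovers the value used in the simulation to within sampling noise, which is the sense in which the tabulated coefficients are read.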
The central initiative placed performance measures high on the agenda, and this priority status could have provided momentum to help the LMT move their PMS initiative forward. However, by July 2004 the LMT members were concerned about what the central initiative would mean for the PMS they had so carefully developed with their employees over the preceding years. We understand this as another indication of the importance of an experience-based development process. There was tension, because the people in logistics worried that this top-down initiative would not reflect their experiences and would not allow time to experiment with and adjust performance measures.
Illustration: tension from the central balanced scorecard initiative vis-à-vis the PMS in logistics
The new CEO was appointed in April 2004, but he had already been a member of the board for several months to get to know the company. He conveyed his emphasis on performance measures right from the start: he attended a meeting with the LMT in October 2003, during which performance measurement was a main point on the agenda. He made it clear that he considered performance measurement to be very important and that he wanted more of it, throughout the company. He wanted a system to be implemented quickly: defining the measures, setting (and "freezing") the targets and tolerances. In that meeting, he also said he wanted to show performance as a traffic light that would show red when measures slipped below their target, in which case the manager responsible for a particular performance measure would need to prepare an action plan for presentation to the management team. He spoke of this in terms of management by exception, whereby the performance report should be used to select issues that needed to be discussed. It was clear that once he assumed the chairmanship, performance measurement and reporting were going to be matters of high priority and a focal point of top-management attention.
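The management-by-exception rule the CEO described can be sketched as a simple status function. This is our illustrative reading, not the company's actual system; in particular, the amber band for values within tolerance is an assumption, since the paper only mentions that targets and tolerances were to be set and frozen.

```python
# Hypothetical sketch of the CEO's traffic-light rule: red when a measure
# slips below its (frozen) target, triggering an action plan. The amber
# band for values within the tolerance is our assumption.
def traffic_light(actual: float, target: float, tolerance: float = 0.0) -> str:
    """Classify one performance measure for a management-by-exception report."""
    if actual >= target:
        return "green"
    if actual >= target - tolerance:
        return "amber"
    return "red"

def issues_to_discuss(report: dict[str, tuple[float, float, float]]) -> list[str]:
    """Select only the 'red' measures, i.e. those requiring an action plan."""
    return [name for name, (actual, target, tol) in report.items()
            if traffic_light(actual, target, tol) == "red"]
```

For example, `issues_to_discuss({"on-time delivery": (91.0, 97.0, 2.0), "stock accuracy": (99.1, 99.0, 0.5)})` (hypothetical measures and figures) selects only the first measure for discussion.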
In June 2004, the new CEO announced the new balanced scorecard project in the company. A balanced scorecard at the board level (so for the firm as a whole) was being formulated, and each of the different departments in the firm, of which logistics was one, had to devise a business balanced scorecard for their area. A companywide project leader was appointed: an experienced internal manager at the director level, reporting directly to the CEO. This initiative soon created anxiety within the logistics department. The manager in logistics who had become the main proponent of performance measurement (the PMS-champion, but that title was not yet used at that point in time) called a meeting with the researchers, also on behalf of the controller. She informed the researchers about the central initiative and explained that the project leader was talking with other companies and consulting firms. She expressed concern that new performance measures would be determined in a top-down fashion by the central initiative, led by consultants. The concern of the logistics group was that their long-standing and ongoing PMS work would now be disturbed by the top-down mandated initiative. They feared that their system would have to be changed to comply with the new, companywide balanced scorecard framework. They were not against more performance measurement; on the contrary, they had already implemented performance-measurement initiatives in logistics. But they feared that a consultants-led project would be started with top-down proposals for new performance measures, and they expected that this would leave less room for what they had developed so far, which had a close fit to local work practices.
During the second half of 2004, the project leader of the central balanced scorecard project visited several other companies to discuss their experiences with the implementation of a PMS. He concluded that although these experiences were very diverse, it was clear that effective PMS development takes several years and is more successful if developed bottom-up and from within the organization, and that an organization should simply get started and develop things further as it goes. He had also gained the impression that when performance measures were part of the incentive system, there was a significant risk of manipulation. He wanted to use these insights in the scorecard project for which he was now responsible. However, there were tensions, and he said in a consultation with us that in the eyes of the new CEO "things are going far too slowly". The project leader established a group of "champions" in August 2004: from each department within the company, the one person who was the most enthusiastic proponent of performance measurement and who was leading departmental initiatives to develop it further. With one exception, these were not controllers, but functional managers.
The deadline for the first version of all departmental balanced scorecards was November 1, 2004, and it was postponed until December 1, 2004. However, not all departments met the second deadline, whereupon the CEO put a traffic light in the central entrance hall of the company's premises, with the signal showing red. It was there for a couple of days. There was no sign or other explanation of why it was there, but managers soon found out it was there to signal that the scorecards really should have been completed. The intervention provoked quite some discussion, some of which we witnessed in a meeting with the LMT. Members of the team acknowledged that it was unfortunate that the deadline was not met, but they did not like the traffic-light intervention. They complained that the efforts that other departments were undertaking were not facilitated, in the sense that practical support for implementation was lacking. The logistics PMS-champions objected: "We have said to the project leader on several occasions that this can only be done if it is facilitated in a practical way, but nothing has happened". The logistics director commented that such comments did not reach the company's directors' meeting when the balanced scorecard initiative was discussed.
The balanced scorecards were implemented in the middle of December 2004; they were evaluated internally and changes were proposed on the basis of the first six months of experience. The balanced scorecard initiative was perceived positively, according to the company project leader. In a recent strategic planning meeting with the top-35 managers of the company, the balanced scorecard was often mentioned as a positive development. However, the companywide project leader also felt the process at the time was fragile: "If I would stop [leading this project] now, then it would collapse. So, apparently it's not yet deeply ingrained". Balancing the top-down pressure from the chairman "who simply wanted to have this" and letting the balanced scorecard be developed bottom-up was very important (and sometimes very difficult), according to the project leader.
Yet, an event in the middle of 2005 again pointed to the importance of allowing an experience-based process. A consulting firm that the company had engaged on another project had devised another, different balanced scorecard proposal for the entire firm. It did not build on what had been developed thus far (i.e., a Greenfield approach) and it was based on what the consulting firm considered to be best practice in other companies (i.e., a Blueprint). The balanced scorecard project leader considered it a very serious mistake if the firm were to adopt the blueprint proposed by the consulting firm and to present it at the next top-35 meeting: "What are people such as [the logistics PMS-champion] supposed to think if this hangs on the wall in our next top-35 meeting?" He emphasized that the proposal did not do justice at all to what the company had developed by now, "which is so specifically modeled to our situation".
Experimentation with new measures
Experimentation refers to the process of tinkering with a single performance measure while designing and implementing it. This means that design and implementation are interconnected, because the design (from conceptualization to fine-tuning the presentation) is partly done with real data, after measurement and reporting on the new measure have already begun. New performance measures are hardly ever "right" straight away, and by allowing adjustments, the reliability and validity of the measure can be improved, taking into close consideration the context in which the new performance measure is actually used. In other words: both conceptual and detailed implementation issues of performance measures are crucial for their effectiveness. Employees typically possess key, yet tacit, knowledge that is required to further refine performance measures. In performance-measurement development "the devil is in the detail", as illustrated below with respect to a new efficiency performance measure.
Illustration: experimenting with a new performance
measure for internal transportation and warehouses
We will illustrate experimentation with a new performance measure for the sub-department of "internal transportation and warehouses". The activities took place from November 2004 until April 2005. The main activity of this sub-department was to store finished goods in the finished-goods warehouse (goods were brought from production to the warehouse by an automatic transportation system) and to load and unload delivery trucks, using forklift trucks. The workload for this activity was unevenly spread throughout the day. For managing efficiency, planning the number of operators per shift was important, as was making sure that the operators carried out the work quickly and that certain preparatory activities were done during idle time. A new performance measure was thus needed for efficiency purposes. (Observations of forklift-truck drivers gave a rough indication of efficiency, but the managers and first-line supervisors wanted more factual data to complement these.) Based on existing ideas within the sub-department and the controllers' office, and on discussions with employees in the transportation sub-department, the performance measure was defined as the number of "transports" carried out per labor hour. For example, one transport could be to pick up a pallet from the automatic conveyer belt and bring it to a particular location in the warehouse.
Measuring the number of transports was feasible, because each transport was issued by the warehouse management system to terminals on the forklift trucks. Measuring the number of labor hours spent was also possible using the warehouse management system, and so the ratio of the two was readily available. However, not all activities that needed to be carried out were issued by the warehouse management system. For example: particular types of pallets needed to be rotated 90° before they could be transported; the forklift-truck drivers sometimes needed to move lorries for loading and unloading; containers for international destinations needed to be closed and sealed. An initial list of more than 30 such side activities was prepared, and a copy was given to each forklift-truck driver to estimate the time spent on these activities and to add new activities to the list. The final list of side activities was compiled, and the workload for these activities was also expressed as a number of transports (the numerator of the performance measure). A manager in the sub-department "internal transportation and warehouses" calculated the performance measure by downloading data from the warehouse management system and preparing the reports using Excel.
The target for the new performance measure was an engineered target, as it was based on the detailed design of the processes in the warehouse. However, not all assumptions underlying this design were initially met, as some processes were carried out differently, and this led to an adjustment of the target. The presentation format was designed such that it included, in weekly numbers, the absolute number of transports and the number of transports per labor hour (both per week and cumulative).
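In essence, the measure described above is a ratio: system-logged transports plus side-activity transport equivalents in the numerator, labor hours in the denominator. The function and the example figures below are a hypothetical sketch of that calculation, not the company's actual Excel report.

```python
# Illustrative sketch of the "transports per labor hour" measure. The
# warehouse management system logged transports and labor hours; side
# activities (rotating pallets, moving lorries, sealing containers) were
# expressed as transport equivalents and added to the numerator.
# The function name and the example figures are hypothetical.
def transports_per_labor_hour(system_transports: int,
                              side_activity_equivalents: float,
                              labor_hours: float) -> float:
    if labor_hours <= 0:
        raise ValueError("labor_hours must be positive")
    return (system_transports + side_activity_equivalents) / labor_hours

# One weekly report line: absolute transports plus the ratio for that week
week_ratio = transports_per_labor_hour(1200, 150, 90.0)
```

The same function applied to running totals gives the cumulative version of the ratio that the weekly report also showed.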
After the initial measure and report had been implemented, weekly evaluations were conducted with the sub-department's manager and the team leaders. They provided relevant and detailed feedback, in particular regarding the way in which side activities were included in the calculation of the performance measure. For example, for some side activities it was decided not to estimate the amount of work involved every week, but rather to include a more general estimation. It was also discussed whether the performance as measured made sense and appeared valid from the team leaders' perspective, which was found to be the case. The weekly frequency of reporting the measure was found useful. The presentation format was also discussed, because it was quite a complex chart. Adjustments to the chart were made; moreover, the term "overcapacity" that was initially used was replaced by "theoretical utilization", and the chart's original label "efficiency [sub-department]" was changed to "transportations per labor hour in [warehouse name]".
Similar experimentation activities were conducted for other performance measures. The effect of experimentation was not only that behavioral effects such as commitment were improved. More to our point, experimentation played a vital role in arriving at a performance measure that was more reliable, valid, and understandable in the context. The people who were responsible for the PMS (from the controllers' office and research assistants) obtained an in-depth understanding of the operational processes the PMS was supposed to capture. These specialists needed to obtain an intimate familiarity with the operational processes, and the operational managers needed to understand the details involved in actually translating these processes into quantitative performance numbers.
In sum, in the above and previous sections we reported findings showing that a PMS is more likely to be seen as a constructive, enabling type of formalization, rather than a negative, coercive form of control, if it is developed incrementally such that the members of the organization can gain actual experience with using performance measures, reflect on this, and draw conclusions to develop the system further. We observed ongoing activities such as reviewing and revising existing performance measures, brainstorming about possible new measures, experimentation with new measures, adding some new measures to the PMS, and dropping some existing measures. There was not a specific point in time when the performance-measurement system was "ready". It was also found that the throughput time for actually implementing a new measure was considerable and could easily take half a year to one year. Furthermore, it was crucial to look in detail at existing measures at the start of each measure development. New measures could only be developed after understanding and using as much as possible of what was already in place, such as the precise definitions of existing measures; the various rationales behind these; the data used; the limitations that people experienced with the existing measures; the ideas that people were working on to improve the existing measures; and information-system changes that could impact existing reports. Hence, neither a Greenfield nor a Blueprint approach was taken. The developmental approach stimulated the inter-functional exchange of knowledge, which was required for a reliable, valid, and understandable PMS. It also created transparency of the system from the employees' perspective (and transparency of operational processes for the PMS specialists). Therefore, we will explore PMS transparency in the next section.
Internal transparency was emphasized throughout
the development process
Transparency, in combination with flexibility, in the context of performance measures, means that employees (whose performance is going to be measured) are highly involved in operating and managing the PMS as organizational technology. Transparency and flexibility imply that the performance measures are understandable to employees, something they have hands-on experience with, and something they can influence to make workable for them. In the studied company, performance measures were not owned, nor understood solely, by the technical specialists in the finance and accounting function. Instead, employees had been an integral part of the development of the measures from the outset. In some cases they were even managing the system after it had been implemented. The operational managers themselves were trained with the information-system tools to record data, to pull together data, to create performance reports, to review and revise definitions of performance measures, to change graphical representations of performance reports, etc. The director of logistics was a strong proponent of non-accounting ownership of the PMS, and we will elaborate on this below.
Illustration: director of logistics voices strong
concerns about employee ownership of the PMS
throughout the study
The director of logistics was very outspoken on matters of what could be considered internal transparency. Already in the first meeting during this study, in August 2002 (see Table 1), he emphasized that he wanted the employees rather than the controllers to be responsible for reporting performance. "If people are not going to take the effort to do the measurements and make the reports, it probably means it's not essential to do them". Whether new measures would actually be implemented was a kind of relevance test in his mind. In his view, when confronted with performance measurement, employees would ask themselves "Do I feel responsible for this?" and "Does it help me?", and the answers to these questions would determine their attitudes and level of cooperation.
The director of logistics expressed these concerns at the beginning of the study, when he stated that the emphasis should be on creating new performance measures for use within the four logistics sub-departments, and that less emphasis should be given to measures for logistics as a whole (for use in the LMT) or for reporting the performance of logistics to the board. And towards the end of the project (in May 2005), when discussing the priority for further development (after the meeting with middle managers in logistics, March 2005, described above), he made forceful comments reflecting concerns for empowering middle managers. As described above, in March 2005 the four sub-departments within logistics had proposed a list of 16 performance measures they wanted to implement, and there was a meeting of the LMT about prioritizing those measures and discussing more generally how logistics wanted to move forward with their performance-measurement system. In that meeting, the question was raised whether, if needed, the LMT was willing to allocate some of the resources to support performance measures for lower-level managers within logistics, and whether they wanted to invest in enabling these managers to implement and generate performance measures themselves. Such investment would include, for example, buying and implementing additional IT tools, providing training, and allocating time of people in the controller's office. Some members of the LMT supported this, but also had some reservations: e.g., "Only if I also think it is relevant for our department". The logistics director intervened: "You cannot give a conditional 'yes', no 'yes, but'. You cannot say 'no' to this". He stated that, as a principle, he wanted to support the information requests from lower-level managers, and subsequently there could be a need to prioritize. "And those that are selected, we will certainly enable". "If we then say 'these [particular performance measures] are the most important ones and these we will facilitate', then managers should define what it entails and come up with a project plan for each [performance measure]".
In sum, during various discussions (the meeting in August 2002 with the middle managers, and later in the LMT) it became clear that different PMS requirements existed for the LMT members and for middle managers in the logistics department. There were no inherent conflicts between these different requirements, because the PMS was conceived of and implemented as something that supported different managerial levels; the enabling intent and nature of the PMS was unmistakable. However, in practical terms, there was a conflict in the sense that resources for PMS development were limited, and choices had to be made regarding whose PMS requirements were going to be implemented first. This observation illustrates a key advantage of transparency due to heavy involvement of employees throughout the development process: as employees were more involved and better enabled (for example, provided with IT systems for PMS development), internal transparency increased and dependency on specialized resources for further PMS development was reduced.
Discussion
This study of performance-measurement systems (PMS) in operations focused on identifying a development process that is likely to lead to a PMS that employees regard as useful for them; something they want to help develop, and not exclusively as a control device for senior management. Which characteristics of the development process contribute to an enabling PMS? How can a PMS be developed as enabling formalization and not as coercive formalization? Our research was conducted as a longitudinal case study of the logistics department in a medium-sized beverage manufacturing company, from August 2002 through June 2005. Qualitative data were gathered, as well as two waves of survey data.
We found that Professionalism was significantly related to positive attitudes toward performance measures, based on the survey data. The qualitative findings point to professionalism as a force that can be mobilized through a development process that is experience-based and allows for experimentation. Experience-based characterizes a development process that builds on existing skills, practices, and know-how of involved employees, in order to enrich the PMS step-by-step over time. Key qualitative findings supporting the importance of this experience-based characteristic were revealed when a centrally initiated balanced scorecard initiative threatened to overrule the experience-based approach that had been followed thus far within logistics. Experimentation with PMS improvements concerned deliberate employee efforts to test, review, and refine conceptualizations, definitions, data, and presentations of new performance measures. Key findings supporting the importance of experimentation pertained to the way in which specific new measures (such as for warehousing and internal transportation) were developed. Furthermore, we found that transparency contributed to an enabling PMS. This became apparent from the internal discussions on ownership of the performance measures. Local transparency in the context of performance measures was stimulated by deeply involving operational managers in the conceptual and practical development of such measures, but also by making them, rather than people in the controller's office, responsible for periodically "calculating" and reporting these measures.³
The theoretical framework developed by Adler and Borys (1996) was key to our study of effective organizational change towards an enabling PMS. We developed their framework further, based on other literatures and the empirical data, in the specific context of PMS for operations. We proposed three characteristics of a development process that is likely to result in an enabling PMS, and we showed a departmental episode where enabling formalization took place in the form of a much-expanded PMS. Its development process was characterized by experiential inputs, experimentation, and a high degree of professionalism on the part of individual employees. Furthermore, we found that the "norm" of transparency repeatedly and consistently voiced by the departmental director was a force that contributed to the enabling PMS we witnessed in this setting of a logistics department.
Conclusions
This study provides several main contributions. First, it increases our understanding of, and appreciation for, a developmental approach leading to an enabling PMS. We demonstrate that building on existing performance-measurement experience of employees, as well as their professionalism, and allowing experimentation with measures contribute to the enabling nature of the PMS. Design and implementation appear interrelated, because design is partly conducted while obtaining an empirical understanding of how performance measures are being used within their actual operational context. An experience-based process and experimentation are not used to manage resistance to organizational change, such as to create commitment, or to make people feel that they are taken seriously (Piderit, 2000). Rather, an experience-based process and experimentation serve to involve employees in such a way that their knowledge is mobilized to design a more valid, reliable, and understandable PMS in their specific local context. We pointed to the importance of professionalism of employees as a condition for a development process characterized by being experience-based and containing experimentation.
Second, this study provides a possible explanation for why a developmental PMS approach as described in our case study may establish enabling formalization. Given that a developmental approach to PMS evolution engages all personnel whose performance is being measured, it may compensate for the inherent incompleteness of performance measures. This study connects accounting considerations about the completeness of performance measures in operations (Chapman, 1997;
Lillis, 2002) with the ideas proposed by Adler and Borys (1996) regarding design principles for enabling formalization. A PMS reflects performance on a variety of dimensions, such as efficiency, productivity, quality, and responsiveness. However, it remains difficult to develop a technically complete PMS that fully reflects the dimensions of operational performance, that contains valid measures on all these dimensions, and that includes targets that reliably capture the tradeoffs between opposing performance measures (Chapman, 1997; Lillis, 2002). An experience-based development process that includes experimentation and builds on the professionalism of employees (whose performance is being measured) enhances both the validity and acceptance of the PMS.

³ After this study, around January 2006, it was formally decided that the reporting of most performance measures would be the responsibility of particular managers in logistics. The controller's office would conduct audits on these measures and, together with the IT department, had to provide skills and tools to managers in logistics. The controller's office was responsible for reporting on the measures that were included on the scorecards for the LMT and the company's top-management.
The study shows, furthermore, that local PMS initiatives have a high chance of being successfully implemented, despite top-management's efforts to coerce the local unit into a much faster and consequently less-developmental mode. The action research described here stimulated learning, both on the part of the organization's department and by the university-based participants. After the study, there was a continuation of similar PMS activities, carried out by members of the logistics department with less active engagement of the university research partners. When reflecting on the project (in July 2006), members of the LMT emphasized the importance of management support for PMS development, because the development process requires significant time of employees at various levels, spent on activities that may not be seen as particularly "strategic" or "glamorous". Employees' individual expertise, insights, and skills, as well as their enthusiasm for performance measurement, were utilized in the process; and they received the credit for results achieved, in terms of an enriched PMS. According to these LMT members, the development process had also benefited from the stimulating and challenging interaction with outsiders: researchers and students in this case.
Although the study is based on a multitude of
observations, an obvious but important limitation
is that it is based on a single case study. Results
may be di?cult to generalize to other empirical
settings, also because the researchers and research
assistants have not only been neutral observers;
they were also involved in helping to expand and
re?ne the departmental PMS. However, against
these weaknesses stands the advantage that
detailed observations could be made. Discussions
with members of the organization were always
lively, detailed, and involved. Our ideas were crit-
ically challenged, because ideas pertained to
‘‘their’’ PMS and the development and actual
usage of it. These interactions were not discussions
about abstract ideas in the interest of the research-
ers’ project or theory. Rather they dealt with what
made sense to organizational members in the con-
text and language of their own work. We feel that
our study, while acknowledging the limitations in
terms of possible biases (for example, selective per-
ception and interpretation), validly captures the
departmental members’ attitude toward perfor-
mance measurement and the development process
that had contributed to that attitude. Further-
more, the insights based on qualitative data have
been complemented with representative quantita-
tive data gathered within the department at two
points in time.
An important field for future research remains
the dualistic role of performance-measurement
systems—to provide some of the knowledge necessary
for planning and decision making, but also to
motivate and monitor people in organizations
(Zimmerman, 1997, p. 5)—and the effects of the
incompleteness of such systems. We concur with
Ahrens and Chapman (2004, p. 298) that ‘‘the concept
of enabling control presents a clearly defined
framework within which future research . . . might
further develop our understanding of the ways in
which management control systems can simultaneously
support the objectives of efficiency and flexibility’’.
Future research could expand our focus to the
conditions under which a developmental, enabling
PMS approach is most feasible. Feasibility is an
issue, because this approach is demanding on
employees, senior management, and support func-
tions. A developmental approach assumes informal
local experience with quantitative performance
measurement, and that employees are quite willing
and capable of building on that in order to further
develop the PMS. Beyond that, we expect that other
requirements also play a role. For example, time is
needed to really understand in detail what is already
in place, and to evaluate what will be reused and
what not. Time and local autonomy are needed to
512 M. Wouters, C. Wilderom / Accounting, Organizations and Society 33 (2008) 488–516
not ‘‘fix’’ the PMS too soon, so that improvements
and adjustments to local conditions can be made.
Furthermore, senior management needs to have a
clear understanding of their objective for develop-
ing a PMS: is it to monitor and report upward in
the hierarchy, or is it also (or even primarily)
intended to support lower-level employees in their
work? Senior management also needs to behave in
accordance with an enabling PMS: balancing
between recognizing the incompleteness of the
PMS (so there is a story next to measured outcomes)
and demanding certain performance. And a
developmental approach needs to be facilitated in terms
of resources and rewards, such as time to work on
it, bestowing prestige upon PMS developers, sup-
port from experts, development of IT tools with
which non-specialists can work, etc. Such
facilitation requires high-level support from IT, and
cross-functional cooperation of finance and
accounting professionals. In sum, the developmen-
tal approach reported in this paper is demanding
and may not be feasible in every organization.
The developmental approach may also not
be equally relevant to every organization. While
incompleteness of PMS helps to explain why a
developmental approach affects the enabling nature
of PMS, in certain organizations the PMS may be well
developed and stable. A developmental approach to
shape the PMS may also seem less relevant if
operations managers have other kinds of information
that are more informative than formal performance
measures, such as direct observations of processes.
So, future studies may focus on the question: what
are the antecedents of an effective developmental PMS
approach? Furthermore, investigating the benefits
to the organization (such as employee learning
or financial benefits), as well as other
consequences of a developmental approach, is an
intriguing line of future research.
To conclude, this study analyzed and illustrated
a developmental approach to PMS development,
which harvests existing informal measurement
practices, lets new measurement experiments
blossom, and at times prunes the extant measurement
system. This developmental approach works
through employees' local measurement experiences,
experimentation with refined and new measures,
and mobilization of employees' professionalism.
Future research could help to better understand
antecedents and consequences of this developmen-
tal approach toward performance-measurement
systems.
Acknowledgements
The authors thank Thomas Ahrens, Chris
Chapman, Kim Langfield-Smith, Jeff Hicks, the
journal's Editor and Reviewers, company employees,
and workshop participants at the Academy of
Management 2005, the Warwick Business School,
the Global Management Accounting Research
Symposium 2006, and the New Directions in Man-
agement Accounting Conference 2006 for their
comments and suggestions.
Appendix
This appendix contains the questionnaire items
for the newly developed constructs Attitude toward
performance measures and Professionalism.
Attitude toward performance measures
In your department a number of performance
measures (or KPIs: ‘‘Key Performance Indicators’’)
are used, as shown in the appendix to this
questionnaire. We ask your opinion about the
KPIs within your department. Please give a score
from 1 to 7.
1. How familiar are you with the KPIs of your
department?
2. How understandable do you find the KPIs of
your department?
3. How reliable do you consider the KPIs of
your department?
4. How validly do the KPIs reflect the perfor-
mance of your department?
5. How extensively are the measurements of the
KPIs used in your department?
6. How involved are you within your depart-
ment in the development of better KPIs?
7. How do you experience the process of devel-
oping better KPIs within your department?
8. How useful do you consider the present
departmental KPIs for the Logistics
Department?
9. How useful do you consider the present
departmental KPIs for your department?
10. How useful do you consider the present
departmental KPIs for you ‘‘personally’’?
These questions were answered on seven-point
Likert scales anchored to the key concept in each
question. For example, ‘‘How reliable do you con-
sider the KPIs of your department?’’ was anchored
on ‘‘very unreliable’’, ‘‘unreliable’’, ‘‘somewhat
unreliable’’, ‘‘neutral’’, ‘‘somewhat reliable’’, ‘‘reli-
able’’, ‘‘very reliable’’.
Professionalism
We ask your opinion about the following
statements. You can indicate the extent to which
you agree with each statement by a number from 1
through 7:
1. I always contribute to new ideas at work.
2. At work, I like to be active in improving things.
3. I like to do things well in my work.
4. The way I conduct my activities is very con-
sistent with what is being recommended by
professionals.
5. I obey the rules at work.
6. I adhere to standards of integrity that pertain
to my work.
7. I sometimes act in ways I should not, because
it will not be noticed anyway (reversely
coded).
8. The manner of my daily work I consider
‘‘professional’’.
9. The way in which my work is organized is
professional.
10. I am busy with my profession or work also
outside working hours.
11. I can demonstrate to other people that my
work is important.
12. I learn every day at work.
13. I have colleagues at work from whom I
learn.
14. I enjoy reading about my profession or work.
15. I take part in activities outside working
hours that improve my professionalism.
16. I am always keen to follow suitable external
courses.
17. I am always keen to follow suitable internal
courses.
18. I learn from problems I encounter at work.
19. I am an active member of an organization
(or network) that helps advance my
profession.
20. I get sufficient autonomy to direct my work.
21. I take my personal professional development
seriously.
22. I keep myself informed about new develop-
ments in my profession or work.
23. I am actively improving at work.4
24. I would like to pursue more external training.4
25. I would like to pursue more internal training.4
References
Abernethy, M. A., & Brownell, P. (1999). The role of budgets in
organizations facing strategic change: An exploratory study.
Accounting, Organizations and Society, 24(3), 189–204.
Abernethy, M. A., & Lillis, A. M. (1995). The impact of
manufacturing flexibility on management control system
design. Accounting, Organizations and Society, 20(4),
241–258.
Abrahamson, E. (2000). Change without pain. Harvard
Business Review, 78(4), 75–79.
Response scale for the Professionalism items: 1 = Very much
disagree, 2 = Disagree, 3 = Moderately disagree, 4 = Neutral,
5 = Moderately agree, 6 = Agree, 7 = Very much agree.
4 These items were added only after the first administration of
the questionnaire.
Adler, P. S., & Borys, B. (1996). Two types of bureaucracy:
Enabling and coercive. Administrative Science Quarterly,
41(March), 61–89.
Ahrens, T. A., & Chapman, C. S. (2004). Accounting for
flexibility and efficiency: A field study of management
control systems in a restaurant chain. Contemporary
Accounting Research, 21(2), 271–301.
Ahrens, T. A., & Chapman, C. S. (2006). Doing qualitative field
research in management accounting: Positioning data to
contribute to theory. Accounting, Organizations and Society,
31, 819–841.
Atkinson, A. A., & Shafir, W. (1998). Standards for field
research in management accounting. Journal of Manage-
ment Accounting Research, 10, 41–68.
Baer, M., & Frese, M. (2003). Innovation is not enough:
Climates for initiative and psychological safety, process
innovations, and firm performance. Journal of Organiza-
tional Behavior, 24, 45–68.
Baines, A., & Langfield-Smith, K. (2003). Antecedents to
management accounting change: A structural equation
approach. Accounting, Organizations and Society, 28,
675–698.
Banker, R. D., Potter, G., & Schroeder, R. G. (1993).
Reporting manufacturing performance measures to work-
ers: An empirical study. Journal of Management Accounting
Research, 5, 33–55.
Bass, B. M., & Avolio, B. J. (2000). MLQ multifactor leadership
questionnaire (2nd ed.). Redwood City: Mind Garden
(Technical Report).
Beamon, B. M. (1999). Measuring supply chain performance.
International Journal of Operations & Production Manage-
ment, 19(3), 275–292.
Bisbe, J., & Otley, D. (2004). The effects of the interactive use of
management control systems on product innovation.
Accounting, Organizations and Society, 29, 709–737.
Bourne, M., Neely, A., Mills, J., & Platts, K. (2003a).
Implementing performance measurement systems: A litera-
ture review. International Journal of Business Performance
Management, 5(1), 1–24.
Bourne, M., Neely, A., Mills, J., & Platts, K. (2003b). Why
some performance measurement initiatives fail: Lessons
from the change management literature. International Jour-
nal of Business Performance Management, 5(2–3), 245–269.
Caldwell, S. D., Herold, D. M., & Fedor, D. B. (2004). Toward
an understanding of the relationship among organizational
change, individual differences, and changes in person-
environment fit: A cross-level study. Journal of Applied
Psychology, 89(5), 868–882.
Carlile, P. R. (2002). A pragmatic view of knowledge and
boundaries: Boundary objects in new product development.
Organization Science, 13(4), 442–455.
Carmona, S., & Grönlund, A. (2003). Measures vs actions: The
balanced scorecard in Swedish law enforcement. Interna-
tional Journal of Operations & Production Management,
23(11), 1475–1496.
Cavalluzzo, K. S., & Ittner, C. D. (2004). Implementing
performance measurement innovations: Evidence from
government. Accounting, Organizations and Society, 29,
243–267.
Chapman, C. S. (1997). Reflections on a contingent view of
accounting. Accounting, Organizations and Society, 22,
189–205.
Chow, C. W., Kato, Y., & Merchant, K. A. (1996). The use
of organizational controls and their effects on data manip-
ulation and management myopia: A Japan vs U.S. com-
parison. Accounting, Organizations and Society, 21(2–3),
175–192.
Davila, A., & Wouters, M. (2005). Managing budget emphasis
through the explicit design of conditional budgetary slack.
Accounting, Organizations and Society, 30, 587–608.
de Haas, M., & Algera, J. A. (2002). Demonstrating the effect of
the strategic dialogue: Participation in designing the man-
agement control system. Management Accounting Research,
13, 41–69.
Eccles, R. G. (1991). The performance measurement manifesto.
Harvard Business Review, 69(1), 131–137.
Edmondson, A. (1999). Psychological safety and learning
behavior in work teams. Administrative Science Quarterly,
44, 350–383.
Fullerton, R. R., & McWatters, C. S. (2002). The role of
performance measures and incentive systems in relation to
the degree of JIT implementation. Accounting, Organiza-
tions and Society, 27, 711–735.
Hall, R. H. (1968). Professionalization and bureaucratization.
American Sociological Review, 33(1), 92–104.
Hall, R. W., Johnson, H. T., & Turney, P. B. B. (1990).
Measuring up: Charting pathways to manufacturing excel-
lence. Homewood, IL: Business One Irwin.
Jaworski, B. J., & Young, S. M. (1992). Dysfunctional behavior
and management control: An empirical study of marketing
managers. Accounting, Organizations and Society, 17(1),
17–35.
Jönsson, S., & Grönlund, A. (1988). Life with a sub-contractor:
New technology and management accounting. Accounting,
Organizations and Society, 13(5), 512–532.
Kaplan, R. S. (1983). Measuring manufacturing performance:
A new challenge for managerial accounting research. The
Accounting Review, 58(4), 686–705.
Kaplan, R. S. (Ed.). (1990). Measures for manufacturing
excellence. Boston, MA: Harvard Business School Press.
Labro, E., & Tuomela, T.-S. (2003). On bringing more action
into management accounting research: Process consider-
ations based on two constructive case studies. European
Accounting Review, 12(3), 409–442.
Lillis, A. M. (2002). Managing multiple dimensions of manu-
facturing performance—An exploratory study. Accounting,
Organizations and Society, 27, 497–529.
Lowe, A., & Jones, A. (2004). Emergent strategy and the
measurement of performance: The formulation of perfor-
mance indicators at the microlevel. Organization Studies,
25(8), 1313–1337.
Maiga, A. S., & Jacobs, F. A. (2005). Antecedents and
consequences of quality performance. Behavioral Research
in Accounting, 17, 111–131.
Malina, M. A., & Selto, F. H. (2001). Communicating and
controlling strategy: An empirical study of the effectiveness of
the balanced scorecard. Journal of Management Accounting
Research, 13, 47–90.
Maskell, B. H. (1991). Performance measurement for world class
manufacturing—A model for American companies. Cambridge,
MA: Productivity Press.
McKinnon, S. M., & Bruns, W. J. (1992). The information
mosaic. Boston, MA: Harvard Business School Press.
Medori, D., & Steeple, D. (2000). A framework for auditing
and enhancing performance measurement systems. Interna-
tional Journal of Operations & Production Management,
20(5), 520–533.
Nanni, A. J., Dixon, J. R., & Vollmann, T. E. (1992). Integrated
performance measurement: Management accounting to
support the new manufacturing realities. Journal of
Management Accounting Research, 4, 1–19.
Otley, D. (1999). Performance management: A framework for
management control systems research. Management
Accounting Research, 10, 363–382.
Perera, S., Harrison, G., & Poole, M. (1997). Customer-focused
manufacturing strategy and the use of operations-based
non-financial performance measures: A research note.
Accounting, Organizations and Society, 22, 557–572.
Piderit, S. K. (2000). Rethinking resistance and recognizing
ambivalence: A multidimensional view of attitudes toward
an organizational change. Academy of Management Review,
25(4), 783–794.
Qu, S. Q. (2006). Translating popular accounting ideas into
action: The role of inscriptions in customizing the balanced
scorecard. University of Alberta, School of Business.
Ramaswami, S. N. (1996). Marketing controls and dysfunc-
tional employee behaviors: A test of traditional and
contingency theory postulates. Journal of Marketing,
60(2), 105–120.
Ramaswami, S. N. (2002). Influence of control systems on
opportunistic behaviors of salespeople: A test of gender
differences. Journal of Personal Selling & Sales Management,
22(3), 173–188.
Reagans, R., Argote, L., & Brooks, D. (2005). Individual
experience and experience working together: Predicting
learning rates from knowing who knows what and knowing
how to work together. Management Science, 51(6), 869–881.
Scott, T. W., & Tiessen, P. (1999). Performance measurement
and managerial teams. Accounting, Organizations and Society,
24, 263–285.
Simons, R. (1990). The role of management control systems in
creating competitive advantage: New perspectives. Account-
ing, Organizations and Society, 15, 127–143.
Simons, R. (1991). Strategic orientation and top management
attention to control systems. Strategic Management Journal,
12(1), 49–62.
Simons, R. (1994). How new top managers use control systems
as levers of strategic renewal. Strategic Management Jour-
nal, 15(3), 169–189.
Simons, R. (1995). Control in an age of empowerment. Harvard
Business Review, 73(2), 80–88.
Snizek, W. (1972). Hall's Professionalism scale: An empiri-
cal reassessment. American Sociological Review, 37(1),
109–114.
Stanton, J. M., Balzer, W. K., Smith, P. C., Parra, L. F., &
Ironson, G. (2001). A general measure of work stress: The
stress in general scale. Educational and Psychological Mea-
surement, 61(5), 866–888.
Sorge, A., & Witteloostuijn, A. van. (2004). The (non)sense of
organizational change: An essay about universal manage-
ment hypes, sick consultancy metaphors, and healthy
organization theories. Organization Studies, 25(7),
1205–1231.
Swailes, S. (2003). Professionalism: Evolution and measure-
ment. The Service Industries Journal, 23(2), 130–149.
Townley, B., Cooper, D. J., & Oakes, L. (2003). Performance
measures and the rationalization of organizations. Organi-
zation Studies, 24(7), 1045–1071.
Van der Stede, W. A. (2000). The relationship between two
consequences of budgetary controls: Budgetary slack crea-
tion and managerial short-term orientation. Accounting,
Organizations and Society, 25, 609–622.
West, J., & Iansiti, M. (2003). Experience, experimentation, and
the accumulation of knowledge: The evolution of R&D in
the semiconductor industry. Research Policy, 32(5),
809–825.
Yanow, D. (2004). Translating local knowledge at organiza-
tional peripheries. British Journal of Management, 15,
S9–S25.
Zimmerman, J. L. (1997). Accounting for decision making and
control (2nd ed.). Chicago, IL: Irwin.
Zollo, M., & Winter, S. G. (2002). Deliberate learning and the
evolution of dynamic capabilities. Organization Science,
13(3), 339–351.