Performance measurement, modes of evaluation and the development of compromising accounts

Robert H. Chenhall a, Matthew Hall b,*, David Smith a

a Department of Accounting and Finance, Monash University, Australia
b Department of Accounting, London School of Economics and Political Science, United Kingdom
* Corresponding author. E-mail address: [email protected] (M. Hall).
Abstract
In this paper we develop the concept of compromising accounts as a distinctive approach to the analysis of whether and how accounting can facilitate compromise amongst organizational actors. We take the existence of conflicting logics and values as the starting point for our analysis, and directly examine the ways in which the design and operation of accounts can be implicated in compromises between different modes of evaluation and when and how such compromises can be productive or unproductive. In doing so, we draw on Stark’s (2009: 27) concept of ‘organizing dissonance’, where the coming together of multiple evaluative principles has the potential to produce a ‘productive friction’ that can help the organization to recombine ideas and perspectives in creative and constructive ways. In a field study of a non-government organization, we examine how debates and struggles over the design and operation of a performance measurement system affected the potential for productive debate and compromise between different modes of evaluation. Our study shows that there is much scope for future research to examine how accounts can create sites that bring together (or indeed push apart) organizational actors with different evaluative principles, and the ways in which this ‘coming together’ can be potentially productive and/or destructive.

© 2013 Elsevier Ltd. All rights reserved.
Introduction
‘‘There’s still a big debate in VSO about whether the purpose is to make sure volunteers have a good experience overseas and then return back happy or do we have some sort of coherent development programmes which use volunteers as a key input. I think there are those two different views of the organization. I mean there’s a whole lot of views between those two but those are the two extremes... it probably divides down the middle’’ (Regional Director 2, Voluntary Service Overseas).
The role of accounting practices in situations of differ-
ent and potentially competing interests has been a promi-
nent feature in studies of accounting and organizations.
Some studies have shown how accounting practices can
be mobilized by organizational actors to introduce a new
order and model of organizational rationality, typically
one focused on market concerns (e.g., Dent, 1991; Ezzamel,
Willmott, & Worthington, 2008; Oakes, Townley, & Cooper,
1998). Other research has emphasized the role of account-
ing in situations of multiple and potentially conflicting
interests, logics and regimes of accountability (e.g., Ahrens
& Chapman, 2002; Cooper, Hinings, Greenwood, & Brown,
1996; Lounsbury, 2008). In these settings, organizational
sub-groups can hold differing views of organizational real-
ity that are not displaced, but can become layered (cf., Coo-
per et al., 1996) such that they persist over time, or, as the
quote above suggests, remain ‘‘divided down the middle.’’
Here, accounts such as costing, resource allocation, and
performance measurement systems are involved in on-
going contests and struggles as various groups advance
particular interests and values (e.g., Andon, Baxter, & Chua,
2007; Briers & Chua, 2001; Nahapiet, 1988). Research on
the use of financial and non-financial measures in perfor-
mance measurement systems (e.g., Kaplan & Norton,
1992; Sundin, Granlund, & Brown, 2010), or the use of
quantitative and qualitative information in financial re-
ports (e.g., Chahed, 2010; Nicholls, 2009), can also be seen
to relate to the ways in which accounting practices can
give voice to different concerns and priorities. Often the
outcome of struggles between groups is intractable conflict
and confused effort with the eventual dominance of a sin-
gular perspective that limits opportunities for on-going
contests and debate (e.g., Dent, 1991; Fischer & Ferlie,
2013). Alternatively, the processes taken by sub-groups
to promote their preferred views can sometimes achieve
a more workable compromise that generates constructive
debate and on-going dialogue (e.g., Nahapiet, 1988; Sundin
et al., 2010). Building on this literature, in this study we
analyse directly the ways in which the design and opera-
tion of accounts can be implicated in compromises be-
tween different modes of evaluation and seek to
illustrate when and how such compromises can be produc-
tive or unproductive.
As conflicting logics are probably unavoidable in any
human organization (Gendron, 2002), our approach is to
take the existence of, and the potential for, tension be-
tween different modes of evaluation as the starting point
for our analysis. In doing so, we mobilize Stark’s (2009:
27) concept of ‘organizing dissonance’, which posits that
the coming together of multiple evaluative principles has
the potential to produce a ‘productive friction’ that can
help the organization to recombine ideas and perspectives
in creative and constructive ways. The concept of organiz-
ing dissonance provides an analytical approach that views
the co-existence of multiple evaluative principles as an
opportunity for productive debate, rather than a site of
domination or intractable con?ict. As such, our approach
extends prior research by privileging analysis of when
and how the co-existence of multiple evaluative principles
can be productive or unproductive. We summarize the fo-
cus of our study in the following research questions: How
does the design and operation of accounting practices facil-
itate (or impede) compromise in situations of multiple
evaluative principles? When (and how) is compromise be-
tween different evaluative principles productive or
unproductive?
We argue that answers to these questions contribute to
the literature by focusing directly on how accounting is
implicated in compromising between different evaluative
principles and the way in which such compromise can be
productive or unproductive. Here the design and operation
of accounting practices can help organizational actors to
re-order priorities and integrate perspectives in situations
of co-existing and potentially competing values (Stark,
2009). In particular, we show how accounts have the po-
tential to provide a fertile arena for productive debate be-
tween individuals and groups who have differing values
(Stark, 2009; Jay, 2013; Gehman, Trevino, & Garud, 2013;
Moor & Lury, 2011; Denis, Langley, & Rouleau, 2007).
The findings from our field study of a non-government
organization indicate that the potential for accounts to
provide a fertile arena for productive debate is related to
three important processes. First, designing accounts that
productively manage tensions between different evalua-
tive principles involves ‘imperfection’, that is, a process of
‘give and take’ that ensures that no single evaluative prin-
ciple comes to dominate others. Here the design and oper-
ation of accounting practices represents a temporary
settlement between different evaluative principles that
will require on-going effort to maintain (cf., Gehman
et al., 2013; Stark, 2009). Second, the design and operation
of accounts can facilitate productive friction by making vis-
ible the attributes of accounts that are important to organi-
zational actors with different evaluative principles, a
process that we term ‘concurrent visibility.’ This process
is important because it serves to crystallize compromises
between different modes of evaluation in a material form
(Denis et al., 2007). Third, our study reveals an important
distinction between the types of responses that can
emerge in situations where compromises break down
and accounting practices are viewed as ‘not working.’ In
particular, we show how debates over the mechanics of
accounting practices can be unproductive and lead to
‘stuckness’ (Jay, 2013) between different modes of evalua-
tion, whereas debate focused on the principles underlying
the account can help to integrate different evaluative prin-
ciples in a productive way (Jay, 2013; Stark, 2009).
Overall, our approach improves understanding of how
actors with different evaluative principles reach an accept-
able compromise, the factors that promote and/or damage
efforts to reach compromise, and the consequences for those
individuals, groups, and organizations involved. Accounts
are central to these processes because they are a site where
multiple modes of evaluation potentially operate at once,
with different modes of evaluation privileging particular
metrics, measuring instruments and proofs of worth (Stark,
1996, 2009). Accounts of performance are critical because it
is in discussions over the different metrics, images and
words that can be used to represent performance that the
actual worth of things is frequently debated and contested.
An analysis of compromising accounts [1] provides a powerful
analytical lens for examining whether and how compromise
between different modes of evaluation is developed, estab-
lished and destroyed. In particular, we show how the design
and operation of accounts can create the potential for
‘productive friction’ to arise from the coming together of
different evaluative principles (Stark, 2009).
Our study also makes a more specific contribution to research on performance measurement systems. There has been a wealth of prior management accounting studies focusing on the attributes of various performance metrics and their effects on individual and organizational performance (see, for example, research on subjectivity (Gibbs, Merchant, Van der Stede, & Vargus, 2004; Moers, 2005), comprehensiveness (Hall, 2008) and financial/non-financial measures (e.g., Baines & Langfield-Smith, 2003; Perera, Harrison, & Poole, 1997)). However, most of these studies do not explicitly investigate how the metrics that comprise performance measurement systems are developed (see Wouters and Wilderom (2008) and Townley, Cooper, and Oakes (2003) for exceptions). Thus, we extend this literature by examining explicitly the processes that take place in negotiating the scope, design and operation of the metrics included in performance measurement systems.

[1] We use the term ‘compromising accounts’ to refer to the role of accounts in facilitating (or not) compromise between actors with different evaluative principles. We develop this concept later in the paper.
The remainder of the paper is structured as follows. In
the next section we provide the theoretical framework
for the study. The third section details the research meth-
od, with the fourth section presenting findings from our field study of a non-government organization, Voluntary Service Overseas. In the final section we discuss our find-
ings and provide concluding comments.
Theoretical framework
Our focus is on whether and how accounting practices
can aid compromises in situations of co-existing modes
of evaluation. As such, in developing our theoretical frame-
work, we draw on recent developments in the ‘sociology of
worth’ to help conceptualize the co-existence of, and po-
tential for agreement between, multiple evaluative sys-
tems (see for example, Boltanski & Thévenot, 1999, 2006;
Denis et al., 2007; Huault & Rainelli-Weiss, 2011; McIner-
ney, 2008; Stark, 2009). A focus of this perspective is to
examine how competing values are taken into account
when parties seek to reach agreement or resolve disputes.
Boltanski and Thévenot (2006) conceptualize individuals as living in different ‘worlds’ or orders of worth, where each ‘world’ privileges particular modes of evaluation that entail discrete metrics, measuring instruments and proofs of worth (Stark, 2009). [2] Instead of enforcing a single principle of evaluation as the only acceptable framework, it is recognized that it is legitimate for actors to articulate alternative conceptions of what is valuable, where multiple evaluative principles can potentially co-exist and compete in any given field (Kaplan & Murray, 2010; McInerney, 2008; Moor & Lury, 2011; Scott & Orlikowski, 2012; Stark, 1996, 2009).
As co-existing evaluative principles may not be compat-
ible, a ‘clash’ or dispute may emerge between parties, who
at a given point in time, and in relation to a given situation,
emphasize different modes of evaluation (Jagd, 2011; Kap-
lan & Murray, 2010). Following Stark (2009), who extends
the framework of Boltanski and Thévenot (2006), our focus
is directed not at the presence of particular logics or orders
of worth, but on exploring the ways in which the co-exis-
tence of different logics can be productive or destructive.
In doing so, we draw on Stark’s (2009) notion of organizing
dissonance. Stark (2009) characterizes organizing disso-
nance as being a possible outcome of a clash between proponents of differing conceptions of value, that is, in situations
when multiple performance criteria overlap. The disso-
nance that results from such a clash requires the organiza-
tion to consider new ways of using resources in a manner
that accommodates these different evaluative principles.
Here, rather than something to be avoided, struggles be-
tween different evaluative criteria can prompt those in-
volved to engage in deliberate consideration about the
merits of existing practices (Gehman et al., 2013). In this
way, keeping multiple performance criteria in play can pro-
duce a resourceful dissonance that can enable organisations
to benefit from the ‘productive friction’ that can result
(Stark, 2009). However, as Stark (2009: 27) notes, not all
forms of friction will be productive, as there is a danger that
‘‘where multiple evaluative principles collide...arguments
displace action and nothing is accomplished.’’ This points
to the critical nature of compromises when there are dis-
putes involving different evaluative principles. In practice,
such compromises can be facilitated by the use of conven-
tions, as detailed in the following section.
Disputes, conventions and accounting practices
The negotiation and development of conventions is
seen as a critical tool to aid compromise in situations of
co-existing evaluative principles (Denis et al., 2007). A con-
vention is ‘‘an artefact or object that crystallises the com-
promise between various logics in a specific context’’
(Denis et al., 2007: 192). Conventions can help to bridge
different perspectives by providing an acceptable compro-
mise between competing value frameworks (Biggard &
Beamish, 2003; Denis et al., 2007).
Accounting practices as a convention can help to re-
solve disputes in two inter-related ways. One, the develop-
ment and operation of accounts can provide a fertile arena
for debate between individuals and groups with differing
evaluative principles. The production of accounts is impor-
tant to this process because different evaluative principles
do not necessarily conflict or compete continuously, but
resurface at particular moments in time (Jay, 2013), such
as during the design and operation of accounting practices.
Two, the production of accounts can serve to ‘crystallize’
the compromise in a material form (cf., Denis et al.,
2007), thus providing recognition of, and visibility to, dif-
ferent values and principles.
Tensions over accounts and accounting practices are likely because they can have very real consequences for the ordering of priorities in an organization and, consequently, for the interests of groups within the organization who hold different views. It is well understood that accounting can make certain factors more visible and more important than others, provide inputs that affect decision-making and the allocation of resources, and can also provide authoritative signals regarding the very purpose and direction of the organization. In addition, research has highlighted the persuasiveness of numbers in accounts and the role of quantification in advancing particular views and interests (e.g., Porter, 1995; Robson, 1992; Vollmer, 2007).

[2] Boltanski and Thévenot (1999, 2006) specify six ‘worlds’ or orders of worth (the ‘inspirational’, ‘domestic’, ‘opinion’, ‘civic’, ‘merchant’ and ‘industrial’ worlds). The ‘civic’ world, for example, is based on solidarity, justice and the suppression of particular interests in pursuit of the common good, whereas the ‘market’ world is one with competing actors who play a commercial game to further their personal (rather than collective) goals. In this paper our key focus is on understanding why and how actors can reach compromises (or not) in situations that are characterised by the presence of multiple evaluative principles. In doing so, we follow the approach of Stark (2009, see p. 13 in particular). That is, we do not confine our analysis to the six orders of worth as outlined by Boltanski and Thévenot (1999, 2006) but specify the different evaluative principles as is appropriate to the particular empirical setting. Given our approach, we do not elaborate further on the six orders of worth of Boltanski and Thévenot (1999, 2006) here. For further insight on the six orders of worth, see Boltanski and Thévenot (1999, 2006), and for their implications for accounting research, see Annisette and Richardson (2011) and Annisette and Trivedi (2013).
Nahapiet’s (1988) study of changes to a resource alloca-
tion formula in the United Kingdom’s National Health Ser-
vice showed how the formula made existing values more
visible and tangible and thus acted as a stimulus which
forced explicit consideration of three fundamental organi-
zational dilemmas. In this setting, actors contested
strongly the formula’s design and operation, and its inter-
pretation by other groups. Different interpretations of the
formula, and of accounting more generally, were problem-
atic because they played a key role in establishing what
counts and thus what is worthy. This tension is exacer-
bated in organizational settings where limited resources
(e.g., money, time, space) mean that not all interests can
be accommodated. In particular, the processes of evalua-
tion inherent to the production of accounts are central to
problems of worth in organizations (cf., Stark, 2009). For
example, the process of developing, adjusting and recon-
figuring accounts can require groups to make mutual concessions (i.e., compromise) in order to agree on the final (if
only temporary) form and content of the account. In this
way, producing accounts can provide an arena where dif-
ferent understandings of value may be articulated, tested,
and partially resolved (Moor & Lury, 2011). However, while
debate over accounts has the potential to facilitate produc-
tive friction, this depends on whether and how the conven-
tion comes to be (and continues to be) viewed as an
‘acceptable’ compromise. Importantly, although accounts
as conventions may help enact compromises, they can also
be subject to criticism and thus require on-going efforts to
maintain and stabilize the compromise.
Responses to breakdowns in compromise
Designing accounting practices in the presence of co-
existing modes of evaluation is likely to result in situations
where the practice is viewed, at least by some actors in the
organization, as ‘not working.’ Here there is a ‘breakdown’
such that issues and concerns that have arisen can no long-
er be absorbed into the usual way of operating (Sandberg &
Tsoukas, 2011). Some breakdowns can be viewed as tem-
porary and so the focus is on what is problematic about
the current practice and how to fix it (Sandberg & Tsoukas,
2011). For example, doubts and criticisms can arise about
the difficulties of implementing the practice, about
whether it will result in the desired behaviours, and how
it will influence other practices (Gehman et al., 2013).
This resonates with research in accounting that shows
how the introduction of new accounting practices can re-
sult in criticisms that they have not been implemented
correctly and revised procedures are required to improve
the design and implementation process (e.g., Cavalluzzo
& Ittner, 2004; Wouters & Roijmans, 2011). A criticism of
existing practices is also evident, for example, where performance measurement systems are seen to require more non-financial measures (Kaplan & Norton, 1992) and where financial reports are viewed as needing more narrative information (Chahed, 2010). Such criti-
cisms can result in changes to the existing accounting
practices. Stark (2009) notes, however, that disputes over
the mechanics of existing practices may not lead to effec-
tive changes, but rather result in a situation where nothing
is accomplished. Here, co-existing modes of evaluation
may not lead to innovation, but rather oscillation and
‘stuckness’ between logics (Jay, 2013).
A breakdown in practice can also be more severe such
that existing ways of doing things no longer work and
reflection at a distance from the existing practice is re-
quired (Sandberg & Tsoukas, 2011). Here actors can debate
the principles and values underlying the existing practice
and the changes that are required to move beyond the
breakdown (Gehman et al., 2013). This type of criticism
and debate can arise where people feel that some funda-
mental principles with which they identify are not being
respected (Denis et al., 2007). This can be particularly
problematic in debates over incommensurables, that is,
the process of denying ‘‘that the value of two things is
comparable’’ (Espeland & Stevens, 1998: 326). Claims over
incommensurables are important because they can be ‘‘vi-
tal expressions of core values, signalling to people how
they should act toward those things’’ (Espeland & Stevens,
1998: 327). It can also arise where the values evident in the
existing practice clash with deeply held values obtained
through prior experience (Gehman et al., 2013).
Debates over the underlying principles of accounting
practices, and conventions more broadly, can result in
what Stark (2009: 27) labels ‘‘organizing dissonance’’, that
is, a process of productive friction arising from debate be-
tween actors over different and potentially diverse evalua-
tive principles. To generate productive friction in the
context of such debates, the rivalry between different
groups must be principled, with advocates offering rea-
soned justifications for their positions (Stark, 2009). In this situation actors become reflexively aware of latent para-
doxes and directly confront and accept ambiguities, help-
ing new practices that integrate logics to emerge (Jay,
2013). The resolution of breakdowns also requires recogni-
tion that such a compromise represents a ‘‘temporary set-
tlement’’ (Stark, 2009: 27) between competing value
frameworks that is fragile (Kaplan & Murray, 2010) and
only likely to be maintained via on-going effort and
reworking (Gehman et al., 2013).
Summary
This discussion highlights the potential role for ac-
counts in developing compromises in situations where
the co-existence of different evaluative principles is a com-
mon feature of organizations. In particular, it reveals how
accounts have the potential to act as a convention to help
develop and crystallize compromises. It also highlights
the way in which compromises are temporary settlements
that require on-going work to stabilize. In particular, the
merits of an accounting practice may be called into
question, resulting in efforts to ‘fix’ the way in which
practice currently operates and/or debate focused on
resolving tensions between underlying principles and val-
ues. In the next section, we empirically examine the role of
compromising accounts through a detailed analysis of the
development of a performance measurement system that
we observed during a longitudinal field study at Voluntary
Service Overseas (VSO).
Method
VSO is a non-governmental international development
organization that works by (mainly) linking volunteers
with partner organizations in developing countries. Each
year approximately 1500 volunteers are recruited and take
up placements in one of the over forty developing coun-
tries in which VSO operates. Our interest in VSO was
sparked by an initiative to develop a new performance
measurement system, subsequently referred to as the
‘Quality Framework’ (QF). This framework attempted to
combine different metrics and narrative content into a sin-
gle report that would provide a common measure of per-
formance in each of VSO’s country programmes.
The field study was conducted between July 2008 and
August 2010. During this time we conducted 32 interviews,
attended meetings, observed day-to-day work practices,
collected internal and publicly available documents, partic-
ipated in lunches and after-work drinks with staff and vol-
unteers, primarily in London, but also during a 1-week
visit to the Sri Lanka programme office in January 2009.
Most of the interviews were conducted by one of the
authors, with two authors conducting the interviews with
the country directors. Interviews lasted from 30 min to 2 h.
Almost all interviews were digitally recorded and tran-
scribed, and, where this was not possible, extensive notes
were taken during the interview and further notes then
written up on the same day. We interviewed staff across
many levels of the organization as well as staff at different
locations. Face-to-face interviews were conducted at VSO’s
London headquarters, and in Sri Lanka. Due to the location
of VSO staff around the world, some interviews (particu-
larly those with country directors) were conducted via
telephone. Table 1 provides an overview of the formal
interviews and observations of meetings. We carried out
observations of 17 meetings and workshops in both Lon-
don and Sri Lanka, primarily concerned with the QF and
other planning and evaluation practices.
Throughout the study, we were also involved in infor-
mal conversations (typically before and after meetings,
and during coffee breaks, lunches and after-work drinks)
where staff and volunteers expressed their thoughts about
the meetings, as well as other goings-on at VSO and the
non-government organization sector. We kept a detailed
notebook of these informal conversations, which was then
written up into an ‘expanded account’ (Spradley, 1980)
that on completion of the field study totalled more than
200 pages of text. We also exchanged numerous emails
(over 700 separate communications) and had telephone conversations with VSO staff.
We were provided access to over 600 internal VSO documents, including performance measurement reports, supporting documents and analysis. These reports included the complete set of QF reports from each VSO programme office for 2008 and 2009, documents related to other monitoring and review processes, as well as more general documents concerning organizational policies, plans and strategies. Finally, we collected publicly available documents, such as annual reports and programme reviews, newspaper articles, as well as several books on VSO (e.g. Bird, 1998).

Table 1. Formal fieldwork activity.

Interviews | Location of staff | Number of interviews
Director, International Programmes Group | London | 2
Deputy-Director, International Programmes Group | London | 1
Regional Director | London, Ghana | 2
Country Director | Sri Lanka (x2), Guyana, Ghana, The Gambia, Uganda, Vietnam, Nepal, Namibia, Cambodia | 10
Head-Programme Learning and Advocacy | London | 1
Team Leader-Programme Development and Learning | London | 2
Executive Assistant to Director, International Programmes Group | London | 3
Programme Learning Advisor | Ottawa | 1
Systems and Project Manager | London | 1
Head-Strategy, Performance and Governance | London | 1
Director-VSO Federation | London | 1
Volunteer Placement Advisor | London | 1
Finance Manager | Sri Lanka | 1
Programme Manager | Sri Lanka | 2
Facilities and Office Manager | Sri Lanka | 1
Volunteer | Sri Lanka | 2
Total interviews | | 32

Observation and attendance at meetings | Location of meeting | Number of meetings
Quality Framework meetings | London | 6
Various planning and review meetings | London | 6
Programme planning and review workshop | Sri Lanka | 3
Office planning and logistics meeting | Sri Lanka | 2
Total meetings | | 17
Consistent with the approach employed by Ahrens and Chapman (2004), Free (2008) and Chenhall, Hall, and Smith (2010), we adopted Eisenhardt’s (1989) methods. This involved arranging the data (transcripts, field notes, documents) chronologically and identifying common themes and emerging patterns. We focused in particular on iterations in the content and use of performance measurement systems at VSO over time and then sought to understand why they came about and the subsequent reactions from different people within the organization. We then re-organized this original data around key events (for example, the ‘league table’ debates) and significant issues (for example, ‘consistency’) that emerged as we sought to understand the performance measurement and review systems at VSO. We compared our emerging findings from the study with existing research to identify the extent of matching between our data and expectations based on prior theory. In particular, findings that did not appear to fit emerging patterns and/or existing research were highlighted for further investigation. This process was iterative throughout the research, and finished when we believed we had generated a plausible fit between our research questions, theory and data (Ahrens & Chapman, 2006).
Case context
VSO was founded in 1958 in England as an organization
to send school leavers to teach English in the ‘‘underdevel-
oped countries’’ of the Commonwealth (Bird, 1998: 15).
Volunteers were initially recruited exclusively from Eng-
land, and later from other countries, including the Nether-
lands, Canada, Kenya, the Philippines, and India. The initial
focus on the 18-year-old high school graduate was re-
placed over time by a (typically) 30-year-old-plus experi-
enced professional. Volunteers operated under a capacity
building approach, being involved in initiatives such as
teacher training, curriculum development, and advocacy. [3]
In 2004 VSO signalled it would adopt a more ‘program-
matic’ approach to its work, which shifted attention away
from each volunteer placement to one that focused ‘‘all our
efforts on achieving specific development priorities within the framework of six development goals’’ (Voluntary Services Overseas, 2004). [4] This move to a programmatic model
was coupled with explicit recognition of VSO’s purpose as
primarily a ‘development’ rather than ‘volunteer-sending’
organization, and the development of evaluation systems
to support this change. Notwithstanding this explicit shift
in organizational priorities, the focus on volunteering was
still strong, particularly as many VSO staff were formerly
volunteers. As such, a mix of different world-views at VSO
was the norm:
‘‘There are some different kind of ideological views
between people who feel that the important thing
about VSO, it’s just about international cooperation
and getting people from different countries mixing with
each other and sharing ideas. It doesn’t matter what the
outcome is really, it’s going to be a positive thing but
you don’t need to pin it down. Versus it’s all about pin-
ning down the impact and the outcomes of our work
and being very focused and targeted and being able to
work out what is your return on your investment and
all these kind of things so I think it is partly historical
and partly differences in just a mindset or world-view.’’
(Interview, Regional Director 2, November 2008).
The different views on VSO’s overall purpose created considerable tension, focused in particular on debates about the value of VSO’s work. Originating from VSO’s founding principles, many staff and volunteers felt that volunteering was, in and of itself, a positive and productive activity and any drive to specify an ‘outcome’ of this was secondary. In contrast, the programmatic approach, coupled with the recruitment of many staff from other international development agencies, gave more attention to poverty reduction and demonstration of the ‘impact’ of VSO’s work. This situation was increasingly common in the wider NGO sector, where founding principles of volunteerism, the development of personal relationships, and respect for each individual were coming into contact with more ‘commercial’ values favouring professionalism, competition and standardization (see, for example, Helmig, Jegers, & Lapsley, 2004; Hopgood, 2006; Parsons & Broadbridge, 2004).
As an espoused international development organization, VSO also existed in an environment increasingly characterized by the use of indicators and targets (a prime example being the Millennium Development Goals, see United Nations, 2011) and a greater focus on the effectiveness of aid (particularly the Paris Declaration on Aid Effectiveness in 2005). [5] VSO’s main funder, the United Kingdom’s Department for International Development (DFID), had aligned its development programme around the Millennium Development Goals, and was also a signatory to the Paris Declaration. [6,7] This had implications for the way in which VSO was required to report to DFID, particularly during the course of our study when a change in DFID’s reporting format required VSO to track progress against its four agreed strategic objectives using a set of 17 indicators. [8]

[3] VSO operated what it calls a ‘capacity building’ approach by partnering volunteers with local organizations that require assistance or expertise in a variety of capacities. VSO describes its partnership approach as follows: ‘‘We work with local partners in the communities we work with, placing volunteers with them to help increase their impact and effectiveness’’ (VSO website, http://www.vsointernational.org/vso-today/how-we-do-it/, accessed 7 April 2010). Volunteers typically take up a specific role or position, often working alongside a local staff member, where partner organizations range in size from very small, local businesses, community groups and NGOs, to large organizations and government departments and ministries.

[4] The six development goals were health, education, secure livelihoods, HIV/AIDS, disability, and participation and governance (Focus for Change, Voluntary Services Overseas, 2004).

[5] See www.oecd.org/dataoecd/11/41/34428351.pdf for the Declaration.

[6] See DFID (2000, 2006, 2009).

[7] In terms of the overall funding environment, VSO’s total funding increased steadily during the 2000s. In 2000 total income was approximately £28m, with approximately £22m from DFID (77% of total funds). In 2005 total income was approximately £34m, with approximately £25m from DFID (74% of total funds). In 2009 total income was approximately £47m, with approximately £29m from DFID (60% of total funds) (source: VSO Annual Reports).
Collectively, the changing context of the NGO sector,
the move in international development towards an in-
creased focus on aid effectiveness and the use of indicators,
along with VSO’s own progression from a volunteering to a
more programmatic focus, meant that the co-existence of
different evaluative principles characterized the situation
at VSO. In particular, we identify two primary modes of
evaluation. [9] The first mode of evaluation, which we label ‘learning and uniqueness’, was focused primarily on reflec-
tion, the use of contextual and local interpretations, and a
preference for narrative content. Discourses within VSO reg-
ularly emphasized the importance of this mode of evalua-
tion, with one of VSO’s three stated values being a ‘‘commitment
to learning’’ whereby VSO seeks to ‘‘continue to develop
effective monitoring and evaluation methods so that we
can learn from our own and others’ works’’ (VSO, 2004).
The second mode of evaluation, which we label ‘consistency
and competition’, was focused primarily on standardization,
the use of consistent and universal interpretations, and a
preference for indicators. We outline the different modes
of evaluation in Table 2, which we return to throughout
our empirical analysis.
Attempts at compromise between these different
modes of evaluation became evident in debates about
how to measure the value of VSO’s work in each country.
Measuring performance became particularly important be-
cause the move to be more programmatic had placed in-
creased pressure on the allocation of resources amongst
programme offices, as it required more expenditure on staff to support volunteer placements and develop and manage programmes. [10] However, the situation was characterized by a lack of commonly agreed criteria for measuring the performance of country programmes (cf., Garud, 2008), where over time three approaches had been instigated: the ‘Strategic Resource Allocation’ (SRA) tool, the ‘Annual Country Report’ (ACR) and the ‘Quality Framework’ (QF). [11]
The SRA was developed in 2002 as VSO’s first attempt to measure the effectiveness of each programme office. [12] The SRA relied almost exclusively on using numerical data to measure performance, where each programme office was required to score itself on 16 criteria related to the extent to which its work was focused on disadvantage, achieved certain outputs related to volunteers, and adopted a strategic approach. Each criterion was given a precise percentage weighting, e.g., 2% or 4% or 17%. Scores on the 16 criteria were to be aggregated, with each programme office awarded a percentage score out of 100, with recognition that ‘‘the higher the overall percentage a Programme Office receives in this tool, the more ‘‘effective’’ it will be perceived to be based on this measure.’’ [13] There was a strong emphasis on review of scores by staff in London ‘‘to ensure consistency between regions...in order to ensure transparency and to allow comparison between countries.’’ [14]
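To make the arithmetic of this kind of weighted aggregation concrete, the sketch below shows how a single percentage score could be computed from weighted criterion scores. It is only an illustration under stated assumptions: the criterion names, the scores and all weights other than the 2%, 4% and 17% figures quoted above are hypothetical, and the code is not a reconstruction of VSO’s actual tool.

```python
# Minimal sketch of a weighted self-assessment score, in the spirit of the SRA
# described above. Criterion names, scores and most weights are hypothetical.

def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine criterion scores (each 0-100) into one percentage,
    weighting each criterion by its share of the total weight."""
    total_weight = sum(weights.values())
    return sum(scores[name] * weight for name, weight in weights.items()) / total_weight

# Three of the sixteen criteria, using the example weights quoted in the text
# (17%, 4%, 2%); the remaining criteria would carry the rest of the weight.
weights = {"focus on disadvantage": 0.17, "volunteer outputs": 0.04, "strategic approach": 0.02}
scores = {"focus on disadvantage": 75, "volunteer outputs": 50, "strategic approach": 90}

print(f"Overall score: {overall_score(scores, weights):.1f}%")  # roughly 72.0% for this subset
```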
The SRA’s implementation was problematic, however, and the approach was abandoned, as a country director later explained:

‘‘The SRA was dropped because it was becoming increasingly apparent that some programmes were rather self-critical while others were not – but that this did not necessarily relate very closely to programme quality – in fact it appeared that sometimes the opposite was true, the programmes that had the capacity to critically assess their own performance (and give themselves a low score) were of a better quality than those who from year to year claimed that things were going well – and this resulted in some good programmes being closed down.’’ (Interview, Country Director 1, November 2008).

Table 2. Modes of evaluation at VSO.

Dimensions | ‘‘Learning and Uniqueness’’ | ‘‘Consistency and Competition’’
Purpose of evaluation | Reflection, learning, improvement | Standardize, compare, compete
Attributes of ‘good’ evaluation | Contextual, detailed, ‘local’ interpretations | Consistent, precise, objective, ‘universal’ interpretations
Attributes of ‘good’ accounts | Narrative descriptions, case studies, stories, images | Numbers, indicators, and scales, particularly those that can be compared between units
 | Indicators that provoke creativity, ambition and innovation | Indicators that capture current performance accurately
 | Avoid reliance on numbers as they provide only a partial account and do not tell the ‘real’ story | Avoid reliance on narrative as it is ‘selective’ and cannot be compared between units

[8] See www.vsointernational.org/Images/ppa-self-assessment-review-2010-11_tcm76-32739.pdf for the 2010/2011 report to DFID (accessed 31 May 2012). The first report issued under this format was for the 2009/2010 reporting year. Prior to this, there was an absence of indicators, with VSO reporting against various development outcomes using descriptive examples of progress from different countries (‘VSO Narrative Summary and Learning Report for PPA 2005-6’).

[9] As noted above, our approach here is to follow Stark (2009) and specify the different modes of evaluation in accordance with our empirical setting.

[10] VSO operated a geographic structure, whereby several programme offices were grouped together to form a specific region, for example, Sri Lanka, India, Bangladesh, Nepal and Pakistan formed the ‘South Asia’ region. Each Country Director reported to a ‘Regional Director’, with the Regional Directors reporting to the Director of IPG, based in London. IPG also had staff responsible for providing support to programme offices in areas such as funding, advocacy, and programme learning and reporting. Each programme office was a budget holder, and received core funding from VSO headquarters via the annual budgeting process. Core funding related to costs such as staff salaries and benefits, office and vehicle rental, and volunteer costs (including allowances and training/support costs). Each programme office received a limited amount of funding for ‘programme’ costs, with programme offices expected to apply for grants from donors to support further programme work.

[11] Our field study (July 2008 to August 2010) corresponded to the first year of the QF’s operation and thus was subsequent to the use of the SRA and ACR. As such, we briefly describe the SRA and ACR to provide context to the development of the QF but do not analyze the development of the SRA and ACR in detail.

[12] See Appendix A, which provides the ‘summary page’ of the SRA.

[13] SRA document, 2002.

[14] SRA document, 2002.
Subsequent to the SRA, the ACR was developed in 2005 and focused its reporting on the ‘activity’ that a country programme had engaged in, such as ‘so many workshops, so many volunteer placements.’ [15] The ACR itself did not contain any quantitative scoring or ranking of programme offices but was a narrative report that provided descriptions of progress towards ‘Strategic objectives’ and contained a section focused on ‘Lessons’ to be learned. [16] The ACR also included one or more ‘Most Significant Change’ (MSC) stories, which focused on telling stories as a way to reflect upon and learn from programme experiences (see Dart & Davies, 2003; Davies & Dart, 2005).
The third approach (and our empirical focus) developed
subsequent to the SRA and ACR was the QF, which at-
tempted to combine scoring and narrative elements into
a single reporting framework. We show how the QF was
subject to criticism that resulted in changes in the use of
narrative and quantitative measures, which favoured a
mode of evaluation focused on ‘consistency’ over that
which respected the ‘unique’ circumstances of individual
country programmes. A further dispute emerged over the
relative focus on ‘learning’ and ‘competition’ that precipi-
tated more fundamental changes in order to develop an ac-
count that helped to compromise between the different
modes of evaluation.
Development of the quality framework
The initial development of the QF occurred during a
meeting of all country directors in late 2007. [17] Prior to this meeting, in an email sent to all country directors in May 2007, the Director of the International Programmes Group (IPG) gave his support to the development of the QF and outlined his rationale for its implementation:
‘‘We are very good at measuring volunteer numbers,
numbers of tools being used, early return rates, levels
of programme funding – but what about the impact of
our work? How do we know if we really are working
with poor people to contribute to positive change in
their lives?...I believe that it is absolutely essential that
we have a shared vision of success – that we all know
what a high quality, successful VSO country programme
could look like – that we know how to measure this –
and that we have a culture that encourages, supports
and celebrates this. Of course all of our country
programmes could, and should, look very different.
Local circumstances and development priorities above
all should ensure this...However, there must be some
fundamental principles that drive VSO’s programme
work and enable us to determine whether we are suc-
cessful or not.’’
In this statement, the imperative for compromise be-
tween VSO’s different modes of evaluation in the develop-
ment of the QF was revealed. The reference to a ‘shared
vision of success’ and knowing ‘how to measure this’ indi-
cates a concern with developing common and standard-
ized ways of measuring success. There is also recognition
of the uniqueness of country programmes in that they
‘should look very different.’ In our analysis below, we focus
on two central debates that emerged in the development of
the QF: the first concerning the tension between standard-
ization and uniqueness, and the second regarding the most
appropriate approach to improve programme quality.
Debate 1: How to standardize and respect uniqueness?
A key difficulty in developing the QF was tension between the desire to standardize, i.e., to have indicators that provide a consistent method for measuring success in each programme office, and the need to respect the uniqueness of programme offices and for indicators to be ‘inspirational.’ It was the need to make choices about the content of elements, indicators and narrative components in the QF that provided an arena for debates and discussions regarding different modes of evaluation at VSO. Country directors and other programme staff were central to these discussions, and provided suggestions for elements and indicators that were collected in London, and then followed in late 2007 by a meeting of all country directors and senior IPG staff in Cambridge, UK. A central platform of this meeting was sessions devoted to dialogue and debate about the elements and indicators that would comprise the QF. Centred on the question ‘‘What is quality?’’, it was here that staff were able to advocate for the inclusion and exclusion of particular elements and indicators. This resulted in a set of 14 elements relating to various aspects of programme quality, such as inclusion, volunteer engagement, innovative programming and financial management. Importantly, the elements relating to the impact of VSO’s work on partners and beneficiaries were given highest priority: they were the first two elements in the QF and were assigned the labels ‘Element A’ and ‘Element B’ to distinguish them from the other elements that were labelled with numbers one through 12 (see Appendix B).
Testing the QF at the country level was a priority, with a country director offering the following reflections on a pilot test:

‘‘We worked through the different indicators to see whether the results that the framework spat out were recognizable...some of the results didn’t give the right answer basically. So we changed some of the indicators...The framework itself allows for a small narrative at the beginning of each element, which can at least explain context as to why it may have a low score or conversely why it might have a high score. They may be working in a country that has a very easy operational environment. It might have lots of external funding and that for me is reflected in that short narrative section at the beginning.’’ (Interview, Country Director 1, November 2008).

[15] Interview, Country Director 2, November 2008.

[16] ACR document, 2005.

[17] As noted above, the first report to DFID that used indicators to track progress against strategic objectives was for the 2009/2010 reporting year. Within VSO, work to address the new reporting requirements began in the second half of 2008, more than 1 year after the initial development of the QF. We also note that the QF reports were not provided to DFID or to any other external funders, although the IPG Director commented that he did inform DFID about the QF process and that this was considered by him to be ‘helpful’ in showing DFID that VSO was addressing issues around the impact of its work.
This comment reveals how local knowledge was considered critical in that indicators were required to produce results that were ‘recognizable’ to programme office staff, partners and volunteers. Providing space in the QF report for narrative discussion to reflect the different circumstances of countries was also important. In particular, the narratives in the QF reports were typically very extensive and contained statements calling attention to the unique situation of each country. They also sought to celebrate achievements as a way to inspire staff, volunteers and other stakeholders, for example, by stating that ‘‘the success of the education programme in demonstrating beneficiary level impact in [Country X] is extraordinary and it is motivating for staff and volunteers to be able to see the impact of their work.’’ [18] Further recognition of country uniqueness was evident in the design of the performance ranges for the indicators:

‘‘Many of the KPIs have got ranges set against them to outline what ‘high performance’, ‘satisfactory performance’ and ‘room for improvement’ looks like. However, in some cases it will be more relevant for CDs and Regional Directors to decide what results say about the performance of the programme within the context of the programme itself...it is recognised that what can be considered high performance will differ considerably between Programme Offices.’’ [19]
In this quote, there is explicit recognition of differences between countries that prevents the use of standardized performance ranges for each and every indicator. As such, eight of the indicators in the QF were scored with guidance that ‘‘performance [to be] determined by PO [programme office] and RD [regional director]’’. Finally, in contrast to the SRA, elements were not given explicit weights and there was no calculation of an overall score for each programme office.
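To illustrate how a report with these properties might be organized, the sketch below models QF-style elements that carry an opening narrative and indicators scored from 1 to 4, with some indicators left to programme office and regional director judgement, and with no weights and no aggregate score. The element and indicator names, narrative text and scores are hypothetical, not taken from VSO’s actual framework.

```python
# Hypothetical sketch of a QF-style report: elements hold a short narrative and
# indicators scored 1-4; no weights and no overall score are computed.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    score: int                    # 1 = room for improvement ... 4 = high performance
    judged_locally: bool = False  # True where PO/RD judgement replaces fixed ranges

@dataclass
class Element:
    label: str         # e.g. "Element A" or "Element 3"
    narrative: str     # context that explains low or high scores
    indicators: list[Indicator] = field(default_factory=list)

report = [
    Element(
        label="Element A",
        narrative="Impact on partners; operating environment unusually favourable this year.",
        indicators=[
            Indicator("partner capacity strengthened", 3),
            Indicator("beneficiary-level impact evidenced", 2, judged_locally=True),
        ],
    ),
]

# The report is read element by element, score alongside narrative,
# rather than rolled up into a single figure.
for element in report:
    print(element.label, "-", element.narrative)
    for indicator in element.indicators:
        print("  ", indicator.name, indicator.score)
```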
There was also scope for constructive debate lasting beyond the QF’s initial development. Country and regional directors were required to discuss the scoring for an individual programme office together, and to analyze and resolve differences in scores that emerged from this process. Furthermore, many staff from programme offices completed the QF together, providing a way to review overall results, and many regional directors used the QF to help set objectives and action plans for country directors in the coming year. Some programme offices embraced the QF even further, using it to determine whether an office move would improve programme quality, or further disaggregating the QF so it could be applied to different parts of the programme office.
Collectively, the input of country directors and other programme staff, the importance of local knowledge in designing indicators, providing space for narrative so that programme offices could reflect local circumstances, and recognition that performance on some indicators was best determined using programme office and regional director judgement, provided explicit recognition of the uniqueness of programme offices. Critically, however, the need for comparability was also recognized. Each programme office was required to complete the QF using a common template, with common elements and indicators, thus providing a standardized method of measuring performance across countries.
After its first year of operation, praise for the QF was widespread, with this comment from a country director echoing that of many others:

‘‘Overall it was a good move away from the Annual Country Report because one of the main things was it gave much more direction on being clear on what to report on but also through the report it identified what is important, what’s quality but it’s also important to reflect on as a programme. Now you can always argue about elements of that, that’s not the point. I think it’s just helpful to say well these are overall important parts to reflect on and I thought that was quite useful.’’ (Interview, Country Director 2, November 2008).
In this statement, the QF is seen as better than the ACR because it provides clarity around what makes a quality country programme, and, in this way, provided a ‘collectively recognized’ reference regarding the way in which programme offices would be evaluated (cf., Jagd, 2007; Biggard & Beamish, 2003). Importantly, this quote also reveals that although there is recognition that the QF was and would be the site of disagreements (e.g., over elements), it was the overall approach of focusing on what made a quality programme that was most important. This corresponded to the view of the IPG Director, who also praised the QF:
‘‘I think it’s been great. It’s not a perfect tool but I don’t think any tool in development ever is perfect... there wasn’t a lot of discussion about quality or about success and the discussions were more about how many volunteers have we got or how much programme funding have we got and the quality framework has been a really useful tool over the last 18 months for just getting people to talk more about impacts on poverty. Quality, what is quality like?...[The QF has] given me stronger evidence when arguing at senior management team level for where things aren’t working. So when you’ve got 35 country directors saying things like the finance systems aren’t working it gives you a lot of evidence to be able to really argue for that...so from [that] basis, I think it’s gone really well.’’ (Interview, IPG Director, December 2008).
Here, praise is directed at how the QF helped move discussions more towards the impact of programmes on partners and beneficiaries and less on volunteer numbers or funding levels. The ability to aggregate data across programme offices was important in providing arguments for more resources at senior management forums. [20] The statements that the QF was not ‘a perfect tool’ but was ‘quite useful’ and ‘worked out pretty well’ reveal an awareness of the importance of ‘making do’ (cf., Andon et al., 2007), which helped to enact a certain stability between a mode of evaluation that privileged country uniqueness and one favouring standardization and comparability. However, such stability was temporary, highlighting the fragility of the compromise. While there was initial praise for the QF from many sources, there were also critics. Shortly after the completion of the QF in its first year, it was subject to strong criticism, aimed in particular at the process used to score the indicators and elements.

[18] QF Country Report, 2008.

[19] QF Guidance document, 2008.

[20] The Director of IPG was a member of the six-person executive management team at VSO called the ‘Senior Management Team’ (other members were the Chief Executive Officer, Director of the VSO Federation, Director of the UK Federation, the Chief Financial Officer, and the Director of Human Resources). This group was responsible for major resource allocation decisions, particularly the amount of funds that were allocated to each of the major divisions within VSO, including IPG.
But you’re not consistent!
The scoring process for the QF was based on self-assess-
ment by the country director (with programme office staff
input), with review by the relevant regional director. This
raised concerns, particularly from some staff in London,
that scoring was inconsistent across regions and countries:
‘‘(The) key anomaly is that the ratings seem to have
been applied differently in each region. . .I believe this
is an inaccurate reflection of the current strengths and
weaknesses of programme funding across IPG. . .I sus-
pect there are different interpretations as to what con-
stitutes good programme funding performance. . .I think
there is a need to clarify what justifies a 1, 2, 3 or 4
within each indicator.''[21]
This comment reveals that the scoring methodology
was criticized for producing inaccurate results, with the
problem being a dislike of the different interpretations of
good performance made by different countries. The sug-
gested solution was to instigate changes to the scoring pro-
cedure to clarify the meaning of each score.
A senior IPG manager was the most vocal critic of the
scoring process, which he believed was ‘‘extremely dubi-
ous.’’ He lamented the SRA’s demise and concluded that
shifting the balance in favour of self-assessment in the
QF had created what he believed were questionable re-
sults. He expressed a preference for taking the scoring of
indicators and elements out of the hands of country and re-
gional directors altogether. He first floated the idea of
using an external assessment process akin to an
''OFSTED-type inspection unit.''[22]
Another option was the
use of an internal assessment unit within VSO to carry out
an ‘‘independent performance assessment.’’ These prefer-
ences strongly emphasized the importance of having a ‘con-
sistent methodology’ with ‘independence’, which, in effect,
placed the values of standardization and comparability
above those of local context and country uniqueness.
Although the use of an internal or external performance
assessment unit did not materialize, the criticism resulted
in several changes to the QF for its second year of opera-
tion. Each indicator now included a description of each of
the 1–4 levels, where previously only levels 1 and 4 had
a description. Revised guidance documentation was also
issued:
‘‘It is important to score yourself precisely against the
descriptors. There may be very good reasons why you
achieve a low score on a particular indicator, but it is
important to score precisely – the narrative can be used
to give a brief explanation.''[23]
This guidance highlights two important changes. First,
there was the explicit emphasis on the need to score pre-
cisely, with the reasons that lay behind particular scores
considered secondary. Second, the narrative was now
viewed as the space where scores can be explained, indi-
cating that its primary value was its connection to the scor-
ing process, not in providing information that can arise
from other sources. In further changes, the guidance ‘‘what
can be considered high performance will differ consider-
ably between Programme Of?ces’’ was removed from the
QF documentation, the scoring of only one (rather than
the previous eight) of the indicators was to be assessed
using judgement,[24] and ownership of some scoring was ta-
ken away from programme offices, with the explanation that
this would allow ''data to be comparable across programmes
by using universal interpretations of the data.''[25]

[21] QF Element Summary document, 2008.
[22] OFSTED is the Office for Standards in Education, Children's Services and
Skills in the UK, an independent body that inspects schools and provides
ratings of performance. For example, schools are awarded an overall grade
from 1 to 4, where 1 is outstanding, 2 is good, 3 is satisfactory and 4 is
inadequate (OFSTED website, www.ofsted.gov.uk/, accessed 24 July 2010).
[23] QF Guidance document, 2009.
[24] As an example, indicator 8.1 on funding was changed whereby the
ability to assess performance on a 'country by country' basis was replaced
with explicit monetary ranges that would apply to each programme office
regardless of its size of operations or different external funding
environments.
[25] QF Reporting Template, 2009.
Concerns over the scoring process itself were also ad-
dressed, particularly in relation to ensuring consistency
in the way that regional directors used indicator scores
to determine overall element scores. A series of meetings
was arranged to address this issue directly. One of these
meetings, which lasted for over 2 h, involved regional
directors working through a recently completed QF report
in order to agree on how to score each element. One by
one, through each of the 14 elements, the process for using
indicator scores to determine an overall element score was
discussed. Looking fed-up, a regional director said:
‘‘Can I just ask, do we really care how accurately we
score? [quizzical looks from other regional directors].
No, honestly, so we could spend a lot of time working
out how we score it and use it for comparison but I
mean you could roughly get a score on an average with-
out spending too much time on the scoring but concen-
trate on what they’re saying, and concentrate on quality
discussion which presumably we also want to do.’’
(Regional Director 5, QF meeting, May 2009).
The ensuing discussion did not focus on what pro-
gramme offices were 'saying', or on how to 'concentrate
on quality discussion.’ Rather, debate focused on whether
the ‘average’ was a legitimate way to determine element
scores, and whether the most important indicator in each
element should be designated as the 'lead' indicator for
the purposes of scoring. This reveals how debate and dis-
agreement was focused exclusively on the mechanics of
the scoring process, rather than providing a forum for the
consideration of the different substantive issues in play.
Despite a protracted discussion, at the meeting's end there
was no established process, except for general agreement
that an overall element score ''won't be based on an arith-
metic average of KPI results for that element.''[26] Notwith-
standing this guidance, virtually all changes to the QF
resulting from this criticism privileged consistency in
scoring over that of country uniqueness. While more consis-
tent scores were (arguably) likely, the initial compromise
appeared tenuous and a counter-criticism developed around
the lack of inspiration evident in the QF.
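To make the two scoring rules at stake in this meeting concrete, the sketch below (ours, not VSO's) contrasts an arithmetic-average rule with a 'lead indicator' rule for deriving an element score on the 1-4 scale; the indicator labels echo the QF numbering, but the scores themselves are hypothetical.

# Illustrative only: contrasting the two element-scoring rules debated at the
# QF meeting. Indicator labels follow the QF numbering; the scores are invented.
indicator_scores = {"6.1": 3, "6.2": 2, "6.3": 4, "6.4": 3}

def element_score_average(scores):
    # Arithmetic average of indicator scores, rounded back onto the 1-4 scale.
    return round(sum(scores.values()) / len(scores))

def element_score_lead(scores, lead):
    # Element score taken directly from a designated 'lead' indicator.
    return scores[lead]

print(element_score_average(indicator_scores))      # 3
print(element_score_lead(indicator_scores, "6.3"))  # 4

Either rule produces a single number per element; what the meeting could not settle was which rule, if any, respected both consistency and country uniqueness.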
But you’re not inspiring!
The focus on consistency and precision in the changes
to the scoring process meant that the use of indicators to
inspire was given minimal attention. Concerns were ex-
pressed in a QF meeting in March 2009:
Regional Director 3: The messages that we’re giving to
programs at the moment about thinking creatively,
being ambitious, being innovative and so on, are not
necessarily captured in this element, in this thing here
[points at the QF document]. . .I think this is really good
for telling us where we’re at and measuring what we’re
looking at measuring but in terms of really looking to
shift and change, I just wonder how we’re going to do
that, and where that’s captured.
PLA[27] Staff 1: I think there's a bit that's still missing in
the quality framework because it’s become a set of indi-
cators, so the bit that I think is missing is that we don’t
really have anything about culture.
Regional Director 3: Yeah, that’s what it is, yeah
[enthusiastically].
PLA Staff 1: If people fulfil all these indicators. . .that
might not be enough to achieve what we're really look-
ing for, you know. . .we've got fixed on the elements but
there’s something behind it all that we haven’t quite
nailed. . .
Here, a strong criticism of the QF is made by likening it
to a ‘set of indicators’, with such a description generally
considered to be a damning indictment of any evaluation
practice at VSO. Furthermore, concern is expressed that
indicators cannot capture what is most valued, that is,
the desire to ‘shift and change’, ‘achieve what we’re really
looking for’ and ‘culture’ are ‘missing’ in the QF. This con-
cern was coupled with critical feedback on other changes
made to the QF, for example, the emphasis on standardiza-
tion meant that the indicators no longer captured perfor-
mance 'accurately.' Reflecting general concern with the
changes, this statement appeared in one country pro-
gramme’s QF report:
‘‘The QF report has grown exponentially this year, and
the indicators have changed and were only issued a
month before the report is due. . .Good practice in mon-
itoring and evaluation is to collect evidence and learn-
ing on a day-to-day basis, which is difficult to do if
the ground keeps shifting under one's feet.''[28]
Increasingly, discussions of the QF were focused on crit-
icisms and counter-criticisms over the specific details of
the scoring process. A regional director summarized the
state-of-play after completion of the QF for the second
year:
‘‘People can see that we’ve tried to make it a little bit
more objective in the way that it’s done, but I am get-
ting quite a lot of critical feedback that the quality
framework is so big, so many indicators, stuff being sent
really late. . .the whole thing is just a quantitative scor-
ing tool and it’s not about learning in any way, shape or
form. . .so I am getting quite critical feedback’’ (Regional
Director 5, QF meeting, May 2009).
In this quote, the focus on indicators and scoring pro-
cesses is viewed as stifling opportunities for learning. Thus,
an initial compromise between standardization through
scoring and recognition of country uniqueness had fal-
tered. The initial praise for the QF had dissipated and
was replaced by critical feedback, particularly from coun-
try directors who felt that the push for consistency had
moved too far such that the QF was no longer about learn-
ing and instead was labelled as a ‘quantitative scoring tool’,
a severe condemnation at VSO. We also see that the de-
bates about indicators, performance ranges, or scoring
methodologies were increasingly focused on the QF itself,
often in the context of long meetings with little productive
output. The previously fruitful discussions about how to
improve quality or make compromises between differing
values were almost non-existent. These developments
were also evident in initial debates about how to use the
QF to improve programme office performance.

[26] QF Guidance document, 2009.
[27] The PLA (Programme Learning and Advocacy) unit was a team within
VSO whose main role was to support programme offices in learning from
their own work and sharing good practice with other programme offices.
[28] QF Country Report, 2009.

Debate 2: How to improve quality?

Given the considerable effort that had gone into the
development of the QF, there were high hopes that it
would lead to improved performance of programme offi-
ces. On the one hand, there was a strong desire to improve
through 'learning', whereby the QF would help identify
examples of innovative practice that could then be shared
amongst programme offices. Concurrently, there was a be-
lief that the QF could be used to generate a sense of 'com-
petition' amongst programme offices, which would lead to
increased motivation and thus improved performance.
These different positions on the avenues to improved per-
formance presented many obstacles to enacting an accept-
able compromise; obstacles that, at first, proved difficult to
overcome.

Although not stated explicitly in QF documentation, the
engendering of a sense of competition emerged through
the way in which the results were distributed to country
directors via email:
Country Director 9: ‘‘We were sent back a kind of
world-wide kind of scoring sheet and obviously all that
that had was a series of numbers. Sort of 1, 2, 3 filled
with red and green and orange. Although to be honest
we came second in the world. . .I feel quite sorry for
the countries that have scored quite low because I really
don’t think it’s a valid scoring system. . .but for me it
was quite handy to be able to say this and say ‘‘maybe
next year- ?rst’’ and all the rest.’’
‘‘How do you know you are second in the world? Was
there some kind of ranking?’’
Country Director 9: ‘‘Yeah, they sent us a ranking. They
sent us a worldwide ranking thing afterwards.’’
‘‘And so countries were rank ordered from 1 to 34?’’
Country Director 9: ‘‘Yeah it was a summary of results
with ranking. And you could look against different
scores so you could see that. . .globally you came second
on something and third on something else but then
there was an overall sort of score.’’
(Interview, Country Director 9, November 2008)
Despite reference to an ‘overall sort of score’, the
spreadsheet did not contain a summary score for each
country and countries were not ranked, but listed in alpha-
betical order. As such, Country Director 9’s country only
appeared ‘second in the world’ by virtue of its name begin-
ning with a letter near the beginning of the alphabet. Other
similar stories emerged of countries with names that
started with letters towards the end of the alphabet believ-
ing that they had performed poorly. These examples were
the source of much joking at IPG staff meetings in London,
with suggestions that ‘Albania’ will be top of the league ta-
ble next year, and that the solution was to put countries in
reverse alphabetical order. This light-heartedness about
rankings belied an appreciation of how aware country
directors were of the competitive mantra that lay behind
the spreadsheet’s distribution, and how this was viewed
as stifling learning opportunities, as one country director
commented:
‘‘When this whole [QF] thing was being started, some
of the conversations were framed around what are
the indicators of quality in a programme office that
is doing well. How do we assess whether Ghana is
better than Zimbabwe or vice versa? So I think the
framing of the conversations around that time kind
of planted the seeds of a league table. . .as long as
people continue to see it [the QF] as a league table
then we might see each other as competitors and
therefore everybody [will keep] what he or she is
doing very close to their chest’’ (Interview, Country
Director 6, November 2008).
Importantly, several features of the spreadsheet served
to reinforce the ‘league table mentality’ noted by the coun-
try director. First, it ‘was a series of numbers’ and did not
contain any of the narrative discussion. Second, only the
overall element scores were displayed without the specific
indicator scores. Third, each element score of 1–4 was as-
signed a colour to make differences between scores in
the spreadsheet visually distinct, with ‘Low’ performers
particularly prominent as scores of ‘1’ were assigned the
colour red. This led regional directors to question the use
of scores to promote learning, suggesting that it was their
role, rather than that of a spreadsheet, to direct country
directors to examples of good practice. More fundamen-
tally, however, not only was comparison of countries in a
spreadsheet not considered helpful for sharing best prac-
tice, but the uniqueness of each country also made such
comparisons ‘unfair’:
Regional Director 1: ‘‘I don’t see the value of knowing
that, for example, on maybe even six of the twelve cri-
teria, West Africa comes out worse than say South-East
Asia because my interpretation instinctively would be
what are the cultural, educational, historical back-
ground, you know, accumulation of circumstances in
South-East Asia that means that they’re in a completely
different environment.’’
Regional Director 4: ‘‘It [a league table] makes it [com-
parisons] into a competition essentially.’’
PLA Staff 1: ‘‘Yeah, but it’s an unfair competition. . .It’s
like getting Marks & Spencers compared with Pete’s
Café across the road where you’ve got totally different
contexts.''[29]
(QF meeting, March 2009)
Claims of unfairness speak to tensions arising because
the principle that country uniqueness is important was
not being respected. The process of reducing perfor-
mance on an element to a standardized metric was seen
by country directors to have ensured that the contextual
information required to understand these scores had
been stripped away. Even when this information was
present in the narrative section, it did not accompany
the scores in the spreadsheet and was thus seemingly ig-
nored (or considered too difficult to take into account).
In this way, the ideals embodied in different modes of
evaluation did not co-exist as the values of competition
had in effect ‘swamped’ the values of learning and coun-
try uniqueness.
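The presentational choices described above can be sketched as follows. The country names, the scores and the colour mapping for scores other than 1 are invented by us; the sketch simply shows a sheet that contains element scores alone, lists countries alphabetically, and nevertheless reads like a league table.

# Illustrative only: invented countries and scores. Element scores alone are
# shown (no indicator scores, no narrative), countries appear alphabetically
# rather than ranked, and a score of 1 is flagged red; the other colours are
# our assumption.
element_scores = {
    "Albania": [4, 2, 3],
    "Ghana": [1, 3, 2],
    "Zimbabwe": [3, 4, 1],
}

def colour(score):
    return "red" if score == 1 else "green" if score == 4 else "orange"

for country in sorted(element_scores):  # alphabetical order, not a ranking
    row = ", ".join(f"{s} ({colour(s)})" for s in element_scores[country])
    print(f"{country}: {row}")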
This situation was also problematic because improv-
ing quality through learning was an explicit ambition
of the QF, with one of its stated purposes ‘‘to help iden-
tify strengths and areas for improvement across pro-
grammes so that steps can be taken to grow
programme quality and foster learning’’ (emphasis in
original).[30] Country directors, however, felt that there
was no formal mechanism for sharing learning between
programme offices. Consequently, the view amongst coun-
try directors was that learning from the QF had been gi-
ven a relatively low priority, and its potential for sharing
good practice went unfulfilled. Seeking to reorder these
priorities was an important debate in preparing the QF
for its second year.
[29] Marks and Spencers is a large UK department store whose annual
revenue in the financial year 2009/2010 was £9.5 billion. Pete's Café was a
small café immediately opposite the VSO building in London.
[30] QF Guidance documents, 2008 and 2009.
Reordering priorities of learning and competition
Over a series of three meetings, IPG staff debated rank-
ing programme offices using QF data. This excerpt is from
the first meeting in March 2009:
Regional Director 3: ‘‘You don’t have to send it [the
scores] out in a table to everybody but you can go and
look and say ‘right, ok, this country over here does
really good volunteer engagement, why don’t we
arrange some sort of visit or some sort of support from
that’, that would absolutely make sense but to send
something out and say look for the [countries that have
scored] fours and talk to them doesn’t.’’
Regional Director 2: [. . .] ‘‘I think we all agree there’s a
reason to link people up according to where there is
good practice, or good performance and there are ways
to do that that aren’t about a published table. So, who’s
for a published table, who’s against a published table,
who’s for a published table at this stage?’’
[General laughter]
Regional Director 4: ‘‘Obviously we [regional directors]
are discouraging’’.
Director, IPG: ''I would reflect on it a bit further. . .I'm
not sure how helpful it is when the table only had the
element scoring and it’s very subjective. . .so I think
work out if anyone else found it useful. . .I’m thinking
all of this is worth picking up again in that next
meeting.’’
Here, regional directors were generally against the idea
of a spreadsheet being the vehicle for identifying good
practice, instead arguing that they should take an active
role in linking up programme of?ces to help improve per-
formance. In particular, there was a strong argument
against using scores of four (the highest score possible on
an element) to identify what is considered good practice.
The IPG Director believed that using the QF to create com-
petition was sound, but could see weaknesses in the
mechanics of the league table (‘only. . .element scoring’,
‘very subjective’). As such, he was not yet convinced of
the league table’s apparent inappropriateness and stalled
any decision to the next meeting.
Convened in May 2009, the next QF meeting was fo-
cused on convincing the IPG Director (not in attendance)
that a ranking was not appropriate:
Regional Director 3: ‘‘I’m just wondering what the rea-
son for having a league table is.’’
PLA Staff 1: ‘‘I think it’s the idea you publish informa-
tion and then people will be shamed, people will feel
they got a low performance, they will feel forced to
have to make improvement because it’s public.’’
PLA Staff 3: ‘‘There is a real danger of labelling them
[programme of?ces], isn’t there? That’s what’s really
horrible about this because someone then gets labelled
as being the of?ce that’s rubbish at volunteer engage-
ment or the one that’s great at such and such.’’
PLA Staff 1: ‘‘Yeah, yeah I agree, yeah. The reason I really
don’t like it, I don’t see how an organization’s [that’s]
about volunteering and is very personal how that. . .sort
of. . .philosophy could really fit with this [league table],
but the other thing is I think it will change the quality
framework from being a learning tool. . . my real fear
is if you publish the scores people get fixated on doing
well on particular indicators, which we’re now saying
aren’t good enough, rather than the spirit of trying to
actually improve. . .so I think it’s a combination of phi-
losophy in terms of what VSO is about but also, you
know, keeping the quality framework as something that
is a learning tool.’’
In contrast to the arguments used in the first meeting,
this criticism was more fundamental, in that it directly
criticized the very principles upon which the spreadsheet
and (apparent) ranking system was based. Here, the use
of competition to ‘label’ and ‘shame’ programme of?ces
into improvements was viewed as ‘horrible’ and the league
table considered incompatible with the purpose of the QF
as a learning tool. Finally, and perhaps most telling, is that
ranking programme offices was viewed as being against
the ideals of ‘volunteering’ and personal engagement that
are considered critical to VSO’s philosophy, as expressed
above by PLA Staff 1. In June 2009, a third and final meet-
ing to debate the league table issue was convened. The
above arguments were used in this QF meeting with the
IPG Director, and a compromise agreed:
Director, IPG: ‘‘All right, let’s do it a different way. . .let’s
ask each element leader to highlight confidentially
where they think there are real concerns.’’
PLA Staff 2: ''So, ok, that's fine, then what? What hap-
pens to that information?’’
Director, IPG: ‘‘So basically the element leaders are
informing discussions about where we might priori-
tize. . .so the top three is highlighting good practice, giv-
ing an indication to countries across the world where
they might want to talk to in terms of good practice,
and the bottom one is just confidential for management
purposes.’’
Importantly, the earlier appeals by the IPG Director to
improve the mechanics of the league table were no match
for arguments undermining the very principle upon which
it was based. As such, the compromise between competi-
tion and learning, between comparisons and sharing good
practice, was resolved by abolishing the league table and
replacing it with a new practice of differential disclosure.
That is, the identity of good performers would be made
public whereas the identity of poor performers would be
kept confidential. Disclosure of good performers would al-
low the sharing of good practice between programme offi-
ces, and disclosure of poor performers to the IPG Director
would allow management action to be taken but without
'naming and shaming' programme offices in the process.
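A minimal sketch of this differential disclosure arrangement, assuming invented offices and scores (in practice element leaders exercised judgement rather than mechanically ranking scores): the strongest performers on an element are named so that others can approach them, while the weakest is reported only to the IPG Director.

# Illustrative only: offices and scores are invented. Good performers on an
# element are disclosed to all programme offices; the poorest performer is
# reported confidentially for management purposes.
element = "Volunteer engagement and support"
scores = {"Office A": 4, "Office B": 2, "Office C": 3, "Office D": 1}

ranked = sorted(scores, key=scores.get, reverse=True)
good_practice = ranked[:3]   # highlighted to all programme offices
confidential = ranked[-1]    # seen only by the IPG Director

print(f"{element} - talk to: {', '.join(good_practice)}")
print(f"Confidential concern: {confidential}")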
It is here that debates about the league table facilitated
productive discussion between those who viewed ‘compe-
tition’ as the route to improvement versus those who saw
learning as the way to increase quality. Unlike the debates
over consistency in scoring, discussion was not focused on
the QF per se, but was connected to broader principles,
such as uniqueness and innovation, competition and a vol-
unteering ethos. In this way, principled argument led to a
compromise between different evaluative principles, de-
spite strong enthusiasm for the spreadsheet and ranking
system to remain.
Epilogue
Towards the end of the field study, a review of existing
‘‘Quality Initiatives at VSO’’ was conducted, including the
QF. While analysis of the QF was generally favourable,
numerous ‘‘areas of improvement’’ were suggested:
Its holistic and coherent nature allows people to think
more broadly and reflect on the progress of the whole
programme. . .The process of doing the report makes
people take stock, consider areas of improvement and
make action plans accordingly. . . It seems that the QF
is not referred to or used as often as people would
like. . .The numbers are not useful because they are too
mechanistic, yet subjective and inconsistent across
POs. . . [Self-assessment] is a great way for the PO to
take stock and think about their performance and how
to make improvements. But many people feel that this
needs some kind of external support and verifica-
tion. . . although the design of the framework is quick
and simple. . .It has become too long and its design
means that indicators are 'set in stone' to a certain
degree in order to make comparisons from one year to
the next.[31]
This analysis reveals that the QF enabled broad thinking
but was not used enough; self-assessment helped to ‘take
stock' but needed external verification; and the QF was
simple, yet too long. We see that positive features of the
QF that were closely aligned to one mode of evaluation
inevitably gave rise to suggestions for improvement that
sought to address the concerns of those with different
evaluative principles. The evaluation highlights how the
compromises being made in the design and operation of
the QF were not ‘resolved’ but formed a series of temporary
settlements (cf., Gehman et al., 2013; Kaplan & Murray,
2010; Stark, 2009) between different evaluative principles.
In this way, the process of establishing and maintaining
compromises between different modes of evaluation can
be seen as a dynamic and enduring feature of a compro-
mising account.

[31] Review of Quality Initiatives at VSO document, 2010.
Discussion
Taking tensions between different logics and values as
the starting point for our analysis, this study has focused
directly on how accounting is implicated in compromising
between different evaluative principles and the way in
which such compromise can be productive or unproduc-
tive. Accounts are particularly important in settings of con-
flicting values because they are sites where multiple
modes of evaluation all potentially operate at once (Stark,
2009). Our field study shows how VSO's attempts to mea-
sure the performance of its programme offices brought to-
gether differing modes of evaluation, one based primarily
on ‘Learning and Uniqueness’ and the other based primar-
ily on ‘Consistency and Competition’, where each mode of
evaluation was distinguished according to its purpose, the
desirable attributes of a good evaluation and subsequently
the desirable attributes of a good account (see Table 2).
Making choices about indicators, types of scoring pro-
cesses, the identification of good and poor performers,
and different methods of data analysis, created sites for de-
bate between individuals and groups who espoused these
different evaluative principles (Stark, 2009; Jay, 2013; Geh-
man et al., 2013; Moor & Lury, 2011; Denis et al., 2007). In
this way, our analysis reveals how an account itself can act
as an agent in the process of compromise between differ-
ent evaluative principles. A compromising account is thus
both the process of, and at particular moments the specific
outcome of, a temporary settlement between different
modes of evaluation. Analogous to Chua’s (2007) discus-
sion of strategizing and accounting, this draws attention
to a compromising account as both a noun, i.e., the account
itself that is produced in some material form (e.g., a bal-
anced scorecard, a financial report), and as a verb, i.e., the
processes of compromise that lead to and follow on from
the physical production of an account.
Our study shows that differences in the design and
operation of accounting practices can affect the extent of
compromise between different evaluative principles, and
whether such compromise is productive or unproductive.
In particular, our findings reveal that the potential for ac-
counts to provide a fertile arena for productive debate is
related to three important processes: (1) imperfection –
the extent to which the design and operation of accounting
practices represents a ‘give and take’ between different
evaluative principles; (2) concurrent visibility – the way
in which desirable attributes of accounts are made visible
in the design and/or operation of the accounting practice;
and (3) the extent to which the discussions concerning po-
tential problems with the accounting practice are focused
on underlying evaluative principles (vs. mechanics/techni-
cal considerations). In the discussion below we elaborate
the characteristics of these processes, and then conclude
the paper by outlining the implications for future research
and highlighting the insights for practice.
‘Imperfection’ and the potential for ‘productive friction’
In organizational settings with multiple and potentially
competing evaluative principles, the development of com-
promises reflects a temporary agreement (Gehman et al.,
2013; Kaplan & Murray, 2010; Stark, 2009). In this setting,
rather than reaching closure, the development and opera-
tion of compromising accounts entails on-going adjust-
ment (cf., Gehman et al., 2013). This was clearly evident
in VSO’s QF, where it was subject to on-going criticism
and refinement and was 'loved by no-one.' We suggest that
it is the ‘imperfect’ nature of the QF that was pivotal to its
continued existence as a compromising account. We see
that the constant shifting and rebalancing in the QF’s de-
sign and operation enabled the co-existence, albeit often
temporary, of different modes of evaluation. Changes priv-
ileging one mode of evaluation, such as a focus on a more
rigorous and consistent scoring process, were accompanied
by changes that shifted the emphasis back to another
mode of evaluation, such as ensuring the analysis of QF
data included a pairing of numbers with narrative. It was
this ‘give and take’ between different modes that helped
to resist pressures for recourse to a single and therefore
ultimately dominant mode of evaluation (cf., Thévenot,
2001), and enabled productive friction to arise from the
coming together of different evaluative principles. In this
way, compromises involving multiple evaluative principles
are inherently ‘imperfect’ when enacted in practice (cf.
Annisette & Richardson, 2011).
We see our findings in this regard as having parallels
with recent literature on the ‘imperfect’ nature of perfor-
mance measures (see, for example, Andon et al., 2007;
Dambrin & Robson, 2011; Jordan & Messner, 2012).
These studies often stress the importance of organiza-
tional actors ‘making do’ with the existing performance
measurement system, despite its perceived imperfec-
tions. For example, Bürkland, Mouritsen, and Loova
(2010) show how actors compensate for ‘imperfect’ per-
formance measures by using other information, while
Jordan and Messner (2012) find that actors respond to
incomplete performance measures in two ways: by try-
ing to repair them or by distancing themselves from
the measures. However, in our study we find that rather
than organizational actors merely ‘making do’ with
imperfect performance measures, it was these ‘imperfec-
tions’ that helped to provide a fertile arena for produc-
tive dialogue and discussion between individuals and
groups with differing values (cf., Denis et al., 2007; Geh-
man et al., 2013; Jay, 2013; Moor & Lury, 2011; Stark,
2009). In this way accounts can play a role in surfacing
latent paradoxes and providing space to work out ways
to combine different evaluative principles (Jay, 2013).
The struggles between different evaluative criteria can
prompt those involved to engage in deliberate consider-
ation about the merits of existing practices (Gehman
et al., 2013). Here we see the importance of the accom-
modation of different perspectives and recognition by ac-
tors that the proposed solution (in our case the QF),
although not perfect, provides a fitting answer to a prob-
lem of common interest (cf. Huault & Rainelli-Weiss,
2011; Samiolo, 2012).
‘Imperfect’ accounts, such as VSO’s QF, are therefore
not just about ‘making do’, but can create opportunities
for bringing together competing value systems and, thus,
the potential for what Stark (2009: 19) terms ‘productive
friction.’ This was most evident in the league table de-
bates, where discussions between actors with different
evaluative principles led to changes in the use of spread-
sheets and element summaries that recognized a reor-
dering of the priorities between learning and
competition. Here we see the role of compromising ac-
counts as creating a form of organized dissonance, that
is, the tension that can result from the combination of
two (at least partially) inconsistent modes of evaluation.
A compromising account can thus be a vehicle through
which dialogue, debate and productive friction is pro-
duced, where it is the discussion that can result from
having to compromise on the design and operation of
an account that can be productive.
Concurrent visibility
But how does a compromising account enable organ-
ised dissonance? Our study indicates that an important
feature of a compromising account is that of ‘concurrent
visibility.’ To facilitate organized dissonance it was critical
that the QF made visible the features of an account that
were important to different groups. We use the term ‘visi-
ble’ in a broad sense to refer to how the design and opera-
tion of a compromising account reveals the attributes of
accounts that are important to organizational actors with
different evaluative principles. For example, in the physical
format of the QF, indicators were accompanied by narra-
tive boxes, which enabled compromise between the evalu-
ative principles of standardization and country
uniqueness. In addition, the differential disclosure of good
and poor performing countries (post the league table) facil-
itated compromise between the evaluative principles of
learning and competition. The concurrent use of these dif-
ferent features gave visibility to the importance of different
modes of evaluation. This resonates with Nahapiet (1988),
where the resource allocation formula helped to make val-
ues more visible and tangible and prompted explicit con-
sideration of three fundamental organizational dilemmas.
More generally, it resonates with the way in which instru-
ments like accounting and performance measurement sys-
tems are well suited to rendering visible the multiplicity of
criteria of evaluation (Lamont, 2012).
We suggest that where the co-existence of different
evaluative principles is an on-going feature of organiza-
tions, organizational actors are likely to be particularly
concerned that their fundamental principles may not be
respected and thus come to be dominated by others (cf.,
Denis et al., 2007). It is here that ‘concurrent visibility’ in
a compromising account can provide confirmation and
reassurance that a particular mode of evaluation is, indeed,
recognized and respected, thus making productive debate
more likely. The visibility of different evaluative principles
in the account also serves to crystallize the compromise
between them in a material form (cf., Denis et al., 2007).
The importance of concurrent visibility is evident by
contrasting the views of the QF at the end of the first and
second years of operation. The features of the QF during
its first year of operation (narrative, local knowledge,
judgement, common elements and indicators) gave expli-
cit recognition to different evaluative principles and thus
helped to develop a compromise between values of stan-
dardization and country uniqueness. In contrast, changes
to make the QF more consistent removed many of the fea-
tures that recognized country uniqueness as an important
evaluative principle. Subsequently, the initial praise for the
QF had dissipated and was replaced by ‘endless’ disagree-
ments and critical feedback, which resulted in a situation
where actors were ‘stuck’ between different evaluative
principles (Jay, 2013).
Our study also reveals, however, that there are limits to
the way in which concurrent visibility can facilitate orga-
nized dissonance, particularly where the strategy is ‘addi-
tive.’ That is, to address the evaluative principles
favoured by different organizational actors, the account
can simply encompass more and more of those desired fea-
tures. Over time, however, the account is likely to become
cumbersome and unwieldy, as we saw with the QF when,
at the end of its second year of operation, it was described
as ‘so big, so many indicators.’ As such, without careful
attention, concurrent visibility could potentially be direc-
ted towards the appeasement of different modes of evalu-
ation rather than serving as a necessary entry point for
productive discussion over the merits of different evalua-
tive principles.
Criticisms of accounts and breakdowns in compromise
Our study also highlights an important distinction be-
tween the types of responses that can emerge in situations
where compromises break down and accounting practices
are viewed as ‘not working.’ One criticism of the QF con-
cerned the presentation of scores in a spreadsheet and the
subsequent illusion of a league table ranking of countries
according to their overall performance. Such a practice
was viewed as privileging the value of ‘competition’
above that of ‘learning’ and was thus primarily a debate
about the principles and values underlying the use and
operation of the league table (cf., Gehman et al., 2013).
Here, there was a passionate response from those actors
who felt that a fundamental principle was not being re-
spected (Denis et al., 2007), particularly that the league
table ignored their belief that the performance and hence
value of country programmes was ‘incommensurable’
(Espeland & Stevens, 1998). This debate was not about
how to ‘?x’ the league table per se but focused on
whether the league table itself was an appropriate prac-
tice – revealing a situation where actors reflect at a dis-
tance on the values underlying the existing practice
(Gehman et al., 2013; Sandberg & Tsoukas, 2011). This
helped the actors to confront the latent paradoxes (Jay,
2013) evident in the use of a league table and facilitated
‘productive friction’ between those who viewed ‘competi-
tion’ as the route to improvement versus those who saw
learning as the way to increase quality. As a result, a
new practice was developed (i.e., differential disclosure
of good and bad performers) that helped to integrate dif-
ferent evaluative principles in a more substantive way
(Jay, 2013; Stark, 2009).
Another criticism of the QF was directed at its lack of
consistency and thus inability to enable meaningful com-
parisons of country performance. This was primarily a
criticism of the implementation of the QF’s scoring pro-
cess, where discussion focused on what was problematic
about the current practice and how to fix it (cf., Gehman
et al., 2013; Sandberg & Tsoukas, 2011) and not on
whether scoring itself was an issue of concern. As such,
subsequent changes to the QF focused on removing fea-
tures of the existing scoring process that were seen not
to align with the value of consistency, and adding fea-
tures viewed as promoting consistency. Such changes
clearly shifted the scoring process of the QF in favour
of those organizational actors who held consistency in
scoring as an essential feature of an evaluation process.
Rather than integrating different perspectives, however,
this response can be characterised by oscillation and
‘stuckness’ (Jay, 2013) between the evaluative principles
of consistency and country uniqueness. Furthermore, as
these debates were primarily focused on technicalities,
they took up valuable meeting time that effectively pre-
vented meaningful engagement (i.e., 'productive fric-
tion’) with the underlying principles. This resonates
with Stark’s (2009) warning that disputes over the
mechanics of existing practices can limit effective
changes and result in endless disagreements where noth-
ing is accomplished.
Conclusion
Our study has highlighted the importance of examining
the role of accounting in facilitating (or not) compromises
in situations of multiple evaluative principles. Our results
indicate that much can be learned by focusing on how ac-
counts can potentially bring together differing (and often
competing) evaluative principles, where such encounters
can generate productive friction, or lead to the refinement
of accounting practices and ‘endless’ debate and discussion
over technicalities and the mechanics of the account. We
view accounts as central to processes of compromise in
organizations because it is in discussions over the design
and operation of accounts that the worth of things is fre-
quently contested by organizational actors. Drawing on
Stark’s (2009) concept of organizing dissonance, our study
shows that there is much scope for future research to
examine how accounts can create sites that bring together
(or indeed push apart) organizational actors with different
evaluative principles, and the ways in which this ‘coming
together’ can be potentially constructive and/or
destructive.
Our analysis also has implications for the ways in which
performance measures and other accounting information
can be mobilized by managers and practitioners as a re-
source for action (cf., Ahrens & Chapman, 2007; Hall,
2010). In particular, our results indicate that ‘imperfect’
performance measures can actually be helpful, that is, they
can be used by practitioners to generate productive dia-
logue, despite, or, as our analysis shows, because of, their
perceived imperfections. This resonates with Stark
(2009), who argues that entrepreneurship is the ability to
keep multiple evaluative principles in play and exploit
the friction that results from their interplay. Here, the
‘imperfect’ nature of compromising accounts can enable
skilful organizational actors to keep multiple evaluative
principles in play. In contrast, a focus on the continual
refinement of accounts and a quest for 'perfection' can lead
to the domination of a single evaluative principle, ‘distanc-
ing’ organizational actors who hold different evaluative
principles, and limiting opportunities for productive
friction.
A further implication of our study is to promote further
research on how performance measurement systems, and
accounting practices more broadly, are actually developed
in organizations (cf., Wouters & Wilderom, 2008). In par-
ticular, we analyzed the different responses that can occur
when compromises break down and how they relate to the
potential for productive friction. More broadly, it is unclear
how organizational actors negotiate the development of
performance indicators and what types of responses and
arguments prove (un)successful in these encounters. This
could prove a fruitful area for future research.
We conclude by outlining the practical implications of
our study, which centre on imperfection and concurrent
visibility. Although practitioners are no doubt aware of
the need to ‘make do’ with the perceived inadequacies of
performance measures, our study indicates that the pro-
ductive discourse arising from performance measurement
is perhaps more important than ensuring that such mea-
sures (or accounts more generally) are ‘complete.’ Our
analysis of concurrent visibility indicates that practitioners
should ensure that features of accounts that are of funda-
mental importance to particular groups are explicitly rec-
ognized, whether in the material content of the account,
the associated scoring and evaluation processes, or in its
use in wider organizational practices.
Acknowledgements
We would like to thank David Brown, Chris Chapman,
Paul Collier, Silvia Jordan, Martin Messner, Yuval Millo,
Brendan O’Dwyer, Alan Richardson, Keith Robson, Wim
Van der Stede, seminar participants at Cardiff Business
School, Deakin University, HEC Paris, La Trobe University,
London School of Economics and Political Science, Turku
School of Economics, and University of Technology Sydney,
and conference participants at the Conference on New
Directions in Management Accounting 2010 and Manage-
ment Accounting as Social and Organizational Practice
workshop 2011 for their helpful comments and sugges-
tions. The support of CIMA General Charitable Trust is
gratefully acknowledged.
Appendix A: Strategic resource allocation tool
Assessment of Programme effectiveness
Country:
The higher the overall percentage a Programme Office re-
ceives in this tool, the more ‘‘effective’’ it will be perceived
to be based on this measure.
Section A. Focus on disadvantage (48% of total score)

Measure – % of total score
1. HDI – 17
2. Percentage of more disadvantaged people being reached through implementation of CSP aim – 10
3. Scored analysis of how well strategies are working in addressing the causes of disadvantage – 10
4. Disadvantage Focus in Current and Planned Placements – 11
Section B. Outputs of country programme (27% of total score)

Measure – % of total score
5. What % of placements in the last 2 planning years have fully or mostly met their objectives (not including early return reports)? – 13
6. What was the Early Return rate (excluding medical and compassionate) over the last two planning years? – 4
7. What percentage of the ACP target of fully documented requests (i.e. with Placement Descriptions) was submitted on time over the last 3 planning years? – 5
8. What percentage of the ACP target number of volunteers was in country on 31/3/01, 31/3/00 and 31/3/99? – 5
Section C strategic approach (25% of total)
Note that the statements attached to each score are for
guidance and are not absolute statements: we recognise
that with some programmes no one statement will accu-
rately describe the programme. The RPM must have a clear
idea of the rationale behind the scoring, in order to ensure
transparency and to allow comparison between countries.
All of your scores should be based on an analysis of the cur-
rent situation – i.e. not future strategy or placements.
9. Strategic approach based on programme at the current time – % of total score
(a) Placements working at different levels (micro/macro) towards strategic aims + planned links between them – 4
(b) Critical appraisal of placements with clear rationale linking placement to strategic aim + planned exit strategy – 4
(c) Strategic and linked implementation of cross cutting themes – 2
(d) In-country advocacy by the programme office – 2
(e) PO proactive in promoting increased development understanding amongst volunteers – 2
(f) Openness and commitment to learning – 5
(g) Genuine partnership relationship with employers and other development actors – 4
(h) Types of placements used most appropriate to needs of disadvantaged groups and based on strategic reasoning – 2
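To illustrate how the weights in this tool combine into the overall percentage, the sketch below (ours) sums weighted contributions across the measures; the 'achieved' fractions are invented, since the appendix does not specify how raw performance converts into points for each measure.

# Illustrative only: how the SRA weights combine. The achieved fractions are
# invented; the tool's detailed scoring rules for each measure are not shown here.
weights = {
    "HDI": 17, "Disadvantaged people reached": 10, "Strategy analysis": 10,
    "Disadvantage focus in placements": 11,                         # Section A = 48
    "Placement objectives met": 13, "Early return rate": 4,
    "Documented requests on time": 5, "Volunteers in country": 5,   # Section B = 27
    "Strategic approach (items a-h)": 25,                           # Section C = 25
}
achieved = {measure: 0.6 for measure in weights}  # assume 60% of points on every measure

overall = sum(weights[m] * achieved[m] for m in weights)
print(f"Overall effectiveness: {overall:.0f}%")  # 60% under this assumption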
Appendix B

QF summary sheet 2008.

Name of Programme: [Please enter name of country here]

Columns: Element – Indicator – Indicator result – Element result. In the sheet reproduced here every indicator result and element result is shown as 4.

Indicators:
A.1 Annual Progress in PAP objectives is achieved
A.2 Programmes and partners monitor and review progress in capacity development and/or service delivery
B.1 Positive changes for target groups of partners are achieved
B.2 Programmes and partners are able to understand, monitor and review changes for target groups
1.1 PO is responsive to changes in social, economic and political context
1.2 PO is consulted by peer agencies and/or government bodies as a credible development agency within its field of operation
1.3 Programmes working at different levels (e.g. national, provincial, districts, grass-roots) towards strategic aims
2.1 The contribution of National Volunteering (NV) to programme delivery has been maximised
2.2 The contribution of a range of different interventions to programme delivery has been maximised
2.3 Development awareness amongst volunteers and the wider community has been maximised
2.4 Opportunities to develop international resource partnerships are fully explored and developed
2.5 The contribution of advocacy to programme delivery has been maximised
3.1 LTV/YFD & STV arrivals during year against reforecast plans
3.2 Firm and documented placement delivery against reforecast plans
3.3 Quality of placement documentation
3.4 PIP milestones successfully completed across all programmes
4.1 The PO has an inclusion statement and standards that are shared among all staff and that new staff sign. This is for both programme work and how the programme office is run
4.2 Partner organisations include excluded groups in their work and as part of their target group
5.1 Number of PIPs updated and signed off annually as a result of PARs in line with guidance
5.2 All programmes are reviewed annually in line with guidance
6.1 Portfolio of partners in place relevant to PAP and CSP objectives
6.2 Long-term (3-5 years) Partnership Plans are in place which include partnership objectives that are linked to the PAP objectives
6.3 Partners are actively involved in programme development and review
6.4 Partnerships are reviewed annually to assess progress towards Partnership and PAP objectives and quality of the relationship with VSO
7.1 Volunteer support baselines are being met by the Programme Office and partners are supported to manage volunteers
7.2 PO celebrates volunteer achievement, responds to volunteer problems effectively and encourages the development of effective and accountable volunteer groups
7.3 Volunteers are engaged in programme development
8.1 Value of proposals signed off by the CD/RPM against agreed quality criteria in Stage 2 of PMPG
8.2 Restricted income as % of PO Total Budget (including vol. recruitment costs)
8.3 Donor conditions for existing funding have been met (including financial, narrative and audit reports submitted on time and to the standard required by the donor) throughout the year
9.1 Percentage of total approved PO managed budget (restricted and unrestricted) budgeted on PC and VC costs in 2008/09 (Global average = 40%; Regional averages range from 32% to 57%)
9.2 Possible areas of saving against costs identified during budget setting process through innovation and creative thinking
9.3 Percentage of total revised PO managed budget (restricted and unrestricted) budgeted on staff costs in 2007/08
10.1 Annual programme office expenditure variance (restricted plus unrestricted) for 08/09 against budget adjusted for macro-forecast
10.2 Volunteer Unit Cost based on 08/09 budget
11.1 Performance management systems are being actively implemented
11.2 Evidence of major HR policies and systems being adhered to
12.1 Number of outstanding category A and B internal audit actions for PO action relating to legal compliance
12.2 Security Risk management plans signed off and implemented and tested according to country's main security risks (e.g. avian flu, security, natural disasters etc)

Elements:
Programme impact at beneficiary level
Programme delivery against plans
Staff management and support
Legal and policy compliance and risk management
Cost effectiveness
Financial management
Volunteer engagement and support
Programme funding
Planning and review
Partnership development and maintenance
Inclusion
Relevant and ambitious strategic plans are evolved in response to the development needs of the country's disadvantaged communities
Appropriate and innovative use of development interventions to deliver programme outcomes and impact
Programme outcomes at partner level

References
Ahrens, T., & Chapman, C. S. (2002). The structuration of legitimate
performance measures and management: Day to day contests of
accountability in a UK restaurant chain. Management Accounting
Research, 13, 151–171.
Ahrens, T., & Chapman, C. S. (2004). Accounting for flexibility and
efficiency: A field study of management control systems in a
restaurant chain. Contemporary Accounting Research, 21, 271–301.
Ahrens, T., & Chapman, C. S. (2006). Doing qualitative field research in
management accounting: Positioning data to contribute to theory.
Accounting, Organizations and Society, 31(8), 819–841.
Ahrens, T., & Chapman, C. S. (2007). Management accounting as practice.
Accounting, Organizations and Society, 32(1–2), 1–27.
Andon, P., Baxter, J., & Chua, W. F. (2007). Accounting change as relational
drifting: A field study of experiments with performance
measurement. Management Accounting Research, 18, 273–308.
Annisette, M., & Richardson, A. J. (2011). Justification and accounting:
Applying sociology of worth to accounting research. Accounting,
Auditing and Accountability Journal, 24, 229–249.
Annisette, M., & Trivedi, V. U. (2013). Globalisation, paradox
and the (un)making of identities: Immigrant Chartered
Accountants of India in Canada. Accounting, Organizations and
Society, 38(1), 1–29.
Baines, A., & Langfield-Smith, K. (2003). Antecedents to management
accounting change: A structural equation approach. Accounting,
Organizations and Society, 28(7–8), 675–698.
Biggard, N. W., & Beamish, T. D. (2003). The economic sociology of
conventions: Habit, custom, practice, and routine in market order.
Annual Review of Sociology, 29, 443–464.
Bird, D. (1998). Never the same again: A history of VSO. Cambridge:
Lutterworth Press.
Boltanski, L., & Thévenot, L. (1999). The sociology of critical capacity.
European Journal of Social Theory, 2(3), 359–377.
Boltanski, L., & Thévenot, L. (2006). On justification. The economies of worth
(C. Miller, Trans.). Princeton: Princeton University Press.
Briers, M., & Chua, W. F. (2001). The role of actor-networks and boundary
objects in management accounting change: A field study of an
implementation of activity-based costing. Accounting, Organizations
and Society, 26(3), 237–269.
Bürkland, S., Mouritsen, J., & Loova, R. (2010). Difficulties of translation:
Making action at a distance work in ERP system implementation.
Working paper.
Cavalluzzo, K. S., & Ittner, C. D. (2004). Implementing performance
measurement innovations: Evidence from government. Accounting,
Organizations and Society, 29(3–4), 243–267.
Chahed, Y. (2010). Reporting beyond the numbers: The reconfiguring of
accounting as economic narrative in accounting policy reform in the
UK. Working paper.
Chenhall, R. H., Hall, M., & Smith, D. (2010). Social capital and
management control systems: A case study of a non-government
organization. Accounting, Organizations and Society, 35(8), 737–756.
Chua, W. F. (2007). Accounting, measuring, reporting and strategizing –
Re-using verbs: A review essay. Accounting, Organizations and Society,
32(4–5), 487–494.
Cooper, D. J., Hinings, B., Greenwood, R., & Brown, J. L. (1996).
Sedimentation and transformation in organizational change: The
case of Canadian law firms. Organization Studies, 17, 623–647.
Dambrin, C., & Robson, K. (2011). Tracing performance in the
pharmaceutical industry: Ambivalence, opacity, and the
performativity of flawed measures. Accounting, Organizations and
Society, 36(7), 428–455.
Dart, J., & Davies, R. (2003). A dialogical, story-based evaluation tool: The
Most Significant Change technique. American Journal of Evaluation, 24,
137–155.
Davies, R., & Dart, J. (2005). The 'most significant change' (MSC) technique: A
guide to its use. Accessed 20.09.11.
Denis, J.-L., Langley, A., & Rouleau, L. (2007). Strategizing in pluralistic
contexts: Rethinking theoretical frames. Human Relations, 60(1),
179–215.
Dent, J. F. (1991). Accounting and organizational cultures: A field study of
the emergence of a new organizational reality. Accounting,
Organizations and Society, 16(8), 705–732.
Department for International Development (2000). Eliminating world
poverty: Making globalisation work for the poor.
Department for International Development (2006). Eliminating world
poverty: Making governance work for the poor.
Department for International Development (2009). Eliminating world
poverty: Building our common future.
Eisenhardt, K. M. (1989). Building theories from case study research.
Academy of Management Review, 14(4), 532–550.
Espeland, W. N., & Stevens, M. (1998). Commensuration as a social
process. Annual Review of Sociology, 24, 312–343.
Ezzamel, M., Willmott, H., & Worthington, F. (2008). Manufacturing
shareholder value: The role of accounting in organizational
transformation. Accounting, Organizations and Society, 33(2–3),
107–140.
Fischer, M. D., & Ferlie, E. (2013). Resisting hybridisation between modes
of clinical risk management: Contradiction, contest, and the
production of intractable con?ict. Accounting, Organizations and
Society, 38(1), 30–49.
Free, C. W. (2008). Walking the talk? Supply chain accounting and trust
among UK supermarkets and suppliers. Accounting, Organizations and
Society, 33(6), 629–662.
Garud, R. (2008). Conference as venues for the con?guration of emerging
organizational ?elds: The case of cochlear implants. Journal of
Management Studies, 45, 1061–1088.
Gehman, J., Trevino, L., & Garud, R. (2013). Values work: A process study
of the emergence and performance of organizational values. Academy
of Management Journal, 56(1), 84–112.
Gendron, Y. (2002). On the role of the organization in auditors’ client-
acceptance decisions. Accounting, Organizations and Society, 27,
659–684.
Gibbs, M., Merchant, K. A., Van der Stede, W. A., & Vargus, M. E. (2004).
Determinants and effects of subjectivity in incentives. The Accounting
Review (April), 409–436.
Hall, M. R. (2008). The effect of comprehensive performance
measurement systems on role clarity, psychological empowerment
and managerial performance. Accounting, Organizations and Society,
33(2–3), 141–163.
Hall, M. R. (2010). Accounting information and managerial work.
Accounting, Organizations and Society, 35(3), 301–315.
Helmig, B., Jegers, M., & Lapsley, I. (2004). Challenges in managing
nonpro?t organizations: A research overview. Voluntas: International
Journal of Voluntary and Nonpro?t Organizations, 15, 101–116.
Hopgood, S. (2006). Keepers of the ?ame: Understanding Amnesty
International. Ithaca: Cornell University Press.
Huault, I., & Rainelli-Weiss, H. (2011). A market for weather risk?
Con?icting metrics, attempts at compromise, and limits to
commensuration. Organization Studies, 32(10), 1395–1419.
Jagd, S. (2007). Economics of convention and new economic sociology:
Mutual inspiration and dialogue. Current Sociology, 55(1), 75–91.
Jagd, S. (2011). Pragmatic sociology and competing orders of worth in
organizations. European Journal of Social Theory, 14(3), 343–359.
Jay, J. (2013). Navigating paradox as a mechanism of change and
innovation in hybrid organizations. Academy of Management Journal,
56(1), 137–159.
Jordan, S., & Messner, M. (2012). Enabling control and the problem of
incomplete performance indicators. Accounting, Organizations and
Society, 37(8), 544–564.
Kaplan, S., & Murray, F. (2010). Entrepreneurship and the construction of
value in biotechnology. In N. Phillips, G. Sewell, & D. Grif?ths (Eds.).
Technology and organization: Essays in honour of Joan Woodward
(Research in the Sociology of Organizations) (Vol. 29, pp. 107–147).
Emerald Group Publishing Limited.
Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard – Measures
that drive performance. Harvard Business Review, 71–79 (January–
February).
Lamont, M. (2012). Toward a comparative sociology of valuation and
evaluation. Annual Review of Sociology, 38, 201–221.
Lounsbury, M. (2008). Institutional rationality and practice variation:
New directions in the institutional analysis of practice. Accounting,
Organizations and Society, 33, 349–361.
McInerney, P.-B. (2008). Showdown at Kykuit: Field-con?guring events as
loci for conventionalizing accounts. Journal of Management Studies, 45,
1089–1116.
Moers, F. (2005). Discretion and bias in performance evaluation: The
impact of diversity and subjectivity. Accounting, Organizations and
Society, 30(1), 67–80.
Moor, L., & Lury, C. (2011). Making and measuring value. Journal of
Cultural Economy, 4, 439–454.
Nahapiet, J. (1988). The rhetoric and reality of an accounting change: A
study of resource allocation. Accounting, Organizations and Society, 13,
333–358.
Nicholls, A. (2009). We do good things, don’t we? ‘Blended value
accounting’ in social entrepreneurship. Accounting, Organizations and
Society, 34, 755–769.
Oakes, L. S., Townley, B., & Cooper, D. J. (1998). Business planning as
pedagogy: Language and control in a changing institutional ?eld.
Administrative Science Quarterly, 43, 257–292.
Parsons, E., & Broadbridge, A. (2004). Managing change in nonpro?t
organizations: Insights from the UK charity retail sector. Voluntas:
International Journal of Voluntary and Nonpro?t Organizations, 15,
227–242.
Perera, S., Harrison, G., & Poole, M. (1997). Customer-focused
manufacturing strategy and the use of operations-based non-
?nancial performance measures: A research note. Accounting,
Organizations and Society, 22(6), 557–572.
Porter, T. (1995). Trust in numbers: The pursuit of objectivity in science and
public life. Princeton, NJ: Princeton University Press.
Robson, K. (1992). Accounting numbers as ‘‘inscription’’: Action at a
distance and the development of accounting. Accounting,
Organizations and Society, 17, 685–708.
Samiolo, R. (2012). Commensuration and styles of reasoning: Venice,
cost–bene?t, and the defence of place. Accounting, Organizations and
Society, 37(6), 382–402.
Sandberg, J., & Tsoukas, H. (2011). Grasping the logic of practice:
Theorizing through practical rationality. Academy of Management
Review, 36, 338–360.
Scott, S. V., & Orlikowski, W. J. (2012). Recon?guring relations of
accountability: Materialization of social media in the travel sector.
Accounting, Organizations and Society, 37, 26–40.
Spradley, J. P. (1980). Participant observation. New York: Holt, Rinehart
and Winston.
Stark, D. (1996). Recombinant property in east European capitalism.
American Journal of Sociology, 101, 993–1027.
Stark, D. (2009). The sense of dissonance: Accounts of worth in economic life.
Princeton: Princeton University Press.
Sundin, H. J., Granlund, M., & Brown, D. A. (2010). Balancing multiple
competing objectives with a balanced scorecard. European Accounting
Review, 19(2), 203–246.
286 R.H. Chenhall et al. / Accounting, Organizations and Society 38 (2013) 268–287
Thévenot, L. (2001). Organized complexity: Conventions of coordination
and the composition of economic arrangements. European Journal of
Social Theory, 4(4), 317–330.
Townley, B., Cooper, D., & Oakes, L. (2003). Performance measures and the
rationalization of organizations. Organization Studies, 24(7),
1045–1071.
United Nations (2011). The millennium development goals report 2011.
Accessed
02.08.11.
Vollmer, H. (2007). How to do more with numbers: Elementary stakes,
framing, keying, and the three-dimensional character of numerical
signs. Accounting, Organizations and Society, 32, 577–600.
Voluntary Services Overseas (2004). Focus for change: VSO’s strategic plan.
Accessed 10.08.10.
Wouters, M., & Roijmans, D. (2011). Using prototypes to induce
experimentation and knowledge integration in the development of
enabling accounting information. Contemporary Accounting Research,
28(2), 708–736.
Wouters, M., & Wilderom, C. (2008). Developing performance-
measurement systems as enabling formalization: A longitudinal
?eld study of a logistics department. Accounting, Organizations and
Society, 33(4–5), 488–516.
R.H. Chenhall et al. / Accounting, Organizations and Society 38 (2013) 268–287 287
Introduction
‘‘There’s still a big debate in VSO about whether the pur-
pose is to make sure volunteers have a good experience
overseas and then return back happy or do we have some
sort of coherent development programmes which use vol-
unteers as a key input. I think there are those two different
views of the organization. I mean there’s a whole lot of
views between those two but those are the two extreme-
s... it probably divides down the middle’’ (Regional Director
2, Voluntary Service Overseas).
The role of accounting practices in situations of differ-
ent and potentially competing interests has been a promi-
nent feature in studies of accounting and organizations.
Some studies have shown how accounting practices can
be mobilized by organizational actors to introduce a new
order and model of organizational rationality, typically
one focused on market concerns (e.g., Dent, 1991; Ezzamel,
Willmott, & Worthington, 2008; Oakes, Townley, & Cooper,
1998). Other research has emphasized the role of account-
ing in situations of multiple and potentially con?icting
interests, logics and regimes of accountability (e.g., Ahrens
& Chapman, 2002; Cooper, Hinings, Greenwood, & Brown,
1996; Lounsbury, 2008). In these settings, organizational
sub-groups can hold differing views of organizational real-
ity that are not displaced, but can become layered (cf., Coo-
per et al., 1996) such that they persist over time, or, as the
quote above suggests, remain ‘‘divided down the middle.’’
Here, accounts such as costing, resource allocation, and
0361-3682/$ - see front matter Ó 2013 Elsevier Ltd. All rights reserved.http://dx.doi.org/10.1016/j.aos.2013.06.002
?
Corresponding author.
E-mail address: [email protected] (M. Hall).
Accounting, Organizations and Society 38 (2013) 268–287
Contents lists available at SciVerse ScienceDirect
Accounting, Organizations and Society
j our nal homepage: www. el sevi er. com/ l ocat e/ aos
performance measurement systems are involved in on-
going contests and struggles as various groups advance
particular interests and values (e.g., Andon, Baxter, & Chua,
2007; Briers & Chua, 2001; Nahapiet, 1988). Research on
the use of ?nancial and non-?nancial measures in perfor-
mance measurement systems (e.g., Kaplan & Norton,
1992; Sundin, Granlund, & Brown, 2010), or the use of
quantitative and qualitative information in ?nancial re-
ports (e.g., Chahed, 2010; Nicholls, 2009), can also be seen
to relate to the ways in which accounting practices can
give voice to different concerns and priorities. Often the
outcome of struggles between groups is intractable con?ict
and confused effort with the eventual dominance of a sin-
gular perspective that limits opportunities for on-going
contests and debate (e.g., Dent, 1991; Fischer & Ferlie,
2013). Alternatively, the processes taken by sub-groups
to promote their preferred views can sometimes achieve
a more workable compromise that generates constructive
debate and on-going dialogue (e.g., Nahapiet, 1988; Sundin
et al., 2010). Building on this literature, in this study we
analyse directly the ways in which the design and opera-
tion of accounts can be implicated in compromises be-
tween different modes of evaluation and seek to
illustrate when and how such compromises can be produc-
tive or unproductive.
As con?icting logics are probably unavoidable in any
human organization (Gendron, 2002), our approach is to
take the existence of, and the potential for, tension be-
tween different modes of evaluation as the starting point
for our analysis. In doing so, we mobilize Stark’s (2009:
27) concept of ‘organizing dissonance’, which posits that
the coming together of multiple evaluative principles has
the potential to produce a ‘productive friction’ that can
help the organization to recombine ideas and perspectives
in creative and constructive ways. The concept of organiz-
ing dissonance provides an analytical approach that views
the co-existence of multiple evaluative principles as an
opportunity for productive debate, rather than a site of
domination or intractable con?ict. As such, our approach
extends prior research by privileging analysis of when
and how the co-existence of multiple evaluative principles
can be productive or unproductive. We summarize the fo-
cus of our study in the following research questions: How
does the design and operation of accounting practices facil-
itate (or impede) compromise in situations of multiple
evaluative principles? When (and how) is compromise be-
tween different evaluative principles productive or
unproductive?
We argue that answers to these questions contribute to
the literature by focusing directly on how accounting is
implicated in compromising between different evaluative
principles and the way in which such compromise can be
productive or unproductive. Here the design and operation
of accounting practices can help organizational actors to
re-order priorities and integrate perspectives in situations
of co-existing and potentially competing values (Stark,
2009). In particular, we show how accounts have the po-
tential to provide a fertile arena for productive debate be-
tween individuals and groups who have differing values
(Stark, 2009; Jay, 2013; Gehman, Trevino, & Garud, 2013;
Moor & Lury, 2011; Denis, Langley, & Rouleau, 2007).
The ?ndings from our ?eld study of a non-government
organization indicate that the potential for accounts to
provide a fertile arena for productive debate is related to
three important processes. First, designing accounts that
productively manage tensions between different evalua-
tive principles involves ‘imperfection’, that is, a process of
‘give and take’ that ensures that no single evaluative prin-
ciple comes to dominate others. Here the design and oper-
ation of accounting practices represents a temporary
settlement between different evaluative principles that
will require on-going effort to maintain (cf., Gehman
et al., 2013; Stark, 2009). Second, the design and operation
of accounts can facilitate productive friction by making vis-
ible the attributes of accounts that are important to organi-
zational actors with different evaluative principles, a
process that we term ‘concurrent visibility.’ This process
is important because it serves to crystallize compromises
between different modes of evaluation in a material form
(Denis et al., 2007). Third, our study reveals an important
distinction between the types of responses that can
emerge in situations where compromises break down
and accounting practices are viewed as ‘not working.’ In
particular, we show how debates over the mechanics of
accounting practices can be unproductive and lead to
‘stuckness’ (Jay, 2013) between different modes of evalua-
tion, whereas debate focused on the principles underlying
the account can help to integrate different evaluative prin-
ciples in a productive way (Jay, 2013; Stark, 2009).
Overall, our approach improves understanding of how
actors with different evaluative principles reach an accept-
able compromise, the factors that promote and/or damage
efforts toreachcompromise, andthe consequences for those
individuals, groups, and organizations involved. Accounts
are central to these processes because they are a site where
multiple modes of evaluation potentially operate at once,
with different modes of evaluation privileging particular
metrics, measuring instruments and proofs of worth (Stark,
1996, 2009). Accounts of performance are critical because it
is in discussions over the different metrics, images and
words that can be used to represent performance that the
actual worth of things is frequently debated and contested.
An analysis of compromising accounts
1
provides a powerful
analytical lens for examining whether and how compromise
between different modes of evaluation is developed, estab-
lished and destroyed. In particular, we show how the design
and operation of accounts can create the potential for
‘productive friction’ to arise from the coming together of
different evaluative principles (Stark, 2009).
Our study also makes a more speci?c contribution to re-
search on performance measurement systems. There has
been a wealth of prior management accounting studies
focusing on the attributes of various performance metrics
and their effects on individual and organizational
performance (see, for example, research on subjectivity
(Gibbs, Merchant, Van der Stede, & Vargus, 2004; Moers,
2005), comprehensiveness (Hall, 2008) and ?nancial/
non-?nancial measures (e.g. Baines & Lang?eld-Smith,
1
We use the term ‘compromising accounts’ to refer to the role of
accounts in facilitating (or not) compromise between actors with different
evaluative principles. We develop this concept later in the paper.
R.H. Chenhall et al. / Accounting, Organizations and Society 38 (2013) 268–287 269
2003; Perera, Harrison, & Poole, 1997). However, most of
these studies do not explicitly investigate how the metrics
that comprise performance measurement systems are
developed (see Wouters and Wilderom (2008) and Town-
ley, Cooper, and Oakes (2003) for exceptions). Thus, we ex-
tend this literature by examining explicitly the processes
that take place in negotiating the scope, design and opera-
tion of the metrics included in performance measurement
systems.
The remainder of the paper is structured as follows. In
the next section we provide the theoretical framework
for the study. The third section details the research meth-
od, with the fourth section presenting ?ndings from our
?eld study of a non-government organization, Voluntary
Service Overseas. In the ?nal section we discuss our ?nd-
ings and provide concluding comments.
Theoretical framework
Our focus is on whether and how accounting practices
can aid compromises in situations of co-existing modes
of evaluation. As such, in developing our theoretical frame-
work, we draw on recent developments in the ‘sociology of
worth’ to help conceptualize the co-existence of, and po-
tential for agreement between, multiple evaluative sys-
tems (see for example, Boltanski & Thévenot, 1999, 2006;
Denis et al., 2007; Huault & Rainelli-Weiss, 2011; McIner-
ney, 2008; Stark, 2009). A focus of this perspective is to
examine how competing values are taken into account
when parties seek to reach agreement or resolve disputes.
Boltanski and Thévenot (2006) conceptualize individuals
as living in different ‘worlds’ or orders of worth, where
each ‘world’ privileges particular modes of evaluation that
entail discrete metrics, measuring instruments and proofs
of worth (Stark, 2009).
2
Instead of enforcing a single princi-
ple of evaluation as the only acceptable framework, it is rec-
ognized that it is legitimate for actors to articulate
alternative conceptions of what is valuable, where multiple
evaluative principles can potentially co-exist and compete
in any given ?eld (Kaplan & Murray, 2010; McInerney,
2008; Moor & Lury, 2011; Scott & Orlikowski, 2012; Stark,
1996, 2009).
As co-existing evaluative principles may not be compat-
ible, a ‘clash’ or dispute may emerge between parties, who
at a given point in time, and in relation to a given situation,
emphasize different modes of evaluation (Jagd, 2011; Kap-
lan & Murray, 2010). Following Stark (2009), who extends
the framework of Boltanski and Thévenot (2006), our focus
is directed not at the presence of particular logics or orders
of worth, but on exploring the ways in which the co-exis-
tence of different logics can be productive or destructive.
In doing so, we draw on Stark’s (2009) notion of organizing
dissonance. Stark (2009) characterizes organizing disso-
nance as beinga possible outcome of a clashbetweenpropo-
nents of differing conceptions of value, that is, in situations
when multiple performance criteria overlap. The disso-
nance that results from such a clash requires the organiza-
tion to consider new ways of using resources in a manner
that accommodates these different evaluative principles.
Here, rather than something to be avoided, struggles be-
tween different evaluative criteria can prompt those in-
volved to engage in deliberate consideration about the
merits of existing practices (Gehman et al., 2013). In this
way, keeping multiple performance criteria in play can pro-
duce a resourceful dissonance that can enable organisations
to bene?t from the ‘productive friction’ that can result
(Stark, 2009). However, as Stark (2009: 27) notes, not all
forms of friction will be productive, as there is a danger that
‘‘where multiple evaluative principles collide...arguments
displace action and nothing is accomplished.’’ This points
to the critical nature of compromises when there are dis-
putes involving different evaluative principles. In practice,
such compromises can be facilitated by the use of conven-
tions, as detailed in the following section.
Disputes, conventions and accounting practices
The negotiation and development of conventions is
seen as a critical tool to aid compromise in situations of
co-existing evaluative principles (Denis et al., 2007). A con-
vention is ‘‘an artefact or object that crystallises the com-
promise between various logics in a speci?c context’’
(Denis et al., 2007: 192). Conventions can help to bridge
different perspectives by providing an acceptable compro-
mise between competing value frameworks (Biggard &
Beamish, 2003; Denis et al., 2007).
Accounting practices as a convention can help to re-
solve disputes in two inter-related ways. One, the develop-
ment and operation of accounts can provide a fertile arena
for debate between individuals and groups with differing
evaluative principles. The production of accounts is impor-
tant to this process because different evaluative principles
do not necessary con?ict or compete continuously, but
resurface at particular moments in time (Jay, 2013), such
as during the design and operation of accounting practices.
Two, the production of accounts can serve to ‘crystallize’
the compromise in a material form (cf., Denis et al.,
2007), thus providing recognition of, and visibility to, dif-
ferent values and principles.
Tensions over accounts and accounting practices are
likely because they can have very real consequences for
the ordering of priorities in an organization and, conse-
quently, for the interests of groups within the organization
who hold different views. It is well understood that
accounting can make certain factors more visible and more
important than others, provide inputs that affect decision-
2
Boltanski and Thévenot (1999, 2006) specify six ‘worlds’ or orders of
worth (the ‘inspirational’, ‘domestic’, ‘opinion’, ‘civic’, ‘merchant’ and
‘industrial’ worlds). The ‘civic’ world, for example, is based on solidarity,
justice and the suppression of particular interests in pursuit of the common
good, whereas the ‘market’ world is one with competing actors who play a
commercial game to further their personal (rather than collective) goals. In
this paper our key focus is on understanding why and how actors can reach
compromises (or not) in situations that are characterised by the presence of
multiple evaluative principles. In doing so, we follow the approach of Stark
(2009, see p. 13 in particular). That is, we do not con?ne our analysis to the
six orders of worth as outlined by Boltanski and Thévenot (1999, 2006) but
specify the different evaluative principles as is appropriate to the particular
empirical setting. Given our approach, we do not elaborate further on the
six orders of worth of Boltanski and Thévenot (1999, 2006) here. For further
insight on the six orders of worth, see Boltanski and Thévenot (1999, 2006),
and for their implications for accounting research, see Annisette and
Richardson (2011) and Annisette and Trivedi (2013).
270 R.H. Chenhall et al. / Accounting, Organizations and Society 38 (2013) 268–287
making and the allocation of resources, and can also pro-
vide authoritative signals regarding the very purpose and
direction of the organization. In addition, research has
highlighted the persuasiveness of numbers in accounts
and the role of quanti?cation in advancing particular views
and interests (e.g., Porter, 1995; Robson, 1992; Vollmer,
2007).
Nahapiet’s (1988) study of changes to a resource alloca-
tion formula in the United Kingdom’s National Health Ser-
vice showed how the formula made existing values more
visible and tangible and thus acted as a stimulus which
forced explicit consideration of three fundamental organi-
zational dilemmas. In this setting, actors contested
strongly the formula’s design and operation, and its inter-
pretation by other groups. Different interpretations of the
formula, and of accounting more generally, were problem-
atic because they played a key role in establishing what
counts and thus what is worthy. This tension is exacer-
bated in organizational settings where limited resources
(e.g., money, time, space) mean that not all interests can
be accommodated. In particular, the processes of evalua-
tion inherent to the production of accounts are central to
problems of worth in organizations (cf., Stark, 2009). For
example, the process of developing, adjusting and recon-
?guring accounts can require groups to make mutual con-
cessions (i.e., compromise) in order to agree on the ?nal (if
only temporary) form and content of the account. In this
way, producing accounts can provide an arena where dif-
ferent understandings of value may be articulated, tested,
and partially resolved (Moor & Lury, 2011). However, while
debate over accounts has the potential to facilitate produc-
tive friction, this depends on whether and how the conven-
tion comes to be (and continues to be) viewed as an
‘acceptable’ compromise. Importantly, although accounts
as conventions may help enact compromises, they can also
be subject to criticism and thus require on-going efforts to
maintain and stabilize the compromise.
Responses to breakdowns in compromise
Designing accounting practices in the presence of co-
existing modes of evaluation is likely to result in situations
where the practice is viewed, at least by some actors in the
organization, as ‘not working.’ Here there is a ‘breakdown’
such that issues and concerns that have arisen can no long-
er be absorbed into the usual way of operating (Sandberg &
Tsoukas, 2011). Some breakdowns can be viewed as tem-
porary and so the focus is on what is problematic about
the current practice and how to ?x it (Sandberg & Tsoukas,
2011). For example, doubts and criticisms can arise about
the dif?culties of implementing the practice, about
whether it will result in the desired behaviours, and how
it will in?uence other practices (Gehman et al., 2013).
This resonates with research in accounting that shows
how the introduction of new accounting practices can re-
sult in criticisms that they have not been implemented
correctly and revised procedures are required to improve
the design and implementation process (e.g., Cavalluzzo
& Ittner, 2004; Wouters & Roijmans, 2011). A criticism of
existing practices is also evident, for example, in the
context of performance measurement systems that are
seen to require more non-?nancial measures (Kaplan &
Norton, 1992) and ?nancial reports are viewed as needing
more narrative information (Chahed, 2010). Such criti-
cisms can result in changes to the existing accounting
practices. Stark (2009) notes, however, that disputes over
the mechanics of existing practices may not lead to effec-
tive changes, but rather result in a situation where nothing
is accomplished. Here, co-existing modes of evaluation
may not lead to innovation, but rather oscillation and
‘stuckness’ between logics (Jay, 2013).
A breakdown in practice can also be more severe such
that existing ways of doing things no longer work and
re?ection at a distance from the existing practice is re-
quired (Sandberg & Tsoukas, 2011). Here actors can debate
the principles and values underlying the existing practice
and the changes that are required to move beyond the
breakdown (Gehman et al., 2013). This type of criticism
and debate can arise where people feel that some funda-
mental principles with which they identify are not being
respected (Denis et al., 2007). This can be particularly
problematic in debates over incommensurables, that is,
the process of denying ‘‘that the value of two things is
comparable’’ (Espeland & Stevens, 1998: 326). Claims over
incommensurables are important because they can be ‘‘vi-
tal expressions of core values, signalling to people how
they should act toward those things’’ (Espeland & Stevens,
1998: 327). It can also arise where the values evident in the
existing practice clash with deeply held values obtained
through prior experience (Gehman et al., 2013).
Debates over the underlying principles of accounting
practices, and conventions more broadly, can result in
what Stark (2009: 27) labels ‘‘organizing dissonance’’, that
is, a process of productive friction arising from debate be-
tween actors over different and potentially diverse evalua-
tive principles. To generate productive friction in the
context of such debates, the rivalry between different
groups must be principled, with advocates offering rea-
soned justi?cations for their positions (Stark, 2009). In this
situation actors become re?exively aware of latent para-
doxes and directly confront and accept ambiguities, help-
ing new practices that integrate logics to emerge (Jay,
2013). The resolution of breakdowns also requires recogni-
tion that such a compromise represents a ‘‘temporary set-
tlement’’ (Stark, 2009: 27) between competing value
frameworks that is fragile (Kaplan & Murray, 2010) and
only likely to be maintained via on-going effort and
reworking (Gehman et al., 2013).
Summary
This discussion highlights the potential role for ac-
counts in developing compromises in situations where
the co-existence of different evaluative principles is a com-
mon feature of organizations. In particular, it reveals how
accounts have the potential to act as a convention to help
develop and crystallize compromises. It also highlighted
the way in which compromises are temporary settlements
that require on-going work to stabilize. In particular, the
merits of an accounting practice may be called into
question, resulting in efforts to ‘?x’ the way in which the
practice currently operates and/or debate focused on
R.H. Chenhall et al. / Accounting, Organizations and Society 38 (2013) 268–287 271
resolving tensions between underlying principles and val-
ues. In the next section, we empirically examine the role of
compromising accounts through a detailed analysis of the
development of a performance measurement system that
we observed during a longitudinal ?eld study at Voluntary
Service Overseas (VSO).
Method
VSO is a non-governmental international development
organization that works by (mainly) linking volunteers
with partner organizations in developing countries. Each
year approximately 1500 volunteers are recruited and take
up placements in one of the over forty developing coun-
tries in which VSO operates. Our interest in VSO was
sparked due to an initiative to develop a new performance
measurement system, subsequently referred to as the
‘Quality Framework’ (QF). This framework attempted to
combine different metrics and narrative content into a sin-
gle report that would provide a common measure of per-
formance in each of VSO’s country programmes.
The ?eld study was conducted between July 2008 and
August 2010. During this time we conducted 32 interviews,
attended meetings, observed day-to-day work practices,
collected internal and publicly available documents, partic-
ipated in lunches and after-work drinks with staff and vol-
unteers, primarily in London, but also during a 1-week
visit to the Sri Lanka programme of?ce in January 2009.
Most of the interviews were conducted by one of the
authors, with two authors conducting the interviews with
the country directors. Interviews lasted from 30 min to 2 h.
Almost all interviews were digitally recorded and tran-
scribed, and, where this was not possible, extensive notes
were taken during the interview and further notes then
written-up on the same day. We interviewed staff across
many levels of the organization as well as staff at different
locations. Face-to-face interviews were conducted at VSO’s
London headquarters, and in Sri Lanka. Due to the location
of VSO staff around the world, some interviews (particu-
larly those with country directors) were conducted via
telephone. Table 1 provides an overview of the formal
interviews and observations of meetings. We carried out
observations of 17 meetings and workshops in both Lon-
don and Sri Lanka, primarily concerned with the QF and
other planning and evaluation practices.
Throughout the study, we were also involved in infor-
mal conversations (typically before and after meetings,
and during coffee breaks, lunches and after-work drinks)
where staff and volunteers expressed their thoughts about
the meetings, as well as other goings-on at VSO and the
non-government organization sector. We kept a detailed
notebook of these informal conversations, which was then
written up into an ‘expanded account’ (Spradley, 1980)
that on completion of the ?eld study totalled more than
200 pages of text. We also exchanged numerous emails
(over 700 separate communications) and telephone con-
versations with VSO staff.
We were provided access to over 600 internal VSO doc-
uments, including performance measurement reports, sup-
porting documents and analysis. These reports included
the complete set of QF reports from each VSO programme
of?ce for 2008 and 2009, documents related to other
monitoring and review processes, as well as more general
documents concerning organizational policies, plans and
Table 1
Formal ?eldwork activity.
Interviews Location of staff Number of
interviews
Director, International Programmes Group London 2
Deputy-Director, International Programmes Group London 1
Regional Director London, Ghana 2
Country Director Sri Lanka(x2), Guyana, Ghana, The Gambia, Uganda, Vietnam, Nepal,
Namibia, Cambodia
10
Head-Programme Learning and Advocacy London 1
Team Leader-Programme Development and Learning London 2
Executive Assistant to Director, International
Programmes Group
London 3
Programme Learning Advisor Ottawa 1
Systems and Project Manager London 1
Head-Strategy, Performance and Governance London 1
Director-VSO Federation London 1
Volunteer Placement Advisor London 1
Finance Manager Sri Lanka 1
Programme Manager Sri Lanka 2
Facilities and Of?ce Manager Sri Lanka 1
Volunteer Sri Lanka 2
32
Observation and attendance at meetings Location of meeting Number of
meetings
Quality Framework meetings London 6
Various planning and review meetings London 6
Programme planning and review workshop Sri Lanka 3
Of?ce planning and logistics meeting Sri Lanka 2
17
272 R.H. Chenhall et al. / Accounting, Organizations and Society 38 (2013) 268–287
strategies. Finally, we collected publicly available docu-
ments, such as annual reports and programme reviews,
newspaper articles, as well as several books on VSO (e.g.
Bird, 1998).
Consistent with the approach employed by Ahrens and
Chapman (2004), Free (2008) and Chenhall, Hall, and Smith
(2010), we employed Eisenhardt’s (1989) methods. This in-
volved arranging the data (transcripts, ?eld notes, docu-
ments) chronologically and identifying common themes
and emerging patterns. We focused in particular on itera-
tions in the content and use of performance measurement
systems at VSO over time and then sought to understand
why they came about and the subsequent reactions from
different people within the organization. We then re-orga-
nized this original data around key events (for example,
the ‘league table’ debates) and signi?cant issues (for exam-
ple, ‘consistency’) that emerged as we sought to under-
stand the performance measurement and review systems
at VSO. We compared our emerging ?ndings from the
study with existing research to identify the extent of
matching between our data and expectations based on
prior theory. In particular, ?ndings that did not appear to
?t emerging patterns and/or existing research were high-
lighted for further investigation. This process was iterative
throughout the research, and ?nished when we believed
we had generated a plausible ?t between our research
questions, theory and data (Ahrens & Chapman, 2006).
Case context
VSO was founded in 1958 in England as an organization
to send school leavers to teach English in the ‘‘underdevel-
oped countries’’ of the Commonwealth (Bird, 1998: 15).
Volunteers were initially recruited exclusively from Eng-
land, and later from other countries, including the Nether-
lands, Canada, Kenya, the Philippines, and India. The initial
focus on the 18-year-old high school graduate was re-
placed over time by a (typically) 30-year old-plus experi-
enced professional. Volunteers operated under a capacity
building approach, being involved in initiatives such as
teacher training, curriculum development, and advocacy.
3
In 2004 VSO signalled it would adopt a more ‘program-
matic’ approach to its work, which shifted attention away
from each volunteer placement to one that focused ‘‘all our
efforts on achieving speci?c development priorities within
the framework of six development goals’’ (Voluntary Ser-
vices Overseas, 2004).
4
This move to a programmatic model
was coupled with explicit recognition of VSO’s purpose as
primarily a ‘development’ rather than ‘volunteer-sending’
organization, and the development of evaluation systems
to support this change. Notwithstanding this explicit shift
in organizational priorities, the focus on volunteering was
still strong, particularly as many VSO staff were formerly
volunteers. As such, a mix of different world-views at VSO
was the norm:
‘‘There are some different kind of ideological views
between people who feel that the important thing
about VSO, it’s just about international cooperation
and getting people from different countries mixing with
each other and sharing ideas. It doesn’t matter what the
outcome is really, it’s going to be a positive thing but
you don’t need to pin it down. Versus it’s all about pin-
ning down the impact and the outcomes of our work
and being very focused and targeted and being able to
work out what is your return on your investment and
all these kind of things so I think it is partly historical
and partly differences in just a mindset or world-view.’’
(Interview, Regional Director 2, November 2008).
The different views on VSO’s overall purpose created
considerable tension, focusedinparticular ondebates about
the value of VSO’s work. Originating from VSO’s founding
principles, many staff and volunteers felt that volunteering
was, in and of itself, a positive and productive activity and
any drive to specify an ‘outcome’ of this was secondary. In
contrast, the programmatic approach, coupled with the
recruitment of many staff fromother international develop-
ment agencies, gave more attention to poverty reduction
and demonstration of the ‘impact’ of VSO’s work. This situa-
tion was increasingly common in the wider NGO sector,
where founding principles of volunteerism, the develop-
ment of personal relationships, andrespect for eachindivid-
ual were comingintocontact withmore ‘commercial’ values
favouring professionalism, competition and standardiza-
tion (see, for example, Helmig, Jegers, &Lapsley, 2004; Hop-
good, 2006; Parsons & Broadbridge, 2004).
As an espoused international development organiza-
tion, VSO also existed in an environment increasingly char-
acterized by the use of indicators and targets (a prime
example being the Millennium Development Goals, see
United Nations, 2011) and a greater focus on the effective-
ness of aid (particularly the Paris Declaration on Aid Effec-
tiveness in 2005).
5
VSO’s main funder, the United Kingdom’s
Department for International Development (DFID), had
aligned its development programme around the Millennium
Development Goals, and was also a signatory to the Paris
Declaration.
6,7
This had implications for the way in which
VSO was required to report to DFID, particularly during the
3
VSO operated what it calls a ‘capacity building’ approach by partnering
volunteers with local organizations that require assistance or expertise in a
variety of capacities. VSO describes its partnership approach as follows:
‘‘We work with local partners in the communities we work with, placing
volunteers with them to help increase their impact and effectiveness’’ (VSO
website,http://www.vsointernational.org/vso-today/how-we-do-it/,
accessed 7 April 2010). Volunteers typically take up a speci?c role or
position, often working alongside a local staff member, where partner
organizations range in size from very small, local businesses, community
groups and NGOs, to large organizations and government departments and
ministries.
4
The six development goals were health, education, secure livelihoods,
HIV/AIDS, disability, and participation and governance (Focus for Change,
Voluntary Services Overseas, 2004).
5
See www.oecd.org/dataoecd/11/41/34428351.pdf for the Declaration.
6
See DFID (2000, 2006, 2009).
7
In terms of the overall funding environment, VSO’s total funding
increased steadily during the 2000s. In 2000 total income was approxi-
mately £28m, with approximately £22m from DFID (77% of total funds). In
2005 total income was approximately £34m, with approximately £25m
from DFID (74% total funds). In 2009 total income was approximately
£47m, with approximately £29m from DFID (60% total funds) (source: VSO
Annual Reports).
R.H. Chenhall et al. / Accounting, Organizations and Society 38 (2013) 268–287 273
course of our study when a change in DFID’s reporting for-
mat required VSO to track progress against its four agreed
strategic objectives using a set of 17 indicators.
8
Collectively, the changing context of the NGO sector,
the move in international development towards an in-
creased focus on aid effectiveness and the use of indicators,
along with VSO’s own progression from a volunteering to a
more programmatic focus, meant that the co-existence of
different evaluative principles characterized the situation
at VSO. In particular, we identify two primary modes of
evaluation.
9
The ?rst mode of evaluation, which we label
‘learning and uniqueness’, was focused primarily on re?ec-
tion, the use of contextual and local interpretations, and a
preference for narrative content. Discourses within VSO reg-
ularly emphasized the importance of this mode of evalua-
tion, with one of VSO’s three stated values a ‘‘commitment
to learning’’ whereby VSO seeks to ‘‘continue to develop
effective monitoring and evaluation methods so that we
can learn from our own and others’ works’’ (VSO, 2004).
The second mode of evaluation, which we label ‘consistency
and competition’, was focused primarily on standardization,
the use of consistent and universal interpretations, and a
preference for indicators. We outline the different modes
of evaluation in Table 2, which we return to throughout
our empirical analysis.
Attempts at compromise between these different
modes of evaluation became evident in debates about
how to measure the value of VSO’s work in each country.
Measuring performance became particularly important be-
cause the move to be more programmatic had placed in-
creased pressure on the allocation of resources amongst
programme of?ces, as it required more expenditure on
staff to support volunteer placements and develop and
manage programmes.
10
However, the situation was charac-
terized by a lack of commonly agreed criteria for measuring
the performance of country programmes (cf., Garud, 2008),
where over time three approaches had been instigated; the
‘Strategic Resource Allocation’ (SRA) tool, the ‘Annual Coun-
try Report’ (ACR) and the ‘Quality Framework’ (QF).
11
The SRA was developed in 2002 as VSO’s ?rst attempt to
measure the effectiveness of each programme of?ce.
12
The
SRA relied almost exclusively on using numerical data to
measure performance, where each programme of?ce was re-
quired to score itself on 16 criteria related to the extent to
which its work was focused on disadvantage, achieved cer-
tain outputs related to volunteers, and adopted a strategic
approach. Each criterion was given a precise percentage
weighting, e.g., 2% or 4% or 17%. Scores on the 16 criteria
were to be aggregated with each programme of?ce awarded
a percentage score out of 100, with recognition that ‘‘the
higher the overall percentage a Programme Of?ce receives
in this tool, the more ‘‘effective’’ it will be perceived to be
based on this measure.’’
13
There was a strong emphasis on
review of scores by staff in London ‘‘to ensure consistency
between regions. . .in order to ensure transparency and to al-
low comparison between countries.’’
14
The SRA’s implemen-
tation was problematic, however, and the approach was
abandoned as a country director later explained:
‘‘The SRA was dropped because it was becoming
increasingly apparent that some programmes were
rather self-critical while others were not – but that this
did not necessarily relate very closely to programme
quality – in fact it appeared that sometimes the oppo-
site was true, the programmes that had the capacity
to critically assess their own performance (and give
themselves a low score) were of a better quality than
those who from year to year claimed that things were
Table 2
Modes of evaluation at VSO.
Dimensions Modes of evaluation
‘‘Learning and
Uniqueness’’
‘‘Consistency and
Competition’’
Purpose of
evaluation
Re?ection, learning,
improvement
Standardize, compare,
compete
Attributes of
‘good’
evaluation
Contextual, detailed,
‘local’ interpretations
Consistent, precise,
objective, ‘universal’
interpretations
Attributes of
‘good’
accounts
Narrative
descriptions, case
studies, stories, images
Numbers, indicators,
and scales, particularly
those that can be
compared between
units
Indicators that
provoke creativity,
ambition and
innovation
Indicators that
capture current
performance accurately
Avoid reliance on
numbers as they
provide only a partial
account and do not tell
the ‘real’ story
Avoid reliance on
narrative as it is
‘selective’ and cannot
be compared between
units
8
See www.vsointernational.org/Images/ppa-self-assessment-review-
2010-11_tcm76-32739.pdf for the 2010/2011 report to DFID (accessed 31
May 2012). The ?rst report issued under this format was for the 2009/2010
reporting year. Prior to this, there was an absence of indicators, with VSO
reporting against various development outcomes using descriptive exam-
ples of progress from different countries (‘VSO Narrative Summary and
Learning Report for PPA 2005-6’).
9
As noted above, our approach here is to follow Stark (2009) and specify
the different modes of evaluation in accordance with our empirical setting.
10
VSO operated a geographic structure, whereby several programme
of?ces were grouped together to form a speci?c region, for example, Sri
Lanka, India, Bangladesh, Nepal and Pakistan formed the ‘South Asia’
region. Each Country Director reported to a ‘Regional Director’, with the
Regional Directors reporting to the Director of IPG, based in London. IPG
also had staff responsible for providing support to programme of?ces in
areas such as funding, advocacy, and programme learning and reporting.
Each programme of?ce was a budget holder, and received core funding
from VSO headquarters via the annual budgeting process. Core funding
related to costs such as staff salaries and bene?ts, of?ce and vehicle rental,
and volunteer costs (including allowances and training/support costs). Each
programme of?ce received a limited amount of funding for ‘programme’
costs, with programme of?ces expected to apply for grants from donors to
support further programme work.
11
Our ?eld study (July 2008 to August 2010) corresponded to the ?rst
year of the QF’s operation and thus was subsequent to the use of the SRA
and ACR. As such, we brie?y describe the SRA and ACR to provide context to
the development of the QF but do not analyze the development of the SRA
and ACR in detail.
12
See Appendix A which provides the ‘summary page’ of the SRA.
13
SRA document, 2002.
14
SRA document, 2002.
274 R.H. Chenhall et al. / Accounting, Organizations and Society 38 (2013) 268–287
going well – and this resulted in some good
programmes being closed down.’’ (Interview, Country
Director 1, November 2008).
Subsequent to the SRA, the ACR was developed in 2005
and focused its reporting on the ‘activity’ that a country
programme had engaged in, such as ‘so many workshops,
so many volunteer placements.’
15
The ACR itself did not
contain any quantitative scoring or ranking of programme
of?ces but was a narrative report that provided descriptions
of progress towards ‘Strategic objectives’ and contained a
section focused on ‘Lessons’ to be learned.
16
The ACR also in-
cluded one or more ‘Most Signi?cant Change’ (MSC) stories’
which focused on telling stories as a way to re?ect upon and
learn from programme experiences (see Dart & Davies,
2003; Davies & Dart, 2005).
The third approach (and our empirical focus) developed
subsequent to the SRA and ACR was the QF, which at-
tempted to combine scoring and narrative elements into
a single reporting framework. We show how the QF was
subject to criticism that resulted in changes in the use of
narrative and quantitative measures, which favoured a
mode of evaluation focused on ‘consistency’ over that
which respected the ‘unique’ circumstances of individual
country programmes. A further dispute emerged over the
relative focus on ‘learning’ and ‘competition’ that precipi-
tated more fundamental changes in order to develop an ac-
count that helped to compromise between the different
modes of evaluation.
Development of the quality framework
The initial development of the QF occurred during a
meeting of all country directors in late 2007.
17
Prior to this
meeting, in an email sent to all country directors in May
2007, the Director of IPGgave his support to the development
of the QF and outlined his rationale for its implementation:
‘‘We are very good at measuring volunteer numbers,
numbers of tools being used, early return rates, levels
of programme funding – but what about the impact of
our work? How do we know if we really are working
with poor people to contribute to positive change in
their lives?...I believe that it is absolutely essential that
we have a shared vision of success – that we all know
what a high quality, successful VSO country programme
could look like – that we know how to measure this –
and that we have a culture that encourages, supports
and celebrates this. Of course all of our country
programmes could, and should, look very different.
Local circumstances and development priorities above
all should ensure this. . .However, there must be some
fundamental principles that drive VSO’s programme
work and enable us to determine whether we are suc-
cessful or not.’’
In this statement, the imperative for compromise be-
tween VSO’s different modes of evaluation in the develop-
ment of the QF was revealed. The reference to a ‘shared
vision of success’ and knowing ‘how to measure this’ indi-
cates a concern with developing common and standard-
ized ways of measuring success. There is also recognition
of the uniqueness of country programmes in that they
‘should look very different.’ In our analysis below, we focus
on two central debates that emerged in the development of
the QF; the ?rst concerning the tension between standard-
ization and uniqueness, and the second regarding the most
appropriate approach to improve programme quality.
Debate 1: How to standardize and respect uniqueness?
A key dif?culty in developing the QF was tension be-
tween the desire to standardize, i.e., have indicators that
provide a consistent method for measuring success in each
programme of?ce, whilst respecting the uniqueness of pro-
gramme of?ces and the need for indicators to be ‘inspira-
tional.’ It was the need to make choices about the content
of elements, indicators and narrative components in the
QF that provided an arena for debates and discussions
regarding different modes of evaluation at VSO. Country
directors and other programme staff were central to these
discussions, and provided suggestions for elements and
indicators that were collected in London, and then followed
in late 2007 by a meeting of all country directors and senior
IPG staff in Cambridge, UK. A central platform of this meet-
ing was sessions devoted to dialogue and debate about the
elements and indicators that would comprise the QF. Cen-
tredonthe question‘‘What is quality?’’, it was here that staff
were able to advocate for the inclusionand exclusion of par-
ticular elements and indicators. This resulted in a set of 14
elements relating to various aspects of programme quality,
such as inclusion, volunteer engagement, innovative pro-
gramming and ?nancial management. Importantly, the ele-
ments relating to the impact of VSO’s work on partners and
bene?ciaries were givenhighest priority: they were the ?rst
two elements in the QF and were assigned the labels ‘Ele-
ment A’ and ‘Element B’ to distinguish them from the other
elements that were labelled with numbers one through 12
(see Appendix B).
Testing the QF at the country level was a priority, with a
country director offering the following re?ections on a pi-
lot test:
''We worked through the different indicators to see whether the results that the framework spat out were recognizable. . .some of the results didn't give the right answer basically. So we changed some of the indicators...The framework itself allows for a small narrative at the beginning of each element, which can at least explain context as to why it may have a low score or conversely why it might have a high score. They may be working in a country that has a very easy operational environment. It might have lots of external funding and that for me is reflected in that short narrative section at the beginning.'' (Interview, Country Director 1, November 2008).
[15] Interview, Country Director 2, November 2008.
[16] ACR document, 2005.
[17] As noted above, the first report to DFID that used indicators to track progress against strategic objectives was for the 2009/2010 reporting year. Within VSO, work to address the new reporting requirements began in the second half of 2008, more than a year after the initial development of the QF. We also note that the QF reports were not provided to DFID or to any other external funders, although the IPG Director commented that he did inform DFID about the QF process and that this was considered by him to be 'helpful' in showing DFID that VSO was addressing issues around the impact of its work.
This comment reveals how local knowledge was considered critical in that indicators were required to produce results that were 'recognizable' to programme office staff, partners and volunteers. Providing space in the QF report for narrative discussion to reflect the different circumstances of countries was also important. In particular, the narratives in the QF reports were typically very extensive and contained statements calling attention to the unique situation of each country. They also sought to celebrate achievements as a way to inspire staff, volunteers and other stakeholders, for example, by stating that ''the success of the education programme in demonstrating beneficiary level impact in [Country X] is extraordinary and it is motivating for staff and volunteers to be able to see the impact of their work'' (QF Country Report, 2008). Further recognition of country uniqueness was evident in the design of the performance ranges for the indicators:
''Many of the KPIs have got ranges set against them to outline what 'high performance', 'satisfactory performance' and 'room for improvement' looks like. However, in some cases it will be more relevant for CDs and Regional Directors to decide what results say about the performance of the programme within the context of the programme itself. . .it is recognised that what can be considered high performance will differ considerably between Programme Offices.'' (QF Guidance document, 2008).
In this quote, there is explicit recognition of differences between countries that prevent the use of standardized performance ranges for each and every indicator. As such, eight of the indicators in the QF were scored with guidance that ''performance [to be] determined by PO [programme office] and RD [regional director]''. Finally, in contrast to the SRA, elements were not given explicit weights and there was no calculation of an overall score for each programme office.
There was also scope for constructive debate lasting beyond the QF's initial development. Country and regional directors were required to discuss the scoring for an individual programme office together, and to analyze and resolve differences in scores that emerged from this process. Furthermore, many staff from programme offices completed the QF together, providing a way to review overall results, and many regional directors used the QF to help set objectives and action plans for country directors in the coming year. Some programme offices embraced the QF even further, using it to determine whether an office move would improve programme quality, or further disaggregating the QF so it could be applied to different parts of the programme office.
Collectively, the input of country directors and other programme staff, the importance of local knowledge in designing indicators, the provision of space for narrative so that programme offices could reflect local circumstances, and the recognition that performance on some indicators was best determined using programme office and regional director judgement, all provided explicit recognition of the uniqueness of programme offices. Critically, however, the need for comparability was also recognized. Each programme office was required to complete the QF using a common template, with common elements and indicators, thus providing a standardized method of measuring performance across countries.
After its first year of operation, praise for the QF was widespread, with this comment from a country director echoing that of many others:
''Overall it was a good move away from the Annual Country Report because one of the main things was it gave much more direction on being clear on what to report on but also through the report it identified what is important, what's quality but it's also important to reflect on as a programme. Now you can always argue about elements of that, that's not the point. I think it's just helpful to say well these are overall important parts to reflect on and I thought that was quite useful.'' (Interview, Country Director 2, November 2008).
In this statement, the QF is seen as better than the ACR because it provides clarity around what makes a quality country programme, and, in this way, provides a 'collectively recognized' reference regarding the way in which programme offices would be evaluated (cf., Jagd, 2007; Biggart & Beamish, 2003). Importantly, this quote also reveals that although there is recognition that the QF was and would be the site of disagreements (e.g., over elements), it was the overall approach of focusing on what made a quality programme that was most important. This corresponded to the view of the IPG Director, who also praised the QF:
‘‘I thinkit’s beengreat. It’s not aperfect tool but I don’t think
anytool indevelopment ever is perfect. . . there wasn’t a lot
of discussionabout qualityor about success andthediscus-
sions were more about howmany volunteers have we got
or how much programme funding have we got and the
qualityframeworkhas beenareallyuseful tool over thelast
18 months for just getting people to talk more about
impacts on poverty. Quality, what is quality like?. . .[The
QFhas] givenmestronger evidencewhenarguing at senior
management teamlevel for where things aren’t working.
So when you’ve got 35 country directors saying things
like the ?nance systems aren’t working it gives you a
lot of evidencetobe able toreallyargue for that. . .sofrom
[that] basis, I think it’s gone really well.’’ (Interview, IPG
Director, December 2008).
Here, praise is directed at how the QF helped move discussions more towards the impact of programmes on partners and beneficiaries and less on volunteer numbers or funding levels. The ability to aggregate data across programme offices was important in providing arguments for more resources at senior management forums.[20] The statements that the QF was not 'a perfect tool' but was 'quite useful' and 'worked out pretty well' reveal an awareness of the importance of 'making do' (cf., Andon et al., 2007), which helped to enact a certain stability between a mode of evaluation that privileged country uniqueness and one favouring standardization and comparability. However, such stability was temporary, highlighting the fragility of the compromise. While there was initial praise for the QF from many sources, there were also critics. Shortly after the completion of the QF in its first year, it was subject to strong criticism, aimed in particular at the process used to score the indicators and elements.
[20] The Director of IPG was a member of the six-person executive management team at VSO called the 'Senior Management Team' (other members were the Chief Executive Officer, the Director of the VSO Federation, the Director of the UK Federation, the Chief Financial Officer, and the Director of Human Resources). This group was responsible for major resource allocation decisions, particularly the amount of funds that were allocated to each of the major divisions within VSO, including IPG.
But you’re not consistent!
The scoring process for the QF was based on self-assessment by the country director (with programme office staff input), with review by the relevant regional director. This raised concerns, particularly from some staff in London, that scoring was inconsistent across regions and countries:
''(The) key anomaly is that the ratings seem to have been applied differently in each region. . .I believe this is an inaccurate reflection of the current strengths and weaknesses of programme funding across IPG. . .I suspect there are different interpretations as to what constitutes good programme funding performance. . .I think there is a need to clarify what justifies a 1, 2, 3 or 4 within each indicator.'' (QF Element Summary document, 2008).
This comment reveals that the scoring methodology was criticized for producing inaccurate results, with the problem being the different interpretations of good performance made by different countries. The suggested solution was to instigate changes to the scoring procedure to clarify the meaning of each score.
A senior IPG manager was the most vocal critic of the scoring process, which he believed was ''extremely dubious.'' He lamented the SRA's demise and concluded that shifting the balance in favour of self-assessment in the QF had created what he believed were questionable results. He expressed a preference for taking the scoring of indicators and elements out of the hands of country and regional directors altogether. He first floated the idea of using an external assessment process akin to an ''OFSTED-type inspection unit.''[22] Another option was the use of an internal assessment unit within VSO to carry out an ''independent performance assessment.'' These preferences strongly emphasized the importance of having a 'consistent methodology' with 'independence', which, in effect, placed the values of standardization and comparability above those of local context and country uniqueness.
[22] OFSTED is the Office for Standards in Education, Children's Services and Skills in the UK, an independent body that inspects schools and provides ratings of performance. For example, schools are awarded an overall grade from 1 to 4, where 1 is outstanding, 2 is good, 3 is satisfactory and 4 is inadequate (OFSTED website, www.ofsted.gov.uk/, accessed 24 July 2010).
Although the use of an internal or external performance assessment unit did not materialize, the criticism resulted in several changes to the QF for its second year of operation. Each indicator now included a description of each of the 1–4 levels, where previously only levels 1 and 4 had a description. Revised guidance documentation was also issued:
''It is important to score yourself precisely against the descriptors. There may be very good reasons why you achieve a low score on a particular indicator, but it is important to score precisely – the narrative can be used to give a brief explanation.'' (QF Guidance document, 2009).
This guidance highlights two important changes. First, there was the explicit emphasis on the need to score precisely, with the reasons that lay behind particular scores considered secondary. Second, the narrative was now viewed as the space where scores can be explained, indicating that its primary value was its connection to the scoring process, not in providing information that can arise from other sources. In further changes, the guidance that ''what can be considered high performance will differ considerably between Programme Offices'' was removed from the QF documentation, only one (rather than the previous eight) of the indicators was to be scored using judgement,[24] and ownership of some scoring was taken away from programme offices, with the explanation that this would allow ''data to be comparable across programmes by using universal interpretations of the data'' (QF Reporting Template, 2009).
[24] As an example, indicator 8.1 on funding was changed whereby the ability to assess performance on a 'country by country' basis was replaced with explicit monetary ranges that would apply to each programme office regardless of its size of operations or different external funding environments.
Concerns over the scoring process itself were also addressed, particularly in relation to ensuring consistency in the way that regional directors used indicator scores to determine overall element scores. A series of meetings was arranged to address this issue directly. One of these meetings, which lasted for over two hours, involved regional directors working through a recently completed QF report in order to agree on how to score each element. One by one, through each of the 14 elements, the process for using indicator scores to determine an overall element score was discussed. Looking fed up, a regional director said:
''Can I just ask, do we really care how accurately we score? [quizzical looks from other regional directors]. No, honestly, so we could spend a lot of time working out how we score it and use it for comparison but I mean you could roughly get a score on an average without spending too much time on the scoring but concentrate on what they're saying, and concentrate on quality discussion which presumably we also want to do.'' (Regional Director 5, QF meeting, May 2009).
The ensuing discussion did not focus on what programme offices were 'saying', or on how to 'concentrate on quality discussion.' Rather, debate focused on whether the 'average' was a legitimate way to determine element scores, and whether the most important indicator in each element should be designated as the 'lead' indicator for the purposes of scoring. This reveals how debate and disagreement were focused exclusively on the mechanics of the scoring process, rather than providing a forum for the consideration of the different substantive issues in play. Despite a protracted discussion, at the meeting's end there was no established process, except for general agreement that an overall element score ''won't be based on an arithmetic average of KPI results for that element'' (QF Guidance document, 2009). Notwithstanding this guidance, virtually all changes to the QF resulting from this criticism privileged consistency in scoring over country uniqueness. While more consistent scores were (arguably) likely, the initial compromise appeared tenuous and a counter-criticism developed around the lack of inspiration evident in the QF.
But you’re not inspiring!
The focus on consistency and precision in the changes to the scoring process meant that the use of indicators to inspire was given minimal attention. Concerns were expressed in a QF meeting in March 2009:
Regional Director 3: The messages that we're giving to programmes at the moment about thinking creatively, being ambitious, being innovative and so on, are not necessarily captured in this element, in this thing here [points at the QF document]. . .I think this is really good for telling us where we're at and measuring what we're looking at measuring but in terms of really looking to shift and change, I just wonder how we're going to do that, and where that's captured.
PLA[27] Staff 1: I think there's a bit that's still missing in the quality framework because it's become a set of indicators, so the bit that I think is missing is that we don't really have anything about culture.
Regional Director 3: Yeah, that's what it is, yeah [enthusiastically].
PLA Staff 1: If people fulfil all these indicators. . .that might not be enough to achieve what we're really looking for, you know. . .we've got fixed on the elements but there's something behind it all that we haven't quite nailed. . .
[27] The PLA (Programme Learning and Advocacy) unit was a team within VSO whose main role was to support programme offices in learning from their own work and sharing good practice with other programme offices.
Here, a strong criticism of the QF is made by likening it to a 'set of indicators', with such a description generally considered to be a damning indictment of any evaluation practice at VSO. Furthermore, concern is expressed that indicators cannot capture what is most valued, that is, the desire to 'shift and change', 'achieve what we're really looking for' and 'culture' are 'missing' in the QF. This concern was coupled with critical feedback on other changes made to the QF, for example, that the emphasis on standardization meant that the indicators no longer captured performance 'accurately.' Reflecting general concern with the changes, this statement appeared in one country programme's QF report:
''The QF report has grown exponentially this year, and the indicators have changed and were only issued a month before the report is due. . .Good practice in monitoring and evaluation is to collect evidence and learning on a day-to-day basis, which is difficult to do if the ground keeps shifting under one's feet.'' (QF Country Report, 2009).
Increasingly, discussions of the QF were focused on criticisms and counter-criticisms over the specific details of the scoring process. A regional director summarized the state of play after completion of the QF for the second year:
''People can see that we've tried to make it a little bit more objective in the way that it's done, but I am getting quite a lot of critical feedback that the quality framework is so big, so many indicators, stuff being sent really late. . .the whole thing is just a quantitative scoring tool and it's not about learning in any way, shape or form. . .so I am getting quite critical feedback'' (Regional Director 5, QF meeting, May 2009).
In this quote, the focus on indicators and scoring processes is viewed as stifling opportunities for learning. Thus, an initial compromise between standardization through scoring and recognition of country uniqueness had faltered. The initial praise for the QF had dissipated and was replaced by critical feedback, particularly from country directors who felt that the push for consistency had moved too far, such that the QF was no longer about learning and instead was labelled a 'quantitative scoring tool', a severe condemnation at VSO. We also see that the debates about indicators, performance ranges, or scoring methodologies were increasingly focused on the QF itself, often in the context of long meetings with little productive output. The previously fruitful discussions about how to improve quality or make compromises between differing values were almost non-existent. These developments were also evident in initial debates about how to use the QF to improve programme office performance.
Debate 2: How to improve quality?
Given the considerable effort that had gone into the development of the QF, there were high hopes that it would lead to improved performance of programme offices. On the one hand, there was a strong desire to improve through 'learning', whereby the QF would help identify examples of innovative practice that could then be shared amongst programme offices. Concurrently, there was a belief that the QF could be used to generate a sense of 'competition' amongst programme offices, which would lead to increased motivation and thus improved performance. These different positions on the avenues to improved performance presented many obstacles to enacting an acceptable compromise; obstacles that, at first, proved difficult to overcome.
Although not stated explicitly in QF documentation, the engendering of a sense of competition emerged through the way in which the results were distributed to country directors via email:
Country Director 9: ‘‘We were sent back a kind of
world-wide kind of scoring sheet and obviously all that
that had was a series of numbers. Sort of 1, 2, 3 ?lled
with red and green and orange. Although to be honest
we came second in the world. . .I feel quite sorry for
the countries that have scored quite low because I really
don’t think it’s a valid scoring system. . .but for me it
was quite handy to be able to say this and say ‘‘maybe
next year- ?rst’’ and all the rest.’’
‘‘How do you know you are second in the world? Was
there some kind of ranking?’’
Country Director 9: ‘‘Yeah, they sent us a ranking. They
sent us a worldwide ranking thing afterwards.’’
‘‘And so countries were rank ordered from 1 to 34?’’
Country Director 9: ‘‘Yeah it was a summary of results
with ranking. And you could look against different
scores so you could see that. . .globally you came second
on something and third on something else but then
there was an overall sort of score.’’
(Interview, Country Director 9, November 2008)
Despite reference to an ‘overall sort of score’, the
spreadsheet did not contain a summary score for each
country and countries were not ranked, but listed in alpha-
betical order. As such, Country Director 9’s country only
appeared ‘second in the world’ by virtue of its name begin-
ning with a letter near the beginning of the alphabet. Other
similar stories emerged of countries with names that
started with letters towards the end of the alphabet believ-
ing that they had performed poorly. These examples were
the source of much joking at IPG staff meetings in London,
with suggestions that ‘Albania’ will be top of the league ta-
ble next year, and that the solution was to put countries in
reverse alphabetical order. This light-heartedness about
rankings belied an appreciation of how aware country
directors were of the competitive mantra that lay behind
the spreadsheet’s distribution, and how this was viewed
as sti?ing learning opportunities, as one country director
commented:
‘‘When this whole [QF] thing was being started, some
of the conversations were framed around what are
the indicators of quality in a programme of?ce that
is doing well. How do we assess whether Ghana is
better than Zimbabwe or vice versa? So I think the
framing of the conversations around that time kind
of planted the seeds of a league table. . .as long as
people continue to see it [the QF] as a league table
then we might see each other as competitors and
therefore everybody [will keep] what he or she is
doing very close to their chest’’ (Interview, Country
Director 6, November 2008).
Importantly, several features of the spreadsheet served to reinforce the 'league table mentality' noted by the country director. First, it 'was a series of numbers' and did not contain any of the narrative discussion. Second, only the overall element scores were displayed, without the specific indicator scores. Third, each element score of 1–4 was assigned a colour to make differences between scores in the spreadsheet visually distinct, with 'Low' performers particularly prominent as scores of '1' were assigned the colour red. This led regional directors to question the use of scores to promote learning, suggesting that it was their role, rather than that of a spreadsheet, to direct country directors to examples of good practice. More fundamentally, however, not only was comparison of countries in a spreadsheet not considered helpful for sharing best practice, but the uniqueness of each country also made such comparisons 'unfair':
Regional Director 1: ‘‘I don’t see the value of knowing
that, for example, on maybe even six of the twelve cri-
teria, West Africa comes out worse than say South-East
Asia because my interpretation instinctively would be
what are the cultural, educational, historical back-
ground, you know, accumulation of circumstances in
South-East Asia that means that they’re in a completely
different environment.’’
Regional Director 4: ‘‘It [a league table] makes it [com-
parisons] into a competition essentially.’’
PLA Staff 1: ‘‘Yeah, but it’s an unfair competition. . .It’s
like getting Marks & Spencers compared with Pete’s
Café across the road where you’ve got totally different
contexts.’’
29
(QF meeting, March 2009)
Claims of unfairness speak to tensions arising because the principle that country uniqueness is important was not being respected. The process of reducing performance on an element to a standardized metric was seen by country directors to have ensured that the contextual information required to understand these scores had been stripped away. Even when this information was present in the narrative section, it did not accompany the scores in the spreadsheet and was thus seemingly ignored (or considered too difficult to take into account). In this way, the ideals embodied in different modes of evaluation did not co-exist, as the values of competition had in effect 'swamped' the values of learning and country uniqueness.
This situation was also problematic because improving quality through learning was an explicit ambition of the QF, with one of its stated purposes being ''to help identify strengths and areas for improvement across programmes so that steps can be taken to grow programme quality and foster learning'' (emphasis in original) (QF Guidance documents, 2008 and 2009). Country directors, however, felt that there was no formal mechanism for sharing learning between programme offices. Consequently, the view amongst country directors was that learning from the QF had been given a relatively low priority, and its potential for sharing good practice went unfulfilled. Seeking to reorder these priorities was an important debate in preparing the QF for its second year.
Reordering priorities of learning and competition
Over a series of three meetings, IPG staff debated ranking programme offices using QF data. This excerpt is from the first meeting in March 2009:
Regional Director 3: ''You don't have to send it [the scores] out in a table to everybody but you can go and look and say 'right, ok, this country over here does really good volunteer engagement, why don't we arrange some sort of visit or some sort of support from that', that would absolutely make sense but to send something out and say look for the [countries that have scored] fours and talk to them doesn't.''
Regional Director 2: [. . .] ''I think we all agree there's a reason to link people up according to where there is good practice, or good performance and there are ways to do that that aren't about a published table. So, who's for a published table, who's against a published table, who's for a published table at this stage?''
[General laughter]
Regional Director 4: ''Obviously we [regional directors] are discouraging''.
Director, IPG: ''I would reflect on it a bit further. . .I'm not sure how helpful it is when the table only had the element scoring and it's very subjective. . .so I think work out if anyone else found it useful. . .I'm thinking all of this is worth picking up again in that next meeting.''
Here, regional directors were generally against the idea of a spreadsheet being the vehicle for identifying good practice, instead arguing that they should take an active role in linking up programme offices to help improve performance. In particular, there was a strong argument against using scores of four (the highest score possible on an element) to identify what is considered good practice. The IPG Director believed that using the QF to create competition was sound, but could see weaknesses in the mechanics of the league table ('only. . .element scoring', 'very subjective'). As such, he was not yet convinced of the league table's apparent inappropriateness and deferred any decision to the next meeting.
Convened in May 2009, the next QF meeting was focused on convincing the IPG Director (not in attendance) that a ranking was not appropriate:
Regional Director 3: ''I'm just wondering what the reason for having a league table is.''
PLA Staff 1: ''I think it's the idea you publish information and then people will be shamed, people will feel they got a low performance, they will feel forced to have to make improvement because it's public.''
PLA Staff 3: ''There is a real danger of labelling them [programme offices], isn't there? That's what's really horrible about this because someone then gets labelled as being the office that's rubbish at volunteer engagement or the one that's great at such and such.''
PLA Staff 1: ''Yeah, yeah I agree, yeah. The reason I really don't like it, I don't see how an organization's [that's] about volunteering and is very personal how that. . .sort of. . .philosophy could really fit with this [league table], but the other thing is I think it will change the quality framework from being a learning tool. . .my real fear is if you publish the scores people get fixated on doing well on particular indicators, which we're now saying aren't good enough, rather than the spirit of trying to actually improve. . .so I think it's a combination of philosophy in terms of what VSO is about but also, you know, keeping the quality framework as something that is a learning tool.''
In contrast to the arguments used in the first meeting, this criticism was more fundamental, in that it directly criticized the very principles upon which the spreadsheet and (apparent) ranking system was based. Here, the use of competition to 'label' and 'shame' programme offices into improvements was viewed as 'horrible' and the league table considered incompatible with the purpose of the QF as a learning tool. Finally, and perhaps most tellingly, ranking programme offices was viewed as being against the ideals of 'volunteering' and personal engagement that are considered critical to VSO's philosophy, as expressed above by PLA Staff 1. In June 2009, a third and final meeting to debate the league table issue was convened. The above arguments were used in this QF meeting with the IPG Director, and a compromise agreed:
Director, IPG: ''All right, let's do it a different way. . .let's ask each element leader to highlight confidentially where they think there are real concerns.''
PLA Staff 2: ''So, ok, that's fine, then what? What happens to that information?''
Director, IPG: ''So basically the element leaders are informing discussions about where we might prioritize. . .so the top three is highlighting good practice, giving an indication to countries across the world where they might want to talk to in terms of good practice, and the bottom one is just confidential for management purposes.''
Importantly, the earlier appeals by the IPG Director to improve the mechanics of the league table were no match for arguments undermining the very principle upon which it was based. As such, the compromise between competition and learning, between comparisons and sharing good practice, was resolved by abolishing the league table and replacing it with a new practice of differential disclosure. That is, the identity of good performers would be made public whereas the identity of poor performers would be kept confidential. Disclosure of good performers would allow the sharing of good practice between programme offices, and disclosure of poor performers to the IPG Director would allow management action to be taken without 'naming and shaming' programme offices in the process. It is here that debates about the league table facilitated productive discussion between those who viewed 'competition' as the route to improvement and those who saw learning as the way to increase quality. Unlike the debates over consistency in scoring, discussion was not focused on the QF per se, but was connected to broader principles, such as uniqueness and innovation, competition and a volunteering ethos. In this way, principled argument led to a compromise between different evaluative principles, despite strong enthusiasm for the spreadsheet and ranking system to remain.
Epilogue
Towards the end of the field study, a review of existing ''Quality Initiatives at VSO'' was conducted, including the QF. While analysis of the QF was generally favourable, numerous ''areas of improvement'' were suggested:
Its holistic and coherent nature allows people to think more broadly and reflect on the progress of the whole programme. . .The process of doing the report makes people take stock, consider areas of improvement and make action plans accordingly. . .It seems that the QF is not referred to or used as often as people would like. . .The numbers are not useful because they are too mechanistic, yet subjective and inconsistent across POs. . .[Self-assessment] is a great way for the PO to take stock and think about their performance and how to make improvements. But many people feel that this needs some kind of external support and verification. . .although the design of the framework is quick and simple. . .It has become too long and its design means that indicators are 'set in stone' to a certain degree in order to make comparisons from one year to the next. (Review of Quality Initiatives at VSO document, 2010).
This analysis reveals that the QF enabled broad thinking but was not used enough; self-assessment helped to 'take stock' but needed external verification; and the QF was simple, yet too long. We see that positive features of the QF that were closely aligned to one mode of evaluation inevitably gave rise to suggestions for improvement that sought to address the concerns of those with different evaluative principles. The evaluation highlights how the compromises being made in the design and operation of the QF were not 'resolved' but formed a series of temporary settlements (cf., Gehman et al., 2013; Kaplan & Murray, 2010; Stark, 2009) between different evaluative principles. In this way, the process of establishing and maintaining compromises between different modes of evaluation can be seen as a dynamic and enduring feature of a compromising account.
Discussion
Taking tensions between different logics and values as the starting point for our analysis, this study has focused directly on how accounting is implicated in compromising between different evaluative principles and the way in which such compromise can be productive or unproductive. Accounts are particularly important in settings of conflicting values because they are sites where multiple modes of evaluation all potentially operate at once (Stark, 2009). Our field study shows how VSO's attempts to measure the performance of its programme offices brought together differing modes of evaluation, one based primarily on 'Learning and Uniqueness' and the other based primarily on 'Consistency and Competition', where each mode of evaluation was distinguished according to its purpose, the desirable attributes of a good evaluation and subsequently the desirable attributes of a good account (see Table 2).
Making choices about indicators, types of scoring processes, the identification of good and poor performers, and different methods of data analysis created sites for debate between individuals and groups who espoused these different evaluative principles (Stark, 2009; Jay, 2013; Gehman et al., 2013; Moor & Lury, 2011; Denis et al., 2007). In this way, our analysis reveals how an account itself can act as an agent in the process of compromise between different evaluative principles. A compromising account is thus both the process of, and at particular moments the specific outcome of, a temporary settlement between different modes of evaluation. Analogous to Chua's (2007) discussion of strategizing and accounting, this draws attention to a compromising account as both a noun, i.e., the account itself that is produced in some material form (e.g., a balanced scorecard, a financial report), and as a verb, i.e., the processes of compromise that lead to and follow on from the physical production of an account.
Our study shows that differences in the design and operation of accounting practices can affect the extent of compromise between different evaluative principles, and whether such compromise is productive or unproductive. In particular, our findings reveal that the potential for accounts to provide a fertile arena for productive debate is related to three important processes: (1) imperfection – the extent to which the design and operation of accounting practices represents a 'give and take' between different evaluative principles; (2) concurrent visibility – the way in which desirable attributes of accounts are made visible in the design and/or operation of the accounting practice; and (3) the extent to which discussions concerning potential problems with the accounting practice are focused on underlying evaluative principles (vs. mechanics and technical considerations). In the discussion below we elaborate the characteristics of these processes, and then conclude the paper by outlining the implications for future research and highlighting the insights for practice.
‘Imperfection’ and the potential for ‘productive friction’
In organizational settings with multiple and potentially competing evaluative principles, the development of compromises reflects a temporary agreement (Gehman et al., 2013; Kaplan & Murray, 2010; Stark, 2009). In this setting, rather than reaching closure, the development and operation of compromising accounts entails on-going adjustment (cf., Gehman et al., 2013). This was clearly evident in VSO's QF, which was subject to on-going criticism and refinement and was 'loved by no-one.' We suggest that it is the 'imperfect' nature of the QF that was pivotal to its continued existence as a compromising account. We see that the constant shifting and rebalancing in the QF's design and operation enabled the co-existence, albeit often temporary, of different modes of evaluation. Changes privileging one mode of evaluation, such as a focus on a more rigorous and consistent scoring process, were accompanied by changes that shifted the emphasis back to another mode of evaluation, such as ensuring the analysis of QF data included a pairing of numbers with narrative. It was this 'give and take' between different modes that helped to resist pressures for recourse to a single and therefore ultimately dominant mode of evaluation (cf., Thévenot, 2001), and enabled productive friction to arise from the coming together of different evaluative principles. In this way, compromises involving multiple evaluative principles are inherently 'imperfect' when enacted in practice (cf. Annisette & Richardson, 2011).
We see our findings in this regard as having parallels with recent literature on the 'imperfect' nature of performance measures (see, for example, Andon et al., 2007; Dambrin & Robson, 2011; Jordan & Messner, 2012). These studies often stress the importance of organizational actors 'making do' with the existing performance measurement system, despite its perceived imperfections. For example, Bürkland, Mouritsen, and Loova (2010) show how actors compensate for 'imperfect' performance measures by using other information, while Jordan and Messner (2012) find that actors respond to incomplete performance measures in two ways: by trying to repair them or by distancing themselves from the measures. However, in our study we find that rather than organizational actors merely 'making do' with imperfect performance measures, it was these 'imperfections' that helped to provide a fertile arena for productive dialogue and discussion between individuals and groups with differing values (cf., Denis et al., 2007; Gehman et al., 2013; Jay, 2013; Moor & Lury, 2011; Stark, 2009). In this way accounts can play a role in surfacing latent paradoxes and providing space to work out ways to combine different evaluative principles (Jay, 2013). The struggles between different evaluative criteria can prompt those involved to engage in deliberate consideration of the merits of existing practices (Gehman et al., 2013). Here we see the importance of the accommodation of different perspectives and recognition by actors that the proposed solution (in our case the QF), although not perfect, provides a fitting answer to a problem of common interest (cf. Huault & Rainelli-Weiss, 2011; Samiolo, 2012).
‘Imperfect’ accounts, such as VSO’s QF, are therefore
not just about ‘making do’, but can create opportunities
for bringing together competing value systems and, thus,
the potential for what Stark (2009: 19) terms ‘productive
friction.’ This was most evident in the league table de-
bates, where discussions between actors with different
evaluative principles led to changes in the use of spread-
sheets and element summaries that recognized a reor-
dering of the priorities between learning and
competition. Here we see the role of compromising ac-
counts as creating a form of organized dissonance, that
is, the tension that can result from the combination of
two (at least partially) inconsistent modes of evaluation.
A compromising account can thus be a vehicle through
which dialogue, debate and productive friction is pro-
duced, where it is the discussion that can result from
having to compromise on the design and operation of
an account that can be productive.
Concurrent visibility
But how does a compromising account enable organized dissonance? Our study indicates that an important feature of a compromising account is that of 'concurrent visibility.' To facilitate organized dissonance it was critical that the QF made visible the features of an account that were important to different groups. We use the term 'visible' in a broad sense to refer to how the design and operation of a compromising account reveals the attributes of accounts that are important to organizational actors with different evaluative principles. For example, in the physical format of the QF, indicators were accompanied by narrative boxes, which enabled compromise between the evaluative principles of standardization and country uniqueness. In addition, the differential disclosure of good and poor performing countries (after the league table was abolished) facilitated compromise between the evaluative principles of learning and competition. The concurrent use of these different features gave visibility to the importance of different modes of evaluation. This resonates with Nahapiet (1988), where the resource allocation formula helped to make values more visible and tangible and prompted explicit consideration of three fundamental organizational dilemmas. More generally, it resonates with the way in which instruments like accounting and performance measurement systems are well suited to rendering visible the multiplicity of criteria of evaluation (Lamont, 2012).
We suggest that where the co-existence of different evaluative principles is an on-going feature of organizations, organizational actors are likely to be particularly concerned that their fundamental principles may not be respected and thus come to be dominated by others (cf., Denis et al., 2007). It is here that 'concurrent visibility' in a compromising account can provide confirmation and reassurance that a particular mode of evaluation is, indeed, recognized and respected, thus making productive debate more likely. The visibility of different evaluative principles in the account also serves to crystallize the compromise between them in a material form (cf., Denis et al., 2007).
The importance of concurrent visibility is evident by contrasting the views of the QF at the end of the first and second years of operation. The features of the QF during its first year of operation (narrative, local knowledge, judgement, common elements and indicators) gave explicit recognition to different evaluative principles and thus helped to develop a compromise between values of standardization and country uniqueness. In contrast, changes to make the QF more consistent removed many of the features that recognized country uniqueness as an important evaluative principle. Subsequently, the initial praise for the QF had dissipated and was replaced by 'endless' disagreements and critical feedback, which resulted in a situation where actors were 'stuck' between different evaluative principles (Jay, 2013).
Our study also reveals, however, that there are limits to the way in which concurrent visibility can facilitate organized dissonance, particularly where the strategy is 'additive.' That is, to address the evaluative principles favoured by different organizational actors, the account can simply encompass more and more of those desired features. Over time, however, the account is likely to become cumbersome and unwieldy, as we saw with the QF when, at the end of its second year of operation, it was described as 'so big, so many indicators.' As such, without careful attention, concurrent visibility could potentially be directed towards the appeasement of different modes of evaluation rather than serving as a necessary entry point for productive discussion over the merits of different evaluative principles.
Criticisms of accounts and breakdowns in compromise
Our study also highlights an important distinction between the types of responses that can emerge in situations where compromises break down and accounting practices are viewed as 'not working.' One criticism of the QF concerned the presentation of scores in a spreadsheet and the subsequent illusion of a league table ranking of countries according to their overall performance. Such a practice was viewed as privileging the value of 'competition' above that of 'learning' and was thus primarily a debate about the principles and values underlying the use and operation of the league table (cf., Gehman et al., 2013). Here, there was a passionate response from those actors who felt that a fundamental principle was not being respected (Denis et al., 2007), particularly that the league table ignored their belief that the performance and hence value of country programmes was 'incommensurable' (Espeland & Stevens, 1998). This debate was not about how to 'fix' the league table per se but focused on whether the league table itself was an appropriate practice – revealing a situation where actors reflect at a distance on the values underlying the existing practice (Gehman et al., 2013; Sandberg & Tsoukas, 2011). This helped the actors to confront the latent paradoxes (Jay, 2013) evident in the use of a league table and facilitated 'productive friction' between those who viewed 'competition' as the route to improvement and those who saw learning as the way to increase quality. As a result, a new practice was developed (i.e., differential disclosure of good and bad performers) that helped to integrate different evaluative principles in a more substantive way (Jay, 2013; Stark, 2009).
Another criticism of the QF was directed at its lack of consistency and thus its inability to enable meaningful comparisons of country performance. This was primarily a criticism of the implementation of the QF's scoring process, where discussion focused on what was problematic about the current practice and how to fix it (cf., Gehman et al., 2013; Sandberg & Tsoukas, 2011) and not on whether scoring itself was an issue of concern. As such, subsequent changes to the QF focused on removing features of the existing scoring process that were seen not to align with the value of consistency, and adding features viewed as promoting consistency. Such changes clearly shifted the scoring process of the QF in favour of those organizational actors who held consistency in scoring as an essential feature of an evaluation process. Rather than integrating different perspectives, however, this response can be characterized by oscillation and 'stuckness' (Jay, 2013) between the evaluative principles of consistency and country uniqueness. Furthermore, as these debates were primarily focused on technicalities, they took up valuable meeting time that effectively prevented meaningful engagement (i.e., 'productive friction') with the underlying principles. This resonates with Stark's (2009) warning that disputes over the mechanics of existing practices can limit effective changes and result in endless disagreements where nothing is accomplished.
Conclusion
Our study has highlighted the importance of examining the role of accounting in facilitating (or not) compromises in situations of multiple evaluative principles. Our results indicate that much can be learned by focusing on how accounts can potentially bring together differing (and often competing) evaluative principles, where such encounters can generate productive friction, or lead to the refinement of accounting practices and 'endless' debate and discussion over technicalities and the mechanics of the account. We view accounts as central to processes of compromise in organizations because it is in discussions over the design and operation of accounts that the worth of things is frequently contested by organizational actors. Drawing on Stark's (2009) concept of organizing dissonance, our study shows that there is much scope for future research to examine how accounts can create sites that bring together (or indeed push apart) organizational actors with different evaluative principles, and the ways in which this 'coming together' can be potentially constructive and/or destructive.
Our analysis also has implications for the ways in which performance measures and other accounting information can be mobilized by managers and practitioners as a resource for action (cf., Ahrens & Chapman, 2007; Hall, 2010). In particular, our results indicate that 'imperfect' performance measures can actually be helpful, that is, they can be used by practitioners to generate productive dialogue, despite, or, as our analysis shows, because of, their perceived imperfections. This resonates with Stark (2009), who argues that entrepreneurship is the ability to keep multiple evaluative principles in play and exploit the friction that results from their interplay. Here, the 'imperfect' nature of compromising accounts can enable skilful organizational actors to keep multiple evaluative principles in play. In contrast, a focus on the continual refinement of accounts and a quest for 'perfection' can lead to the domination of a single evaluative principle, 'distancing' organizational actors who hold different evaluative principles, and limiting opportunities for productive friction.
A further implication of our study is to promote further research on how performance measurement systems, and accounting practices more broadly, are actually developed in organizations (cf., Wouters & Wilderom, 2008). In particular, we analyzed the different responses that can occur when compromises break down and how they relate to the potential for productive friction. More broadly, it is unclear how organizational actors negotiate the development of performance indicators and what types of responses and arguments prove (un)successful in these encounters. This could prove a fruitful area for future research.
We conclude by outlining the practical implications of our study, which centre on imperfection and concurrent visibility. Although practitioners are no doubt aware of the need to 'make do' with the perceived inadequacies of performance measures, our study indicates that the productive discourse arising from performance measurement is perhaps more important than ensuring that such measures (or accounts more generally) are 'complete.' Our analysis of concurrent visibility indicates that practitioners should ensure that features of accounts that are of fundamental importance to particular groups are explicitly recognized, whether in the material content of the account, the associated scoring and evaluation processes, or in its use in wider organizational practices.
Acknowledgements
We would like to thank David Brown, Chris Chapman, Paul Collier, Silvia Jordan, Martin Messner, Yuval Millo, Brendan O'Dwyer, Alan Richardson, Keith Robson, Wim Van der Stede, seminar participants at Cardiff Business School, Deakin University, HEC Paris, La Trobe University, London School of Economics and Political Science, Turku School of Economics, and University of Technology Sydney, and conference participants at the Conference on New Directions in Management Accounting 2010 and the Management Accounting as Social and Organizational Practice workshop 2011 for their helpful comments and suggestions. The support of CIMA General Charitable Trust is gratefully acknowledged.
Appendix A: Strategic resource allocation tool
Assessment of Programme effectiveness
Country:
The higher the overall percentage a Programme Office receives in this tool, the more ''effective'' it will be perceived to be based on this measure.
Section A. Focus on disadvantage (48% of total score)
1. HDI (17% of total score)
2. Percentage of more disadvantaged people being reached through implementation of CSP aim (10% of total score)
3. Scored analysis of how well strategies are working in addressing the causes of disadvantage (10% of total score)
4. Disadvantage Focus in Current and Planned Placements (11% of total score)

Section B. Outputs of country programme (27% of total score)
5. What % of placements in the last 2 planning years have fully or mostly met their objectives (not including early return reports)? (13% of total score)
6. What was the Early Return rate (excluding medical and compassionate) over the last two planning years? (4% of total score)
7. What percentage of the ACP target of fully documented requests (i.e. with Placement Descriptions) was submitted on time over the last 3 planning years? (5% of total score)
8. What percentage of the ACP target number of volunteers was in country on 31/3/01, 31/3/00 and 31/3/99? (5% of total score)

Section C. Strategic approach (25% of total score)
Note that the statements attached to each score are for guidance and are not absolute statements: we recognise that with some programmes no one statement will accurately describe the programme. The RPM must have a clear idea of the rationale behind the scoring, in order to ensure transparency and to allow comparison between countries. All of your scores should be based on an analysis of the current situation – i.e. not future strategy or placements.
9. Strategic approach based on programme at the current time:
(a) Placements working at different levels (micro/macro) towards strategic aims + planned links between them (4% of total score)
(b) Critical appraisal of placements with clear rationale linking placement to strategic aim + planned exit strategy (4% of total score)
(c) Strategic and linked implementation of cross cutting themes (2% of total score)
(d) In-country advocacy by the programme office (2% of total score)
(e) PO proactive in promoting increased development understanding amongst volunteers (2% of total score)
(f) Openness and commitment to learning (5% of total score)
(g) Genuine partnership relationship with employers and other development actors (4% of total score)
(h) Types of placements used most appropriate to needs of disadvantaged groups and based on strategic reasoning (2% of total score)
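The section weights above sum to 100% (48 + 27 + 25), and the individual measure weights within each section sum to that section's share. As a minimal sketch of the arithmetic implied by the tool, assuming each measure contributes in proportion to how fully it is achieved (an assumption, since the tool's detailed scoring rules for individual measures are not reproduced here), the overall effectiveness percentage can be read as a weighted sum:

\[ \text{Overall effectiveness (\%)} = \sum_{i} w_i \cdot \frac{s_i}{s_i^{\max}}, \qquad \sum_{i} w_i = 48 + 27 + 25 = 100, \]

where \(w_i\) is the weight of measure \(i\) (for example, 17 for HDI) and \(s_i / s_i^{\max}\) is the assessed proportion of that measure achieved; on this reading, a programme office meeting every measure in full would score 100%.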
Appendix B
QF summary sheet 2008.
Name of Programme: [please enter name of country here]

Indicator results:
A.1 Annual Progress in PAP objectives is achieved: 4
A.2 Programmes and partners monitor and review progress in capacity development and/or service delivery: 4
B.1 Positive changes for target groups of partners are achieved: 4
B.2 Programmes and partners are able to understand, monitor and review changes for target groups: 4
1.1 PO is responsive to changes in social, economic and political context: 4
1.2 PO is consulted by peer agencies and/or government bodies as a credible development agency within its field of operation: 4
1.3 Programmes working at different levels (e.g. national, provincial, districts, grass-roots) towards strategic aims: 4
2.1 The contribution of National Volunteering (NV) to programme delivery has been maximised: 4
2.2 The contribution of a range of different interventions to programme delivery has been maximised: 4
2.3 Development awareness amongst volunteers and the wider community has been maximised: 4
2.4 Opportunities to develop international resource partnerships are fully explored and developed: 4
2.5 The contribution of advocacy to programme delivery has been maximised: 4
3.1 LTV/YFD & STV arrivals during year against reforecast plans: 4
3.2 Firm and documented placement delivery against reforecast plans: 4
3.3 Quality of placement documentation: 4
3.4 PIP milestones successfully completed across all programmes: 4
4.1 The PO has an inclusion statement and standards that are shared among all staff and that new staff sign. This is for both programme work and how the programme office is run: 4
4.2 Partner organisations include excluded groups in their work and as part of their target group: 4
5.1 Number of PIPs updated and signed off annually as a result of PARs in line with guidance: 4
5.2 All programmes are reviewed annually in line with guidance: 4
6.1 Portfolio of partners in place relevant to PAP and CSP objectives: 4
6.2 Long-term (3-5 years) Partnership Plans are in place which include partnership objectives that are linked to the PAP objectives: 4
6.3 Partners are actively involved in programme development and review: 4
6.4 Partnerships are reviewed annually to assess progress towards Partnership and PAP objectives and quality of the relationship with VSO: 4
7.1 Volunteer support baselines are being met by the Programme Office and partners are supported to manage volunteers: 4
7.2 PO celebrates volunteer achievement, responds to volunteer problems effectively and encourages the development of effective and accountable volunteer groups: 4
7.3 Volunteers are engaged in programme development: 4
8.1 Value of proposals signed off by the CD/RPM against agreed quality criteria in Stage 2 of PMPG: 4
8.2 Restricted income as % of PO Total Budget (including vol. recruitment costs): 4
8.3 Donor conditions for existing funding have been met (including financial, narrative and audit reports submitted on time and to the standard required by the donor) throughout the year: 4
9.1 Percentage of total approved PO managed budget (restricted and unrestricted) budgeted on PC and VC costs in 2008/09 (Global average = 40%; regional averages range from 32% to 57%): 4
9.2 Possible areas of saving against costs identified during budget setting process through innovation and creative thinking: 4
9.3 Percentage of total revised PO managed budget (restricted and unrestricted) budgeted on staff costs in 2007/08: 4
10.1 Annual programme office expenditure variance (restricted plus unrestricted) for 08/09 against budget adjusted for macro-forecast: 4
10.2 Volunteer Unit Cost based on 08/09 budget: 4
11.1 Performance management systems are being actively implemented: 4
11.2 Evidence of major HR policies and systems being adhered to: 4
12.1 Number of outstanding category A and B internal audit actions for PO action relating to legal compliance: 4
12.2 Security Risk management plans signed off and implemented and tested according to the Country's main security risks (e.g. avian flu, security, natural disasters etc.): 4

Element results:
Programme outcomes at partner level: 4
Programme impact at beneficiary level: 4
Relevant and ambitious strategic plans are evolved in response to the development needs of the country's disadvantaged communities: 4
Appropriate and innovative use of development interventions to deliver programme outcomes and impact: 4
Programme delivery against plans: 4
Inclusion: 4
Planning and review: 4
Partnership development and maintenance: 4
Volunteer engagement and support: 4
Programme funding: 4
Cost effectiveness: 4
Financial management: 4
Staff management and support: 4
Legal and policy compliance and risk management: 4

References

Ahrens, T., & Chapman, C. S. (2002). The structuration of legitimate performance measures and management: Day to day contests of accountability in a UK restaurant chain. Management Accounting Research, 13, 151–171.
Ahrens, T., & Chapman, C. S. (2004). Accounting for flexibility and efficiency: A field study of management control systems in a restaurant chain. Contemporary Accounting Research, 21, 271–301.
Ahrens, T., & Chapman, C. S. (2006). Doing qualitative field research in management accounting: Positioning data to contribute to theory. Accounting, Organizations and Society, 31(8), 819–841.
Ahrens, T., & Chapman, C. S. (2007). Management accounting as practice. Accounting, Organizations and Society, 32(1–2), 1–27.
Andon, P., Baxter, J., & Chua, W. F. (2007). Accounting change as relational drifting: A field study of experiments with performance measurement. Management Accounting Research, 18, 273–308.
Annisette, M., & Richardson, A. J. (2011). Justification and accounting: Applying sociology of worth to accounting research. Accounting, Auditing and Accountability Journal, 24, 229–249.
Annisette, M., & Trivedi, V. U. (2013). Globalisation, paradox and the (un)making of identities: Immigrant Chartered Accountants of India in Canada. Accounting, Organizations and Society, 38(1), 1–29.
Baines, A., & Langfield-Smith, K. (2003). Antecedents to management accounting change: A structural equation approach. Accounting, Organizations and Society, 28(7–8), 675–698.
Biggart, N. W., & Beamish, T. D. (2003). The economic sociology of conventions: Habit, custom, practice, and routine in market order. Annual Review of Sociology, 29, 443–464.
Bird, D. (1998). Never the same again: A history of VSO. Cambridge: Lutterworth Press.
Boltanski, L., & Thévenot, L. (1999). The sociology of critical capacity. European Journal of Social Theory, 2(3), 359–377.
Boltanski, L., & Thévenot, L. (2006). On justification: The economies of worth (C. Miller, Trans.). Princeton: Princeton University Press.
Briers, M., & Chua, W. F. (2001). The role of actor-networks and boundary objects in management accounting change: A field study of an implementation of activity-based costing. Accounting, Organizations and Society, 26(3), 237–269.
Bürkland, S., Mouritsen, J., & Loova, R. (2010). Difficulties of translation: Making action at a distance work in ERP system implementation. Working paper.
Cavalluzzo, K. S., & Ittner, C. D. (2004). Implementing performance
measurement innovations: Evidence from government. Accounting,
Organizations and Society, 29(3–4), 243–267.
Chahed, Y. (2010). Reporting beyond the numbers: The reconfiguring of accounting as economic narrative in accounting policy reform in the UK. Working paper.
Chenhall, R. H., Hall, M., & Smith, D. (2010). Social capital and
management control systems: A case study of a non-government
organization. Accounting, Organizations and Society, 35(8), 737–756.
Chua, W. F. (2007). Accounting, measuring, reporting and strategizing –
Re-using verbs: A review essay. Accounting, Organizations and Society,
32(4–5), 487–494.
Cooper, D. J., Hinings, B., Greenwood, R., & Brown, J. L. (1996). Sedimentation and transformation in organizational change: The case of Canadian law firms. Organization Studies, 17, 623–647.
Dambrin, C., & Robson, K. (2011). Tracing performance in the pharmaceutical industry: Ambivalence, opacity, and the performativity of flawed measures. Accounting, Organizations and Society, 36(7), 428–455.
Dart, J., & Davies, R. (2003). A dialogical, story-based evaluation tool: The Most Significant Change technique. American Journal of Evaluation, 24, 137–155.
Davies, R., & Dart, J. (2005). The ‘most significant change’ (MSC) technique: A guide to its use. Accessed 20.09.11.
Denis, J.-L., Langley, A., & Rouleau, L. (2007). Strategizing in pluralistic
contexts: Rethinking theoretical frames. Human Relations, 60(1),
179–215.
Dent, J. F. (1991). Accounting and organizational cultures: A field study of the emergence of a new organizational reality. Accounting, Organizations and Society, 16(8), 705–732.
Department for International Development (2000). Eliminating world
poverty: Making globalisation work for the poor.
Department for International Development (2006). Eliminating world
poverty: Making governance work for the poor.
Department for International Development (2009). Eliminating world
poverty: Building our common future.
Eisenhardt, K. M. (1989). Building theories from case study research.
Academy of Management Review, 14(4), 532–550.
Espeland, W. N., & Stevens, M. (1998). Commensuration as a social
process. Annual Review of Sociology, 24, 312–343.
Ezzamel, M., Willmott, H., & Worthington, F. (2008). Manufacturing
shareholder value: The role of accounting in organizational
transformation. Accounting, Organizations and Society, 33(2–3),
107–140.
Fischer, M. D., & Ferlie, E. (2013). Resisting hybridisation between modes of clinical risk management: Contradiction, contest, and the production of intractable conflict. Accounting, Organizations and Society, 38(1), 30–49.
Free, C. W. (2008). Walking the talk? Supply chain accounting and trust
among UK supermarkets and suppliers. Accounting, Organizations and
Society, 33(6), 629–662.
Garud, R. (2008). Conferences as venues for the configuration of emerging organizational fields: The case of cochlear implants. Journal of Management Studies, 45, 1061–1088.
Gehman, J., Trevino, L., & Garud, R. (2013). Values work: A process study
of the emergence and performance of organizational values. Academy
of Management Journal, 56(1), 84–112.
Gendron, Y. (2002). On the role of the organization in auditors’ client-
acceptance decisions. Accounting, Organizations and Society, 27,
659–684.
Gibbs, M., Merchant, K. A., Van der Stede, W. A., & Vargus, M. E. (2004).
Determinants and effects of subjectivity in incentives. The Accounting
Review (April), 409–436.
Hall, M. R. (2008). The effect of comprehensive performance
measurement systems on role clarity, psychological empowerment
and managerial performance. Accounting, Organizations and Society,
33(2–3), 141–163.
Hall, M. R. (2010). Accounting information and managerial work.
Accounting, Organizations and Society, 35(3), 301–315.
Helmig, B., Jegers, M., & Lapsley, I. (2004). Challenges in managing nonprofit organizations: A research overview. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 15, 101–116.
Hopgood, S. (2006). Keepers of the flame: Understanding Amnesty International. Ithaca: Cornell University Press.
Huault, I., & Rainelli-Weiss, H. (2011). A market for weather risk? Conflicting metrics, attempts at compromise, and limits to commensuration. Organization Studies, 32(10), 1395–1419.
Jagd, S. (2007). Economics of convention and new economic sociology:
Mutual inspiration and dialogue. Current Sociology, 55(1), 75–91.
Jagd, S. (2011). Pragmatic sociology and competing orders of worth in
organizations. European Journal of Social Theory, 14(3), 343–359.
Jay, J. (2013). Navigating paradox as a mechanism of change and
innovation in hybrid organizations. Academy of Management Journal,
56(1), 137–159.
Jordan, S., & Messner, M. (2012). Enabling control and the problem of
incomplete performance indicators. Accounting, Organizations and
Society, 37(8), 544–564.
Kaplan, S., & Murray, F. (2010). Entrepreneurship and the construction of value in biotechnology. In N. Phillips, G. Sewell, & D. Griffiths (Eds.), Technology and organization: Essays in honour of Joan Woodward (Research in the Sociology of Organizations, Vol. 29, pp. 107–147). Emerald Group Publishing Limited.
Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard – Measures
that drive performance. Harvard Business Review, 71–79 (January–
February).
Lamont, M. (2012). Toward a comparative sociology of valuation and
evaluation. Annual Review of Sociology, 38, 201–221.
Lounsbury, M. (2008). Institutional rationality and practice variation:
New directions in the institutional analysis of practice. Accounting,
Organizations and Society, 33, 349–361.
McInerney, P.-B. (2008). Showdown at Kykuit: Field-configuring events as loci for conventionalizing accounts. Journal of Management Studies, 45, 1089–1116.
Moers, F. (2005). Discretion and bias in performance evaluation: The
impact of diversity and subjectivity. Accounting, Organizations and
Society, 30(1), 67–80.
Moor, L., & Lury, C. (2011). Making and measuring value. Journal of
Cultural Economy, 4, 439–454.
Nahapiet, J. (1988). The rhetoric and reality of an accounting change: A
study of resource allocation. Accounting, Organizations and Society, 13,
333–358.
Nicholls, A. (2009). We do good things, don’t we? ‘Blended value
accounting’ in social entrepreneurship. Accounting, Organizations and
Society, 34, 755–769.
Oakes, L. S., Townley, B., & Cooper, D. J. (1998). Business planning as pedagogy: Language and control in a changing institutional field. Administrative Science Quarterly, 43, 257–292.
Parsons, E., & Broadbridge, A. (2004). Managing change in nonprofit organizations: Insights from the UK charity retail sector. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 15, 227–242.
Perera, S., Harrison, G., & Poole, M. (1997). Customer-focused manufacturing strategy and the use of operations-based non-financial performance measures: A research note. Accounting, Organizations and Society, 22(6), 557–572.
Porter, T. (1995). Trust in numbers: The pursuit of objectivity in science and
public life. Princeton, NJ: Princeton University Press.
Robson, K. (1992). Accounting numbers as ‘‘inscription’’: Action at a
distance and the development of accounting. Accounting,
Organizations and Society, 17, 685–708.
Samiolo, R. (2012). Commensuration and styles of reasoning: Venice, cost–benefit, and the defence of place. Accounting, Organizations and Society, 37(6), 382–402.
Sandberg, J., & Tsoukas, H. (2011). Grasping the logic of practice:
Theorizing through practical rationality. Academy of Management
Review, 36, 338–360.
Scott, S. V., & Orlikowski, W. J. (2012). Reconfiguring relations of accountability: Materialization of social media in the travel sector. Accounting, Organizations and Society, 37, 26–40.
Spradley, J. P. (1980). Participant observation. New York: Holt, Rinehart
and Winston.
Stark, D. (1996). Recombinant property in east European capitalism.
American Journal of Sociology, 101, 993–1027.
Stark, D. (2009). The sense of dissonance: Accounts of worth in economic life.
Princeton: Princeton University Press.
Sundin, H. J., Granlund, M., & Brown, D. A. (2010). Balancing multiple
competing objectives with a balanced scorecard. European Accounting
Review, 19(2), 203–246.
Thévenot, L. (2001). Organized complexity: Conventions of coordination
and the composition of economic arrangements. European Journal of
Social Theory, 4(4), 317–330.
Townley, B., Cooper, D., & Oakes, L. (2003). Performance measures and the
rationalization of organizations. Organization Studies, 24(7),
1045–1071.
United Nations (2011). The millennium development goals report 2011.
Accessed
02.08.11.
Vollmer, H. (2007). How to do more with numbers: Elementary stakes,
framing, keying, and the three-dimensional character of numerical
signs. Accounting, Organizations and Society, 32, 577–600.
Voluntary Service Overseas (2004). Focus for change: VSO’s strategic plan. Accessed 10.08.10.
Wouters, M., & Roijmans, D. (2011). Using prototypes to induce
experimentation and knowledge integration in the development of
enabling accounting information. Contemporary Accounting Research,
28(2), 708–736.
Wouters, M., & Wilderom, C. (2008). Developing performance-measurement systems as enabling formalization: A longitudinal field study of a logistics department. Accounting, Organizations and Society, 33(4–5), 488–516.