Accounting, Organizations and Society, Vol. 21, No. 2/3, pp. 289-315, 1996
MAKING THINGS AUDITABLE*
MICHAEL POWER
The London School of Economics and Political Science
Abstract
In contrast to official images of audit as a derived and neutral activity, this essay argues that audit is an
active process of “making things auditable” which has two components: the negotiation of a legitimate
and institution acceptable knowledge base; the creation of environments which are receptive to this
knowledge base. These two components are explored and clarified in relation to three areas where the
concept of auditability has been officially invoked: making public sector research auditable, and hence
accountable; making quality auditable; making brand valuations auditable. The choice of these apparently
marginal practices is deliberate. They provide an opportunity to observe a "logic of auditability" which is
hidden in more established contexts. In the three cases auditability is accomplished by: auditable
measures of performance; systems of control; reliance on other experts. After considering each of these
areas in turn, their general implications for a "constructivist" understanding of the concept of auditability
are developed. The challenges that this may pose for more established frameworks of audit research are
considered briefly. Copyright © 1996 Elsevier Science Ltd.
Unless financial data are verifiable, auditing has no
reason for existence (Mautz & Sharaf, 1961, p. 43)
Attempts to develop “philosophies” of audit
have emphasized the importance of verifiability.
And yet, in contrast to debates about the
“representational faithfulness” of financial
reporting, the concept of “verifiability” has
been relatively immune from controversy and
discussion. Mautz & Sharaf (1961, p. 43) argue
that “whatever word is selected to carry the
connotation of ‘auditability’, there must be
something that auditors do to give them a basis
for expressing an opinion on the reliability of
the financial statements they examine”. Flint
(1988) echoes this claim by postulating that
the “subject matter of audit . . . is susceptible
to verification by evidence" and Wolnizer's (1987) extensive discussion ties both "verifiability" and "auditability" to the idea of "independent testability" in which statements can be tested by reference to independent evidence (see also Flemming-Ruud, 1989, pp. 58-5, 117). Wolnizer argues that the indepen-
dence which really matters in auditing is not so
much that of an ethical “state of mind” but that
of “independent authentication” which has the
virtues of objectivity, publicity and replic-
ability. In making this claim, Wolnizer, like
Mautz and Sharaf, explicates auditing as a
quasi-scientific practice¹ with roots in the
“common human need” to remove doubt and
alleviate anxiety (Lee, 1993, pp. 19-20).
The concepts of “verifiability” and “audit-
ability” are widely regarded as synonymous
* Earlier versions of this essay were presented at the Maastricht Auditing Research Symposium, October 1993, and the Universities of Sheffield and Aberdeen. The author is grateful for the comments of Richard Laughlin, Christopher Napier, Miklos Vasarhelyi, Joni Young and two anonymous reviewers.
¹ One obvious problem with this view is the extent to which it relies on a model of scientific inquiry which has been largely discredited by philosophers of science.
and it is through this identity that conceptual
linkages between financial reporting and audit-
ing exist (Lee, 1993, pp. 23-24). It has been said
that “Verifiability is that attribute of information
which allows qualified individuals working
independently of one another to develop essen-
tially similar measures or conclusions from an
examination of the same evidence, data or
records” and financial statements embody
“rules and procedures” which “leave a trail of
evidence and procedures that can be verified”
(American Accounting Association, 1966, p.
10). Verifiability is also “The ability through
consensus among measurers to ensure that
information represents what it purports to
represent or that the chosen method of mea-
surement has been used without error or
bias” (FASB, 1980, p. xvi). As Solomons (1986,
p. 91) observes, these definitions indicate the
close connection between the idea of verifica-
tion and agreement among observers of mea-
surements. However, the FASB linkage
between verification and measurement is
much looser than that postulated by earlier the-
orists of accounting (Lee, 1993, p. 168) and
generally reinforces the conceptual priority of
decision relevant measurement practices over
verificatory practices (FASB, 1980, para. 81).
The official relationship between reporting
and auditing is based on this: auditing is fun-
damentally a derived activity which adds cred-
ibility to financial statements.
This official image of the “external” audit is
well established in professional literature. This
externality, independence by another name, is
one of the fundamental concepts which gives
audit its value. The notion of externality also
implies a certain technical neutrality in the
manner of intervention in the auditee organiza-
tion: the auditor is a temporary visitor to an
organization charged with monitoring the rele-
vant activities of the organization. The tech-
nologies of audit may be a transient nuisance
but they do not disrupt or transform the opera-
tions of the audited organization other than to
make recommendations for control improve-
ments or to push for adjustments to the financial
statements. Auditors make these recommenda-
tions on the basis of their claimed expertise in
risk assessment procedures, in internal control
technologies and in the rules and regulations
governing financial reporting. In this way,
audits are supposed to “add value”.
From the point of view of these official rheto-
rics, it seems almost self-evident that financial
reporting preexists and is independent of the
audit process. However, even if it is plausible
to argue that, for any particular financial audit,
the financial accounting rules are independent
of, and prior to, the audit process, particular
accounts are nevertheless negotiated within
the audit process; accounts and audits get co-
produced (see Pentland, 1994). The supposed
priority of financial accounting is even less
clear when the systems of financial reporting
and auditing are considered as a whole. A brief
glance at history indicates how these systems
have co-evolved and how, in many cases, audit-
ing values have played a constitutive role for
financial reporting rather than simply “adding
credibility”. For example, a history of account-
ing policy could be written in terms of the
impact of claims for the legal objectivity and
auditability of the historic cost measurement
convention; a rhetoric of “objectivity” has
often played a decisive role in blocking the
development of current value measurement
practices.
Thus, in contrast to conceptual accounts of
their relationship, it could be argued that finan-
cial reporting is simply a sub-system in the lar-
ger system of auditing.² In this essay I argue

² Flemming-Ruud (1989, p. 126) argues for a new conceptual integration between financial reporting and auditing. Like Wolnizer and Chambers, he argues that auditing can be reformed only if financial reporting "reflects reality". Accordingly, he is critical of Mautz and Sharaf's identification of verifiability and auditability and argues that auditing rarely verifies accounting numbers but merely examines allocations and calculations. While he may be correct that verification is a more robust concept than mere inspection, the actual conceptual slippage between concepts of auditability, testability and verification is an important feature of "making things auditable".
that audit evidence is not just “out there” but
must be constructed to count as evidence
within this system of audit knowledge. Audit-
ability is not just a natural property of eco-
nomic transactions, not simply a function of
the “quality” of evidence which exists in the
environment within which auditing operates.
Rather, auditing actively constructs the legiti-
macy of its own knowledge base and seeks to
create the environments in which this know-
ledge base will be successful. Auditing know-
ledge in this systemic sense does not emerge
from the experimentally isolated cognitive
judgements of practitioners in relation to sets
of cues in the outside world, as the tradition of
audit judgement research would have it (Felix
& Kinney, 1982; Power, forthcoming). Audit
plays a decisive role in constituting the environ-
ment of cues itself (Kirkham, 1992, p. 296) and
its techniques are part of a system of know-
ledge which is driven by the imperative of
“making things auditable”.
In the next section this idea of a “system
of audit knowledge” is elaborated. This is
followed by an exploration of the motif of
“making things auditable” in three areas
where the concept of auditability has been
officially invoked; accountability in public
sector research; quality assurance and environ-
mental management systems; brand account-
ing. In the case of research, the theme of
making a new domain auditable by creating
an environment of “measurable facts” is
emphasized. In the case of quality assurance
and environmental management, the issue is
primarily one of creating a legitimate surface
of auditable facts in the form of a management
system. In the case of brands, it is less a case of
creati ng an auditable environment so much as
negoti ati ng auditability in terms of the credi-
bility of valuation experts. This ordering of the
cases provides a development from a more
conventional exploration of the auditability-
measurability nexus through to a consideration
of the socially negotiated nature of audit know-
ledge. After considering each of these cases
and their dominant auditability strategies in
turn, the more general implications of the
analysis are explored. In particular, the pros-
pects for a “sociology of audit knowledge”,
and the challenges that this may pose for
the more established framework of audit
judgement research, are considered.
THE SYSTEM OF AUDIT KNOWLEDGE
A great deal has been written recently in
North America, the U.K. and elsewhere on
the “crisis” of financial audit. Problems of
legitimacy have been generated by publicly
visible scandals and questions of indepen-
dence, of reporting responsibilities and of the
burden of litigation preoccupy both academics
and practitioners. In the face of these difficul-
ties, which have a longer history than is com-
monly imagined, accountants have been
determined to defend their jurisdiction (Sikka
& Willmott, forthcoming). And despite the
problems confronting financial audit practice,
audit as a regulatory model seems to be remark-
ably durable and has developed in many new
areas. As Beneviste (1973, p. 137) puts it, “In
every policy making environment there is a
culture that affects the style of discussion and
intervention.” The recent “explosion” of audit-
ing (Power, 1994a) reflects a decisive shift in
regulatory style and is due in part to the
success and power of the large accounting
firms to promote their claims to expertise in
new areas, particularly as advisors to and
agents of government. Audit has become
important to a new style of public administra-
tion. While it has become fashionable to
emphasize the decentralizing and market
oriented tendencies of "rethinking government" (Osborne & Gaebler, 1992), centralist
anxieties of control nevertheless persist and it
is here that audit plays a vital role. As Van
Gunsterten (1976, p. 142) has noted, “the
anxious ruler tries to make his phantasies
come true by way of a mixture of minute con-
trols and rigorous isolation”. In the face of reg-
ulatory anxiety, audit institutionalizes the
production of comfort (Pentland, 1993, p.
610), something which is particularly evident
in the rise of quality assurance programmes.
To understand the institutional significance
of auditing and auditors (not all of whom are
necessarily accountants) requires a more
detailed consideration of auditing as a system
of knowledge, its core values and the manner
in which it reproduces those values in existing
and new settings. The notion of a “system” of
knowledge is not to be read too strictly; one
could equally talk of a “field” in Bourdieu’s
(1990) sense.³ The argument attempts to draw attention to the self-constituting nature
of auditing knowledge rather than to make the-
oretically doctrinal claims. Furthermore, this
exploration of how things are made auditable
complements but adopts a different focus from
that of a political economy of auditing in which
the interests and behaviour of accounting firms
would be relevant. To put the point in Abbott’s
(1988) terms, the argument emphasizes the
task dimension of audit rather than the jurisdic-
tional struggles (Sikka & Willmott, forthcom-
ing) and regulatory games (Willmott, 1991)
which are conducted by accounting firms.
With this somewhat artificial division of intel-
lectual labour in mind, the broad structure of
the system of auditing knowledge can be con-
sidered.
One can identify at least four principal ele-
ments or levels in the system represented in
Fig. 1. Firstly, one can talk of the official know-
ledge structures of audit practice. This is the
public face of audit, its codified rules and reg-
ulations on appropriate procedure and be-
haviour which have evolved over time
(Preston et al., forthcoming). Such rules may
subsist in the technical publications of firms
or they may be documented at the level of
professional institutes and regulatory bodies.
These rules constitute auditing “best prac-
tice” and have the widely claimed virtue of
being credible in a court of law. Accordingly,
auditing working papers seek to reproduce the
public face of auditing with contingent legal
and regulatory audiences in mind (Cushing & Loebbecke, 1986; Francis, 1994, p. 260; Van Maanen & Pentland, 1994).

[Fig. 1. The system of auditing knowledge, comprising four linked elements: 1. Knowledge, 2. Education, 3. Practice, 4. Control.]

In this way, as
Pentland (1993, p. 610) has argued, the produc-
tion of regulatory comfort flows out of the
audit process in an institutionalized form
because practitioners have incentives to repli-
cate this form in their representations of what
they do.
Second, there are the many mechanisms of
knowledge dissemination which involve vary-
ing degrees of formal and informal, on- and
off-site, education (Power, 1991), training and
socialization (Harper, 1991; Coffey, 1994). It is
here that certain styles of behaviour, speech
and recording of practice are learned and the
audit practitioner is constructed. It is also here
that formal examination systems contribute to
the institutionalization of audit knowledge by
connecting idiosyncratic procedures to legiti-
mate forms of abstract knowledge (Abbott,
1988; Carpenter & Dirsmith, 1993). In this
way, credentializing mechanisms which, from
one point of view, function as barriers to entry
also reproduce the internal “technical” culture
within which audit practitioners can be judged
by their peers.
³ On this occasion, I prefer the idiom of system, as it is used by theorists such as Luhmann, since the theme of the self-reproduction of knowledge is useful in the auditing context.

Third is the level of practice itself. At this level of the system, particular audit judgements are made and written up. Here the public production of comfort by the audit process
is the product of elaborate internal interactions
within the audit process itself, in particular to
shape representations of audit knowledge con-
sistent with economic constraints, and of strategic games between auditor and auditee in which accounting "facts" are negotiated (Pentland, 1994). It is also here that vague intuitions about assurance levels must be rationalized for public consumption (Humphrey & Moizer, 1990) and where emotional responses about comfort must be re-presented as cognitive in an "essentially unknowable situation"
(Pentland, 1993). This level of practice repro-
duces and depends on official and legitimate
myths of practice (Boland, 1982) such as sam-
pling and risk analysis. However, the process is
not without conflict between practitioner
values and the institutional demand for acceptable representations of practice. The eternal
dialectic between structure and judgement
in audit knowledge is an example of this.
The struggle is between formal, public and
supposedly replicable forms of knowledge
and local craft cultures of situated “expert”
practice (see Francis, 1994, pp. 251-257).
Within this dialectic there is a constant “cog-
nitive reinvention of the audit” in which cost
reduction and audit quality can be reconciled
and represented (Fischer, 1996). It could be
said that the structure vs judgement issue,
which is common to most professionalized
fields, is less concerned with how audits are
actually done than with how they are repre-
sented. But, as Francis (1994) and Van Maanen
& Pentland (1994) suggest, the distinction
between doing an audit and writing an audit
is not sharp: “at the limit, the audit becomes
a pure simulacrum, an institutionally driven
discourse about auditing that is its own rea-
lity. Audits become centred solely on the pro-
duction of working papers for the purpose
(reality) of producing working papers . . . audit
has become its own sign with the focus on the
production of the sign (working papers)”
(Francis, 1994, p. 261). The publication of
audit opinions circulates signs with aesthetic
(comfort), rather than informational, value
(Lash & Urry, 1994, p. 15).
The “writing” of auditing is also important
for the fourth element of the system of audit
knowledge: the various feedback mechanisms
by which the practice and official knowledge
structures mediate ideals of quality control.
Here we can identify institutionalized mechan-
isms of peer review which provide comfort
about comfort production. As Fogarty (1996)
notes, this process is itself dependent on the
production of working papers. The quality
audit expresses a manner of writing the
audit. In addition there are those in-house pro-
cesses whereby attempts are made to amend
audit procedures and to construct new forms
of “value adding” proprietorial audit practice.
The benefits of these changes become benefits
only when new technologies are accepted
(Fischer, forthcoming) and the public face of
the reconstructed audit knowledge is that of
efficiency gains and technical improvement.
In this writing of audit quality, there is a circuit
of conformation and reproduction of the tasks
which constitute the system of audit knowl-
edge. Quality control procedures may function
less to make quality observable and more to
construct and define quality itself.
This preliminary sketch of a system of know-
ledge composed of four interacting but distin-
guishable elements is directly relevant to the
present theme of “making things auditable”.
On one interpretation of the theme, it is a
rather trivial matter. Making things auditable
is what practitioners do when they audit
organizations and processes. This is not to say
that techniques cannot be “improved”; this is
going on all the time. Nor is it to say that prac-
titioners never make mistakes; quality control
procedures exist for that very purpose. On
such a view “making things auditable” is
largely a matter of common sense and audit
judgement research examines the operation
of this common sense in experimentally con-
trolled settings. The image is one of auditors
as cognitive agents confronted with “cue
rich” environments. On the basis of personal
factors and a tried and tested kit of techniques
they can conduct a reasonable audit.
In what follows I wish to explore a different
interpretation of what might be meant by
“making things auditable”, one that builds on
the constructivist reading of the system of
auditing knowledge provided above. Common
sense suggests that the forms of audit know-
ledge are relatively stable and accepted and it
is their implementation and, from the point of
view of audit judgement research, the consen-
sus supporting that implementation, which is
the interesting research problem. But from a
constructivist standpoint, consensus at the
level of application is less interesting than the
consensus which supports, albeit temporarily,
the system of audit knowledge as a whole. The
contrast can be put like this: whereas the
common sense view of “making things audit-
able” would argue that techniques are accepted
by practitioners because they “work”, this
essay is concerned with how techniques and
procedures are perceived to “work” because
they are institutionally acceptable. Making
things auditable in this specialized sense is
not simply a technical matter and the variabil-
ity of how auditability is accomplished and
claimed cannot simply be attributed to
improvements in audit technique, as the com-
mon sense view might argue. What it is for an
audit technique to work or not work itself
depends on what gets accepted as common
(legitimate) sense within the system and to
address the production of legitimate know-
ledge emphasizes the institutionalizing process
which informs all four levels of the system of audit knowledge in Fig. 1. It is the production rather than the presumption of the common
sense of auditability which concerns us here.
This production has two related themes which
are often run together in the concept of “social
construction”: negotiation and creation.
NEGOTIATING AUDIT KNOWLEDGE AND
CREATING AUDITABLE ENVIRONMENTS
The negotiation of knowledge involves the
processes of closure which render knowledge
acceptable and stable. Particular procedures
and techniques come to be accepted or not
accepted as constituting reliable knowledge,
either at the public level through specific
codification or more informally in terms of
acceptable practice. This is evident for those
procedures where practitioner consensus is
not automatic and where there is “interpre-
tive flexibility” (Pinch & Bijker, 1987, p. 40).
One such area of flexibility is the auditability of
charity income which is “made auditable” in
specific cases not necessarily because an objec-
tively superior technique exists but because of
practitioner determination to make it auditable
by “taking a view” that controls over, say,
collecting tins are a reliable guarantee of the
completeness of income. The particular audit
can then be written in such a way as to connect
this determination with institutionally acceptable
and defendable forms of reasoning. As Pentland
(1993, pp. 611-612) puts it: “Fundamentally
auditing involves the certification of the
unknowable . . . Rituals of copying numbers
allowed the underlying indeterminacy of the
U.S. mortgage market to be auditable i.e. some-
thing auditors can be comfortable with.” Of
course, events like the crisis in the U.S. Savings
and Loans industry show that society can
choose to withdraw its trust in auditors’ certi-
fication of the unknowable. And, as the emerg-
ing debate on the regulation of derivatives
suggests, it may also push auditors into new
“unknowable” areas.
Another example of flexibility concerns the
impact of the information technology environ-
ment on auditor conceptions of evidence.
Yates’s (1993) study of the U.S. life insurance
industry demonstrates the longstanding resistance, particularly by auditors, to records in
magnetic form. Auditors are only just begin-
ning to overcome this resistance and audit-
ability or non-auditability in such cases as
these reflect the ability to write the audit in
such a way as to conform to official bodies of
knowledge produced by auditing standard
setters and to accepted standards of evidence.
In turn standard setters will grant official recog-
nition to forms of audit knowledge which have
become acceptable (as cost-efficient solutions)
to practitioners. Hence, the closure of what
counts as knowledge depends not so much
on solving “problems in the common sense
of that word but on whether the relevant social
groups see the problem as being solved” (Pinch
& Bijker, 1987, p. 44). This process reflects a
cycle of externalization from specific practices,
objectification as knowledge in official docu-
ments and then the reinternalization of this
institutionalized knowledge at the level of prac-
tice (Fischer, 1996).⁴
The negotiation of acceptable audit knowl-
edge can be contrasted with the manner in
which the system of auditing makes itself
possible by actively creating the external
organizational environment in which it oper-
ates. Though it would be implausible to suggest
that organizations are literally created by audit
processes, it can nevertheless be said that a
significant “auditable sub-organization” is con-
structed and partly (often) or wholly (rarely)
exists to correspond to the audit process.⁵
The question is whether controls, measure-
ment systems and their associated forms of
documentation preexist the audit process or
have been created with a view to making the
organization auditable. Where organizations
have already been “legalized” (Scott, 1994)
bureaucratic media, systems, documents and
so on act as a surface upon which audit can
“work”. In general, audit procedures, like any
technique, demand the environments in which
they can be perceived to succeed; problems
and their technical solutions are tightly
coupled (Pinch & Bijker, 1987, p. 30).
Clearly, primary forms of documentation such
as invoices play many roles and it would be
absurd to suggest that they have been con-
structed solely relative to the audit process.
But in other cases it may be less clear cut.
There is evidence in complaints about the
bureaucracy of quality assurance mechanisms
that systems have been created for the pur-
pose of being audited and little else. This com-
plaint has also been levelled at the regulatory
system for financial auditing in the U.K. Audit-
ing may demand the creation of measurement
and control systems which are explicitly
designed to affect auditee behaviour. The
empirical question is the extent to which audit
"trusts, and makes use of, order which is there
and which is constantly being recreated” (Van
Gunsterten, 1976) or whether audit “colonizes”
the organization and creates auditees to make
its own processes possible.⁶
The negotiation of audit knowledge and the
creation of auditable environments are linked
themes in the project of making things audit-
able. The greater the institutional reliance on
the system of audit knowledge by regulators
and others, the greater the potential for forms
of audit knowledge to be supported by active
transformations in the auditee organization.
Financial auditing would not be possible with-
⁴ To take another example, sampling emerged as an officially acceptable technique because practitioners could not cost effectively test in detail large volumes of transactions. Non-100% testing was externalized by practice and projected on to a public stage of debate where it was objectified as legitimate technique. Subsequently it was reinternalized by practice in the form of statistical sampling (Power, 1992a; Carpenter & Dirsmith, 1993).
⁵ In this respect auditing can be regarded as an autopoietic system of knowledge in Luhmann's sense which constructs the environment-system distinction for itself internally. It is a system which builds for itself the facts which are relevant to its continued functioning.
⁶ In other words, is audit in some sense an autopoietic system which "productively misunderstands" (Teubner, 1992) its environments but does not substantively interact with them? Or, as Armstrong (1991) and Johnson & Kaplan (1987) have suggested, is financial auditing responsible for disseminating a distinctive financial control culture at the expense of other possibilities?
out a data base of books, records and internal
controls but equally it reinforces this control
environment in order to maintain auditability.
According to theorists such as Flint (1988, p.
32), audit is not possible without clear stan-
dards of auditee performance. But it would be
more correct to say that audit is not possible
without “auditable” standards of performance.
This is not quite a tautology since the negotia-
tion of audit knowledge and the creation of
auditable environments are not free from
conflict and resistance. What counts as audit-
able may be fundamentally contested and
agents may resist attempts to transform them
into auditees. In this sense the system of audit
knowledge is powerful but not monolithic, a
point which will become evident from a
more detailed consideration of how things are
made auditable in three different contexts.
MAKING RESEARCH AUDITABLE
Following widespread national and inter-
national initiatives in public sector manage-
ment and control, the need to demonstrate
“economy, efficiency and effectiveness” is
now ubiquitous and value-for-money auditing
practices have grown rapidly. The wave of
“new public management” (Hood, 1991) has
created remunerative opportunities for the con-
sultancy arms of the large accountancy firms⁷
and it has been suggested that consultants are
now the new policy makers.⁸
In the U.K. and elsewhere, the reorganiza-
tion of health care continues to occupy con-
siderable public attention and there is
resistance amongst medical practitioners to
many of the market based changes to their
working environment (Broadbent et al .,
1992). These changes are also visible in many
other areas of public service provision, such as
education. As far as higher education is con-
cerned, a number of funding related reforms
has taken place (Puxty et al ., 1994, pp. 155-
158) and the financing of research has been a
particular area for change. Public sector funded
research is increasingly under pressure to
provide a return on investment (Sherman,
1994; White Paper, 1993) and greater account-
ability for the use of public funds has been
demanded. The task of making research audita-
ble is the responsibility of the Higher Education
Funding Council (HEFC) who commissioned a
report from accountants Coopers & Lybrand to
address ways in which this might be achieved
(Coopers & Lybrand, 1993). It will be argued
that the analysis and recommendations of this
report are shaped by a conception of audit-
ability which reflects a particular style of
making things auditable and which demon-
strates the close links between auditability
and measurability.
The HEFC provides funding to universities
by way of a block grant. In the past this has
been calculated and notionally split between
teaching and research. The purpose of the
Coopers Report was to explore mechanisms
of accountability for the research element of
this funding. The HEFC consultation paper
(HEFC, 1993) which was produced to accom-
pany the Coopers Report, stresses the require-
ment of public accountability for the use of
research funds and the need to establish
arrangements which are “auditable” at an
"appropriate level of detail" (emphasis added) in order to show that the research element of the HEFC block grant has been "properly used". The HEFC was concerned with the
precise nature of the mechanisms in question
and whether a (“top down”) allocative method
of demonstrating accountability would be
“appropriate” in the required sense or whether
⁷ The role of accountants in these changes is complex: they are cause and effect, shaping and being shaped by their institutional environment: "The nature and language of expertise i.e. the fact that certain kinds of problems, measurements, and concerns are highlighted by technical language, favours some implementers and beneficiaries and does not affect others" (Beneviste, 1973, p. 131).
⁸ See "Commentary: Auditing the Accountants", The Political Quarterly (1993) pp. 269-271.
there would be a need to account in more
detail for expenditure (“bottom up”).
The Coopers Report addresses possible inter-
pretations of the concept of accountability
and argues that, for each Higher Education
Institution (HEI), accountability could be publicly demonstrated by reference to seven possible modes of audit as follows:
(a) The audit of research output via the mechanism of Research Assessment Exercise (RAE) results.
(b) The audit of projects conducted by research centres in HEIs.
(c) The verification of the percentage of research outcomes funded by the HEFC.
(d) The verification of the percentage of research outcomes per Academic Subject Category (ASC) funded by the HEFC.
(e) The audit of resource allocation where block grants have been allocated to research and teaching activities.
(f) The audit of expenditure of research funds in relation to analyses of actual patterns of expenditure.
(g) The audit of income sources allocated to different expenditure categories.
These possible “methods” of operationaliz-
ing accountability embody different levels of
aggregation and detail at which reporting and
auditing would occur. Options (a) and (b) lend
themselves to non-financial measures of
research output, such as intellectual judge-
ments about its quality.⁹ Options (c) and (d) quantify the percentage of these outcomes funded by the HEFC at two different levels of
aggregation. However, the Coopers Report
states that the arrangements under (a), (b),
(c) and (d) are not really “auditable” and
argues that the only serious candidates for
mechanisms of accountability are those which
are auditable at the level of detail of (e), (f) and
(g). The Report goes on to elaborate the diffi-
culties inherent in operationalizing these three
options: many central costs in higher education
institutions are deducted from income sources
before they are attributed to individual units or
cost centres (the grant is “top sliced”) and any
exercise to allocate “lumpy” forms of income
(block grants) and expenditure to discrete
activities is subject to the classical difficulties
of choosing an allocation base. In the end
Coopers & Lybrand favour option (f) above,
for which the biggest difficulty is the allocation
of salary costs. To overcome this problem a
system of time recording is proposed under
the following categories of activity:
T:  Teaching
R:  Research
  Ra: Allowable research for HEFC purposes
  Rb: Other research organization grants
  Rc: Explicitly subsidized research
C:  Consulting
S:  Savings
A:  Administration
In making this proposal, Coopers & Lybrand
acknowledge HEFC demands for sensitivity in
the design of a non-intrusive system (in this
way both Coopers and the HEFC seek to pre-
serve the ideal of audit neutrality: audits are
intended to “colonize” only to the extent of
intended improvements of accountability
mechanisms). The Report proposes an alloca-
tion system for the seven expenditure cate-
gories which operates in terms of “notional
week” units of time. Academic time would be
reported on a quarterly basis, usually compiled
by the head of department. It also proposes
(para. C11) that time spent on administration
should be reallocated to the other categories.
The Report frequently appeals to the con-
cept of auditability although it is never defined
directly. For example, in defending its choice
of option (f) above, it says, “to be auditable, an
institution would have to adopt a mechanism
for analysing expenditure . . . we doubt that
⁹ Although the rise of patentable output as an assessment category for RAEs suggests a more commercial orientation at this level too (Sherman, 1994).
anything much less could be described as audi-
table in the normal sense of the word” (para.
708). And in the context of the attribution of
income under option (g) it is stated that ". . . all that would be being audited would be that
the assumptions had in fact been followed -
not the validity of the assumptions themselves”
(para. 720). Indeed, even though the Report
never defines auditability directly, a number
of possibilities for demonstrating accountabil-
ity are dismissed as being “unauditable” in the
sense of being “too dependent on assump-
tions” and “merely mathematical”. However,
when the proposed time sheet system is
looked at in detail it is also unauditable in these terms: it operates with such highly aggregated categories that it is heavily dependent upon assumptions and allocations itself and
must therefore fail the concept of auditability
implicit in the Report’s assumptions even
though it is intended to satisfy them.
At one level this internal contradiction in the
appeal to auditability suggests technical prob-
lems in the specific design of the time sheet
system. One could address these problems by
refining the reporting categories, the periodi-
city of reporting and so on. For example, one
specific concern in the context of academic
research is the problem of cross-subsidization
of non-research activities. However, even in
very “precise” systems of time recording,
such as those used by firms of accountants
and solicitors, there is still scope for “crea-
tive” time recording. It is well known that
certain jobs can bear the allocation of fictitious
time. Furthermore, cross-subsidization across
different tasks occurs because billing has only
a very loose relationship to recorded time,
especially where there is also unrecorded
time (McNair, 1991). Accordingly, the pros-
pects are poor for a relatively crude time
reporting system to address the problem of
cross-subsidization. However, the issues are
not simply technical in this operational sense.
The very fact that there are different possi-
bilities for research accountability suggests that
auditability is variable in meaning (on this point
see also Sikka et al ., 1994). The concept of
auditability implicit in the Coopers Report
expresses a level and style of calculative
elaboration which the drafters of the report
believe will be an institutionally acceptable (to the HEFC and the government) operation-
alization of the demand for accountability. This
is evident from the fact that the Coopers
Report is not concerned with measuring and
representing the time spent by academics as
accurately as possible, but only “sufficiently
accurate as to be meaningful . . . (and) audit-
able” (para. 112). Of a “more accurate time
recording system . . . we doubt that it would
be acceptable to the generality of the aca-
demic community” (para. 606). And one pur-
pose of the measurement system is to
“reassure” the HEFC (para. 110). Hence, far
from being an objective and neutral property
of information systems, auditability is largely a product of a consensus about the nature and detail of evidence required by those whom the
audit is intended to serve. This consensus
about the “appropriate level of detail” reflects
a certain style of verification, a style which is
not necessarily natural or objective but which
serves institutionally legitimate “rituals of
inspection”.
In the Coopers Report, concepts of account-
ability, auditability and time recording mutually
define one another and cannot be indepen-
dently settled. The auditable measurement
system which makes research accountable is
worked out as a compromise between differ-
ent perceived pressures and constituencies.
Even though the rejection of options (a)-(d)
above may reflect an “accounting” bias, the
report stops short of recommending a more
elaborate time sheet system with multiple
codes. Indeed, doubts about the technical
efficacy of the proposed time sheet system and
its lack of apparent precision and elaboration
demonstrate how there is nothing "natural"
about the level of calculative detail in a measure-
ment system. One cannot say that arrangements
which have it are “auditable” and those which
don't are not. Rather, "measurability" and "auditability" are negotiable practices which
are determined by an institutional need or an
anxiety (Van Gunsterten, 1976) for a style of
control which reaffirms core values of account-
ability and efficiency. This institutional need
has its genesis in broader transformations in
public service provision as we saw above. It
generates a regulatory style caught between
“disciplinary” values which require appro-
priate images of rigour (on this point see also
Fogarty, 1996) and a more facilitative emphasis
consistent with the values of the auditee
domain. It follows that operationalizations of
the concept of auditability express a contin-
gent mixture of trust and distrust and a distinc-
tive regulatory demand for comfort.
Making research auditable demonstrates the
close relation between auditing and measure-
ment. Indeed, making things auditable is also
making things measurable. Auditing requires a
"reality" against which its verificatory pro-
cedures can operate and in the research con-
text audit must literally create the environment
in which it operates. This case also demon-
strates a very direct coupling between what I
call above the negotiation of audit knowledge and the creation of the audit environment.
The former concerns those processes of
exploration of a style of measurement which
corresponds to institutionally stabilized conceptions of what is auditable. The latter concerns
the explicit transformations in the auditee
domain that this proposal would bring about.
Accounting measurement systems, such as the
proposed time recording technology con-
sidered here, make possible certain ways of
thinking and acting. In so doing they inhibit
other ways of thinking and acting. Already
some of the data produced by the Research
Assessment Exercise in the U.K. has reinforced
tendencies to talk in terms of “high” and
“low” earning subjects. High earning depart-
ments and subjects may now wish to bargain
on the basis of their new found economic
"strength". Equally, a time recording system
which separates research and teaching would
make it possible to think in a binary fashion and
to change organizations, contracts and working
habits on that basis. Making research auditable,
even in the technically crude manner proposed,
embodies a potential for forms of discourse and
for research strategies which were not for-
merly conceivable but which subsequently
become "rational" (Puxty et al., 1994).
The research context shows how audit
knowledge is negotiated as auditing is
extended into new areas. The Coopers Report
extends into a new domain presuppositions
about auditability which are relatively stable
within the system of financial audit know-
ledge. Indeed, the stability of this knowledge
base is one of the sources of the accountant-
consultant’s authority in new domains; it is
difficult for outsiders to question concepts of
accountability and auditability as accountants
formulate them. However, in this case the discussion has focused on a discourse of intention
rather than outcome. In fact the proposed time
recording system is on ice and the proposals
met considerable resistance, not least from
the HEFC itself. This demonstrates that the
project of making things auditable is also a pre-
carious one, requiring not simply a stability
within the expert system of knowledge about
appropriate procedure but also a level of opera-
tional consensus among sponsoring institutions
and auditees themselves.
MAKING QUALITY AUDITABLE
Quality certification has become big busi-
ness. Today quality has become an explicit
organizing concept for a wide variety of institu-
tions and there has been an explosion of con-
ferences and publications on the subject
geared to industry-specific audiences. New
institutions, such as the Higher Education
Quality Council, and new institutional roles,
such as Quality Assessors and Directors of
Quality Enhancement, have been created with
the explicit aim of defining, encouraging, manag-
ing and monitoring quality. And on the back of
these developments there has even been a
reflexive application of quality ideas to the
quality assurance process itself.¹⁰ The concept
of quality is at the heart of an elaborate process
of image management which invokes and
demands a tight coupling between quality per-
formance, however that is to be defined, and
processes to ensure that this performance is
visible to a wider audience. “Making quality
auditable” is therefore an essential element of
this quality impression management. Without
audit and the certification that follows from
audit, quality remains too private an affair. It
is as if there is no quality without quality
assurance.
The discourse of quality has its origins in
very specific engineering preoccupations with
controlling the “fitness of use” of products and
processes (Wolnizer, 1987). This involved
forms of verification which were directly linked
to production processes and product inspection
practices became organized around measures
of statistical control. In other words, quality
was originally a “production” based concept
(Bowbrick, 1992, p. 7) with relatively well-
defined measurement parameters, such as
defect rates.¹¹ In recent years there has been
an important transformation and generalization
of this engineering based conception of quality.
Quality has been extracted from the specifici-
ties of engineering discourse and has been
expressed in more abstract terms.¹² In short,
quality has been transformed from an engineer-
ing concept to a management concept, a
“managerial turn” (Power, 1994b) which has
provided opportunities for quality experts to
sell their services. And it has recently been
suggested that “the quality audit represents a
new market offering growth opportunities for
the public accounting profession” (AAA, 1993).
Quality management and assurance has itself
become a product to be priced like the com-
modities whose quality it is intended to promote. At the centre of this "commodification"
of quality is the quality management system
whose role is not merely to monitor and con-
trol product and process standards for local use
but also to be externally auditable. Perform-
ance and the visibility of performance are
tightly coupled in the idea of a management
system; the system is the “hinge” between
internal operations and the external audience.
Accordingly, the auditability of quality is not a
subordinate matter, it is almost the essence of
quality itself. Quality is an empty concept with-
out accreditability and hence auditability. In
this way audit processes have the potential to
become constitutive of quality.
If product or service quality is well defined
by reference to standards of performance
which have a high degree of consensus, then
systems of control can be regarded merely as
secondary monitoring of compliance with
these standards.¹³ For example, where pro-
ducts or activities have public, readily visible
“non-expert” criteria of success and failure
(such as light bulbs and plumbing services)
certification of quality concerns itself directly
¹⁰ For example, in September 1994 a conference was organized by the South Bank University, London, and the British Standards Institute on the theme of "Quality in Auditing" (emphasis added).
¹¹ Of course, it can be argued that notions of "defect" are not very well defined at all. Furthermore, as the sociology of technology informs us, there is nothing "natural" about product design and concepts of "fitness for use". So ideas of quality itself are profoundly social in character. Notwithstanding such doubts, I merely wish to work with the contrast between a technical standards orientation towards quality and a systems approach.
¹² One should not overstate this point. In parallel with this generalization of the quality concept, there has also been an explosion of product-specific standards in areas such as safety. Such standards have emerged in the so-called "self-regulatory" space between the state and industry and organizations such as the British Standards Institute have acquired the multiple roles of quasi-regulator, industry mouthpiece, lobbyist and technical advisor.
¹³ Flemming-Ruud (1989, p. 127) makes a similar point when he argues that auditing would in principle be very simple if financial accounting was a bona fide measurement system.
with compliance with these standards as well
as indirectly with the systems of control for
assuring quality. In such cases the auditability
and certifiability of quality is a secondary
process; it would be an exaggeration to claim
a constitutive role for it. Just as financial audi-
tors may rely on internal controls within the
auditee organization, so it would appear that
management systems preexist the audit of
their quality. However, where standards are
ill-defined and controversial or even where
they don’t exist at all, the certification of quality
assurance systems may take on a life of its own.
The “technical” elements of these systems,
such as sampling and other statistically based
controls, have been common currency for
many years. The idea of such a system plays
an increasingly influential institutional role, for-
malized in general quality standards such as BS
5750 (now BS EN ISO 9000). This point can be
illustrated in the context of recent initiatives
for the audit of environmental management
systems. In the U.K., the British Standards Insti-
tute (BSI) issued BS 7750, Environmental
Management Systems (BSI, 1992) which is an
adaptation and application of the general
quality assurance principles of BS 5750. In
March 1993, the European Commission issued
a Regulation on Eco-Management and Audit-
ing (EMA) which is similar in orientation (CEC,
1993). Both schemes focus on the quality of
internal management systems rather than the
quality of the product or service itself as speci-
fied in standards. Both schemes emphasize a
system structure which can be verified and
approved by independent outsiders. Further-
more “environmental audit” is conceived pri-
marily as a “management tool” within the
environmental management system more gen-
erally (Hillary, 1993).¹⁴ Both are voluntary
schemes and, at the time of writing, it is not
yet clear to what extent organizations will
register with either or both of them. Both
schemes provide for external accreditation
arrangements in order that compliance can be
publicly signalled and the discourse of environ-
mental management has made much of the
“competitive advantage” that this will bring
for registrants under these schemes. BS 7750
provides for a structured and integrated environ-
mental management system. The elements of
this system are derived from the common prin-
ciples embodied in earlier quality documents
(BSI, 1992, p. 2). Indeed, BS 5750 and 7750
constitute a kind of conceptual framework for
management systems and hence abstraction is
necessary for their applicability to “all types
and sizes of organization”. Thus, the shift in
quality assurance from standards to systems is
also a shift from the specific to the abstract.
And of course, as Abbott (1988) reminds us,
abstract bodies of knowledge provide oppor-
tunities for claims to occupational monopoly;
professionalization projects in the environmen-
tal management field have been conspicuous
(Power, 1994b).
For both EMA and BS 7750, quality is con-
ceived as compliance with to-be-specified stan-
dards of performance. The justification for
splitting form and substance in this way is
that the setting of any standards is considered
to be better than none at all and may be a
stimulus for year-on-year improvements. The
idea of performance benchmarking has
emerged from this division of intellectual
labour between documents such as BS 7750
and standard setting processes. BS 7750 initially
de-prioritizes and abstracts from the substance
of performance in favour of system values and
their auditability. Performance is simply an
abstract and formal reference point which is
subsumed under the goal of verifiability. How-
ever, BS 7750 insists that once an environmental
management system is established, then compa-
nies will have a benchmark against which they
can attempt to improve performance. In other
words, an environmental management system is
¹⁴ Strictly speaking there are two levels of audit: one which is an internal management affair and one which is an external assurance function. This distinction, which resembles closely that between internal and external financial auditors, is not crucial to the argument being advanced.
claimed to make substantive change thinkable
and possible.¹⁵
An unintended consequence of this division
of labour is that environmental performance
has come to be closely identified with having
an (auditable) system (see Shaylor et al., 1994).
In addition to abstracting from all specific
knowledges, such as engineering and account-
ing, EMA and BS 7750 embody arrangements by
which the system performance can be made
externally visible. Indeed, it is essential that
the system structure embodies the capacity to
be verified externally. While a quality assurance
system may (or may not) have any perceived
technical benefits for the organization, it will not give them any perceived institutional
benefits without certification. Hence, demon-
strability, auditability and verifiability are funda-
mental properties of the system structure and
the environmental management system can be
understood as a “surface” which makes them
possible. Far from being a by-product of man-
agement systems structures, “auditability”
becomes, in the absence of specific standards
of performance, their constitutive ideal. In
other words, for BS 7750 and EMA auditability
is central to their status as self-regulatory "products". They would have no institutional value,
although they may have technical value, with-
out systems elements which are in large part
designed to be “made auditable”.
The commodification of quality assurance in general, and environmental performance in particular, is evident in the BS 7750 emphasis upon the role of the external consumer. For example, it is stated that a goal of compliance with the standard is to "achieve and demonstrate sound environmental performance" (p. 3). The role of demonstration in the form of audit is central because the "standard is intended to support certification schemes". The basic elements of the scheme provide for review of the environmentally relevant performance of the entity, the development of a policy in relation to these performances and then the development of a system with the capability of assuring and demonstrating compliance with the policy. Accordingly, "the organisation shall establish and maintain a system of records in order to demonstrate compliance with the requirements of the management system" (BSI, 1992, p. 7). The systems elements will include registers and other documents, such as manuals specifying procedures, and then arrangements for audit and feedback; "environmental performance" itself is not specified beyond the requirement that entities will formulate environmental policy relative to legal requirements, perceptions of pressures from the community and their own "culture".

This silence on the content of environmental policy is taken for granted by the emphasis on formal systems values. It gives the environmental audit and assurance process a certain abstract indifference to the substance of performance which reflects a broad shift in regulatory orientation from performance standards to systems, or what Fogarty (1996) and others describe as a shift from substance to process. Another important shift is also implied here: from the inspection of processes to the audit of systems compliance.
¹⁵ Initial German resistance to the EC Regulation is instructive (Hillary, 1993). German companies regard themselves as leaders on the question of standards of environmental performance in industry-specific settings. In addition, German environmental laws are among the most demanding in Europe. Accordingly, the EC Regulation, with its emphasis upon systems entirely abstracted from first-order performance, was viewed as permitting and legitimating the lower standards of other member states and of eclipsing the "superior" performance of German companies. This perception reflects a difference in philosophy in which there is greater resistance to subsuming standards of performance under an abstract systems concept. The German regulatory style favours "uniform emissions standards and technical expertise" (Weale, 1992, p. 179). Implicit in the German concern about EMA is the risk that performance becomes too closely identified with the ideal of auditability itself. If the role of an environmental management system is to make itself visible, and hence auditable, for accreditation purposes, then the management system effectively becomes an artifact for the purpose of external persuasion and legitimacy.
Whatever the outcome of the almost theo-
logical deliberations about the meaning of
environmental and quality audits, it is the
material traces of the management system and
its associated forms of documentation which
are the conditions of possibility for audit pro-
tocols, checklists and questionnaires.16 The
audit process is constituted by a “rhetoric of
records” which couples the auditee and auditor.
The environmental management system effec-
tively mediates the “front (public) regions of
an organization and its back (private)
regions” (Van Maanen & Pentland, 1994, p.
54). Accordingly, “making quality auditable”
is essentially a process of constructing a par-
ticular kind of auditable front region for an
organization and it is here that impression man-
agement and management are tightly coupled
through the audit process. For example, an
emphasis on the demonstrability and defend-
ability of compliance to accreditation organiza-
tions is evident in BS 7750: "organisations may
find it beneficial to establish self-assessment
procedures carried out by the responsible line
management to assess audit readiness" (BSI,
1992, p. 14). Here we have an explicit sugges-
tion of a pre-audit audit, a quality check to see
whether the system is suitable to be checked
for quality. Although this suggestion is made
only in an appendix of the standard, it never-
theless expresses the ideal of auditability in its
purest form where actions must always be con-
ducted with a view to their auditability at a
later date by different parties.
The manner in which quality is made audit-
able emphasizes both the negotiation of audit
knowledge, with selective borrowings from the
financial audit tradition, and the creation of an
auditable environment by institutionalizing
internal elements of the organization. The
idea of a management system and its control
structures is both an essential component of a
form of audit knowledge which can justify the
abandonment of direct inspection and also an
institutionally legitimate practice for an auditee. A close analysis of the official documents
in this area reveals much about how a “logic of
auditability” gives priority to the accreditation
of systems of control, rather than standards, an
emphasis with obvious cost implications. BS
7750 and EMA operate in an institutional space
in which companies place an economic value
on demonstratable compliance for legal pur-
poses and for marketing advantage with a
newly conscious public. This “certification
explosion" corresponds to a specific style of
control through the production of symbols of
comfort (Pentland, 1993). It would be wrong to
suggest that there has been no resistance to the
systems emphasis of documents such as BS
5750 and 7750.17 There is widespread concern
that they have become ends in themselves and
have constructed a mentality of “abstract com-
pliance” regardless of the substance of quality
standards.18 And there are concerns that envir-
onmental audit will become a simulacrum in
which "there no longer is an audit, only a
discourse about an audit” (Francis, 1994, p.
261).
16 A great deal of effort and expense has been expended on defining audit in the environmental area. While there has been agreement about its role as a "management tool" (ICC, 1-1), the relation between reviews and verification, internal and external audits, and the nature of external reporting have been the subject of considerable discussion. BS 7750 differentiates between verification and audit in terms of their temporal proximity to the process being controlled. Audit appears to be more of an ex post and independent function than verification, which seems closer to a kind of self-checking, but "in all cases . . . the objectives should be to control the activity in question in accordance with specified requirements and to verify the outcome".
17 See, for example, "Concern at Pointless Quality Rules", Times Higher Education Supplement (9 April 1993).
18 See "Quality Under Fire", Financial Times (21 June 1994), which raises the problem of the quality of quality assessors; "Bitten by the Bug", Financial Times (20 September 1994), which reports the use of BS 5750 in controlling supplier quality.
The institutional origins of the demand for
environmental audits are a complex admixture
of influences (Power, 1994a). A general shift
in regulatory style, publicly articulated con-
cerns about the environment, a critical mass
of consulting practitioners operating in related
areas, the threat of litigation and a history of
more general quality initiatives have all shaped
the regulatory space within which environmen-
tal auditing initiatives have been and are being for-
mulated. But, above all, environmental audit
must be possible and cost effective. The audit
of environmental management systems repre-
sents a particular style of dealing with and pro-
cessing environmental risk which reaffirms
the possibility of a cost-effective assurance
function. It is a style which focuses on manage-
ment process, which emphasizes the compat-
ibility of commercial and environmental
imperatives and whose object is the “produc-
tion of comfort” for varied constituencies. The
auditability and certifiability of BS 7750 and
EMA are essential to these diverse public roles
and, more generally, to the status of quality
assurance as a product worth paying for.
MAKING BRANDS AUDITABLE
The 1980s was a period of intense merger
and acquisitions activity in the U.K. As com-
panies became commodities to be bought and
sold (Espeland & Hirsch, 1990), targets and pre-
dators sought increasingly to account “crea-
tively” both to resist or enhance the chances
of takeover and, where successful, to enhance
post-acquisition performance. Consequently,
the accounting rules for business combinations
and the enforcement of these rules by financial
auditors evoked a stream of critical commen-
tary (e.g. Smith, 1992). The use of “non-subsidi-
ary subsidiaries", of merger relief in
combination with acquisition accounting and
of pre-acquisition provisions deliberately under-
mined the intention of existing guidance. More
generally, the pressure to perform according to
the perceived criteria of capital markets (such
as earnings per share) coupled with indetermi-
nate accounting rules (e.g. about income recog-
nition) fuelled this process.
The U.K. brand accounting debate has its
origins in this ferment of takeover activity.19
Acquirers with large amounts of goodwill in
their consolidated balance sheets were faced
with two unpalatable options under the exist-
ing, somewhat lax, rules in the U.K.: immediate
writeoff to reserves or capitalization and amor-
tization. For a complex mixture of reasons a
number of U.K. companies (notably Grand
Metropolitan plc and Ranks Hovis McDougall
plc) sought to value and capitalize their brands.
RHM's policy was particularly controversial
because it involved the valuation of internally
generated brands as well as those purchased as
part of an acquisition.
At the centre of the accounting debate is the
question: can brands be measured with
sufficient reliability to be recognized on the
"balance sheet"? The FASB and IASC concep-
tual frameworks guide us to this question, as
the essential hurdle condition for the recogni-
tion of an asset, but not to its answer. If the
answer is that brands can be measured reliably
then the related question as to whether brand-
names are separable (from goodwill and other
intangibles) will be affirmative. In other words,
in the case of brands, and intangibles more
generally, it is impossible to distinguish clearly
between measurement, recognition and “ele-
ment” issues; the problem of separability can-
not be disentangled from that of the reliability
of the technology of measurement (Napier &
Power, 1992). Hence a linkage between the
question of measurability and that of auditability,
which was explored above in the case of
research, is also relevant here.
Opposition to brand accounting in the U.K.
crystallized around the London Business School
19 The background to this debate has been documented extensively elsewhere (see Barwise et al., 1989; Napier & Power, 1992).
(LBS) report (Barwise et al., 1989). This report
argued that many of the claimed rationales for
capitalizing brands were doubtful. It argued
that, despite the assertions of valuers such as
Interbrand plc, there was no general agree-
ment about the validity of their valuation meth-
odology. The report claimed that this
methodology was neither “totally theoretically
valid nor empirically verifiable” (p. 7). Indeed,
the LBS report explicitly links the question of
verifiability to that of accounting recognition:
"Verifiability . . . has implications for 'audit-
ability’ since it is a necessary condition for
recognition that asset valuations should be audi-
table” (p. 16). The recognition test for status as
an asset seems to be that putative assets be
measured and verified with reasonable cer-
tainty. Here then we see another version of
the claim for a tight link between the credibility
of economic measurement for accounting
recognition purposes and auditability.
The LBS report also argues that brand valua-
tions give rise to auditability problems because
all the “auditors can really check is the process,
not the book values”. In other words, auditors
can check any calculation to agreed procedures
but cannot check the procedures themselves
since they are “not experts and cannot make
such judgements” (p. 74). In contrast to the
LBS, Sherwood (1990, pp. 82-84), whose firm
were the auditors of RHM, is less doubtful
about the limits of auditor expertise. He sug-
gests that the auditor can “verify the underly-
ing facts” on which valuation is based; brand
valuation is not an “exact science” and it is
“better to be broadly right than precisely
wrong”. On the face of it, the question seems
to be whether the auditor is restricted to retra-
versing accounting calculations or whether
s/he can inspect directly any “inputs” into
this calculative process, especially since the
form of the calculation is a matter of conven-
tion.20 But, on closer inspection, much of the
controversy about verifiability and auditability
hangs on the expertise of those whose calcula-
tions are being revisited by the auditor. In
other words verifiability and auditability are
less properties of things in themselves and
more a function of the institutional credibility
of experts. Auditability on this view is a func-
tion of agreement about the limits of auditor
expertise and the credibility of other special-
ists. There are at least three levels to this issue:
(1) If the measurement/calculation is widely
regarded as a matter of accounting “common
sense”, there is no need for other expertise.
(2) If the measurement/calculation is
regarded as depending on a particular body of
knowledge, the auditor may choose to endorse
and rely upon that expertise.
(3) If the measurement/calculation is regarded as beyond expertise, it is unverifiable.
The difference between each level is not
absolute and may reflect different jurisdictions
of professional knowledge. For example, veri-
fying the valuation of commercial vehicles
might be something that falls firmly within
the province of accounting expertise. Verify-
ing the valuation of land and buildings seems
to fall within the jurisdiction of chartered sur-
veyors, although their expertise is not immune
from doubt.21 Verifying the value of a healthy
working environment may be regarded, from
the point of view of accountants at least, as
beyond expertise. However, the relation
between levels 1, 2 and 3 is dynamic and, for
any particular accounting issue, potentially con-
testable. A shift from position 3 to 2 reflects a
shift in consensus about the credibility of non-
accounting expertise. For example, in the case
of environmental liabilities accounting neces-
sarily overlaps with legal and scientific bodies
of knowledge; matters which were formerly
unaccountable and unauditable become so by
virtue of the institutionalized credibility of
“other” experts. In the transition from level 2
to 1 external expertise is internalized and
20 This is probably true of all auditing (cf. Flemming-Ruud, 1989, p. 117).
21 See "A Revamped Red Book", Financial Times (4 November 1994), which suggests that the credibility of land and building valuation practice suffered during the recession in the property markets in the early 1990s.
appropriated as part of the accountant’s know-
ledge system. This is highly unlikely in the case
of science and law but less implausible in the
case of valuation work, hence the tensions
between accountants and actuaries regarding
pension scheme accounting.
The interesting threshold for the question of
brand accounting is between levels 3 and 2, the
point at which a body of knowledge is suffi-
ciently credible for its practitioners to be reli-
able for accounting purposes. In a technical
release on this matter, the former Accounting
Standards Committee stated that “A valuation
may be regarded as verifiable if different inde-
pendent valuers using the same information
would be likely to arrive at a similar valua-
tion” (ASC, 1990a, para. 3.2, echoed in ASC,
1990b, para. 27).22 Given that such a consen-
sus between valuers is an empirical matter of
the actual standard deviation among measurers
(Ijiri & Jaedicke, 1966) then there are no a
priori grounds for saying that brand valuation
is unreliable and hence unauditable. The logic
of the ASC view is that questions of auditability
cannot be settled only by looking at the tech-
nical detail of the valuation method. They are a
matter of what becomes generally accepted.
By locating "auditability" as a function not of things themselves but of agreement within a specialist community which learns to observe and "verify" in a certain way with certain instruments (Hacking, 1983), features of the brand accounting debate which might appear to be marginal begin to assume considerable importance. The debate in the U.K. was initiated in part by the decision of Ranks Hovis McDougall to capitalize its acquired and home-grown brands using the valuation expertise of its consultants Interbrand. The valuation methodology was opposed by appealing to its technical failings ("too subjective") but underlying this claim were doubts about the credibility of Interbrand (Power, 1992b). Brands were regarded as unauditable largely because Interbrand were not trusted. However, as accounting firms began to declare their commercial interest in and support for valuation methodologies in a technical sense, it became increasingly obvious that this marginalization of Interbrand could be sustained only at the level of doubts about their specific methodology, which they claimed to be robust and auditable, not brand valuation as such.
As paradoxical as it sounds, the case of brand
valuation strongly suggests that the more the
practice is accepted, the more “true” its
claims become. Arnold et al. (1992) have
argued that, “in order to include intangibles
as assets, managers will have to persuade the
company’s auditors that the amount at which
they are included is reasonable” (pp. 76-77,
emphasis added). The more widespread the
acceptance of brand valuations the easier will
this persuasive process become. The realist
about accounting measurement will want to
argue that any reduction in the standard devia-
tion of measurement across individual valuers
arises because the measures are becoming more
objective. In contrast to this line of reasoning,
the brand valuation case suggests that the stan-
dard deviation of measurement practice
decreases because of a consensus that the valua-
tion method is “objective” and this arises when
a critical mass of practitioners follows an
increasingly institutionalized methodology.
The involvement of accounting with "alien"
bodies of expertise is not peculiar to brand
accounting. For a number of years it has been
permitted in the U.K. for companies to revalue
their land and buildings. These revaluations are
performed by “expert” valuers - usually char-
tered surveyors - and among the disclosures
relating to the valuation which are required are
its basis and, in the period in which it is carried
out, the names of the valuers and the details of
their qualifications. In addition, auditors rely
upon the work of actuaries in a number of
22 If all valuers use a method such as NPV then they may differ only in their assumptions about cash flows and discount rate. These assumptions are verifiable in Sherwood's sense to the extent that they are based on extrapolations from "existing facts" and agreed methods of extrapolation.
different contexts - a relationship for which
guidance has been supplied by the Auditing
Practices Committee in the U.K. (APC,
1990).23 This means that verification takes
place against the background of a network of
trusted experts. The list of such specialists is
expanding and now includes environmental
consultants for some purposes. Substantiating
the credibility of that expertise, rather than
any detail about what has been done, is becom-
ing a fundamental auditing and disclosure
requirement.
This explicitly sanctioned reliance on non-
accounting professional expertise in the U.K.
is addressed more generally in the auditing
guideline Reliance on Other Specialists
(APC, 1986), to be replaced by Statement of
Auditing Standard (SAS) 520, Using the Work
of a Specialist.24 It is stated that the auditor,
like the accountant, cannot be expected to
have detailed knowledge and experience of
specialists in other disciplines but he/she
must nevertheless form an opinion of, inter
alia, the need for specialist evidence and the
competence and objectivity of the specialist.
The guideline states that the latter is normally
“indicated by technical qualifications or mem-
bership of an appropriate professional body.
Exceptionally, in the absence of any such indi-
cations of his competence, the specialist's
experience and established reputation may be
taken into account" (para. 9). It is clear that
institutional legitimacy in the form of estab-
lished professional status is regarded as strong
evidence for such credibility and hence for
the acceptability of the related practices. In
addition, the auditor must consider the rela-
tionship between the specialist and the client
and whether the specialist has a significant
financial interest in the client.
At the extreme, auditors will audit brand
valuations in the same way many other items
are audited: by relying primarily on other
expertise. In this way the auditability of proble-
matic things is ultimately accomplished by an
externalization and proceduralization of the
evidence process, a specific style of delegation
to credible experts which is a mixture of trust
and verification. It is not verification in the
pure and probably unrealizable sense of un-
mediated contact with the thing to be verified
but it makes claims to auditability possible.25
Brand valuation is not simply a body of tech-
niques and operations. Rather, it represents a
body of knowledge in which the relevant
experts must seek a certain level of social cred-
ibility and trust.26 Expertise is in general a
peculiar mixture of internal (epistemic) and
external (institutional) validity in which the
“how” and the “who” of that expertise are
deeply interrelated. The question of the audit-
ability of brand valuations is therefore inextric-
ably linked to territorial sensitivities and doubts
about credibility couched in the seemingly
neutral and disinterested language of “asset
measurement”. Where measurement is con-
troversial, the credibility of measures assumes
considerable importance. From this point of
view the distinction between measurement
and calculation which preoccupies the norma-
tive-realist school of accounting theory is not
an absolute one. Collins (1985, p. 145) has
argued that knowledge claims become more
certain the further they are from their point
of origin; reliability of measurement becomes
a function of this distance and trust enables
23 The APC was replaced by the Auditing Practices Board (APB) during 1991.
24 Similar guidance exists in the U.S.A.
25 See Lee's (1993, pp. 21-22) examples of different situations where third party reliance may be necessary to alleviate doubt.
26 It is notable that in Australia there are fewer inhibitions about relying upon non-auditor expertise provided that there is sufficient disclosure (Australian Accounting Research Foundation, 1989).
knowledge to become “black boxed” in
Latour’s (1987) sense.
How are we to explain the fact that in the
U.K. in 1988 there was widespread scepticism
about the auditability of brands whereas by
1994 this position seems to have softened?
Nothing has changed in the underlying measure-
ment technology to make it more “reliable”.
What has changed is the climate of accept-
ability for the practice. Indeed, in a very impor-
tant sense brand valuations are auditable
because auditors have given clean audit
reports and in this way they have acquired a
de facto institutional legitimacy. Since 1988 the
consensus about the credibility of brand
accounting has widened and resistance has
tailed off. In contrast to the previous cases,
making brands auditable emphasizes the nego-
tiated nature of audit knowledge construction
and of consensus formation in relation to
bodies of expertise, rather than the creation
of an audit environment.27 Brand valuations
have become auditable because large numbers
of people who matter regard them as reliable.
Indeed, accountants themselves are providing a
brand valuation service so there has been a
slide from level 3 to 2 to 1 in the analysis
above. The auditor can now rely on the
“who” of the independent expert rather than
examine the substance of the valuation itself,
especially where the who is another accoun-
tant. The normalization of the measurement
of brand values is coextensive with making
them auditable.
CONSTRUCTING AUDITABILITY
In the preceding sections three cases have
been considered where the concept of audit-
ability has played a key role in shaping policy
deliberations. One could say that in all these
cases it is a concept which is appealed to
more than it is understood. When questions
of auditability are invoked it is usually in the
form of vague claims to expert common
sense. Each of these different examples shares
another common characteristic: they are con-
texts of practice which are or have been nego-
tiable and in which audit practices, either
proposed or actual, have been resisted and
remain controversial. Pressures for greater
accountability for academic research funds
are recent and continuing. Markets for environ-
mental auditing and environmental manage-
ment systems have also recently been
stimulated by regulatory initiatives in Europe,
though they have a longer history in North
America, and the regulatory arrangements are
still in their infancy. Brand accounting is far
from being an entirely legitimate practice in
the U.K. and, while it is likely to become so,
the accounting and auditing controversy is not
yet over. In other words, each of the three
cases has not yet been subject to closure.
Accordingly, assumptions sustaining the “logic
of auditability” are readily visible in these
unstable contexts. In each case, it has been
suggested that questions of “auditability”, far
from being obvious, are the product of active
strategies of “making things auditable”.
It was suggested that the commercial stra-
tegies of individual auditors or firms take place
against the backdrop of a system or field of
27 Naturally there is a link between the themes of negotiation and creation here in so far as brand valuers will need to base their work on data supported by a system which controls and records it. Whether they would do this anyway regardless of financial reporting and auditing is unclear. Napier (1994, p. 95) has argued that, "In Britain companies are permitted to capitalise the costs of developing new products and processes. Although market research is explicitly excluded from the definition of development costs, there are clear parallels between product development and brand establishment (indeed, in consumer goods industries, the distinction between them is artificial). Brand oriented companies might wish to design their internal management accounting systems in order to identify the cost of establishing and developing new brands, with a view to using the ASC's own logic in the case of product development costs as a justification for capitalising the costs of creating and establishing new brands".
knowledge which is both the condition of
possibility for these strategies and is repro-
duced by them. Such a system requires, above
all, both a stable and legitimate knowledge base
and an "auditable environment" to which this
can be applied. What is at stake in this co-
production of stable knowledge and auditable
environments for audit practice is to a large
extent a project of "fact building" (Latour,
1987, p. 104). At first glance such an idea is
counterintuitive since facts are facts; they are
not created. However, once these facts have
been built, audit knowledge can be regarded
as common sense and the audit environment
assumes a "natural" externality to the audit
process. Once facts are "built" the context of
their construction is effaced and one is left
with practitioner common sense and routine
practice, until that practice fails, in which
case new processes of fact building (and blame
avoidance) are set in motion and new audit
techniques and responsibilities are created.
In making academic research auditable, the
measurement technology of the time recording
system creates a layer of facts which make the
auditability of research activity possible. Of
particular significance here is a conception of
auditability imported into the research context
by consultants who recommend a particular
level of detailed elaboration in this layer of
facts. The building of these auditable facts con-
sists in creating a sufficiently atomized and
elaborate domain for the purpose of making
research accountable, a ritual of precision
which has little to do with accurate representa-
tion of research activity and more to do with
producing a legitimate style of regulatory con-
trol. Rhetorics of auditability, measurability and
accountability are tangled up in this context.
In the second case of making quality audit-
able, environmental management systems and
environmental performance illustrate how
auditability is linked to the creation of a system
which establishes a bureaucratic surface upon
which the audit process can work, indepen-
dently of substantive performance. By empty-
ing the system of content the ideal of
auditability can emerge unencumbered by idio-
syncratic specificities. The environmental man-
agement system specifies the construction of a
domain of facts capable of external certifica-
tion. In this way the management system is
not only a technological construct; its ele-
ments have an essential public face which is
offered for the purpose of accreditation.
In the third case of brand accounting, it was
argued that the auditability of these valuations
depends in large part upon the credibility of
the expert valuers. Once experts are credible,
the substance of what they know need not
become a direct object of the audit process.
In this manner things are made auditable by
constructing networks of trust which can be
proceduralized. Hence, a thing which was
unauditable at one time may become auditable
later by virtue of a shift in the network of trust.
Practices which were once soft, subjective and
unauditable can become hard, objective and
auditable. There is therefore nothing intrinsic
about the objectivity of certain facts over
others; this is relative to the position of a fact
within a field or system of knowledge. For
example, the more entrenched a measure-
ment/auditing procedure (low standard devia-
tion of measures/auditors) has become, the
more it is likely to be regarded as a matter of
common sense. Auditability is therefore a dis-
tinctive form of administrative objectivity
(Porter, 1994), one in which certain routines
and procedures have acquired an accepted
role in facilitating audit practice.
Table 1 summarizes the conclusions of the
paper. While making research auditable
stressed the creation of a measurable environ-
ment, the brand context concentrated on the
construction of trust in expertise. The audit of
quality illustrates both the construction of audit
knowledge around the idea of a management
system and the creation of an auditable envir-
onment through the implementation of this
system. The dimensions of this matrix are not
intended to be exhaustive and could be
extended in both dimensions. For example,
the columns could be extended into, say, finan-
cial services audits (Power, 1993a), or even
into more traditional and institutionally stable
contexts such as the audit of debtors.

TABLE 1. Making things auditable

                                  Context of fact building (audit of:)
Method of fact building           Research      Quality      Brands
Credibility of other experts      Low           Low          High
Abstract management systems       Low           High         Low
Detailed measurement              High          Low          Low

The rows
could be extended to embrace other forms of
fact building, such as sampling (Power, 1992a;
Carpenter & Dirsmith, 1993). While a particu-
lar form of “fact building” for audit purposes
has been emphasized in each of the three
cases, this is not intended to exclude the
others. Thus, the audit of brand valuations
also depends on systems which capture
marketing data. Environmental audits involve
reliance upon various specialists. And the audit
of research activity seems likely also to involve
abstract quality assurance systems.
Fact building, whether for science or audit-
ing, is an expensive process. Recent studies
which draw attention to the socially nego-
tiated nature of the audit process emphasize
the interpenetration of economic and epis-
temic dimensions of audit practice. What gets
accepted and stabilized as evidence and tech-
nique is always affected and limited by eco-
nomic factors. In this sense the construction
of auditable environments and the building of
audit-relevant facts must be registered con-
stantly in relation to a need to maintain a
cost-assurance equation for the auditor, not
only directly incurred and knowable costs but
also the possible costs arising from litigation
processes. Making academic research audit-
able imposes costs on the auditee and leaves
the external auditor unburdened by the more
time-consuming and costly audit of research
output. In the case of environmental audit,
the audit of systems is less costly than the audit
of transactions and procedures. This might
require the auditor to invest in alien bodies of
knowledge either directly or by the use of
specialists. In the case of brands, the audit
process relies on valuation experts whose
costs are borne by the auditee.
One problem in these and many other cases
is that it is relatively easy to know the cost
element of the audit process. The assurance
function is much more difficult to specify.
This is something that audit has in common
with a number of activities where it is difficult
to measure benefits: policing, teaching and so
on (Power, 1993b). Making things auditable is
in large part to do with maintaining institu-
tionalized images of assurance which are
externally legitimate and which are consistent
with the claimed practicalities of cost. This is a
precarious task, and gaps, and hence legitimacy
problems, can emerge when the demand for
assurance is out of step with the supply for a
given cost. All three cases considered suggest
how the building of auditable facts involves an
“exacerbated concern with documentation”
(Fogarty, 1996) and procedure: timesheets,
system documents, working papers to support
the reliance on experts. Making things audit-
able is the construction of the visible signs of
“reasonable practice” for consumption by
markets, regulators, courts of law, the state
and others whose programmes depend on the
production of comfort. Audit reports are a
symbol of legitimacy which do not so much
communicate as “give off” information by
virtue of a rhetoric of “neutrality, objectivity,
dispassion, expertise” (Van Maanen & Pentland,
1994, p. 54).
The production of auditable facts is therefore
not a simple question of writing up what has
been done. The process of writing up is a stra-
tegic act which brings the fact of auditability
into being and has consequences for the “pro-
fessional” identity of the auditor. By editing out
elements which might raise questions, audit
documentation is also a way of socializing
staff, a form of “institutionalized purification”
which produces the administrative objectivity
which corresponds to auditability. Facts have
no reality for audit purposes until they are orga-
nizationally inscribed in some way: “To pro-
vide an account in . . . the auditing world . . .
means adhering to descriptive devices (numer-
ical and narrative) that are by and large conven-
tional and arbitrary. They are neither right nor
wrong but stand as coding or reporting stan-
dards that are “generally accepted” as ade-
quate for the task. They can be regarded as
strategic representations, collectively validated
by members, designed to put the organization’s
best foot forward” (Van Maanen & Pentland,
1994, p. 81).
Finally, the three cases considered above
challenge the idea of independent verifi-
ability. For example, Wolnizer (1987) argues
that the concept of “independent testability”
should characterize audit. However, at crucial
junctures in his argument, the social and con-
sensually grounded aspects of testability
become evident. Thus, in arguing that the
past states of phenomena may be reliably
authenticated “if reliably documented” (p.
15), the dependence of testability upon a
domain of testable documented facts is clear.
For Wolnizer, and other auditing “realists”,
these facts are somehow independent of rela-
tively trivial documentation processes. In addi-
tion, Wolnizer argues that the testability of
statements consists “in their openness to criti-
cal scrutiny by any skilled tester" (p. 16). But
who is skilled in this context? Much depends
upon the social allocation of trust and hence
the concept of auditability is already loaded
with problems of whose scrutiny is to count.
Wolnizer argues that replication “is the nub of
independent testability: that skilled inquirers
may repeatedly test hypotheses and that their
results may, in turn, be corroborated or refuted
by others. Capacity for replication is essential
. . ." (p. 20). However, Collins' (1985) study of
scientific replication suggests that the produc-
tion of public knowledge depends crucially
upon the credibility of the agents producing
it. An enormous amount of consensus pre-
cedes any process of replication because an
event will count as replication, and hence as
an instance of possible refutation, only if it is
conducted by reputable experts. In other
words, replication requires as much a con-
sensus about whose judgement is to count
as it does a consensus in the judgements
themselves.
Attempts to describe auditing in a manner
which stresses cognitive accomplishments
such as verification and replication systematic-
ally disattend to, but cannot entirely abstract
from, the social support for these accomplish-
ments. This is most evident within the audit
judgement tradition of research which is con-
cerned very broadly with the forms of consen-
sus, or lack of them, which emerge from the
judgements of individual auditors in response
to experimentally constructed environments.
Broadly speaking, this tradition attempts to
understand on a systematic basis the nature of
auditor responses to environmental cues, their
processing of information and its biases, and
the nature and stability of the judgements
they make. This paradigm of inquiry is inter-
ested primarily in the consensus of specific
groups of auditors as a product of audit tech-
nologies in conjunction with “human informa-
tion processing” structures. What is necessarily
invisible within this tradition of research is the
manner in which audit knowledge is an institu-
tionalized system of knowledge. In the three
cases I have tried to show that, for environ-
ments to be auditable, a consensus about the
form of audit knowledge and about a domain of
facts relevant for audit purposes must exist or
must be created, since all techniques demand
the environments in which they “work”.
CONCLUSION
Like any practice, auditing has a “front” and
“back” stage in Goffman’s terms. The back
stage practice works hard to produce, for insti-
tutional consumption, the front stage as a
"natural" outcome.28 Audit judgement
research focuses on the production of con-
sensus on the front stage as the contingent
product of individual cognitive judgements.
But, from the point of view of the institutional
construction of auditability, cognition itself
emerges from a more fundamental consensus
produced in back stage arenas. Processes of
consensus formation about evidence and rele-
vant facts precede and make possible the
“cognitive judgements” of individual auditors
which may or may not deviate from one
another. Audit judgement research makes
sense when the system of knowledge is stable
and where the judgements of auditors in rela-
tion to this background stability are the inter-
esting variable. In this respect, audit judgement
research can be regarded as a “normal
science” of audit practice. Lack of consensus
at the individual level may indicate poor train-
ing, and so on, but the system of knowledge is
not usually directly at issue. However, lack of
individual auditor consensus may also indicate
more systematic instabilities in the system of
knowledge. When the system of knowledge is
unstable, as it is in the three cases considered
above, what is of interest is less the process of
consensus formation at the level of the indivi-
dual auditor but those processes by which pro-
cedures and routines, paradigms of auditability,
become institutionalized as the public face of
practice.
To conclude, a sociology of audit technique
(Power, forthcoming) which takes on the “back
stage” of audit knowledge production provides
an alternative to the cognitive tradition. Such a
sociology takes the cognitive claims of audit
practice as expl anandum rather than expl a-
nans (Pinch & Bijker, 1987, p. 24). As recent
themes in the sociology of science suggest,
concepts of evidence, observation, experi-
ment, testability and replication are far from
being stable elements which can be utilized
to explicate audit practice. They are them-
selves the product of processes which mark
out the, often competitive, jurisdictions of
knowledge-producing communities. Making
things auditable is a constant and precarious
project of a system of knowledge which must
reproduce itself and sustain its institutional role
from a diverse assemblage of routines, prac-
tices and economic constraints. It is when
this knowledge system extends its reach into
new areas that this project, and the logic of
auditability which requires facts for its proce-
dures, is most apparent. It is a logic in which
the demand for things to be auditable and for
things to be seen to be auditable are almost
identical:
the more we are concerned with the financial
health of our institutions, the more we must rely on
appearances created by organizations whose very suc-
cess is judged by the appearances they create (van
Maanen & Pentland, 1994, p. 60).
BIBLIOGRAPHY
AAA, A Statement of Basic Accounting Theory (Sarasota, Florida: American Accounting Association,
1966).
AAA, The Auditor’s Report (1993).
AARF, Exposure Draft 49: Accounting for Identifiable Intangible Assets (Sydney: Australian Accounting Research Foundation, 1989).
Abbott, A., The System of Professions: an Essay on the Division of Expert Labour (Chicago: University of Chicago Press, 1988).
28 Latour (1987) makes very similar claims for natural science but with a different metaphor: the two faces of Janus. One face corresponds to "science in the making" and the other represents "ready made science" with its context and process effaced for public consumption.
APC, Reliance on Other Specialists (London: Auditing Practices Committee, 1986).
APC, Practice Note 2: Accounting for Pension Costs under SSAP 24, Liaison Between the Actuary and the Auditor (London: Auditing Practices Committee, 1990).
Armstrong, P., Contradiction and Social Dynamics in the Capitalist Agency Relationship, Accounting, Organizations and Society (1991) pp. 1-26.
Arnold, J., Egginton, D., Kirkham, L., Macve, R. & Peasnell, K., Goodwill and Other Intangibles (London: Institute of Chartered Accountants in England and Wales, 1992).
ASC, Technical Release 780, Accounting for Intangible Fixed Assets (London: Accounting Standards Committee, 1990a).
ASC, Exposure Draft 52, Accounting for Intangible Fixed Assets (London: Accounting Standards Committee, 1990b).
Barwise, P., Higson, C., Likierman, A. & Marsh, P., Accounting for Brands (London: London Business School/ICAEW, 1989).
Benveniste, G., The Politics of Expertise (London: Croom Helm, 1973).
Boland, R., Myth and Technology in the American Accounting Profession, Journal of Management Studies (1982) pp. 109-127.
Bourdieu, P., In Other Words: Essays Towards a Reflexive Sociology, Adamson, M. (transl.) (Cambridge: Polity Press, 1990).
Bowbrick, P., The Economics of Quality, Grades and Brands (London: Routledge, 1992).
Broadbent, J., Laughlin, R. & Shearn, D., Recent Financial and Administrative Changes in General Practice: an Unhealthy Intrusion into Medical Autonomy, Financial Accountability and Management (1992) pp. 129-148.
BSI, Environmental Management Systems (London: British Standards Institution, 1992).
Carpenter, B. & Dirsmith, M., Sampling and the Abstraction of Knowledge in the Auditing Profession: an Extended Institutional Theory Perspective, Accounting, Organizations and Society (1993) pp. 41-63.
CEC, Council Regulation (EEC) No. 1836/93 of 29 June 1993, Allowing Voluntary Participation by Companies in the Industrial Sector in a Community Eco-management and Audit Scheme, Official Journal (June 1993).
Coffey, A., Timing is Everything: Graduate Accountants, Time and Organizational Commitment, Sociology (1994) pp. 943-956.
Collins, H., Changing Order: Replication and Induction in Scientific Practice (London: Sage, 1985).
Coopers & Lybrand, Research Accountability (London: Coopers & Lybrand, 1993).
Cushing, B. E. & Loebbecke, J. K., Comparison of Audit Methodologies of Large Accounting Firms (Sarasota, Florida: American Accounting Association, 1986).
Espeland, W. & Hirsch, P., Ownership Changes, Accounting Practice and the Redefinition of the Corporation, Accounting, Organizations and Society (1990) pp. 77-96.
FASB, Statement of Financial Accounting Concepts No. 2, Qualitative Characteristics of Accounting Information (Stamford, Connecticut: Financial Accounting Standards Board, 1980).
Felix, W. L. & Kinney, W. R., Research in the Auditor's Opinion Formulation Process: State of the Art, The Accounting Review (1982) pp. 245-271.
Fischer, M. J., "Real-izing" the Benefits of New Technologies as a Source of Audit Evidence: an Interpretive Field Study, Accounting, Organizations and Society (1996) pp. 219-242.
Flemming-Ruud, T., Auditing as Verification of Financial Information (Oslo: Norwegian University Press, 1989).
Flint, D., Philosophy and Principles of Auditing (London: Macmillan Education, 1988).
Fogarty, T., The Imagery and Reality of Peer Review in the U.S.: Insights from Institutional Theory, Accounting, Organizations and Society (1996) pp. 243-267.
Francis, J., Auditing, Hermeneutics and Subjectivity, Accounting, Organizations and Society (1994) pp. 235-269.
Hacking, I., Representing and Intervening (Cambridge: Cambridge University Press, 1983).
Harper, R., Notes on the Accounting Character: an Ethnography of Auditing, Unpublished manuscript, University of Lancaster (1991).
HEFC, Accountability for Research Funds (Higher Education Funding Council, 1993).
Hillary, R., The Eco-management and Audit Scheme: a Practical Guide (Letchworth: Technical Communications, 1993).
Hood, C., A Public Management for all Seasons, Public Administration (1991) pp. 3-19.
Humphrey, C. & Moizer, P., From Techniques to Ideologies: an Alternative Perspective on the Audit Function, Critical Perspectives on Accounting (1990) pp. 217-238.
ICC, Effective Environmental Auditing (Paris: ICC Publishing, 1991).
Ijiri, Y. & Jaedicke, R. K., Reliability and Objectivity of Accounting Measurement, The Accounting Review (1966) pp. 474-483.
Johnson, H. T. & Kaplan, R. S., Relevance Lost: the Rise and Fall of Management Accounting (Cambridge, Massachusetts: Harvard Business School Press, 1987).
Kirkham, L., Putting Auditing Practices in Context: Deciphering the Message in Auditor Responses to Selected Environmental Cues, Critical Perspectives on Accounting (1992) pp. 291-314.
Lash, S. & Urry, J., Economies of Signs and Space (London: Sage, 1994).
Latour, B., Science in Action (Milton Keynes: Open University Press, 1987).
Lee, T., Corporate Audit Theory (London: Chapman & Hall, 1993).
McNair, C. J., Proper Compromises: the Management Control Dilemma in Public Accounting and its Impact on Auditor Behaviour, Accounting, Organizations and Society (1991) pp. 635-654.
Mautz, R. K. & Sharaf, H. A., The Philosophy of Auditing (Sarasota, Florida: AAA, 1961).
Napier, C., Brand Accounting in the United Kingdom, in Jones, G. & Morgan, N. (eds), Adding Value: Brands and Marketing in the Food and Drink Industries (London: Routledge, 1994) pp. 76-180.
Napier, C. & Power, M., Professional Research, Lobbying and Intangibles: a Review Essay, Accounting and Business Research (Winter 1992) pp. 85-95.
Osborne, D. & Gaebler, T., Reinventing Government (Reading, Massachusetts: Addison-Wesley, 1992).
Pentland, B., Getting Comfortable with the Numbers: Auditing and the Micro Production of Macro Order, Accounting, Organizations and Society (1993) pp. 605-620.
Pentland, B., Audit the Taxpayer, not the Return: Tax Auditing as an Expression Game, Working paper, John E. Anderson School of Management, UCLA (1994).
Pinch, T. & Bijker, W. E., The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Knowledge Might Benefit Each Other, in Bijker, W., Hughes, T. & Pinch, T. (eds), The Social Construction of Technological Systems, pp. 17-50 (Cambridge, Massachusetts: MIT Press, 1987).
Porter, T., Making Things Quantitative, Science in Context (1994) pp. 389-407.
Power, M., Educating Accountants: Towards a Critical Ethnography, Accounting, Organizations and Society (1991) pp. 333-353.
Power, M., From Common Sense to Expertise: Reflections on the Pre-history of Audit Sampling, Accounting, Organizations and Society (1992a) pp. 37-62.
Power, M., The Politics of Brand Accounting in the United Kingdom, European Accounting Review (1992b) pp. 39-68.
Power, M., Auditing and the Politics of Regulatory Control in the U.K. Financial Services Sector, in McCahery, J., Picciotto, S. & Scott, C. (eds), Corporate Control and Accountability, pp. 187-202 (Oxford: Oxford University Press, 1993a).
Power, M., The Politics of Financial Auditing, The Political Quarterly (1993b) pp. 272-284.
Power, M., The Audit Explosion (London: Demos, 1994a).
Power, M., Expertise and the Construction of Relevance: Accountants, Science and Environmental Audit, in Proceedings of the Interdisciplinary Perspectives in Accounting Conference, Department of Accounting and Finance, University of Manchester (July 1994b).
Power, M., Auditing, Expertise and the Sociology of Technique, Critical Perspectives on Accounting (forthcoming).
Preston, A., Cooper, D. J., Scarbrough, D. P. & Chilton, R. C., Changes in the Code of Ethics of the US Accounting Profession, 1917 and 1988: the Continual Quest for Legitimation, Accounting, Organizations and Society (1995) pp. 507-546.
Puxty, A., Sikka, P. & Willmott, H., Systems of Surveillance and the Silencing of Academic Labour, British Accounting Review (1994) pp. 137-171.
Scott, W. R., Law and Organizations, in Sitkin, S. B. & Bies, R. J. (eds), The Legalistic Organization, pp. 3-18 (Thousand Oaks, California: Sage, 1994).
Shaylor, M., Welford, R. & Shaylor, G., BS 7750: Panacea or Palliative?, Eco-management and Auditing (1994) pp. 26-30.
Sherman, B., Governing Science: Patents and Public Sector Research, Science in Context (1994) pp. 515-537.
Sherwood, K., An Auditor's Approach to Brands, in Power, M. (ed.), Brand and Goodwill Accounting Strategies, pp. 78-86 (Cambridge: Woodhead Faulkner, 1990).
Sikka, P., Puxty, A., Willmott, H. & Cooper, C., The Impossibility of Eliminating the Expectations Gap: Some Theory and Evidence, Working paper, East London Business School (1994).
Sikka, P. & Willmott, H., The Power of Independence: Defending and Extending the Jurisdiction of Accounting in the U.K., Accounting, Organizations and Society (1995) pp. 547-581.
Smith, T., Accounting for Growth (London: Century Business, 1992).
Solomons, D., Making Accounting Policy (New York: Oxford University Press, 1986).
Teubner, G., The Two Faces of Janus: Rethinking Legal Pluralism, Cardozo Law Review (1992) pp. 1443-1462.
Van Gunsteren, H. R., The Quest for Control: a Critique of the Rational-central-rule Approach in Public Affairs (Chichester: John Wiley, 1976).
Van Maanen, J. & Pentland, B., Cops and Auditors: the Rhetoric of Records, in Sitkin, S. & Bies, R. (eds), The Legalistic Organization, pp. 53-90 (Thousand Oaks, California: Sage, 1994).
Weale, A., Vorsprung durch Technik? The Politics of German Environmental Regulation, in Dyson, K. (ed.), The Politics of German Regulation, pp. 159-183 (Aldershot: Dartmouth, 1992).
White Paper, Realizing Our Potential: a Strategy for Science, Engineering and Technology (London: HMSO, Cm 2250, 1993).
Willmott, H., The Auditing Game: a Question of Ownership and Control, Critical Perspectives on Accounting (1991) pp. 109-121.
Wolnizer, P. W., Auditing as Independent Authentication (Sydney: Sydney University Press, 1987).
Yates, J., From Tabulators to Early Computers in the U.S. Life Insurance Industry: Co-evolution and Continuities, Working paper 3618-93, Sloan School of Management, MIT (1993).
