Performance-measurement system design and functional strategic decision influence: The role of performance-measure properties

Martin Artz a,*, Christian Homburg a,b, Thomas Rajab c

a University of Mannheim, 68131 Mannheim, Germany
b University of Melbourne, Victoria 3010, Australia
c The Boston Consulting Group, 60325 Frankfurt am Main, Germany

Abstract
Although conceptual research in the accounting literature suggests that the use of performance-measurement systems affects the influence of organizational actors, empirical evidence for this suggestion is largely limited to anecdotal evidence and a few qualitative case studies. Drawing on institutional theory, we develop predictions that link the use of performance measures to the influence of functional subunits in strategic decision making. Our research model tests the effects of two types of performance-measure use on functional strategic decision influence: (1) decision-facilitating use and (2) use for accountability. Moreover, we propose that the effects of using performance measures for these two purposes depend on the reliability and functional specificity of the measures the functional subunits use. We empirically test our hypotheses and a research question with survey data from 192 marketing directors of German firms. We find that the effect of performance-measure use on functional strategic decision influence depends on the two properties of the performance measures. We find no significant effects when these properties are not considered. However, decision-facilitating use of performance measures has a positive effect on functional strategic decision influence when the measures are specific to the functional subunit. With respect to the use of performance measures for accountability we find countervailing effects, as the effect on functional strategic decision influence is positive when the measures are more reliable but negative when they are more specific to the functional subunit. We discuss these findings in light of existing evidence and theory.

© 2012 Elsevier Ltd. All rights reserved.
Introduction
Research in managerial accounting suggests that important links exist between the use and design of accounting systems and strategic decision making within organizations (Abernethy & Vagnoni, 2004; Chenhall & Langfield-Smith, 1998; Lillis & van Veen-Dirks, 2008; Luft & Shields, 2003). In particular, one major research stream explores how the use of accounting information relates to organizational actors' influence on the strategic priorities of organizations. Conceptual work has studied this research issue intensively, stating, for example, that influence in strategic decision making derives from using accounting information that allows adaptation to uncontrollable events (Bariff & Galbraith, 1978; Saunders, 1981). Other studies assert that power structures depend on accounting information to confer legitimacy on decisions and actions and to shape attitudes and beliefs about their rationality (Ansari & Euske, 1987; Markus & Pfeffer, 1983). Surprisingly, however, only limited empirical research has followed up the rich body of conceptual spadework, and extant investigations rely primarily on anecdotal evidence and a few qualitative case studies (Abernethy & Chua, 1996; Markus & Pfeffer, 1983; Wickramasinghe, 2006).
In particular, prior research does not address two important issues. First, ambiguity remains as to what association, if any, exists between the use of accounting systems and the distribution of strategic decision influence within organizations (Abernethy & Vagnoni, 2004). Second, both conceptual and empirical investigations widely neglect the role of information characteristics for the associations in question. As information characteristics affect both decision-making quality (Ittner & Larcker, 2001) and evaluators' judgments of performance information (Lipe & Salterio, 2000), an important question is whether the use of accounting information affects influence structures per se or whether its impact depends on specific information properties.

[Accounting, Organizations and Society 37 (2012) 445–460; doi:10.1016/j.aos.2012.07.001. © 2012 Elsevier Ltd. All rights reserved. Corresponding author: M. Artz, tel.: +49 621 181 1553; e-mail: [email protected].]
We study these research issues in the context of the distribution of power among functional subunits in profit-oriented organizations, such as the marketing, operations, or research and development functions. These functions represent central organizational actors and they often pursue countervailing interests in important strategic decision areas (Bariff & Galbraith, 1978; Ortega, 2003), with the primary arena of functional competition for strategic influence being the top management team, which includes the company's most senior decision makers (Fligstein, 1987; Jensen & Zajac, 2004). The most widely observed top management team structure entails a functionally differentiated assignment of responsibilities among the most senior executives representing each function as, for instance, the Vice President (VP) Marketing, the VP Operations, or the VP Finance (Carpenter, 2011; Henri, 2006). In the struggle for strategic influence, the heads of each function strive to gather support for their strategic initiatives from their functional peers and from the Chief Executive Officer (CEO), who presides over the management board (Hambrick & Mason, 1984).
The focal construct of our study is functional strategic decision influence, which refers to the extent to which a functional subunit, represented at the top executive level by the functional VP, has influence over the organization's strategic priorities and the use of strategic resources (Abernethy & Vagnoni, 2004, p. 216). Functional subunits represent important organizational actors that often pursue opposing interests in important strategic decision areas (Ortega, 2003), and the question of how much a particular function can influence strategic decision making has major implications for the organization's strategic trajectory and approach to market challenges. For instance, the R&D function might lobby for a strategic emphasis on product sophistication and quality, whereas the operations function might advocate focusing on production costs and efficiency.
Drawing on institutional theory as a theoretical foundation, we propose that the use of performance measures for various purposes and with different properties in a given function may affect the functional VP's influence over strategic decision making at the top executive level. In particular, we consider two types of performance-measure use. First, prior literature has identified a decision-facilitating demand for managerial accounting information (Demski, 2008; Demski & Feltham, 1976), which refers to the need for accounting information for planning and decision making. Second, investigators have found both analytical and experimental evidence for an accountability demand, which refers to the need for accounting information to document performance and contribution to organizational value (Birnberg, Hoffman, & Yuen, 2008; Evans, Heiman-Hoffman, & Rau, 1994). Our study investigates how the use of performance measures for decision facilitation and accountability within a particular functional subunit affects the functional subunit's strategic decision influence. Moreover, we investigate whether and how performance-measure properties affect the associations between performance-measure use and functional strategic decision influence. We analyze two properties that seem particularly relevant in a functional context: performance-measure reliability and functional specificity.
We empirically test our predictions using a cross-industry sample of 192 marketing directors of German firms. Results support our expectation that the purported effects depend on the properties of the performance measures the functions employ. While model estimations show no significant effects without taking interaction effects into account, we find evidence that the effect of decision-facilitating use of performance measures on functional strategic decision influence is positive and significant for high levels of performance-measures' functional specificity. With respect to the use of performance measures for accountability, we find that the effect is significantly positive for high levels of reliability and negative for high levels of functional specificity.

Our investigation extends the scant body of extant empirical research on the relationship between performance-measurement system design and power structures within organizations. We find that performance-measure properties play an important role that prior studies have overlooked. Further, in light of analytical and experimental support for the accountability demand for accounting information (Birnberg & Zhang, 2010; Birnberg et al., 2008; Evans et al., 1994), our study provides evidence for the importance of the accountability demand for functional power structures, thus extending this nascent stream of empirical research to an examination in an organizational context. Finally, our study substantially broadens the empirical basis of prior research in this area of inquiry by relying on a large sample of business companies across several industries.
Theoretical background and hypotheses
Our research model, presented in Fig. 1, hypothesizes that functions relying more strongly on the use of performance measures will attain greater influence in strategic decision making because they conform to institutionalized expectations within the organization (DiMaggio & Powell, 1983; Meyer & Rowan, 1977) and favorably shape the image perceptions of other top executives (Ansari & Euske, 1987; Markus & Pfeffer, 1983). We predict that the use of performance measures for decision facilitation and accountability positively affects functional strategic decision influence.[1] Additionally, we hypothesize that the strength of these effects depends on two properties of the performance measures that seem particularly relevant in a functional context. Reliability of performance measures refers to the amount of noise in performance indicators (Banker & Datar, 1989; Merchant & Van der Stede, 2012), or the "quality of information that assures that information is reasonably free of error and bias and faithfully represents what it purports to represent" (Christensen & Demski, 2003, p. 427).[2] We hypothesize that the association between performance-measure use and functional strategic decision influence increases with greater reliability of the performance measures. Functional specificity of performance measures refers to the extent to which the measures are unique to a particular function (Arya, Glover, Mittendorf, & Ye, 2005; Chenhall & Langfield-Smith, 2007). In contrast to general measures of financial performance, business functions may rely on idiosyncratic metrics, such as lead time and inventory turnover in production management (van Veen-Dirks, 2010) or customer satisfaction and quality indices in marketing (Arya et al., 2005). We develop the hypothesis that the association between performance-measure use for accountability and functional strategic decision influence decreases when the measures are more specific to the function. With respect to decision-facilitating use of performance measures, underlying theory provides no clear rationale for predicting the effect of functional specificity. Therefore, we pose a research question (RQ) asking whether the association between performance-measure use for decision facilitation and functional strategic decision influence will increase or decrease with greater functional specificity of the performance measures. Moreover, our model controls for a large set of covariates, including functional performance.

[1] Prior research has also identified a decision-influencing demand, which relates to the role of accounting information to incentivize and control employees. As we describe below, decision-influencing use is more relevant to solving control problems on the local level within the function. Therefore, our model treats decision-influencing use of performance measures as a control variable without testing specific hypotheses.
Following prior studies, we define functional strategic decision influence as the influence a functional subunit has on the strategic priorities and the use of strategic resources of the organization (Abernethy & Vagnoni, 2004; Homburg, Workman, & Krohmer, 1999). In our research model, this construct considers the influence of functional subunits on eight strategic decision areas, such as the company's strategic direction, new product development, major capital expenditures, or the choice of strategic business partners. Prior research has typically focused on organizational and environmental factors as antecedents of functional strategic decision influence. For instance, studies find that a firm's strategic focus and top management team composition, as well as environmental uncertainty in the market, affect functional weight in strategic decision making (Homburg et al., 1999; Verhoef & Leeflang, 2009).

In the management accounting literature, we are aware of only one investigation that has examined how the use of accounting systems affects the strategic decision influence of organizational actors (Abernethy & Vagnoni, 2004). That study investigates the association between the use of accounting systems and the strategic decision influence held by physicians in health care organizations, but it finds no significant links. The absence of significant relationships is surprising, because studies widely recognize that the use of accounting systems and influence structures within organizations are closely entwined (Covaleski & Dirsmith, 1986; Kurunmäki, 1999; Markus & Pfeffer, 1983). Against this background, our study aims to contribute to the understanding of the link between the use of accounting information and influence structures in organizations. In the following, we develop hypotheses for the associations purported in our research model and for the effects of the interacting variables.

Fig. 1. Research model and hypotheses. np = not predicted; RQ = research question. Dashed arrows denote relationships of control variables for which no explicit hypotheses were derived.

[2] Reliability as defined here relates closely to Ijiri's (1975, p. 36) criterion of "hardness." A hard measure is one "constructed in such a way that it is difficult for people to disagree." Highly reliable measures (i.e., measures free of error and bias) can be expected to also exhibit a significant degree of "hardness."
Hypotheses development for main effects
Institutional theory affords a viable theoretical foundation to substantiate our research propositions. While originating from the analysis of interfirm relationships, this research stream has provided a theoretical basis for the examination of a wide array of interpersonal and intraorganizational phenomena (Davis & Marquis, 2005; Scott, 2005), and it has frequently served as a theoretical framework for empirical studies in the accounting literature (Abernethy & Chua, 1996; Brignall & Modell, 2000). This theory posits that normative pressures exerted by external and internal constituencies profoundly affect organizational behavior (DiMaggio & Powell, 1983; Zucker, 1987). These pressures entail processes by which certain norms, rules, and routines become established and accepted, thus forming an institutional environment for thought and action of organization participants (Scott, 2001). Institutional theory links this key tenet to intraorganizational distribution of power through the concept of organizational legitimacy, which is a "generalized perception or assumption that the actions of an entity are desirable, proper, or appropriate within some socially constructed system of norms, values, and beliefs," providing "a meaningful normative and cognitive force that may empower or constrain organizational actors" (Suchman, 1995, p. 574). In the institutional framework, legitimacy derives from conformity with the normative expectations of the institutional environment (Meyer & Rowan, 1977). In turn, organizational actors that attain legitimacy will more easily gather support from their internal constituencies and influence managerial decision making in their favor (Pfeffer, 1981).
Our study assumes that the institutional environment within a profit-oriented organization exhibits a particularly strong normative demand for rational and effective management practices (Miller, 1994). This assumption seems especially relevant for top decision-making circles, as upper echelon executives should have internalized the principles of effective and rational management they were taught during their executive education and professional careers. Thus, we would expect the institutional environment of top management executives to exhibit a significant normative expectation for rational and comprehensible decision-making processes. Moreover, prior research has provided both analytical and experimental evidence that suggests an institutionalized expectation for accountability (Birnberg et al., 2008; Evans et al., 1994). Since any functional subunit consumes the organization's resources, it must account for the efficient use of these resources and establish its contribution to the organization's success (Pfeffer, 1981).
We argue that the more functions use performance measures for decision making and accountability, the more they conform to these institutionalized expectations and, as a result, exert greater strategic decision influence within the organization. Performance measurement can play an important role in reifying qualities of effective and rational management (Ansari & Euske, 1987). Further, a rich body of literature highlights accounting's ability to act as a "legitimating institution" (Richardson, 1987, p. 341; Wolk, Francis, & Tearney, 1999) and enhance the legitimacy of individual or group activities (Markus & Pfeffer, 1983; Moll, Burns, & Major, 2006). Additionally, accounting information is commonly believed to reflect verifiable information (Snavely, 1967) and to "embody the ideology of rational decision making" (Markus & Pfeffer, 1983, p. 207). Thus, functional VPs who benefit from legitimacy conferred by their function's performance-measurement practices may influence strategic decisions more strongly.
A closely related argument advanced by investigators is that performance-measurement system design affects image perceptions of other organizational actors. For instance, Markus and Pfeffer (1983, p. 207) state that "accounting and control systems are symbols, suggesting images of the organizations in which they exist." A function that follows appropriate practice models and procedures for decision making, such as relying on performance measures, will likely enjoy a better image. Furthermore, the use of performance measures for accountability may convince potential sources of support that the function's specific tasks and achievements are substantial and important, thus contributing to favorable image perceptions (Pfeffer, 1981).
To summarize the above analysis, theory suggests that VPs advocating their functions may have less difficulty promoting strategic initiatives at the top management level when their functions rely more intensively on performance measurement. Therefore, our first hypothesis states that a positive association exists between the use of performance measures for decision making and accountability and functional strategic decision influence:

H1a. A positive association exists between the decision-facilitating use of performance measures and functional strategic decision influence.

H1b. A positive association exists between the use of performance measures for accountability and functional strategic decision influence.
Notably, when designing their performance-measurement systems, functional managers may not necessarily have the intention to affect image perceptions of their function. Some managers may rely on performance measurement to demonstrate to their top management peers that the function is well run and rationally managed. Others may do so because they are concerned about their function's performance and see performance measures as an effective management tool. Moreover, while studies suggest that most functions participate substantially in designing their performance-measurement system (Gerdin, 2005; Wouters & Wilderom, 2008), other actors like the IT department or the controller's office are typically involved in the development process, and functions' control over the design and use of their performance-measurement system may be limited. As we explicate below, our model controls for both functional performance and the extent of functional self-participation in designing their performance-measurement system. This approach allows us to assess whether performance-measure use affects functional strategic decision influence beyond effects attributable to functional performance, and it takes into account that functions' control over the development of their performance-measurement system varies across firms.
Hypotheses development for interaction effects
One of the main research questions of our study is whether performance-measure use affects functional strategic decision influence per se or whether the measures must have particular properties to generate the purported effects. In empirical accounting research, the consideration of information properties can reveal interesting patterns in the data which an analysis over the sample average would be unable to detect (Merchant & Van der Stede, 2012; Moers, 2006). Moreover, from a methodological perspective, empirical models might be underspecified without taking performance-measure properties into account (Ittner & Larcker, 2001). Our model considers two properties that seem particularly relevant in the context of functional subunits: reliability and functional specificity.
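To illustrate the kind of moderated specification this implies, the following sketch fits an ordinary least squares model with a use × property interaction on simulated data. All variable names, coefficients, and data are hypothetical; this does not reproduce the study's actual survey-based estimation or its covariate set.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 192  # sample size chosen to match the study's 192 respondents

# Hypothetical standardized predictors: decision-facilitating use and
# functional specificity of the measures (illustrative names only).
use_df = rng.standard_normal(n)
specificity = rng.standard_normal(n)

# Simulated outcome: influence depends on use ONLY through its
# interaction with specificity (slope is zero at average specificity).
influence = 0.5 * use_df * specificity + rng.standard_normal(n)

# OLS with an interaction term: influence ~ use + spec + use:spec
X = np.column_stack([np.ones(n), use_df, specificity, use_df * specificity])
beta, *_ = np.linalg.lstsq(X, influence, rcond=None)

print(f"main effect of use: {beta[1]: .2f}")  # near zero
print(f"interaction effect: {beta[3]: .2f}")  # near 0.5

# Simple-slope view: the effect of use at high (+1 SD) specificity
print(f"slope at +1 SD specificity: {beta[1] + beta[3]: .2f}")
```

The point of the sketch is that an analysis over the sample average (the main-effect coefficient alone) can miss an effect that only emerges once the measure property enters as a moderator.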
Performance-measure property: reliability
Because imprecise measures lose much of their information value, reliability is an important quality for performance measures (Merchant & Van der Stede, 2012). Thus, we expect this property to condition the effects of the use of performance measures for both decision facilitation and accountability. Decision-facilitating use considers performance measures to be an important input for economic judgments and decisions (Demski, 2008) and to reduce pre-decision uncertainty (Christensen & Demski, 2003). For instance, using product cost data may help managers ensure appropriate pricing and product-emphasis decisions, and analyzing standard cost variances can help determine the sources of deviations from planned performance. Such application of performance measures aims to facilitate consistent framing of decisions, and it supports the illustration of decision alternatives and their expected effects. Drawing on more precise measures for decision making provides better insight into cause-and-effect relationships (Emsley, 2000) and enables managers to reach better informed decisions (van Veen-Dirks, 2010). Thus, since employing measures with higher reliability may contribute more positively to the function's legitimacy and favorable image, we expect the use of more reliable measures to have a stronger positive effect on functional strategic decision influence.
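As a toy illustration of the variance analysis mentioned above (all numbers are invented), a standard cost variance splits the total deviation from plan into a price and a quantity component:

```python
# Classic standard-cost variance decomposition (illustrative numbers):
# total variance = price variance + quantity variance.
std_price, std_qty = 4.0, 1000   # budgeted price per unit of input, units
act_price, act_qty = 4.5, 950    # actual price and usage

price_variance = (act_price - std_price) * act_qty    # unfavorable if > 0
quantity_variance = (act_qty - std_qty) * std_price   # favorable if < 0
total_variance = act_price * act_qty - std_price * std_qty

print(price_variance)     # 475.0  (spent more per unit of input)
print(quantity_variance)  # -200.0 (used fewer units than budgeted)
print(total_variance)     # 275.0  = 475.0 - 200.0
```

The decomposition shows how such data serve the decision-facilitating purpose: the manager learns that overspending came from input prices, not from usage.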
A similar reasoning would seem to apply to the use of performance measures for accountability. In this case, performance measures serve to account for differential budget spending as well as functional performance and value contribution. Investigators suggest that measures employed for accountability should be indisputable and should leave different evaluators with little reason to disagree over interpretations (Evans et al., 1994; Ijiri, 1975). In this regard, reliability, or the absence of noise and bias, is an essential quality, because reliable measures are less contestable and benefit from higher credibility when presented to outside evaluators (Malina & Selto, 2001). Evidence from a recent longitudinal field study in the context of organizational change supports this line of reasoning, showing that reliability is an important property of accounting information because it makes this information "harder" and therefore more persuasive when presented to organizational stakeholders (Rowe, Shields, & Birnberg, 2012).
From these arguments, we derive the second hypothesis of our study, stating that using reliable performance measures strengthens the effect of both use for decision facilitation and use for accountability on functional strategic decision influence:

H2a. The association between the decision-facilitating use of performance measures and functional strategic decision influence increases with greater reliability of the performance measures.

H2b. The association between the use of performance measures for accountability and functional strategic decision influence increases with greater reliability of the performance measures.
Performance-measure property: functional specificity
Assessing the effects of functional specificity is more complex, especially as previous research indicates that function-specific performance measurement may have either a positive or a negative effect. With respect to performance-measure use for accountability, we expect that functional specificity of performance measures weakens the potential positive effect on functional strategic decision influence. Specialized performance measures tend to focus on functional operations without having explicit links to the company as a whole and are therefore less likely to be congruent with organizational goals (Datar, Kulp, & Lambert, 2001; Feltham & Xie, 1994). In addition, experimental evidence indicates that measures common to multiple subunits dominate evaluators' judgments of performance information, whereas measures unique to particular subunits are underused or even ignored (Banker, Chang, & Pizzini, 2004; Lipe & Salterio, 2000), a behavioral phenomenon referred to as common measures bias (Humphreys & Trotman, 2011). As a result, specific measures may be less effective, or may be perceived as less effective by outside parties, when the purpose is to account for the function's performance and value contribution. Thus, we derive a third hypothesis, stating that the higher the functional specificity of performance measures, the weaker the effect of the use for accountability on functional strategic decision influence:

H3. The association between the use of performance measures for accountability and functional strategic decision influence decreases with higher functional specificity of the performance measures.
Interaction effects with respect to decision-facilitating use are more difficult to assess. Function-specific measures are more likely to be perceived as decision-relevant in the functional context (Arya et al., 2005). Relevance of accounting information is "the capacity of information to make a difference in a decision by helping users to form predictions about the outcome of past, present, and future events or to confirm or correct prior expectations" (Christensen & Demski, 2003, p. 427). Decision-relevant performance measures might strengthen the effect of decision-facilitating use, because such performance indicators will be more helpful in solving the function's particular decision problems (Mayston, 1985). Therefore, a reasonable conclusion is that decision-facilitating use of function-specific measures would contribute positively to legitimacy and image perceptions. This line of reasoning suggests that the association between decision-facilitating use and functional strategic decision influence should increase with higher functional specificity of performance measures.

On the other hand, observers outside the function might perceive the use of function-specific performance measures as less congruent with organizational goals and as indicating a parochial, self-centered perspective on the function's management practices. In that case, the use of function-specific measures for decision making might raise doubts as to whether the practice will elicit organizationally desirable decisions and actions. This line of reasoning suggests that the association between decision-facilitating use of performance measures and functional strategic decision influence should decrease with higher functional specificity of the measures. Succinctly stated, no clear rationale exists for making inferences about the interaction effect of functional specificity. Therefore, we pose the following research question and let the empirics shed light on this effect:

RQ. Does the association between decision-facilitating use of performance measures and functional strategic decision influence decrease or increase with higher functional specificity of the performance measures?
Research design and method
Sample selection
To facilitate the collection of an appropriately sized sample and to control for function-specific, unobserved heterogeneity, we focused our study on one functional subunit per organization. We chose the marketing function for the following reasons.[3] First, most firms have a marketing function, allowing us to survey a representative sample of manufacturing and service firms across different industries. Second, marketing is a separate, clearly defined business function often represented in the top management team. Third, performance measurement in the marketing function is a topic of current interest in academia given the rising importance of intangible, market-related assets (Lev, 2001; Wyatt, 2008). Fourth, from an empirical perspective, the marketing function provides a suitable research setting that offers sufficient variation, in both performance-measurement system design and functional strategic decision influence, to allow analyses of our research issues.
As the data required to test our research propositions are not
available from archival databases, the survey method seems
appropriate for our research. Following the guidelines of the
Tailored Design Method (Dillman, Smyth, & Christian, 2009), we
collected survey data by mail from top executives responsible
for the marketing function in German business firms. From a
commercial data provider, we obtained a company database that
reflects the regional, industry, and firm size distribution of
the country. We were able to identify the VP Marketing for 2200
firms. Subsequently, we sent a questionnaire with a
personalized letter to these executives, emphasizing the
importance of their participation and assuring the
confidentiality of their answers. As an incentive for
participation, we offered a benchmarking report of our study
results and two free papers from our university working paper
series. To encourage response, we followed up with telephone
calls after six weeks. We received 260 usable questionnaires,
resulting in a response rate of 12%. Given our focus on members
of the top management team, the response rate appears to be
satisfactory and in line with prior accounting research
addressing top executives.⁴
We performed two tests for potential self-selection biases in
the sample. The first test was for non-response bias at the
level of the individual participants. Comparing the earliest
and latest thirds of respondents showed that no variable
differed at a significance level of 5%. The second test
comprised an analysis of whether the firms we initially
contacted systematically differed from the responding firms in
terms of industry affiliation or firm size. We used χ²
goodness-of-fit tests to compare the distributions of our
sample and the original population and found no significant
differences (p = .97 for industry; p = .80 for number of
employees).⁵
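This kind of representativeness check can be sketched as a χ² goodness-of-fit test of the sample's industry counts against the population shares. The counts and shares below are illustrative stand-ins, since the underlying distributions are not reported:

```python
import numpy as np

# Hypothetical population shares (from the provider database) and observed
# sample counts over six illustrative industry groups (n = 192 total).
population_shares = np.array([0.15, 0.09, 0.08, 0.25, 0.20, 0.23])
observed = np.array([29, 17, 15, 46, 40, 45])
expected = population_shares * observed.sum()

# Pearson chi-square statistic with df = 6 - 1 = 5
chi2_stat = np.sum((observed - expected) ** 2 / expected)
critical_5pct = 11.07  # chi-square critical value at 5%, df = 5
print(chi2_stat < critical_5pct)  # True -> no significant difference
```

A statistic below the critical value (equivalently, a large p-value such as the reported p = .97) indicates the sample distribution does not differ significantly from the population.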
³ Our sample comprised top management team members representing
the marketing/sales function. For simplicity, we henceforth
refer to the ''VP Marketing'' and the ''marketing function''
(see also Jaworski & Young, 1992).
⁴ Some survey studies in the accounting literature report
response rates of 80% and more. However, many of these response
rates are based on convenience samples and are not comparable
to ours (Van der Stede, Young, & Chen, 2005), restricting the
generalizability of inferences drawn from the data analysis.
Further, our focus on a member of the top management team might
lead to lower response rates than surveys of middle-level
managers or accountants. Response rates of top management
informants reported in prior studies were similar to ours
(Widener, 2006; Young, 1996).
⁵ This result also holds after discarding responses of key
informants other than the VP Marketing as well as
questionnaires with missing values, as described below.
450 M. Artz et al. / Accounting, Organizations and Society 37 (2012) 445–460
Key informant reliability and data validation
Key informant reliability is a critical issue in survey-based
research, and our study design is at risk for two potential
biases in variable measurement. First, evaluating functional
strategic decision influence within the firm requires
substantial experience in organizational decision making.
Second, self-evaluations of functional VPs might be subject to
biased perceptions and overconfidence. For these reasons, we
took several measures during the administration of the survey
to improve the reliability and validity of our data. At the
outset, we collected data from top management team members,
yielding a sample of highly experienced executives who reported
an average professional experience of 18.1 years. Additionally,
to obtain the highest possible degree of comparability between
individual answers, we excluded questionnaires that were
answered by a person other than the VP Marketing. Finally, we
discarded questionnaires that had missing values. The final
sample comprised 192 responses, all answered by VPs of the
marketing function. Table 1 reports the sample distribution by
industry and firm size.
To provide further evidence of the validity of our data, we
collected responses from a second key informant group to assess
convergent validity and address potential self-evaluation
biases. In our study, self-evaluation biases would seem
problematic for three constructs. That is, the marketing
executives might overstate their function's strategic decision
influence, the reliability of their function's performance
measures, and their function's performance. For 71 of those
firms whose VP Marketing responded, we were able to obtain
additional data for these three constructs from a second key
informant. For these firms, the Head of Management Accounting
provided a second assessment of the strategic influence, the
reliability of performance measures, and the overall
performance of the marketing function. The responses of these
second key informants served to validate the responses of the
marketing executives. Both informants were asked not to share
their answers prior to completing the survey.
To assess the consistency of responses of these two informant
groups, we computed the r_wg(j) index, which is established in
the literature on triangulation methods in survey-based
research (Finn, 1970). The r_wg(j) index for a construct j
ranges from 0 to 1, with values above .7 indicating acceptable
consistency and values above .8 considered excellent (LeBreton,
Burgess, Kaiser, Atchley, & James, 2003). For the three
constructs in question, all r_wg(j) indices exceeded the
threshold value of .8 (functional strategic decision influence:
r_wg(j) = .91; reliability of function's performance measures:
r_wg(j) = .87; functional performance: r_wg(j) = .95). In
summary, our analyses suggest that key informant biases should
not be a major issue for our data.
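A minimal sketch of the multi-item within-group agreement index, following the standard r_wg(j) formula of James, Demaree, and Wolf (1984) with a uniform null distribution; the two-informant item scores below are hypothetical:

```python
import numpy as np

def rwg_j(ratings: np.ndarray, scale_points: int = 7) -> float:
    """Multi-item agreement index r_wg(j) for one group of raters.

    ratings: raters x items array (here: VP Marketing and Head of
    Management Accounting rating the same multi-item construct).
    """
    # Expected item variance under a uniform (no-agreement) null
    sigma_e = (scale_points ** 2 - 1) / 12.0
    # Mean observed item variance across the raters
    s_bar = np.mean(np.var(ratings, axis=0, ddof=1))
    ratio = s_bar / sigma_e
    j = ratings.shape[1]
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

# Hypothetical 7-point ratings from the two key informants on a
# four-item construct
ratings = np.array([[6, 5, 6, 7],
                    [6, 6, 5, 7]])
print(round(rwg_j(ratings), 2))  # 0.98 -> excellent consistency
```

Values close to 1 indicate near-identical answers from the two informant groups, consistent with the reported indices of .87 to .95.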
Construct measurement
Our research model comprises five main constructs and several
control variables. We pre-tested our measurement instruments
with 15 marketing and accounting managers in 10 companies. The
pre-test aimed to assess the appropriateness of our
questionnaire design and to verify whether the items captured
the relevant dimensions of our constructs and whether
practitioners found the survey items understandable and
plausible. The managers' feedback led to minor changes in the
format of the questionnaire and the wording of individual
items. To assess measurement reliability, we computed
Cronbach's alpha to evaluate the internal consistency of the
constructs. All constructs exceeded the threshold value of .7
proposed in the literature (Nunnally, 1978). To assess
discriminant validity, we tested our data against the criterion
proposed by Fornell and Larcker (1981), which demands that the
average variance extracted of each factor exceed the squared
correlations between this factor and all other constructs. All
constructs passed this test. Tables 2 and 3 provide a
correlation matrix and an overview of the descriptive and
reliability statistics of all variables. In the following, we
offer brief descriptions of the measurement instruments. The
scale items for all constructs appear in the Appendix.
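As one illustration of the internal-consistency check, Cronbach's alpha can be computed directly from an item-response matrix; the responses below are simulated for illustration, not the authors' data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an n_respondents x n_items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale sum
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: four items driven by one latent factor plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 4))

print(cronbach_alpha(items) > 0.7)  # True: exceeds the .7 threshold
```

Higher inter-item correlation pushes alpha toward 1; the .7 cutoff cited from Nunnally (1978) is the conventional acceptability threshold.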
Functional strategic decision influence
We measure functional strategic decision influence with an
instrument adopted from Abernethy and Vagnoni (2004), which
refers to the extent to which a functional subunit has
influence on eight strategic decision areas within the
organization, such as new product development, expansion to new
markets, major capital expenditures, or the choice of strategic
business partners.
Function’s use of performance measures
Prior literature in the functional context has mainly focused
on control system design instead of the actual use of
performance measures (Abernethy, Bouwens, & van Lent, 2004).
Therefore, we developed two novel constructs. With regard to
decision-facilitating use, we asked to what extent the
marketing function employed performance measures for (1)
decision making, (2) budget allocation, (3) variance analyses
with regard to planned performance, and (4) tracking progress
toward pre-defined goals.⁶ For performance-measure use for
accountability, we developed a new measure based on the work of
Evans et al. (1994). The instrument relies on three items that
reflect the extent to which the functions use performance
measures to account for differentiated budget spending,
functional performance, and contribution to organizational
performance in quantitative terms.

Table 1
Sample distributions by industry and firm size.

                                 # Observations      %
Panel A: sample by industry
Automotive                            29           15.1
Financial services                    17            8.9
Chemicals                             15            7.8
Pharmacy                               4            2.1
Retailing                             11            5.7
Mechanical engineering                32           16.7
Consumer goods                        14            7.3
Utilities                             13            6.8
Metal production                       7            3.7
Electronics                           18            9.4
Logistics and transportation          10            5.2
Building and construction              9            4.7
IT and telecommunication               8            4.2
Other                                  5            2.6
Sum                                  192          100

Panel B: sample by employees
Fewer than 500 employees              24           12.5
500–749 employees                     74           38.5
750–999 employees                     29           15.1
1000–2499 employees                   46           24.0
2500–4999 employees                   11            5.7
5000–9999 employees                    7            3.7
More than 10,000 employees             1             .5
Sum                                  192          100
Performance-measure properties
We developed the items for functional specificity from the work
of Arya et al. (2005) and Lipe and Salterio (2000). The scale
consists of four indicators that measure the extent to which
performance measures are linked to disaggregated
marketing-related objectives and activities. We surveyed
reliability with two items that assess whether the performance
measures the function employs are precise and actually
represent what they purport to represent (Christensen & Demski,
2003; Ittner & Larcker, 2001).
Table 2
Intercorrelation matrix.

                                                 1    2    3    4    5    6    7    8    9   10   11   12   13   14
1  Functional strategic decision influence       –
2  Decision-facilitating use                    .36  .68
3  Use for accountability                       .24  .50  .77
4  PM reliability                               .33  .49  .25  .69
5  PM functional specificity                    .31  .47  .25  .57  .80
6  Firm size                                    .06  .04 −.06  .07  .04  .61
7  Decision-influencing use                     .29  .47  .34  .33  .22 −.01   –
8  Functional background of the CEO             .10  .07  .12  .08  .12 −.05 −.04   –
9  Market environment uncertainty               .04  .00  .05 −.21 −.10 −.13  .03 −.01   –
10 Strategic focus differentiation              .36  .23  .21  .22  .23 −.03  .17  .16  .05   –
11 Strategic focus cost leadership              .16  .20  .23  .18  .31  .00  .24  .05  .04  .15   –
12 Functional performance                       .26  .13  .03  .26  .21 −.03  .05  .11 −.09  .34  .09  .73
13 Functional self-participation in PMS design  .04  .02  .00  .08  .04 −.05 −.11 −.05 −.16 −.06 −.02  .09   –
14 VP Marketing professional experience         .15  .03  .05 −.09 −.07 −.09 −.01  .11  .06  .25  .08  .12 −.14   –

Note: PM = performance measure; PMS = performance-measurement
system; VP = Vice President. Sample based on n = 192 firms.
Absolute values of correlation coefficients above .13 (.17) are
significant at the 5% (1%) level. Diagonal entries (in bold)
denote the square root of the average variance extracted (not
calculated for index-based constructs). For adequate
discriminant validity, the Fornell–Larcker criterion requires
that each diagonal entry exceed the corresponding off-diagonal
entries. All constructs pass this test.
Table 3
Psychometric quality assessment.

Construct                                     Theoretical range  Empirical range  Mean    SD     Cronbach's alpha
Functional strategic decision influence       1–7                2.0–7             4.92    .96   Index
Decision-facilitating use                     1–7                1.25–7            5.00   1.03   .77
Decision-influencing use                      1–7                1.0–7             4.72   1.13   .81
Use for accountability                        1–7                1.0–7             4.52   1.18   .70
Performance-measure reliability               1–7                1.7–7             5.27   1.09   .77
Performance-measure functional specificity    1–7                2.0–7             4.82   1.09   .73
Firm size^a                                   NA                 250–34,040        1310   2633   Single item
Functional background of the CEO^b            0/1                0/1               NA     NA     Dummy
Market environment uncertainty                1–7                1.5–6.0           3.62    .97   Index
Strategic focus differentiation               1–7                1.8–7             5.43    .98   Index
Strategic focus cost leadership               1–7                2.0–7             4.59   1.05   Index
Functional performance                        −3 to +3           −2.2 to 3         1.07   1.00   .88
Industry affiliation^c                        0/1                0/1               NA     NA     Dummy
Functional self-participation in PMS design   0–100              0–80             31.74  17.67   Single item
VP Marketing prof'l experience                NA                 1–62             18.12   8.38   Single item

Note: NA = not applicable; PMS = performance-measurement
system; SD = standard deviation; VP = Vice President.
^a Firm size is measured by the natural logarithm of the number
of employees.
^b Measured by a dummy variable; 1 indicates a marketing
background and 0 otherwise.
^c Measured by a dummy variable indicating the industry
affiliation of the company (see Table 1, panel A).
⁶ As items 3 and 4 could potentially refer to
decision-influencing use, we ran a principal component analysis
over all eight items for both types of use. Items 3 and 4
loaded strictly on the first component (construct
decision-facilitating use) as intended, and not on the second
component (construct decision-influencing use). Additionally,
we ran all analyses excluding both items when measuring the
construct decision-facilitating use. The results remained
stable.
Control variables
To control for contextual factors that might affect the
dependent variable of our model, we include a series of
covariates. The performance-measurement literature recommends
that empirical research consider firm-related factors and
industry characteristics (Chenhall, 2003; Ittner & Larcker,
2001). We therefore control for firm size and industry
affiliation, which prior research regards as important context
factors (Chapman, 1997; Waterhouse & Thiessen, 1978).
Additionally, we control for the functional background of the
CEO, which may affect decision processes at the top management
level (Glaser, Lopez-de-Silanes, & Sautner, 2012). We measure
firm size using the natural logarithm of the number of
employees, and we assess industry affiliation and the CEO's
background by dummy variables that indicate the specific
industry type and primary functional background of the CEO. We
predict that when the CEO has a marketing background, the
marketing function should wield more influence. With respect to
firm size and industry affiliation, we make no specific
predictions about the signs of effects.
We also control for firms' strategic focus and the level of
market environment uncertainty. We assess these variables with
scales developed on the basis of prior studies in the
literature (Duncan, 1972; Govindarajan & Gupta, 1985). The
instrument for strategic focus considers the characteristics of
cost leadership and differentiation strategies and allows these
dimensions to vary independently rather than classifying firms
as either cost leadership or differentiation strategy
archetypes.⁷ From a theoretical perspective, market environment
uncertainty and a differentiation strategy should relate
positively to the influence of the marketing function, because
these conditions emphasize exploration of evolving customer
needs and adaptation of products and services to effectively
meet those needs (Verhoef & Leeflang, 2009). Conversely, if the
firm adopts a strategic focus on costs and efficiency, the
influence of the marketing function should be lower.
Further, we control for function-specific variables. The first
of these is functional performance, measured by a construct
employed in prior research to assess the performance of the
marketing function (Vorhies & Morgan, 2003). As high-performing
functions should have more clout in strategic decision making
by virtue of being good performers (Jensen & Zajac, 2004), we
expect a positive association with functional influence.
Controlling for this effect allows us to explore empirically
whether performance-measurement practices affect power
structures beyond effects attributable to subunit performance.
We also control for the degree of functional self-participation
in performance-measurement system design, which may be related
to the independent and interacting variables (e.g., functional
specificity of performance measures). Using a scale ranging
from 0 (no participation at all) to 100 (complete
determination), we measure the level of participation by asking
respondents to indicate to what extent their function
participated in the system's development and design. We make no
specific prediction for the sign of this effect. We also
control for decision-influencing use of performance measures.
We surveyed how intensively functions employ performance
measures in evaluating managerial performance, determining
compensation, and applying sanctions concerning budget
responsibility and decision rights (Abernethy, Bouwens, & van
Lent, 2010). As decision-influencing use should be more
relevant for solving ''local'' incentive and control problems
at the functional level, we refrained from making a specific
prediction for the sign of the effect.
Finally, we control for respondent-specific variables. To avoid
bias resulting from different hierarchical positions, we
control for the influence of formal authority, defined in terms
of positions on the organization chart (Abernethy & Vagnoni,
2004). As we describe above, our sample collection procedure
implicitly controls for this type of bias, not only by
surveying a sample of firms whose top management team includes
a VP for the examined function but also by discarding
questionnaires that were answered by a person other than the
functional VP. Additionally, we control for the VP Marketing's
professional experience. We expect this variable to have a
positive effect, because executives develop more effective
interpersonal influence tactics over time and can draw on their
observations and experiences from prior decision situations to
attain greater sway in strategic decision making (Mowday,
1979).
Analytical approach and model estimation results
To estimate our empirical model, we employed multivariate
regression analysis with an ordinary least squares (OLS)
estimator and heteroscedasticity-robust standard errors (White,
1980). The variance inflation factors of all constructs
indicate no substantial degree of multicollinearity
(Wooldridge, 2002). For interpretation purposes, we
mean-centered all independent variables, and all interaction
terms are the product of their underlying constructs. The
estimation of our model followed a three-step approach.
First, we analyzed a controls-only model to assess whether the
control variables relate to strategic decision influence as
predicted (model 1 in Table 4). By and large, the effects of
the covariates reflect prior expectations. While no
statistically significant coefficient sign is opposite to our
predictions, nonsignificant effects might be attributable to
the fact that, so far, no complex model has tested these
constructs simultaneously.⁸
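The estimation approach (mean-centered predictors, product interaction terms, OLS with White robust standard errors) can be sketched in a few lines; the simulated data and coefficient values below are illustrative only, not the survey data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 192

# Simulated stand-ins for two survey constructs (illustrative only)
use = rng.normal(size=n)   # e.g., decision-facilitating use
spec = rng.normal(size=n)  # e.g., functional specificity
y = 0.18 * use + 0.13 * use * spec + rng.normal(size=n)

# Mean-center before forming the interaction term
use_c = use - use.mean()
spec_c = spec - spec.mean()
X = np.column_stack([np.ones(n), use_c, spec_c, use_c * spec_c])

# OLS coefficients
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# White (1980) heteroscedasticity-robust standard errors:
# (X'X)^-1 X' diag(e^2) X (X'X)^-1
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * (resid ** 2)[:, None])
robust_se = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
print(beta.round(2), robust_se.round(2))
```

With mean-centered predictors, the main-effect coefficients are interpretable as effects at average levels of the moderators, which is the interpretation the model 2 results rely on.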
In a second step, we included the main effects of
performance-measure use (model 2 in Table 4), with the values
for the performance-measure properties equal to zero (i.e.,
without interactions). The estimation of the model does not
provide support for H1. Both coefficients are positive but not
significant at a 5% level (H1a: decision-facilitating use:
b = .125, p > .05; H1b: use for accountability: b = .061,
p > .05). Given that zero values of the performance-measure
properties represent the sample average (owing to
mean-centering), by implication the use of performance measures
does not affect functional strategic decision influence ''on
average'' (Hartmann & Moers, 1999; Moers, 2006).

⁷ The original taxonomy of business strategies developed by
Porter (1980) assumes that firms focus their strategy
exclusively on either differentiation or cost leadership. More
recent examinations suggest that firms may also follow joint
strategies aiming to achieve differentiation and cost
leadership simultaneously (Hill, 1988; Miller & Dess, 1993).
Following recent work in the accounting literature, our
measurement approach takes this view into account and includes
variables for a differentiation and a cost leadership focus
side by side (Lillis & van Veen-Dirks, 2008).

⁸ We conducted an additional analysis to test variations in the
extent of marketing's strategic decision influence across
different industries and found only modest differences at the
industry level. All means for marketing's influence within a
particular type of industry are less than half a standard
deviation below or above the overall sample mean, indicating
that most of the variance in marketing's influence is
attributable to firm-specific and environmental drivers, as our
research model suggests.
A third model then examined whether including
performance-measure properties provides additional explanatory
power (model 3 in Table 4). With respect to the property of
reliability, we find a significant positive interaction with
the use for accountability (b = .106, p < .05), whereas the
interaction term with decision-facilitating use is
nonsignificant (b = .012, p > .05).⁹ These results provide
partial support for H2. Regarding the property of functional
specificity, we find a negative interaction effect for the use
for accountability (b = −.118; p < .05), as predicted by H3. We
also tested the interaction with decision-facilitating use, for
which we made no explicit inferences about the sign of the
interaction term. We find a positive and significant
interaction (b = .127, p < .05). Notably, model 3 explains
significant incremental variance over both other models, as
indicated by an R² of .44 (vs. .29 and .40 in models 1 and 2,
respectively).
A closer examination of the economic interpretation of the
interaction effects is worthwhile. To this end, we determined
the partial derivatives of functional strategic decision
influence with respect to performance-measure use. This
analysis allows us to test how different levels of the
performance-measure properties affect the baseline effect of
performance-measure use on the dependent variable. We analyzed
the following equations:

∂(functional strategic decision influence) / ∂(decision-facilitating use)
  = .180 + .127 × functional specificity + .012 × reliability

∂(functional strategic decision influence) / ∂(use for accountability)
  = .080 − .118 × functional specificity + .106 × reliability
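Because the partial derivatives are linear in the mean-centered moderators, the marginal effects can be recomputed directly from the model 3 coefficients. The sketch below does so; the centered moderator value used in the example is approximate:

```python
def slope_decision_facilitating(specificity: float, reliability: float) -> float:
    """Marginal effect of decision-facilitating use (model 3 coefficients)."""
    return 0.180 + 0.127 * specificity + 0.012 * reliability

def slope_accountability(specificity: float, reliability: float) -> float:
    """Marginal effect of use for accountability (model 3 coefficients)."""
    return 0.080 - 0.118 * specificity + 0.106 * reliability

# At mean reliability (zero after mean-centering) and roughly the highest
# sample value of centered functional specificity (about +2.2):
print(round(slope_decision_facilitating(2.2, 0.0), 3))  # 0.459
```

This reproduces the .459 slope reported for decision-facilitating use at the highest sample value of specificity; evaluating the second function at low and high reliability values yields the sign reversal discussed for the accountability interaction.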
Table 4
Regression analysis results.

Variable                                                               Prediction  Model 1          Model 2         Model 3
                                                                                   (controls-only)  (main effects)  (with interactions)
Decision-facilitating use of performance measures                      +                            .125 (.076)     .180 (.084)*
Performance-measure use for accountability                             +                            .061 (.064)     .080 (.059)
Decision-facilitating use × performance-measure reliability            +                                            .012 (.060)
Use for accountability × performance-measure reliability               +                                            .106 (.057)*
Decision-facilitating use × performance-measure functional specificity Np                                           .127 (.063)*
Use for accountability × performance-measure functional specificity    −                                            −.118 (.055)*
Controls
Decision-influencing use of performance measures                       Np                           .081 (.062)     .057 (.056)
Performance-measure reliability                                        +                            .138 (.077)*    .165 (.070)**
Performance-measure functional specificity                             Np                           .057 (.066)     .029 (.062)
Firm size                                                              Np          .066 (.093)      .052 (.084)     .096 (.088)
Functional background of the CEO (dummy)                               +           .135 (.171)      .082 (.168)     .146 (.172)
Market environment uncertainty                                         +           .097 (.072)      .141 (.067)*    .123 (.066)*
Strategic focus differentiation                                        +           .278 (.078)**    .182 (.075)**   .185 (.074)**
Strategic focus cost leadership                                        −           .078 (.055)      −.021 (.060)    −.077 (.061)
Functional performance                                                 +           .178 (.088)*     .129 (.076)*    .110 (.074)
VP Marketing professional experience                                   +           .008 (.008)      .013 (.008)     .015 (.009)*
Functional self-participation in PMS design                            Np          .003 (.003)      .004 (.003)     .004 (.003)
Industry affiliation (dummy)                                           Np          Included         Included        Included
Constant                                                                           2.063**          2.857**         2.642**
Number of observations (n)                                                         192              192             192
R-square                                                                           .29              .40             .44
Partial R-square added (incremental F-test)                                                         .11**           .04**
Adjusted R-square                                                                  .20              .31             .34

Note: PMS = performance-measurement system; VP = Vice
President. The table reports nonstandardized coefficients.
Robust standard errors are shown in parentheses. Significance
levels are one-tailed for variables with a directional
prediction and two-tailed otherwise. +/−/Np denote
positive/negative/nonpredicted relations.
* Significance at the 5% level.
** Significance at the 1% level.
⁹ As a valid theoretical rationale exists for expecting a
significant positive interaction effect, an important question
is whether our sample size is powerful enough to find an
existing effect. Therefore, we analyzed whether our sample has
adequate statistical power to reject the null hypothesis of no
effect. We investigated this possibility, dubbed the type II
error (Verbeek, 2008), by computing the power of H2a using a
critical significance level of α = 5%. We found that we can
detect a true effect size of .050 with about 90% power and a
true effect size of .037 with about 80% power, a threshold
suggested by Cohen (1992) and previously applied in the
accounting literature (e.g., Abernethy et al., 2010). The
significant interactions (Table 4) have effect sizes greater
than .050. Therefore, we conclude that if an effect exists, our
setting is powerful enough to find it. We still have to reject
H2a.
These partial derivatives describe the impact of the two uses
of performance measures on functional strategic decision
influence as a function of the performance-measure properties.
Table 5 provides the marginal effects of the relationships
depending on selected values of the interacting variables. The
slope values allow some insightful interpretations. Holding
reliability constant at its mean of zero, the data show that
for high functional specificity of performance measures, a
positive association exists between decision-facilitating use
and functional strategic decision influence. Further, this
effect is stronger the higher the level of specificity (e.g.,
.459, p < .01 at the highest sample value). The slopes for
lower levels of specificity are negative but not significant
(e.g., −.175, p > .05 at the lowest sample value).
Regarding the use of performance measures for accountability,
we find a similar pattern for the interaction with reliability.
Negative (nonsignificant) slopes measured for low values (e.g.,
−.253, p > .05 at the lowest sample value) change to positive,
significant slopes for higher values of reliability (e.g.,
.273, p < .05 at the highest sample value). However, the
interaction effect of functional specificity operates in the
opposite direction. Slopes are positive for low values (e.g.,
.408, p < .01 at the lowest sample value) and negative (but
nonsignificant) for high values of functional specificity
(e.g., −.178, p > .05 at the highest sample value).
Additional analyses
Possibly our results are affected by organizational structures
within the firm. In particular, the more homogeneous the
various subunits within an organization are, the fewer
subunit-specific measures are likely to exist in the whole
firm. In such organizations, measures peculiar to a particular
function may be better understood, because fewer other measures
exist to which outsiders (in our case, the top management team)
must attend. For our research model, we expect a less negative
effect of performance-measure specificity if performance
measures are used for accountability purposes in more
homogeneous organizations. We empirically tested this
proposition by employing service (vs. manufacturing) firms as a
proxy variable for the homogeneity of business functions.¹⁰ We
added a triple interaction term to our model 3 shown in Table
4, including use for accountability, performance-measure
specificity, and an indicator variable classifying service
firms. In line with our expectation, we find a positive triple
interaction (p < .10), which indicates that the negative effect
of accountability use at a high level of functional specificity
of performance measures is decreasing in the homogeneity of
business functions within the firm.
Moreover, in empirical research on performance-measurement
system choices, an important factor is the possibility of
different causal flows of effects (Chenhall & Moers, 2007). For
our study, an alternative explanation could substantiate an
inversely specified model in which functional strategic
decision influence affects the use of performance measures in
the function, and not vice versa. Agency theory predicts that
when a firm allocates more decision rights to a particular
function, the firm is likely to subject the function to tighter
control to ensure that its use of decision rights produces
organizationally desirable outcomes (Abernethy et al., 2004).
Thus, for an inverse effect (i.e., functional strategic
decision influence affects the use of performance measures), we
should observe a negative correlation between functional
strategic decision influence and the function's
self-participation in the design of its performance-measurement
system. However, we do not observe this condition empirically.
As Table 2 shows, this correlation is close to zero and
nonsignificant. In untabulated analyses, we regressed
functional self-participation in designing the
performance-measurement system on functional strategic decision
influence (including controls) and find a nonsignificant
association as well. These results provide no evidence for the
validity of an inversely specified model.
Additionally, we employed two specification tests to assess
whether our model specification fits the data. In the first
test, we estimated a model in which all controls and property
variables feed back to all three uses of performance measures,
thus treating the different uses as endogenous variables. We
used four interrelated regressions and the seemingly unrelated
regressions (SUR) estimator to account for systematic
correlations between the error terms (Wooldridge, 2002). The
results of our main predictions do not change substantially,
which is consistent with the specification of functional
strategic decision influence as the dependent variable in the
analysis. As a second test, we estimated a recursive model with
the three types of performance-measure use as dependent
variables and functional strategic decision influence as the
independent variable, including all controls. In conformity
with our interpretation of results, we find no significant
effects when strategic decision influence serves as the
independent variable. Thus, our tests do not support reverse
causation.
Discussion and conclusions
Although prior investigations make a strong conceptual case for
an association between a function's performance-measurement
practice and its influence on strategic decision making (Markus
& Pfeffer, 1983; Saunders, 1981; Wickramasinghe, 2006),
empirical research investigating this phenomenon is lacking.
Drawing on institutional theory, this study analyzes how the
use of performance measures for decision facilitation and
accountability in the marketing function affects the function's
influence over strategic decision making. Furthermore, this
investigation is the first to consider interaction effects of
performance-measure properties, extending prior research in
this area of inquiry that has so far neglected the role of
information properties. The model was tested using a large
sample of top executives across a range of industries, thus
meaningfully extending extant empirics.

¹⁰ We argue that, in service firms, the operations or research
and development function is more closely related to marketing,
whereas in manufacturing firms, operations or research and
development is substantially different from marketing. We
classified financial services, retailing, logistics and
transportation, and IT and telecommunication as mainly
service-driven firms and automotive, chemicals, pharmacy,
mechanical engineering, consumer goods, utilities, metal
production, electronics, and building and construction as
mainly manufacturing-driven firms. As we could not classify the
''other'' firms appropriately, we performed the analysis with
n = 187 firms.
Analysis of the main effects shows that neither decision-facilitating use nor use for accountability has a significant association with functional strategic decision influence when the effects of the properties are not considered. However, including interaction effects shows that the use of performance measures can have either a positive or a negative effect on functional strategic decision influence. We find evidence that the effect of performance-measure use for accountability is positive and significant at high levels of performance-measure reliability, whereas this use has no effect at very low levels of reliability. These findings are consistent with the theoretical rationale of our model, which states that top executives face institutionalized expectations to account for their function's performance and value contribution. In view of these expectations, employing reliable performance measures for accountability may enhance the function's legitimacy and perceived image, giving functional VPs more weight in strategic decision making. However, the use of less reliable measures for accountability does not seem to confer legitimacy in the same way.
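The interaction tests behind these findings follow standard moderated-regression practice: predictor and moderator are mean-centered before the product term is formed, so the main-effect coefficient can be read as the effect at the moderator's sample mean (the "centered at zero" column in Table 5). A minimal sketch of that centering step; the variable names and response values are hypothetical, not the study's data:

```python
def mean_center(xs):
    """Subtract the sample mean so the variable is centered at zero."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

# Hypothetical 7-point survey scores: use for accountability (X)
# and performance-measure reliability (Z).
use_acct = [5, 6, 4, 7, 3]
reliability = [6, 5, 5, 7, 4]

x_c = mean_center(use_acct)
z_c = mean_center(reliability)
interaction = [x * z for x, z in zip(x_c, z_c)]  # product term X*Z
```

With centered variables, the coefficient on the product term captures how the effect of use for accountability shifts as reliability moves away from its sample mean.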
With respect to the functional specificity of performance measures, the data support the hypothesized interaction effect with the use for accountability. Using performance measures for accountability has a positive effect on functional influence when the measures are less specific to the particular function and thus better reflect congruity with organizational goals. These findings echo prior accounting research, which argues that standardized measures offer more meaningful opportunities for relative performance evaluation. Managers outside a particular function may be unable to fully exploit the information found in a diverse set of measures unique to that particular function (Arya et al., 2005). Similarly, our findings are consistent with previous experimental evidence suggesting that top managers evaluating multiple subunits place more weight on measures common to many subunits than on those specific to particular subunits (Lipe & Salterio, 2000).
Regarding decision-facilitating use, the underlying theory does not clearly point toward the signs of interaction effects with functional specificity. Tests of these interactions show a positive interaction term, indicating that the use of more specific metrics in decision making has a positive effect. This finding lends support to the perspective that customized measures designed specifically for a particular subunit can be better targeted to the subunit's idiosyncratic needs and are therefore more helpful in supporting the function's decision processes (Arya et al., 2005). An alternative line of reasoning might argue that outsiders may see the use of specific performance measures in decision making as an indication of a parochial, self-centered perspective of the function's management practices, resulting in decisions that are less congruent with organizational goals. However, we find no empirical support for this argument.
A further interesting insight is that the property of reliability does not exert a significant interaction effect for decision-facilitating use. Thus, employing precise measures in decision making apparently is not required for a positive effect. Although surprising at first glance, this finding coincides with anecdotal evidence that the use of accounting information for decision making may enhance the legitimacy of individual or group activities, regardless of the actual information value (Markus & Pfeffer, 1983). In other words, for performance-measure use in decision making to confer legitimacy, how decisions and actions were actually affected by the systems may be less relevant. Proponents of institutional theory have also noted that organizational legitimacy may occasionally derive from a superficial adoption of techniques or policies and that, for some expectations of the institutional environment, a mere "ceremonial conformity" may placate potential sources of support (Meyer & Rowan, 1977, p. 340; Scott, 2001). This theoretical view might offer a possible interpretation for our results. While reliability is plausibly an important quality for measures used for accountability, which should be beyond debate (Gjesdal, 1981; Ijiri, 1975), top managers may not attach the same importance to reliability with regard to the use for decision making. In a similar vein, prior studies in the accounting literature conducted in different functional settings argue that reliability of performance measures is less critical for decision-facilitating use because the role of behavioral risk in this context is less relevant (e.g., van Veen-Dirks, 2010).
Overall, our findings provide empirical evidence that functional performance-measurement practices may affect the ability of functions' top managers to promote strategic initiatives at the top management level. Moreover, our study provides evidence that the accountability demand for accounting information (Evans et al., 1994) plays a role in organizational power structures. However, the properties of performance measures need particular attention, as they seem to determine whether the effect of performance-measure use is positive or negative.

Table 5
Partial derivatives (slopes) of significant interactions, evaluated at five values of the interaction variable.

Decision-facilitating use × PM reliability:
  Not tested because of nonsignificant interaction term.
Decision-facilitating use × PM functional specificity:
  Lowest sample value: −.175 (.162); mean − 1 SD: .042 (.086); mean (centered at zero): .180 (.084)*; mean + 1 SD: .319 (.127)**; highest sample value: .459 (.187)**
Use for accountability × PM reliability:
  Lowest sample value: −.253 (.184); mean − 1 SD: −.040 (.084); mean (centered at zero): .080 (.059); mean + 1 SD: .199 (.090)*; highest sample value: .273 (.122)*
Use for accountability × PM functional specificity:
  Lowest sample value: .408 (.163)**; mean − 1 SD: .208 (.083)**; mean (centered at zero): .008 (.059); mean + 1 SD: −.049 (.085); highest sample value: −.178 (.136)

Note: PM = performance measure; SD = standard deviation. Robust standard errors are shown in parentheses. Significance levels are based on two-tailed tests.
* Significance at the 5% level.
** Significance at the 1% level.
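The slopes in Table 5 are partial derivatives of a moderated regression Y = b0 + b1·X + b2·Z + b3·X·Z: at moderator value z, the effect of X is b1 + b3·z, with standard error sqrt(Var(b1) + z²·Var(b3) + 2z·Cov(b1, b3)). A minimal sketch of that computation, using hypothetical coefficient and covariance values rather than the paper's estimates:

```python
import math

def simple_slope(b1, b3, z, var_b1, var_b3, cov_b1_b3):
    """Effect of X on Y at moderator value z for Y = b0 + b1*X + b2*Z + b3*X*Z.

    Returns the slope b1 + b3*z and its standard error, derived from the
    variance of the linear combination of the two coefficient estimates.
    """
    slope = b1 + b3 * z
    se = math.sqrt(var_b1 + z**2 * var_b3 + 2 * z * cov_b1_b3)
    return slope, se

# Hypothetical estimates: with mean-centered variables, the slope at the
# sample mean (z = 0) is simply b1; at z = +1 SD it shifts by b3 * SD.
b1, b3 = 0.18, 0.14                      # main effect and interaction term
var_b1, var_b3, cov = 0.007, 0.004, 0.001
at_mean = simple_slope(b1, b3, 0.0, var_b1, var_b3, cov)
at_high = simple_slope(b1, b3, 1.0, var_b1, var_b3, cov)
```

Evaluating this expression at the lowest sample value, mean ± 1 SD, and the highest sample value reproduces the layout of Table 5 for any estimated interaction.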
Limitations and directions for future research
While our study addresses important research issues, some interesting directions for future research arise from our findings. Perhaps the most challenging task that remains is to extend the present study to other functional contexts. Our study analyzes the strategic decision influence of the marketing function in a cross-industry sample of German business firms. Surveying the same function in each organization facilitated the collection of a sizable sample, and it also mitigates methodological problems related to function-specific unobserved heterogeneity that would occur in a cross-functional sample. Nonetheless, future research should assess the stability of our findings in other functional contexts. In particular, the top management team of all firms in our sample included a VP for the marketing function, which indicates that the function is regarded as influential and important. Further research could explore whether less important functions whose measures possess the same characteristics would benefit in the same way.
Scholars should also more closely examine some of the effects studied in our model. Significant potential seems to exist for further research regarding the use of accounting information for accountability in organizational contexts. An interesting investigation would be to explore what makes the use of information to account for functions' achievements more or less effective. For instance, previous experimental research has analyzed how the organization and presentation of performance measures in a balanced scorecard affect evaluators' perceptions of business unit performance (Cardinaels & van Veen-Dirks, 2010; Lipe & Salterio, 2002). Further investigations could explore in more depth how functions should use performance information within the organization to achieve the strongest impact on their strategic influence. Furthermore, the fact that reliability does not affect the efficacy of decision-facilitating use warrants further examination. Although prior conceptual work offers a plausible explanation for the observed result, our findings contain a residual degree of ambiguity. As our study offers, to the best of our knowledge, the first empirical test of this effect, future research should assess whether our findings can be replicated.
Some other limitations of our study imply further directions for future research. As surveys targeting top management representatives face restrictions regarding the length of the research instrument, we were unable to include all possible antecedents of functional strategic decision influence. While our study takes into account important covariates such as functional performance and respondents' professional experience, future research should explore whether including additional variables explains incremental variance in the dependent variable and affects the stability of our findings. In this regard, a particularly valuable contribution would be the investigation of the effects of personality characteristics such as executives' leadership effectiveness, interpersonal skills, or decision effectiveness.
Additionally, investigators should address three main methodological concerns. First, our measurement instruments require further testing. This study relies on both newly developed constructs and established scales drawn from previous research. Although sound theoretical conceptualizations in the accounting literature facilitated the development of the new scales, further research should provide support for their psychometric properties. Second, the survey-based approach has a potential for measurement error. Our choice of key informants and our specific study design may alleviate this concern to some extent. We restricted the analysis to top management representatives responsible for the marketing function and collected additional data from a second respondent group to assess the validity of key informant perceptions for the most critical constructs. Notwithstanding these efforts, we cannot completely rule out measurement errors. Lastly, cross-sectional data cannot establish the causality of purported relationships. While evidence from previous literature and additional econometric tests lend support to our interpretation of results, any implied causality reflects the theoretical position taken (van Lent, 2007), and substantiating the purported flow of effects, for instance with a longitudinal analysis design, remains a challenge left to future research.
Acknowledgments
The authors are listed alphabetically and contributed equally to this paper. We appreciate helpful comments from Markus Glaser, Martin Holzhacker, Christian Kunz, Matthias Mahlendorf (our discussant), David Marginson, Alexander Schmidt, Dirk Totzek, and participants of the 3rd Annual Conference for Management Accounting Research (ACMAR) and of the 34th Annual Congress of the European Accounting Association (EAA). We further thank Michael Shields (Editor) and two anonymous reviewers for constructive suggestions for improvement.
Appendix A
Measurement instruments.
I. Functional strategic decision influence
Seven-point scale: "very little influence" to "very strong influence" (1–7).
Please provide your assessment of the influence of the marketing function in the following decision areas:
1. Strategic direction of the company.
2. Expansion in new geographic markets.
3. Customer satisfaction measurement and
management.
4. New product development.
5. Major capital expenditures.
6. Pricing decisions.
7. Choice of strategic business partners.
8. Design of customer service and support.
II. Decision-facilitating use of performance measures
Seven-point scale: ‘‘totally disagree’’ to ‘‘totally agree’’
(1–7).
Please indicate whether performance measures are
used in your function for the following:
1. Decision making.
2. Budgeting.
3. Variance analyses of planned performance.
4. Tracking progress to pre-defined goals.
III. Decision-influencing use of performance measures
Seven-point scale: ‘‘totally disagree’’ to ‘‘totally agree’’
(1–7).
Please indicate whether performance measures are
used in your function for the following:
1. Evaluating employee performance within the
function.
2. Rewarding employee performance within the
function.
3. Determining compensation practices within the
function.
4. Applying sanctions within the function (e.g.,
concerning decision rights, budgets).
IV. Use of performance measures for accountability
Seven-point scale: ‘‘totally disagree’’ to ‘‘totally agree’’
(1–7).
Please indicate whether performance measures are
used in your function for the following:
1. To account for the function’s budget spending.
2. To account for the function’s performance.
3. To illustrate the function's contribution to firm
performance relative to other functions in
quantitative terms.
V. Performance-measure reliability
Seven-point scale: ‘‘totally disagree’’ to ‘‘totally agree’’
(1–7).
Please indicate whether the performance measures
used in your function show the following
characteristics:
1. The performance measures used in our function
are reliable.
2. The performance measures used in our function
represent what they purport to represent.
VI. Performance-measure functional specificity
Seven-point scale: ‘‘totally disagree’’ to ‘‘totally agree’’
(1–7).
Please indicate whether the performance measures
used in your function show the following
characteristics:
1. The performance measures are relevant for the
marketing function.
2. The performance measures put special weight on
customer-, competitor-, and market-related
measures.
3. Besides results-oriented measures (e.g., sales,
customer satisfaction), the performance measures
also include input- (e.g., meeting the marketing
budget) and process-related measures (e.g., length
of marketing processes).
4. The performance measures provide a balanced
picture of the marketing function.
VII. Firm size
What is the approximate number of full-time
employees in your company?
VIII. Functional background of the CEO
Dummy variable indicating the primary functional
background of the CEO.
1. Marketing.
2. Sales.
3. Research and development.
4. Purchasing/production/logistics.
5. Finance/controlling.
IX. Market environment uncertainty
Seven-point scale: ‘‘very rarely’’ to ‘‘very frequently’’
(1–7).
Please indicate how frequently the following aspects
change in the market:
1. Products and services offered by competition.
2. Marketing and sales strategy of competitors.
3. Customers’ preferences for product features.
4. The price-to-value ratio customers expect.
X. Strategic focus (differentiation; cost leadership)
Seven-point scale: ‘‘totally disagree’’ to ‘‘totally agree’’
(1–7).
Items 1–4 refer to a differentiation strategy, items 5–8
refer to a cost leadership strategy.
To what degree does the competitive strategy of your
company emphasize the following goals?
1. Building up premium product or brand image.
2. Offering highly differentiated/innovative
products.
3. Obtaining high prices.
4. Creating superior customer value by knowledge
of customers’ preferences and customized products.
5. Standardization of products and services with few
variants and ancillary services.
6. Standardization of processes in production and
sales.
7. Using economies of scale in purchasing volumes.
8. Cost efficiency in overhead functions and general
administration.
XI. Functional performance
Seven-point scale from "much worse than competition" [−3] to "much better than competition" [+3].
How would you rate your function’s performance in
relation to its competition during the past 3 years
with respect to the following aspects?
1. Achieving customer satisfaction.
2. Creating customer utility.
3. Achieving customer loyalty.
4. Acquisition of new customers.
5. Achievement of growth targets.
6. Achievement of planned market share.
XII. Industry affiliation
Dummy variable indicating the industry affiliation of the company.
XIII. Functional self-participation in performance-
measurement system design
To what extent does your function participate in
designing its performance-measurement system
compared to other actors outside the function?
Scale ranging from 0 to 100. 0 = no participation at all;
100 = complete determination.
XIV. VP Marketing professional experience
How many years of professional experience do you
have? Approx. _________ years.
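Multi-item scales like those above are typically averaged into construct scores after their internal consistency is checked (e.g., with Cronbach's alpha; cf. Nunnally, 1978). A minimal sketch of that check; the responses below are made up for illustration, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-response columns of equal length.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score),
    using population variance over the respondents.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent sum score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 7-point responses from five respondents to the two
# performance-measure reliability items (construct V above).
item1 = [6, 5, 7, 4, 6]
item2 = [6, 4, 7, 5, 5]
alpha = cronbach_alpha([item1, item2])
```

Values above roughly .7 are conventionally taken as acceptable internal consistency for established scales.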
References
Abernethy, M. A., Bouwens, J., & van Lent, L. (2004). Determinants of
control system design in divisionalized ?rms. The Accounting Review,
79, 545–570.
Abernethy, M. A., Bouwens, J., & van Lent, L. (2010). Leadership and
control system design. Management Accounting Research, 21, 2–16.
Abernethy, M. A., & Chua, W. F. (1996). A field study of control system "redesign": The impact of institutional processes on strategic choice.
Contemporary Accounting Research, 13, 569–606.
Abernethy, M. A., & Vagnoni, E. (2004). Power, organization design, and
managerial behavior. Accounting, Organizations and Society, 29,
207–225.
Ansari, S., & Euske, K. J. (1987). Rational, rationalizing, and reifying uses of
accounting data in organizations. Accounting, Organizations and
Society, 12, 375–384.
Arya, A., Glover, J., Mittendorf, B., & Ye, L. (2005). On the use of
customized versus standardized performance measures. Journal of
Management Accounting Research, 17, 7–21.
Banker, R. D., Chang, H., & Pizzini, M. J. (2004). The balanced scorecard: Judgmental effects of performance measures linked to strategy. The
Accounting Review, 79, 1–23.
Banker, R. D., & Datar, S. M. (1989). Sensitivity, precision, and linear
aggregation of signals for performance evaluation. Journal of
Accounting Research, 27, 21–39.
Bariff, M. L., & Galbraith, J. R. (1978). Intraorganizational power
considerations for designing information systems. Accounting,
Organizations and Society, 3, 15–27.
Birnberg, J. G., & Zhang, Y. (2010). When betrayal aversion meets loss
aversion: The effect of economic downturn on internal control system
choices. In Proceedings of the American Accounting Association. San
Francisco (August 2010).
Birnberg, J. G., Hoffman, V. B., & Yuen, S. (2008). The accountability
demand for information in China and the US—A research note.
Accounting, Organizations and Society, 33, 20–32.
Brignall, S., & Modell, S. (2000). An institutional perspective on
performance measurement and management in the new public
sector. Management Accounting Research, 11, 281–306.
Cardinaels, E., & van Veen-Dirks, P. M. G. (2010). Financial versus non-financial information: The impact of information organization and
presentation in a balanced scorecard. Accounting, Organizations and
Society, 35, 565–578.
Carpenter, M. A. (2011). The handbook of research on top management teams. Cheltenham, UK: Edward Elgar Publishing.
Chapman, C. S. (1997). Reflections on a contingent view of accounting.
Accounting, Organizations and Society, 22, 189–205.
Chenhall, R. H. (2003). Management control systems design within its
organizational context: Findings from contingency-based research
and directions for the future. Accounting, Organizations and Society, 28,
127–168.
Chenhall, R. H., & Langfield-Smith, K. (1998). The relationship between strategic priorities, management techniques, and management
accounting: An empirical investigation using a systems approach.
Accounting, Organizations and Society, 23, 234–264.
Chenhall, R. H., & Langfield-Smith, K. (2007). Multiple perspectives of
performance measures. European Management Journal, 25, 266–
282.
Chenhall, R. H., & Moers, F. (2007). The issue of endogeneity within
theory-based, quantitative management accounting research.
European Accounting Review, 16, 173–195.
Christensen, J. A., & Demski, J. S. (2003). Accounting theory: An information
content perspective. Boston, MA: McGraw-Hill.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
Covaleski, M. A., & Dirsmith, M. W. (1986). The budgetary process of
power and politics. Accounting, Organizations and Society, 11, 193–
214.
Datar, S., Kulp, S. C., & Lambert, R. A. (2001). Balancing performance
measures. Journal of Accounting Research, 39, 75–92.
Davis, G. F., & Marquis, C. (2005). Prospects for organization theory in the early twenty-first century: Institutional fields and mechanisms.
Organization Science, 16, 322–343.
Demski, J. S. (2008). Managerial uses of accounting information (2nd ed.).
New York: Springer.
Demski, J. S., & Feltham, G. A. (1976). Cost determination: A conceptual
approach. Ames: Iowa State University Press.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and
mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ:
John Wiley and Sons.
DiMaggio, P., & Powell, W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields.
American Sociological Review, 48, 147–160.
Duncan, R. (1972). Characteristics of organizational environments and
perceived environmental uncertainty. Administrative Science
Quarterly, 17, 313–327.
Emsley, D. (2000). Variance analysis and performance: Two empirical
studies. Accounting, Organizations and Society, 25, 1–12.
Evans, J. H., Heiman-Hoffman, V. B., & Rau, S. E. (1994). The accountability
demand for information. Journal of Management Accounting Research,
6, 24–42.
Feltham, G. A., & Xie, J. (1994). Performance measure congruity and
diversity in multi-task principal/agent relations. The Accounting
Review, 69, 429–453.
Finn, R. H. (1970). A note on estimating the reliability of categorical data. Educational and Psychological Measurement, 30, 71–76.
Fligstein, N. (1987). The intraorganizational power struggle: Rise of finance personnel to top leadership in large corporations. American
Sociological Review, 52, 44–58.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models
with unobservable variables and measurement error. Journal of
Marketing Research, 18, 39–50.
Gerdin, J. (2005). Management accounting system design in
manufacturing departments: An empirical investigation using a
multiple contingencies approach. Accounting, Organizations and
Society, 30, 99–126.
Gjesdal, F. (1981). Accounting for stewardship. Journal of Accounting
Research, 19, 208–231.
Glaser, M., Lopez-de-Silanes, F., & Sautner, Z. (2012). Opening the black
box: Internal capital markets and managerial power. Journal of
Finance, Forthcoming.
Govindarajan, V., & Gupta, A. K. (1985). Linking control systems to
business unit strategy: Impact on performance. Accounting,
Organizations and Society, 10, 51–66.
Hambrick, D. C., & Mason, P. A. (1984). Upper echelons: The organization as a reflection of its top managers. Academy of Management Review, 9,
193–206.
Hartmann, F. G. H., & Moers, F. (1999). Testing contingency hypotheses in
budgetary research: An evaluation of the use of moderated regression
analysis. Accounting, Organizations and Society, 24, 291–315.
Henri, J.-F. (2006). Organizational culture and performance measurement
systems. Accounting, Organizations and Society, 31, 77–103.
Hill, C. W. L. (1988). Differentiation versus low cost or differentiation and
low cost: A contingency framework. Academy of Management Review,
13, 401–412.
Homburg, C., Workman, J. P., & Krohmer, H. (1999). Marketing's influence within the firm. Journal of Marketing, 63, 1–17.
Humphreys, K. A., & Trotman, K. T. (2011). The balanced scorecard: The
effect of strategy information on performance evaluation judgments.
Journal of Management Accounting Research, 23, 81–98.
Ijiri, Y. (1975). Theory of accounting measurement. Studies in accounting
research (Vol. 10). Sarasota, FL: American Accounting Association.
Ittner, C. D., & Larcker, D. F. (2001). Assessing empirical research in
managerial accounting: A value-based management perspective.
Journal of Accounting and Economics, 32, 349–410.
Jaworski, B. J., & Young, S. M. (1992). Dysfunctional behavior and
management control: An empirical study of marketing managers.
Accounting, Organizations and Society, 17, 17–35.
Jensen, M. C., & Zajac, E. J. (2004). Corporate elites and corporate strategy: How demographic preferences and structural position shape the scope of the firm. Strategic Management Journal, 25, 507–524.
Kurunmäki, L. (1999). Professional vs. financial capital in the field of health care—Struggles for the redistribution of power and control.
Accounting, Organizations and Society, 24, 95–124.
LeBreton, J. M., Burgess, J. R. D., Kaiser, R. B., Atchley, E. K., & James, L. R.
(2003). The restriction of variance hypothesis and interrater
reliability and agreement: Are ratings from multiple sources really
dissimilar? Organizational Research Methods, 6, 80–128.
Lev, B. (2001). Intangibles: Management, measurement, and reporting.
Harrisonburg, VA: R.R. Donnelley.
Lillis, A. M., & van Veen-Dirks, P. M. G. (2008). Performance measurement system design in joint strategy settings. Journal of Management Accounting Research, 20, 25–57.
Lipe, M. G., & Salterio, S. E. (2000). The balanced scorecard: Judgmental
effects of common and unique performance measures. The Accounting
Review, 75, 283–298.
Lipe, M. G., & Salterio, S. E. (2002). A note on the judgmental effects of the
balanced scorecard’s information organization. Accounting,
Organizations and Society, 27, 531–540.
Luft, J. L., & Shields, M. D. (2003). Mapping management accounting: Graphics and guidelines for theory-consistent empirical research. Accounting, Organizations and Society, 28, 169–249.
Malina, M. A., & Selto, F. A. (2001). Communicating and controlling
strategy: An empirical study of the effectiveness of the balanced
scorecard. Journal of Management Accounting Research, 13, 47–90.
Markus, M. L., & Pfeffer, J. (1983). Power and the design and
implementation of accounting and control systems. Accounting,
Organizations and Society, 8, 205–218.
Mayston, D. J. (1985). Non-profit performance indicators in the public sector. Financial Accountability & Management, 1, 51–74.
Merchant, K. A., & Van der Stede, W. A. (2012). Management control
systems: Performance measurement, evaluation and incentives (3rd ed.).
London, UK: Prentice Hall.
Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal
structure as myth and ceremony. American Journal of Sociology, 83,
340–363.
Miller, A., & Dess, G. G. (1993). Assessing Porter’s (1980) model in terms of
its generalizability, accuracy, and simplicity. Journal of Management
Studies, 30, 553–585.
Miller, P. (1994). Accounting as social and institutional practice: An
introduction. In A. G. Hopwood & P. Miller (Eds.), Accounting as social
and institutional practice (pp. 1–39). Cambridge, UK: Cambridge
University Press.
Moers, F. (2006). Performance measure properties and delegation. The
Accounting Review, 81, 897–924.
Moll, J., Burns, J., & Major, M. (2006). Institutional theory. In Z. Hoque
(Ed.), Methodological issues in accounting research: Theories and
methods (pp. 183–206). London, UK: Spiramus Press.
Mowday, R. T. (1979). Leader characteristics, self-confidence, and methods of upward influence in organizational decision situations.
Academy of Management Journal, 22, 709–725.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-
Hill.
Ortega, J. (2003). Power in the firm and managerial career concerns.
Journal of Economics and Management Strategy, 12, 1–29.
Pfeffer, J. (1981). Power in organizations. Marshfield, MA: Pitman.
Porter, M. E. (1980). Competitive strategy. New York: Free Press.
Richardson, A. J. (1987). Accounting as a legitimating institution.
Accounting, Organizations and Society, 12, 341–355.
Rowe, C., Shields, M. D., & Birnberg, J. G. (2012). Hardening soft
accounting information: Games for planning organizational change.
Accounting, Organizations and Society, 37, 260–279.
Saunders, C. S. (1981). Management information systems,
communications, and departmental power: An integrative model.
Academy of Management Review, 6, 431–442.
Scott, W. R. (2001). Institutions and organizations (2nd ed.). Newbury Park,
CA: Sage Publications.
Scott, W. R. (2005). Institutional theory: Contributing to a theoretical
research program. In K. G. Smith & M. Hitt (Eds.), Great minds in
management: The process of theory development (pp. 450–484). New
York: Oxford University Press.
Snavely, H. (1967). Accounting information criteria. The Accounting
Review, 42, 223–232.
Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional
approaches. Academy of Management Review, 20, 571–610.
Van der Stede, W. A., Young, S. M., & Chen, C. X. (2005). Assessing the
quality of evidence in empirical management accounting research:
The case of survey studies. Accounting, Organizations and Society, 30,
655–684.
Van Lent, L. (2007). Endogeneity in management accounting research: A
comment. European Accounting Review, 16, 197–205.
Van Veen-Dirks, P. M. G. (2010). Different uses of performance measures:
The evaluation versus reward of production managers. Accounting,
Organizations and Society, 35, 141–164.
Verbeek, M. (2008). A guide to modern econometrics. Hoboken, NJ: John
Wiley and Sons.
Verhoef, P. C., & Leeflang, P. S. H. (2009). Understanding the marketing department's influence within the firm. Journal of Marketing, 73,
14–37.
Vorhies, D. W., & Morgan, N. A. (2003). A configuration theory assessment of marketing organization fit with business strategy and its relationship with marketing performance. Journal of Marketing, 67,
100–115.
Waterhouse, J., & Thiessen, P. (1978). A contingency framework for
management accounting systems research. Accounting, Organizations
and Society, 3, 65–76.
White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48,
817–838.
Wickramasinghe, D. (2006). Power and accounting: A guide to critical
research. In Z. Hoque (Ed.), Methodological issues in accounting
research: Theories and methods (pp. 339–360). London, UK: Spiramus
Press.
Widener, S. K. (2006). Associations between strategic resource
importance and performance measure use: The impact on firm
performance. Management Accounting Research, 17, 433–457.
Wolk, H. I., Francis, J. R., & Tearney, M. G. (1999). Accounting theory: A
conceptual and institutional approach (2nd ed.). Boston, MA: Kent
Publishing.
Wooldridge, J. M. (2002). Econometric analysis of cross section and panel
data. Cambridge, MA: MIT Press.
Wouters, M., & Wilderom, C. (2008). Developing performance measurement systems as enabling formalization: A longitudinal field study of a logistics department. Accounting, Organizations and Society, 33, 488–516.
Wyatt, A. (2008). What financial and non-financial information on intangibles is value relevant? A review of the evidence. Accounting
and Business Research, 38, 217–256.
Young, S. M. (1996). Survey research in management accounting: A
critical assessment. In A. Richardson (Ed.), Research methods in
accounting: Issues and debates (pp. 55–68). Vancouver, Canada: CGA
Canada Research Foundation.
Zucker, L. G. (1987). Institutional theories of organization. Annual Review
of Sociology, 13, 443–464.
