Strengthening causal inferences in positivist field studies
Christopher D. Ittner
The Wharton School, University of Pennsylvania, 1326 Steinberg Hall-Dietrich Hall, Philadelphia, PA 19104, USA
Abstract
This essay discusses how incorporating qualitative analyses and insight in positivist field studies can strengthen researchers' ability to draw causal inferences. Specifically, I review how the rich institutional knowledge available in field settings can be used to increase internal validity by improving the specification of empirical models and tests and by providing greater insight into statistical results, particularly through the investigation of the causal processes linking accounting practices and outcomes.
© 2013 Elsevier Ltd. All rights reserved.
Introduction
Positivistic research in accounting addresses cause-and-effect questions. For example, do differences in environmental or strategic contexts lead to differences in management control systems? Do certain activities drive overhead costs? Does the adoption of a balanced scorecard system improve performance? However, despite this focus on causal questions, field researchers' capacity to draw strong causal inferences is hindered by their inability to conduct or study true, randomized natural experiments. Instead, researchers must rely on non- or quasi-experimental methods. The limitations in these methods give rise to concerns regarding the extent to which causation can be inferred from field-based accounting studies.
The objective of this essay is to discuss how the incorporation of qualitative methods in positivistic field research can provide a powerful mechanism for enhancing a study's causal inferences. In particular, researchers can take advantage of the rich institutional knowledge available in the field to strengthen the validity of their analyses through improved specification of empirical models and tests, and through greater insight into statistical results (particularly a better understanding of the causal processes linking accounting practices and outcomes).
As Cook and Campbell (1979) note in their influential book on quasi-experimental field research methods, causal inferences in social science research can never be proven with certainty because the inferences depend upon many assumptions that cannot be directly verified. Any research method carries some level of uncertainty because all of the causes of observed effects, and how they relate to each other, are rarely if ever known. Empirical researchers must therefore attempt to assess the probability that a specific factor caused an outcome to occur. This requires choosing research methods that enhance a study's internal validity (i.e., the extent to which a study's causal conclusions are justified). As Cook and Campbell (1979, p. 11) argue, "we want to base causal inferences on procedures that reduce the uncertainty about causal connections even though uncertainty can never be reduced to zero."
Although considerable philosophical debate exists regarding the nature of causality, Mill's three well-known criteria provide a practical foundation for assessing causal relationships in empirical studies: (1) the cause has to precede the effect in time; (2) the cause and effect have to be related (i.e., co-vary); and (3) other explanations of the cause-and-effect relationship have to be eliminated. The incorporation of qualitative analyses can enhance the internal validity of positivistic field research by providing as much evidence as possible that these criteria hold in the chosen research site(s).
All empirical tests of causal relationships in non-experimental settings are susceptible to multiple threats to
validity, including correlated omitted variables and endogeneity, interactions, non-linearities, simultaneities, and measurement error, among others. To the extent possible, researchers should attempt to minimize these threats in their empirical tests. At a basic level, this requires researchers to control for any confounding influences that may affect the outcomes of interest in their regression or structural equations models. More advanced options include taking advantage of methodological improvements in statistical techniques that can enhance the ability to draw causal inferences in non-experimental studies (see Cook and Campbell (1979) and Antonakis, Bendahan, and Lalive (2010) for discussions of these techniques). For example, difference-in-differences tests can be used to compare time-series changes in outcomes for groups of individuals or organizations that received a treatment (for example, implemented a balanced scorecard) relative to those that did not, with the non-treatment group serving as the control for factors other than the treatment that could influence the outcome. Propensity scoring methods can statistically match individuals or organizations in treatment and non-treatment groups, thereby attempting to replicate a randomized experiment as closely as possible by obtaining treatment and control groups with similar covariate distributions. Regression discontinuity designs can take advantage of exogenously imposed discontinuities or cut-offs (e.g., age limits to become eligible for an incentive plan) to assign observations to treatment and non-treatment groups in the absence of random assignment. Assuming that individuals on either side of the cut-off are similar, any outcome effect should be due to the treatment. Instrumental variables approaches, which require the identification of instruments that affect the treatment assignment but not the error term in the outcome model, can be used to control for endogeneity and other correlated-omitted-variable problems. Because the instrumental variable is correlated with the treatment but uncorrelated with the other determinants of the outcome, the estimated effect of the instrument on the outcome should relate only to the treatment's outcome effect, and not to the effects of variables that are correlated with the treatment.
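To make the first of these techniques concrete, the following sketch estimates a difference-in-differences effect on simulated branch-level data. Everything in it (the variable names profit, treated, and post, the simulated panel, and the built-in effect of 5) is hypothetical and for illustration only; the essay itself prescribes no particular implementation.

```python
# A minimal difference-in-differences sketch on simulated data (all names
# and numbers are hypothetical). The coefficient on treated:post is the
# DiD estimate of the treatment effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # branches observed before and after the adoption date
df = pd.DataFrame({
    "branch": np.repeat(np.arange(n), 2),
    "post": np.tile([0, 1], n),                      # 0 = before, 1 = after
    "treated": np.repeat(rng.integers(0, 2, n), 2),  # scorecard adopters
})
# Simulated profit: a common time trend (3), a pre-existing group
# difference (4), and a true treatment effect of 5 for adopters post-adoption.
df["profit"] = (100 + 3 * df["post"] + 4 * df["treated"]
                + 5 * df["treated"] * df["post"]
                + rng.normal(0, 2, 2 * n))

# treated * post expands to treated + post + treated:post; the interaction
# isolates the treatment effect from the trend and the group difference.
model = smf.ols("profit ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["branch"]})
print(model.params["treated:post"])  # recovers roughly 5
```

Clustering the standard errors by branch reflects the fact that the two observations for each branch are not independent.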
While these and other statistical methods can improve researchers' ability to draw causal inferences, their application still requires the identification of the appropriate variables and controls to include in the tests, the timing and nature of key events (both those being studied and potentially confounding events), and the measurement of variables. Many of these issues are likely to be idiosyncratic to each research site, making them difficult to identify and incorporate in arm's-length, large-sample studies. In contrast, field researchers can take advantage of detailed institutional knowledge of their research context to better specify their empirical tests. More often than not, gaining this contextual knowledge requires qualitative research to precede the specification of empirical tests. Cook and Campbell (1979, p. 93) go so far as to argue that "field experimentation should always include qualitative research to describe and illuminate the context and conditions under which research is conducted. These efforts often may uncover important site-specific threats to validity and contribute to valid explanations of experimental results in general and of perplexing or unexpected outcomes in particular."
Improved specification of empirical models and tests
One of the biggest contributions that detailed institutional knowledge can make to causal inference in field studies is identifying potentially confounding factors. Non-random assignment of individuals and organizations to treatment and non-treatment groups (for example, to adopters and non-adopters of an accounting innovation) can lead to differences in outcomes that are not due to variations in accounting practices if the two groups differ on important dimensions. In many cases, accounting and control practices are implemented concurrently with other management changes, such as the hiring of a new management team or the implementation of advanced manufacturing practices or customer satisfaction initiatives. In other cases, certain types of organizations and employees are more likely to adopt or to be assigned to a treatment than others, with these differences having direct effects on outcomes that are not due to the treatment. Factors such as economic environments, labor markets, or unionization can vary over time or across organizational units. These factors can influence the potential benefits from an accounting practice, confounding any analysis of what the practice's effects would have been in the absence of these differences. Any conclusions regarding the influence of an accounting practice on an outcome must control for confounding issues such as these.
Knowledge of the research context can help uncover the key control variables or matching criteria to include in the statistical tests in order to minimize these confounding effects. Consider, for example, Griffith and Neely's (2009) analysis of a balanced scorecard-based pay scheme in a building supplies firm. One division of the firm adopted the scheme while the other did not. To improve their ability to make causal inferences, Griffith and Neely matched branches from the two divisions (which sold similar but not identical products to the same customer base) based on postal code. Thus, the matched pairs sold similar products to the same customers and faced similar local factors such as economic cycles and labor market conditions, seemingly a nearly perfect quasi-experiment. However, their qualitative field work indicated that one division sold products that were used both inside and outside buildings, while the other focused primarily on inside usage. Thus, weather played a role in financial outcomes. In addition, one division sold goods that were mostly used for refitting buildings, while the other's goods were mostly used for new construction. Based on this firm-specific knowledge, the authors incorporated local weather conditions and different types of construction activity as additional controls in their tests.
Bol and Moers (2010) used in-depth semi-structured interviews, observations, and analysis of internal documents to better understand the introduction and diffusion of a balanced scorecard compensation plan in units belonging to a cooperative bank. Their qualitative analyses revealed a number of potential influences on scorecard
adoption that needed to be incorporated into their statistical tests. In particular, their empirical models included controls for mergers and acquisitions, the appointment of new directors, and the implementation of new information systems in some of the individual banks, factors that would not have been considered in the absence of the initial qualitative analyses.
Knowledge of confounding events may also lead researchers to exclude observations that do not meet the necessary conditions for the study. In a quasi-experimental study of balanced scorecard adopters and non-adopters in a Canadian bank, Davis and Albright (2004) found that five branch banks did not meet the study's requirements due to extraneous events unrelated to the implementation of the scorecard. These observations were eliminated to improve the authors' ability to relate changes in performance to the adoption of the accounting change.
Another potential model specification benefit that detailed contextual knowledge can provide is the identification of issues such as simultaneous events and selection effects that can influence causal inferences. Did the research site adopt accounting and control practices in a sequential manner (for example, choosing an organizational design before deciding how to measure performance), or did it simultaneously choose and implement a package of accounting and control mechanisms (in which case simultaneity concerns must be addressed in the empirical tests)? What factors entered the decision to adopt a given practice in some parts of an organization but not in others? If individuals or units self-selected into treatment or non-treatment groups, statistical methods such as Heckman selection models are necessary to control for this confounding effect. By conducting qualitative research prior to conducting statistical tests, field researchers are in a better position to select appropriate statistical methods and adjustments that limit threats to statistical validity.
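As an illustration of the selection-correction logic, the sketch below implements a stylized two-step Heckman-type correction for a binary adoption decision on simulated data. The instrument z, all coefficients, and the data are hypothetical, and the code is a sketch of the idea rather than a full econometric treatment (in particular, the second-stage standard errors would need adjustment).

```python
# A stylized two-step selection correction (hypothetical data throughout).
# Units self-select into adoption, and the selection error is correlated
# with the outcome error, so naive OLS over-states the adoption effect.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1000
z = rng.normal(size=n)                   # instrument: drives adoption only
u = rng.normal(size=n)                   # selection error
adopt = (0.8 * z + u > 0).astype(int)    # self-selected adoption
y = 2.0 + 1.5 * adopt + 0.7 * u + rng.normal(size=n)  # true effect = 1.5

# Step 1: probit of adoption on the instrument, then the inverse Mills
# ratio, which estimates the expected selection error for each group.
probit = sm.Probit(adopt, sm.add_constant(z)).fit(disp=False)
xb = probit.fittedvalues                 # estimated linear index
imr = np.where(adopt == 1,
               norm.pdf(xb) / norm.cdf(xb),
               -norm.pdf(xb) / (1 - norm.cdf(xb)))

# Step 2: outcome regression including the correction term.
X = sm.add_constant(pd.DataFrame({"adopt": adopt, "imr": imr}))
print(sm.OLS(y, X).fit().params["adopt"])  # close to 1.5 once corrected
```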
Finally, deep contextual knowledge and access to a broad range of quantitative and qualitative data sources can be used to develop and measure variables that better capture the latent constructs of interest. Construct validity, or the extent to which a measure captures the theoretical construct it is intended to capture, is an important issue in assessing causality (Cook & Campbell, 1979). To draw causal inferences, researchers must demonstrate that the variables of interest are properly operationalized. Qualitative research provides two potential benefits on this front. First, field researchers can "take advantage of the opportunity to expand the domain of observables relating to particular constructs and to be on the lookout for attributes missing in the literature. An important strength of field research is the ability to identify complex empirical attributes that define constructs" (Lillis, 2006, p. 465). Using qualitative research methods such as interviews, observation, and analysis of internal documents, field researchers can gain a better understanding of the attributes of key constructs in their research context.
Second, this knowledge can be used to develop more valid quantitative indicators that subsequently can be employed in statistical analyses (Modell, 2005). For example, a number of methods are available for quantifying qualitative data, which can then serve as variables in econometric models. Survey questions can be refined to reflect the specific organizational context and newly identified attributes. Non-traditional, site-specific quantitative data sources, such as database logs of the frequency and breadth of accounting system usage, can be identified for inclusion as indicators for the key attributes. By combining detailed institutional knowledge and multiple data sources, in conjunction with performing traditional convergent and discriminant validity tests, field researchers can better demonstrate that their empirical constructs exhibit construct validity.
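As a small illustration of such validity tests, the sketch below computes Cronbach's alpha for a hypothetical three-item "scorecard use" scale and checks that the items correlate only weakly with an unrelated indicator, a simple discriminant validity check. The item names and data are invented; the alpha formula itself is standard.

```python
# Convergent and discriminant validity checks on hypothetical survey items.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=300)  # the unobserved construct
# Three items intended to measure the construct, plus one unrelated item.
use = pd.DataFrame({f"use{i}": latent + rng.normal(0, 0.5, 300)
                    for i in range(1, 4)})
unrelated = pd.Series(rng.normal(size=300), name="unrelated")

print(cronbach_alpha(use))                  # high: items hang together
print(use.corrwith(unrelated).abs().max())  # low: discriminant validity
```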
Ability to provide greater insight into statistical results
Even with the advances in statistical methods for assessing causality, researchers must still assess the direction of association, examine the possibility that other (potentially superior) models exist, and rule out plausible alternative explanations for the findings. For example, a field researcher might find statistical support for the hypothesized causal linkages in a structural equations model. However, in many cases there are equally plausible alternative models, which can contain different linkages or causal paths that run in the opposite direction. While statistical methods are available for refining the hypothesized causal model, these methods cannot take into account un-modeled linkages, omitted variables, or alternative functional forms (e.g., non-linearities, simultaneities, interactions, and direction of causality). Moreover, data analyses do not always yield the hypothesized results. In the context of structural equations modeling, model fit may improve if a hypothesized link is dropped, but the statistical tests do not indicate the reason the link was not significant. As Kelloway (1998, p. 69) argues in his guide to structural equations modeling, the researcher "is then forced to become a detective whose goal is to use whatever means are available to specify what might have caused the observed effect".
One area where rich institutional knowledge can help positivist field researchers address these issues is through the analysis of cases that are not well predicted by their chosen statistical methods. Knowledge of these cases can strengthen causal inferences by helping to check for possible omitted variables, measurement errors, non-linearities, and interaction effects (Bennett, 2002; Seawright & Gerring, 2008). Quantitative researchers often treat outliers as nuisances that reduce the model's goodness of fit, and steps frequently are taken to minimize their influence. But detailed investigation of large outliers can help identify why these observations deviate from the hypothesized statistical model. Case-specific knowledge of outliers can be a very fruitful avenue for identifying omitted variables, refining measures for specific variables, ascertaining the functional form linking the independent and dependent variables, and strengthening inferences by improving the understanding of the causal mechanisms at play.
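A minimal sketch of this kind of outlier-driven detective work appears below: fit the model, compute externally studentized residuals, and flag the worst-predicted observations as candidates for interviews or document review. The data, the model, and the 2.5 cutoff are hypothetical illustration choices.

```python
# Flag poorly-predicted cases for qualitative follow-up (hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=100)
y = 1 + 2 * x + rng.normal(0, 1, 100)
y[::25] += 6  # a few cases driven by an omitted, site-specific factor

fit = sm.OLS(y, sm.add_constant(x)).fit()
# Externally studentized residuals: large values mark observations the
# model predicts badly, which merit case-specific investigation.
resid = fit.get_influence().resid_studentized_external
print(np.flatnonzero(np.abs(resid) > 2.5))  # indices to examine in the field
```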
Similarly, qualitative analyses can help explain unexpected or insignificant results. Griffith and Neely's (2009) examination of a balanced scorecard-based pay plan initially
found little change in profits (relative to a matched control group that did not implement the plan) following the new pay scheme's introduction. However, there was substantial variation in outcomes across the branches receiving the scorecard plan treatment. After interviewing these managers about their uses of the new scorecard data, Griffith and Neely identified differences in managers' experience as a potential explanation for the limited overall profit results. Their statistical tests were subsequently modified to include interactions between experience and scorecard adoption. Consistent with the interview evidence, the revised tests indicated that branches with more experienced managers were better able to respond to the new scorecard incentives, with scorecard adoption significantly associated with increased profits in those branches. As this example illustrates, the combination of quantitative and qualitative research methods can lead the field researcher to "edit one's thinking about both the cause and the effect, and one can suggest, after the fact, other constructs that might fit the data better than those with which the [field study] began" (Cook and Campbell, 1979, p. 69).
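The sketch below shows the general form of such a re-specification on simulated data: an adoption effect that looks negligible on average but emerges once an experience interaction is added. The variables and the data-generating process are hypothetical; this is not Griffith and Neely's actual specification.

```python
# Adding a moderator interaction suggested by qualitative evidence
# (hypothetical data and coefficients).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "adopted": rng.integers(0, 2, n),     # scorecard branches
    "experience": rng.normal(10, 3, n),   # manager tenure in years
})
# Adoption pays off only for managers with above-average experience.
df["profit"] = (50 + 0.5 * df["adopted"] * (df["experience"] - 10)
                + rng.normal(0, 2, n))

# The average adoption effect is near zero...
print(smf.ols("profit ~ adopted", data=df).fit().params["adopted"])
# ...but the interaction reveals the experience-contingent effect (~0.5).
fit = smf.ols("profit ~ adopted * experience", data=df).fit()
print(fit.params["adopted:experience"])
```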
Process tracing provides a useful technique for using multi-method research to strengthen causal inferences by enhancing a field researcher's understanding of the mechanisms underlying observed statistical relationships. Process tracing is a qualitative research method that "attempts to identify the intervening causal process – the causal chain and causal mechanism – between an independent variable (or variables) and the outcome of the dependent variable" (George & Bennett, 2005, p. 206). The general method of process tracing involves generating and analyzing qualitative data (from histories, archival documents, interviews, observation, and other sources) on the causal mechanisms or processes (such as specific events, actions, or decisions) that link hypothesized causes to observed effects. In contrast to statistical tests of co-variation that focus on causal effects (did X lead to Y?), process tracing focuses on causal explanations (through what mechanisms did X lead to Y?), thereby opening the "black box" between cause and effect (e.g., Hedström & Swedberg, 1998). A key notion in process tracing is that a variable cannot have a causal effect on an outcome unless there is an underlying causal mechanism. This is particularly important in accounting research because accounting practices in themselves have no effect on outcomes. It is only through the use of the information for decision-making, performance evaluation, or other purposes that accounting systems "cause" changes in outcomes.
Knowledge of causal mechanisms can greatly improve a field researcher's ability to draw causal inferences from statistical results (Bennett & Checkel, 2012; Molina-Azorin, 2011), particularly by helping them assess the extent to which two of Mill's three criteria for causal inference hold in the sample. While statistical tests can provide evidence on the co-variation condition, inferring causality still requires that the causes precede the effects and that other plausible explanations be eliminated. Through process tracing, the temporal ordering of events and decisions linking the putative cause and outcome can be established. This can confirm that the causal event or activity did in fact precede a subsequent link in the causal chain or the final outcome, and can improve the specification of time-series tests by highlighting leads and lags in relationships. Alternative explanations and spurious correlations can also be identified, including the possibility that any observed association between the introduction of an accounting practice and an outcome is due to a "Hawthorne" effect. If the researcher cannot find evidence of some critical step in the hypothesized causal chain linking the accounting practice to the ultimate outcome of interest, the statistical result may be spurious. Alternatively, careful process mapping may identify causal links or explanations that were not expected, leading the researcher to reject the original hypothesis and/or to modify the hypothesis or empirical tests. By using process mapping to inform and interpret statistical tests, field researchers can increase the reader's confidence that the observed statistical relations reflect causal relations.
Conclusions
The examination of cause-and-effect relationships will always be a major focus of accounting research. Positivist field researchers can increase internal validity and strengthen causal inferences by taking advantage of the rich institutional knowledge that can be obtained in the field. The resulting qualitative analyses and insights can be used to improve model and variable specification prior to conducting empirical tests, or can be used after the initial tests have been conducted to better understand the observed statistical relations or to suggest further analyses and refinements. Through this iterative use of qualitative and quantitative methods, field researchers can provide more convincing evidence regarding the causal links between accounting practices and organizational outcomes.
Acknowledgements
I am indebted to Chris Chapman for his valuable comments on earlier drafts. The financial support of EY is gratefully acknowledged.
References
Antonakis, J., Bendahan, S., & Lalive, R. (2010). On making causal claims: A review and recommendations. Leadership Quarterly, 21, 1086–1120.
Bennett, A. (2002). Where the model frequently meets the road: Combining statistical, formal, and case study methods. Georgetown University working paper.
Bennett, A., & Checkel, J. (2012). Process tracing: From philosophical roots to best practices. Simons Papers in Security and Development, No. 21/2012, Simon Fraser University.
Bol, J. C., & Moers, F. (2010). The dynamics of incentive contracting: The role of learning in the diffusion process. Accounting, Organizations and Society, 35, 721–736.
Cook, T., & Campbell, D. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston, MA: Houghton Mifflin.
Davis, S., & Albright, T. (2004). An investigation of the effect of balanced scorecard implementation on financial performance. Management Accounting Research, 15, 135–153.
George, A., & Bennett, A. (2005). Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.
Griffith, R., & Neely, A. (2009). Performance pay and managerial experience in multitask teams: Evidence from within a firm. Journal of Labor Economics, 27, 49–82.
Hedström, P., & Swedberg, R. (Eds.). (1998). Social mechanisms: An analytical approach to social theory. Cambridge: Cambridge University Press.
Kelloway, E. K. (1998). Using LISREL for structural equation modeling: A researcher's guide. London: Sage Publications.
Lillis, A. (2006). Reliability and validity in field study research. In Z. Hoque (Ed.), Methodological issues in accounting research—Theories, methods and issues. London: Spiramus Press.
Modell, S. (2005). Triangulation between case study and survey methods in management accounting research: An assessment of validity implications. Management Accounting Research, 16, 231–254.
Molina-Azorin, J. F. (2011). The use and added value of mixed methods in management research. Journal of Mixed Methods Research, 5, 7–24.
Seawright, J., & Gerring, J. (2008). Case-selection techniques in case study research: A menu of qualitative and quantitative options. Political Research Quarterly, 61, 294–308.