Causality in the context of analytical models and numerical experiments

Ramji Balakrishnan, Mark Penno
The University of Iowa, Tippie College of Business, W274 Pappajohn Business Building, Iowa City, IA 52242-1000, United States

Abstract
Intuition tends to guide model formulation, as it is generally impossible to consider all dimensions of a problem. The ability to surprise, heightening the focus on paradox and the contradiction of reality, is therefore more useful than a literal representation of reality. While numerical experiments are useful in exploring patterns not well suited to analytic approaches, the features of the model that underlies an experiment determine the experiment's ability to provide insight and offer surprise.
Introduction
There is a long tradition of scholarship on causality in basic philosophy (e.g., Aristotle as noted in Falcon, 2011), philosophy of science, and other disciplines (e.g., Friedman, 1953). This essay addresses the role that causality plays in the development of analytical models and numerical experiments in accounting research. We argue that the process of specifying causal relationships is primarily an intuitive one, and as such, successful research relies as much on imagination as it does on technical skills. This argument rests on the assertion that the fundamental phenomena in accounting are complex and multi-faceted, dooming a mechanical approach to failure. We further argue that analytical models and numerical experiments help us identify and explain patterns. The impact of this research arises from revealing the application of intuitively familiar patterns to new settings, thereby creating surprise.
The role of analytical models
Analytical models demonstrate tautologies. A tautology is an argument whose conclusion follows logically from its premises. Given the pejorative meaning of 'tautology' in its everyday usage, one might wonder about the value of such an exercise. Tautologies narrow our attention to a limited set of factors, and this restriction is valuable because it reduces complexity. Aragones, Gilboa, Postlewaite, and Schmeidler (2005) refer to this process as 'fact-free learning.' They point out that

People may be surprised to notice certain regularities that hold in existing knowledge they have had for some time. That is, they may learn without getting new factual information.

The role of a model is to provide an explanation for recurring patterns in the data. Identifying the factors that drive
the pattern of interest, however, is a difficult exercise.[1] To illustrate, consider a setting with n factors and a requirement that any model be restricted to using only k < n factors. If the model builder were to consider all possible subsets, she would have

$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$

models to evaluate. Let n = 50 and k = 3. Then, a conscientious modeler who wishes to compare all possible models must evaluate 19,600 different models before choosing the best one. Keeping k = 3, the number of comparisons required for n = 60 is 34,220, and the number required for n = 100 is 161,700. As seen in Table 1, increasing the size of the subset of modeled factors has an even larger impact; the number of potential models to consider is large and grows exponentially. With our finite life span, only intuition can somehow select an adequate model.[2]

Table 1
Number of potential models.

Size of subset     Number of potential factors to consider (n)
of factors (k)     25          50           60           100
3                  2,300       19,600       34,220       161,700
4                  12,650      230,300      487,635      3,921,225
5                  53,130      2,118,760    5,461,512    75,287,520

[1] The problems we discuss are similar in nature to (but not identical to) 'hard' problems. That term is used technically to describe a complexity that increases exponentially as the number of possible causes considered by the modeler increases.

[2] See Aragones et al. (2005) for a more sophisticated version of this argument. We also note that empirical regularities often provide a useful starting point for identifying potentially important factors.
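The counts above and in Table 1 follow directly from the binomial coefficient. As a quick check, this short Python sketch reproduces the table's entries:

```python
from math import comb  # comb(n, k) = n! / (k! * (n - k)!)

# Reproduce Table 1: number of k-factor models from n candidate factors.
for k in (3, 4, 5):
    print(k, [comb(n, k) for n in (25, 50, 60, 100)])
# 3 [2300, 19600, 34220, 161700]
# 4 [12650, 230300, 487635, 3921225]
# 5 [53130, 2118760, 5461512, 75287520]
```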
We argue that surprise is the critical element of an analytic model. A model may be created to represent the phenomenon that we are concerned about, or intentionally as a counterexample. Some of the most celebrated analytical results referenced by accounting academics are unexpected: unraveling (Verrecchia, 1983), the irrelevance of capital structure (Modigliani & Miller, 1958), market failure (Akerlof, 1970), the impossibility of some desirable comparisons (Arrow, 1950), the value of an otherwise worthless education (Spence, 1973), and the value of all noisy but informative data in contracting with risk-averse agents (Holmstrom, 1979). These models are powerful because they tell us that the world would be quite different from the one that we know if we limited ourselves to the factors of the model.
As an illustration, Modigliani and Miller (1958) present us with a world in which capital structure does not matter. But, because we observe the effects of capital structure, their paper led the way for introducing moral hazard and adverse selection into models of financial markets. In other words, they presented us with a base model to which new factors can be added incrementally, thereby imposing order on the research process. For example, Miller and Rock (1985) use the 1958 article as a springboard to motivate their explanation of dividend-investment decisions. In the latter article, dividends signal the firm's private information about the permanence of reported earnings changes. Now dividends matter. Because private information (adverse selection) was absent in the 1958 article, we can immediately appreciate its impact. Here causality is exploited via the tautology, and the type of causality rendered is not that of the real world. As a second illustration, a prominent strategy might be to take a widely held belief or business proverb[3] and produce a counterexample. Lambert (1985) exemplifies this approach by showing that commonly encountered two-tailed variance investigation policies may not be optimal when the information is used for contracting within the confines of a principal-agent model. His result suggests that something other than contracting may be responsible for the prevalence of two-tailed investigation policies. In retrospect, Lambert's paper spelled an end to the modeling of variance investigations in the principal-agent literature, but not because it had provided a final answer.

[3] Simon (1946) is a critic of "proverbs of administration." He points out that "A fact about proverbs that greatly enhances their quotability is that they almost always occur in mutually contradictory pairs. 'Look before you leap!' but 'he who hesitates is lost.'" He notes that "Most of the propositions that make up the body of administrative theory share this defect." Interestingly, he does add that "If it is a matter of rationalizing behavior that has already taken place or justifying action that has already been decided upon, then proverbs are ideal."
The role of numerical experiments
An example, a numerical experiment and a system simulation
Examples, numerical experiments, and system simulations all employ numbers-based analyses to glean insight. We view them as occupying different points on a continuum. At one end, a numerical example illustrates a specific insight. Such an example establishes plausibility and is sometimes the only way to prove the existence of a particular solution for a model.[4] A well-crafted example also can help readers understand and appreciate the insights derived in an analytic model. However, by virtue of being one-off, an example can only prove the negation of a proposed causal relation. A single example can be extremely powerful when it challenges our intuition. In this case, a single example may serve a role similar to that of a model described above, but (from an expositional perspective) much more economically.

[4] For example, while a researcher may identify the mathematical conditions required for an equilibrium, there may not exist any parameters for which these conditions hold. Consequently, finding an example for which the conditions hold is a necessary condition for the analysis to be meaningful.
At the other end, Monte Carlo simulations of complex but reasonably well-understood systems are useful in generating predictions. Examples include predicting the path of hurricanes and the outcomes of elections. The goal often is to model how a complex system might evolve in response to various combinations of input parameters. When combined with mechanical feedback systems, simulations of physical phenomena also help create an immersive environment that might be used for training, particularly in the context of rare but costly instances (e.g., loss of an engine by an airplane).[5]

[5] The field of system dynamics is related. The goal here is to model the outcomes of structures and policies, taking account of the feedback effects. While early applications focused on corporate policies, the idea has been applied to urban planning (Forrester, 1969) and to the global socioeconomic system (Meadows, Randers, & Meadows, 1993).
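To make this end of the continuum concrete, here is a minimal Monte Carlo sketch of our own (a toy compounding process, not drawn from any study cited above): it repeatedly propagates a system forward under randomly drawn inputs and summarizes the resulting distribution of outcomes, which is the basic logic behind simulation-based prediction.

```python
import random

def simulate_path(drift, volatility, periods=12):
    """One forward path of a toy state variable under random shocks."""
    state = 100.0
    for _ in range(periods):
        state *= 1.0 + drift + random.gauss(0.0, volatility)
    return state

# Propagate input uncertainty: draw parameter combinations, simulate
# each, and summarize the distribution of final outcomes.
random.seed(42)
outcomes = sorted(simulate_path(drift=random.uniform(0.0, 0.02),
                                volatility=random.uniform(0.01, 0.05))
                  for _ in range(10_000))
print("median outcome:", outcomes[len(outcomes) // 2])
print("90% interval:", (outcomes[500], outcomes[9500]))
```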
Numerical experiments in accounting lie between these end points. Particularly in the context of information-related questions, we argue that modeling and simulating the entire system is a daunting exercise. However, the many interactions among system components also mean that a few examples are insufficient to capture the complexity of the problem. By focusing attention on selected parts, numerical experiments take the middle path and seek to uncover patterns that help direct our efforts. For example, a complete model of the role of limited information in product planning decisions requires components that address capacity acquisition based on expected market conditions, the allocation of capacity among products based on realized market conditions, a
cost allocation model (which often has multiple and competing uses), and strategic pricing. Our current understanding does not permit us to model this entire system of linked decisions; however, even an elaborate example cannot capture the complexity of the interactions among the components.
We deliberately use the term numerical 'experiment' to emphasize the analogy to behavioral experimentation. Experimental design, which facilitates the identification of causal links among theoretical constructs, is paramount in both approaches. Ex ante consideration of dependent and independent variables, possible parameter values, and potential controls for confounds are key ingredients of successful research. Much as experimentalists who use human subjects control for non-treatment factors by randomly assigning participants to treatment conditions, a numerical approach requires that the researcher explicitly control for any confounds by suitable randomization protocols. Below, we discuss several applications to demonstrate the potential benefits that arise from numerical experimentation. We draw from economics, finance, and accounting to underscore the generality of our argument.
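A minimal sketch of this randomization logic follows; the one-line run_model function is a hypothetical stand-in for a real numerical experiment. Nuisance parameters are redrawn at random within each replication so that they cannot systematically co-vary with the treatment, mirroring random assignment of human subjects.

```python
import random
import statistics

def run_model(treatment, nuisance):
    """Hypothetical numerical model: outcome depends on the treatment
    variable and on nuisance parameters we do not study directly."""
    return 2.0 * treatment + nuisance["noise"] * random.gauss(0.0, 1.0)

random.seed(7)
results = {}
for treatment in (0.0, 1.0):                 # the manipulated factor
    outcomes = []
    for _ in range(1_000):                   # replications per condition
        # Randomize the nuisance parameter so it cannot confound
        # the treatment comparison.
        nuisance = {"noise": random.uniform(0.5, 2.0)}
        outcomes.append(run_model(treatment, nuisance))
    results[treatment] = statistics.mean(outcomes)

print("estimated treatment effect:", results[1.0] - results[0.0])
```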
Sample uses of numerical experiments
Numerical experimentation may be useful in demonstrating that factors commonly viewed as necessary for a particular observed phenomenon may actually not be necessary to explain it. To illustrate, consider Gode and Sunder (1993), who examine the drivers of the allocative efficiency—the matching of the highest bidder with the lowest-cost producer—of a market. Analytic models of this phenomenon consider 'smart' traders who make optimal decisions. Gode and Sunder instead consider a market populated by zero-intelligence traders, in the sense that these trading programs do not seek to maximize profit and do not observe, remember, or learn. Yet the resulting bids and offers, when coupled with reasonable market rules, generate equilibria with near-100% allocative efficiency; rationality of individual players, a crucial assumption in analytic models, may not be a central force. This finding is a surprise. Further, by sequentially introducing rules, Gode and Sunder (1993) identify the critical rules that drive allocative efficiency.[6]

[6] Poggio, Lo, LeBaron, and Chan (2001) take the idea one step further and advocate the use of artificial-intelligence or AI traders (i.e., programs with specified trading algorithms and preferences) to study market mechanisms and institutions. They show that markets with AI traders resemble those with human traders. Relative to a market with human subjects, the AI-trader market is preferred because it eliminates the human 'wildcard,' more closely corresponds to theory, and allows for greater experimentation, particularly in the context of learning.
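To convey the flavor of such an experiment, the sketch below implements a deliberately simplified zero-intelligence market in the spirit of Gode and Sunder (1993); the trading protocol, parameter ranges, and trader counts are our own stand-ins rather than the paper's exact design. Each buyer (seller) posts a random bid (ask) constrained only by its private value (cost), and a trade occurs when the quotes cross.

```python
import random

random.seed(1)
values = [random.uniform(0, 100) for _ in range(20)]  # buyers' private values
costs = [random.uniform(0, 100) for _ in range(20)]   # sellers' private costs

# Maximum attainable surplus: match highest-value buyers with
# lowest-cost sellers for as long as value exceeds cost.
max_surplus = sum(v - c
                  for v, c in zip(sorted(values, reverse=True), sorted(costs))
                  if v > c)

buyers, sellers = list(values), list(costs)
realized = 0.0
for _ in range(5_000):                  # random sequential trading rounds
    if not buyers or not sellers:
        break
    v, c = random.choice(buyers), random.choice(sellers)
    bid = random.uniform(0, v)          # budget constraint: never bid above value
    ask = random.uniform(c, 100)        # never ask below cost
    if bid >= ask:                      # market rule: crossing quotes trade
        realized += v - c
        buyers.remove(v)
        sellers.remove(c)

print(f"allocative efficiency: {realized / max_surplus:.1%}")
```

Even with no profit-seeking at all, the budget constraint plus the crossing rule push efficiency well toward its maximum, which is the paper's central surprise.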
Numerical experimentation helps check the robustness of analytic results—is the challenge to our intuition valid only in some instances, or is this something that one should expect to observe routinely? Consider the idea that improving a cost accounting system by adding more cost pools will improve the accuracy of reported product costs. Datar and Gupta (1994) challenge this intuition by analytically decomposing the error in reported costs into portions attributable to aggregation (putting dissimilar costs into the same cost pool), specification (using an incorrect cost driver), and measurement. They demonstrate that improvements on any one dimension need not be beneficial: the gain can be offset by accompanying reductions along the other dimensions, because the change eliminates only one part of offsetting errors. However, it is not possible to analytically establish the size of the region (as defined by the possible values of parameters) in which the result holds. Labro and Vanhoucke (2007) test this claim by seeding a benchmark model with two kinds of errors, each at 10 levels (0–90%), and computing the degradation in the accuracy of reported costs. They show that the posited degradation of accuracy in reported costs occurs only under specifically identified circumstances. In this case, our intuition about the usefulness of partial refinements of cost systems is correct more often than it is wrong. By checking over all plausible ranges for the treatment variable, this numerical experiment facilitates our understanding of the intriguing result in Datar and Gupta (1994).
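The logic of such an error-seeding experiment can be sketched as follows. The benchmark system and the single error type below are hypothetical simplifications of our own; Labro and Vanhoucke (2007) cross two error types and use a far richer benchmark. A 'true' costing system allocates resource costs to products via known driver weights; the experiment perturbs those weights at controlled error levels and records the resulting distortion in reported product costs.

```python
import random

random.seed(3)
N_RES, N_PROD = 8, 5

# Benchmark ("true") system: resource costs and normalized driver weights.
res_cost = [random.uniform(100, 1000) for _ in range(N_RES)]
raw = [[random.random() for _ in range(N_PROD)] for _ in range(N_RES)]
weights = [[w / sum(row) for w in row] for row in raw]

def product_costs(wts):
    """Allocate each resource's cost to products via driver weights."""
    costs = [0.0] * N_PROD
    for cost, row in zip(res_cost, wts):
        for j, w in enumerate(row):
            costs[j] += cost * w
    return costs

true_costs = product_costs(weights)

def seed_error(wts, level):
    """Perturb each driver weight by up to +/- level, then renormalize."""
    noisy = [[w * (1 + random.uniform(-level, level)) for w in row]
             for row in wts]
    return [[w / sum(row) for w in row] for row in noisy]

for level in [i / 10 for i in range(10)]:       # 0%, 10%, ..., 90%
    reported = product_costs(seed_error(weights, level))
    distortion = sum(abs(r - t) for r, t in zip(reported, true_costs))
    print(f"error level {level:.0%}: cost distortion {distortion:,.0f}")
```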
Numerical analysis often might be the only way to study certain phenomena and to help shrink the set of factors for an analytic researcher to consider. This application can be particularly insightful when the discovered relations are not intuitive. For example, Foster and Viswanathan (1996) study speculative trading in a market in which traders have disparate information that might be correlated. However, because each trader learns about the others' information from trades, analytic models necessarily simplify the information structure by assuming independent draws or, more commonly, identical information. Using a numerical experiment, Foster and Viswanathan (1996) find that trading patterns (whether to trade heavily right now or to play a waiting game) depend critically on the initial correlation of informed traders' private signals.[7] We note that agent-based models are well suited to this endeavor of identifying patterns and key factors.[8]
These models consider boundedly rational individual agents who may experience learning and adaptation, maximize their self-interest, and interact with other agents using simple behavioral rules. However, even simple settings often give rise to complex phenomena of interest, following the maxim that the whole is greater than the sum of its parts. Davis and Pesch (2013) provide a recent example in auditing—they show that the agents' susceptibility to social influence determines which of two equilibrium patterns of fraud emerges in an organization, as well as the efficacy of alternate governance mechanisms. Palmrose (2009) argues for greater use
of such computational experiments, writing that ". . . [agent-based] models might help us understand the many different cycles in which economic activities occur and the implications of these various cycles for accounting recognition and measurement within the contours of quarterly and annual reporting."

[7] We view such numerical experiments as explorations that teach us about a complex system we inadequately understand. To head in the right direction, we should have some well-defined goals and questions when we start; however, we also will inevitably develop new goals and questions along the way.

[8] Agent-based models are distinct from principal-agent models, which use analytic methods to investigate contracting among parties with divergent preferences. Principal-agent models in accounting tend to focus on the roles of organization structures, policies, and information. Agent-based models consider systems with many agents, who may be organized in a hierarchy. Each agent interacts repeatedly with other agents using simple rules. The focus is on the evolution of system-wide phenomena.
Finally, numerical experiments can help us grasp the magnitude of main and interaction effects. For example, Balakrishnan, Hansen, and Labro (2011) explore how features of a cost accounting system interact to determine the error in reported costs. They generalize results by examining the interaction in a number of cases that vary the underlying characteristics of the production environment. They find that system accuracy is robust to the precision of available information – knowing the exact correlation in the consumption patterns of two resources is not much more useful than knowing that resource usage is correlated at a low or high level. Similarly, they also find limited loss from pooling many small resources (accounting for as much as 30% of costs) into one pool for miscellaneous costs. Insights into such 'effect sizes' are often valuable as we translate research findings into nuggets for consumption by students and practitioners.[9] This translation is greatly facilitated when the researcher takes care to base parameter choices on available empirical evidence (often gathered from field studies).

[9] As Towry (2012) notes, unlike numerical analyses, experiments that use human subjects are not well suited for estimating effect size. Relatedly, documenting effect sizes could potentially lead to empirically testable hypotheses.
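The mechanics of extracting main and interaction effects from a numerical experiment are straightforward, as the sketch below illustrates with a hypothetical 2x2 design; the response function reported_cost_error and its parameters are invented for illustration and do not come from Balakrishnan et al. (2011).

```python
import itertools
import random
import statistics

def reported_cost_error(n_pools, noise, rng):
    """Hypothetical response surface: error falls with more cost pools,
    rises with measurement noise, and the two factors interact."""
    return 10.0 / n_pools + 5.0 * noise + 2.0 * noise / n_pools \
        + rng.gauss(0.0, 0.5)

rng = random.Random(11)
levels = {"n_pools": [2, 10], "noise": [0.1, 0.9]}
cells = {}
for pools, noise in itertools.product(*levels.values()):
    cells[(pools, noise)] = statistics.mean(
        reported_cost_error(pools, noise, rng) for _ in range(2_000))

# Main effect: average contrast over the other factor.
# Interaction: difference of differences across the 2x2 cells.
main_pools = ((cells[(10, 0.1)] + cells[(10, 0.9)])
              - (cells[(2, 0.1)] + cells[(2, 0.9)])) / 2
interaction = ((cells[(10, 0.9)] - cells[(2, 0.9)])
               - (cells[(10, 0.1)] - cells[(2, 0.1)]))
print(f"main effect of pools: {main_pools:+.2f}")
print(f"interaction: {interaction:+.2f}")
```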
Concluding comments
Without a limitation on the number of factors that we designate as 'causes,' the world becomes a 'blooming, buzzing confusion' (James, 1890). In fact, our perception of the world is made possible only by limiting the number of factors that we attend to. Academic taxonomist George Gaylord Simpson (1961, 2–3) eloquently states this observation:

There could be no intelligible language if each thing (or each perception) were designated by a separate word . . . A practical benefit, then, of ignoring small differences for similarity-based judgments is that the descriptions used need not be overly complex.

In a sense, a causal relationship is something that we invent and maintain as long as it is useful. Consequently, we require simple descriptions, or 'models,' to formally communicate what we (after the fact) have discovered, referring to them as 'relationships.' We conclude that because research seeks to identify a limited number of factors, our search is guided as much by imagination and creativity as it is by mathematical competence, and that the success of each methodology frequently relies on the element of surprise.
Acknowledgement
We thank Chris Chapman, Eva Labro and Shiva Sivaramakrishnan for insightful comments on earlier drafts.
References
Akerlof, G. A. (1970). The market for 'lemons': Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3), 488–500.
Aragones, E., Gilboa, I., Postlewaite, A., & Schmeidler, D. (2005). Fact-free learning. American Economic Review, 95, 1355–1368.
Arrow, K. J. (1950). A difficulty in the concept of social welfare. Journal of Political Economy, 58(4), 328–346.
Balakrishnan, R., Hansen, S., & Labro, E. (2011). Evaluating heuristics used when designing product costing systems. Management Science, 57, 520–541.
Datar, S., & Gupta, M. (1994). Aggregation, specification and measurement errors in product costing. The Accounting Review, 69(3), 567–591.
Davis, J., & Pesch, H. (2013). Fraud dynamics in organizations. Accounting, Organizations and Society, 38(6–7), 469–483.
Falcon, A. (2011). Aristotle on causality. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.
Forrester, J. (1969). Urban dynamics. Portland, OR: Productivity Press.
Foster, F. D., & Viswanathan, S. (1996). Strategic trading when agents forecast the forecasts of others. Journal of Finance, 51, 1437–1478.
Friedman, M. (1953). Essays in positive economics. Chicago: University of Chicago Press.
Gode, D., & Sunder, S. (1993). Allocative efficiency with zero-intelligence traders: Market as a substitute for individual rationality. Journal of Political Economy, 101, 119–137.
Holmstrom, B. (1979). Moral hazard and observability. The Bell Journal of Economics, 10(1), 74–91.
James, W. (1890). The principles of psychology (Vol. 1). (Reprinted by Cosimo Classics, 2007).
Labro, E., & Vanhoucke, M. (2007). A simulation analysis of interactions among errors in costing systems. The Accounting Review, 82(4), 939–962.
Lambert, R. (1985). Variance investigation in agency settings. Journal of Accounting Research, 23(2), 633–647.
Meadows, D. H., Randers, J., & Meadows, D. L. (1993). Beyond the limits: Confronting global collapse, envisioning a sustainable future. London: Chelsea Green Publishers.
Miller, M., & Rock, K. (1985). Dividend policy under asymmetric information. Journal of Finance, 40(4), 1031–1051.
Modigliani, F., & Miller, M. (1958). The cost of capital, corporation finance and the theory of investment. American Economic Review, 48(3), 261–297.
Palmrose, Z. (2009). Science, politics and accounting: A view from the Potomac. The Accounting Review, 84(2), 281–297.
Poggio, T., Lo, A., LeBaron, B., & Chan, N. (2001). Agent-based models of financial markets: A comparison with experimental markets. Working paper, MIT Sloan School of Management.
Simon, H. A. (1946). The proverbs of administration. Public Administration Review, 6(1), 53–67.
Simpson, G. (1961). Principles of animal taxonomy. New York: Columbia University Press.
Spence, A. M. (1973). Job market signaling. Quarterly Journal of Economics, 87(3), 355–374.
Towry, K. (2012). Discussion of 'Subordinates as the first line of defense against biased financial reporting.' Journal of Management Accounting Research, 21, 25–30.
Verrecchia, R. (1983). Discretionary disclosure. Journal of Accounting and Economics, 5, 179–194.