Accounting, Organizations and Society 46 (2015) 8–18
The effects of forecast type and performance-based incentives on the
quality of management forecasts
Clara Xiaoling Chen a,⁎, Kristina M. Rennekamp b, Flora H. Zhou c

a University of Illinois at Urbana-Champaign, United States
b Cornell University, United States
c Georgia State University, United States
Article info
Article history:
Received 21 May 2014
Revised 13 March 2015
Accepted 14 March 2015
Available online 1 April 2015
Keywords:
forecast type
management forecasts
management estimates
disaggregated forecasts
unintentional optimism
performance-based incentives
Abstract
Understanding forecasts is important because of their pervasiveness in business decisions such as budgeting, production, and financial reporting. In this study we use an abstract experiment to examine how the preparation of disaggregated forecasts interacts with performance-based incentives to influence the accuracy and optimism of forecasts. We manipulate two factors between subjects at two levels each: forecast type (disaggregated or aggregated) and performance-based incentives (present or absent). Consistent with our predictions, we find that (1) preparing disaggregated forecasts leads to greater improvements in forecast accuracy (compared to preparing aggregated forecasts) in the absence of performance-based incentives than in the presence of performance-based incentives, and (2) preparing disaggregated forecasts leads to greater increases in forecast optimism (compared to preparing aggregated forecasts) in the presence of performance-based incentives than in the absence of performance-based incentives. Our study contributes to our understanding of unintentional biases in the forecasting process. Our results have important practical implications for designers of management control systems who elicit internal forecasts from managers. Finally, our results also have important practical implications for those who either prepare or use external management forecasts.
© 2015 Elsevier Ltd. All rights reserved.
1. Introduction
Understanding forecasts is important because of their perva-
siveness in business decisions such as budgeting, compensation,
and financial reporting. Inaccurate forecasts can reduce the effec-
tiveness of the production planning process and negatively impact
production efficiency, cost management, and ultimately firm per-
formance (e.g., Bruggen, Grabner, & Sedatole, 2013). To increase the
chance of obtaining accurate forecasts from an agent, a principal
needs to be careful in designing the management control system
that elicits such forecasts from the agent (e.g., Osband, 1989).
One such control system that is commonly used is the plan-
ning and budgeting system of a firm (Merchant & Van der Stede,
2012). Within the planning and budgeting system, an important
⁎ Corresponding author at: Department of Accountancy, University of Illinois at
Urbana-Champaign, 389 Wohlers Hall, 1206 South Sixth Street, Champaign, IL
61820, United States. Tel.: +1 (217) 244 3953.
E-mail addresses: [email protected] (C.X. Chen), [email protected] (K.M. Ren-
nekamp), [email protected] (F.H. Zhou).
design choice is the level of aggregation at which the principal
elicits forecasts from the agent. In practice, firms vary consider-
ably in the level of aggregation of the information elicited by the
planning and budgeting system (Merchant & Van der Stede, 2012).
For example, top management can request that divisional man-
agers prepare either an aggregated forecast (e.g., forecast total sales
for the division) or a disaggregated forecast (e.g., forecast sales
for individual products within the division) (see Kahn, 1998 and
Lapide, 2006). Although managers are likely to prepare both dis-
aggregated and aggregated forecasts for internal decision-making
purposes, the level of forecast aggregation required by the budget-
ing system will determine which forecast is more salient to them.
Further, research on the anchoring and adjustment bias suggests
that managers likely anchor on the numbers in the forecast that
are most salient to them (Bromiley, 1987; Tversky & Kahneman,
1974). Therefore, the level of aggregation at which the principal
elicits forecasts from the agent should affect managers’ forecasts
even when both types of forecasts are prepared.
Although economic theory suggests that a rational agent will
provide the same forecast of a summary performance measure re-
gardless of the level of forecast aggregation (or forecast type), psy-
chology theory suggests that forecast type will influence the qual-
ity of the agent’s forecasts, where forecast quality can refer to both
the accuracy and optimism (or bias) in a forecast. We investigate
how a control system design choice—forecast type—interacts with
incentives to affect two dependent measures of forecast quality:
forecast accuracy and forecast optimism. Forecast accuracy refers to
the degree of closeness between a forecast and the actual out-
come. Forecast optimism refers to consistent differences between
forecasts and actual outcomes; that is, the extent to which fore-
casts exhibit a general tendency to be too high relative to actual
outcomes. Specifically, we examine how forecast type affects fore-
cast accuracy and forecast optimism in the presence or absence of
explicit performance-based incentives that are tied to the measure
being forecasted.
Drawing on psychology, forecasting, and accounting literatures
on forecasts, we generate the following predictions for forecast ac-
curacy and forecast optimism, respectively. First, we predict that
preparing disaggregated forecasts leads to greater improvements
in forecast accuracy (compared to preparing aggregated forecasts)
in the absence of performance-based incentives than in the pres-
ence of performance-based incentives. When performance-based
incentives are absent, disaggregated forecasts involve more care-
ful and objective consideration of forecast components, which
should improve forecast accuracy compared to preparing aggre-
gated forecasts. Second, we predict that preparing disaggregated
forecasts leads to greater increases in forecast optimism (compared
to preparing aggregated forecasts) in the presence of performance-
based incentives than in the absence of performance-based incen-
tives. When managers produce disaggregated forecasts but also have
explicit incentives to achieve favorable performance on the fore-
casted measure, they have both the motivation and opportunity to
produce optimistic forecasts. In other words, while the preparation
of disaggregated forecasts involves more complete consideration
of information, theory suggests that individuals with performance-
based incentives are likely to consider that additional information
in a biased way that helps them reach their desired conclusions
(Hales, 2007).
To test our predictions we conduct an abstract laboratory exper-
iment where participants complete a knowledge task with ques-
tions from four different categories (English, math, grammar, and
logic) and prepare forecasts of their performance. Participants
complete two rounds of the task. After the initial round, partic-
ipants receive feedback on their performance. Before the second
round begins, participants provide forecasts of their second-round
performance. Participants then answer the second round of ques-
tions and learn their actual performance.
We use an abstract task in our study for two reasons. First,
we are interested in examining a fundamental psychological bias
rather than reactions to rich, institutional features. An abstract
knowledge test allows us to test the fundamental processes that af-
fect the characteristics of our two types of forecasts while avoiding
noise in participants’ responses that could arise from asking them
to do an unfamiliar task like forecasting revenues and expenses.
Second, using a task with rich institutional features could intro-
duce other incentives that may lead to intentional biases in the
forecasts. For example, in an internal budgeting setting, managers
may intentionally provide lower forecasts to increase the proba-
bility of achieving targets or intentionally provide higher forecasts
to increase resource allocations (Fisher, Maines, Peffer, & Sprinkle,
2002). Using an abstract task removes the institutional features
that might drive managers to intentionally produce biased fore-
casts, allowing us to isolate the effects of unintentional bias.
We manipulate two factors between subjects at two levels each.
First, to manipulate forecast type, participants in the disaggregated
forecast condition forecast their scores in all four categories of the
test (English, math, grammar, and logic), while participants in
the aggregated forecast condition forecast their total score.¹ Sec-
ond, we manipulate whether explicit performance-based incentives
are present or absent.² We hold average participant compensation
constant across the two incentive conditions. We examine two de-
pendent variables: (1) forecast accuracy, where overestimation of
scores is treated as equivalent to underestimation of scores; and
(2) forecast optimism, which captures a systematic tendency to over-
estimate scores.
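The distinction between these two dependent measures can be made concrete with a minimal sketch; the function names here are ours, purely illustrative, not the paper's:

```python
def forecast_error(forecast: int, actual: int) -> int:
    """Accuracy measure: absolute error, so overestimation and
    underestimation of scores count equally."""
    return abs(forecast - actual)

def forecast_optimism(forecast: int, actual: int) -> int:
    """Optimism measure: signed error, positive when the forecast
    is too high relative to the actual outcome."""
    return forecast - actual

# A forecast of 18 against an actual score of 14 and a forecast of 10
# against 14 are equally inaccurate (error 4), but only the first is optimistic.
```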
Consistent with our predictions, we find that: (1) preparing
disaggregated forecasts leads to greater improvements in forecast
accuracy (compared to preparing aggregated forecasts) in the ab-
sence of performance-based incentives than in the presence of
performance-based incentives; and (2) preparing disaggregated
forecasts leads to greater increases in forecast optimism (compared
to preparing aggregated forecasts) in the presence of performance-
based incentives than in the absence of performance-based incen-
tives. Given that participants’ pay would be higher in the absence
of the forecast error and forecast optimism described above, our
results show that participants’ judgments conflict with their finan-
cial incentives and therefore suggest that the biases we observe are
unintentional.
Our study contributes to our understanding of unintentional bi-
ases in the forecasting process. Since unintentional biases may be
more difficult to discipline than intentional, incentive-driven bi-
ases, our study provides insights that are likely useful to both pre-
parers and users of forecasts. First, our results contribute to the
budgeting literature. Prior budgeting literature focuses heavily on
the opportunistic behavior of agents in the budgeting process and
the effectiveness of truth-inducing incentives (e.g., Chow, Cooper,
& Waller, 1988; Church, Hannan, & Kuang, 2012; Shields & Young,
1993; Waller, 1988; Webb, 2002; Young, 1985). However, uninten-
tional biases such as those documented in our paper are more dif-
ficult to mitigate. Specifically, our results show that a control sys-
tem design choice that has so far been largely overlooked in man-
agement accounting research—the level of forecasts elicited—can
have unintended consequences for potential bias and accuracy in
management forecasts.
Second, by highlighting the potential effect of an internal
planning and budgeting system design choice (i.e., forecast type)
on externally reported management forecasts, our study comple-
ments the accounting literature on management forecasts as well
as an emerging literature that examines the link between ex-
ternal disclosures and internal decision-making (e.g., Goodman,
Neamtiu, Shroff, & White, 2014; Hemmer & Labro, 2008; McNi-
chols & Stubben, 2008). Prior research on management forecasts
has shown that disaggregated forecasts increase the market’s per-
ception of the informational value and credibility of management
forecasts (Hirst, Koonce, & Venkataraman, 2007; Hutton, Miller, &
Skinner, 2003; Lansford, Lev, & Tucker, 2013), reduce investors’ fix-
ation on announced earnings (Elliott, Hobson, & Jackson, 2011),
and decrease auditors’ tolerance for misstatement (Libby & Brown,
2013). Our study differs from these prior studies by: (1) taking
the perspective of the preparer, rather than the users, of manage-
ment forecasts; and (2) by focusing on the actual, rather than per-
ceived, quality of disaggregated forecasts. Despite the documented
perceived benefits of disaggregated forecasts, our results suggest
¹ Although we manipulate the level of disaggregation at two levels in our exper-
iment, the level of disaggregation can vary in degrees in practice. We expect the
directional effects we document in our study to hold with varying levels of disag-
gregation.
² We manipulate incentives at two levels in our experiment, but the absence
of performance-based incentives versus presence of performance-based incentives
conditions can also map into low-powered incentives versus high-powered incen-
tives in the real world.
that the actual quality of disaggregated management forecasts may
depend on the incentives that managers face.
Finally, our study also adds to the forecasting literature by high-
lighting unintentional behavioral biases in the forecasting process.
The literature on forecasting has largely ignored behavioral expla-
nations for unintentional optimism. Our study also answers the call
in the forecasting literature for more research that sheds light on
the circumstances under which disaggregated forecasts are more
likely to improve on aggregated forecasts (e.g., Henrion, Fischer, &
Mullin, 1993; Kremer, Siemsen, & Thomas, 2012).
2. Theory and hypotheses
Prior research suggests that decomposition, or disaggregation,
is a useful technique for reducing information processing demands
on the estimator, which may lead to more complete considera-
tion of available information and, ultimately, more accurate esti-
mates (Raiffa, 1968; Ravinder, Kleinmuntz, & Dyer, 1988). However,
prior research also suggests that disaggregation and more com-
plete consideration of information are not necessarily always ben-
eficial to judgment quality (e.g., Henrion et al., 1993). We argue
that while disaggregation can improve the accuracy of estimates in
the absence of performance-based incentives, it can give forecast-
ers greater opportunity to inject optimistic bias into their forecasts
in the presence of explicit performance-based incentives.
2.1. Forecast type and consideration of available information
We first consider how processing of information differs be-
tween preparing disaggregated forecasts and preparing aggregated
forecasts, regardless of the incentives that a manager faces. Prior
research suggests that preparing disaggregated forecasts can re-
duce information processing load for the forecaster, which may
lead to more careful consideration of all available information than
preparing aggregated forecasts (Henrion et al., 1993; Raiffa, 1968;
Ravinder et al., 1988). This occurs because forecasting a number
holistically is often considered a more complex task than decom-
posing the forecasting problem into multiple components first and
then combining the components into an aggregated forecast (e.g.,
Henrion et al., 1993; Ravinder et al., 1988). As task complexity in-
creases, individuals are more likely to choose strategies that lower
total cognitive costs (Bonner, 2008). When individuals use these
strategies, they do not search for all relevant information in mak-
ing decisions and, as a consequence, decision quality is often re-
duced. In a management forecast setting, making an aggregated
forecast requires the manager to attend to all relevant informa-
tion at once, which can be mentally taxing. To make the forecast-
ing task more manageable, the manager may choose to attend to
the most salient pieces of information and ignore or underweight
less important information. Consistent with this argument, in an
audit setting, Zimbelman (1997) shows that auditors’ attention to
fraud-risk factors is higher when they separately assess the risk of
intentional and unintentional misstatement than when they assess
only the overall risk of misstatement.
Related work on support theory also suggests that disaggrega-
tion leads to more careful consideration of individual components
of a given set of information. Research on support theory finds that
unpacking an event into two or more of its components helps re-
spondents recall more evidence from memory and/or makes ex-
isting evidence more salient, such that the rated likelihood of the
event occurring increases (Tversky & Koehler, 1994). Support the-
ory has primarily focused on the assessments of probability or fre-
quency of alternative hypotheses, but the cognitive mechanism un-
derlying the unpacking phenomenon is quite general. Van Boven
and Epley (2003) confirm that unpacking also influences evalua-
tions when events are simply described in greater detail as op-
posed to being unpacked into non-overlapping components. Specif-
ically, Van Boven and Epley (2003) show that unpacking leads peo-
ple to think about the details of a category or event, thereby mak-
ing it easier to mentally generate evaluative evidence.
Combined, the prior literature suggests that preparing disag-
gregated forecasts leads to lower information processing demands
and more complete consideration of available information. How-
ever, greater consideration of information has the potential to ei-
ther benefit or harm managers’ forecasts, depending on whether
the managers have explicit performance-based incentives.
2.2. Forecast accuracy in the absence of performance-based incentives
Drawing on prior literature on forecasting and accounting, we
first consider the effects of preparing disaggregated forecasts in
the absence of explicit performance-based incentives. More specifi-
cally, we predict that preparing disaggregated forecasts will lead to
greater improvements in forecast accuracy (compared to preparing
an aggregated forecast) in the absence of performance-based in-
centives than in the presence of performance-based incentives for
at least two reasons. First, as discussed in the previous subsection,
preparing disaggregated forecasts reduces the cognitive load of the
forecaster. In the absence of performance-based incentives, a re-
duction of the cognitive load may lead to more complete consid-
eration of all available information and improve the accuracy of
forecasts. Second, in the absence of performance-based incentives,
disaggregated component forecasts are likely to contain random er-
rors, some of which overstate performance and some of which un-
derstate performance. These random errors will at least partially
cancel each other out when they are combined to derive the top-
level forecast, leading to less error and greater forecast accuracy
for disaggregated forecasts compared to aggregated forecasts (e.g.,
Kleinmuntz, Fennema, & Peecher, 1996; Ravinder et al., 1988).
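The error-cancellation argument can be illustrated with a small simulation, a sketch under the assumption of normally distributed component errors (four components, mirroring the four test categories in our task):

```python
import random

random.seed(0)

def simulate(trials: int = 100_000, components: int = 4, sd: float = 2.0):
    """Mean absolute error of an aggregated forecast when component errors
    are independent (random) versus perfectly correlated (systematic)."""
    indep_total, corr_total = 0.0, 0.0
    for _ in range(trials):
        errors = [random.gauss(0, sd) for _ in range(components)]
        indep_total += abs(sum(errors))            # independent errors partially cancel
        corr_total += abs(components * errors[0])  # identical errors compound instead
    return indep_total / trials, corr_total / trials

indep_mae, corr_mae = simulate()
# With independent errors the standard deviation of the sum grows as
# sqrt(components) rather than components, so the mean absolute error
# here is roughly half that of the correlated case (sqrt(4)/4 = 1/2).
```

The correlated branch corresponds to the systematic-error caveat of Ravinder et al. (1988): when component errors move together, aggregation provides no cancellation benefit.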
Based on the above discussion, we expect that disaggregated
forecasts will result in both greater precision in the forecast of each
component (due to greater consideration of information in fore-
casting each component) and greater reduction of random errors
when component forecasts are combined (due to cancellation of
error terms). In turn, we expect these effects to result in overall
greater improvements in forecast accuracy in disaggregated fore-
casts than in aggregated forecasts when performance-based incen-
tives are absent compared to when performance-based incentives
are present.
We note, however, that greater consideration of information in
forecasting each component may induce greater bias in the fore-
casts in the presence of explicit incentives tied to the forecasted
measure. In addition, the error reduction effect discussed above
could be undermined if the errors associated with the component
forecasts are positively correlated, i.e., the errors are systematic
rather than random (Ravinder et al., 1988). Prior research suggests
that this is more likely to be the case in the presence of explicit
performance-based incentives, which we consider next.
2.3. Forecast accuracy in the presence of performance-based
incentives
Prior literature suggests that individuals are naturally, and of-
ten unintentionally, optimistic, and that performance incentives
or directional goals can exacerbate this optimism (Hales, 2007).
Although managers also have incentives for accurate forecasts
because investors associate accurate forecasts with management
credibility and reward accurate forecasts (Ajinkya & Gift, 1984;
Graham, Harvey, & Rajgopal, 2005; Healy & Palepu, 2001; Jennings,
1987; Mercer, 2005), managers’ incentives for accurate forecasts
may be dominated by their incentives for favorable performance
when they are provided with explicit performance-based incen-
tives.
The discussion above suggests that when managers are re-
warded for higher performance on the forecasted measure, fore-
casts are likely more optimistic, regardless of whether disaggre-
gated or aggregated forecasts are prepared. Although most indi-
viduals have at least some intrinsic motivation for favorable per-
formance, we expect the bias to be greater when performance-
based incentives are explicit. However, these theories only pre-
dict a main effect of performance incentives on forecast optimism.
Within a group of individuals that have explicit performance in-
centives, greater forecast optimism among those who prepare a
disaggregated rather than an aggregated forecast would be consis-
tent with motivated reasoning theory, which we discuss next.³
Motivated reasoning theory predicts that directional prefer-
ences will affect how people attend to and process information
(Kunda, 1990). In an accounting setting, Hales (2007) shows that
investors’ forecasts of earnings are affected by the investment po-
sition they hold and by whether they are facing the prospect of a
gain or loss on those investments. Specifically, investors’ forecasts
of earnings are biased in a direction consistent with their direc-
tional preferences, even if they are only provided with an incentive
to be accurate. Building on Hales (2007), Thayer (2011) shows that
investors seek additional information consistent with their desired
conclusions about an investment.
As discussed at the beginning of this section, theory suggests
that preparing a disaggregated forecast will make a manager more
likely to attend to more detailed information to forecast the indi-
vidual components, leading to more accurate forecasts in the ab-
sence of performance-based incentives. However, when managers
are provided with explicit incentives that are tied to their perfor-
mance on the forecasted measure, preparing disaggregated fore-
casts is less likely to lead to higher forecast accuracy than prepar-
ing aggregated forecasts for two reasons.
First, in the presence of performance-based incentives, a man-
ager making a judgment about future performance has a prefer-
ence for positive performance. Disaggregation will cause the man-
ager to attend to more detailed information about how a favor-
able outcome is likely to be achieved, and hence, allow more op-
portunity for the manager to interpret information in the direction
consistent with his or her preferences (Kunda, 1990). Thus, when
performance-based incentives are present, the potential benefit of
considering more information might be partially or completely off-
set by the negative effect of attending to more preference-consistent
information (Kunda, 1990). Motivated reasoning theory suggests
that individuals will not consider a balanced set of reasons for a
given outcome when making a judgment (Ditto & Lopez, 1992).
Second, the presence of performance-based incentives is likely
to lead to an overall optimistic bias in component forecasts, i.e.,
systematic overestimation errors. Systematic overestimation errors
in component forecasts will not cancel each other out when the
component forecasts are combined into an overall forecast. There-
fore, we expect that in the presence of performance-based incen-
tives, preparing disaggregated forecasts will not lead to greater
forecast accuracy.
Combined, our theory suggests an interaction between fore-
cast type and performance-based incentives on forecast accu-
racy. Specifically, we expect that preparing disaggregated fore-
casts rather than aggregated forecasts increases forecast accuracy
in the absence of performance-based incentives, but not necessar-
ily in the presence of performance-based incentives. This discus-
sion leads to our first hypothesis on forecast accuracy:
³ Theory on optimism alone does not predict that the magnitude of optimism
should vary by forecast type. However, motivated reasoning theory, which more ex-
plicitly incorporates biased processing, does help to make that prediction.
H1. Preparing disaggregated forecasts leads to greater improve-
ments in forecast accuracy (compared to preparing aggregated
forecasts) in the absence of performance-based incentives than in
the presence of performance-based incentives.
2.4. Performance-based incentives and forecast optimism
The theory that we have outlined suggests that disaggregated
forecasts may exhibit greater forecast optimism than aggregated
forecasts when performance-based incentives are present com-
pared to when performance-based incentives are absent. Even
though managers have the motivation to produce optimistic fore-
casts when they prepare an aggregated forecast in the presence
of performance-based incentives, they have less opportunity to in-
ject optimistic bias into their forecasts. Thus, providing disaggre-
gated forecasts should lead to greater forecast optimism than pro-
viding aggregated forecasts in the presence of performance-based
incentives.
In the absence of performance-based incentives, however, man-
agers have weaker motivation to make optimistically biased fore-
casts regardless of whether they prepare aggregated or disaggre-
gated forecasts. Thus, we expect a smaller difference in forecast
optimism between disaggregated forecasts and aggregated fore-
casts in the absence of performance-based incentives than in the
presence of performance-based incentives. Our second hypothesis
is therefore:
H2. Preparing disaggregated forecasts leads to a greater increase in
forecast optimism (compared to preparing aggregated forecasts) in
the presence of performance-based incentives than in the absence
of performance-based incentives.
3. Method
3.1. Participants
We recruit ninety-two undergraduate business students from a
large public university as participants. In the experiment, partic-
ipants complete a knowledge test with questions from four cat-
egories (English, math, grammar and logic) and make associated
forecasts of their performance. Because we examine a fundamental
psychological bias rather than reactions to rich, institutional fea-
tures, we believe students have sufficient knowledge for the task
and can be used as participants (Libby, Bloomfield, & Nelson, 2002;
Libby & Rennekamp, 2012). Further, undergraduates take knowl-
edge tests (either the SAT or ACT) before entering the university
that are similar to those we use in our study and have the ability
to understand the incentives associated with our forecasting task.
3.2. Research design
To test our hypotheses, we use a 2 (Forecast Type) × 2
(Performance-Based Incentives) between-subjects experimental de-
sign. We manipulate forecast type at two levels: disaggregated
forecast versus aggregated forecast. Participants complete a first
round of the question-based task to get a sense of their skill in the
four topic categories. In the disaggregated forecast condition, par-
ticipants provide a separate forecast of their performance in each
of the four categories for the second round. In the aggregated fore-
cast condition, participants provide a forecast of their overall per-
formance (or total score) in the second round.
We also manipulate performance-based incentives at two lev-
els: absent or present. Following prior literature (Hales, 2007), we
provide subjects with an incentive to make accurate forecasts in
both conditions to reduce noise in the results. Specifically, in the
condition with performance-based incentives, a participant’s pay is
based on two components: the participant’s actual performance on
the task and the accuracy of the participant’s forecast. The formula
is as follows:
Total pay = £4 × Number of questions answered correctly + (£20 − £2
× absolute value of the difference between forecast and actual performance)
For the performance-based component, participants receive £4.00
in experimental currency for each question answered correctly, up
to a total of £112.00 if the participant answers all 28 questions
correctly.⁴ For the forecast accuracy component, participants re-
ceive a bonus of £20.00 for a completely accurate forecast.
The bonus is reduced by £2.00 per question if the forecast devi-
ates from the actual performance and drops to zero if the forecast
either over- or underestimates actual performance by ten or more
questions. The participant is always better off answering as many
questions correctly as he/she can, regardless of the forecast he/she
provides because the participant earns £4.00 for every correct an-
swer but loses only £2.00 of the forecast bonus for each question
by which the actual performance differs from the forecast.⁵ Thus,
for a given forecast, the participant will always receive higher com-
pensation by performing to the best of his/her ability rather than
by withholding effort after meeting his/her forecast. In the condi-
tion without a performance-based incentive, pay is based only on
the accuracy of the participant’s forecast.
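The pay scheme above can be sketched as follows; `total_pay` is our own illustrative name, and the `max(0, …)` floor encodes the stated rule that the bonus drops to zero once the forecast misses actual performance by ten or more questions:

```python
def total_pay(correct: int, forecast: int, performance_pay: bool) -> int:
    """Pay in experimental currency (£) for one participant."""
    # Accuracy bonus: £20, minus £2 per question of forecast error, floored at £0.
    bonus = max(0, 20 - 2 * abs(forecast - correct))
    # Performance component: £4 per correct answer (28 questions, so at most
    # £112), paid only in the performance-based incentives condition.
    performance = 4 * correct if performance_pay else 0
    return performance + bonus

# Footnote 5's worked example (forecast of 16 correct answers):
# actual 15 -> £78, actual 16 -> £84, actual 20 -> £92.
```

Because a correct answer earns £4 while each question of forecast error costs only £2 of bonus, answering more questions correctly always weakly increases pay for any fixed forecast.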
Immediately after receiving information on how they will be
paid, participants are asked to answer a manipulation check ques-
tion on the same page to ensure that they understand the incen-
tive scheme to which they are assigned. Specifically, we ask partic-
ipants to indicate the components of their compensation by choos-
ing between two options: (1) my compensation will increase if my
forecast of my performance on the round of questions I answer
is more accurate; and (2) my compensation will increase if I per-
form better on the round of questions I answer. Results reveal that
100% of participants correctly indicate whether their compensation
is based on forecast accuracy only or on both forecast accuracy and
actual performance in the second round of questions.
Participants are informed that once all participants have com-
pleted the task, their earnings in £ will be converted to real U.S.
dollars at a positive but unspecified rate and that they are always better off trying to earn more £ in the study, since that translates to greater earnings in U.S. dollars. Participants are informed
before the start of the experiment to expect payments approxi-
mately two weeks after all sessions are conducted. In addition,
each participant receives a $5 show-up fee. On average, partici-
pants receive $20 in U.S. currency for their participation across all
conditions.
[4] All currency amounts described here are denoted in experimental laboratory currency unless stated otherwise. Laboratory earnings are converted to U.S. dollar earnings upon completion of the experiment. Participants do not know in advance the exchange rate between the two currencies, but do know that earning more laboratory currency will always translate to higher U.S. dollar-denominated earnings.

[5] For example, if a participant forecasts that she can get 16 of the 28 questions correct, she will get £78 in laboratory currency (£4 × 15 + (£20 − £2 × 1)) if she ends up answering 15 questions correctly, £84 (£4 × 16 + £20) if she answers 16 questions correctly, and £92 (£4 × 20 + (£20 − £2 × 4)) if she answers 20 questions correctly.

3.3. Task and experimental procedures

We randomly assign each laboratory session to either the presence or absence of the explicit performance-based incentives treatment to ensure participants are not aware of our manipulations. Upon arrival at the experiment, participants are randomly assigned to one of the two forecast type conditions. We ask participants to read and sign the informed consent form before they start the task. In the experiment, participants complete two rounds of a mini SAT-type test. We use an initial round of SAT-type questions to familiarize participants with the task and allow them to form expectations about their future performance. We intentionally choose relatively difficult questions for our task in order to increase variation in participants' forecasting performance. This allows us to better detect the effects of our independent variables on our dependent measures. To keep the total time required for the task to a minimum, the first round contains two questions from each of the four categories, while the second round contains 28 questions, seven from each of the four categories.
After the first round, participants receive feedback on their performance. Before the second round begins, participants make a private forecast of their second-round performance. Participants then
answer the second round of questions. Participants in the disag-
gregated forecast condition are asked to provide forecasts of their
scores for each of the categories of SAT-type questions (English,
math, grammar and logic). Participants in the aggregated forecast
condition are asked to provide a forecast of their total score on
the test. After participants complete the second round of questions,
they answer a post-experimental questionnaire, which includes debriefing and demographic questions.
3.4. Dependent and control variables
Forecasts of the four components in the disaggregated fore-
cast condition are summed to form total score forecasts, which are
compared to the total scores forecasted in the aggregated forecast
condition. We examine two aspects of forecast quality: accuracy
and optimistic bias. Following prior literature (e.g., Duru & Reeb,
2002; Goodman et al., 2014; Henrion et al., 1993; Mikhail, Walther,
& Willis, 1999), we capture forecast accuracy with the absolute
forecast error, i.e., the absolute value of the difference between
the forecast and the performance. A smaller absolute forecast er-
ror indicates greater forecast accuracy. To facilitate interpretation,
we transform this reverse measure of forecast accuracy by calcu-
lating the difference between the maximum performance, 28, and
the absolute forecast error and use this as our measure of forecast
accuracy. A larger number for this transformed measure indicates
greater forecast accuracy. We measure the optimistic bias in a fore-
cast as the signed forecast error, i.e., the signed difference between
the forecast and the actual performance. A larger positive forecast
error indicates a higher level of optimism.
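The two dependent measures can be written down directly. The following is a minimal sketch in our own notation; the variable and function names are ours, not the authors'.

```python
MAX_SCORE = 28  # number of questions in the second round

def total_forecast(component_forecasts: list[int]) -> int:
    """Disaggregated condition: component forecasts are summed into a
    total-score forecast before comparison with the aggregated condition."""
    return sum(component_forecasts)

def forecast_optimism(forecast: int, actual: int) -> int:
    """Signed forecast error; larger positive values indicate greater optimism."""
    return forecast - actual

def forecast_accuracy(forecast: int, actual: int) -> int:
    """Transformed accuracy measure: maximum performance (28) minus the absolute
    forecast error, so larger values indicate a more accurate forecast."""
    return MAX_SCORE - abs(forecast - actual)
```

Over- and underestimation by the same amount yield identical accuracy but opposite-signed optimism, which is exactly the distinction between the two measures.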
We control for participants’ performance because prior research
has shown that although skillful individuals often overestimate
their performance relative to others, they also underestimate their
own absolute performance (Klayman, Soll, González-Vallejo, & Bar-
las, 1999; Krueger & Mueller, 2002; Kruger & Dunning, 1999; Lar-
rick, Burson, & Soll, 2007). Therefore, we expect a negative re-
lationship between participants’ performance in the actual round
and their forecast optimism. We control for participants’ perfor-
mance in the ?rst round because higher performance in the ?rst
round will lead to higher expected performance in the actual
round, which is likely to lead to higher forecast errors.
Table 1
Descriptive statistics.

Panel A: Mean (standard deviation) for performance
                                            Aggregated forecast      Disaggregated forecast
Absence of Performance-Based Incentives     n = 30   14.40 (5.88)    n = 18   16.39 (2.91)
Presence of Performance-Based Incentives    n = 18   17.89 (2.93)    n = 26   17.31 (2.28)

Panel B: Mean (standard deviation) for forecasts
                                            Aggregated forecast      Disaggregated forecast
Absence of Performance-Based Incentives     n = 30   14.93 (8.03)    n = 18   17.78 (3.78)
Presence of Performance-Based Incentives    n = 18   18.72 (3.61)    n = 26   20.15 (4.18)

The table presents descriptive statistics for participants' performance and forecasts for the four experimental conditions.
We manipulate Performance-Based Incentives at two levels: in the absence of performance-based incentives condition, participants are compensated only for the accuracy of their performance forecasts; in the presence of performance-based incentives condition, participants are compensated for both their performance in the second round of SAT-type questions and the accuracy of their performance forecasts.
We manipulate Forecast Type at two levels: in the aggregated forecast condition, participants provide a holistic forecast for their performance in the second round of SAT-type questions; in the disaggregated forecast condition, participants provide a separate forecast for their performance in each of the four components of the second round of SAT-type questions.
Performance = Participants' actual performance (number of correct answers) in the second round of SAT-type questions.
Forecast = Forecast of total score in the second round of SAT-type questions in the aggregated forecast condition; sum of forecasts of the four components in the second round of SAT-type questions in the disaggregated forecast condition.
4. Results
Table 1 provides descriptive statistics of average performance and average forecasts of total score for the four conditions.[6,7] Consistent with performance-based incentives increasing effort and performance, we observe that participants' performance in the second round is significantly higher in the presence of performance-based incentives than in the absence of performance-based incentives (17.55 versus 15.15, p < 0.01, two-tailed).
4.1. Test of H1: Forecast type, performance-based incentives, and
forecast accuracy
H1 predicts that disaggregated forecasts will lead to greater im-
provement in forecast accuracy in the absence of performance-
based incentives than in the presence of performance-based incen-
tives, compared to aggregated forecasts. Again, we measure fore-
cast accuracy by calculating the difference between the maximum
possible score of 28 and the absolute forecast error, where a larger
difference corresponds to greater forecast accuracy. We test this
interaction using contrast coding as well as follow-up simple ef-
fects tests using an analysis of covariance (ANCOVA) (Buckless &
Ravenscroft, 1990). We include participants' performances in the first and second rounds of questions as covariates to control for variation in the data that is not the focus of our study. We control for participants' performance in the first round because higher performance in the first round will lead to higher expected performance in the second round, which, in turn, will lead to higher forecast error holding actual performance constant. Consistent with this conjecture, a regression analysis shows that forecast errors are positively associated with actual performance in the first round. We also control for participants' performance in the second round to account for the negative correlation between performance and forecast optimism documented in prior literature (Klayman et al., 1999; Krueger & Mueller, 2002; Kruger & Dunning, 1999; Larrick et al., 2007). Consistent with prior research, a regression analysis shows that forecast optimism is negatively associated with actual performance in the second round.

[6] The difference in cell sizes is due to the number of participants who showed up for a given session and imperfect randomization of the online survey software.

[7] Of the ninety-two participants, four participants in the aggregated forecast condition in the absence of performance-based incentives forecasted zero, indicating an intention to game the incentive system. Excluding the four participants who made forecasts of zero strengthens the results. Among the other eighty-eight participants, there is no significant difference in performance between the disaggregated and aggregated forecast types when performance-based incentives are absent (p = 0.57) or present (p = 0.55), indicating that other participants did not engage in similar gaming behavior.
Based on our first hypothesis, we use contrast weights of +3 in the absence of performance-based incentives/disaggregated forecast condition and −1 in each of the other three conditions. The results presented in Panel C of Table 2 show that the planned contrast is marginally significant, supporting our hypothesis (p = 0.07, one-tailed).[8] The follow-up simple contrasts confirm the interaction between performance-based incentives and forecast type on forecast accuracy. Specifically, when there are no performance-based incentives, preparing a disaggregated forecast leads to significantly greater forecast accuracy (p = 0.02, one-tailed). By contrast, when participants have performance-based incentives, there is no significant difference in forecast accuracy between the disaggregated and the aggregated forecast types (p = 0.42, two-tailed).
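For readers unfamiliar with contrast coding, a one-degree-of-freedom planned contrast can be sketched as follows. This is our own simplified illustration: it runs the contrast as a one-way test on simulated data and omits the covariates included in the paper's ANCOVA, so the numbers it produces are not the paper's.

```python
import numpy as np
from scipy import stats

def planned_contrast(groups, weights):
    """One-degree-of-freedom planned contrast across independent cells:
    t = sum(w_i * mean_i) / sqrt(MSE * sum(w_i**2 / n_i)), df = N - k,
    where MSE is the pooled within-cell mean square."""
    means = np.array([np.mean(g) for g in groups])
    ns = np.array([len(g) for g in groups])
    k, N = len(groups), int(ns.sum())
    mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / (N - k)
    w = np.asarray(weights, dtype=float)
    t = (w @ means) / np.sqrt(mse * np.sum(w**2 / ns))
    return t, stats.t.sf(t, df=N - k)  # one-tailed p for a directional prediction

# Illustrative simulated cells, ordered (absent/aggregated, absent/disaggregated,
# present/aggregated, present/disaggregated); the data here are made up.
rng = np.random.default_rng(0)
cells = [rng.normal(m, 3.0, n)
         for m, n in [(23.2, 30), (25.5, 18), (24.9, 18), (24.2, 26)]]
t_stat, p_one_tailed = planned_contrast(cells, [-1, 3, -1, -1])
```

The weight vector (−1, +3, −1, −1) sums to zero and isolates the prediction that one cell exceeds the average of the other three.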
4.2. Test of H2: Forecast type, performance-based incentives, and
forecast optimism
H2 predicts an interaction between forecast type and performance-based incentives such that preparing disaggregated forecasts (compared to aggregated forecasts) leads to a greater increase in forecast optimism in the presence of performance-based incentives than in the absence of performance-based incentives. To test H2, we estimate an analysis of covariance (ANCOVA) using signed forecast errors as the dependent variable and performance-based incentives and forecast type as the independent variables. A more positive signed forecast error indicates a higher level of forecast optimism. We also test this interaction using contrast coding as well as follow-up simple effects tests where, based on our second hypothesis, we use contrast weights of +3 in the presence of performance-based incentives/disaggregated forecast condition, +1 in the presence of performance-based incentives/aggregated forecast condition, and −2 in both the absence of performance-based incentives/aggregated forecast and the absence of performance-based incentives/disaggregated forecast conditions.[9]
[8] The planned contrast is statistically significant (p = 0.03, one-tailed) when we exclude the four participants who made forecasts of zero.

[9] Since the simple effect of performance-based incentives in the aggregated forecast condition is insignificant, we verify that our results are robust to an alternative allocation of contrast weights (specifically, +3 in the presence of performance-based incentives/disaggregated forecast condition and −1 in the other three conditions). This set of contrast weights is more restrictive in that it does not allow for a simple effect of performance-based incentives in the aggregated forecast condition.
Table 2
The effects of forecast type and performance-based incentives on forecast accuracy.

Panel A: Mean (standard deviation) for forecast accuracy
                                            Aggregated forecast      Disaggregated forecast
Absence of Performance-Based Incentives     n = 30   23.20 (4.26)    n = 18   25.50 (1.54)
Presence of Performance-Based Incentives    n = 18   24.94 (2.01)    n = 26   24.15 (3.20)

Panel B: ANCOVA model of forecast accuracy
Source                                           df   Mean square   F      p-value
Performance-Based Incentives                     1    0.68          0.07   0.80
Forecast Type                                    1    8.68          0.87   0.35
Performance-Based Incentives × Forecast Type     1    42.36         4.23   0.02
Trial Performance                                1    30.49         3.05   0.08
Performance                                      1    0.10          0.01   0.92
Error                                            82   10.00

Panel C: Planned contrast coding and follow-up simple effect tests
Source                                                                           df   F      p-value
Overall test: Preparing disaggregated forecasts (compared to preparing
aggregated forecasts) leads to a greater improvement in forecast accuracy
in the absence of performance-based incentives than in the presence of
performance-based incentives. Contrast weights (−1, 3, −1, −1)                   1    2.19   0.07
Follow-up simple effect tests:
Effect of forecast type in the absence of performance-based incentives           1    4.51   0.02
Effect of forecast type in the presence of performance-based incentives          1    0.64   0.42
Effect of performance-based incentives in the disaggregated forecast conditions  1    1.59   0.21
Effect of performance-based incentives in the aggregated forecast conditions     1    2.61   0.11

The table presents descriptive statistics, the ANCOVA model, and simple contrasts for forecast accuracy for the four treatments. See Table 1 for descriptions of the manipulations of performance-based incentives and forecast type. Reported p-values are two-tailed unless testing a one-tailed prediction, as signified by bold face.
Forecast Accuracy = The difference between the maximum possible score of 28 and the absolute difference between forecast and performance in the second round of SAT-type questions, where forecast is the forecast of total score in the aggregated forecast conditions and the sum of component forecasts in the disaggregated forecast conditions. Higher measures indicate greater forecast accuracy.
Trial Performance = Participants' performance in the first round of SAT-type questions.
Performance = Participants' actual performance in the second round of SAT-type questions.
The results presented in Panel C of Table 3 show that the planned contrast is statistically significant, supporting H2 (p = 0.03, one-tailed).[10] The follow-up simple contrasts (Table 3, Panel C) confirm the ordinal interaction between performance-based incentives and forecast type on forecast optimism. Specifically, in the absence of performance-based incentives, there is no significant difference in forecast optimism between disaggregated and aggregated forecasts (p = 0.63, two-tailed). By contrast, when participants receive performance-based incentives, forecast optimism is higher in the disaggregated forecast condition than in the aggregated forecast condition (p = 0.08, one-tailed).[11]
Overall, our results are consistent with H2. When participants’
incentives are not tied to performance, producing disaggregated
forecasts does not lead to more optimistically biased forecasts.
However, when participants’ incentives are tied to performance,
they have a preference for favorable performance. As a result, pro-
ducing disaggregated forecasts gives participants both the moti-
vation and the opportunity to engage in biased processing of in-
formation and interpret it in a way that is consistent with their
preferences, which leads to significantly more optimistically biased forecasts compared to producing aggregated forecasts.
[10] The planned contrast is also statistically significant (p = 0.02, one-tailed) when we exclude the four participants who made forecasts of zero.

[11] These results are stronger when we exclude the four participants who made forecasts of zero: when participants receive performance-based incentives, forecast optimism is higher in the disaggregated forecast condition than in the aggregated forecast condition (p = 0.07, one-tailed).
4.3. Supplemental analyses
In this section we conduct additional analyses to support the
theoretical arguments underlying our hypotheses.
4.3.1. Effect of disaggregation on forecast accuracy in the absence of
performance-based incentives
To develop H1 we rely on arguments suggesting that in the
absence of performance-based incentives the disaggregated fore-
cast type results in: (1) greater attention to information for each
component forecast and (2) random errors in the component fore-
casts that cancel each other out in the top-level forecast. These
two effects should lead to greater forecast accuracy in the disag-
gregated forecasts than in the aggregated forecasts in the absence
of performance-based incentives.
The first argument implies that the forecast for each component is more precise and better calibrated under the disaggregated forecast type than under the aggregated forecast type (e.g., Henrion et al., 1993). To test this, we compare the standard deviation of the absolute forecast error in the absence of performance-based incentives/aggregated forecast condition with the standard deviation of the sum of the absolute forecast errors of the component forecasts in the disaggregated forecast condition. Since we expect disaggregated forecasts to be better calibrated than aggregated forecasts, we expect the standard deviation to be lower in the condition where participants prepare disaggregated forecasts. As shown in Table 4, Panel B, a Levene's test of equal variances confirms this conjecture (2.12 vs. 4.26, p = 0.03, one-tailed) (Levene, 1960).
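A variance comparison of this kind can be run with scipy's implementation of Levene's test. This is our own sketch on made-up data; note that scipy's default uses the median-centered (Brown–Forsythe) variant and reports a two-tailed p-value, so it will not reproduce the paper's one-tailed figure exactly.

```python
import numpy as np
from scipy.stats import levene

# Made-up absolute forecast errors for the two cells (not the experiment's data).
rng = np.random.default_rng(1)
aggregated_errors = np.abs(rng.normal(0.0, 6.0, 30))     # wider spread expected
disaggregated_errors = np.abs(rng.normal(0.0, 2.0, 18))  # narrower spread expected

# Test of equal variances (Levene, 1960); a small p-value rejects equal spread.
stat, p_two_tailed = levene(aggregated_errors, disaggregated_errors)
```

A one-tailed p-value for the directional prediction (lower variance under disaggregation) would halve `p_two_tailed` when the sample standard deviations fall in the predicted order.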
The second argument implies that the sum of the absolute fore-
cast errors of the four component forecasts in the disaggregated
forecast condition is not necessarily lower than the absolute fore-
cast error in the aggregated forecast condition. This is because the
greater accuracy for the disaggregated condition is partially driven
Table 3
The effects of forecast type and performance-based incentives on forecast optimism.

Panel A: Mean (standard deviation) for forecast optimism
                                            Aggregated forecast     Disaggregated forecast
Absence of Performance-Based Incentives     n = 30   0.53 (6.45)    n = 18   1.39 (2.64)
Presence of Performance-Based Incentives    n = 18   0.83 (3.63)    n = 26   2.85 (4.14)

Panel B: ANCOVA model of forecast optimism
Source                                           df   Mean square   F       p-value
Performance-Based Incentives                     1    42.81         2.29    0.07
Forecast Type                                    1    33.98         1.82    0.18
Performance-Based Incentives × Forecast Type     1    8.29          0.44    0.25
Trial Performance                                1    302.70        16.18
Accounting, Organizations and Society 46 (2015) 8–18
http://dx.doi.org/10.1016/j.aos.2015.03.002

The effects of forecast type and performance-based incentives on the quality of management forecasts

Clara Xiaoling Chen (University of Illinois at Urbana-Champaign, United States), Kristina M. Rennekamp (Cornell University, United States), Flora H. Zhou (Georgia State University, United States)
Article history: Received 21 May 2014; Revised 13 March 2015; Accepted 14 March 2015; Available online 1 April 2015.

Keywords: forecast type; management forecasts; management estimates; disaggregated forecasts; unintentional optimism; performance-based incentives
Abstract

Understanding forecasts is important because of their pervasiveness in business decisions such as budgeting, production, and financial reporting. In this study we use an abstract experiment to examine how the preparation of disaggregated forecasts interacts with performance-based incentives to influence the accuracy and optimism of forecasts. We manipulate two factors between subjects at two levels each: forecast type (disaggregated or aggregated) and performance-based incentives (present or absent). Consistent with our predictions, we find that (1) preparing disaggregated forecasts leads to greater improvements in forecast accuracy (compared to preparing aggregated forecasts) in the absence of performance-based incentives than in the presence of performance-based incentives, and (2) preparing disaggregated forecasts leads to greater increases in forecast optimism (compared to preparing aggregated forecasts) in the presence of performance-based incentives than in the absence of performance-based incentives. Our study contributes to our understanding of unintentional biases in the forecasting process. Our results have important practical implications for designers of management control systems who elicit internal forecasts from managers, as well as for those who either prepare or use external management forecasts.

© 2015 Elsevier Ltd. All rights reserved.
1. Introduction
Understanding forecasts is important because of their perva-
siveness in business decisions such as budgeting, compensation,
and financial reporting. Inaccurate forecasts can reduce the effectiveness of the production planning process and negatively impact production efficiency, cost management, and ultimately firm performance (e.g., Bruggen, Grabner, & Sedatole, 2013). To increase the
chance of obtaining accurate forecasts from an agent, a principal
needs to be careful in designing the management control system
that elicits such forecasts from the agent (e.g., Osband, 1989).
[*] Corresponding author at: Department of Accountancy, University of Illinois at Urbana-Champaign, 389 Wohlers Hall, 1206 South Sixth Street, Champaign, IL 61820, United States. Tel.: +1 (217) 244 3953. E-mail addresses: [email protected] (C.X. Chen), [email protected] (K.M. Rennekamp), [email protected] (F.H. Zhou).

One such control system that is commonly used is the planning and budgeting system of a firm (Merchant & Van der Stede, 2012). Within the planning and budgeting system, an important design choice is the level of aggregation at which the principal elicits forecasts from the agent. In practice, firms vary considerably in the level of aggregation of the information elicited by the planning and budgeting system (Merchant & Van der Stede, 2012).
For example, top management can request that divisional man-
agers prepare either an aggregated forecast (e.g., forecast total sales
for the division) or a disaggregated forecast (e.g., forecast sales
for individual products within the division) (see Kahn, 1998 and
Lapide, 2006). Although managers are likely to prepare both dis-
aggregated and aggregated forecasts for internal decision-making
purposes, the level of forecast aggregation required by the budget-
ing system will determine which forecast is more salient to them.
Further, research on the anchoring and adjustment bias suggests
that managers likely anchor on the numbers in the forecast that
are most salient to them (Bromiley, 1987; Tversky & Kahneman,
1974). Therefore, the level of aggregation at which the principal
elicits forecasts from the agent should affect managers’ forecasts
even when both types of forecasts are prepared.
Although economic theory suggests that a rational agent will
provide the same forecast of a summary performance measure regardless of the level of forecast aggregation (or forecast type), psychology theory suggests that forecast type will influence the quality of the agent's forecasts, where forecast quality can refer to both the accuracy and optimism (or bias) in a forecast. We investigate
how a control system design choice—forecast type—interacts with
incentives to affect two dependent measures of forecast quality:
forecast accuracy and forecast optimism. Forecast accuracy refers to
the degree of closeness between a forecast and the actual out-
come. Forecast optimism refers to consistent differences between
forecasts and actual outcomes; that is, the extent to which fore-
casts exhibit a general tendency to be too high relative to actual
outcomes. Specifically, we examine how forecast type affects forecast accuracy and forecast optimism in the presence or absence of
explicit performance-based incentives that are tied to the measure
being forecasted.
Drawing on psychology, forecasting, and accounting literatures
on forecasts, we generate the following predictions for forecast ac-
curacy and forecast optimism, respectively. First, we predict that
preparing disaggregated forecasts leads to greater improvements
in forecast accuracy (compared to preparing aggregated forecasts)
in the absence of performance-based incentives than in the pres-
ence of performance-based incentives. When performance-based
incentives are absent, disaggregated forecasts involve more care-
ful and objective consideration of forecast components, which
should improve forecast accuracy compared to preparing aggre-
gated forecasts. Second, we predict that preparing disaggregated
forecasts leads to greater increases in forecast optimism (compared
to preparing aggregated forecasts) in the presence of performance-
based incentives than in the absence of performance-based incentives. When managers produce disaggregated forecasts and have explicit incentives to achieve favorable performance on the forecasted measure, they have both the motivation and opportunity to
produce optimistic forecasts. In other words, while the preparation
of disaggregated forecasts involves more complete consideration
of information, theory suggests that individuals with performance-
based incentives are likely to consider that additional information
in a biased way that helps them reach their desired conclusions
(Hales, 2007).
To test our predictions we conduct an abstract laboratory exper-
iment where participants complete a knowledge task with ques-
tions from four different categories (e.g., English, math, grammar,
and logic) and prepare forecasts of their performance. Participants
complete two rounds of the task. After the initial round, partic-
ipants receive feedback on their performance. Before the second
round begins, participants provide forecasts of their second-round
performance. Participants then answer the second round of ques-
tions and learn their actual performance.
We use an abstract task in our study for two reasons. First,
we are interested in examining a fundamental psychological bias
rather than reactions to rich, institutional features. An abstract
knowledge test allows us to test the fundamental processes that af-
fect the characteristics of our two types of forecasts while avoiding
noise in participants’ responses that could arise from asking them
to do an unfamiliar task like forecasting revenues and expenses.
Second, using a task with rich institutional features could intro-
duce other incentives that may lead to intentional biases in the
forecasts. For example, in an internal budgeting setting, managers
may intentionally provide lower forecasts to increase the proba-
bility of achieving targets or intentionally provide higher forecasts
to increase resource allocations (Fisher, Maines, Peffer, & Sprinkle,
2002). Using an abstract task removes the institutional features
that might drive managers to intentionally produce biased fore-
casts, allowing us to isolate the effects of unintentional bias.
We manipulate two factors between subjects at two levels each. First, to manipulate forecast type, participants in the disaggregated forecast condition forecast their scores in all four categories of the test (e.g., English, math, grammar and logic), while participants in the aggregated forecast condition forecast their total score.[1] Second, we manipulate whether explicit performance-based incentives are present or absent.[2] We hold average participant compensation constant across the two incentive conditions. We examine two dependent variables: (1) forecast accuracy, where overestimation of scores is treated as equivalent to underestimation of scores; and (2) forecast optimism, which captures the systematic tendency to overestimate scores.
Consistent with our predictions, we ?nd that: (1) preparing
disaggregated forecasts leads to greater improvements in forecast
accuracy (compared to preparing aggregated forecasts) in the ab-
sence of performance-based incentives than in the presence of
performance-based incentives; and (2) preparing disaggregated
forecasts leads to greater increases in forecast optimism (compared
to preparing aggregated forecasts) in the presence of performance-
based incentives than in the absence of performance-based incen-
tives. Given that participants’ pay would be higher in the absence
of the forecast error and forecast optimism described above, our
results show that participants' judgments conflict with their financial incentives and therefore suggest that the biases we observe are unintentional.
Our study contributes to our understanding of unintentional bi-
ases in the forecasting process. Since unintentional biases may be
more difficult to discipline than intentional, incentive-driven biases, our study provides insights that are likely useful to both preparers and users of forecasts. First, our results contribute to the
budgeting literature. Prior budgeting literature focuses heavily on
the opportunistic behavior of agents in the budgeting process and
the effectiveness of truth-inducing incentives (e.g., Chow, Cooper,
& Waller, 1988; Church, Hannan, & Kuang, 2012; Shields & Young,
1993; Waller, 1988; Webb, 2002; Young, 1985). However, unintentional biases such as those documented in our paper are more difficult to mitigate. Specifically, our results show that a control system design choice that has so far been largely overlooked in management accounting research—the level of forecasts elicited—can
have unintended consequences for potential bias and accuracy in
management forecasts.
Second, by highlighting the potential effect of an internal
planning and budgeting system design choice (i.e., forecast type)
on externally reported management forecasts, our study comple-
ments the accounting literature on management forecasts as well
as an emerging literature that examines the link between ex-
ternal disclosures and internal decision-making (e.g., Goodman,
Neamtiu, Shroff, & White, 2014; Hemmer & Labro, 2008; McNi-
chols & Stubben, 2008). Prior research on management forecasts
has shown that disaggregated forecasts increase the market’s per-
ception of the informational value and credibility of management
forecasts (Hirst, Koonce, & Venkataraman, 2007; Hutton, Miller, &
Skinner, 2003; Lansford, Lev, & Tucker, 2013), reduce investors’ fixation on announced earnings (Elliott, Hobson, & Jackson, 2011),
and decrease auditors’ tolerance for misstatement (Libby & Brown,
2013). Our study differs from these prior studies by: (1) taking
the perspective of the preparer, rather than the users, of manage-
ment forecasts; and (2) by focusing on the actual, rather than per-
ceived, quality of disaggregated forecasts. Despite the documented
perceived benefits of disaggregated forecasts, our results suggest that the actual quality of disaggregated management forecasts may depend on the incentives that managers face.

¹ Although we manipulate the level of disaggregation at two levels in our experiment, the level of disaggregation can vary in degrees in practice. We expect the directional effects we document in our study to hold with varying levels of disaggregation.

² We manipulate incentives at two levels in our experiment, but the absence versus presence of performance-based incentives conditions can also map onto low-powered versus high-powered incentives in the real world.
Finally, our study also adds to the forecasting literature by high-
lighting unintentional behavioral biases in the forecasting process.
The literature on forecasting has largely ignored behavioral expla-
nations for unintentional optimism. Our study also answers the call
in the forecasting literature for more research that sheds light on
the circumstances under which disaggregated forecasts are more
likely to improve on aggregated forecasts (e.g., Henrion, Fischer, &
Mullin, 1993; Kremer, Siemsen, & Thomas, 2012).
2. Theory and hypotheses
Prior research suggests that decomposition, or disaggregation,
is a useful technique for reducing information processing demands
on the estimator, which may lead to more complete considera-
tion of available information and, ultimately, more accurate esti-
mates (Raiffa, 1968; Ravinder, Kleinmuntz, & Dyer, 1988). However,
prior research also suggests that disaggregation and more com-
plete consideration of information are not necessarily always beneficial to judgment quality (e.g., Henrion et al., 1993). We argue
that while disaggregation can improve the accuracy of estimates in
the absence of performance-based incentives, it can give forecast-
ers greater opportunity to inject optimistic bias into their forecasts
in the presence of explicit performance-based incentives.
2.1. Forecast type and consideration of available information
We first consider how processing of information differs be-
tween preparing disaggregated forecasts and preparing aggregated
forecasts, regardless of the incentives that a manager faces. Prior
research suggests that preparing disaggregated forecasts can re-
duce information processing load for the forecaster, which may
lead to more careful consideration of all available information than
preparing aggregated forecasts (Henrion et al., 1993; Raiffa, 1968;
Ravinder et al., 1988). This occurs because forecasting a number
holistically is often considered a more complex task than decom-
posing the forecasting problem into multiple components first and
then combining the components into an aggregated forecast (e.g.,
Henrion et al., 1993; Ravinder et al., 1988). As task complexity in-
creases, individuals are more likely to choose strategies that lower
total cognitive costs (Bonner, 2008). When individuals use these
strategies, they do not search for all relevant information in mak-
ing decisions and, as a consequence, decision quality is often re-
duced. In a management forecast setting, making an aggregated
forecast requires the manager to attend to all relevant informa-
tion at once, which can be mentally taxing. To make the forecast-
ing task more manageable, the manager may choose to attend to
the most salient pieces of information and ignore or underweight
less important information. Consistent with this argument, in an
audit setting, Zimbelman (1997) shows that auditors’ attention to
fraud-risk factors is higher when they separately assess the risk of
intentional and unintentional misstatement than when they assess
only the overall risk of misstatement.
Related work on support theory also suggests that disaggrega-
tion leads to more careful consideration of individual components
of a given set of information. Research on support theory ?nds that
unpacking an event into two or more of its components helps re-
spondents recall more evidence from memory and/or makes ex-
isting evidence more salient, such that the rated likelihood of the
event occurring increases (Tversky & Koehler, 1994). Support the-
ory has primarily focused on the assessments of probability or fre-
quency of alternative hypotheses, but the cognitive mechanism un-
derlying the unpacking phenomenon is quite general. Van Boven
and Epley (2003) confirm that unpacking also influences evalua-
tions when events are simply described in greater detail as op-
posed to being unpacked into non-overlapping components. Specif-
ically, Van Boven and Epley (2003) show that unpacking leads peo-
ple to think about the details of a category or event, thereby mak-
ing it easier to mentally generate evaluative evidence.
Combined, the prior literature suggests that preparing disag-
gregated forecasts leads to lower information processing demands
and more complete consideration of available information. How-
ever, greater consideration of information has the potential to ei-
ther benefit or harm managers’ forecasts, depending on whether
the managers have explicit performance-based incentives.
2.2. Forecast accuracy in the absence of performance-based incentives
Drawing on prior literature on forecasting and accounting, we
first consider the effects of preparing disaggregated forecasts in the absence of explicit performance-based incentives. More specifically, we predict that preparing disaggregated forecasts will lead to
greater improvements in forecast accuracy (compared to preparing
an aggregated forecast) in the absence of performance-based in-
centives than in the presence of performance-based incentives for
at least two reasons. First, as discussed in the previous subsection,
preparing disaggregated forecasts reduces the cognitive load of the
forecaster. In the absence of performance-based incentives, a re-
duction of the cognitive load may lead to more complete consid-
eration of all available information and improve the accuracy of
forecasts. Second, in the absence of performance-based incentives,
disaggregated component forecasts are likely to contain random er-
rors, some of which overstate performance and some of which un-
derstate performance. These random errors will at least partially
cancel each other out when they are combined to derive the top-
level forecast, leading to less error and greater forecast accuracy
for disaggregated forecasts compared to aggregated forecasts (e.g.,
Kleinmuntz, Fennema, & Peecher, 1996; Ravinder et al., 1988).
Based on the above discussion, we expect that disaggregated
forecasts will result in both greater precision in the forecast of each
component (due to greater consideration of information in fore-
casting each component) and greater reduction of random errors
when component forecasts are combined (due to cancellation of
error terms). In turn, we expect these effects to result in overall
greater improvements in forecast accuracy in disaggregated fore-
casts than in aggregated forecasts when performance-based incen-
tives are absent compared to when performance-based incentives
are present.
We note, however, that greater consideration of information in
forecasting each component may induce greater bias in the fore-
casts in the presence of explicit incentives tied to the forecasted
measure. In addition, the error reduction effect discussed above
could be undermined if the errors associated with the component
forecasts are positively correlated, i.e., the errors are systematic
rather than random (Ravinder et al., 1988). Prior research suggests
that this is more likely to be the case in the presence of explicit
performance-based incentives, which we consider next.
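The role of error cancellation can be sketched with a small simulation (our illustration, not part of the study): independent, zero-mean component errors partially offset when summed into a top-level forecast, whereas a shared optimistic bias accumulates instead.

```python
import random
import statistics

random.seed(7)

def mean_abs_total_error(trials, systematic_bias):
    """Mean absolute error of a four-component sum under two error regimes."""
    totals = []
    for _ in range(trials):
        # Each component forecast carries a unit-variance error; under
        # systematic optimism every component also shares the same bias.
        parts = [random.gauss(systematic_bias, 1.0) for _ in range(4)]
        totals.append(abs(sum(parts)))
    return statistics.mean(totals)

random_errors = mean_abs_total_error(10_000, systematic_bias=0.0)
biased_errors = mean_abs_total_error(10_000, systematic_bias=1.0)
print(random_errors)  # ≈ 1.6: independent errors partially cancel
print(biased_errors)  # ≈ 4.0: systematic optimism accumulates instead
```

When component errors are positively aligned, aggregating the components no longer reduces total error, which is the condition under which Ravinder et al. (1988) expect the benefit of disaggregation to disappear.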
2.3. Forecast accuracy in the presence of performance-based
incentives
Prior literature suggests that individuals are naturally, and of-
ten unintentionally, optimistic, and that performance incentives
or directional goals can exacerbate this optimism (Hales, 2007).
Although managers also have incentives for accurate forecasts
because investors associate accurate forecasts with management
credibility and reward accurate forecasts (Ajinkya & Gift, 1984;
Graham, Harvey, & Rajgopal, 2005; Healy & Palepu, 2001; Jennings,
1987; Mercer, 2005), managers’ incentives for accurate forecasts
may be dominated by their incentives for favorable performance
when they are provided with explicit performance-based incen-
tives.
The discussion above suggests that when managers are re-
warded for higher performance on the forecasted measure, fore-
casts are likely more optimistic, regardless of whether disaggre-
gated or aggregated forecasts are prepared. Although most indi-
viduals have at least some intrinsic motivation for favorable per-
formance, we expect the bias to be greater when performance-
based incentives are explicit. However, these theories only pre-
dict a main effect of performance incentives on forecast optimism.
Within a group of individuals that have explicit performance in-
centives, greater forecast optimism among those who prepare a
disaggregated rather than an aggregated forecast would be consis-
tent with motivated reasoning theory, which we discuss next.³
Motivated reasoning theory predicts that directional prefer-
ences will affect how people attend to and process information
(Kunda, 1990). In an accounting setting, Hales (2007) shows that
investors’ forecasts of earnings are affected by the investment po-
sition they hold and by whether they are facing the prospect of a
gain or loss on those investments. Specifically, investors’ forecasts
of earnings are biased in a direction consistent with their direc-
tional preferences, even if they are only provided with an incentive
to be accurate. Building on Hales (2007), Thayer (2011) shows that
investors seek additional information consistent with their desired
conclusions about an investment.
As discussed at the beginning of this section, theory suggests
that preparing a disaggregated forecast will make a manager more
likely to attend to more detailed information to forecast the indi-
vidual components, leading to more accurate forecasts in the ab-
sence of performance-based incentives. However, when managers
are provided with explicit incentives that are tied to their perfor-
mance on the forecasted measure, preparing disaggregated fore-
casts is less likely to lead to higher forecast accuracy than prepar-
ing aggregated forecasts for two reasons.
First, in the presence of performance-based incentives, a man-
ager making a judgment about future performance has a prefer-
ence for positive performance. Disaggregation will cause the man-
ager to attend to more detailed information about how a favor-
able outcome is likely to be achieved, and hence, allow more op-
portunity for the manager to interpret information in the direction
consistent with his or her preferences (Kunda, 1990). Thus, when
performance-based incentives are present, the potential bene?t of
considering more information might be partially or completely off-
set by the negative effect of attending to more preference-consistent
information (Kunda, 1990). Motivated reasoning theory suggests
that individuals will not consider a balanced set of reasons for a
given outcome when making a judgment (Ditto & Lopez, 1992).
Second, the presence of performance-based incentives is likely
to lead to an overall optimistic bias in component forecasts, i.e.,
systematic overestimation errors. Systematic overestimation errors
in component forecasts will not cancel each other out when the
component forecasts are combined into an overall forecast. There-
fore, we expect that in the presence of performance-based incen-
tives, preparing disaggregated forecasts will not lead to greater
forecast accuracy.
Combined, our theory suggests an interaction between fore-
cast type and performance-based incentives on forecast accu-
racy. Specifically, we expect that preparing disaggregated fore-
casts rather than aggregated forecasts increases forecast accuracy
in the absence of performance-based incentives, but not necessar-
ily in the presence of performance-based incentives. This discus-
sion leads to our first hypothesis on forecast accuracy:

H1. Preparing disaggregated forecasts leads to greater improvements in forecast accuracy (compared to preparing aggregated forecasts) in the absence of performance-based incentives than in the presence of performance-based incentives.

³ Theory on optimism alone does not predict that the magnitude of optimism should vary by forecast type. However, motivated reasoning theory, which more explicitly incorporates biased processing, does help to make that prediction.
2.4. Performance-based incentives and forecast optimism
The theory that we have outlined suggests that disaggregated
forecasts may exhibit greater forecast optimism than aggregated
forecasts when performance-based incentives are present com-
pared to when performance-based incentives are absent. Even
though managers have the motivation to produce optimistic fore-
casts when they prepare an aggregated forecast in the presence
of performance-based incentives, they have less opportunity to in-
ject optimistic bias into their forecasts. Thus, providing disaggre-
gated forecasts should lead to greater forecast optimism than pro-
viding aggregated forecasts in the presence of performance-based
incentives.
In the absence of performance-based incentives, however, man-
agers have weaker motivation to make optimistically biased fore-
casts regardless of whether they prepare aggregated or disaggre-
gated forecasts. Thus, we expect a smaller difference in forecast
optimism between disaggregated forecasts and aggregated fore-
casts in the absence of performance-based incentives than in the
presence of performance-based incentives. Our second hypothesis
is therefore:
H2. Preparing disaggregated forecasts leads to a greater increase in
forecast optimism (compared to preparing aggregated forecasts) in
the presence of performance-based incentives than in the absence
of performance-based incentives.
3. Method
3.1. Participants
We recruit ninety-two undergraduate business students from a
large public university as participants. In the experiment, partic-
ipants complete a knowledge test with questions from four cat-
egories (English, math, grammar and logic) and make associated
forecasts of their performance. Because we examine a fundamental
psychological bias rather than reactions to rich, institutional fea-
tures, we believe students have sufficient knowledge for the task and can be used as participants (Libby, Bloomfield, & Nelson, 2002;
Libby & Rennekamp, 2012). Further, undergraduates take knowl-
edge tests (either the SAT or ACT) before entering the university
that are similar to those we use in our study and have the ability
to understand the incentives associated with our forecasting task.
3.2. Research design
To test our hypotheses, we use a 2 (Forecast Type) × 2 (Performance-Based Incentives) between-subjects experimental de-
sign. We manipulate forecast type at two levels: disaggregated
forecast versus aggregated forecast. Participants complete a first
round of the question-based task to get a sense of their skill in the
four topic categories. In the disaggregated forecast condition, par-
ticipants provide a separate forecast of their performance in each
of the four categories for the second round. In the aggregated fore-
cast condition, participants provide a forecast of their overall per-
formance (or total score) in the second round.
We also manipulate performance-based incentives at two lev-
els: absent or present. Following prior literature (Hales, 2007), we
provide subjects with an incentive to make accurate forecasts in
both conditions to reduce noise in the results. Specifically, in the
condition with performance-based incentives, a participant’s pay is
based on two components: the participant’s actual performance on
the task and the accuracy of the participant’s forecast. The formula
is as follows:
Total pay = £4 × Number of questions answered correctly + (£20 − £2 × absolute value of the difference between forecast and actual performance)
For the performance-based component, participants receive £4.00
in experimental currency for each question answered correctly, up
to a total of £112.00 if the participant answers all 28 questions
correctly.⁴ For the forecast accuracy component, participants receive a bonus of £20.00 for a completely accurate forecast.
The bonus is reduced by £2.00 per question if the forecast devi-
ates from the actual performance and drops to zero if the forecast
either over- or underestimates actual performance by ten or more
questions. The participant is always better off answering as many
questions correctly as he/she can, regardless of the forecast he/she
provides because the participant earns £4.00 for every correct an-
swer but loses only £2.00 of the forecast bonus for each question
by which the actual performance differs from the forecast.⁵ Thus,
for a given forecast, the participant will always receive higher com-
pensation by performing to the best of his/her ability rather than
by withholding effort after meeting his/her forecast. In the condi-
tion without a performance-based incentive, pay is based only on
the accuracy of the participant’s forecast.
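The payoff rules above can be expressed as a short sketch (function names are ours; the amounts follow the description in the text):

```python
def forecast_bonus(forecast, actual):
    # Bonus of £20 for a perfectly accurate forecast, reduced by £2 per
    # question of absolute error, and floored at zero once the forecast
    # misses actual performance by ten or more questions.
    return max(0, 20 - 2 * abs(forecast - actual))

def total_pay(forecast, actual, performance_based=True):
    # With performance-based incentives, pay adds £4 per correct answer;
    # without them, pay depends only on forecast accuracy.
    pay = forecast_bonus(forecast, actual)
    if performance_based:
        pay += 4 * actual
    return pay

# The worked example from footnote 5 (a forecast of 16 of the 28 questions):
print(total_pay(16, 15))  # 78
print(total_pay(16, 16))  # 84
print(total_pay(16, 20))  # 92
```

Because each correct answer adds £4 while each question of forecast error costs only £2 of bonus, total pay is always increasing in actual performance for any fixed forecast.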
Immediately after receiving information on how they will be
paid, participants are asked to answer a manipulation check ques-
tion on the same page to ensure that they understand the incen-
tive scheme to which they are assigned. Specifically, we ask partic-
ipants to indicate the components of their compensation by choos-
ing between two options: (1) my compensation will increase if my
forecast of my performance on the round of questions I answer
is more accurate; and (2) my compensation will increase if I per-
form better on the round of questions I answer. Results reveal that
100% of participants correctly indicate whether their compensation
is based on forecast accuracy only or on both forecast accuracy and
actual performance in the second round of questions.
Participants are informed that once all participants have com-
pleted the task, their earnings in £ will be converted to real U.S.
dollars at a positive but unspecified rate and that they are always
better off trying to earn more £ in the study, since that trans-
lates to greater earnings in U.S. dollars. Participants are informed
before the start of the experiment to expect payments approxi-
mately two weeks after all sessions are conducted. In addition,
each participant receives a $5 show-up fee. On average, partici-
pants receive $20 in U.S. currency for their participation across all
conditions.
3.3. Task and experimental procedures
We randomly assign each laboratory session to either the pres-
ence or absence of explicit performance-based incentives treat-
ment to ensure participants are not aware of our manipulations.
⁴ All currency amounts described here are denoted in experimental laboratory currency unless stated otherwise. Laboratory earnings are converted to U.S. dollar earnings upon completion of the experiment. Participants do not know in advance the exchange rate between the two currencies, but do know that earning more laboratory currency will always translate to higher U.S. dollar-denominated earnings.

⁵ For example, if a participant forecasts that she can get 16 out of the 28 questions correct, she will get £78 in laboratory currency (£4 × 15 + (£20 − £2 × 1)) if she ends up answering 15 questions correctly, £84 (£4 × 16 + £20) if she ends up answering 16 questions correctly, and £92 (£4 × 20 + (£20 − £2 × 4)) if she ends up answering 20 questions correctly.

Upon arrival at the experiment, participants are randomly assigned to one of the two forecast type conditions. We ask participants
to read the informed consent form and sign the form before they
start the task. In the experiment, participants complete two rounds
of a mini SAT-type test. We use an initial round of SAT-type ques-
tions to familiarize participants with the task and form expecta-
tions about their future performance. We intentionally choose rel-
atively difficult questions for our task in order to increase vari-
ation in participants’ forecasting performance. This allows us to
better detect the effects of our independent variables on our de-
pendent measures. To keep the total time required for the task to
a minimum, the first round contains two questions from each of
the four categories, while the second round contains 28 questions,
with seven questions from each of the four categories.
After the first round, participants receive feedback on their per-
formance. Before the second round begins, participants make a pri-
vate forecast of their second-round performance. Participants then
answer the second round of questions. Participants in the disag-
gregated forecast condition are asked to provide forecasts of their
scores for each of the categories of SAT-type questions (English,
math, grammar and logic). Participants in the aggregated forecast
condition are asked to provide a forecast of their total score on
the test. After participants complete the second round of questions,
they answer a post-experimental questionnaire, which includes de-
briefing and demographic questions.
3.4. Dependent and control variables
Forecasts of the four components in the disaggregated fore-
cast condition are summed to form total score forecasts, which are
compared to the total scores forecasted in the aggregated forecast
condition. We examine two aspects of forecast quality: accuracy
and optimistic bias. Following prior literature (e.g., Duru & Reeb,
2002; Goodman et al., 2014; Henrion et al., 1993; Mikhail, Walther,
& Willis, 1999), we capture forecast accuracy with the absolute
forecast error, i.e., the absolute value of the difference between
the forecast and the performance. A smaller absolute forecast er-
ror indicates greater forecast accuracy. To facilitate interpretation,
we transform this reverse measure of forecast accuracy by calcu-
lating the difference between the maximum performance, 28, and
the absolute forecast error and use this as our measure of forecast
accuracy. A larger number for this transformed measure indicates
greater forecast accuracy. We measure the optimistic bias in a fore-
cast as the signed forecast error, i.e., the signed difference between
the forecast and the actual performance. A larger positive forecast
error indicates a higher level of optimism.
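These two dependent measures can be written directly (a minimal sketch; the function and variable names are ours):

```python
MAX_SCORE = 28  # maximum possible score in the second round

def forecast_accuracy(forecast, performance):
    # Transformed accuracy: 28 minus the absolute forecast error,
    # so larger values indicate a more accurate forecast.
    return MAX_SCORE - abs(forecast - performance)

def forecast_optimism(forecast, performance):
    # Signed forecast error: positive values indicate an optimistic forecast.
    return forecast - performance

print(forecast_accuracy(20, 16))  # 24
print(forecast_optimism(20, 16))  # 4
```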
We control for participants’ performance because prior research
has shown that although skillful individuals often overestimate
their performance relative to others, they also underestimate their
own absolute performance (Klayman, Soll, González-Vallejo, & Bar-
las, 1999; Krueger & Mueller, 2002; Kruger & Dunning, 1999; Lar-
rick, Burson, & Soll, 2007). Therefore, we expect a negative re-
lationship between participants’ performance in the actual round
and their forecast optimism. We control for participants’ perfor-
mance in the first round because higher performance in the first
round will lead to higher expected performance in the actual
round, which is likely to lead to higher forecast errors.
Table 1
Descriptive statistics.
Forecast type
                                              Aggregated forecast       Disaggregated forecast
Panel A: Mean (standard deviation) for performance
Absence of Performance-Based Incentives       n = 30   14.40 (5.88)     n = 18   16.39 (2.91)
Presence of Performance-Based Incentives      n = 18   17.89 (2.93)     n = 26   17.31 (2.28)
Panel B: Mean (standard deviation) for forecasts
Absence of Performance-Based Incentives       n = 30   14.93 (8.03)     n = 18   17.78 (3.78)
Presence of Performance-Based Incentives      n = 18   18.72 (3.61)     n = 26   20.15 (4.18)
The table presents descriptive statistics for participants’ performance and forecasts for the four experimental conditions.
We manipulate Performance-Based Incentives at two levels: In the absence of performance-based incentives condition, participants are only compensated for the accuracy
of their performance forecasts; in the presence of performance-based incentives, participants are compensated for both their performance in the second round of SAT-type
questions and the accuracy of their performance forecasts.
We manipulate Forecast Type at two levels: In the aggregated forecast condition, participants provide a holistic forecast for their performance in the second round of SAT-
type questions; in the disaggregated forecast condition, participants provide a separate forecast for their performance in each of the four components of the second round
of SAT-type questions.
Performance = Participants’ actual performance (number of correct answers to the questions) in the second round of SAT-type questions.
Forecast = Forecast of total score in the second round of SAT-type questions in the aggregated forecast condition; sum of forecasts of the four components in the second
round of SAT-type questions in the disaggregated forecast condition.
4. Results
Table 1 provides descriptive statistics of average performance
and average forecasts of total score for the four conditions.⁶,⁷ Consistent with performance-based incentives increasing effort and
performance, we observe that participants’ performance in the sec-
ond round is significantly higher in the presence of performance-
based incentives than in the absence of performance-based incen-
tives (17.55 versus 15.15, p < 0.01, two-tailed).
4.1. Test of H1: Forecast type, performance-based incentives, and
forecast accuracy
H1 predicts that disaggregated forecasts will lead to greater im-
provement in forecast accuracy in the absence of performance-
based incentives than in the presence of performance-based incen-
tives, compared to aggregated forecasts. Again, we measure fore-
cast accuracy by calculating the difference between the maximum
possible score of 28 and the absolute forecast error, where a larger
difference corresponds to greater forecast accuracy. We test this
interaction using contrast coding as well as follow-up simple ef-
fects tests using an analysis of covariance (ANCOVA) (Buckless &
Ravenscroft, 1990). We include participants’ performances in the
first and second rounds of questions as covariates to control for variation in the data that is not the focus of our study. We control for participants’ performance in the first round because higher performance in the first round will lead to higher expected performance in the second round, which, in turn, will lead to higher forecast error holding the actual performance constant. Consistent with this conjecture, a regression analysis shows that forecast errors are positively associated with actual performance in the first
round. We also control for participants’ performance in the second
round to control for the negative correlation between performance and forecast optimism documented in prior literature (Klayman et al., 1999; Krueger & Mueller, 2002; Kruger & Dunning, 1999; Larrick et al., 2007). Consistent with prior research, a regression analysis shows that forecast optimism is negatively associated with actual performance in the second round.

⁶ The difference in cell sizes is due to the number of participants who showed up for a given session and imperfect randomization of the online survey software.

⁷ Of the ninety-two participants, four forecasted zero in the aggregated forecast condition in the absence of performance-based incentives, indicating an intention to game the incentive system. Excluding the four participants who made forecasts of zero strengthens the results. Among the other eighty-eight participants, there is no significant difference in performance between disaggregated and aggregated forecast types when performance-based incentives are absent (p = 0.57) or present (p = 0.55), indicating other participants were not engaged in similar gaming behavior.
Based on our first hypothesis, we use contrast weights of +3 in the absence of performance-based incentives/disaggregated forecast condition and −1 in the other three conditions. The results presented in Panel C of Table 2 show that the planned contrast is marginally significant, supporting our hypothesis (p = 0.07, one-tailed).⁸
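The planned-contrast logic can be illustrated with a toy computation (the weights follow the paper; the cell data are invented for illustration and the covariates are omitted):

```python
import statistics

# Hypothetical forecast-accuracy scores for the four cells (n = 4 each).
cells = {
    "absent/aggregated":     [23, 24, 22, 25],
    "absent/disaggregated":  [26, 25, 27, 25],
    "present/aggregated":    [24, 25, 24, 26],
    "present/disaggregated": [24, 23, 25, 24],
}
# Contrast weights: +3 for absence/disaggregated, -1 for the other cells.
weights = {"absent/aggregated": -1, "absent/disaggregated": 3,
           "present/aggregated": -1, "present/disaggregated": -1}

n = 4  # per-cell sample size in this toy example

# Contrast estimate: weighted sum of the cell means.
contrast = sum(w * statistics.mean(cells[c]) for c, w in weights.items())

# Pooled within-cell variance (MSE) and the contrast's standard error.
mse = statistics.mean(statistics.variance(v) for v in cells.values())
se = (mse * sum(w ** 2 / n for w in weights.values())) ** 0.5

print(round(contrast / se, 2))  # t-statistic for the planned contrast
```

The follow-up simple-effects tests compare pairs of cells analogously, using the same pooled error term.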
The follow-up simple contrasts confirm the interaction between performance-based incentives and forecast type on forecast accuracy. Specifically, when there are no performance-based incentives, preparing a disaggregated forecast leads to significantly greater forecast accuracy (p = 0.02, one-tailed). By contrast, when participants have performance-based incentives, there is no significant difference in forecast accuracy between the disaggregated and the aggregated forecast types (p = 0.42, two-tailed).
4.2. Test of H2: Forecast type, performance-based incentives, and
forecast optimism
H2 predicts an interaction between forecast type and
performance-based incentives such that the disaggregated forecast
condition leads to a greater increase in forecast optimism than
the aggregated forecast condition in the presence of performance-
based incentives than in the absence of performance-based
incentives. To test H2, we estimate an analysis of covariance (AN-
COVA) using signed forecast errors as the dependent variable and
performance-based incentives and forecast type as the indepen-
dent variables. A more positive signed forecast error indicates a
higher level of forecast optimism. We also test this interaction us-
ing contrast coding as well as follow-up simple effects tests where,
based on our second hypothesis, we use contrast weights of +3 in the presence of performance-based incentives/disaggregated forecast condition, +1 in the presence of performance-based incentives/aggregated forecast condition, and −2 in the absence of performance-based incentives/aggregated forecast and absence of performance-based incentives/disaggregated forecast conditions.⁹

⁸ The planned contrast is statistically significant (p = 0.03, one-tailed) when we exclude the four participants who made forecasts of zero.

⁹ Since the simple effect of performance-based incentives in the aggregated forecast condition is insignificant, we verify that our results are robust to an alterna-
Table 2
The effects of forecast type and performance-based incentives on forecast accuracy.
Forecast type
                                              Aggregated forecast       Disaggregated forecast
Panel A: Mean (standard deviation) for forecast accuracy
Absence of Performance-Based Incentives       n = 30   23.20 (4.26)     n = 18   25.50 (1.54)
Presence of Performance-Based Incentives      n = 18   24.94 (2.01)     n = 26   24.15 (3.20)
Source df Mean square F p-value
Panel B: ANCOVA model of forecast accuracy
Performance-Based Incentives 1 0.68 0.07 0.80
Forecast Type 1 8.68 0.87 0.35
Performance-Based Incentives × Forecast Type 1 42.36 4.23 0.02
Trial Performance 1 30.49 3.05 0.08
Performance 1 0.10 0.01 0.92
Error 82 10.00
Source df F p-value
Panel C: Planned contrast coding and follow-up simple effect tests
Overall tests:
Preparing disaggregated forecasts (compared to preparing aggregated forecasts) leads to a greater improvement in forecast accuracy in the
absence of performance-based incentives than in the presence of performance-based incentives. Contrast weights (?1, 3, ?1, ?1)
1 2.19 0.07
Follow-up simple effect tests:
Effect of forecast type in the absence of performance-based incentives 1 4.51 0.02
Effect of forecast type in the presence of performance-based incentives 1 0.64 0.42
Effect of performance-based incentives in the disaggregated forecast conditions 1 1.59 0.21
Effect of performance-based incentives in the aggregated forecast conditions 1 2.61 0.11
The table presents descriptive statistics, the ANCOVA model, and simple contrasts for forecast accuracy for the four treatments. See Table 1 for descriptions of the manipulations of performance-based incentives and forecast type. Reported p-values are two-tailed unless testing a one-tailed prediction, as signified by bold face.
Forecast Accuracy = the difference between the maximum possible score of 28 and the absolute difference between forecast and performance in the second round of SAT-type questions, where forecast is the forecast of total score in the second round of SAT-type questions in the aggregated forecast conditions and the sum of component forecasts in the second round of SAT-type questions in the disaggregated forecast conditions. Higher measures indicate greater forecast accuracy.
Trial Performance = participants' performance in the first round of SAT-type questions.
Performance = participants' actual performance in the second round of SAT-type questions.
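The two dependent measures, forecast accuracy (as defined in the table notes) and forecast optimism (the signed forecast error), are simple arithmetic on a participant's forecast and realized score. A sketch, with variable names that are ours rather than the authors':

```python
MAX_SCORE = 28  # maximum possible score in the second round of SAT-type questions

def forecast_accuracy(forecast, performance):
    """28 minus the absolute forecast error; higher values = more accurate."""
    return MAX_SCORE - abs(forecast - performance)

def forecast_optimism(forecast, performance):
    """Signed forecast error; positive values = an optimistic forecast."""
    return forecast - performance

# A forecast of 20 against realized performance of 16:
# accuracy = 28 - |20 - 16| = 24, optimism = +4
```

Note that accuracy is symmetric (over- and under-forecasting by the same amount score identically), whereas optimism is directional.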
The results presented in Panel C of Table 3 show that the planned
contrast is statistically significant, supporting H2 (p = 0.03,
one-tailed).¹⁰ The follow-up simple contrasts (Table 3, Panel C)
confirm the ordinal interaction between performance-based incentives
and forecast type on forecast optimism. Specifically, in the absence
of performance-based incentives, there is no significant difference
in forecast optimism between disaggregated and aggregated forecasts
(p = 0.63, two-tailed). By contrast, when participants receive
performance-based incentives, forecast optimism is higher in the
disaggregated forecast condition than in the aggregated forecast
condition (p = 0.08, one-tailed).¹¹
Overall, our results are consistent with H2. When participants'
incentives are not tied to performance, producing disaggregated
forecasts does not lead to more optimistically biased forecasts.
However, when participants' incentives are tied to performance, they
have a preference for favorable performance. As a result, producing
disaggregated forecasts gives participants both the motivation and
the opportunity to engage in biased processing of information and to
interpret it in a way that is consistent with their preferences,
which leads to significantly more optimistically biased forecasts
compared to producing aggregated forecasts.
¹⁰ The planned contrast is also statistically significant (p = 0.02, one-tailed) when we exclude the four participants who made forecasts of zero.
¹¹ These results are stronger when we exclude the four participants who made forecasts of zero: when participants receive performance-based incentives, forecast optimism is higher in the disaggregated forecast condition than in the aggregated forecast condition (p = 0.07, one-tailed).
4.3. Supplemental analyses
In this section we conduct additional analyses to support the
theoretical arguments underlying our hypotheses.
4.3.1. Effect of disaggregation on forecast accuracy in the absence of performance-based incentives
To develop H1 we rely on arguments suggesting that in the absence
of performance-based incentives the disaggregated forecast type
results in: (1) greater attention to information for each component
forecast and (2) random errors in the component forecasts that
cancel each other out in the top-level forecast. These two effects
should lead to greater forecast accuracy in the disaggregated
forecasts than in the aggregated forecasts in the absence of
performance-based incentives.
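The second (error-cancellation) argument can be illustrated with a short simulation. The error distribution below is a stand-in assumption for illustration only, not the experimental data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Four independent, zero-mean component forecast errors per simulated forecaster.
errors = rng.normal(loc=0.0, scale=1.0, size=(100_000, 4))

# Error of the top-level forecast (components summed before comparing to the
# outcome) versus the combined magnitude of the four component errors.
top_level_abs_error = np.abs(errors.sum(axis=1)).mean()
summed_abs_errors = np.abs(errors).sum(axis=1).mean()
```

Because opposite-signed component errors offset one another in the sum, `top_level_abs_error` is smaller on average than `summed_abs_errors`, which is the sense in which random component errors "cancel out" in the top-level forecast.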
The first argument implies the forecast for each component is more
precise and better calibrated under the disaggregated forecast type
than under the aggregated forecast type (e.g., Henrion et al., 1993).
To test this, we compare the standard deviation of the absolute
forecast error in the absence of performance-based
incentives/aggregated forecast condition with the standard deviation
of the sum of the absolute forecast errors of the component
forecasts in the disaggregated forecast condition. Since we expect
disaggregated forecasts to be better calibrated than aggregated
forecasts, we expect the standard deviation to be lower in the
condition where participants prepare disaggregated forecasts. As
shown in Table 4, Panel B, a Levene's test of equal variances
confirms this conjecture (2.12 vs. 4.26, p = 0.03, one-tailed)
(Levene, 1960).
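The variance comparison above is a standard Levene test. A minimal sketch with simulated absolute forecast errors (the inputs are illustrative stand-ins, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in absolute forecast errors: the aggregated condition is more dispersed.
agg_abs_errors = np.abs(rng.normal(0.0, 4.0, size=30))
disagg_abs_errors = np.abs(rng.normal(0.0, 2.0, size=18))

stat, p_two_tailed = stats.levene(agg_abs_errors, disagg_abs_errors)
# The paper's prediction is directional (lower spread when disaggregating),
# so halve the p-value when the sample SDs fall in the predicted order.
predicted_order = disagg_abs_errors.std(ddof=1) < agg_abs_errors.std(ddof=1)
p_one_tailed = p_two_tailed / 2 if predicted_order else 1 - p_two_tailed / 2
```

`scipy.stats.levene` tests the null of equal variances across groups; a small p-value here indicates the disaggregated condition's errors are significantly less variable.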
The second argument implies that the sum of the absolute forecast
errors of the four component forecasts in the disaggregated forecast
condition is not necessarily lower than the absolute forecast error
in the aggregated forecast condition. This is because the greater
accuracy for the disaggregated condition is partially driven
Table 3
The effects of forecast type and performance-based incentives on forecast optimism.

Panel A: Mean (standard deviation) for forecast optimism
                                                  Forecast type
                                              n   Aggregated forecast    n   Disaggregated forecast
Absence of Performance-Based Incentives      30   0.53 (6.45)           18   1.39 (2.64)
Presence of Performance-Based Incentives     18   0.83 (3.63)           26   2.85 (4.14)

Panel B: ANCOVA model of forecast optimism
Source                                          df   Mean square   F       p-value
Performance-Based Incentives                     1   42.81         2.29    0.07
Forecast Type                                    1   33.98         1.82    0.18
Performance-Based Incentives × Forecast Type     1   8.29          0.44    0.25
Trial Performance                                1   302.70        16.18