A MODEST PROPOSAL: HOW WE MIGHT CHANGE THE
PROCESS AND PRODUCT OF MANAGERIAL RESEARCH
JEFFREY PFEFFER
Stanford University
I, and possibly many of my colleagues in the
Academy of Management, have a dream for the
future of management research—an aspiration that
builds on, but is not satisfied with, the considerable
accomplishments to date. I would summarize that
dream for management research as including the
following components: (1) having more effect on
the actual practice of management in organizations
in both the private and public sectors; (2) being at
least as much “at the table” and as influential in the
formulation of policy in both the public domain
and private sector as our sister social science dis-
ciplines, and more specifically, on a par with eco-
nomics; and (3) being as connected to, as engaged
with, and as relevant for our profession—manage-
ment—as our sister professional schools in fields
such as engineering, education, and medicine are
to their professions and constituencies.
The level of engagement contemplated might in-
clude actually creating or partnering in the devel-
opment of business practices and techniques and
thereby being an active participant in the manage-
ment innovation process. This is the principle be-
hind the Management Innovation Lab, recently
founded at London Business School by Gary Hamel
and Julian Birkinshaw, with the explicit objective
of creating partnerships between academics and
businesses and their leaders to cocreate better man-
agement practices. This role as a possible source,
not merely an evaluator, of professional innovation
is something that one sees in engineering and med-
ical schools, for instance, where companies, prod-
ucts, technologies and devices, and drugs come, on
occasion, from the schools, their faculties, and the
research that they do. Many universities today have
technology-licensing units that actively work to en-
sure the commercialization of knowledge on their
campuses, at times in partnership with the faculty
inventors. Although I don’t know of any systematic
data, I suspect that technology-licensing offices
have only minimal interaction with business
schools, at least in terms of commercializing re-
search or capitalizing on the ideas produced in
those places.
There is evidence that some of the Academy of
Management’s current and past leadership shares
at least some elements of this aspiration. The goal
of increased impact from our published research is
implied in Sara Rynes’s charge (to me and the other
commentators in this 50th anniversary editors’ fo-
rum) to consider “how management research might
change to have maximal impact on the future of
management.” The goal of affecting public policy
or at least public discussion is implied in the Acad-
emy’s hiring of a public relations firm a number of
years ago in an effort to get research findings more
widely disseminated to a broader audience. Many
presidential addresses given at the AOM’s annual
meeting (e.g., Hambrick, 1994; Pearce, 2004) and
other articles (e.g., Van de Ven & Johnson, 2006)
have considered the effect of the Academy and its
activities on the larger society and business and the
connections between the worlds of theory and
practice. The recent effort to articulate and advance
an agenda of evidence-based management (Rous-
seau, 2006) has as a goal both figuring out what we
know and having that knowledge form the founda-
tion for decisions and actions in both the public
and private sectors.
In this paper, I argue that we have historically not
done particularly well in fulfilling these aspira-
tions. The structure and processes governing both
the careers of academics and the prepublication
review of their work limit the influence of manage-
ment research on practice, social policy, and even
the terms of public discourse about organizational
issues. These limits prevail despite the good inten-
tions and heroic efforts of journal editors and re-
viewers. If we are serious about our aspirations, we
ought to implement what we know about building
innovative organizations that are more effective in
having knowledge turned into action. Thus, this
essay lays out a set of modest—or possibly not so
modest—proposals. But before I move on to these
topics, it is important to first consider the legiti-
macy and appropriateness of the goals proposed
here for management research.
The author gratefully acknowledges the extremely
thoughtful and very helpful suggestions of Sara Rynes,
Roy Suddaby, and Christine Quinn Trank.
WHAT SHOULD THE ROLE OF BUSINESS
SCHOOLS AND BUSINESS RESEARCH BE?
Conflicts of Interest
As Roy Suddaby has so appropriately and per-
suasively noted in comments that I am sure others
would agree with, the aspirations just described are
not without controversy. Specifically, some might
argue that these objectives for management re-
search are (1) inconsistent with the historical role
of business schools and their faculty as evaluators
of, but not creators or originators of, business prac-
tice, (2) not shared by all in the discipline, and (3)
risky in that closer professional interaction and a
more active role in management innovation raise
the chances that conflicts of interest will arise.
To take the last point first, the risks are clear:
medical schools and, for that matter, engineering
schools and indeed universities as a whole are
fraught with conflicts of interest (see, for instance,
Washburn, 2005) and have certainly been, to some
substantial extent, captured by the industries they
serve. Drug companies in which medical faculty sometimes hold equity or managerial interests have funded clinical trials conducted by those same faculty using university resources. As
public support for universities and university re-
search has waned, the importance of private sup-
port has grown tremendously. For instance, be-
tween 1993 and 2003, industry-sponsored research
at the University of California increased 97 percent
(Washburn, 2005: 19).
These close relationships between industry and
academia almost invariably entail some degree of
mutual influence over the research that gets done
and the questions that get asked as well as over how
that research gets disseminated. Providing one ex-
ample, Blumenthal and colleagues (Blumenthal,
Campbell, Anderson, Causino, & Louis, 1997), sur-
veying life science faculty, found that almost one in
five had delayed publication of research results for
more than six months sometime during the preced-
ing three years to protect commercial interests and
proprietary information. Their analyses showed
that participation in academic-industry research re-
lationships and engaging in the commercialization
of university research were significant predictors of
the decision to delay.
In the management research context, some might
worry that in the effort to obtain careers as advisors
or to get funding from external organizations, the
objectivity with which business school faculty
evaluate the ideas and practices of business organ-
izations could be lessened. Therefore, Suddaby ar-
gued that we risk losing objectivity in becoming
closer participants in the profession and that a
more appropriate role for business school research-
ers is to evaluate the techniques and ideas of others,
providing legitimation but not invention or
development.
One can make at least three responses to this
argument, without for a moment denying its valid-
ity. First, as the medical field illustrates, confining
research to solely an evaluative rather than a devel-
opmental role does not ensure objectivity. For in-
stance, research shows that when drug companies
fund studies of the effectiveness of those drugs, the
results are, not surprisingly, more favorable for the
drugs than when such funding comes from other
sources such as government grants (Bakalar [2007],
and see Washburn [2005: 84] for an extensive re-
view of studies showing how funding source deter-
mines outcome in drug efficacy research). There-
fore, remaining solely in an evaluative role rather
than assuming an inventing or developmental role
does not guarantee an absence of conflict of
interest.
Second, business schools have already been cap-
tured by companies and managerial interests to
some extent, so we may already be paying the cost
without reaping many corresponding benefits.
Walsh, Weber, and Margolis (2003), for instance,
documented the co-occurrence of two trends that
may illustrate the existing influence of companies
on what we study. They showed that, over the past
several decades, a rise in the amount of funding
from wealthy alumni and companies was accompa-
nied by a decline in research on topics of social
responsibility incorporating dependent variables
assessing social impact, rather than economic effi-
ciency. Washburn (2005) cited the comments of a
professor occupying the Kmart Chair in marketing
at Wayne State University: “‘Kmart’s attitude al-
ways has been: What did we get from you this year?
Some professors would say they don’t like that
position, but for me, it’s kept me involved with a
major retailer, and it’s been a good thing’” (Wash-
burn, 2005: 5). The idea that business schools,
heavily dependent on outside donations, are not
already influenced by this dependence and are bas-
tions of unsullied objectivity because management
research is less engaged in the creation or innova-
tion of management practices seems implausible
(e.g., Pfeffer & Salancik, 1978).
And third, in business school disciplines other
than management, most notably finance and eco-
nomics but other areas of study as well, invention
and the economic capture of the fruits of that in-
vention are already well advanced. Finance faculty
such as Nobel prize winners Myron Scholes and
William Sharpe have decamped from academia to
found or serve in important roles in firms that
employ ideas they have been instrumental in de-
veloping, and in some instances, tenured offers in
finance have been made to Ph.D.’s working on Wall
Street. Economic consulting and forecasting firms
have been cofounded by academics who did con-
siderable original research in universities. And the
successful strategy consulting firm Monitor has Mi-
chael Porter of Harvard Business School as one of
its progenitors. A separation between research and
practice, between the world of scholars and practi-
tioners, that may possibly be true for some seg-
ments of management research or in some coun-
tries does not necessarily hold for all elements of
business school faculty even today, at least in the
United States.
And the argument that a closer connection between business and academia would be a change from past practice may not be based on accurate obser-
vation. The change in the composition of faculty—
from practitioners with experience in business to
scholars with doctoral degrees—is relatively re-
cent, occurring in part as a response to the Ford and
Carnegie reports in the 1950s that criticized busi-
ness schools as nothing more than glorified trade
schools and pushed for more rigorous social sci-
ence research. Even today, considering the substan-
tial number of former entrepreneurs and business
leaders serving in lecturer roles in schools, it is far
from clear that there is as much separation as some
believe, although we may not be organized to ben-
efit from these ties. Moreover, the recent evolution
of recruiting in management from scholars with
degrees from business schools to scholars with de-
grees from the social science departments of eco-
nomics, psychology, and sociology suggests that
the boundaries between professional practice and
business schools are not constant, either across in-
stitutions or over time.
All of this is not to say that Suddaby and others
aren’t correct in noting that some will not approve
of aspirations for the role of management research I
have articulated and that conflicts of interest aren’t
a problem. But it is clear, from considering other
areas of research within business schools and other
professional schools, that we have a choice as to
what role we want management research to play
and how to construct that role. Data about the ef-
fects of various governance arrangements and roles
and responsibilities and, of course, values and pref-
erences, should inform these decisions.
The Place of Management Research in the
Marketplace for Ideas
The concept that ideas “compete” in a market-
place and that empirical validity is only one—and
maybe not even the most important—characteristic
that determines which ideas win seems both useful
and valid (see, for instance, Bangerter & Heath,
2004; Barber, Heath, & Odean, 2003; Berger &
Heath, 2005). A consideration of the management
idea marketplace suggests that management re-
search produced by academics does not fare partic-
ularly well in this competition and that manage-
ment scholars have not been the progenitors of the
most important management concepts.
Recently, Mol and Birkinshaw (forthcoming)
wrote a book briefly describing what they believe
are the world’s 50 most important management in-
novations—things such as lean manufacturing, ac-
tivity-based cost accounting, T-groups, matrix or-
ganizational structures, and brand management.
What is noteworthy is that in none of the 50 in-
stances did the idea or innovation originate with an
academic or in academic research, at least accord-
ing to their brief descriptions of the innovations
and how they evolved.
A similar, although not quite as dismal, picture
of the role of academic research emerges in Daven-
port and Prusak’s (2003) review of important con-
temporary management ideas. They noted that
“most business schools . . . have not been very ef-
fective in the creation of useful business ideas”
(Davenport & Prusak, 2003: 81). Pfeffer and Fong
(2002) examined business school research’s impact
using several indicators: the proportion of business
best-sellers and the proportion of books cited by
BusinessWeek as best business books written by
business school faculty, citations to books written
by faculty compared to citations to other business
books, and the proportion of leading management
ideas and techniques covered in a Bain survey that
originated with business school faculty rather than
with a consulting firm or a company. Pfeffer and
Fong concluded that business school research was
making a modest contribution to management prac-
tice compared to research and ideas that came from
consulting firms, journalists, and companies.
Without question, each of these compilations
and assessments can be criticized for flaws in
method, sample, or both. But when a number of
different people look at the same basic question—
the relative importance of academic research in
producing useful managerial ideas or innova-
tions—and come to essentially the same conclusion
using different time periods, methods, and criteria,
there must be some kernel of truth in the observa-
tion that management research has not played as
prominent a role in the marketplace of ideas as it
might, and possibly should.
THE SOURCES OF THE PROBLEM
In seeking to understand why management re-
search may have had less effect on practice than
research in other professional fields, as well as
other differences, one undoubtedly encounters
many explanations. An idea that ought to be re-
jected immediately is absence of talent, effort, or
attention on the part of individuals engaged in the
enterprise. As evidenced by the enormous amount
of self-reflection in editorial essays by Eden (2002),
Bergh (2003), and Rynes (2006) and the insightful
discussions on reviewing, theory, and the scientific
process in virtually all recent issues of this journal,
AMJ’s people are consciously concerned with the
reviewing process and with what constitutes a con-
tribution, and they have encouraged the use of mul-
tiple methods and a variety of theoretical perspec-
tives. Recent research has shown that where (i.e., in
which journal) an article is published has a large
effect on its being cited, and the evidence is that the
various Academy of Management journals, partic-
ularly AMJ and AMR, are prestigious and have high
impact (Judge, Cable, Colbert, & Rynes, 2007; Pod-
sakoff, MacKenzie, Bachrach, & Podsakoff, 2005;
Tahai & Meyer, 1999).
In view of what we have learned from the quality
movement, it is unlikely that explanations for prob-
lems with management research and its impact are
to be found by looking to any sort of individual
deficiencies or motivations. Instead, more struc-
tural explanations that are relevant to article review
processes, career reward contingencies, and the ef-
fects of competition for status among schools seem
like reasonable places to begin an inquiry into what
may be going wrong.
The Journal Review Process
Management research is published mostly in
peer-reviewed journals and also in books and pub-
lications that are not reviewed, such as magazines,
newsletters, and so forth. It is axiomatic that in
scientific fields, the journal review process is im-
portant. Given the typically extremely high rejec-
tion rates (often over 90 percent) in social science
journals, including those published by the Acad-
emy of Management, journal review essentially de-
termines what papers get into print. In turn, the
prestige of a particular publication outlet partially
determines how much attention and influence the
research it publishes has (Judge et al., 2007). But
the journal review process is fraught with
problems.
At the most basic level, much management and
other social science reviewing is unreliable, a fact
that has been well-documented and extensively
noted. In one classic study (Peters & Ceci, 1982), 12
previously published papers (retyped with author-
ship changed to fictional names) were resubmitted
to the same prestigious psychology journals that
had previously published them. In just 3 of the 12
cases did reviewers even recognize that the al-
ready-published papers had been published. Of the
other 9 cases, in 8 instances these previously ac-
cepted and published works were rejected. Star-
buck (2003), with access to 500 pairs of reviews of
papers during his tenure as editor of Administra-
tive Science Quarterly, reported an interrater corre-
lation of just .12. Miller (2006: 427) presented the
results of a number of studies showing, for the most
part, fairly low levels of agreement among referees.
He noted that “if a journal submission has a true
value in some abstract sense, reviewer dissensus
indicates a lack of convergence on that value”
(Miller, 2006: 426).
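To make the statistic concrete, consider the minimal sketch below, which computes an interrater correlation from paired referee ratings. The ratings are invented purely for illustration (they are not Starbuck's data); a correlation near .12 means that knowing one referee's verdict tells us almost nothing about the other's.

```python
# Minimal sketch: interrater reliability as the Pearson correlation between
# two referees' ratings of the same submissions (cf. Starbuck, 2003).
# The ratings are hypothetical, chosen only to make the computation concrete.
import numpy as np

reviewer_a = np.array([4, 2, 5, 3, 1, 4, 2, 5, 3, 2])  # referee A, 1-5 scale
reviewer_b = np.array([2, 4, 3, 5, 2, 1, 3, 2, 4, 5])  # referee B, same ten papers

r = np.corrcoef(reviewer_a, reviewer_b)[0, 1]
print(f"interrater correlation: r = {r:.2f}")
# A value near zero means the two verdicts are close to statistically independent.
```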
Journal reviewing also shows evidence of bias in
data indicating that articles that agree with re-
ceived wisdom are more likely to be accepted than
those that challenge dominant belief. So, for in-
stance, Mahoney (1977) found that referees were
more likely to reject a study with evidence that
disconfirmed widely held hypotheses and were
likely to accept an otherwise-identical paper that
supported existing beliefs. Goodstein and Brazis
(1970) also reported a bias against controversial
findings. Kuhn (1972), among others, has com-
mented on the conservative nature of science, not-
ing that scientists hold to old ideas even in the
presence of disconfirming evidence. This conserva-
tive stance may be useful in the sense that most
innovations fail, but it also constrains the likeli-
hood that innovations in practice will arise from
the academy. And other forms of bias exist in the
review process. Ceci and Peters (1982), for in-
stance, found a bias against authors from low-pres-
tige institutions.
If journal reviewing is unreliable and biased
against controversial or novel findings, then two
empirical consequences logically follow. It should
be the case that many important and new theoreti-
cal statements will be made in books or other out-
lets and not in journals, particularly the most pres-
tigious and selective journals, and that originators
of important theoretical work will report difficulty
in getting that work published. This is precisely
what Campanario (1993) found by examining more
than 300 commentaries by authors of classic pa-
pers, many of whom reported having trouble get-
ting their ideas into print. As Rynes noted, “It has
been widely demonstrated . . . that the social and
political forces associated with scientific progress
tend toward conservatism” (2006: 1099), which
makes it tough to get new ideas into print.
In the organization sciences, many of the major
theoretical contributions have appeared in books or
in less-prestigious journals. The resource-based
view of strategy (Barney, 1991), the industry struc-
ture-conduct-performance paradigm in strategic
management (Porter, 1979), transactions cost the-
ory (Williamson, 1975), the relationship between
agency theory and corporate governance (Jensen &
Meckling, 1976), charismatic leadership (Bass,
1985), stakeholder theory (Freeman, 1984), organi-
zational demography (Pfeffer, 1983), escalating
commitment to ineffective courses of action (Staw,
1976), and resource dependence theory (Pfeffer &
Salancik, 1978)—a partial list of important ideas—
were all published either first in books or chapters
or, if in articles, in journals that were not top-rated.
Second, unreliability and conservatism in the re-
view process should lessen the differences in qual-
ity between papers published in more and less
prestigious journals. Glick, Miller, and Cardinal (in
press) summarized research that showed, using ci-
tation impact as the dependent measure, relatively
small differences between more prestigious and
less prestigious journals, with less than 10 percent
of the variation in article citations being associated
with a journal and its quality. Starbuck (2005) doc-
umented a decline in the citation advantage of the
most prestigious journals during the period from
1981 to 2001. These results are not necessarily in-
consistent with those reported by Judge and his
coauthors (2007). In that study, the amount of vari-
ation accounted for by journal citation rate is less
than 20 percent, and these authors did not explore
whether the factors affecting article citation
changed over time to provide fewer advantages to
publishing in more prestigious journals, as Star-
buck (2005) argued.
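One way to see what such variance-partition claims mean is an ANOVA-style decomposition: the share of total citation variance that lies between journals rather than between articles within the same journal. The sketch below is a hedged illustration with synthetic citation counts; the journal means and distributions are assumptions, not data from the studies cited.

```python
# Hedged sketch (synthetic data): the share of citation variance "associated
# with a journal" = between-journal sum of squares / total sum of squares.
import numpy as np

rng = np.random.default_rng(0)
journal_means = [30.0, 22.0, 18.0]  # hypothetical mean citations per journal
groups = [rng.gamma(shape=2.0, scale=mu / 2.0, size=200)  # skewed counts,
          for mu in journal_means]                        # like real citation data

all_cites = np.concatenate(groups)
grand_mean = all_cites.mean()
ss_total = ((all_cites - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
print(f"share of citation variance between journals: {ss_between / ss_total:.1%}")
# When within-journal spread dwarfs the differences in journal means, the
# between-journal share is small, the pattern the studies above describe.
```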
If the compilation of knowledge from research
published in academic journals is to form a foun-
dation for policy prescriptions and management
practices, another issue in journal reviewing looms
large: the overwhelming tendency to publish only
results that show significant effects and to not pub-
lish papers that fail to find effects or replicate find-
ings. Hubbard and Armstrong, summarizing empir-
ical investigations of this issue, noted, “A number
of studies have shown that peer review is biased
against the publication of null . . . or so called neg-
ative results” (1997: 337). This means that pub-
lished results are systematically biased in favor of
those showing predicted effects, which in turn
means that meta-analyses, which invariably rely
mostly if not exclusively on published studies, can
easily overestimate actual effect sizes. As Mc-
Daniel, Rothstein, and Whetzel (2006) noted, pro-
cedures exist for attempting to correct for this sam-
pling error in summarizing what existing research
implies about effects. Because knowing what
doesn’t work is often as important as knowing what
does, it would be nice to encourage the publication
of studies showing what ideas, particularly those
that are widely believed, aren’t true.
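The mechanism is easy to demonstrate. In the hedged simulation below, many studies estimate the same small true effect, but only those reaching p < .05 get "published"; averaging the published estimates, as a naive meta-analysis would, overstates the true effect considerably. All parameters are invented for illustration.

```python
# Hedged simulation: selective publication of significant results inflates
# the average published effect size. Parameters are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n, n_studies = 0.10, 50, 2000  # small true effect, 50 subjects per group
all_d, published_d = [], []
for _ in range(n_studies):
    treat = rng.normal(true_d, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
    all_d.append((treat.mean() - control.mean()) / pooled_sd)  # Cohen's d
    if stats.ttest_ind(treat, control).pvalue < 0.05:  # only "significant" results
        published_d.append(all_d[-1])                  # make it into print

print(f"true effect:           {true_d:+.2f}")
print(f"mean over all studies: {np.mean(all_d):+.2f}")
print(f"mean over 'published': {np.mean(published_d):+.2f}")  # noticeably inflated
```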
Finally, in the domain of management research,
there is a preoccupation with theory as well as an
interest in novelty, and both of these tastes appear
to take precedence over the task of cumulating a lot
of data and knowledge about what is actually going
on and what does and doesn’t work. Bergh (2003:
136) noted that to get published, one needed to
offer empirical and theoretical contributions; that
the work needed to be “interesting”; and that one
screen applied to articles in the review process was
“whether a contribution is surprising and unex-
pected” (2003: 136). Mone and McKinley (1993)
commented on the downside of this search for nov-
elty, and Hambrick (2007) wrote persuasively about
some of the costs of an excessive preoccupation
with theory over facts.
Consider a study of the effect of pay for perfor-
mance on the quality of care and outcomes for
patients suffering heart attacks (Glickman et al.,
2007). Given the pressure to tie health system (hos-
pital) reimbursement and physician compensation
to performance in health care, the effect of pay-for-
performance in this setting is a very important
topic. Also, considering the importance of the de-
pendent variable, mortality from heart attacks, the
question posed has obvious policy and practical
relevance. But there is nothing particularly theoret-
ically “new” or innovative in this (published)
study—pay for performance and even the condi-
tions under which it might or might not work is an
old topic in management research. The methods are
rigorous and the data appropriate but not particu-
larly new or inventive. And there is little “surpris-
ing” or “unexpected” in the results: “The pay-for-
performance program was not associated with a
significant incremental improvement in quality of
care or outcomes for acute myocardial infarction”
(Glickman et al., 2007: 2373). I doubt if this paper
could or would be published in a major manage-
ment or organizational research journal. And
more’s the pity, because accumulating evidence on
what works, and what doesn’t, is fundamentally
important for learning about management, improv-
ing managerial practice, and actually providing the
grist for the meta-analytic mill that the field so
loves (Eden, 2002). And that view doesn’t even
consider the possible benefits for people who have
heart attacks and depend on the medical system
and its management for their care.
Unfortunately, this quest for “what’s new” rather
than “what’s true” and a lack of interest in data and
scientific findings also afflict practitioner journals
in management. As Rynes, Giluk, and Brown (2007)
documented, practitioner-oriented publications in
human resource management fail to disseminate
fundamental and important research findings. As
Guest (2007) noted, this is not just a U.S. phenom-
enon, but one that occurs in the United Kingdom as
well. In talking to the editors and publishers of
important practitioner-oriented publications, Guest
found an interest in stories, case studies, relevant
examples, and new ideas, but relatively little com-
mitment to publishing the sorts of summaries and
findings that one sees in medicine and that would
be required to build an evidence-based practice.
The fact that the interest in novelty rather than
truth besets both academic and practitioner outlets
does not diminish the importance of remedying
these biases.
Finally, as Frey (2003) forcefully argued, the ed-
iting and reviewing process tends to distort or sup-
press the original insights and points of view of
researchers even if they get their work published.
The numbers of management journals and submis-
sions are rising, and—because editors and review-
ers in management volunteer the time they spend
filling their important roles—reviewers and editors
are scarce resources. That gives the occupants of
these positions power. Casual observation suggests
that when people assume significant editorial re-
sponsibilities, citations to their work in the jour-
nals they edit tend to go up; this observation could
be systematically empirically examined. Editors
and reviewers, in positions of power, have a ten-
dency to engage in coproduction, to “help” an au-
thor write the paper they want to see or the paper
they might have written had they done the partic-
ular study. As Frey argued, “Authors only get their
papers accepted if they intellectually prostitute
themselves by slavishly following the demands”
(2003: 205) of people who have no property rights
to the journals or, for that matter, to the works they
print. The process Frey so eloquently described
and that most readers of this article will have lived
through almost assuredly curtails innovation and
results in a conservative and homogenizing bias in
the publication process.
Academic Career Processes in Business Schools
Nor are problems confined to publication and
reviewing issues. Career processes in business
schools are not likely to provide incentives encour-
aging research that will have important effects on
public policy or management practice. In fact, as
Glick et al. (in press), among others, have docu-
mented, career processes are beset by problems and
issues about as serious as those that beset the jour-
nal review processes. Glick and his colleagues
showed that a relatively high proportion (43%) of
people with doctoral degrees in management—
even degrees from middle- and top-tier schools—
leave the field within 16 years of graduation. Fur-
ther, Glick et al. showed that talent is widely
distributed among schools, in that the 32 charter
members of the Academy of Management Journal’s
Hall of Fame were dispersed over 25 universities,
and a listing of the top 100 scholars as assessed by
their citation impact found them in 52 different
universities, with only 2 schools having as many as
5 people from the list. Their findings are consistent
with career processes of considerable randomness,
something to be expected in a field with a low level
of paradigm development (Pfeffer, 1993). Although
Glick et al. appropriately worried about the conse-
quences of the career processes they describe for
people seeking to make a life in the organization
sciences, there are also implications for the likeli-
hood of producing important, relevant, managerial
research.
As Laura Esserman, MBA, M.D., and director of a
breast cancer research and treatment center at the
University of California, San Francisco, has noted
in comments to Stanford MBA students, research in
science now entails much more collaboration than
in the past. Research in medicine, engineering, and
in many of the physical sciences is likely to be
team-based. Teams permit more continuity in re-
search efforts over time (since the research program
is less dependent on a single individual), help
bring more resources to bear on research questions
(by drawing on more people), and permit the gath-
ering and analysis of more data (through the efforts
of more people). Larger research teams may also
provide the advantage of multiple perspectives and
skill sets, an advantage in achieving quality noted
long ago in the literature on group decision making
(e.g., Davis, 1969). One striking thing about the
management innovations described in Mol and
Birkinshaw (forthcoming) is the extent to which
these ideas often developed across organizations
and through the actions and interactions of a num-
ber of managers attempting to solve some problem.
Although teams and teamwork are things that
management researchers have studied, participa-
tion in teams and teamwork is not something many
of them do as a style of research—and for good
reasons. Everyone who has participated in meet-
ings involving the evaluation of people with exten-
sive collaborative research records is familiar with
the attempts to parse out the relative contributions
of the various people who worked with the focal
person and to ensure that the person being evalu-
ated has not somehow been riding on the coattails
of others. The penalties for collaboration are rein-
forced by a criterion often invoked in reviews: “Is
this individual one of the ‘x’ best?” Being part of a
research team makes it more difficult to stand out.
And the criterion of relative status is inevitably and
by definition zero-sum. So the competition for sta-
tus that is part and parcel of the academic career
process in management discourages collaborative
research efforts and the building of the sort of lab-
oratories that one sees in the physical sciences and
medicine.
As Judge et al. (2007) noted, citations are of grow-
ing importance as a metric of performance. This is
as true of individual performance as it is of the
performance of academic institutions. Judge et al.’s
data suggest that articles that are either qualitative
reviews or meta-analyses are likely to garner more
citations, and their structural equation results indi-
cate that being a meta-analysis is one of the three
most important factors affecting citations of an ar-
ticle. However, as Ilgen noted, researchers who
tried to manage their careers on the basis of these
findings would be led “toward nonempirical re-
views and a journal whose primary audience is not
management scholars” (2007: 508). So the incen-
tives for career success rooted in maximizing cita-
tions have negative effects on the production of
research that will affect management practice. The
uncertainty and dissensus that characterize the
journal review process also have other implications
for the best strategies for constructing a life as a
management scholar—implications that also may
be at variance with the aspirations for management
research outlined at the beginning of this article.
Consider these recommendations for thinking
about research in the context of career strategies
from Glick and his colleagues (in press):
Does the project effectively leverage my prior invest-
ments in one of my platforms?
Did my colleagues get excited by my two-minute
topic description in the hallway?
Did I stimulate controversy with a quick sketch of
the research model? Did I find an anomalous result
in the literature that I might be able to explain? How
much more work is required to complete this
project?
Let me suggest that little about these criteria seems
likely to produce research of importance to man-
agement practice or public policy, or maybe even
research that advances the field’s development.
And don’t misunderstand—I am in no way criticiz-
ing the interesting and informative analysis of
Glick and his colleagues. Their recommendations
follow logically from the data on careers they
present. The problem is with the structure of the
career process, not with its observers.
The Competition among Business Schools
A third structural factor that both diminishes
innovation and steers research in directions that
are at best orthogonal to the concerns of the man-
agement profession is the competition among busi-
ness schools for status and resources. Competition
can often produce uniformity and stifle innovation.
As DiMaggio and Powell (1983) noted, one source
of institutional isomorphism is the quest for legit-
imacy, which an actor sometimes achieves by try-
ing to look legitimate—or trying to appear similar
to others. Doria, Rozanski, and Cohen (2003) com-
mented on how business school curricula have be-
come increasingly similar and how it is far from
clear that everyone offering essentially the same
product makes much strategic sense. The same
thing has happened in research, where the pressure
to conform to an American model and publish in
United States–based journals has intensified over
time (Leung, 2007).
In sort of a story of unintended consequences,
this “Americanization” of research began in part
with a quest on the part of schools, and in some
instances governments, to improve the quality of
business schools and managerial research. So, for
instance, in the United Kingdom, the “Research
Assessment Exercise” (“RAE”; Macdonald & Kam,
2007) is used to periodically evaluate the quality of
research being done at U.K. business schools, with
the results of these assessments determining re-
search-funding levels for the ensuing years until
the exercise is repeated.¹ It turns out that research
quality is measured largely by publications and
citations in high-quality journals, and virtually all
of these are U.S. journals. As Macdonald and Kam
noted, “Professional journals are decidedly out of
favour” and “quality journals are overwhelmingly
seen as publishing mainstream research rather than
niche or interdisciplinary work” (2007: 647).

¹ In a way not dissimilar to practices in U.S. public schools, this system tends to ensure that the “rich” get richer. Instead of allocating resources to help schools improve, the system rewards those that have already achieved some degree of excellence.

The consequences might be funny if they weren’t so depressing. For instance, because there are real
economic consequences linked to a U.K. school’s
ranking in the RAE, the competition for faculty—
and faculty movement—seems to correspond to the
periodicity of the assessments. Because “visiting”
faculty—such as high-status individuals from U.S.
universities—can be counted if they are doing re-
search with a given school’s faculty, there are in-
centives to regularly invite accomplished individ-
uals who already have published in the “right”
places back and to involve them in local research.²

² I have a few colleagues who visit the same U.K. university each summer. There is nothing malign about this—one could reasonably argue that their presence and collaboration on research will help improve the research skills of the local faculty. However, there is a price paid for this “training”: namely, the homogenization of research topics and techniques as the schools in Europe mimic those in the U.S.
This behavior is not confined to the United King-
dom. Macdonald and Kam (2007: 644) commented
on how schools in Australia and even some in
France pay faculty on a piece-rate basis for publi-
cations in top journals, with the payments in Aus-
tralia varying depending on the tier (the ranking) of
the journal. Again, this makes perfect sense in a
world in which real resources flow depending on
faculty publications in prestigious outlets.
This pressure to publish in the ranked journals,
which tend to be U.S.-centered or at least U.S.-
centric, along with the recruiting of faculty in a
global labor market, has contributed to the produc-
tion of some degree of theoretical isomorphism. As
Leung argued with respect to Asian management
research, “The downside of the adaptive response
to the pressure to publish in highly cited journals is
that virtually all Asian management research falls
within the confines of well-known Western theo-
ries” (2007: 512).
Theoretical isomorphism is, by the way, not the
same as the consensus that characterizes high lev-
els of paradigm development, and it is also not
necessarily going to produce research that is useful
for management practice. As institutional theory
tells us, often what get imitated and signaled are
only the most superficial aspects of something, and
these imitated forms have little effect on deep, un-
derlying processes. Meyer and Rowan’s (1977) clas-
sic study of schools noted how these organizations
could appear to be conforming to some institution-
alized sense of what schools should look like, even
when the formal structures that were imitated had
precious little effect on what actually occurred. In
management disciplines, what seems to attract imitation in the quest to signal quality is an attraction to theory (Colquitt & Zapata-Phelan, 2007), methodological sophistication, and, judging by which journals are highly ranked and which are ranked farther down, a disdain for work that informs or might inform professional practice and public policy.
SOME MODEST PROPOSALS
I could go on about these issues at length, because the literature on the topics I have raised is extensive and extends well back in time. Critiques of business school research, career processes, and peer reviewing are old news (e.g., Porter & McKibbin, 1988). But it was important to lay out some of the issues and an analysis of their root causes in sufficient detail to move to what we might do—and note I don't say are likely to do—to change things.
Two general points inform these proposals. First,
we ought to put what we know into use. There is
extensive research on the innovation process, on
what makes ideas influential, on what managers
do, and on the problems organizations confront.
We ought to use that knowledge in our own man-
agement and organizations. Second, the treatment
ought to correspond, in some way, to the diagnosis
of the problem.
Yet another way to frame a search for what might
or should be different is to ask why medicine, en-
gineering, and education are so different from man-
agement research. Or to ask why, within business
schools, research and teaching about entrepreneur-
ship seem quite different in their degree of connec-
tion to professional practice.
These are important questions that could form
the basis of substantive research. My sense is that
part of the answer in the case of entrepreneurship is
the happy co-occurrence of two forces: strong,
maybe even overwhelming, student and alumni de-
mand coupled with the persistent inability to find
“regular” faculty who could, or would, do research
in the traditional mold on this subject. This is not
to say that there is no research on entrepreneurship
in the typical, elite academic journals or that there
couldn’t be. Rather, it is to note that a need, cou-
pled with an inability to meet that need using
customary approaches, produced—no surprise—
innovation! Some of that innovation involved de-
veloping cotaught courses, where one of the in-
structors was a current or former executive from an
entrepreneurial company. Some of that innovation
entailed hiring people whom we would never have
hired as colleagues using traditional criteria, often
in lecturer roles—entrepreneurs and executives
who were either retired from their primary roles or who taught part-time. Some of the innovation en-
compassed changing what we considered to be re-
search, expanding our definitions to encourage
clinical, qualitative research and case writing (see
Vermeulen, 2007) as well as the use of qualitative
field methods more generally. The closer connec-
tion with professional practice—not from an occa-
sional lecture or executive program but from the
coproduction of teaching and research and more
regular interaction—is a feature that I see, at least
to a somewhat greater extent, in engineering, med-
icine, and education.
These examples suggest that it is possible to be
both relevant and rigorous, to serve the scientific
enterprise even while doing work that informs pol-
icy and practice. So, what might it take, more spe-
cifically, to move us in that direction?
If one issue is that current review and status
processes don’t particularly reward the production
of knowledge that anyone cares about, we need to
change the rewards and how they are allocated. To
take one small example, some years ago the Cali-
fornia Management Review initiated an award for
the best article in each volume. The academic edi-
torial board nominates the three finalists, but a
panel of practitioners selects the winner. Of course,
CMR has a different mission than many of our
journals, and I am not for a moment claiming this
process is perfect. But it does seem that involving
practicing professionals, at least to some degree, in
determining awards and rewards is one reasonable
step toward blending academic and professional
values.
The research by Glick and his colleagues (in
press) and others illustrates that a shockingly high
proportion of papers, even those published in the
elite journals, garner zero citations, with an even
larger percentage obtaining very few. If we take
these data seriously and want our tenure and re-
source allocation criteria to reward impact, then it
seems somewhat inconsistent to have faculty eval-
uation standards that emphasize publishing papers
in certain journals over evaluating the effect of an
individual’s written work, without considering
where it first appeared. This logic suggests that review processes should weight citations more strongly than the number of papers and the outlets in which they appeared. And since citations measure scientific
impact only imperfectly and, moreover, we are pre-
sumably concerned about the effects of research on
professional practice above and beyond just its sci-
entific impact, we ought to assess contributions
along those broader dimensions measuring the ef-
fects of our work as well.
To take a case in point, consider David Kelley.
David is the founder and former CEO of IDEO Prod-
uct Development, a company that has not only won
a large number of design awards but one whose
ideas about innovation and brainstorming have
been recognized and are influential in a large num-
ber of companies and professional service firms.
Kelley, a member of the National Academy of En-
gineering and a full professor on the Stanford en-
gineering school faculty, does not have a Ph.D., and
I am not sure he has published anything, particu-
larly in peer-reviewed journals. No self-respecting
business school using normal academic criteria
would have anything to do with him, even though
one could plausibly argue that IDEO, through both
its design and its management practices and cul-
ture, has had more effect on management than
scores of academic articles combined. The engi-
neering school may have wisdom that many busi-
ness schools lack.
If the current reviewing process is at least some-
what unreliable and conservative, there are possi-
ble alternatives. Data suggest that innovations in
products and services (and there is no reason to
believe this would be less true in the domain of
ideas and research) often come from peripheral ac-
tors who have less invested in existing ways of
thinking and doing things (e.g., Christensen, 1998).
Reviewing is in the hands of relatively few people
who are selected in large measure for their demon-
strated socialization into the prevailing topics, the-
ories, and methodologies of a field. But the opera-
tion of prediction markets (Surowiecki, 2004), and
the practices of companies such as Google that
determine new products and new technologies in
part through a voting process, speak to the desir-
ability of leaving judgments about the worth of
research and ideas open to more people in a more
democratic assessment process. In fact, this is just
what the Social Science Research Network (SSRN)
does. Founded by Michael Jensen, an economist
who definitely believes in markets as arbiters of
quality and importance and who has had his own
troubles in getting some of his more innovative
work into print, SSRN posts pretty much every-
thing and then tracks downloads, providing listings
of the most frequently downloaded papers. Jensen
maintains that leaving publication open and letting
the marketplace for ideas determine the usefulness
and worth of research papers is preferable to having
such decisions reside in the hands of a few people.
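A small simulation suggests why opening evaluation to many judges can work. In the sketch below, which is entirely synthetic (the noise level assigned to each judge is an assumption, not an estimate), every judge observes a paper's latent quality plus independent error; the average of many noisy judgments tracks quality far better than the verdict of a two-referee panel.

```python
# Hedged sketch: averaging many independent, noisy judgments recovers latent
# paper quality better than a two-person panel (cf. Surowiecki, 2004).
import numpy as np

rng = np.random.default_rng(2)
n_papers = 500
quality = rng.normal(0.0, 1.0, n_papers)  # latent "true" quality of each paper

def panel_score(n_judges):
    # Each judge sees quality plus noise twice as large as the signal itself.
    votes = quality + rng.normal(0.0, 2.0, (n_judges, n_papers))
    return votes.mean(axis=0)  # the panel's average verdict per paper

for n_judges in (2, 50):
    r = np.corrcoef(quality, panel_score(n_judges))[0, 1]
    print(f"{n_judges:>2} judges: correlation with true quality = {r:.2f}")
```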
The Academy of Management journals, and
many others, have made substantial progress in
cutting down review times and posting papers
much more expeditiously. This is an important ef-
fort. Not only is innovation encouraged by rapid
prototyping, but also, an inverse linear relationship
may exist between the average publication delay in
a field and the journal impact factor (Yu, Wang, &
Yu, 2005). So it is important to maintain the focus
on expediting review turnaround and Internet
posting.
If we want to build more collaboration, two tar-
gets of intervention emerge. One is the physical
design of our buildings. In business schools faced
with a chronic shortage of space on university cam-
puses, common areas and meeting rooms are often
the first things to go. And business schools typi-
cally look more like traditional office buildings
than like learning laboratories or places that would
facilitate building communities of practice.
The second issue is the implicit message about
collaboration: Collaborate, but not too much, and
certainly not repeatedly with the same people, par-
ticularly if they are more senior than you. Person-
nel reviews are a necessary part of academic gov-
ernance. We should, nonetheless, be conscious of
the extent to which we may be trading work ar-
rangements that might produce more useful and
innovative knowledge for arrangements that make
assigning individual credit easier.
RECONSTRUCTING OUR ENVIRONMENT TO
CREATE A DIFFERENT FUTURE
Environments matter. That is one pervasive les-
son from our field. And different environments are
possible, even in universities. Just look at our col-
leagues in other professional schools. People have
accused me of romanticizing the success of some of
the other professional schools, but I don’t agree. It
is certainly not the case that these schools have
“solved” everything once and for all and that
everything is perfect. But one cannot observe the
advance of medical science and knowledge and its
implementation in practice over the past several
decades, including the almost 50 percent reduction
in death rates from heart disease, and not be im-
pressed. The thrust of the evidence-based medicine
movement was to bring the best scientific knowl-
edge to the bedside (e.g., Rosenberg & Donald, 1995).
As evidence-based medicine has grown, the practi-
cal issues of treatment, diagnosis, and the under-
standing of disease processes have influenced
the research—even the basic science, in some in-
stances—that gets done. In turn, advancing scien-
tific understanding has been implemented in prac-
tice and in the drugs and devices that help to
deliver care. The link between science and practice
is closer, as it seems to be in engineering and com-
puter science as well, but I don’t see any less aca-
demic legitimacy for these fields. If anything, their
science has advanced at least as vigorously as (if not more vigorously than) ours.
In the end, I am optimistic about our ability to do
research that affects not only management practice
but also public policy. This optimism stems from
the remarkable body of knowledge that we and our
colleagues in related social sciences have built over
the past decades, including the 50 years of this
journal. We know a lot about innovation, about the
design of social and physical environments, about
working in teams, about building communities of
practice, and about a lot of other things that are
relevant to doing research that is both scientifically
and professionally significant. My vision is that we
finally use that knowledge—turning our knowing
into doing—to design our own systems, environ-
ments, and work practices. By so doing, we can act
to fulfill the aspirations of many people in the
Academy of Management and also provide substan-
tial service to the world in which we live.
REFERENCES
Bakalar, N. 2007. Review finds drug makers issue more
positive studies. New York Times, February 27: F7.
Bangerter, A., & Heath, C. 2004. The Mozart effect: Trac-
ing the evolution of a scientific legend. British Jour-
nal of Social Psychology, 43: 605–623.
Barber, B. M., Heath, C., & Odean, T. 2003. Good reasons
sell: Reason-based choice among individual and
group investors in the stock market. Management
Science, 49: 1636–1652.
Barney, J. 1991. Firms, resources, and sustained compet-
itive advantage. Journal of Management, 17: 99–
120.
Bass, B. M. 1985. Leadership and performance beyond
expectations. New York: Collier Macmillan.
Berger, J., & Heath, C. 2005. Idea habitats: How the prev-
alence of environmental cues influences the success
of ideas. Cognitive Science, 29: 195–221.
Bergh, D. D. 2003. From the editors: Thinking strategi-
cally about contribution. Academy of Management
Journal, 46: 135–136.
Blumenthal, D., Campbell, E. G., Anderson, M. S.,
Causino, N., & Louis, K. S. 1997. Withholding re-
search results in academic life science: Evidence
from a national survey of faculty. Journal of the
American Medical Association, 277: 1224–1228.
Campanario, J. M. 1993. Consolation for the scientist:
Sometimes it is hard to publish papers that are later
highly-cited. Social Studies of Science, 23: 342–
358.
Ceci, S. J., & Peters, D. 1982. Peer review: A study of
reliability. Change: The Magazine of Higher
Learning, 14(6): 44–48.
Christensen, C. 1998. The innovator’s dilemma. Boston:
Harvard Business School Press.
Colquitt, J. A., & Zapata-Phelan, C. P. 2007. Trends in
theory building and theory testing: A five-decade
study of Academy of Management Journal. Acad-
emy of Management Journal, 50: 1281–1303.
Davenport, T. H., & Prusak, L. 2003. What’s the big idea?
Boston: Harvard Business School Press.
Davis, J. H. 1969. Group performance. Reading, MA:
Addison-Wesley.
DiMaggio, P. J., & Powell, W. W. 1983. The iron cage
revisited: Institutional isomorphism and collective
rationality in organizational fields. American Socio-
logical Review, 48: 147–160.
Doria, J., Rozanski, H., & Cohen, E. 2003. What business
needs from business schools. Strategy + Business,
32: 39–45.
Eden, D. 2002. From the editors: Replication, meta-anal-
ysis, scientific progress, and AMJ’s publication pol-
icy. Academy of Management Journal, 45: 841–
846.
Freeman, R. E. 1984. Strategic management: A stake-
holder approach. Boston: Pittman.
Frey, B. 2003. Publishing as prostitution? Choosing be-
tween one’s own ideas and academic success. Public
Choice, 116: 205–223.
Glick, W. H., Miller, C. C., & Cardinal, L. B. In press.
Making a life in the field of organization science.
Journal of Organizational Behavior.
Glickman, S. W., Ou, F., DeLong, E. R., Roe, M. T., Lytle,
B. L., Mulgund, J., Rumsfeld, J. S., Gibler, W. B.,
Ohman, E. M., Schulman, K. A., & Peterson, E. D.
2007. Pay for performance, quality of care, and out-
comes in acute myocardial infarction. Journal of the
American Medical Association, 297: 2373–2380.
Goodstein, L. D., & Brazis, K. L. 1970. Credibility of
psychologists: An empirical study. Psychological
Reports, 27: 835–838.
Guest, D. 2007. Don’t shoot the messenger: A wake-up
call for academics. Academy of Management Jour-
nal, 50: 1020–1026.
Hambrick, D. C. 1994. Presidential address: What if the
Academy actually mattered? Academy of Manage-
ment Review, 19: 11–16.
Hambrick, D. C. 2007. The field of management’s devo-
tion to theory: Too much of a good thing? Academy
of Management Journal, 50: 1346–1352.
Hubbard, R., & Armstrong, J. S. 1997. Publication bias
against null results. Psychological Reports, 80: 337–
338.
Ilgen, D. R. 2007. Citations to management articles: Cau-
tions for the science about advice for the scientist.
Academy of Management Journal, 50: 507–509.
Jensen, M. C., & Meckling, W. H. 1976. Theory of the
firm: Managerial behavior, agency costs and owner-
ship structure. Journal of Financial Economics, 3:
305–360.
Judge, T. A., Cable, D. M., Colbert, A. E., & Rynes, S. L.
2007. What causes a management article to be cit-
ed—Article, author, or journal? Academy of Man-
agement Journal, 50: 491–506.
Kuhn, T. S. 1972. The structure of scientific revolutions
(2nd ed.). Chicago: University of Chicago Press.
Leung, K. 2007. The glory and tyranny of citation impact:
An East Asian perspective. Academy of Manage-
ment Journal, 50: 510–513.
Macdonald, S., & Kam, J. 2007. Ring a ring o’ roses:
Quality journals and gamesmanship in management
studies. Journal of Management Studies, 44: 640–
655.
Mahoney, M. J. 1977. Publication prejudice: An experi-
mental study of confirmatory bias in the peer review
system. Cognitive Therapy and Research, 1: 161–
175.
McDaniel, M. A., Rothstein, H. R., & Whetzel, D. L. 2006. Publication bias: A case study of four test vendors. Personnel Psychology, 59: 927–953.
Meyer, J. W., & Rowan, B. 1977. Institutionalized organi-
zations: Formal structure as myth and ceremony.
American Journal of Sociology, 83: 340–363.
Miller, C. C. 2006. Peer review in the organizational and
management sciences: Prevalence and effects of re-
viewer hostility, bias, and dissensus. Academy of
Management Journal, 49: 425–431.
Mol, M. J., & Birkinshaw, J. Forthcoming. Giant steps in
management: Key management innovations. Lon-
don: Pearson Education.
Mone, M. A., & McKinley, W. 1993. The uniqueness
value and its consequences for organization studies.
Journal of Management Inquiry, 2: 284–296.
Pearce, J. L. 2004. What do we know and how do we
really know it? Academy of Management Review,
29: 175–179.
Pfeffer, J. 1983. Organizational demography. In L. L.
Cummings & B. M. Staw (Eds.), Research in organ-
izational behavior, vol. 5: 299–357. Greenwich, CT:
JAI Press.
Pfeffer, J. 1993. Barriers to the advance of organizational
science: Paradigm development as a dependent vari-
able. Academy of Management Review, 18: 599–
620.
Pfeffer, J., & Fong, C. T. 2002. The end of business
schools? Less success than meets the eye. Academy
of Management Learning and Education, 1: 78–95.
Pfeffer, J., & Salancik, G. R. 1978. The external control of
organizations: A resource dependence perspec-
tive. New York: Harper & Row.
Podsakoff, P. M., McKenzie, S. B., Bachrach, D. G., &
Podsakoff, N. P. 2005. The influence of management
journals in the 1980s and 1990s. Strategic Manage-
ment Journal, 26: 473–488.
Porter, L. W., & McKibbin, L. E. 1988. Management
1344 December Academy of Management Journal
education and development. New York: McGraw-
Hill.
Porter, M. E. 1979. The structure within industries and
companies’ performance. Review of Economics and
Statistics, 61: 214–227.
Rosenberg, W., & Anna, D. 1995. Evidence-based medi-
cine: An approach to clinical problem-solving. Brit-
ish Medical Journal, 310: 1122–1126.
Rousseau, D. M. 2006. Is there such a thing as “evidence-
based management”? Academy of Management Re-
view, 31: 256–269.
Rynes, S. L. 2006. “Getting on board” with AMJ: Balanc-
ing quality and innovation in the review process.
Academy of Management Journal, 49: 1097–1102.
Rynes, S. L., Giluk, T. L., & Brown, K. G. 2007. The very
separate worlds of academic and practitioner peri-
odicals in human resource management: Implica-
tions for evidence-based management. Academy of
Management Journal, 50: 987–1008.
Starbuck, W. H. 2003. Turning lemons into lemonade:
Where is the value in peer reviews? Journal of Man-
agement Inquiry, 12: 344–351.
Starbuck, W. H. 2005. How much better are the most
prestigious journals? The statistics of academic pub-
lication. Organization Science, 16: 180–200.
Staw, B. M. 1976. Knee-deep in the big muddy: A study
of escalating commitment to a chosen course of ac-
tion. Organizational Behavior and Human Perfor-
mance, 16: 27–44.
Surowiecki, J. 2004. The wisdom of crowds. New York:
Doubleday.
Tahai, A., & Meyer, M. J. 1999. A revealed preference
study of management journals’ direct influences.
Strategic Management Journal, 20: 279–296.
Van de Ven, A., & Johnson, P. E. 2006. Knowledge for
theory and practice. Academy of Management Re-
view, 31: 802–821.
Vermeulen, F. 2007. “I shall not remain insignificant”:
Adding a second loop to matter more. Academy of
Management Journal, 50: 754–761.
Walsh, J. P., Weber, K., & Margolis, J. D. 2003. Social
issues and management: Our lost cause found. Jour-
nal of Management, 29: 859–881.
Washburn, J. 2005. University Inc.: The corporate cor-
ruption of higher education. New York: Basic
Books.
Williamson, O. E. Markets and hierarchies. New York:
Free Press.
Yu, G., Wang, X., & Yu, D. 2005. The influence of publi-
cation delays on impact factors. Scientometrics, 64:
235–246.
Jeffrey Pfeffer ([email protected]) is the
Thomas D. Dee II Professor of Organizational Behavior at
the Stanford Graduate School of Business. He received
his Ph.D. from Stanford University. His research interests
include evidence-based management, power and politics
in organizations, and economic language and assump-
tions and their effects on behavior.
2007 1345 Pfeffer
doc_312223455.pdf
In this paper, I argue that we have historically not
done particularly well in fulfilling these aspira-
tions. The structure and processes governing both
the careers of academics and the prepublication
review of their work limit the influence of manage-
ment research on practice, social policy, and even
the terms of public discourse about organizational
issues. These limits prevail despite the good inten-
tions and heroic efforts of journal editors and re-
viewers. If we are serious about our aspirations, we
ought to implement what we know about building
innovative organizations that are more effective in
having knowledge turned into action. Thus, this
essay lays out a set of modest—or possibly not so
modest—proposals. But before I move on to these
topics, it is important to first consider the legiti-
macy and appropriateness of the goals proposed
here for management research.
The author gratefully acknowledges the extremely
thoughtful and very helpful suggestions of Sara Rynes,
Roy Suddaby, and Christine Quinn Trank.
WHAT SHOULD THE ROLE OF BUSINESS
SCHOOLS AND BUSINESS RESEARCH BE?
Conflicts of Interest
As Roy Suddaby has so appropriately and per-
suasively noted in comments that I am sure others
would agree with, the aspirations just described are
not without controversy. Specifically, some might
argue that these objectives for management re-
search are (1) inconsistent with the historical role
of business schools and their faculty as evaluators
of, but not creators or originators of, business prac-
tice, (2) not shared by all in the discipline, and (3)
risky in that closer professional interaction and a
more active role in management innovation raise
the chances that conflicts of interest will arise.
To take the last point first, the risks are clear:
medical schools and, for that matter, engineering
schools and indeed universities as a whole are
fraught with conflicts of interest (see, for instance,
Washburn, 2005) and have certainly been, to some
substantial extent, captured by the industries they
serve. Drug companies have funded clinical trials that are conducted, using university resources, by medical faculty who sometimes hold equity or managerial interests in those same companies. As
public support for universities and university re-
search has waned, the importance of private sup-
port has grown tremendously. For instance, be-
tween 1993 and 2003, industry-sponsored research
at the University of California increased 97 percent
(Washburn, 2005: 19).
These close relationships between industry and
academia almost invariably entail some degree of
mutual influence over the research that gets done
and the questions that get asked as well as over how
that research gets disseminated. Providing one ex-
ample, Blumenthal and colleagues (Blumenthal,
Campbell, Anderson, Causino, & Louis, 1997), sur-
veying life science faculty, found that almost one in
five had delayed publication of research results for
more than six months sometime during the preced-
ing three years to protect commercial interests and
proprietary information. Their analyses showed
that participation in academic-industry research re-
lationships and engaging in the commercialization
of university research were significant predictors of
the decision to delay.
In the management research context, some might
worry that in the effort to obtain careers as advisors
or to get funding from external organizations, the
objectivity with which business school faculty
evaluate the ideas and practices of business organ-
izations could be lessened. Therefore, Suddaby ar-
gued that we risk losing objectivity in becoming
closer participants in the profession and that a
more appropriate role for business school research-
ers is to evaluate the techniques and ideas of others,
providing legitimation but not invention or
development.
One can make at least three responses to this
argument, without for a moment denying its valid-
ity. First, as the medical field illustrates, confining
research to solely an evaluative rather than a devel-
opmental role does not ensure objectivity. For in-
stance, research shows that when drug companies
fund studies of the effectiveness of those drugs, the
results are, not surprisingly, more favorable for the
drugs than when such funding comes from other
sources such as government grants (Bakalar [2007],
and see Washburn [2005: 84] for an extensive re-
view of studies showing how funding source deter-
mines outcome in drug efficacy research). There-
fore, remaining solely in an evaluative role rather
than assuming an inventing or developmental role
does not guarantee an absence of conflict of
interest.
Second, business schools have already been cap-
tured by companies and managerial interests to
some extent, so we may already be paying the cost
without reaping many corresponding benefits.
Walsh, Weber, and Margolis (2003), for instance,
documented the co-occurrence of two trends that
may illustrate the existing influence of companies
on what we study. They showed that, over the past
several decades, a rise in the amount of funding
from wealthy alumni and companies was accompa-
nied by a decline in research on topics of social
responsibility incorporating dependent variables
assessing social impact, rather than economic effi-
ciency. Washburn (2005) cited the comments of a
professor occupying the Kmart Chair in marketing
at Wayne State University: “‘Kmart’s attitude al-
ways has been: What did we get from you this year?
Some professors would say they don’t like that
position, but for me, it’s kept me involved with a
major retailer, and it’s been a good thing’” (Wash-
burn, 2005: 5). The idea that business schools,
heavily dependent on outside donations, are not
already influenced by this dependence and are bas-
tions of unsullied objectivity because management
research is less engaged in the creation or innova-
tion of management practices seems implausible
(e.g., Pfeffer & Salancik, 1978).
And third, in business school disciplines other
than management, most notably finance and eco-
nomics but other areas of study as well, invention
and the economic capture of the fruits of that in-
vention are already well advanced. Finance faculty
such as Nobel prize winners Myron Scholes and
William Sharpe have decamped from academia to
found or serve in important roles in firms that
employ ideas they have been instrumental in de-
veloping, and in some instances, tenured offers in
finance have been made to Ph.D.’s working on Wall
Street. Economic consulting and forecasting firms
have been cofounded by academics who did con-
siderable original research in universities. And the
successful strategy consulting firm Monitor has Mi-
chael Porter of Harvard Business School as one of
its progenitors. A separation between research and
practice, between the world of scholars and practi-
tioners, that may possibly be true for some seg-
ments of management research or in some coun-
tries does not necessarily hold for all elements of
business school faculty even today, at least in the
United States.
And the argument that the closer connection be-
tween business and academics is a change from
past practice may not be based on accurate obser-
vation. The change in the composition of faculty—
from practitioners with experience in business to
scholars with doctoral degrees—is relatively re-
cent, occurring in part as a response to the Ford and
Carnegie reports in the 1950s that criticized busi-
ness schools as nothing more than glorified trade
schools and pushed for more rigorous social sci-
ence research. Even today, considering the substan-
tial number of former entrepreneurs and business
leaders serving in lecturer roles in schools, it is far
from clear that there is as much separation as some
believe, although we may not be organized to ben-
efit from these ties. Moreover, the recent evolution
of recruiting in management from scholars with
degrees from business schools to scholars with de-
grees from the social science departments of eco-
nomics, psychology, and sociology suggests that
the boundaries between professional practice and
business schools are not constant, either across in-
stitutions or over time.
All of this is not to say that Suddaby and others
aren’t correct in noting that some will not approve
of aspirations for the role of management research I
have articulated and that conflicts of interest aren’t
a problem. But it is clear, from considering other
areas of research within business schools and other
professional schools, that we have a choice as to
what role we want management research to play
and how to construct that role. Data about the ef-
fects of various governance arrangements and roles
and responsibilities and, of course, values and pref-
erences, should inform these decisions.
The Place of Management Research in the
Marketplace for Ideas
The concept that ideas “compete” in a market-
place and that empirical validity is only one—and
maybe not even the most important—characteristic
that determines which ideas win seems both useful
and valid (see, for instance, Bangerter & Heath,
2004; Barber, Heath, & Odean, 2003; Berger &
Heath, 2005). A consideration of the management
idea marketplace suggests that management re-
search produced by academics does not fare partic-
ularly well in this competition and that manage-
ment scholars have not been the progenitors of the
most important management concepts.
Recently, Mol and Birkinshaw (forthcoming)
wrote a book briefly describing what they believe
are the world’s 50 most important management in-
novations—things such as lean manufacturing, ac-
tivity-based cost accounting, T-groups, matrix or-
ganizational structures, and brand management.
What is noteworthy is that in none of the 50 in-
stances did the idea or innovation originate with an
academic or in academic research, at least accord-
ing to their brief descriptions of the innovations
and how they evolved.
A similar, although not quite as dismal, picture
of the role of academic research emerges in Daven-
port and Prusak’s (2003) review of important con-
temporary management ideas. They noted that
“most business schools . . . have not been very ef-
fective in the creation of useful business ideas”
(Davenport & Prusak, 2003: 81). Pfeffer and Fong
(2002) examined business school research’s impact
using several indicators: the proportion of business
best-sellers and the proportion of books cited by
BusinessWeek as best business books written by
business school faculty, citations to books written
by faculty compared to citations to other business
books, and the proportion of leading management
ideas and techniques covered in a Bain survey that
originated with business school faculty rather than
with a consulting firm or a company. Pfeffer and
Fong concluded that business school research was
making a modest contribution to management prac-
tice compared to research and ideas that came from
consulting firms, journalists, and companies.
Without question, each of these compilations
and assessments can be criticized for flaws in
method, sample, or both. But when a number of
different people look at the same basic question—
the relative importance of academic research in
producing useful managerial ideas or innova-
tions—and come to essentially the same conclusion
using different time periods, methods, and criteria,
there must be some kernel of truth in the observa-
tion that management research has not played as
prominent a role in the marketplace of ideas as it
might, and possibly should.
THE SOURCES OF THE PROBLEM
In seeking to understand why management re-
search may have had less effect on practice than
research in other professional fields, as well as
other differences, one undoubtedly encounters
many explanations. An idea that ought to be re-
jected immediately is absence of talent, effort, or
attention on the part of individuals engaged in the
enterprise. As evidenced by the enormous amount
of self-reflection in editorial essays by Eden (2002),
Bergh (2003), and Rynes (2006) and the insightful
discussions on reviewing, theory, and the scientific
process in virtually all recent issues of this journal,
AMJ’s people are consciously concerned with the
reviewing process and with what constitutes a con-
tribution, and they have encouraged the use of mul-
tiple methods and a variety of theoretical perspec-
tives. Recent research has shown that where (i.e., in
which journal) an article is published has a large
effect on its being cited, and the evidence is that the
various Academy of Management journals, partic-
ularly AMJ and AMR, are prestigious and have high
impact (Judge, Cable, Colbert, & Rynes, 2007; Pod-
sakoff, MacKenzie, Bachrach, & Podsakoff, 2005;
Tahai & Meyer, 1999).
In view of what we have learned from the quality
movement, it is unlikely that explanations for prob-
lems with management research and its impact are
to be found by looking to any sort of individual
deficiencies or motivations. Instead, more struc-
tural explanations that are relevant to article review
processes, career reward contingencies, and the ef-
fects of competition for status among schools seem
like reasonable places to begin an inquiry into what
may be going wrong.
The Journal Review Process
Management research is published mostly in
peer-reviewed journals and also in books and pub-
lications that are not reviewed, such as magazines,
newsletters, and so forth. It is axiomatic that in
scientific fields, the journal review process is im-
portant. Given the typically extremely high rejec-
tion rates (often over 90 percent) in social science
journals, including those published by the Acad-
emy of Management, journal review essentially de-
termines what papers get into print. In turn, the
prestige of a particular publication outlet partially
determines how much attention and influence the
research it publishes has (Judge et al., 2007). But
the journal review process is fraught with
problems.
At the most basic level, much management and
other social science reviewing is unreliable, a fact
that has been well-documented and extensively
noted. In one classic study (Peters & Ceci, 1982), 12
previously published papers (retyped with author-
ship changed to fictional names) were resubmitted
to the same prestigious psychology journals that
had previously published them. In just 3 of the 12
cases did reviewers even recognize that the al-
ready-published papers had been published. Of the
other 9 cases, in 8 instances these previously ac-
cepted and published works were rejected. Star-
buck (2003), with access to 500 pairs of reviews of
papers during his tenure as editor of Administra-
tive Science Quarterly, reported an interrater corre-
lation of just .12. Miller (2006: 427) presented the
results of a number of studies showing, for the most
part, fairly low levels of agreement among referees.
He noted that “if a journal submission has a true
value in some abstract sense, reviewer dissensus
indicates a lack of convergence on that value”
(Miller, 2006: 426).
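For concreteness, here is a minimal sketch of what such an agreement statistic means operationally. Everything in it is my illustration: the function is an ordinary Pearson correlation, and the ratings are invented toy numbers, not Starbuck's or anyone else's actual review data.

```python
# Illustrative only: Pearson correlation between two reviewers' scores of the
# same submissions, the statistic behind a reported interrater correlation.
from statistics import mean

def interrater_correlation(scores_a, scores_b):
    """Pearson correlation between paired reviewer ratings."""
    ma, mb = mean(scores_a), mean(scores_b)
    cov = sum((a - ma) * (b - mb) for a, b in zip(scores_a, scores_b))
    var_a = sum((a - ma) ** 2 for a in scores_a)
    var_b = sum((b - mb) ** 2 for b in scores_b)
    return cov / (var_a * var_b) ** 0.5

# Two reviewers rate the same ten submissions on a 1-5 scale (invented data).
reviewer_1 = [4, 2, 5, 3, 1, 4, 2, 3, 5, 2]
reviewer_2 = [2, 4, 3, 3, 4, 1, 5, 2, 3, 4]
print(round(interrater_correlation(reviewer_1, reviewer_2), 2))
```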
Journal reviewing also shows evidence of bias in
data indicating that articles that agree with re-
ceived wisdom are more likely to be accepted than
those that challenge dominant belief. So, for in-
stance, Mahoney (1977) found that referees were
more likely to reject a study with evidence that
disconfirmed widely held hypotheses and were
likely to accept an otherwise-identical paper that
supported existing beliefs. Goodstein and Brazis
(1970) also reported a bias against controversial
findings. Kuhn (1972), among others, has com-
mented on the conservative nature of science, not-
ing that scientists hold to old ideas even in the
presence of disconfirming evidence. This conserva-
tive stance may be useful in the sense that most
innovations fail, but it also constrains the likeli-
hood that innovations in practice will arise from
the academy. And other forms of bias exist in the
review process. Ceci and Peters (1982), for in-
stance, found a bias against authors from low-pres-
tige institutions.
If journal reviewing is unreliable and biased
against controversial or novel findings, then two
empirical consequences logically follow. It should
be the case that many important and new theoreti-
cal statements will be made in books or other out-
lets and not in journals, particularly the most pres-
tigious and selective journals, and that originators
of important theoretical work will report difficulty
in getting that work published. This is precisely
what Campanario (1993) found by examining more
than 300 commentaries by authors of classic pa-
pers, many of whom reported having trouble get-
ting their ideas into print. As Rynes noted, “It has
been widely demonstrated . . . that the social and
political forces associated with scientific progress
tend toward conservatism” (2006: 1099), which
makes it tough to get new ideas into print.
In the organization sciences, many of the major
theoretical contributions have appeared in books or
in less-prestigious journals. The resource-based
view of strategy (Barney, 1991), the industry struc-
ture-conduct-performance paradigm in strategic
management (Porter, 1979), transactions cost the-
ory (Williamson, 1975), the relationship between
agency theory and corporate governance (Jensen &
Meckling, 1976), charismatic leadership (Bass,
1985), stakeholder theory (Freeman, 1984), organi-
zational demography (Pfeffer, 1983), escalating
commitment to ineffective courses of action (Staw,
1976), and resource dependence theory (Pfeffer &
Salancik, 1978)—a partial list of important ideas—
were all published either first in books or chapters
or, if in articles, in journals that were not top-rated.
Second, unreliability and conservatism in the re-
view process should lessen the differences in qual-
ity between papers published in more and less
prestigious journals. Glick, Miller, and Cardinal (in
press) summarized research that showed, using ci-
tation impact as the dependent measure, relatively
small differences between more prestigious and
less prestigious journals, with less than 10 percent
of the variation in article citations being associated
with a journal and its quality. Starbuck (2005) doc-
umented a decline in the citation advantage of the
most prestigious journals during the period from
1981 to 2001. These results are not necessarily in-
consistent with those reported by Judge and his
coauthors (2007). In that study, the amount of vari-
ation accounted for by journal citation rate is less
than 20 percent, and these authors did not explore
whether the factors affecting article citation
changed over time to provide fewer advantages to
publishing in more prestigious journals, as Star-
buck (2005) argued.
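To make the variance-partitioning claim concrete, the sketch below shows the kind of decomposition behind such figures, using entirely hypothetical journals and citation counts of my own invention: when citations are highly skewed within journals, the journal label accounts for only a small share of the total variance.

```python
# Hypothetical illustration: share of variance in article citations that is
# associated with the journal (a simple between-group variance share).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Invented citation counts per article, grouped by invented journal.
journals = {
    "journal_a": [60, 3, 1, 0, 22, 5],
    "journal_b": [9, 0, 4, 15, 2, 70],
}
all_counts = [c for counts in journals.values() for c in counts]
grand_mean = sum(all_counts) / len(all_counts)
between = sum(
    len(counts) * ((sum(counts) / len(counts)) - grand_mean) ** 2
    for counts in journals.values()
) / len(all_counts)
print("share of citation variance associated with journal: {:.1%}".format(
    between / variance(all_counts)))
```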
If the compilation of knowledge from research
published in academic journals is to form a foun-
dation for policy prescriptions and management
practices, another issue in journal reviewing looms
large: the overwhelming tendency to publish only
results that show significant effects and to not pub-
lish papers that fail to find effects or replicate find-
ings. Hubbard and Armstrong, summarizing empir-
ical investigations of this issue, noted, “A number
of studies have shown that peer review is biased
against the publication of null . . . or so called neg-
ative results” (1997: 337). This means that pub-
lished results are systematically biased in favor of
those showing predicted effects, which in turn
means that meta-analyses, which invariably rely
mostly if not exclusively on published studies, can
easily overestimate actual effect sizes. As Mc-
Daniel, Rothstein, and Whetzel (2006) noted, pro-
cedures exist for attempting to correct for this sam-
pling error in summarizing what existing research
implies about effects. Because knowing what
doesn’t work is often as important as knowing what
does, it would be nice to encourage the publication
of studies showing what ideas, particularly those
that are widely believed, aren’t true.
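The arithmetic of that overestimation is easy to demonstrate with a short simulation. The sketch below is mine, under assumed values (a small true effect, fifty observations per study, a two-tailed p < .05 filter); it is not a reanalysis of any study cited here, but it shows how a literature that "publishes" only significant results inflates the average effect a meta-analysis of that literature would see.

```python
# Minimal simulation with assumed parameters: publishing only significant
# results inflates the mean effect visible in the published record.
import random

random.seed(1)
TRUE_EFFECT, N_PER_STUDY, N_STUDIES = 0.1, 50, 2000

def run_study():
    """Return the observed mean effect and whether it clears p < .05."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    observed = sum(sample) / N_PER_STUDY
    std_error = 1.0 / N_PER_STUDY ** 0.5  # unit variance assumed known
    return observed, abs(observed / std_error) > 1.96

results = [run_study() for _ in range(N_STUDIES)]
published = [m for m, significant in results if significant]
print("true effect: %.2f" % TRUE_EFFECT)
print("mean effect across all studies:   %.3f" % (
    sum(m for m, _ in results) / N_STUDIES))
print("mean effect, 'published' only:    %.3f" % (
    sum(published) / len(published)))
```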
Finally, in the domain of management research,
there is a preoccupation with theory as well as an
interest in novelty, and both of these tastes appear
to take precedence over the task of cumulating a lot
of data and knowledge about what is actually going
on and what does and doesn’t work. Bergh (2003:
136) noted that to get published, one needed to
offer empirical and theoretical contributions; that
the work needed to be “interesting”; and that one
screen applied to articles in the review process was
“whether a contribution is surprising and unex-
pected” (2003: 136). Mone and McKinley (1993)
commented on the downside of this search for nov-
elty, and Hambrick (2007) wrote persuasively about
some of the costs of an excessive preoccupation
with theory over facts.
Consider a study of the effect of pay for perfor-
mance on the quality of care and outcomes for
patients suffering heart attacks (Glickman et al.,
2007). Given the pressure to tie health system (hos-
pital) reimbursement and physician compensation
to performance in health care, the effect of pay-for-
performance in this setting is a very important
topic. Also, considering the importance of the de-
pendent variable, mortality from heart attacks, the
question posed has obvious policy and practical
relevance. But there is nothing particularly theoret-
ically “new” or innovative in this (published)
study—pay for performance and even the condi-
tions under which it might or might not work is an
old topic in management research. The methods are
rigorous and the data appropriate but not particu-
larly new or inventive. And there is little “surpris-
ing” or “unexpected” in the results: “The pay-for-
performance program was not associated with a
significant incremental improvement in quality of
care or outcomes for acute myocardial infarction”
(Glickman et al., 2007: 2373). I doubt if this paper
could or would be published in a major manage-
ment or organizational research journal. And
more’s the pity, because accumulating evidence on
what works, and what doesn’t, is fundamentally
important for learning about management, improv-
ing managerial practice, and actually providing the
grist for the meta-analytic mill that the field so
loves (Eden, 2002). And that view doesn’t even
consider the possible benefits for people who have
heart attacks and depend on the medical system
and its management for their care.
Unfortunately, this quest for “what’s new” rather
than “what’s true” and a lack of interest in data and
scientific findings also afflicts practitioner journals
in management. As Rynes, Giluk, and Brown (2007)
documented, practitioner-oriented publications in
human resource management fail to disseminate
fundamental and important research findings. As
Guest (2007) noted, this is not just a U.S. phenom-
enon, but one that occurs in the United Kingdom as
well. In talking to the editors and publishers of
important practitioner-oriented publications, Guest
found an interest in stories, case studies, relevant
examples, and new ideas, but relatively little com-
mitment to publishing the sorts of summaries and
findings that one sees in medicine and that would
be required to build an evidence-based practice.
The fact that the interest in novelty rather than
truth besets both academic and practitioner outlets
does not diminish the importance of remedying
these biases.
Finally, as Frey (2003) forcefully argued, the ed-
iting and reviewing process tends to distort or sup-
press the original insights and points of view of
researchers even if they get their work published.
The numbers of management journals and submis-
sions are rising, and—because editors and review-
ers in management volunteer the time they spend
filling their important roles—reviewers and editors
are scarce resources. That gives the occupants of
these positions power. Casual observation suggests
that when people assume significant editorial re-
sponsibilities, citations to their work in the jour-
nals they edit tend to go up; this observation could
be systematically empirically examined. Editors
and reviewers, in positions of power, have a ten-
dency to engage in coproduction, to “help” an au-
thor write the paper they want to see or the paper
they might have written had they done the partic-
ular study. As Frey argued, “Authors only get their
papers accepted if they intellectually prostitute
themselves by slavishly following the demands”
(2003: 205) of people who have no property rights
to the journals or, for that matter, to the works they
print. The process Frey so eloquently described
and that most readers of this article will have lived
through almost assuredly curtails innovation and
results in a conservative and homogenizing bias in
the publication process.
Academic Career Processes in Business Schools
Nor are problems confined to publication and
reviewing issues. Career processes in business
schools are not likely to provide incentives encour-
aging research that will have important effects on
public policy or management practice. In fact, as
Glick et al. (in press), among others, have docu-
mented, career processes are beset by problems and
issues about as serious as those that beset the jour-
nal review processes. Glick and his colleagues
showed that a relatively high proportion (43%) of
people with doctoral degrees in management—
even degrees from middle- and top-tier schools—
leave the field within 16 years of graduation. Fur-
ther, Glick et al. showed that talent is widely
distributed among schools, in that the 32 charter
members of the Academy of Management Journal’s
Hall of Fame were dispersed over 25 universities
and a listing of the top 100 scholars as assessed by
their citation impact found them in 52 different
universities, with only 2 schools having as many as
5 people from the list. Their findings are consistent
with career processes of considerable randomness,
something to be expected in a field with a low level
of paradigm development (Pfeffer, 1993). Although
Glick et al. appropriately worried about the conse-
quences of the career processes they describe for
people seeking to make a life in the organization
sciences, there are also implications for the likeli-
hood of producing important, relevant, managerial
research.
As Laura Esserman, MBA, M.D., and director of a
breast cancer research and treatment center at the
University of California, San Francisco, has noted
in comments to Stanford MBA students, research in
science now entails much more collaboration than
in the past. Research in medicine, engineering, and
in many of the physical sciences is likely to be
team-based. Teams permit more continuity in re-
search efforts over time (since the research program
is less dependent on a single individual), help
bring more resources to bear on research questions
(by drawing on more people), and permit the gath-
ering and analysis of more data (through the efforts
of more people). Larger research teams may also
provide the advantage of multiple perspectives and
skill sets, an advantage in achieving quality noted
long ago in the literature on group decision making
(e.g., Davis, 1969). One striking thing about the
management innovations described in Mol and Birkinshaw (forthcoming) is the extent to which
these ideas often developed across organizations
and through the actions and interactions of a num-
ber of managers attempting to solve some problem.
Although teams and teamwork are things that
management researchers have studied, participa-
tion in teams and teamwork is not something many
of them do as a style of research—and for good
reasons. Everyone who has participated in meet-
ings involving the evaluation of people with exten-
sive collaborative research records is familiar with
the attempts to parse out the relative contributions
of the various people who worked with the focal
person and to ensure that the person being evalu-
ated has not somehow been riding on the coattails
of others. The penalties for collaboration are rein-
forced by a criterion often invoked in reviews: “Is
this individual one of the ‘x’ best?” Being part of a
research team makes it more difficult to stand out.
And the criterion of relative status is inevitably and
by definition zero-sum. So the competition for sta-
tus that is part and parcel of the academic career
process in management discourages collaborative
research efforts and the building of the sort of lab-
oratories that one sees in the physical sciences and
medicine.
As Judge et al. (2007) noted, citations are of grow-
ing importance as a metric of performance. This is
as true of individual performance as it is for the
performance of academic institutions. Judge et al.’s
data suggest that articles that are either qualitative
reviews or meta-analyses are likely to garner more
citations, and their structural equation results indi-
cate that being a meta-analysis is one of the three
most important factors affecting citations of an ar-
ticle. However, as Ilgen noted, researchers who
tried to manage their careers on the basis of these
findings would be led “toward nonempirical re-
views and a journal whose primary audience is not
management scholars” (2007: 508). So the incen-
tives for career success rooted in maximizing cita-
tions have negative effects on the production of
research that will affect management practice. The
uncertainty and dissensus that characterize the
journal review process also have other implications
for the best strategies for constructing a life as a
management scholar—implications that also may
be at variance with the aspirations for management
research outlined at the beginning of this article.
Consider these recommendations for thinking
about research in the context of career strategies
from Glick and his colleagues (in press):
Does the project effectively leverage my prior invest-
ments in one of my platforms?
Did my colleagues get excited by my two-minute
topic description in the hallway?
Did I stimulate controversy with a quick sketch of
the research model? Did I find an anomalous result
in the literature that I might be able to explain? How
much more work is required to complete this
project?
Let me suggest that little about these criteria seems
likely to produce research of importance to man-
agement practice or public policy, or maybe even
research that advances the field’s development.
And don’t misunderstand—I am in no way criticiz-
ing the interesting and informative analysis of
Glick and his colleagues. Their recommendations
follow logically from the data on careers they
present. The problem is with the structure of the
career process, not with its observers.
The Competition among Business Schools
A third structural factor that both diminishes
innovation and steers research in directions that
are at best orthogonal to the concerns of the man-
agement profession is the competition among busi-
ness schools for status and resources. Competition
can often produce uniformity and stifle innovation.
As DiMaggio and Powell (1983) noted, one source
of institutional isomorphism is the quest for legit-
imacy, which an actor sometimes achieves by try-
ing to look legitimate—or trying to appear similar
to others. Doria, Rozanski, and Cohen (2003) com-
mented on how business school curricula have be-
come increasingly similar and how it is far from
clear that everyone offering essentially the same
product makes much strategic sense. The same
thing has happened in research, where the pressure
to conform to an American model and publish in
United States–based journals has intensified over
time (Leung, 2007).
In something of a story of unintended consequences,
this “Americanization” of research began in part
with a quest on the part of schools, and in some
instances governments, to improve the quality of
business schools and managerial research. So, for
instance, in the United Kingdom, the “Research
Assessment Exercise” (“RAE”; Macdonald & Kam,
2007) is used to periodically evaluate the quality of
research being done at U.K. business schools, with
the results of these assessments determining re-
search-funding levels for the ensuing years until
the exercise is repeated. (In a way not dissimilar to practices in U.S. public schools, this system tends to ensure that the “rich” get richer: instead of allocating resources to help schools improve, the system rewards those that have already achieved some degree of excellence.) It turns out that research
quality is measured largely by publications and
citations in high-quality journals, and virtually all
of these are U.S. journals. As Macdonald and Kam
noted, “Professional journals are decidedly out of
favour” and “quality journals are overwhelmingly
seen as publishing mainstream research rather than
niche or interdisciplinary work” (2007: 647).
The consequences might be funny if they weren’t so depressing. For instance, because there are real
economic consequences linked to a U.K. school’s
ranking in the RAE, the competition for faculty—
and faculty movement—seems to correspond to the
periodicity of the assessments. Because “visiting”
faculty—such as high-status individuals from U.S.
universities—can be counted if they are doing re-
search with a given school’s faculty, there are in-
centives to regularly invite accomplished individ-
uals who already have published in the “right”
places back and to involve them in local research. (I have a few colleagues who visit the same U.K. university each summer. There is nothing malign about this—one could reasonably argue that their presence and collaboration on research will help improve the research skills of the local faculty. However, there is a price paid for this “training”: namely, the homogenization of research topics and techniques as the schools in Europe mimic those in the U.S.)
This behavior is not confined to the United King-
dom. Macdonald and Kam (2007: 644) commented
on how schools in Australia and even some in
France pay faculty on a piece-rate basis for publi-
cations in top journals, with the payments in Aus-
tralia varying depending on the tier (the ranking) of
the journal. Again, this makes perfect sense in a
world in which real resources flow depending on
faculty publications in prestigious outlets.
This pressure to publish in the ranked journals,
which tend to be U.S.-centered or at least U.S.-
centric, along with the recruiting of faculty in a
global labor market, has contributed to the produc-
tion of some degree of theoretical isomorphism. As
Leung argued with respect to Asian management
research, “The downside of the adaptive response
to the pressure to publish in highly cited journals is
that virtually all Asian management research falls
within the confines of well-known Western theo-
ries” (2007: 512).
Theoretical isomorphism is, by the way, not the
same as the consensus that characterizes high lev-
els of paradigm development, and it is also not
necessarily going to produce research that is useful
for management practice. As institutional theory
tells us, often what get imitated and signaled are
only the most superficial aspects of something, and
these imitated forms have little effect on deep, un-
derlying processes. Meyer and Rowan’s (1977) clas-
sic study of schools noted how these organizations
could appear to be conforming to some institution-
alized sense of what schools should look like, even
when the formal structures that were imitated had
precious little effect on what actually occurred. In
management disciplines, what seems to attract im-
itation in the quest to signal quality is the attraction
to theory (Colquitt & Zapata-Phelan, 2007), meth-
odological sophistication and, judging by what
journals are highly ranked and which are ranked
farther down, a disdain for work that informs or
that might inform professional practice and public
policy.
SOME MODEST PROPOSALS
I could go on about these issues at length, be-
cause the literature on the topics I have raised is
both extensive and extends well back in time. Cri-
tiques of business school research, career pro-
cesses, and peer reviewing are old news (e.g., Porter
& McKibbin, 1988). But it was important to lay out
some of the issues and an analysis of their root
causes in sufficient detail to move to what we might—and note I don’t say are likely to—do to change things.
Two general points inform these proposals. First,
we ought to put what we know into use. There is
extensive research on the innovation process, on
what makes ideas influential, on what managers
do, and on the problems organizations confront.
We ought to use that knowledge in our own man-
agement and organizations. Second, the treatment
ought to correspond, in some way, to the diagnosis
of the problem.
Yet another way to frame a search for what might
or should be different is to ask why medicine, en-
gineering, and education are so different from man-
agement research. Or to ask why, within business
schools, research and teaching about entrepreneur-
ship seem quite different in their degree of connec-
tion to professional practice.
These are important questions that could form
the basis of substantive research. My sense is that
part of the answer in the case of entrepreneurship is
the happy co-occurrence of two forces: strong,
maybe even overwhelming, student and alumni de-
mand coupled with the persistent inability to find
“regular” faculty who could, or would, do research
in the traditional mold on this subject. This is not
to say that there is no research on entrepreneurship
in the typical, elite academic journals or that there
couldn’t be. Rather, it is to note that a need, cou-
pled with an inability to meet that need using
customary approaches, produced—no surprise—
innovation! Some of that innovation involved de-
veloping cotaught courses, where one of the in-
structors was a current or former executive from an
entrepreneurial company. Some of that innovation
entailed hiring people whom we would never have
hired as colleagues using traditional criteria, often
in lecturer roles—entrepreneurs and executives
who were either retired from their primary roles or
who taught part-time. Some of the innovation en-
compassed changing what we considered to be re-
search, expanding our definitions to encourage
clinical, qualitative research and case writing (see
Vermeulen, 2007) as well as the use of qualitative
field methods more generally. The closer connec-
tion with professional practice—not from an occa-
sional lecture or executive program but from the
coproduction of teaching and research and more
regular interaction—are features that I see, at least
to a somewhat greater extent, in engineering, med-
icine, and education.
These examples suggest that it is possible to be
both relevant and rigorous, to serve the scientific
enterprise even while doing work that informs pol-
icy and practice. So, what might it take, more spe-
cifically, to move us in that direction?
If one issue is that current review and status
processes don’t particularly reward the production
of knowledge that anyone cares about, we need to
change the rewards and how they are allocated. To
take one small example, some years ago the Cali-
fornia Management Review initiated an award for
the best article in each volume. The academic edi-
torial board nominates the three finalists, but a
panel of practitioners selects the winner. Of course,
CMR has a different mission than many of our
journals, and I am not for a moment claiming this
process is perfect. But it does seem that involving
practicing professionals, at least to some degree, in
determining awards and rewards is one reasonable
step toward blending academic and professional
values.
The research by Glick and his colleagues (in
press) and others illustrates that a shockingly high
proportion of papers, even those published in the
elite journals, garner zero citations, with an even
larger percentage obtaining very few. If we take
these data seriously and want our tenure and re-
source allocation criteria to reward impact, then it
seems somewhat inconsistent to have faculty eval-
uation standards that emphasize publishing papers
in certain journals over evaluating the effect of an
individual’s written work, without considering
where it first appeared. This logic suggests weight-
ing citations more strongly than number of papers
and where they have been published in review
processes. And since citations measure scientific
impact only imperfectly and, moreover, we are pre-
sumably concerned about the effects of research on
professional practice above and beyond just its sci-
entific impact, we ought to assess contributions
along those broader dimensions measuring the ef-
fects of our work as well.
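As a toy illustration of what venue-independent weighting could look like, the sketch below scores two hypothetical records by citation-based measures alone, ignoring where the papers appeared. The candidates, the counts, and the particular metrics are my own illustrative assumptions, not measures this essay endorses in any specific form.

```python
# Toy illustration of venue-independent evaluation: rank records by total
# citations and an h-index rather than by the outlets of the papers.
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Hypothetical citation counts per paper for two hypothetical candidates.
records = {
    "candidate_a": [120, 45, 30, 8, 2],       # fewer papers, higher impact
    "candidate_b": [9, 7, 6, 5, 5, 4, 3, 2],  # more papers, modest impact
}
for name, cites in records.items():
    print(name, "total citations:", sum(cites), "h-index:", h_index(cites))
```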
To take a case in point, consider David Kelley.
David is the founder and former CEO of IDEO Prod-
uct Development, a company that has not only won
a large number of design awards but one whose
ideas about innovation and brainstorming have
been recognized and are influential in a large num-
ber of companies and professional service firms.
Kelley, a member of the National Academy of En-
gineering and a full professor on the Stanford en-
gineering school faculty, does not have a Ph.D., and
I am not sure he has published anything, particu-
larly in peer-reviewed journals. No self-respecting
business school using normal academic criteria
would have anything to do with him, even though
one could plausibly argue that IDEO, through both
its design and its management practices and cul-
ture, has had more effect on management than
scores of academic articles combined. The engi-
neering school may have wisdom that many busi-
ness schools lack.
If the current reviewing process is at least some-
what unreliable and conservative, there are possi-
ble alternatives. Data suggest that innovations in
products and services (and there is no reason to
believe this would be less true in the domain of
ideas and research) often come from peripheral ac-
tors who have less invested in existing ways of
thinking and doing things (e.g., Christensen, 1998).
Reviewing is in the hands of relatively few people
who are selected in large measure for their demon-
strated socialization into the prevailing topics, the-
ories, and methodologies of a field. But the opera-
tion of prediction markets (Surowiecki, 2004), and
the practices of companies such as Google that
determine new products and new technologies in
part through a voting process, speak to the desir-
ability of leaving judgments about the worth of
research and ideas open to more people in a more
democratic assessment process. In fact, this is just
what the Social Science Research Network (SSRN)
does. Founded by Michael Jensen, an economist
who definitely believes in markets as arbiters of
quality and importance and who has had his own
troubles in getting some of his more innovative
work into print, SSRN posts pretty much every-
thing and then tracks downloads, providing listings
of the most frequently downloaded papers. Jensen
maintains that leaving publication open and letting
the marketplace for ideas determine the usefulness
and worth of research papers is preferable to having
such decisions reside in the hands of a few people.
The Academy of Management journals, and
many others, have made substantial progress in
cutting down review times and posting papers
much more expeditiously. This is an important ef-
fort. Not only is innovation encouraged by rapid
prototyping, but also, an inverse linear relationship
may exist between the average publication delay in
a field and the journal impact factor (Yu, Wang, &
Yu, 2005). So it is important to maintain the focus
on expediting review turnaround and Internet
posting.
If we want to build more collaboration, two tar-
gets of intervention emerge. One is the physical
design of our buildings. In business schools faced
with a chronic shortage of space on university cam-
puses, common areas and meeting rooms are often
the first things to go. And business schools typi-
cally look more like traditional office buildings
than like learning laboratories or places that would
facilitate building communities of practice.
The second issue is the implicit message about
collaboration: Collaborate, but not too much, and
certainly not repeatedly with the same people, par-
ticularly if they are more senior than you. Person-
nel reviews are a necessary part of academic gov-
ernance. We should, nonetheless, be conscious of
the extent to which we may be trading work ar-
rangements that might produce more useful and
innovative knowledge for arrangements that make
assigning individual credit easier.
RECONSTRUCTING OUR ENVIRONMENT TO
CREATE A DIFFERENT FUTURE
Environments matter. That is one pervasive les-
son from our field. And different environments are
possible, even in universities. Just look at our col-
leagues in other professional schools. People have
accused me of romanticizing the success of some of
the other professional schools, but I don’t agree. It
is certainly not the case that these schools have
“solved” everything once and for all and that
everything is perfect. But one cannot observe the
advance of medical science and knowledge and its
implementation in practice over the past several
decades, including the almost 50 percent reduction
in death rates from heart disease, and not be im-
pressed. The thrust of the evidence-based medicine
movement was to bring the best scientific knowl-
edge to the bedside (e.g., Rosenberg & Donald, 1995).
As evidence-based medicine has grown, the practi-
cal issues of treatment, diagnosis, and the under-
standing of disease processes have influenced
the research—even the basic science, in some in-
stances—that gets done. In turn, advancing scien-
tific understanding has been implemented in prac-
tice and in the drugs and devices that help to
deliver care. The link between science and practice
is closer, as it seems to be in engineering and com-
puter science as well, but I don’t see any less aca-
demic legitimacy for these fields. If anything, their
science has advanced at least as vigorously (if not
more so) than has ours.
In the end, I am optimistic about our ability to do
research that affects not only management practice
but also public policy. This optimism stems from
the remarkable body of knowledge that we and our
colleagues in related social sciences have built over
the past decades, including the 50 years of this
journal. We know a lot about innovation, about the
design of social and physical environments, about
working in teams, about building communities of
practice, and about a lot of other things that are
relevant to doing research that is both scientifically
and professionally significant. My vision is that we
finally use that knowledge—turning our knowing
into doing—to design our own systems, environ-
ments, and work practices. By so doing, we can act
to fulfill the aspirations of many people in the
Academy of Management and also provide substan-
tial service to the world in which we live.
REFERENCES
Bakalar, N. 2007. Review finds drug makers issue more
positive studies. New York Times, February 27: F7.
Bangerter, A., & Heath, C. 2004. The Mozart effect: Trac-
ing the evolution of a scientific legend. British Jour-
nal of Social Psychology, 43: 605–623.
Barber, B. M., Heath, C., & Odean, T. 2003. Good reasons
sell: Reason-based choice among individual and
group investors in the stock market. Management
Science, 49: 1636–1652.
Barney, J. 1991. Firms, resources, and sustained compet-
itive advantage. Journal of Management, 17: 99–
120.
Bass, B. M. 1985. Leadership and performance beyond
expectations. New York: Collier Macmillan.
Berger, J., & Heath, C. 2005. Idea habitats: How the prev-
alence of environmental cues influences the success
of ideas. Cognitive Science, 29: 195–221.
Bergh, D. D. 2003. From the editors: Thinking strategi-
cally about contribution. Academy of Management
Journal, 46: 135–136.
Blumenthal, D., Campbell, E. G., Anderson, M. S.,
Causino, N., & Louis, K. S. 1997. Withholding re-
search results in academic life science: Evidence
from a national survey of faculty. Journal of the
American Medical Association, 277: 1224–1228.
Campanario, J. M. 1993. Consolation for the scientist:
Sometimes it is hard to publish papers that are later
highly-cited. Social Studies of Science, 23: 342–
358.
Ceci, S. J., & Peters, D. 1982. Peer review: A study of
reliability. Change: The Magazine of Higher
Learning, 14(6): 44–48.
Christensen, C. 1998. The innovator’s dilemma. Boston:
Harvard Business School Press.
Colquitt, J. A., & Zapata-Phelan, C. P. 2007. Trends in
theory building and theory testing: A five-decade
study of Academy of Management Journal. Acad-
emy of Management Journal, 50: 1281–1303.
Davenport, T. H., & Prusak, L. 2003. What’s the big idea?
Boston: Harvard Business School Press.
Davis, J. H. 1969. Group performance. Reading, MA:
Addison-Wesley.
DiMaggio, P. J., & Powell, W. W. 1983. The iron cage
revisited: Institutional isomorphism and collective
rationality in organizational fields. American Socio-
logical Review, 48: 147–160.
Doria, J., Rozanski, H., & Cohen, E. 2003. What business
needs from business schools. Strategy + Business,
32: 39–45.
Eden, D. 2002. From the editors: Replication, meta-anal-
ysis, scientific progress, and AMJ’s publication pol-
icy. Academy of Management Journal, 45: 841–
846.
Freeman, R. E. 1984. Strategic management: A stake-
holder approach. Boston: Pitman.
Frey, B. 2003. Publishing as prostitution? Choosing be-
tween one’s own ideas and academic success. Public
Choice, 116: 205–223.
Glick, W. H., Miller, C. C., & Cardinal, L. B. In press.
Making a life in the field of organization science.
Journal of Organizational Behavior.
Glickman, S. W., Ou, F., DeLong, E. R., Roe, M. T., Lytle,
B. L., Mulgund, J., Rumsfeld, J. S., Gibler, W. B.,
Ohman, E. M., Schulman, K. A., & Peterson, E. D.
2007. Pay for performance, quality of care, and out-
comes in acute myocardial infarction. Journal of the
American Medical Association, 297: 2373–2380.
Goodstein, L. D., & Brazis, K. L. 1970. Credibility of
psychologists: An empirical study. Psychological
Reports, 27: 835–838.
Guest, D. 2007. Don’t shoot the messenger: A wake-up
call for academics. Academy of Management Jour-
nal, 50: 1020–1026.
Hambrick, D. C. 1994. Presidential address: What if the
Academy actually mattered? Academy of Manage-
ment Review, 19: 11–16.
Hambrick, D. C. 2007. The field of management’s devo-
tion to theory: Too much of a good thing? Academy
of Management Journal, 50: 1346–1352.
Hubbard, R., & Armstrong, J. S. 1997. Publication bias
against null results. Psychological Reports, 80: 337–
338.
Ilgen, D. R. 2007. Citations to management articles: Cau-
tions for the science about advice for the scientist.
Academy of Management Journal, 50: 507–509.
Jensen, M. C., & Meckling, W. H. 1976. Theory of the
firm: Managerial behavior, agency costs and owner-
ship structure. Journal of Financial Economics, 3:
305–360.
Judge, T. A., Cable, D. M., Colbert, A. E., & Rynes, S. L.
2007. What causes a management article to be cit-
ed—Article, author, or journal? Academy of Man-
agement Journal, 50: 491–506.
Kuhn, T. S. 1970. The structure of scientific revolutions
(2nd ed.). Chicago: University of Chicago Press.
Leung, K. 2007. The glory and tyranny of citation impact:
An East Asian perspective. Academy of Manage-
ment Journal, 50: 510–513.
Macdonald, S., & Kam, J. 2007. Ring a ring o’ roses:
Quality journals and gamesmanship in management
studies. Journal of Management Studies, 44: 640–
655.
Mahoney, M. J. 1977. Publication prejudice: An experi-
mental study of confirmatory bias in the peer review
system. Cognitive Therapy and Research, 1: 161–
175.
McDaniel, M. A., Rothstein, H. R., & Whetzel, D. L. 2006.
Publication bias: A case study of four test vendors.
Personnel Psychology, 59: 927–953.
Meyer, J. W., & Rowan, B. 1977. Institutionalized organi-
zations: Formal structure as myth and ceremony.
American Journal of Sociology, 83: 340–363.
Miller, C. C. 2006. Peer review in the organizational and
management sciences: Prevalence and effects of re-
viewer hostility, bias, and dissensus. Academy of
Management Journal, 49: 425–431.
Mol, M. J., & Birkinshaw, J. Forthcoming. Giant steps in
management: Key management innovations. Lon-
don: Pearson Education.
Mone, M. A., & McKinley, W. 1993. The uniqueness
value and its consequences for organization studies.
Journal of Management Inquiry, 2: 284–296.
Pearce, J. L. 2004. What do we know and how do we
really know it? Academy of Management Review,
29: 175–179.
Pfeffer, J. 1983. Organizational demography. In L. L.
Cummings & B. M. Staw (Eds.), Research in organ-
izational behavior, vol. 5: 299–357. Greenwich, CT:
JAI Press.
Pfeffer, J. 1993. Barriers to the advance of organizational
science: Paradigm development as a dependent vari-
able. Academy of Management Review, 18: 599–
620.
Pfeffer, J., & Fong, C. T. 2002. The end of business
schools? Less success than meets the eye. Academy
of Management Learning and Education, 1: 78–95.
Pfeffer, J., & Salancik, G. R. 1978. The external control of
organizations: A resource dependence perspec-
tive. New York: Harper & Row.
Podsakoff, P. M., MacKenzie, S. B., Bachrach, D. G., &
Podsakoff, N. P. 2005. The influence of management
journals in the 1980s and 1990s. Strategic Manage-
ment Journal, 26: 473–488.
Porter, L. W., & McKibbin, L. E. 1988. Management
education and development. New York: McGraw-
Hill.
Porter, M. E. 1979. The structure within industries and
companies’ performance. Review of Economics and
Statistics, 61: 214–227.
Rosenberg, W., & Donald, A. 1995. Evidence-based medi-
cine: An approach to clinical problem-solving. Brit-
ish Medical Journal, 310: 1122–1126.
Rousseau, D. M. 2006. Is there such a thing as “evidence-
based management”? Academy of Management Re-
view, 31: 256–269.
Rynes, S. L. 2006. “Getting on board” with AMJ: Balanc-
ing quality and innovation in the review process.
Academy of Management Journal, 49: 1097–1102.
Rynes, S. L., Giluk, T. L., & Brown, K. G. 2007. The very
separate worlds of academic and practitioner peri-
odicals in human resource management: Implica-
tions for evidence-based management. Academy of
Management Journal, 50: 987–1008.
Starbuck, W. H. 2003. Turning lemons into lemonade:
Where is the value in peer reviews? Journal of Man-
agement Inquiry, 12: 344–351.
Starbuck, W. H. 2005. How much better are the most
prestigious journals? The statistics of academic pub-
lication. Organization Science, 16: 180–200.
Staw, B. M. 1976. Knee-deep in the big muddy: A study
of escalating commitment to a chosen course of ac-
tion. Organizational Behavior and Human Perfor-
mance, 16: 27–44.
Surowiecki, J. 2004. The wisdom of crowds. New York:
Doubleday.
Tahai, A., & Meyer, M. J. 1999. A revealed preference
study of management journals’ direct influences.
Strategic Management Journal, 20: 279–296.
Van de Ven, A., & Johnson, P. E. 2006. Knowledge for
theory and practice. Academy of Management Re-
view, 31: 802–821.
Vermeulen, F. 2007. “I shall not remain insignificant”:
Adding a second loop to matter more. Academy of
Management Journal, 50: 754–761.
Walsh, J. P., Weber, K., & Margolis, J. D. 2003. Social
issues and management: Our lost cause found. Jour-
nal of Management, 29: 859–881.
Washburn, J. 2005. University Inc.: The corporate cor-
ruption of higher education. New York: Basic
Books.
Williamson, O. E. 1975. Markets and hierarchies. New York:
Free Press.
Yu, G., Wang, X., & Yu, D. 2005. The influence of publi-
cation delays on impact factors. Scientometrics, 64:
235–246.
Jeffrey Pfeffer ([email protected]) is the
Thomas D. Dee II Professor of Organizational Behavior at
the Stanford Graduate School of Business. He received
his Ph.D. from Stanford University. His research interests
include evidence-based management, power and politics
in organizations, and economic language and assump-
tions and their effects on behavior.