Give me a two-by-two matrix and I will create the market: Rankings, graphic visualisations and sociomateriality
Neil Pollock a,*, Luciana D'Adderio b,1
a University of Edinburgh Business School, 29 Buccleuch Place, Edinburgh EH8 9JS, UK
b The Institute for Studies of Science, Technology and Innovation (ISSTI), Old Surgeons' Hall, High School Yards, University of Edinburgh, Edinburgh EH1 1LZ, UK
Abstract
Scholars have described how rankings can be consequential for the shaping of the economy. The prevailing argument is that they wield influence through encouraging 'mechanisms of reactivity' amongst market actors. We ask whether there are additional agential aspects found within rankings that extend 'social' accounts. We suggest that 'sociomateriality' is also a significant aspect of a ranking's influence. Through developing the notion of a 'ranking device', we examine how the ''format and furniture'' of a ranking can mediate and constitute a domain. Drawing on a detailed study of a prominent graphical performance measure from within the information technology (IT) arena, we provide evidence to show that IT markets can be as much a product of the affordances and constraints of ranking devices as of any other (non-material) aspect of the ranking. The article integrates literature from Accounting research and Science and Technology Studies to contribute to our understanding of how material things and the economy mutually constitute one another. It also offers one of the first empirical accounts of the sociomaterial construction of a graphical ranking.
© 2012 Elsevier Ltd. All rights reserved.
Introduction
Rankings represent an important mechanism shaping
markets (Aldridge, 1994; Blank, 2007; Schultz, Mouritsen,
& Grabielsen, 2001; Shrum, 1996), such that scholars have
labelled them ‘engines’ within the economy (Espeland &
Sauder, 2007; Karpik, 2010). To depict a ranking in this
way is to imply that it is not a passive portrait of the world
but ‘‘an active force transforming its environment’’ (Mac-
Kenzie, 2006, p. 12). This is indicative of a growing consen-
sus also from within Accounting research about how we
should theorise the power of formal measures of perfor-
mance and reputation (see Argyris, 1954; Cooper & Hopper,
1989; Kornberger & Carter, 2010; Lapsley & Mitchell,
1996). Despite highlighting a key area for empirical and
theoretical inquiry, however, this popular conceptualisa-
tion carries unquestioned assumptions about the way we
understand their constitutive role. In particular, the influence of a ranking is seen to reside predominately in how
it encourages ‘mechanisms of reactivity’ amongst market
actors (Espeland & Sauder, 2007). What this suggests is
that rankings are intrinsically ‘social’, at the same time
raising the question as to whether there are further agen-
tial aspects that might extend this social mode of analysis.
Are there additional agencies (other than how people re-
spond to them) to be found in the makeup of rankings?
A useful prompt is found in tracing the idiom of the
term engine itself. From 17th Century English science, for
instance, we learn how instruments, artifacts and diagrams
– combined with the ‘ingenuity, craftiness and inventive-
ness’ of gentlemen scientists – could function as generative
engines in producing early scientific knowledge
(Carroll-Burke, 2001, p. 599). To capture the nature of this
http://dx.doi.org/10.1016/j.aos.2012.06.004
* Corresponding author. Tel.: +44 (0)1316511489; fax: +44 (0)1316506399.
E-mail addresses: [email protected] (N. Pollock), [email protected] (L. D'Adderio).
1 Tel.: +44 (0)1316502454; fax: +44 (0)1316506399.
intervention, however, one also had to consider the tools
and devices' hard, physical, material, engineering, and 'artificial' aspects (Carroll-Burke, 2001, p. 600), which were key features of the artifacts' involvement in everyday practices. Whilst the first view presents the intervention of engines as a social form of 'manipulation', the ''products of
ingenious minds, clever contrivances and artful designs’’
(Carroll-Burke, 2001, p. 599), the second places them
squarely in the domain of practice, matter, method and
constraint.
We see value in bringing both aspects together to cap-
ture how the abstract, generative capacity of a ranking
can result from – and be shaped by – the interplay of a het-
erogeneous range of sociomaterial constraints and prac-
tices. To this purpose, and building on recent discussions
of market devices (Callon, Millo, & Muniesa, 2007), we de-
velop the idea of a ranking device. This focus on objects is
warranted because at a basic level a ranking cannot exist
without some kind of device (Callon, Millo, & Muniesa,
2007). The idea of the ‘100 top restaurants’, ‘10 leading
law schools’, or ‘20 best cities to work and live’, for in-
stance, would be impossible without the device of ‘the list’
(Goody, 1977). Analytically the notion of device is useful
because it captures how a ranking is an 'artifice', an 'artifact', the product of a practice (OED). It can also be used
to describe an object that contains certain constraints
and affordances, while at the same time capturing the as-
pect of ‘clever contrivance’ and ‘artful design’ (rankings
are clearly devised in the sense of something manufac-
tured or contrived) (Goody, 1977).
In this paper, we want to show that devices do more
than simply facilitate the production and communication
of a ranking. They actively participate in their shaping.
The specific argument developed is that it is these sociomaterial aspects, together with how people respond to them, that can account for the influence of a ranking. We would go as far as to argue that, in certain cases, the constitutive potential of a ranking can reside in its affordances and constraints as much as in any other complementary aspect (like the 'calculation'). Our study draws on observa-
tions and interviews conducted over a period of several
years on the construction and use of one of the most influential rankings from the information technology (IT) arena
– a two-by-two matrix called the ‘Magic Quadrant’.
To show this in?uence we draw on and integrate a
number of schools of thought from Accounting research
as well as Science and Technology Studies (STS). The first
is Miller’s ‘governance of economic life’ framework which
studies the interactions between ‘programmes’ and ‘tech-
nologies’ as domains are made ‘calculable’ (Miller, 1998,
2001; Miller & O’Leary, 2007). The second is the Account-
ing literature's focus on 'graphic inscriptions' (Bloomfield & Vurdubakis, 1997; Chua, 1995; Dambrin & Robson, 2011; Ezzamel, 2004; Qu & Cooper, 2011; Robson, 1992). Whilst scholars have linked the issue of how a figuration might facilitate and mediate a financial decision (Miller & O'Leary, 2007), they have not yet considered how calculations might be shaped by and result from the specific sociomaterial features of a graph. Finally, to demonstrate
how a visualisation might offer affordances and constraints
to those producing a ranking we draw on a range of studies
from Science and Technology Studies on how material arti-
facts and economic markets mutually constitute one an-
other (Callon et al., 2007; MacKenzie, 2009; Vollmer,
Mennicken, & Preda, 2009) and the use of graphic inscrip-
tions in Science (Latour, 1986; Lynch, 1985, 1988) and
other domains (Espeland & Stevens, 1998; Quattrone,
2009).
Rankings are engines within the economy
Today there appear to be formal ranking measures to
rate the quality and value of most things: art (Becker,
1982), theatre (Shrum, 1996), restaurants (Blank, 2007),
films, music (Karpik, 2010), the performance of various public services such as hospitals, schools, Business Schools (Wedlin, 2006), and universities (Free, Salterio, & Shearer, 2009; Strathern, 2000), the efficiency of the latest consumer products (Aldridge, 1994), the reputation and competence of companies (Pollock & Williams, 2009; Schultz
petence of companies (Pollock & Williams, 2009; Schultz
et al., 2001). There are those listing the ‘best places’ to live
and work (Kornberger & Carter, 2010), the ‘top holiday des-
tinations’ (Jeacle & Carter, 2011; Scott & Orlikowski, 2012),
and so on.
Despite their simple and often contested nature, there is
growing evidence to suggest that rankings play an en-
hanced role in decision-making (Aldridge, 1994; Blank,
2007; Karpik, 2010; Wedlin, 2006). Speaking about one
of the most well known rankings, the Red Michelin restau-
rant guide, for instance, Karpik (2010, p. 77) writes: ‘‘. . . this
veritable paper engine [has] the rare ability to create the
conditions of large-scale comparisons of incommensurable
entities while thoroughly respecting their particularisms’’.
In their discussion of the global league tables of cities
Kornberger and Carter (2010, p. 333) similarly suggest that
league tables are ‘engines and not simply cameras’ that
create comparisons between hitherto unrelated places.
The resulting competition between global cities, they argue, is not a natural fact but has been brought into being through the circulation of rankings. League tables now, in their words, ''form the battleground on which cities compete with each other'' (Kornberger & Carter, 2010, p. 236); for example, they have actively encouraged city administrations to change behaviours and to develop strategies that set them apart from other metropolises (Kornberger & Carter, 2010).
Covering a plethora of devices as used in a variety of
industries and contexts the above works address how
rankings, as ordering systems, intervene in shaping the
reality they attempt to monitor. One nuanced discussion
of this kind – setting out in detail the means by which
rankings are generative – is Espeland and Sauder’s (2007)
report on university Law Schools. They suggest that:
‘‘. . . rankings are reactive because they change how people
make sense of situations; rankings offer a generalised ac-
count for interpreting behaviour and justifying decisions
within law schools, and help organise the ‘‘stock of
knowledge'' that participants routinely use'' (Espeland & Sauder, 2007, p. 11).
Espeland and Sauder (2007) suggest that rankings do
more than simply grade or describe: they also offer new
interpretations of a situation. Actors then adapt their
behaviour to conform with this altered understanding (in
a formulation that has much in common with Hacking’s
(1983) notion of representing and intervening). To evidence
how a ranking can intervene, they cite the words of a
respondent. A university manager notes how ‘‘[r]ankings
are always in the back of everybody’s head. With every is-
sue that comes up, we have to ask, ‘How is this impacting
our ranking?’’’ (Espeland and Sauder, 2007, p. 11). Their
thesis is that ultimately rankings can become self-fulfilling:
One type of self-fulfilling prophecy created by rankings
involves the precise distinctions rankings create.
Although the raw scores used to construct [Law School]
rankings are tightly bunched, listing schools by rank
magnifies these statistically insignificant differences in
ways that produce real consequences for schools, since
their position affects the perceptions and actions of out-
side audiences (Espeland and Sauder, 2007, p. 12, our
emphasis).
This leads them to suggest that ‘‘[r]ankings are a power-
ful engine for producing and reproducing hierarchy since
they encourage the meticulous tracking of small differences
among schools, which can become larger differences over
time'' (Espeland and Sauder, 2007, p. 20). Whilst changes in interpretations and perceptions are obviously important, however, this view seems to suggest that a ranking is an entirely 'social' phenomenon. Likewise, to propose that a ranking primarily resides in the 'heads' of actors would tend to overlook additional inherently material agential features.
Espeland and Sauder (2007) hint at (but do not develop)
the importance of material format in facilitating particular
interpretations. To paraphrase their words, the list magnifies small differences that produce real consequences.
Kornberger and Carter (2010, p. 330) write that the power
of a ranking ‘‘rests in its capacity to shape people’s cogni-
tive maps and takes on material forms through translations
into charts, models, graphs, documents, brainstorming
techniques and other elements. . .’’. Building on Espeland
and Sauder (2007) it could be inferred that a list does more
than simply magnify a particular aspect of the ranking.
Kornberger and Carter (2010) explicitly flag the role of artifacts but foreground the cognitive dimension, such that whilst devices figure in their analysis they are not necessarily seen as parties to interactions.
Hacking (1992) provides a useful guide in his later for-
mulation of the representation and intervention couplet
where he acknowledges the centrality of ‘instruments’.
Representations should be studied alongside (not apart
from) ‘instruments’, he argues, because it is these that pro-
duce particular kinds of intervention. In Hacking’s view, it
is representations and instruments that co-produce one
another. Miller and O’Leary (2007, p. 707) apply these
ideas through addressing the interactions between ‘pro-
grammes’ and ‘technologies’. Programmes refer to ‘‘the
imagining and conceptualising of an arena and its constit-
uents, such that it might be made amenable to knowledge
and calculation’’ (Miller & O’Leary, 2007, p. 702). Technol-
ogies denote the ‘‘possibility of intervening through a
range of devices, instruments, calculations and inscrip-
tions’’ (Miller & O’Leary, 2007, p. 702). The key aspect of
their work is that processes of calculation can only be ex-
tended through the interaction between programmes and
technologies. As Miller and O'Leary (2007) describe, it is not simply a case of 'implementing' a set of ideas within
a device. Rather, devices come to mediate and shape con-
ceptualisations and vice versa.
We enthusiastically adopt this terminology both for the way it focuses attention on how there is a 'calculation' involved in the production of a ranking (see Kornberger and Carter (2010) and Jeacle and Carter (2011) for this reading) but also because it flags the fact that this calculation results from a process where 'social' and 'technical' elements are
brought together. Scholars working within this framework,
however, have only begun to specify the process by which
we might study and theorise interactions between mate-
rial objects and wider calculative conceptions. In this re-
spect, we are given rather few clues as to the actual
mechanisms of co-production or the ways in which tech-
nologies, devices or graphic inscriptions for that matter
can mediate and shape ideas. We thus find a need to supplement our analytical toolbox with concepts more attuned to considering the affordances and constraints of
(particularly graphic) devices.
Material agency: affordance and constraint
Scholars have flagged the role of 'mediating instruments', 'market devices' and 'intellectual equipment' in
facilitating processes of calculation within markets (Callon
et al., 2007; MacKenzie, 2009; Miller & O’Leary, 2007). In
contrast to those approaches foregrounding single actors
in market decisions, it has been argued that actions and
calculations are never performed by individuals alone.
Rather, they are always propped up and aided by various
kinds of material artifact. In this view, artifacts are seen
to have 'agency', as they produce specific kinds of effects.
In terms of who or what makes someone – or something
– an agent, Latour argues that: ‘‘anything that [can] modify
a state of affairs by making a difference is an actor’’ (2005,
p. 71, emphasis in original). Thus, Preda (2008) discussed
how the ‘price ticker’ in the early years of the stock market
was an agent in leading to different forms of decision mak-
ing in the trading of stocks. Miller and O’Leary (2007), in
their account of the history of integrated circuits, treat fu-
ture based graphs or technology roadmaps in a similar way.
Instruments were in their case central in channelling dis-
cussions concerning the funding and development of inte-
grated circuits across different scienti?c and industrial
domains.
Both examples suggest that material devices play key
roles in mediating or constituting behaviour (Akrich &
Latour, 1992). Miller and O’Leary’s concern was with how
roadmaps worked to mediate between the interests and
strategies of multiple organisations involved in the devel-
opment of the new market of post-optical lithography
(Akrich & Latour, 1992, p. 720). In Preda’s case, the price
ticker produced a constant flow of prices that could be visualised in new ways. The ticker constituted the stockbrokers' practices in such a way that they found themselves having to adapt to the continuous flow of price data such
that they switched from being ‘observers of the market’ to
‘observers of the tape’ (Akrich & Latour, 1992, p. 232).
Another way of describing this agency is to suggest that
artifacts have affordances and constraints. Although the ori-
ginal idea of affordance stems from the work of Psychology
(Gibson, 1979), it has been subject to recent discussions
within STS and the Sociology of Technology (David & Pinch,
2008; Hutchby, 2001). Gibson defined affordance as the
‘‘perceived and actual properties of the thing, primarily
those fundamental properties that determine just how
the thing could possibly be used’’ (David & Pinch, 2008;
Hutchby, 2001, p. 9). Hutchby (2001) later softened this to refer to those material aspects which frame, but do not necessarily determine, the actions of people. In this latter relational
view affordances exist in tandem only with how people
take them up and the particular conditions of the local con-
text. Writers like David and Pinch (2008) have recently
built on this in their discussion of online book reviews
where they describe how there can be ‘material’ and ‘so-
cial’ affordances shaping reviews. Physical affordances
mean that a reviewer can write as much as she wants (lim-
ited only by her patience and the capacity of the com-
puter’s hard disc) but social practices (such as publishing
conventions) dictate that reviews are normally limited to
a handful of pages. Scholars such as Orlikowski (2007)
have noted that since these two things are inseparable it
is necessary to theorise the ‘social’ and ‘material’ as ele-
ments that mutually constitute one another: ‘‘the social
and the material are considered to be inextricably re-
lated—there is no social that is not also material, and no
material that is not also social’’ (Orlikowski, 2007, p.
1437). This reflects an intellectual project in the social
analysis of technology never to simply ‘black box’ objects
but to study their profoundly social and material elements.
Since there is no clear boundary between what is 'social' and what is 'material', scholars refer to these more precisely as 'sociomaterial'. In the paper, whilst we will adopt
this particular terminology, we will also at times refer to
the social and technical separately as there are analytical
bene?ts from treating these empirically entwined features
as distinct.
Ranking devices
We are now in a position to set out more clearly what
we mean by a 'ranking device'.² Specifically, we propose
that these are the ‘‘format and furniture’’ implicated in the
materiality of a ranking. The analytical value of the term is
that it foregrounds how a ranking (the ‘calculation’) can be
shaped through its incorporation in particular sociomaterial
objects. Those constructing a ranking are required to take
into account the device’s various affordances and constraints
when they plot a dot on a graph. To lay the foundations for
our empirical study we discuss some of the furniture com-
monly found within rankings. This is followed by a discus-
sion of some of the sociomaterial affordances and
constraints surrounding the production of graphs.
Format and furniture
Rankings are shot through with various kinds of devices
in and through which they are embedded and become
material. There are those that come in the form of lists or
tables and then there are those that are more graphical
in nature. One finds many examples of ranked lists (our informal research on Google, for instance, suggests at least several hundred). Stark (2011) argues that this format be-
came popular in the 1950s and cites the ‘jukebox’ as a pos-
sible source. Since jukeboxes held 40 single records this
apparently led to the development of ‘top 40’ record pro-
grammes on radio stations (see also Anand & Peterson,
2000). Today the list has become the format of choice for
many ranking organisations. One of its affordances appears
to be that it is relatively unconstrained by the number of
subjects evaluated. The 'top 10 MBA programmes' can be (and often are) extended to include the 'top 50' or 'top 100'
degrees, for instance. Kwon and Easton (2010), in their dis-
cussion of the Financial Times’ list of MBA programmes,
suggest that the longer the list the more comprehensive or 'global' it may appear in certain people's eyes: ''. . . individual consumers can find comfort in the perception that
they can choose the ‘best’ among hundreds or thousands
of alternatives, rather than the ‘best’ among several ‘good
enough’ alternatives arising through the search process.
The FT MBA 100 allows buyers to maximise their choice
of a highly ranked school, given personal constraints such
as budget, geographical preferences and entry require-
ments'' (Kwon & Easton, 2010, p. 133). We flag this feature
because it is not a capacity found in all rankings (see
empirical discussion below).
Rankings are also supported by specific furniture. In their discussion of consultancy reports, for instance, Qu and Cooper (2011, p. 358) highlight the role of the furniture of 'bullet points' and 'checklists' as providing a ''topographical image of how various employee groups within an
organization are relevant to achieving strategic objectives’’.
In the case of rankings there are stars, lines, waves, ticks,
dots and so on. Kwon and Easton (2010, p. 132) argue that
the use of such furniture constitutes a particularly novel
feature or form of contribution. Whilst rankers have not
been particularly innovative with regard to methodology,
or how assessments are put together, they have been at
the forefront in terms of developments in ‘format and pre-
sentation'. Kwon and Easton (2010) describe how the Michelin Red Guide, for instance, was amongst the first of the major rankers to supplement complicated forms of quanti-
tative data with ‘qualitative descriptors’. It rated restaurant
quality by producing the ‘‘now famous three-star scale to
denote relative excellence’’ (Kwon & Easton, 2010, p.
132). These descriptors are now very much part of the
machinery for ranking restaurants around the world (see
Karpik, 2010).
However, we still know very little about why such fur-
niture has become popular or what, if anything, it has
meant for these particular settings. We would argue that
² Whilst our term builds on the idea of 'market device' – defined as ''. . . the material and discursive assemblages that intervene in the construction of markets'' (Callon et al., 2007, p. 2) – we attempt to operationalise this idea specifically for the way visual devices mutually constitute calculative practices. We do so by drawing on and making use of insights provided by more established ways of thinking (the 'programmes and technologies' framework, 'sociomateriality', 'affordance' and 'graphic inscription', and so on).
they are important because they render the calculation
visible through some kind of large-scale ranking apparatus
of which these descriptors form a part. They are thus an as-
pect of the calculative practices for turning ‘qualities into
quantities’ (Miller, 2001) (see Kornberger and Carter
(2010) and Jeacle and Carter (2011) for a discussion of cal-
culative practices involved in ranking). While therefore
their importance has been acknowledged, their effects
have not been demonstrated. This, we suggest, becomes more obvious when one considers the production of graphical rankings where rankers are forced to entertain and take account of quite specific affordances and constraints.
To understand what these are we turn to a discussion of
the construction of graphs.
Graphic visualisation: from looking at graphs to looking in
graphs
Latour famously argued that ‘he who visualises badly
loses the encounter' (1986, p. 13). The 'scientific graph' was originally said to be one factor that gave science its influence over other forms of knowledge production. For
Latour, the graph was an ‘inscription device’; the key idea
behind this concept was that of ‘mobility’ (the product of
a laboratory could circulate widely without taking with it
the apparatus that led to its production). Accounting re-
search has focused on the inscriptions that construct per-
formance measures more generally (see Dambrin &
Robson, 2011; Robson, 1992), with particular attention
being given to ‘graphs’. Qu and Cooper (2011, p. 358), for
instance, highlight how ‘‘graphical inscriptions are gener-
ally persuasive in communicating information. They solid-
ify ambiguous concepts into concrete forms . . .’’. Whilst
scholars have mobilised the notion of inscription to cap-
ture how material substances are translated into figurations that can travel, however, it would be fair to say that
they have looked at the graph but not necessarily in the
graph (see Qu and Cooper’s (2011) call for research on
the production of inscriptions).
Some partial exceptions include Miller and O’Leary
(2007) and Quattrone (2009). In his discussion of the his-
tory of the book, for instance, Quattrone (2009, p. 109) suggests that it is because graphs are 'partial' and 'simplified' that they have an effect:
Graphical representations . . . are always so partial and simplified that they essentially contain very little; they have little truth in them; for, if it ever existed, it has been lost in the process of diagrammatic representation which has sacrificed details and context for the sake of clarity. This is the only way in which they can effectively communicate and engage the user in a performative exercise.
From sources further afield, Espeland and Stevens
(1998, p. 423), in their review of the Communication Stud-
ies literature, argue that graphs are successful because
they are produced according to ‘aesthetic ideals’ (Espeland
& Stevens, 1998, p. 423, see also Bloomfield and Vurdubakis, 1997). This includes how they should have clarity and
be parsimonious: ‘‘. . . people who make pictures with
numbers typically prize representations whose primary
information is easily legible (clarity), and which contains
only those elements necessary and sufficient for the com-
munication of this primary information (parsimony)’’
(Espeland & Stevens, 1998, p. 423; see also Tufte (2001)
on whom Espeland and Stevens draw). This is because
those who construct graphs as part of their professional
activities want them to be ‘‘not only errorless but also
compelling, elegant, and even beautiful’’ (Espeland & Ste-
vens, 1998, p. 422).
The contributions above suggest that graphs place ‘lim-
its’ on designers. We supplement this with work from STS
where Lynch (1988, p. 202) argues that graphs (in science)
do more than constrain; they also add features and affor-
dances not found in original understandings.
The [graph] does not necessarily simplify the diverse
representations, labels, indexes, etc., that it aggregates.
It adds theoretical information which cannot be found
in any single micrographic representation, and provides
a document of phenomena which cannot be repre-
sented by photographic means (emphasis in original).
Even the simplest graphs, in Lynch’s view, add rather
than reduce information. They contribute
. . .visual features which clarify, complete, extend, and
identify conformations latent in the incomplete state
of the original specimen’. Instead of reducing what is
visibly available in the original, a sequence of reproduc-
tions progressively modifies the object's visibility in the
direction of generic pedagogy and abstract theorizing
(Lynch, 1988, p. 229).
An example of those things added can be found in an earlier paper where Lynch discusses a common but little discussed graphic resource, the 'device of the dot' (1985, p. 43). Analysing a field manual describing the anatomy of a lizard he makes the following point:
Note that each observation of a marked individual is
rendered equivalent to all others through the use of
the device of the ‘dot’. The only material difference
between one dot and another on the chart is its locale.
Locales are reckoned in terms of the grid of stakes,
and all other circumstantial features of observation
‘drop out’.
Dots are ‘additive’ rather than ‘reductive’ (we get this
terminology from Ingold’s discussion of another type of
notation, 'the line' (2007)). Lynch (1985) flags how graphs provide for commonplace resources of graphic representation. Understanding the interplay between graphic resources and the thing they purport to describe, therefore, is important. Lynch (1988) suggests it is in this way that one can witness how the properties of graphs go on to merge with and come to incorporate the thing represented. He writes:
‘‘. . . one theme which applies to many, if not all, graphs is
that of how the commonplace resources of graphic representation come to embody the substantive features of the specimen or relationship under analysis'' (Lynch, 1988, p. 226). In turn: ''. . . efforts are made to shape specimen materials so that their visible characteristics become congruent with graphic lines, spaces, and dimensions'' (Lynch, 1988, p. 227).
To summarise, we find it necessary to bring together a number of complementary disciplinary schools to discuss this complicated phenomenon. Specialisation in this respect has traditionally posed a major barrier to analysis and understanding (Hopwood, 2007). Linkages across different scholarly fields provide important new insights into how we understand, represent and theorise the tools and practices of performance measurement. In this respect, the 'programmes and technologies' framework (Miller & O'Leary,
2007) tells us how areas are conceptualised in certain ways
so that they can become ‘calculable’, often through inter-
ventions made possible through devices. The literature
from STS directs attention to how devices do not simply
support but can act within calculations. The idea of a ‘rank-
ing device’ drills down further still to show how a ranking
(and ‘calculation’) can be shaped by its incorporation in a
specific format and furniture, and, in turn, with how these
sociomaterial features can shape aspects of the market.
The kinds of markets we are interested in are those pro-
curement markets related to the supply of advanced tech-
nologies like information systems and other kinds of
software. We organise our empirical material around a dis-
cussion of three aspects of how specific furniture – 'the dot' – is moved around a graph. The first section focuses on how the ranking helps create a 'competitive space' in relation to the shaping of the visible market of players.³ It discusses how new expertise, practices and routines are created and emerge as vendors attempt to improve their placing in the competitive space (what actors call 'moving the dot activities'). The second section investigates how the competitive space is shaped not only by 'people moving dots' but also by sociomaterial constraints, in particular the affordances and limitations found within the ranking device (here the focus is on how 'dots move people'). Specifically, these are material affordances (for instance how players in a market can be brought together and compared in one space) and social constraints (not all players can be included on one graph). The final section discusses how these constraints encourage rankers to make interventions in the competitive space (how 'dots move markets').
Setting and method
The Magic Quadrant
The ranking discussed here is produced by the industry
analyst firm Gartner Inc. (hereafter Gartner). Founded in 1979 by Gideon Gartner, the firm operates (almost exclusively) within the information technology domain.⁴ Whilst Gartner is just one of a number of such research organisations within this area, it is widely recognised as the largest and most influential. Despite not having a monopoly over the production of IT analysis, commentators suggest it has something close (Hopkins, 2007).⁵ Gartner's strap line is that it ''wants to be involved in every IT decision'' (interview, Gartner Analyst A). The Magic Quadrant is by far the most well-known of Gartner's research tools. This attempts to
compare and rank software vendors according to a number of predefined measures. It comes in the form of a box with an X and Y-axis (labelled as 'completeness of vision' and 'ability to execute') dimensioning a two-by-two matrix, with four segments into which one can see placed the names of several vendors (see Fig. 1). Vendors are not randomly placed. Each segment is individually labelled (niche player, challenger, visionary and leader). The position of a vendor in a particular segment signifies something regarding its current and future performance as well as its behaviour within markets (Burton & Aston, 2004). Those placed further to the right are seen to have more 'complete visions', whilst those placed towards the top are seen to have an elevated ability 'to execute' on that vision.
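To make the geometry of the device concrete, the following is a minimal sketch, in Python, of the quadrant logic described above. It is purely illustrative and not Gartner's actual method: the 0–100 scale, the midpoint split and the example vendors and their scores are our assumptions for illustration; only the axis names and segment labels come from the description above.

def classify_vendor(completeness_of_vision, ability_to_execute, midpoint=50.0):
    """Map a vendor's two scores onto one of the four segment labels."""
    # Further to the right = a more 'complete vision'; further up = a greater
    # 'ability to execute' (axis names as described in the text above).
    right = completeness_of_vision >= midpoint
    top = ability_to_execute >= midpoint
    if top and right:
        return "Leader"
    if top:
        return "Challenger"
    if right:
        return "Visionary"
    return "Niche Player"

# Hypothetical vendors and scores, purely for illustration.
for name, (vision, execute) in {"Vendor A": (72, 81),
                                "Vendor B": (64, 38),
                                "Vendor C": (35, 55)}.items():
    print(name, classify_vendor(vision, execute))

The point of the sketch is simply that, once two scores exist, the device itself forces every vendor into one of four labelled positions; everything else in the empirical sections concerns how those scores, and the boundaries between segments, come to be negotiated.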
Gartner are prolific in the production of Magic Quadrants: they author nearly 150 for different IT markets (Drobik, 2010); this number changes all the time as Gartner continually creates new Magic Quadrants to reflect the development of new types of technology markets and occasionally 'retires' older ones to represent the fact that certain markets have matured. Authorship of Magic Quadrants is not a one-off process. They are updated and released each year. This means that the placing of vendors within the matrix will change over time. Players may also enter or exit the Magic Quadrant.
In the IT domain there are a number of visual rankings
(examples include the ‘Forrester Wave’, the ‘Gartner Hype
Cycle’, the ‘Gartner Clock’, the ‘Ovum Decision Matrix’, to
name but a few). The Magic Quadrant is, by far, the most
referenced of these (Violino & Levin, 1997). One Gartner
Analyst we interviewed describes how: ''[a] good Magic Quadrant will get fifteen hundred downloads every
month’’ whereas a ‘‘Hype Cycle will get around six or seven
hundred’’ (interview, Gartner Analyst B). These are down-
loads from the Gartner website (accessible only by fee-
paying clients). Magic Quadrants are also often posted on
the Internet (meaning they are normally available to a
much wider audience).
Decision makers apparently draw on these rankings to
help facilitate choices when procuring IT equipment and
software. It has become part of IT folklore that those
looking to buy solutions invite only those in the top right
quadrant to tender. This leads some to suggest that a
high-ranking guarantees a vendor more attention than its
rivals (Hind, 2004) or that the ranking has the power to
‘make or break’ a vendor (Violino, 1997). It is perhaps no
³ We define a 'competitive space' as the space of confrontation and struggle that is created between various economic players in a specific technological field, often through the use of various social and material strategies linked to a ranking.
⁴ Gartner runs 'executive programs', has an established consultancy wing, organises regular themed conferences and symposiums on emerging technological topics, and produces research for the IT market. This latter activity forms the bulk of its enterprise, and it is where 80% of revenues are generated (Drobik, 2010). Gartner has over 4000 employees and offices in 80 countries around the world. It is reported to have over 60,000 clients from 10,000 different organisations (Drobik, 2010). For further information about Gartner's activities, see Pollock and Williams (2010).
⁵ This point about monopoly is important for what is described below. It is clear that rankers are stronger when there is only one dominant evaluator in an area. Kwon and Easton (2010, p. 124) note how an individual ranker ''. . . can become powerful to the point where they are able to monopolize the information required for the efficient functioning of markets and thereby influence the behaviour of other market actors''.
surprise then that vendors seek to influence the shaping of
the ranking. Some are even said to construct aspects of
their business (marketing and product development strat-
egies) in line with the ranking’s underlying assumptions
(Hopkins, 2007).
Research on the Magic Quadrant
We have been studying the Magic Quadrant for several
years now. Our attention was alerted to its significance whilst carrying out an ethnographic study of IT procurement in a large municipal council at the turn of the century (Pollock & Williams, 2007) and then a couple of years later during a study of how users bring influence to bear on ERP vendors (Pollock & Williams, 2009). These initial dealings prompted us to plan and develop a research project that would enquire into the production of this ranking and the nature of the expertise surrounding it. The fact that our project was funded filled us with both excitement and (it must be said) a certain amount of dread! There is a perception that it is difficult to gain access to Gartner (a point said to be true of rankers more generally (Kwon & Easton, 2010), which perhaps explains the paucity of studies on the production of rankings). Nevertheless, we set out to conduct fieldwork in the hope that we would get lucky (and 'fortune' does seem to feature in a lot of research). In our initial attempts to gain access, we wrote to one particular analyst whom we had come across in previous fieldwork. He agreed straightaway to an interview, which meant we were able to visit Gartner's European headquarters in London and begin what turned out to be a highly productive period of fieldwork.
Data collection
Since this particular analyst worked in the area of ‘Cus-
tomer Relationship Management’ (CRM) technologies and
was able to provide specific details on how the CRM Magic Quadrants were constructed, we devoted most of our time to following events and people in this area. We attended two symposiums organised by the Gartner CRM team. Here we could observe the formal presentations made by analysts but also approach them informally afterwards. These occasions turned out to be a particularly fertile ground for studying rankings. Since the meetings were run in a similar fashion to academic seminars it was easy to engage analysts in conversations or to simply hang around and listen whilst others quizzed them about their thinking behind the placing of vendors. Whilst we benefited from these
spontaneous discussions, we were also able to conduct
interviews with analysts. We carried out seven formal
interviews with Gartner analysts: three of these were over
the telephone, and four took place face to face.
We circulated an early research paper within Gartner, which not only served to validate our findings but also led to further episodes of fieldwork. One analyst, who had been forwarded the article by a colleague and with whom we had previously interacted, contacted us to tell us that he thought that we had produced a 'critical but fair' analysis of Gartner's work. He also reflected on how we had missed some of the more 'internal' aspects by which Magic Quadrants were constructed. Later, in a hastily arranged interview, he would tell us about these aspects. These form part of the material presented here.
Our study is further informed and contextualised by
interviews and discussions we conducted with other actors
involved in and around the ranking. This includes four cat-
egories of player: (1) we conducted two formal interviews
with some of the vendors subject to Gartner’s assessment;
(2) we held informal discussions, especially during our
attendance at Gartner conferences, with the IT managers
and practitioners who consume this kind of knowledge;
(3) we interviewed analysts from five rival firms to
[Fig. 1. The Magic Quadrant: a two-by-two matrix with 'Completeness of Vision' on the horizontal axis and 'Ability to Execute' on the vertical axis, divided into four segments labelled Niche Players, Challengers, Visionaries and Leaders, within which individual vendors are plotted as dots.]
ascertain their view on Gartner’s ranking process and its
wider effects on the market; (4) we also interviewed and
observed the activities of a new breed of professional that
has emerged to offer advice to vendors on how to interact
with ranking organisations like Gartner.
Within the larger IT vendors there are now commonly
‘analyst relations’ (ARs) departments which contain ex-
perts whose role is to liaise with and represent the vendors
to industry analysts, consultants and other commentators.
These experts attempt to understand the details of how industry analyst firms work and what kinds of influence they can wield. They will be particularly keen to identify how the analyst organisation currently views their particular firm and what they might do to influence that opinion. Moreover, there are now hundreds of independent firms of
‘AR consultants’ operating in and around the IT market-
place. During our research, we were able to interview
one of these consultants.
Overall we conducted fifteen formal interviews, carried
out over 50 h of observation at conferences, listened to and
participated in more than 20 ‘webinars’, and engaged in
dozens of informal discussions. All the interviews were
taped and fully or partially transcribed. During participa-
tion in Gartner conferences we took extensive notes. The
collection of data at these venues was facilitated by the fact
that Gartner video record all sessions and make these
available to participants after the event (for a further
fee!). This meant we could re-listen to presentations whilst
back in our university offices.
Dot-ology⁶
How rankings shape the practices of those ranked (people
moving dots)
Rankings wield significant influence over a field of
activity (Sauder & Espeland, 2006). However, those groups
and organisations subject to these measures have not
stood still. A market has been created that sells information on the details of how major rankings are constructed,
together with strategies for the improvement of placings.
Below we report on our interactions with a number of Ana-
lyst Relations (ARs) consultants who produce and trade in
this kind of knowledge. We show how one effect of their
work has been to establish the ranking as a space of con-
frontation and struggle between competing vendors (Korn-
berger & Carter, 2010).
Moving the dot activities: a social affair
In this first set of quotes a consultant has prepared a presentation to AR professionals. Having previously worked as a Gartner analyst, this expert now offers advice to others on how to interact with ranking bodies. His presentation is organised around various 'moving the dot activities'. He is careful to tell the audience that if they are to be successful in shaping a ranking then there will be a significant amount of work to do:
Now, these activities that we’re going to talk about,
although we’re going to call them out and highlight
them as specific 'Moving the Dot activities', they should be part of your overall AR Strategic and Tactical Plan . . . I'm going to remind you, tremendous effort is required to influence the Magic Quadrant. The data that we've gathered indicates that our clients spend anywhere from 60 to 200 h on a single Magic Quadrant . . . understand that this is not an insignificant
amount of work (presentation, AR consultant A).
In terms of the type of work necessary, firstly, this includes gathering insights about the makeup of the Magic Quadrant and, then secondly, feeding information back to the ranker about a vendor's products, strategy and specifi-
cally ‘thought leadership’. Vendors are encouraged to do
the latter through building personal relationships with
individual rankers, often through engineering periods of
‘social time’ between them and particular analysts (con-
ducting discussions ‘over a meal’ being one of the favoured
methods) (presentation, AR consultant A). Thus, there ap-
pear to be rich and direct interactions between rankers
and those they rank (albeit mediated by these new kinds
of intermediaries).
Another AR consultant interviewed described how he
had engaged in a similar process when one of his own cli-
ents had received a negative placing:
We used enquiries with specific analysts in the channel to understand who they should be approaching to help go to market with specific vertical analysts at Gartner to understand the best approach to solve the business problems in that particular industry. And we focused on specific analysts to help us make sure our message and our persistent focus directly for that individual, that individual market (interview, AR consultant B).
The consultant goes on to describe how the key reason for these 'briefings', 'enquiries', 'touches', or 'deep dives' was to bridge the 'gap' in knowledge between the ranker
and the vendor. To evidence this he gives an example of
a successful set of interactions:
[O]ne of our clients was getting involved in a Magic
Quadrant and . . . we tried to understand what the ana-
lyst thought about our company, and we realised that
there were several areas where there was a gap. So
we made sure we filled those gaps . . . we did enquiries to understand whether what we believed the message should have got across, whether the analyst got that across, and if it wasn't we tried to fill that gap. So when the Magic Quadrant finally came out we positioned, we knew the analyst had sufficient information, we knew
where we had weak points and we addressed those,
so it wasn’t a shock. In fact, we were positioned in the
top right hand corner. It was fantastic! (interview, AR
consultant B).
⁶ What could be more banal than a 'dot'? However, if we want to
understand the constitutive nature of a visual ranking then we have no
choice but to focus attention on this particular graphic furniture. Dots form
the basis of every conversation and consideration with regard to the Magic
Quadrant. Everything that happens typically occurs around the dot. Dot-
ology, which is a development of an actors’ category, attempts to capture
how this mundane furniture can offer new possibilities, place limitations
on actors, and encourage processes of co-production between graphs and
settings.
Both consultants describe how the rationale for these briefings and meetings should be for the vendor to understand the 'evaluative criteria' the ranking organisation applies when assessing vendors/products. These are the specifics of how individual rankers conceive of the nature and
characteristics of the various technologies covered by their
particular Magic Quadrant:
I need to understand the criteria and current opinion
and the publishing schedule, and I need to see what I
can do to influence that criteria and that opinion. Now we're going to use the analysts by doing inquiry to find out this information, like what is changing in the criteria . . . consulting with them, perhaps even use some of their information and criteria to influence the way in
which my product roadmap is going to go (presenta-
tion, AR consultant A).
The suggestion given is that once a vendor understands the ranker's evaluative criteria they should then use this information to influence their own product development strategies. In other words, they should develop products and strategies in a way that more closely resembles the ranker's description of the technology/market (this is reported to be a common strategy amongst many IT vendors (Hopkins, 2007)). If it is not possible (or desirable) to realign product development around the ranking then another solution is to attempt to modify the criteria of the ranking:
. . . we might even give consideration to trying to change
the character of the Magic Quadrant [through] influencing the definition of exactly what this Magic Quadrant is. That's part of changing the criteria. If I can sort of say 'Look, this is not the same Magic Quadrant as it used to be, now it has a new set of objectives and a new set of
criteria because the market has changed’, that has an
interesting possibility of radically changing the position
of all the dots (presentation, AR consultant A).
What is being recommended is that vendors should at-
tempt to move the ranker’s conception of the technology
assessed. In so doing, there will be obvious advantages
for the vendor that is able to help set the criteria by which
products in a particular market are judged. The AR consul-
tant then closes this particular segment by giving some
practical examples of what kinds of benefits might be
gained from (re)setting criteria.
Bringing vendors into the same competitive space
The issue of competition – and shaping of the competi-
tive landscape – is a key theme surrounding the Magic
Quadrant. The AR consultant suggests that if a vendor has a product that is significantly different from those of competitors then it may be possible to suggest to Gartner that it needs to create a new Magic Quadrant. This they can do through feeding analysts their thoughts on how particular technologies and technology markets are developing. Alternatively, through similar kinds of interactions and briefings, there may also be the possibility of 'killing' a Magic Quadrant where a vendor is not doing so well:
Alternatively, there’s the chance of creating a com-
pletely new Magic Quadrant. Gartner does retire old
ones and create new ones. Working with an analyst that
doesn’t have a Magic Quadrant, you might be able to
create a new one. Working with the analyst that has
two Magic Quadrants, you might be able to alter the
characteristics. Working with an analyst that has lots
of Magic Quadrants, you might be able to kill a Magic
Quadrant (presentation, AR consultant A).
The suggestion is that a vendor may be able to create a
Magic Quadrant for an area where it is the ‘leader’. It may
even be able to help retire a Magic Quadrant where its
competitors are doing particularly well by comparison.
The consultant suggests that whilst a firm may not always
be able to move its dot up it should nonetheless give con-
sideration as to how it might be able to move its compet-
itor’s dot down:
An alternate objective is to move your competitor dot
down, to the left . . . So that might be an interesting
approach . . . if I had the ability to push my competitor
down then by inference I’ve pushed myself up. I might
look at an objective as increasing the distance between
you and the competitors, or preventing a competitor
from leapfrogging over you (presentation, AR consul-
tant A).
What is being described here is how it is the ranking it-
self that mediates and constitutes competition. Even
though a vendor may not necessarily have thought of itself
as directly competing with specific others, through place-
ment on the Magic Quadrant, the competitive space has
been mapped out. Vendors are seen (and increasingly trea-
ted) as direct rivals (Kornberger & Carter, 2010). In the con-
sultant’s view, the Magic Quadrant clearly indicates a
vendor’s standing in relation to those immediately sur-
rounding it. And whilst vendors could not previously rank
their performance against others, they can now measure
the dots on a graph (and the use of a ruler by executives
to capture even slight movements appears to be common
– see Pollock & Williams, 2009). Interestingly, whilst ven-
dors have been brought together in the same competitive
space, the consultant is advocating that a vendor should
not simply accept but potentially attempt to reconfigure
this space. Vendors are given advice on how to shape the
boundaries surrounding the competitive space; they are
encouraged to develop tactics and strategies to push them-
selves up and to the right, which, by default, will push their
competitors down and to the left.
To summarise, we see how dots have come to mediate a
vendor’s interaction not only with the ranking organisation
but also with other vendors. Some have gone as far as to
develop strategies and plan for modes of interaction with
the rankers to help move places and shape spaces. Thus
at a basic level dot-ology captures the practices and rou-
tines that develop as actors focus attention around the de-
tails of a ranking in order to influence, firstly, their own
position in relation to competitors and, secondly, the
boundaries of the competitive space. However, we want
the notion to capture more than these ‘social’ strategies
at play. It is not simply about how people contrive to move
dots but how the competitive space is being (re)shaped in
other ways too. In particular, we want to introduce the idea
of sociomaterial agency, by which we mean that the field is influenced by the various affordances and constraints contained within the ranking. It is not simply people moving
dots but also ‘dots moving people’. To demonstrate this,
we begin by discussing how dots are placed on the matrix
in the first instance.
Individual rankers and the ranking organisation (dots moving
people)
The production of the ranking is not static
The calculation of the Magic Quadrant has generated
much discussion within IT practitioner circles. During
fieldwork, we had the opportunity to interview a number
of Gartner employees about how Magic Quadrants were
developed: ‘‘The accusation we were always given’’, re-
sponded one to our question, ‘‘was that we threw darts
at the chart’’ (interview, Gartner analyst A). Here the ana-
lyst is responding to a widely held belief that the calcula-
tion of places lacks any form of process or systemization
(see for instance Violino, 1997). One issue that apparently
vexed practitioners was the thought that placings were
plotted by hand. Presumably this was problematic because
it lent the ranking a discretionary quality (Violino, 1997).
Another was the fact that Gartner described the Magic Quadrant as resulting from predominately 'qualitative research' (Soejarto & Karamouzis, 2005). One Gartner report describes how: ''During the research process, we may ask for new information and briefings from vendors. We often gather information from vendor-provided references, from industry contacts, from unnamed clients, from public sources . . . and from other Gartner analysts'' (Burton, 2004, p. 4). It was the idea that rankings could be influenced by 'unnamed clients' that caused much discussion (Violino, 1997). Gartner would informally solicit opinions from the customers of those vendors being assessed. But this was seen as 'flawed' since it gave a paramount role to analysts
who could choose which customers to listen to (and this
raised the issue of ‘bias’ and ‘partiality’; for more details
see Pollock & Williams, 2009).
In our interviews with Gartner analysts, however, they
went to great efforts to dispel the idea that rankings were
judgmental or approximate. They pointed to how the pro-
duction of rankings, whilst they did rely on a range of
sources including informal discussions with customers,
was also circumscribed by standardised measures and
technology: ‘‘The actual dot scoring, there is a standardised
spreadsheet we have to use [and] standardised scoring
mechanism’’ (interview, Gartner analyst A). Dots are plot-
ted within a ‘spreadsheet’ and populated with numbers
from a ‘standardised scoring mechanism’. Scorings derive
from a number of ‘evaluation criteria’ that have been di-
vided along the two axis of the Magic Quadrant. These
break down to reveal a number of further standard criteria
(see Table 1).
Set criteria are then given a weighting (‘high’, ‘stan-
dard’, ‘low’, or ‘no rating’). If ‘no rating’ is applied this
means that this particular factor will not be counted in
the calculation. However, whilst individual rankers had
the ?exibility to choose whether to apply a criterion or
not, it was reported that the bulk of analysts would use
most of them:
So for example, of the standard, I think it is eight criteria
on the two dimensions, eight criteria on each [sic], you
could theoretically get rid of four or ?ve of them, and
just weight it on three – so you could weight something
zero if you want to – but most analysts are using most,
if not all of those criteria, and weighting them to differ-
ent degrees, on every single Magic Quadrant (interview,
Gartner analyst A).
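To make these mechanics concrete, the following is a minimal illustrative sketch (in Python) of how weighted criteria along the two axes might yield a dot's coordinates. The criterion names follow Table 1, but the scores, the numerical values attached to the 'high', 'standard' and 'low' weightings, and the weighted averaging itself are our own assumptions rather than Gartner's actual spreadsheet logic:

# Sketch of a weighted scoring mechanism for a single vendor.
# Criterion scores (1-5 scale) are illustrative only.
completeness_of_vision = {
    "market_understanding": 4, "marketing_strategy": 3, "sales_strategy": 4,
    "product_strategy": 5, "business_model": 3, "industry_strategy": 2,
    "innovation": 4, "geographic_strategy": 3,
}
ability_to_execute = {
    "product_or_service": 4, "overall_viability": 5, "sales_execution_pricing": 3,
    "market_responsiveness": 4, "marketing_execution": 3,
    "customer_experience": 4, "operations": 3,
}

# Weightings chosen by the analyst: 'high', 'standard', 'low' or 'no rating'
# ('no rating' excludes the criterion from the calculation). The numeric
# values are hypothetical.
WEIGHT = {"high": 3.0, "standard": 2.0, "low": 1.0, "no rating": 0.0}

weightings = {name: "standard" for name in
              list(completeness_of_vision) + list(ability_to_execute)}
weightings["innovation"] = "high"
weightings["geographic_strategy"] = "no rating"

def axis_score(scores: dict, weightings: dict) -> float:
    """Weighted average over the criteria carrying a non-zero weighting."""
    weighted = [(score * WEIGHT[weightings[name]], WEIGHT[weightings[name]])
                for name, score in scores.items()
                if WEIGHT[weightings[name]] > 0]
    total_weight = sum(w for _, w in weighted)
    return sum(v for v, _ in weighted) / total_weight

x = axis_score(completeness_of_vision, weightings)  # horizontal axis
y = axis_score(ability_to_execute, weightings)      # vertical axis
print(f"dot plotted at ({x:.2f}, {y:.2f})")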
The primary reason for these changes in calculating places was the increasing pressure exerted by AR consultants and others who were probing ranking bodies – through 'briefings', 'enquiries', 'touches', etc. – to understand the detailed practice of ranking construction. Another reason was the fear of 'litigation'.7 As a result, the production of Magic Quadrants is more regulated so as to create an 'audit trail' (see Free et al. (2009) for a discussion of the auditing of rankings):
. . . individual analysts have to follow the same proce-
dure, and we have to document that, and you have to
have an audit trail of how it was created, and usually
you have to have scoring sheets to demonstrate how
you got to that point but on the actual spreadsheet that
creates the quadrant there is a scoring, a whole scoring
system which is standardised across the whole com-
pany (interview, Gartner analyst A).
Gartner had even gone as far as setting up a ‘Methodol-
ogy Team’ to ensure that the standards for plotting the
graph were maintained across the entire organisation. A
former Director of the Methodology Team describes how
this did bring a certain amount of systemisation in the
work of individual analysts: ‘‘. . . there is some leeway in
the methodology but [the Methodology] team is responsi-
ble for making sure that there methodology is sound and
that it is followed, and that it is updated as technology
changes and as we see things unfold in the marketplace’’
(interview, Gartner analyst C).
Table 1
Evaluation criteria for the Magic Quadrant.
Completeness of vision | Ability to execute
Market understanding | Product or service
Marketing strategy | Overall viability
Sales strategy | Sales execution, pricing
Product strategy | Market responsiveness
Business model | Marketing execution
Industry strategy | Customer experience
Innovation | Operations
Geographic strategy |

7 Gartner has been the subject of a number of high-profile litigation cases, the most recent of which was the 2009–2010 case brought by ZL Technologies Inc., who argued that, because of a low ranking received on a Magic Quadrant, they had been 'defamed'. The case, whilst gaining much publicity, was ultimately unsuccessful.

An analyst notes that this is a more regulated and standardised process than just a couple of years ago. Apparently, individuals had more freedom in the past to plot graphs in different ways. He describes how the old way of calculating Magic Quadrants had both advantages and drawbacks:
. . . they were more comprehensive in those days but
they weren’t consistent. So the way I would have my cri-
teria would be nothing like my colleague sitting next to
me. We weight in a very different way and the dots are
arrived at very differently. And the vendors didn’t like
that. The vendors didn’t like being top right in one and
bottom left in another and not knowing why. Often that
was because they were trying to negotiate about how
they were treated (interview, Gartner analyst A).
Magic Quadrants were more comprehensive because
vendors could be scored according to criteria the individual
ranker felt was important at the time or relevant to the
speci?c circumstances. However, this meant the process
of plotting the dots differed widely across the ranking
organisation. This seemingly caused problems for Gartner’s
relationship with vendors who wanted greater clarity and
uniformity around scoring mechanisms. One analyst notes that, because the process of placing dots was now similar across Gartner, certain aspects of the ranking construction process had 'improved'. However, he was also of the view that not all these changes in production were leading to improvements in the overall 'quality' of the Magic Quadrant:
. . . the purpose of the Methodology Team, and the pur-
pose of all these extra steps, and more rigorous proce-
dures, is to improve quality. The question really is
about what quality means? And I would argue that
the de?nition of quality being used there is about con-
sistency, repeatability and audit trail. It is that level of
quality. In other words, we have a process, we’re follow-
ing it, no one is getting out of the process (interview,
Gartner analyst A).
Improvements, in his view, were related to control over
the process and the repeatability of the same evaluative
measures. He then goes on to describe why he thought Magic Quadrants were better in previous years:
So I would argue that the value of the Magic Quadrants’
ten years was actually better, even though they were
less accurate in some ways . . . there were bigger move-
ments on Magic Quadrants from year to year. But the
point being made was that analysts’ were changing
the weightings much more dramatically to re?ect what
the customers were telling them. Now we re?ect the
customers . . . less well, because we have to go through
a lot more steps to re?ect what the customers are ask-
ing. So it is an interesting trade-off really. Who is the
value for? (interview, Gartner analyst A).
His point is that there used to be more ‘movement’ on
the ranking at each new release. Since individual rankers
had the freedom to set criteria and plot dots this re?ected
what these ‘unnamed clients’ were actually telling them
about vendors. By contrast, today, even though an analyst
might hear critical comments about a vendor, these may
not be so easily re?ected within the Magic Quadrant (they
may fall outside of the publicly available criteria). The clear
impression we gained from our interviewees was that, in recounting these moves towards transparency and standardisation, they were also describing a decrease in their own discretion. In order to attempt to remove the
idea of bias and partiality from the ranking, individual ana-
lysts were now increasingly circumscribed by a new mate-
rial and organisational reality (increasingly explicit
assessment criteria, a methodology team scrutinising their
work, the need to provide explicit evidence for choices, a
spreadsheet that plotted dots, etc.). We now turn to look
in more detail at these constraints.
Actors are constrained in producing rankings
We want to show how dot-ology relies on an extensive
organisational apparatus that patterns the activities of
individual rankers in placing dots. Below we focus on
two particular aspects: technology and bureaucracy.
Technology
The spreadsheet has become a central feature of the
production of Magic Quadrants. Law (2001) argues that
spreadsheets are among those technologies that help cre-
ate powerful actors (through allowing them to manipulate
data so as to see and project things that others cannot).
However, at Gartner, the spreadsheet appeared not to be
a malleable tool but one that placed limitations on individ-
ual rankers. For instance, when information had been input
into the spreadsheet and the graph plotted it was then dif-
?cult, if not impossible, to move a vendor: ‘‘. . . you just
can’t put the dots where you want. The dots are all related
to each other. So if you move one score up it impacts all the
dots on the chart’’ (interview, Gartner analyst A). A vendor
might be moved if the analyst thought the calculative
apparatus had failed to position a dot in the way s/he con-
sidered ‘fair’. Fair meant a placing that re?ected the indi-
vidual ranker’s own knowledge as opposed to that which
results from the ‘organisational machinery’. However,
moving a vendor once a graph had been generated would
create further movement across the ranking. One small change could affect the position of all vendors, and this would almost certainly attract the attention of colleagues elsewhere in the organisation.
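One way of reading this interdependence is sketched below. It rests on the assumption, implied by the interview rather than documented by Gartner, that plotted positions are expressed relative to the full set of raw scores; the vendor names and numbers are hypothetical:

# Sketch: if dot positions are expressed relative to the set of raw scores
# (here, simply as a fraction of the highest score), then moving one score
# up changes where the other vendors are plotted on the chart.
raw_scores = {"VendorA": 62, "VendorB": 70, "VendorC": 81, "VendorD": 90}

def plotted_positions(scores: dict) -> dict:
    top = max(scores.values())
    return {vendor: round(score / top, 2) for vendor, score in scores.items()}

before = plotted_positions(raw_scores)
raw_scores["VendorD"] = 98            # one score is moved up ...
after = plotted_positions(raw_scores)
for vendor in raw_scores:             # ... and the other dots shift with it
    print(vendor, before[vendor], "->", after[vendor])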
For this particular analyst, this was further evidence
that dots were not arbitrarily placed but that individuals
were constrained by the scoring mechanism and technol-
ogy. The analyst then goes on to describe how one of the few changes they could actually make to the graph was to:
. . . move the box around a bit. So, in other words, if all
the dots are clustered in the centre you can reset the
axes to get the box more spread out so they look more
attractive. Otherwise, you would have a scale where all
the dots are clustered around the centre or clustered
around one spot. The idea there is just to make them
spread out so you can actually read who compares to
whom. So, there is a little bit of ?exibility on the edges,
but frankly, you can’t really rig it anymore (interview,
Gartner analyst A).
Analysts had the freedom to adjust the scale within the
spreadsheet but not speci?c dots. If vendors were all clus-
tered together, it was possible to adjust the box to create
distance between them. That is, to enhance or develop a
greater distinction between the entities ranked than was
initially revealed in the spreadsheet. This was apparently
an attempt to make the rankings more ‘attractive’ (a point
we develop in detail below).
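The adjustment being described, resetting the axes so that a cluster of dots spreads out to fill the chart, might be sketched as follows; the coordinates and the padding value are hypothetical rather than drawn from Gartner's practice:

# Sketch: dots clustered in the middle of a fixed 0-100 chart are spread out
# by resetting the axis range to the span of the data itself (plus a small
# margin), so the same dots occupy the whole box and become easier to read.
dots = {"VendorA": (48, 51), "VendorB": (50, 49),
        "VendorC": (52, 53), "VendorD": (47, 54)}

def reset_axes(dots: dict, pad: float = 1.0) -> dict:
    xs = [x for x, _ in dots.values()]
    ys = [y for _, y in dots.values()]
    x_lo, x_hi = min(xs) - pad, max(xs) + pad
    y_lo, y_hi = min(ys) - pad, max(ys) + pad
    # Re-express each dot as a fraction of the new, tighter axis ranges.
    return {v: (round((x - x_lo) / (x_hi - x_lo), 2),
                round((y - y_lo) / (y_hi - y_lo), 2))
            for v, (x, y) in dots.items()}

print(reset_axes(dots))  # the same relative order, now spread across the box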
Bureaucracy: the review process
There was reportedly increased scrutiny of the work of
the rankers. The Methodology Team dictated that rankings
should pass through various kinds of review. This included, firstly, the discussions analysts would have amongst themselves. Most Magic Quadrants were produced by more than
one individual, meaning that the ranking emerged from a
consensus amongst a group of authors. There was also a
‘peer review committee’ where analysts from the same
technology area would scrutinise the calculation. Accord-
ing to one analyst, it was now practically impossible to
‘rig’ Magic Quadrants because they were subject to so
much scrutiny:
If you have sat down and set the criteria out – I suppose
mentally you could if you sat down – but there is a lot of
heart felt discussion that goes on between usually a
couple of the authors and, there is usually two authors,
one author, sometimes two on each, and then there is a
team of maybe three or four who are very closely
involved (interview, Gartner analyst A).
Moreover, in recent years, a further check was introduced whereby the placement of the larger vendors was given an additional round of review. It was inspected by what was called a 'lead analyst' within Gartner. This
was someone who had overall responsibility for research
produced on speci?c vendors:
But now there is something else that happens as well.
Say there is ?fteen vendors on the Magic Quadrant,
you might have lead analysts on some of the biggest
vendors out there. So for the biggest vendors we tend
to have a lead analyst on them to keep a consistent
viewpoint of the whole vendor. So they might be in
ten different areas of technology and one analyst will
have an overview across the whole lot. So if there is
any form of escalation or, you want to go to one person
and say ‘give me an overview of that whole vendor’.
And they are a sixty billion dollar company or some-
thing, you’ve got somebody with a view across the
whole company. Those people have to review where
the dot is and what the wording of the text is (inter-
view, Gartner analyst A).
One final part of the review process was that graphs were also sent out, prior to publication, to the vendors themselves, who, in turn, were free to comment. A consequence of
this, according to an analyst with responsibilities for the
Gartner Ombudsman of?ce, was that this often led to
‘thorny’ interactions between Gartner and the vendors:
. . . a thorny one would be a vendor is dissatis?ed or
believe that they haven’t been treated objectively in
a . . . Magic Quadrant . . . So a typical issue might be well
I am too far down and to the left and I deserve for my
dot to be higher and more to the right. So they’ll come
to us and say I haven’t been treated fairly (interview,
Gartner analyst C).
Interestingly, it was not only in the management of
existing Magic Quadrants where various new kinds of
bureaucratic measures could be found. They were also vis-
ible in other aspects of the ranking. In particular, this was
in the creation of new Magic Quadrants. Developing a new
ranking turned out to be more dif?cult than in the past be-
cause a ‘committee’ had now been put in place to approve
them:
Before you could just do it. 10 years ago you could just
create one if you wanted to. You just had to negotiate
with the boss. But now you have to go to a committee.
There is a senior research committee that has to
approve all new proposals for Magic Quadrants. So
you have to justify there is a market, it’s big enough,
it’s growing at this rate, there’s lot of market clients,
here’s the enquiry volume coming from the customers,
‘OK then, you’ve got a Magic Quadrant’ (interview, Gart-
ner Analyst A).
Asked whether this particular analyst had been involved in or had seen such a committee, he replied that he had observed the workings of a number of them at close hand. In
particular, in recent months, he had seen a committee for
a type of development called ‘Social Software’ (discussed
in more detail below): ‘‘I didn’t go through the committee
but I saw the forms you have to ?ll in, and you have to go
to a meeting, and you have to in effect propose it and nego-
tiate why it has a right to exist’’ (interview, Gartner analyst
A). Added to this, and this is where we get to the substance of our argument, there was a further reason why setting up a new Magic Quadrant had become difficult. It appeared that the affordances and constraints of the device itself were a mediating feature.
Affordances and constraints of the ranking
Creating a Magic Quadrant was reported by those we
interviewed to be ineffective at certain key times in a tech-
nological lifecycle. It was said to be dif?cult to set a ranking
up at the outset and then during the more mature stages of
the career of a technology. There could be dif?culties in the
initial stages of the launch of a new technological ?eld be-
cause there might simply be too many vendors. An analyst
describes how:
When there is a 100 [vendors], that’s not very good for
us . . . because then [the market] is not mature enough
for us to actually say, so what we are doing is watching
that very carefully, and going, I will give you an exam-
ple, Social Media Monitoring devices. There is tonnes
of them at the moment (interview, Gartner analyst A).
When asked to explain why the presence of too many
vendors was problematic our respondent replies:
‘‘. . . graphically, you can’t, [. . .] we’ve done it, you can have
a 100 dots on the chart but it is unreadable. It is just gar-
bage. It is just a bunch of dots’’ (interview, Gartner analyst
A). In other words, if all players producing (or claiming to produce) a new technology were to be included, then the graphs would be too cluttered. There would
just be too many dots and vendor names on the device.
This would presumably create confusion for those
attempting to consume and make sense of the ranking
(see Fig. 2).
Another analyst notes that, at the outset therefore, Ma-
gic Quadrants may not be very useful for those seeking in-
sights into developing trends: ''possibly if you have 200 vendors in the space that is probably not the right time to do a Magic Quadrant'' (interview, Gartner analyst B). The first analyst goes on to describe how, equally, too few
vendors is also a problem: ‘‘And likewise when there is 3
dots on it, it is meaningless. What’s the point of having a
Magic Quadrant with 3 dots?’’ (interview, Gartner analyst
A). Too few dots meant that little is being described in
terms of how the market is developing (see Fig. 3). The
analyst gives a recent example:
[Fig. 2. Too cluttered: a Magic Quadrant (axes: Completeness of Vision and Ability to Execute; quadrants: Niche Player, Challengers, Visionaries, Leaders) crowded with dozens of vendor dots.]
[Fig. 3. Too empty: the same chart with only a handful of vendor dots.]
. . . we used to do things like operating systems. . . But
when Microsoft started dominating operating systems
on desktop or desktop applications it was pointless hav-
ing 4 dots on a chart . . . But the ones that I have seen
that have gone, have basically just dwindled to a point
where through mergers and acquisitions they are down
to less than 8 vendors, and the colleagues all turn
around and go ‘what was the point in that?’. The clients
don’t read them anymore, they are not so interesting.
The only people who read them then are clients who
want to justify what they are already doing – it is an
insurance policy kind of thing. But their value is very,
very low. The dots hardly move. And nobody is very
interested (interview, Gartner analyst A).
In contrast to the situation where there were too many
or too few vendors, those analysts that we had interviewed
had come to realise that there was an ideal number of dots
that could be pictured at any one time:
So, I would argue that Magic Quadrants are almost like,
if you imagine a market always going theoretically
going a 100 down to 10, to 5 vendors or something as
it consolidates and the barriers to entry get put up by
the incumbent. Gartner’s Magic Quadrant is the beauti-
ful picture when you have gone down to about 20, 25 to
15, or 10, and then once you go below that it ceases to
be useful. And before that it is not particularly useful
(interview, Gartner analyst A).
The ideal number is somewhere between 10 and 25
dots. This is what this individual ranker identi?es as the
‘beautiful picture’. Another analyst makes the same point:
‘‘Typically, we would cream off all the vendors by inclusion
criteria, and we work that in a way so that there is 20, 25
dots’’ (interview, Gartner analyst B). It is seemingly a beau-
tiful picture because the graph is neither too crowded nor
too empty. It is also a beautiful picture because it appar-
ently keeps Gartner in the ‘game’ so to speak:
So, while it is in that sort of state between about 25
down to maybe 10 vendors, there is a choice, there’s a
multiple different dimensions to it, and different ways
of evaluating, how you write each vendor up. There is
complexity in it, and therefore there is a game for us
to play (interview, Gartner analyst A).
To summarise, dot-ology captures some of the interac-
tion between the social and material aspects of producing
a ranking. For instance, whilst (technically) it might have
been possible to move individual placings on the spread-
sheet, the analysts were constrained by the (social) review
process where a moving dot would have to be explained
and justi?ed. Alongside this, the affordances of the Magic
Quadrant meant that creating the ?guration was dif?cult
both at the outset and at the end of a technological evolu-
tion. At the outset, there were simply too many players and
at the end, because the market had consolidated, there were too few. The individuals we interviewed appeared to agree that their experience had shown them that there was an optimal number of vendors that could be represented. In other words, the Magic Quadrant set limits on
the kind of competitive space that could be created – and
this was what one individual called the ‘beautiful picture’.
In terms of teasing out what the rankers were attempting
to achieve we ?nd Miller and O’Leary’s (2007) ‘pro-
grammes’ and ‘technologies’ framework useful. Pro-
grammes refer to the conceptualisation and envisioning
of a domain so that it might become open to calculation
(the ‘beautiful picture’), whereas technology refers to the
various interventions that are made through a range of de-
vices so as to bring about such ordering. We now turn to look at such interventions.
How the ranking encourages actors to intervene in the wider
economy (dots moving markets)
Capturing the beautiful picture
The constraints dictated by the matrix appeared not
only to have a spatial but also a time-related dimension.
Although Gartner had identi?ed the picture that furthered
their interests and those of the market, this particular com-
petitive space appeared temporally bound. At times, the
number of players in an emerging ?eld was changing so
fast that Gartner could not capture the picture. Sometimes
they were simply too slow to react to it, or, by the time they
had reacted, the beautiful picture had long gone. To illus-
trate this point we include the comments of an analyst
talking about the case of ‘Web Analytics’:
Sometimes they move through so fast that . . . Gartner’s
Magic Quadrant never quite . . . hits it. And a good
example of that would be Web Analytics where . . . it
was 68 vendors about 4 years ago and now there is
about 20 or so. But there is only 3 big ones who control
a vast majority of the market, followed by Google which
is free and then there’s a couple of specialists. So really
to have a Magic Quadrant with about 5 or 6 on, there is
not much point anymore. So it went from 68 to 6 in
about 3 years and so there was little window there
where Gartner could have managed to get a snapshot
of the market when there was 20 in, but then it was
gone (interview, Gartner analyst A).
In this case there were initially too many vendors and
then later too few for them to ‘get a snapshot’ of the mar-
ket (Web Analytics just passed them by). The ranking orga-
nisation was unable to capture the beautiful picture. This
was because the particular technology ?eld was too fast
moving for Gartner to mobilise its large organisational
machinery in a timely fashion (these were the standard-
ised processes, committees, review cycles described
above). If this was the case for Web Analytics, it seems also
to be true for a new kind of technology called ‘Social
Software’:
So a classic example is Social Software at the moment
where there is a team of 7 or 8 analysts in Gartner
now on that area . . . But Social has been around for –
you know Facebook and all that stuff – has been around
for quite a few years now . . . What happened was they
went: ‘Wait a minute people are making money in that
area’ . . ..I don’t mean Linked-in and that, they are not
making money, but the stuff companies are buying to
manage social networks or to deal with social networks.
They are starting to invest and there is companies piling
into that area and Gartner is going, at some point Gart-
ner – I think it was 18 months ago – Gartner went ‘Oh
my god. We’re late. Go. Boom!’ (interview, Gartner ana-
lyst A).
Here the analyst ?nishes the conversation by noting
how, in contrast to other smaller industry analysts and
market commentators, Gartner were typically ‘late’ with
their ranking:
An analyst will take it upon themselves and say ‘that’s
mine’, and they will go leap after it. Then a couple will
follow them and they will go after it. So we are, that’s
why I say that . . . we are not setting the pace. The only
time we do set the pace is when we are quick followers I
think is the best way I would describe it and we are use-
ful in that we bless things (interview, Gartner analyst
A).
Capturing the beautiful picture was also dif?cult be-
cause the grouping could simply no longer exist. That is,
there was once a vibrant competitive space but now, be-
cause of mergers and takeovers, failures and collapses,
and so on, there remained only a few competing players
within a market. When this happened, the only solution
apparently was to withdraw a Magic Quadrant:
I haven’t seen many [retired] recently because analysts
don’t like giving up turf but, it tends to be where you
have got down to just a handful like 5 vendors in a mar-
ket . . . So, there is no formal process that says we
review them and anyone with less than ‘x’ dots gets
shot. It is more that the analyst knows that and goes
and ?nds a new market to go cover and research, if they
are bright, which they usually are. So often you ?nd an
analyst has 2 Magic Quadrants: one old one that is
dying; and then they got another one with a slightly dif-
ferent de?nition which has a newer and more buoyant
market. And then eventually they stop doing that one,
but there is no formal process as far as I understand it
(interview, Gartner analyst A).
If a Magic Quadrant is ‘old and dying’, an analyst may
then decide to ‘retire’ it. What all of this suggests is that
the ranking organisation was not completely passive in
searching for the beautiful picture. If the beautiful picture
was not there then the Magic Quadrant prompted them
to set about trying to create one.
Creating the beautiful picture
The affordances and constraints of the Magic Quadrant
were such that it could encourage rankers to attempt to
make interventions in/to markets. During our research,
for instance, we noted how the ranking organisation ap-
peared to have at least two strategies for creating beautiful
pictures. The ?rst of these is related to the standardised
evaluation criteria described above. When there are too
many vendors to be included in a Magic Quadrant, for in-
stance, an individual ranker will continually set and reset
these criteria in order to reduce the competitive space.
One analyst describes this by talking through the example
of Social Software:
There is a lot of discussion [internally within Gartner]
about . . . what stage do Magic Quadrants have in a life-
cycle of a market? And they are not good at the start of a
market; they are hopeless! When a market is in its ?rst
couple of years and there is, Social Software and I’m
looking at Social CRM at the moment and we’ve identi-
?ed 92 vendors in the last three days. Can’t put 92 dots
on a chart! So, it is pretty clear that we will set some
high criteria to cut people out. And that is what the
big debate will be about is how you set those criteria.
But two years ago there was probably more than that.
It all depends on how you de?ne that market (inter-
view, Gartner analyst A).
To paraphrase the words from above, these criteria are
usually set around ‘quantitative’ aspects as well as more
‘qualitative’ elements. These will then be set and reset to
‘cut people out’. The second strategy is to divide spaces
up to get the required picture. An analyst describes how
this is done: ‘‘[c]learly there is a kind of optimal number
of dots on a chart which Gartner kind of ends up almost
dividing markets up in order to get that number of dots
on a chart, which is readable, which is about 15 to 25’’
(interview, Gartner analyst A). The analyst acknowledges
not only that Gartner reduce the market down, but that
they reduce it down to a particular size: ‘‘So in effect you’ll
?nd almost every analyst is setting the criteria, the bounds
– not consciously really but we are doing it – to get 15 to
25 dots. Because if it drops to 5 dots, there’s 5 vendors in
this market, it’s highly consolidated, so why would they
ring us?’’ (interview, Gartner analyst A).
Let us unpack more carefully the implications of what is
being described here. Gartner set the bounds of the com-
petitive space so as to arrive at what it thinks is an optimal
number of vendors. Because there are too many vendors in
an area – and since the emerging ?eld cannot be captured
in its entirety on a single Magic Quadrant – analysts will
literally divide markets up. This means Gartner will at-
tempt to create new competitive spaces and distinctions
between technologies. The easiest way to do this appears
to be through the introduction of alternative nomencla-
tures (Pollock & Williams, 2011). During the period of
our research, for instance, we observed how Gartner intro-
duced three new terminologies within the category of ‘So-
cial Software’.
Social Software
Social Software is a relatively new area where there is
currently a great deal of activity and interest as well as
uncertainty. Gartner describe Social Software as the area
where they are ?elding most questions from clients and
prospective purchasers. One key issue is that Social Soft-
ware is something of an ‘umbrella term’ (also described
as ‘Social CRM’ or ‘Social Media’). The problem is that large
numbers of vendors are rebranding their products as ‘So-
cial’ in some way. We attended a Gartner conference in
London, for instance, where an analyst makes this point
to the audience:
Social CRM is a huge topic. There has been tonnes of
calls about it. I am tracking currently about 90 vendors
who have some area of Social CRM. Some vendors are
calling themselves that and they are not. Some people
are that. Some people don’t know that they have it when
they have it. So there is a lot of movement going on as
people try to make sense of just Social Media in the ?rst
place, and that is a hard nut to crack: ‘What is Social
Media?’ (conference presentation, Gartner analyst D).
In the last couple of days alone, Gartner had identi?ed
nearly a hundred new players claiming to offer some kind
of Social Software. There appears to be a need within the market for some form of clarity. Gartner's response therefore has
been to break this technological ?eld down into further
sub-segments. They have de?ned Social Software as con-
taining: ‘Social CRM’, ‘Social Software in the Workplace’,
and ‘Externally Facing Social Software’ (EFSS). Another
Gartner analyst presents the rationale for these splits dur-
ing a presentation:
. . . we initially had one Magic Quadrant for Social Soft-
ware and it really covered quite a few different technol-
ogies. Increasingly . . . we have been looking to split that
up because, as the market matures, we start to see some
of the kind of submarkets or other kinds of segmenta-
tion . . . these Magic Quadrants that are being issued in
2010, we’re building on the Social Software in the Work-
place which is looking at how these kinds of ideas can
be used behind the ?rewall . . . [t]he newest one that
was released was EFSS or Externally Facing Social Soft-
ware. What that is essentially doing is going beyond
the ?rewall. . . Now we also see the public social media,
and I will also be talking about in a moment the Social
CRM Magic Quadrant, that is the third one which we
are releasing (conference presentation, Gartner analyst
E).
Out of one category, and because of the dif?culty of rep-
resenting all the possible vendors in the Social Software
Magic Quadrant, they had crafted three new (sub)spaces.
Creating these new kinds of technological categories
turned out not to be a straightforward process as we show
in the ?nal empirical section.
The pragmatics of making meaningful distinctions 8
One way to bring a new competitive space to life seems
to be to create Magic Quadrants for them. However, during
a presentation, a Gartner analyst notes some of the dif?cul-
ties surrounding the pragmatics of doing this – particularly
in separating out the Social Software category and making
clear distinctions between the vendors operating within it. The three new categories are presented on a slide as cir-
cles that overlap with each other:
Across these different segments you can see some
examples of the kinds of vendors that we see. You can
also see that these circles do kind of overlap. We do
see that there are some vendors that are active in sev-
eral different markets and that is re?ected also when
we start looking at the Magic Quadrant. There are ven-
dors that are present on several of the Magic Quadrants
and a couple who really are active on all three. Now -
. . . when we ?rst started doing this analysis and we ?rst
started looking at the criteria we actually were . . . a lit-
tle afraid that [we] would see a great deal of overlap
(webinar, Gartner analyst F).
The analyst notes how there were vendors producing
software that could be counted as belonging to all three
categories. Their fear was that there would be a great deal
of overlap. However, he goes onto say, there turned out to
be fewer than anticipated: ''. . . the overlap we had in the final publication is really quite small. There is only a couple
really that appear on several different ones’’ (webinar,
Gartner analyst F). The reason for this was how Gartner de-
?ned the evaluation criteria: ‘‘And parts of that is down to
how we de?ned the criteria and what were the criteria and
quali?cations for being included in each Magic Quadrant’’
(webinar, Gartner analyst F). Setting and resetting the cri-
teria meant that the rankings plotted exactly as they
should do!
This pragmatics of making meaningful distinctions can be
seen more speci?cally in the creation of the Social CRM
Magic Quadrant. Here an analyst describes the dif?culty
Gartner have had in producing this particular ranking:
‘‘We’re in the process of creating a Magic Quadrant for this.
There isn’t one yet. . . It is a very onerous task because so
many of these vendors are very new and hard to de?ne’’
(conference presentation, Gartner analyst G). Some months
before the release of the Social CRM Magic Quadrant, an
analyst speculated about how many vendors would be in-
cluded. He shows the audience not the Magic Quadrant but
a ‘list’ of some representative vendors:
Again this is a representative list – we are checking out
80 or 90. I think we are probably going to come out to
25 to 30 based on the criteria. One thing that we are
looking over is vend over ?ve million and putting in
things like ‘Are we being asked about you?’ So, there
is a lot of things in here . . . (conference presentation,
Gartner analyst G).
He makes clear the quantitative and qualitative evalua-
tive criteria to be used. He also notes the use of ‘the list’,
which he views as a stand in for the real ranking, which
has yet to be devised. When, a few months later, the Magic
Quadrant is published, the same analyst describes the ?nal
number:
Gartner just got finished with a Social CRM Magic Quadrant. We started with about a 120 vendors that we
looked at. Many vendors had some sort of social aspect
included in their CRM – Social CRM aspects to it. We,
?nally, we were left with around ‘19’ for various rea-
sons that I will discuss (webinar, Gartner analyst G).
To summarise, evidence shows that when faced with a
large number of vendors claiming to work in a new technological field, in order to create a competitive space, Gartner
set the evaluation criteria to reduce the numbers of ven-
dors included within each space; this is done by dividing
up the ?eld into new competitive groupings. If the beauti-
ful picture that Gartner desire is not there then they set
about trying to create it. Dot-ology therefore also captures
the strategies deployed to influence the setting that the ranking describes. This pragmatic work is complex. The rankers struggle to differentiate between vendors within classifications; this is because they are imposing boundaries onto the market, and this can create difficulties. Many vendors, for instance, could be included in more than one specific ranking. Deciding where a particular instance sits across a number of technology classifications therefore requires taking an explicit decision, which often proves to be an ambiguous process.
8 We thank Robin Williams for suggesting this formulation.
Discussion
According to Espeland and Sauder (2007, pp. 36–37) the
‘proliferation of public measures of performance’ is one of
the most ‘important and challenging trends of our time’
(see Jeacle and Carter (2011) who relate this point, through
a discussion of rankings, to the core concerns of Account-
ing scholarship). The starting point for this paper was the
suggestion that these measures wield forms of in?uence
that have yet to be identi?ed by existing forms of analysis.
Whilst there are a growing number of studies that analyse
the power of rankings, some from within Accounting re-
search (Free et al., 2009; Jeacle & Carter, 2011; Kornberger
& Carter, 2010; Scott & Orlikowski, 2012), others from out-
side this area (Pollock & Williams, 2009; Blank, 2007; Espe-
land & Sauder, 2007; Karpik, 2010; Kwon & Easton, 2010;
Shrum, 1996; Wedlin, 2006), very few have provided in-
sights into their makeup and minutiae (but see Schultz
et al. (2001) who point to some aspects of their construc-
tion). One implication when a crucial market mechanism
is black-boxed is that we only ever develop a partial under-
standing of its constitutive capacity. A tendency when
faced with an incomplete vantage point is to raise the
importance of those aspects of the phenomena that can
be studied (Pollock & Williams, 2009). Speci?cally, rank-
ings are seen to in?uence domains through changing the
way actors make sense of and interpret the world (Espeland
& Sauder, 2007; Kornberger & Carter, 2010; Wedlin, 2006).
We have worked up the idea of a 'ranking device' to capture how, alongside the way rankings cause people to adapt behaviour, graphic format and furniture can also be significant. Taking the example of an influential perfor-
mance measure from within the information technology
sector, we have shown how, in ways that are both social
and material, this ranking has shaped the market for vari-
ous technologies. Through describing how the ranking
brought together and counterposed players in a ‘competi-
tive space’, the paper considered three related aspects of
the sociomaterial shaping of that space. Firstly, we focused
on attempts by those technology vendors ranked by the
assessment to affect the shape of the competitive terrain.
Our evidence suggested that, because the ranking created
the space by which various players could compete with
each other (Kornberger & Carter, 2010), vendors were ad-
vised to adapt and orient themselves to the nuances and
measures of the ranking. These included employing strate-
gies to help improve their position and weaken that of
competitors. The players were therefore brought together
into one space, and, importantly, with the help of new
forms of expertise, this space appeared tractable.
Secondly, whilst our initial discussion emphasised the
social strategies at play (‘people moving dots’), we later
introduced the theme of material agency. We demon-
strated the sociomaterial constraints surrounding the
shaping of the competitive space (‘dots were moving peo-
ple’). We saw this in relations between individuals and the
ranking organisation and then between the ranking organi-
sation and the market. Until recently within the ranking
organisation, individual rankers could wield notable
amounts of discretion in placing vendors. More recently
however, because of moves towards transparency and
standardization, there had been changes in ranking prac-
tices (the discretion of individual rankers had become
entangled in and increasingly sti?ed by layers of technol-
ogy and bureaucracy). Added to this, the graph itself (its
affordances and constraints) also placed limitations on
how the competitive space could be captured and repre-
sented. The rankers could not capture and represent all
the players in a market on one graph. This meant they were
forced to adopt alternative strategies.
Thirdly, we showed in particular how the rankers, as a
result, were required to intervene directly in the market
to attempt to shape the competitive space to account for
the limitations of the two-by-two matrix. This meant they
did not use the graph to represent a competitive space con-
ceived prior to its inclusion in the ranking. Rather, they
conceived of new competitive spaces – better still, were
forced to conceive of these spaces – through taking the
capacities of the ranking into consideration. We could say
that the ranking prompted such an intervention and that
this was a prompt that individual rankers appeared willing
to accept. Rankers would thus attempt to modify the com-
petitive space to ?t the ranking (rather than the other way
around). It is speci?cally this aspect – a situation we con-
ceive of as ‘dots moving markets’ – that identi?es one of
the main contributions of the paper.
New visual and temporal dynamics
We propose that graphical performance measures (and
?gurations more generally) contribute a powerful instance
of the process by which markets and material things mutu-
ally constitute one another (Callon et al., 2007; MacKenzie,
2009; Miller & O’Leary, 2007; Pinch & Swedberg, 2008). We
attempted to get at this through analysing the interactions
between ‘programmes’ and ‘technologies’. These refer to
the imaginings and conceptualisations of an arena and
the various devices and inscriptions that mediate and
shape these envisionings such that a domain may be acted
upon and calculated (Miller, 1998; Miller & O’Leary, 2007).
We studied the production of the ranking not as ‘knowl-
edge’ but as a ‘practice’. This is to consider the idea of a
ranking not in an abstract representational idiom (Espe-
land & Sauder, 2007; Kornberger & Carter, 2010), but one
which captures the nuanced interplay involved between
the conceptualisation of a market domain and its incorpo-
ration within various format and furniture. What our anal-
ysis sought to show was how these devices both shaped
and were shaped by the market. In particular, the format
and furniture helped create a new visual and temporal dy-
namic within the IT domain.
Visual dynamic
We say visual dynamic because the ranking organisa-
tion attempted to specify what a market should look like.
They sought a conceptualisation that made the informa-
tion technology domain amenable to calculation (Miller,
1998; Miller & O’Leary, 2007). This meant they strove to
produce a ranking that would allow everyone to see and
compare how one technology vendor was performing in
relation to another, in the most straightforward manner,
where there were neither too many nor too few players in
the competitive space. They apparently found the optimal
number that could be included and this represented the
‘beautiful picture’.
What is the beautiful picture?
The beautiful picture is part of what we might think of
as an ‘aesthetic economy’ operating within the ranking
organisation. This is not to say that it is the picture of an
ideal or perfect market (cf. Garcia-Parpet, 2007). Rather,
it is the result of a negotiated, devised and contrived inter-
vention. The beautiful picture was a set of compromises
negotiated between the imaginings and conceptualisations
of the ranker and the sociomaterial possibilities of the
ranking. Material affordances potentially allowed for the
placing of many vendors on a graph but (conventional)
constraints meant that the rankers could not overburden
the picture (Quattrone, Puyou, McLean, & Thrift, 2012).
This would not only produce a figuration that would be difficult for clients to understand but also give the impression of an overly complex market (and this would have
adversely affected the aesthetic economy deemed crucial
by the rankers). Thus, the ranking was also conventionally
devised (Espeland & Stevens, 1998): there were not only
material aspects limiting the construction of the competi-
tive space but also ‘social’ ones (David & Pinch, 2008).
The ranking was also a contrived ?guration for bringing
about certain kinds of (potentially contradictory) results. It
was necessary to reduce the level of ‘confusion’ for deci-
sion makers and practitioners (there could not be too many
dots). However, there could never be too few players on a
graph because this would simplify the market to the point
of undermining the need for further consultancy advice. If
everything appeared straightforward, why would people
continue to seek the ranker's expertise? The beautiful picture was one that kept this ranker in 'the game' so to speak (for a discussion of the problems of creating and maintaining a market for expertise see Barrett and Gendron (2006)).9
Attempts to engineer the beautiful picture were consequential for the shaping of the market. They meant the ranking was not neutral with regard to what constituted a competitive space. It appeared ill suited to new, fast-moving areas, for instance, where there were many new entrants. Whilst individual rankers
could spot vendors entering an emerging category, in prac-
tice, they could not capture or represent them within the
ranking (the ?guration lacked the affordances of a list in
this respect). This issue resembles what Lynch (1985, p.
43), talking about scienti?c graphs, has called the ‘problem
of visibility’. Scientists determine what is ‘natural’ based
on what their graphs are able to depict. Translated to our
concerns, this means that the rankers decided what a mar-
ket ‘is’ – the competitive space: which players make up the
market, the boundaries of the ?eld, etc. – partially based on
what the ranking was able to capture and communicate.
This clearly evidences how information technology mar-
kets today are a product of format and furniture as much
as any other calculative aspect of this particular ranking.
What was also salient about our study was the ?nding
that, if the beautiful picture could not be captured, then
the ranking organisation would try to create it. Because
the graph was seen to embody key features of the markets
under analysis, efforts were made to intervene in compet-
itive spaces, so that the characteristics of these spaces were
congruent with the affordances of the ranking. From ?eld-
work, we saw how rankers performed this in one of two
ways: through limiting the number of vendors operating
in a particular competitive space or by creating entirely
new spaces. They performed the former through setting
‘inclusion criteria’ and the latter by attempting to divide
technological ?elds into new designated areas of activity
(with their own unique nomenclature, de?nition, inclusion
criteria, Magic Quadrant, etc.). The designation of a new
technological ?eld of activity, or ‘competitive space’ as
we have called it here, is not trivial. It can draw boundaries
around a set of artefacts and their suppliers and create a
space in which sorting and ranking becomes possible. If taken up, it can go on to provide crucial resources and constraints within which vendors and management and technology consultants articulate offerings. It can, in other words, become a fully-fledged market in its own right (Pollock & Williams, 2009, 2011).10
One problem the ranking organisation now faces in competitive-spaces-constructed-according-to-the-affordances-of-a-ranking is the pragmatics of making meaningful distinctions. Since new boundaries were imposed onto
the space, individual rankers struggled to differentiate be-
tween vendors in these new groupings. This was evidenced
by the fact that certain vendors appeared in all three of the
new Magic Quadrants. This outcome was thought less than
ideal because it suggested a lack of distinction within the
ranking. Similar issues were apparent when the ranker
was forced to intervene because vendors clustered to-
gether. This occurred because the market was converging
or, over time, vendors were conforming to the evaluative
criteria (Espeland & Sauder, 2007), or, as in the case above,
because there was no meaningful distinction to be made.
Clustering was thought problematic because it suggested
that all those on the graph had the same or similar quali-
ties. This was counterproductive because, as in the case
of the oversimpli?ed market, there would be little value
found in the ranking. Decision-makers required the vendors to be graded in a way that signalled a distinction. Without this, why would people contact the ranking organisation, to paraphrase one respondent? A further feature of this pragmatics therefore was the process whereby rankers were forced to devise distinctions by manipulating the organisational machinery (e.g., resetting the axes of the spreadsheet to increase the distance between dots).
9 We owe our thanks to one of the anonymous reviewers for encouraging us to develop this point.
10 To give one example, Gartner coined and went on to shape the Enterprise Resource Planning (ERP) terminology, which subsequently became one of the new paradigms of modern day information systems (see Chapman (2005) for a review of ERP in the accounting area).
Temporal dynamics
We say temporal dynamics because, during ?eldwork,
we were alerted to the fact that the affordances of the
ranking were not static but evolving over time. Espeland
and Sauder (2007, p. 36) discuss how rankings are a ‘mov-
ing target’: as people learn to ‘game’ them, their authors
are forced to update evaluative criteria more or less on a
continuous basis. Whilst this was also a factor in our case,
we note how the ranking was similarly surrounded by a
‘moving organisational apparatus’ (Pollock & Williams,
2009). The Magic Quadrant had begun its career as a rela-
tively informal, subjective ranking but there had been later
(quite vigorous) demands placed on the ranker to recreate
it as a formal assessment subject to auditing (see Free
et al., 2009 for a discussion of these processes whereby
rankings are audited). This meant individual rankers could
no longer grade vendors exactly as they wished. It also lim-
ited their capacity to respond (rapidly) to innovation.
Today, the provision and administration of the ranking
is circumscribed by new technology and bureaucracy. This
has affected the ranker’s ability to produce ‘snapshots’. The
ranking organisation cannot react in time to capture spe-
ci?c innovations. Some beautiful pictures disappear even
before these experts can mobilise their committees,
spreadsheets etc. The pictures are there for a moment
and then they are gone, to paraphrase one respondent. This means that certain technological innovations can completely pass the ranker by. Pockets of the market can re-
main unranked in what is typically a highly graded
arena. We think the instances where ranking devices and
organisational apparatus create situations of 'unrankability' deserve further attention. It is a situation where the market escapes dots.11 This raises the questions: were the markets for these products adversely (or positively) affected? Were the vendors who remained outside the competitive space punished (or rewarded) in some way?
Our evidence also showed how the affordances of the
ranking created cyclical pressures on the ranking organisa-
tion to intervene at certain key moments. The beautiful
pictures they sought were time limited. They were not
there at the outset of an innovation (there were too many
dots to be represented), and nor were they there as the
technology matured (either there were too few dots to al-
low anything meaningful to be said, or all the players
had clustered in the same box). This prompted the ranking
organisation to engineer interventions not arbitrarily but
at certain key points in the lifespan of a technology. This
included, for instance, the moment when a new technolog-
ical ?eld ?rst appeared to emerge and then later as it
matured.
What does a focus on graphic format and furniture show?
Our paper has developed some of the analytical tools to
consider the sociomaterial influence of a ranking. This raises the question of whether a focus on format and furniture draws attention to aspects not visible under social approaches.
Existing modes of analysis give particular emphasis to
how rankings influence people's behaviour. The 'mecha-
nisms of reactivity’ concept (Espeland & Sauder, 2007),
for instance, explicitly captures this through showing
how rankings evoke self-ful?lling prophecies that encour-
age people to adapt their behaviour towards the calcula-
tion. Extending this, we have emphasised how ranking
devices can also play a role through offering speci?c affor-
dances and constraints and encouraging others to modify
the settings within which action takes place. For example,
we have shown how the graphical ranking came to suggest
a particular order for a market, prioritising one market
view over another (a beautiful rather than a cluttered or
sparse picture), which the rankers then set about creating.
The corollary is that a ranking can in?uence a setting dif-
ferently, and perhaps more fundamentally, than previously
thought.
Whereas the point above is about the shape of the land-
scape within which actions take place, we have seen that
there is also a temporal issue. In this respect, our approach
raises the question as to whether a sociomaterial in?uence,
as opposed to simply a social one, is a more enduring form
of in?uence. It could be argued that a ranking located ‘‘in
the back of everybody’s head’’, as Espeland and Sauder de-
scribe (Espeland & Sauder, 2007, p. 11), may only have a
?eeting in?uence whereas one residing in a speci?c format
and furniture can endure inde?nitely. As long as the ranker
retains this particular format and furniture, the order de-
scribed in the device above may continue to produce a par-
ticular shape to the market with little regard to the actions
of individual players at speci?c times.
What we are foregrounding is how processes of mar-
ket making are inscribed in and ?ow from the sociomate-
rial negotiations surrounding a ranking. Clearly the
episodes of market (re)construction described here are very different from the formal accounts preferred by economists, where supply and demand come together to form a price (Callon & Muniesa, 2005). The ranking
organisation described in the paper has a long tradition
of creating new markets through ‘naming interventions’
(see Pollock & Williams, 2011). Many, though by no means all, of these go on to become functioning and
independent markets. We thus offer an example of how
new markets are constituted by the seemingly mundane
constraints of a graph. This also contrasts with those
Accounting scholars who view market creation as the result primarily of 'social interactions'. Kornberger and
Carter (2010, p. 330) write that ‘‘competition is some-
thing that is created out of interaction between market
players’’. Our work, by contrast, has shown how devices
are also party to these interactions (see also Miller and
O’Leary (2007), Robson (1992) and Quattrone et al.
(2012) who similarly highlight the link between devices
and processes of market making). Future inquiry would
be to see whether the arguments set out in this paper
11
Thanks to one of the anonymous reviewers for suggesting this point.
N. Pollock, L. D’Adderio / Accounting, Organizations and Society 37 (2012) 565–586 583
hold true for other areas. Does format and furniture hold
similar implications for other kinds of performance
measures?
Implications for accounting research
Accountancy firms will potentially play an increasing role in the provision and administration of formal and impersonal reputational indices (Free et al., 2009). The last 30 years have seen the emergence of a powerful range of consultancy and professional services organisations that produce rankings of various kinds. Many of these assessments are also being integrated into the 'advisory' (i.e. consulting) elements of the large accounting firms. Whilst we know that the demand for rankings is expanding, we still understand little about the detailed processes by which consultancy firms produce, administer and create a market for these assessments. We have produced a detailed study of how one global consultancy and research organisation constructs a highly successful performance measurement product. Our study thus meets Qu and Cooper's (2011) recent call for more research examining the work of consultants – specifically how they acquire, commodify and apply their knowledge. Our aim was to assess the potential for an empirically grounded characterisation of the process by which such knowledge was produced and communicated. A popular conception of consultants is to see their assessments as based on the vagaries of individual discretion, whereas our recently conducted and ongoing fieldwork suggests that assessments result from more observable sociomaterial and distributed processes. Above, for instance, we have drawn attention to the large machineries of ranking that are in place.
Accounting firms have also been important shapers of the consultancy industry (Christensen & Skærbæk, 2010). However, they have in the main unproblematically adopted many of the innovations generated from within this industry. Qu and Cooper (2011) highlight this specifically in relation to graphic inscriptions. Innovations in figurations will potentially have a number of implications for Accounting research. In particular, whilst there has been a good understanding and theorisation of 20th Century accounting representational devices (see, for instance, Chua (1995) on 'accounting images', and Ezzamel (2004) on factory performance indicators), those of 21st Century accounting are still being formulated.12 In this respect, Qu and Cooper (2011, p. 345) talk of new forms of inscriptions "materialized through different media with different qualities" and cite, by way of example, PowerPoint slides, flip chart pages, emails, strategy maps, and graphics such as bullet points and checklists. These new kinds of inscriptions – another of which, the two-by-two matrix, is described here – may well require scholars to update their characteristic analytical framings and/or to draw on insights from allied disciplinary approaches.
Our work, which sits at the interstices between a number of different disciplinary schools (see Vollmer et al. (2009) for a review of the evolving intellectual interdependencies between Accounting, STS and Economic Sociology), potentially provides insights into how both the graphic inscriptions of accounting and the practices that surround them might change. The capture of business by the two-by-two matrix (Lowy & Hood, 2004), in particular, suggests that figurations are no longer a supplement to but an intrinsic and constitutive part of market settings. Whereas calculative practices have predominantly been conceived of as 'numerical operations' (Miller, 2001), Quattrone et al. (2012, p. 9) argue that more attention will need to be devoted to the 'visual nature of numbers' (see also Justesen & Mouritsen, 2008). We believe our paper meets elements of this call.
Calculative practices turn 'qualities into quantities' (Miller, 2001). In our case, this would be the translation of a subjective opinion about a vendor – rendered through a large-scale ranking apparatus – into a quantity, such as the placing of a dot on a graph. We suggest that the form of 'dot-ology' described here represents a distinctive instance of these kinds of calculative practices. On the one hand, it shows how a calculation can come to be shaped by mundane graphic resources (and vice versa); on the other, it shows that there is an aesthetic element to the construction of visual numbers. In terms of the former, those producing visual numbers may come to determine what is 'calculable' based on what graphs are able to depict. The issue is not simply how corporate and market performance is related to dots (stars, lines, waves, tics, etc.) so that this performance can be revealed and ordered; it is rather how the format and furniture of graphs interact and merge with the calculations. Visual resources constitute calculative practices, such that any numbers that result bear the imprint of graphic sociomateriality.
This latter element is also important because, as Quattrone et al. (2012, p. 9) note, little attention has been given to the 'imaginative power' of an inscription, that is, its ability to envision what business and markets could and should look like. In this respect, we speculate that the two-by-two matrix is different from other formats, such as lists (Cardinaels, 2008), because it creates a particular way of representing and intervening in situations. As one of the premier modes of representing business activities – one only has to think of the 'cost benefit matrix', the 'product and market matrix', the 'BCG Product Portfolio Matrix', and so on – it creates a particular kind of aesthetic economy (Espeland & Stevens, 1998). Through visualising the elements of a competitive situation, such matrices alter the way in which that situation is thought about and acted upon or practised. Their allure is such that the situation appears amenable to intervention. They encourage various forms of co-production, such that settings are modified to become congruent with graphic affordances and vice versa. Ultimately, the predominance of figurations across industries means that their sociomateriality should become a feature of academic study. We call for serious and detailed study of the format and furniture of the major business and accounting visualisations, for it is not simply engines but beautiful pictures that shape economic life.
12 Thanks to Chris Carter for suggesting this point.
Acknowledgements
Neil Pollock would like to acknowledge the support of the Economic and Social Research Council (ESRC), which funded the research presented in this article. It forms part of work conducted under an ESRC Fellowship entitled The Social Study of the Information Technology Marketplace. We would like to thank those industry analysts and others who were kind enough to make themselves available for interview. We gratefully acknowledge the help and advice of the Editor and anonymous referees, who provided very helpful comments on drafts of this paper. Thanks must also go to the following people for providing useful suggestions and ideas during the writing process: Chris
Carter, Sampsa Hyysalo, Ingrid Jeacle, Jannis Kallinikos,
Christian Koch, Irvine Lapsley, Eric Laurier, Donald Mac-
Kenzie, Peter Miller, Eric Monteiro, Susan Scott and Robin
Williams.
References
Akrich, M., & Latour, B. (1992). A summary of a convenient vocabulary for
the semiotics of human and nonhuman assemblies. In W. Bijker & J.
Law (Eds.), Shaping technology/building society. Cambridge, MA: MIT
Press.
Aldridge, A. (1994). The construction of rational consumption in Which? Magazine: The more blobs the better? Sociology, 28, 899–912.
Anand, N., & Peterson, R. (2000). When market information constitutes fields: Sensemaking of markets in the commercial music industry. Organization Science, 11(3), 270–284.
Argyris, C. (1954). The impact of budgets on people. New York:
Controllership Foundation.
Barrett, M., & Gendron, Y. (2006). WebTrust and the ‘commercialistic
auditor’: The unrealized vision of developing auditor trustworthiness
in cyberspace. Accounting, Auditing and Accountability Journal, 19,
631–662.
Becker, H. S. (1982). Art worlds. Berkeley, CA: University of California
Press.
Blank, G. (2007). Critics, ratings, and society: The sociology of reviews. Lanham, MD: Rowman & Littlefield.
Bloomfield, B., & Vurdubakis, T. (1997). Visions of organization and organizations of vision: The representational practices of information systems development. Accounting, Organizations and Society, 22(7), 639–668.
Burton, B., & Aston, T. (2004). How Gartner evaluates vendors in a market. Gartner Document, ID Number: G00123716.
Callon, M., Millo, Y., & Muniesa, F. (Eds.). (2007). Market devices. London:
Wiley-Blackwell.
Callon, M., & Muniesa, F. (2005). Economic markets as calculative collective devices. Organization Studies, 26(8), 1229–1250.
Cardinaels, E. (2008). The interplay between cost accounting knowledge
and presentation formats in cost-based decision making. Accounting,
Organizations and Society, 33, 582–602.
Carroll-Burke, P. (2001). Tools, instruments and engines: Getting a handle on the specificity of engine science. Social Studies of Science, 31(4), 593–625.
Chapman, C. (2005). Not because they are new: Developing the contribution of enterprise resource planning systems to management control research. Accounting, Organizations and Society, 30(7–8), 685–689.
Christensen, M., & Skærbæk, P. (2010). Consultancy outputs and the purification of accounting technologies. Accounting, Organizations and Society, 35, 524–545.
Chua, W. F. (1995). Experts, networks and inscriptions in the fabrication of accounting images: A story of the representation of three public hospitals. Accounting, Organizations and Society, 20(2/3), 111–145.
Cooper, D., & Hopper, T. (Eds.). (1989). Critical accounts. Basingstoke:
Macmillan.
Dambrin, C., & Robson, K. (2011). Tracing performance in the pharmaceutical industry: Ambivalence, opacity and the performativity of flawed measures. Accounting, Organizations and Society, 36, 428–455.
David, S., & Pinch, T. (2008). Six degrees of reputation: The use and abuse
of online review and recommendation. In T. Pinch & R. Swedberg
(Eds.), Living in a material world: Economic sociology meets science and
technology studies. MIT Press.
Drobik, A. (2010). Getting Gartner: How to understand what we are talking about. Presentation given to the Customer Relationship Management Summit, London, 16th March.
Espeland, W., & Sauder, M. (2007). Rankings and reactivity: How public
measures recreate social worlds. American Journal of Sociology, 113(1),
1–40.
Espeland, W., & Stevens, M. (1998). Commensuration as a social process.
Annual Review of Sociology, 24, 313–343.
Ezzamel, M. (2004). Accounting representation and the road to
commercial salvation. Accounting, Organizations and Society, 29,
783–813.
Free, C., Salterio, S., & Shearer, T. (2009). The construction of auditability: MBA rankings and assurance in practice. Accounting, Organizations and Society, 34, 119–140.
Garcia-Parpet, M. F. (2007). The social construction of a perfect market: The strawberry auction at Fontaines-en-Sologne. In D. MacKenzie, F. Muniesa, & L. Siu (Eds.), Do economists make markets? Princeton: Princeton University Press.
Gibson, J. J. (1979). The ecological approach to visual perception. Erlbaum.
Goody, J. (1977). The domestication of the savage mind. Cambridge:
Cambridge University Press.
Hacking, I. (1983). Representing and intervening: Introductory topics in the
philosophy of natural science. Cambridge: Cambridge University Press.
Hacking, I. (1992). The self-vindication of the laboratory sciences. In A.
Pickering (Ed.), Science as practice and culture. Chicago: University of
Chicago Press.
Hind, P. (2004). Self-fulfilling prophecies. CIO, 12 July. Accessed 29.03.06.
Hopkins, W. (2007). Influencing the influencers: Best practice for building valuable relationships with technology industry analysts. Austin, TX: Knowledge Capital Group.
Hopwood, A. (2007). Whither accounting research? The Accounting
Review, 82(5), 1365–1374.
Hutchby, I. (2001). Technologies, texts and affordances. Sociology, 35(2),
441–456.
Ingold, T. (2007). Lines: A brief history. Abingdon, Oxon: Routledge.
Jeacle, I., & Carter, C. (2011). In TripAdvisor we trust: Calculative regimes and abstract systems. Accounting, Organizations and Society, 36, 293–309.
Justesen, L., & Mouritsen, J. (2008). The triple visual: Translations between
photographs, 3-D visualizations and calculations. Accounting, Auditing
& Accountability Journal, 22(6), 973–990.
Karpik, L. (2010). Valuing the unique: The economics of singularities.
Princeton University Press.
Kornberger, M., & Carter, C. (2010). Manufacturing competition: How
accounting practices shape strategy making in cities. Accounting,
Auditing & Accountability Journal, 23(3), 325–349.
Kwon, W., & Easton, G. (2010). Conceptualizing the role of evaluation
systems in markets: The case of dominant evaluators. Marketing
Theory, 10(2), 123–143.
Lapsley, I., & Mitchell, F. (Eds.). (1996). Accounting and performance measurement: Issues in the private and public sectors. London: Paul Chapman Publishing.
Latour, B. (1986). Visualization and cognition: Thinking with eyes and hands. In H. Kuklick (Ed.), Knowledge and society: Studies in the sociology of culture, past and present. Greenwich, Connecticut: JAI Press.
Latour, B. (2005). Reassembling the social: An introduction to actor-network
theory. Oxford: Oxford University Press.
Law, J. (2001). Economics as interference. In P. du Gay & M. Pryke (Eds.),
Cultural economy: Cultural analysis and commercial life. London: Sage.
Lowy, A., & Hood, P. (2004). The power of the 2 × 2 matrix: Using 2 × 2 thinking to solve business problems and make better decisions. San Francisco: Jossey-Bass.
Lynch, M. (1985). Discipline and the material form of images: An analysis of scientific visibility. Social Studies of Science, 15, 37–66.
Lynch, M. (1988). The externalized retina: Selection and mathematization
in the visual documentation of objects in the life sciences. Human
Studies, 11, 201–234.
MacKenzie, D. (2006). An engine, not a camera: How financial models shape markets. Cambridge, MA: MIT Press.
MacKenzie, D. (2009). Material markets: How economic agents are
constructed. Oxford: Oxford University Press.
Miller, P. (1998). The margins of accounting. The European Accounting
Review, 7, 605–621.
Miller, P. (2001). Governing by numbers: Why calculative practices
matter. Social Research, 68, 379–396.
Miller, P., & O'Leary, T. (2007). Mediating instruments and making markets: Capital budgeting, science and the economy. Accounting, Organizations and Society, 32, 701–734.
Orlikowski, W. J. (2007). Sociomaterial practices: Exploring technology at
work. Organization Studies, 28(9), 1435–1448.
Pinch, T., & Swedberg, R. (Eds.). (2008). Living in a material world.
Cambridge, Mass: MIT Press.
Pollock, N., & Williams, R. (2007). Technology choice and its performance:
Towards a sociology of software package procurement. Information
and Organization, 17, 131–161.
Pollock, N., & Williams, R. (2009). The sociology of a market analysis tool:
How industry analysts sort and organise markets. Information and
Organization, 19, 129–151.
Pollock, N., & Williams, R. (2010). The business of expectations: How
promissory organizations shape technology and innovation. Social
Studies of Science, 40, 525–548.
Pollock, N., & Williams, R. (2011). Who decides the shape of product
markets? The knowledge institutions that name and categorise new
technologies. Information and Organization, 21, 194–217.
Preda, A. (2008). Technology, agency, and ?nancial price data. In T. Pinch
& R. Swedberg (Eds.), Living in a material world. Cambridge, Mass: MIT
Press.
Qu, S., & Cooper, D. (2011). The role of inscriptions in producing a
balanced scorecard. Accounting, Organizations and Society, 36,
344–362.
Quattrone, P. (2009). Books to be practiced: Memory, the power of the visual and the success of accounting. Accounting, Organizations and Society, 34, 85–118.
Quattrone, P., Puyou, F., McLean, C., & Thrift, N. (2012). Imagining
organizations: An introduction. In F. Puyou, P. Quattrone, C. McLean,
& N. Thrift (Eds.), Imagining organizations: Performative imagery in
business and beyond. London: Routledge.
Robson, K. (1992). Accounting numbers as ‘inscriptions’: Action at a
distance and the development of accounting. Accounting,
Organizations and Society, 17(7), 685–708.
Sauder, M., & Espeland, W. (2006). Strength in numbers? The advantages
of multiple rankings. Indiana Law Journal, 81, 205–217.
Schultz, M., Mouritsen, J., & Grabielsen, G. (2001). Sticky reputation:
Analyzing a ranking system. Corporate Reputation Review, 22, 24–41.
Scott, S., & Orlikowski, W. (2012). Reconfiguring relations of accountability: Materialization of the social media in the travel sector. Accounting, Organizations and Society.
Shrum, W. M. (1996). Fringe and fortune: The role of critics in high and popular art. Princeton, NJ: Princeton University Press.
Soejarto, A., & Karamouzis, F. (2005). Magic Quadrants for North American
ERP service providers. Gartner Document, ID Number: G00127206.
Stark, D. (2011). What’s valuable? In P. Aspers & J. Beckert (Eds.), The
worth of goods: Valuation and pricing in the economy. Oxford: Oxford
University Press.
Strathern, M. (2000). The tyranny of transparency. British Educational
Research Journal, 26(3), 309–321.
Tufte, E. R. (2001). The visual display of quantitative information. Cheshire,
Conn.: Graphics Press.
Violino, B., & Levin, R. (1997). Analyzing the analysts. Information Week, 17 November. Accessed 29.03.06.
Vollmer, H., Mennicken, A., & Preda, A. (2009). Tracking the numbers: Across accounting and finance, organisations and markets. Accounting, Organizations and Society, 34, 619–634.
Wedlin, L. (2006). Ranking business schools: Forming fields, identities and boundaries in international management education. Chichester: Edward Elgar.
theoretical inquiry, however, this popular conceptualisa-
tion carries unquestioned assumptions about the way we
understand their constitutive role. In particular, the in?u-
ence of a ranking is seen to reside predominately in how
it encourages ‘mechanisms of reactivity’ amongst market
actors (Espeland & Sauder, 2007). What this suggests is
that rankings are intrinsically ‘social’, at the same time
raising the question as to whether there are further agen-
tial aspects that might extend this social mode of analysis.
Are there additional agencies (other than how people re-
spond to them) to be found in the makeup of rankings?
A useful prompt is found in tracing the idiom of the
term engine itself. From 17th Century English science, for
instance, we learn how instruments, artifacts and diagrams
– combined with the ‘ingenuity, craftiness and inventive-
ness’ of gentlemen scientists – could function as generative
engines in producing early scienti?c knowledge
(Carroll-Burke, 2001, p. 599). To capture the nature of this
0361-3682/$ - see front matter Ó 2012 Elsevier Ltd. All rights reserved.http://dx.doi.org/10.1016/j.aos.2012.06.004
?
Corresponding author. Tel.: +44 (0)1316511489; fax: +44
(0)1316506399.
E-mail addresses: [email protected] (N. Pollock), L.D-Adderio@
ed.ac.uk (L. D’Adderio).
1
Tel.: +44 (0)1316502454; fax: +44 (0)1316506399.
Accounting, Organizations and Society 37 (2012) 565–586
Contents lists available at SciVerse ScienceDirect
Accounting, Organizations and Society
j our nal homepage: www. el sevi er. com/ l ocat e/ aos
intervention, however, one also had to consider the tools
and devices’ hard, physical, material, engineering, and
‘arti?cial’ aspects (Carroll-Burke, 2001, p. 600), which were
key features of the artifacts involvement in everyday prac-
tices. Whilst the ?rst view presents the intervention of en-
gines as a social form of ‘manipulation’, the ‘‘products of
ingenious minds, clever contrivances and artful designs’’
(Carroll-Burke, 2001, p. 599), the second places them
squarely in the domain of practice, matter, method and
constraint.
We see value in bringing both aspects together to cap-
ture how the abstract, generative capacity of a ranking
can result from – and be shaped by – the interplay of a het-
erogeneous range of sociomaterial constraints and prac-
tices. To this purpose, and building on recent discussions
of market devices (Callon, Millo, & Muniesa, 2007), we de-
velop the idea of a ranking device. This focus on objects is
warranted because at a basic level a ranking cannot exist
without some kind of device (Callon, Millo, & Muniesa,
2007). The idea of the ‘100 top restaurants’, ‘10 leading
law schools’, or ‘20 best cities to work and live’, for in-
stance, would be impossible without the device of ‘the list’
(Goody, 1977). Analytically the notion of device is useful
because it captures how a ranking is an ‘arti?ce’, an ‘arti-
fact’, the product of a practice (OED). In can also be used
to describe an object that contains certain constraints
and affordances, while at the same time capturing the as-
pect of ‘clever contrivance’ and ‘artful design’ (rankings
are clearly devised in the sense of something manufac-
tured or contrived) (Goody, 1977).
In this paper, we want to show that devices do more
than simply facilitate the production and communication
of a ranking. They actively participate in their shaping.
The speci?c argument developed is that it is these socio-
material aspects, together with how people respond to
them, that can account for the in?uence of a ranking. We
would go as far as to argue that, in certain case, the consti-
tutive potential of a ranking can reside in its affordances
and constraints as much as any other complementary as-
pect (like the ‘calculation’). Our study draws on observa-
tions and interviews conducted over a period of several
years on the construction and use of one of the most in?u-
ential rankings from the information technology (IT) arena
– a two-by-two matrix called the ‘Magic Quadrant’.
To show this in?uence we draw on and integrate a
number of schools of thought from Accounting research
as well as Science and Technology Studies (STS). The ?rst
is Miller’s ‘governance of economic life’ framework which
studies the interactions between ‘programmes’ and ‘tech-
nologies’ as domains are made ‘calculable’ (Miller, 1998,
2001; Miller & O’Leary, 2007). The second is the Account-
ing literature’s focus on ‘graphic inscriptions’ (Bloom?eld
& Vurdubakis, 1997; Chua, 1995; Dambrin & Robson,
2011; Ezzamel, 2004; Qu & Cooper, 2011; Robson, 1992).
Whilst scholars have linked the issue of how a ?guration
might facilitate and mediate a ?nancial decision (Miller &
O’Leary, 2007), they have not yet considered how calcula-
tions might be shaped by and result from the speci?c
sociomaterial features of a graph. Finally, to demonstrate
how a visualisation might offer affordances and constraints
to those producing a ranking we draw on a range of studies
from Science and Technology Studies on how material arti-
facts and economic markets mutually constitute one an-
other (Callon et al., 2007; MacKenzie, 2009; Vollmer,
Mennicken, & Preda, 2009) and the use of graphic inscrip-
tions in Science (Latour, 1986; Lynch, 1985, 1988) and
other domains (Espeland & Stevens, 1998; Quattrone,
2009).
Rankings are engines within the economy
Today there appear to be formal ranking measures to
rate the quality and value of most things: art (Becker,
1982), theatre (Shrum, 1996), restaurants (Blank, 2007),
?lms, music (Karpik, 2010), the performance of various
public services such as hospitals, schools, Business Schools
(Wedlin, 2006), and universities (Free, Salterio, & Shearer,
2009; Strathern, 2000), the ef?ciency of the latest con-
sumer products (Aldridge, 1994), the reputation and com-
petence of companies (Pollock & Williams, 2009; Schultz
et al., 2001). There are those listing the ‘best places’ to live
and work (Kornberger & Carter, 2010), the ‘top holiday des-
tinations’ (Jeacle & Carter, 2011; Scott & Orlikowski, 2012),
and so on.
Despite their simple and often contested nature, there is
growing evidence to suggest that rankings play an en-
hanced role in decision-making (Aldridge, 1994; Blank,
2007; Karpik, 2010; Wedlin, 2006). Speaking about one
of the most well known rankings, the Red Michelin restau-
rant guide, for instance, Karpik (2010, p. 77) writes: ‘‘. . . this
veritable paper engine [has] the rare ability to create the
conditions of large-scale comparisons of incommensurable
entities while thoroughly respecting their particularisms’’.
In their discussion of the global league tables of cities
Kornberger and Carter (2010, p. 333) similarly suggest that
league tables are ‘engines and not simply cameras’ that
create comparisons between hitherto unrelated places.
The resulting competition between global cities, they ar-
gue, is not a natural fact but it has been brought into being
through the circulation of rankings. League tables now, in
their words, ‘‘form the battleground on which cities com-
pete with each other’’ (Kornberger and Carter (2010, p.
236)); for example, they have actively encouraged city
administrations to change behaviours and to develop
strategies that set them apart from other metropolis
(Kornberger and Carter (2010))
Covering a plethora of devices as used in a variety of
industries and contexts the above works address how
rankings, as ordering systems, intervene in shaping the
reality they attempt to monitor. One nuanced discussion
of this kind – setting out in detail the means by which
rankings are generative – is Espeland and Sauder’s (2007)
report on university Law Schools. They suggest that:
‘‘. . . rankings are reactive because they change how people
make sense of situations; rankings offer a generalised ac-
count for interpreting behaviour and justifying decisions
within law schools, and help organise the ‘‘stock of
knowledge’’ that participants routinely use’’ (Espeland
and Sauder (2007, p. 11)).
Espeland and Sauder (2007) suggest that rankings do
more than simply grade or describe: they also offer new
566 N. Pollock, L. D’Adderio / Accounting, Organizations and Society 37 (2012) 565–586
interpretations of a situation. Actors then adapt their
behaviour to conform with this altered understanding (in
a formulation that has much in common with Hacking’s
(1983) notion of representing and intervening). To evidence
how a ranking can intervene, they cite the words of a
respondent. A university manager notes how ‘‘[r]ankings
are always in the back of everybody’s head. With every is-
sue that comes up, we have to ask, ‘How is this impacting
our ranking?’’’ (Espeland and Sauder, 2007, p. 11). Their
thesis is that ultimately rankings can become self-ful?lling:
One type of self-ful?lling prophecy created by rankings
involves the precise distinctions rankings create.
Although the raw scores used to construct [Law School]
rankings are tightly bunched, listing schools by rank
magni?es these statistically insigni?cant differences in
ways that produce real consequences for schools, since
their position affects the perceptions and actions of out-
side audiences (Espeland and Sauder, 2007, p. 12, our
emphasis).
This leads them to suggest that ‘‘[r]ankings are a power-
ful engine for producing and reproducing hierarchy since
they encourage the meticulous tracking of small differences
among schools, which can become larger differences over
time’’ (Espeland and Sauder, 2007, p. 20). Whilst changes
in interpretations and perceptions are obviously important,
however, this viewseems to suggest that a ranking is an en-
tirely ‘social’ phenomenon. Likewise to propose that a rank-
ing primarily resides in the ‘heads’ of actors would tend to
overlook additional inherently material agential features.
Espeland and Sauder (2007) hint at (but do not develop)
the importance of material format in facilitating particular
interpretations. To paraphrase their words, the list magni-
?es small differences that produce real consequences.
Kornberger and Carter (2010, p. 330) write that the power
of a ranking ‘‘rests in its capacity to shape people’s cogni-
tive maps and takes on material forms through translations
into charts, models, graphs, documents, brainstorming
techniques and other elements. . .’’. Building on Espeland
and Sauder (2007) it could be inferred that a list does more
than simply magnify a particular aspect of the ranking.
Kornberger and Carter (2010) explicitly ?ag the role of arti-
facts but foreground cognitive dimension, such that whilst
devices ?gure in their analysis they are not necessarily
seen as party to interactions.
Hacking (1992) provides a useful guide in his later for-
mulation of the representation and intervention couplet
where he acknowledges the centrality of ‘instruments’.
Representations should be studied alongside (not apart
from) ‘instruments’, he argues, because it is these that pro-
duce particular kinds of intervention. In Hacking’s view, it
is representations and instruments that co-produce one
another. Miller and O’Leary (2007, p. 707) apply these
ideas through addressing the interactions between ‘pro-
grammes’ and ‘technologies’. Programmes refer to ‘‘the
imagining and conceptualising of an arena and its constit-
uents, such that it might be made amenable to knowledge
and calculation’’ (Miller & O’Leary, 2007, p. 702). Technol-
ogies denote the ‘‘possibility of intervening through a
range of devices, instruments, calculations and inscrip-
tions’’ (Miller & O’Leary, 2007, p. 702). The key aspect of
their work is that processes of calculation can only be ex-
tended through the interaction between programmes and
technologies. As Miller and O’Leary (2007) describe it is
not simply a case of ‘implementing’ a set of ideas within
a device. Rather, devices come to mediate and shape con-
ceptualisations and vice versa.
We enthusiastically adopt this terminology both for the
ways it focuses attention on how there is a ‘calculation’ in-
volved in the production of a ranking (see Kornberger and
Carter (2010) and Jeacle and Carter (2011) for this reading)
but also because it ?ags the fact this calculation results
from a process where ‘social’ and ‘technical’ elements are
brought together. Scholars working within this framework,
however, have only begun to specify the process by which
we might study and theorise interactions between mate-
rial objects and wider calculative conceptions. In this re-
spect, we are given rather few clues as to the actual
mechanisms of co-production or the ways in which tech-
nologies, devices or graphic inscriptions for that matter
can mediate and shape ideas. We thus ?nd a need to sup-
plement our analytical toolbox with concepts more at-
tuned to considering the affordances and constraints of
(particularly graphic) devices.
Material agency: affordance and constraint
Scholars have ?agged the role of ‘mediating instru-
ments’, ‘market devices’ and ‘intellectual equipment’ in
facilitating processes of calculation within markets (Callon
et al., 2007; MacKenzie, 2009; Miller & O’Leary, 2007). In
contrast to those approaches foregrounding single actors
in market decisions, it has been argued that actions and
calculations are never performed by individuals alone.
Rather, they are always propped up and aided by various
kinds of material artifact. In this view, artifacts are seen
to have ‘agency’, as they produce speci?c kinds of effects.
In terms of who or what makes someone – or something
– an agent, Latour argues that: ‘‘anything that [can] modify
a state of affairs by making a difference is an actor’’ (2005,
p. 71, emphasis in original). Thus, Preda (2008) discussed
how the ‘price ticker’ in the early years of the stock market
was an agent in leading to different forms of decision mak-
ing in the trading of stocks. Miller and O’Leary (2007), in
their account of the history of integrated circuits, treat fu-
ture based graphs or technology roadmaps in a similar way.
Instruments were in their case central in channelling dis-
cussions concerning the funding and development of inte-
grated circuits across different scienti?c and industrial
domains.
Both examples suggest that material devices play key
roles in mediating or constituting behaviour (Akrich &
Latour, 1992). Miller and O’Leary’s concern was with how
roadmaps worked to mediate between the interests and
strategies of multiple organisations involved in the devel-
opment of the new market of post-optical lithography
(Akrich & Latour, 1992, p. 720). In Preda’s case, the price
ticker produced a constant ?ow of prices that could be vis-
ualised in new ways. The ticker constituted the stockbrok-
ers’ practices in such a way that they found themselves
having to adapt to the continuous ?ow of price data such
N. Pollock, L. D’Adderio / Accounting, Organizations and Society 37 (2012) 565–586 567
that they switched from being ‘observers of the market’ to
‘observers of the tape’ (Akrich & Latour, 1992, p. 232).
Another way of describing this agency is to suggest that
artifacts have affordances and constraints. Although the ori-
ginal idea of affordance stems from the work of Psychology
(Gibson, 1979), it has been subject to recent discussions
within STS and the Sociology of Technology (David & Pinch,
2008; Hutchby, 2001). Gibson de?ned affordance as the
‘‘perceived and actual properties of the thing, primarily
those fundamental properties that determine just how
the thing could possibly be used’’ (David & Pinch, 2008;
Hutchby, 2001, p. 9). Hutchby later softened this as those
material aspects which frame but do not necessarily deter-
mine the actions of people (2001). In this latter relational
view affordances exist in tandem only with how people
take them up and the particular conditions of the local con-
text. Writers like David and Pinch (2008) have recently
built on this in their discussion of online book reviews
where they describe how there can be ‘material’ and ‘so-
cial’ affordances shaping reviews. Physical affordances
mean that a reviewer can write as much as she wants (lim-
ited only by her patience and the capacity of the com-
puter’s hard disc) but social practices (such as publishing
conventions) dictate that reviews are normally limited to
a handful of pages. Scholars such as Orlikowski (2007)
have noted that since these two things are inseparable it
is necessary to theorise the ‘social’ and ‘material’ as ele-
ments that mutually constitute one another: ‘‘the social
and the material are considered to be inextricably re-
lated—there is no social that is not also material, and no
material that is not also social’’ (Orlikowski, 2007, p.
1437). This re?ects an intellectual project in the social
analysis of technology never to simply ‘black box’ objects
but to study their profoundly social and material elements.
Since there is no clear boundary between what is ’social’
and what is ’material’ scholars refer to these more pre-
cisely as ’sociomaterial’. In the paper, whilst we will adopt
this particular terminology, we will also at times refer to
the social and technical separately as there are analytical
bene?ts from treating these empirically entwined features
as distinct.
Ranking devices
We are now in a position to set out more clearly what
we mean by a ‘ranking device’.
2
Speci?cally, we propose
that these are the ‘‘format and furniture’’ implicated in the
materiality of a ranking. The analytical value of the term is
that it foregrounds how a ranking (the ‘calculation’) can be
shaped through its incorporation in particular sociomaterial
objects. Those constructing a ranking are required to take
into account the device’s various affordances and constraints
when they plot a dot on a graph. To lay the foundations for
our empirical study we discuss some of the furniture com-
monly found within rankings. This is followed by a discus-
sion of some of the sociomaterial affordances and
constraints surrounding the production of graphs.
Format and furniture
Rankings are shot through with various kinds of devices
in and through which they are embedded and become
material. There are those that come in the form of lists or
tables and then there are those that are more graphical
in nature. One ?nds many examples of ranked lists (our
informal research on Google, for instance, suggests at least
several hundreds). Stark (2011) argues that this format be-
came popular in the 1950s and cites the ‘jukebox’ as a pos-
sible source. Since jukeboxes held 40 single records this
apparently led to the development of ‘top 40’ record pro-
grammes on radio stations (see also Anand & Peterson,
2000). Today the list has become the format of choice for
many ranking organisations. One of its affordances appears
to be that it is relatively unconstrained by the number of
subjects evaluated. The ‘top 10 MBA programmes’ can
(and often are) extended to include the ‘top 50’, ‘top 100’
degrees, for instance. Kwon and Easton (2010), in their dis-
cussion of the Financial Times’ list of MBA programmes,
suggest that the longer the list the more comprehensive
or ‘global’ it may appear in certain peoples’ eyes: ‘‘. . .indi-
vidual consumers can ?nd comfort in the perception that
they can choose the ‘best’ among hundreds or thousands
of alternatives, rather than the ‘best’ among several ‘good
enough’ alternatives arising through the search process.
The FT MBA 100 allows buyers to maximise their choice
of a highly ranked school, given personal constraints such
as budget, geographical preferences and entry require-
ments’’ (Kwon & Easton, 2010, p. 133). We ?ag this feature
because it is not a capacity found in all rankings (see
empirical discussion below).
Rankings are also supported by speci?c furniture. In
their discussion of consultancy reports, for instance, Qu
and Cooper (2011, p. 358) highlight the role of the furni-
ture of ‘bullet points’ and ‘checklists’ as providing a ‘‘topo-
graphical image of how various employee groups within an
organization are relevant to achieving strategic objectives’’.
In the case of rankings there are stars, lines, waves, tics,
dots and so on. Kwon and Easton (2010, p. 132) argue that
the use of such furniture constitutes a particularly novel
feature or form of contribution. Whilst rankers have not
been particularly innovative with regard to methodology,
or how assessments are put together, they have been at
the forefront in terms of developments in ‘format and pre-
sentation’. Kwon and Easton (2010) describe howthe Mich-
elin Red Guide, for instance, was amongst the ?rst of the
major rankers to supplement complicated forms of quanti-
tative data with ‘qualitative descriptors’. It rated restaurant
quality by producing the ‘‘now famous three-star scale to
denote relative excellence’’ (Kwon & Easton, 2010, p.
132). These descriptors are now very much part of the
machinery for ranking restaurants around the world (see
Karpik, 2010).
However, we still know very little about why such fur-
niture has become popular or what, if anything, it has
meant for these particular settings. We would argue that
2
Whilst our term builds on the idea of ‘market device’ - de?ned as
‘‘. . .the material and discursive assemblages that intervene in the con-
struction of markets’’ (Callon et al. p. 2) – we attempt to operationalise this
idea speci?cally for the way visual devices mutually constitute calculative
practices. We do so by drawing on and making use of insights provided by
more established ways of thinking (the ‘programmes and technologies’
framework, ‘sociomateriality’, ‘affordance’ and ‘graphic inscription’, and so
on).
568 N. Pollock, L. D’Adderio / Accounting, Organizations and Society 37 (2012) 565–586
they are important because, they render the calculation
visible through some kind of large-scale ranking apparatus
of which these descriptors form a part. They are thus an as-
pect of the calculative practices for turning ‘qualities into
quantities’ (Miller, 2001) (see Kornberger and Carter
(2010) and Jeacle and Carter (2011) for a discussion of cal-
culative practices involved in ranking). While therefore
their importance has been acknowledged, their effects
have not been demonstrated. This we suggest becomes
more obvious when one considers the production of graph-
ical rankings where rankers are forced to entertain and
take account of quite speci?c affordances and constraints.
To understand what these are we turn to a discussion of
the construction of graphs.
Graphic visualisation: from looking at graphs to looking in
graphs
Latour famously argued that ‘he who visualises badly
loses the encounter’ (1986, p. 13). The ‘scienti?c graph’
was originally said to be one factor that gave science its
in?uence over other forms of knowledge production. For
Latour, the graph was an ‘inscription device’; the key idea
behind this concept was that of ‘mobility’ (the product of
a laboratory could circulate widely without taking with it
the apparatus that led to its production). Accounting re-
search has focused on the inscriptions that construct per-
formance measures more generally (see Dambrin &
Robson, 2011; Robson, 1992), with particular attention
being given to ‘graphs’. Qu and Cooper (2011, p. 358), for
instance, highlight how ‘‘graphical inscriptions are gener-
ally persuasive in communicating information. They solid-
ify ambiguous concepts into concrete forms . . .’’. Whilst
scholars have mobilised the notion of inscription to cap-
ture how material substances are translated into ?gura-
tions that can travel, however, it would be fair to say that
they have looked at the graph but not necessarily in the
graph (see Qu and Cooper’s (2011) call for research on
the production of inscriptions).
Some partial exceptions include Miller and O’Leary
(2007) and Quattrone (2009). In his discussion of the his-
tory of the book, for instance, Quattrone (2009, p. 109) sug-
gests that it is because graphs are ‘partial’ and ‘simpli?ed’
that they have an effect:
Graphical representations . . . are always so partial and
simpli?ed that they essentially contain very little; they
have little truth in them; for, if it ever existed, it has
been lost in the process of diagrammatic representation
which has sacri?ced details and context for the sake of
clarity. This is the only way in which they can effec-
tively communicate and engage the user in a performa-
tive exercise.
From sources further a?eld, Espeland and Stevens
(1998, p. 423), in their review of the Communication Stud-
ies literature, argue that graphs are successful because
they are produced according to ‘aesthetic ideals’ (Espeland
& Stevens, 1998, p. 423, see also Bloom?eld and Vurduba-
kis, 1997). This includes how they should have clarity and
be parsimonious: ‘‘. . . people who make pictures with
numbers typically prize representations whose primary
information is easily legible (clarity), and which contains
only those elements necessary and suf?cient for the com-
munication of this primary information (parsimony)’’
(Espeland & Stevens, 1998, p. 423; see also Tufte (2001)
on whom Espeland and Stevens draw). This is because
those who construct graphs as part of their professional
activities want them to be ‘‘not only errorless but also
compelling, elegant, and even beautiful’’ (Espeland & Ste-
vens, 1998, p. 422).
The contributions above suggest that graphs place ‘lim-
its’ on designers. We supplement this with work from STS
where Lynch (1988, p. 202) argues that graphs (in science)
do more than constrain; they also add features and affor-
dances not found in original understandings.
The [graph] does not necessarily simplify the diverse
representations, labels, indexes, etc., that it aggregates.
It adds theoretical information which cannot be found
in any single micrographic representation, and provides
a document of phenomena which cannot be repre-
sented by photographic means (emphasis in original).
Even the simplest graphs, in Lynch’s view, add rather
than reduce information. They contribute
. . .visual features which clarify, complete, extend, and
identify conformations latent in the incomplete state
of the original specimen’. Instead of reducing what is
visibly available in the original, a sequence of reproduc-
tions progressively modi?es the object’s visibility in the
direction of generic pedagogy and abstract theorizing
(Lynch, 1988, p. 229).
An example of those things added can be found in an
earlier paper where Lynch discusses a common but little
discussed graphic resource is the ‘device of the dot’
(1985, p. 43). Analysing a ?eld manual describing the anat-
omy of a lizard he makes the following point:
Note that each observation of a marked individual is
rendered equivalent to all others through the use of
the device of the ‘dot’. The only material difference
between one dot and another on the chart is its locale.
Locales are reckoned in terms of the grid of stakes,
and all other circumstantial features of observation
‘drop out’.
Dots are ‘additive’ rather than ‘reductive’ (we get this
terminology from Ingold’s discussion of another type of
notation, ‘the line’ (2007)). Lynch (1985) ?ags how graphs
provide for commonplace resources of graphic representa-
tion. Understanding the interplay between graphic re-
sources and the thing they purport to describe, therefore,
is important. Lynch (1988) suggests it is this way one can
witness how the properties of graphs go onto merge with
and come to incorporate the thing represented. He writes:
‘‘. . . one theme which applies to many, if not all, graphs is
that of how the commonplace resources of graphic repre-
sentation come to embody the substantive features of the
specimen or relationship under analysis (Lynch, 1988, p.
226). In turn: ‘‘. . . efforts are made to shape specimen
materials so that their visible characteristics become con-
gruent with graphic lines, spaces, and dimensions (Lynch,
1988, p. 227).
N. Pollock, L. D’Adderio / Accounting, Organizations and Society 37 (2012) 565–586 569
To summarise, we ?nd it necessary to bring together a
number of complementary disciplinary schools to discuss
this complicated phenomena. Specialisation in this respect
has traditionally posed a major barrier to analysis and
understanding (Hopwood, 2007). Linkages across different
scholarly ?elds provide important new insights into how
we understand, represent and theorise the tools and prac-
tices of performance measurement. In this respect, the ‘pro-
grammes and technologies’ framework (Miller & O’Leary,
2007) tells us how areas are conceptualised in certain ways
so that they can become ‘calculable’, often through inter-
ventions made possible through devices. The literature
from STS directs attention to how devices do not simply
support but can act within calculations. The idea of a ‘rank-
ing device’ drills down further still to show how a ranking
(and ‘calculation’) can be shaped by its incorporation in a
speci?c format and furniture, and, in turn, with how these
sociomaterial features can shape aspects of the market.
The kinds of markets we are interested in are those pro-
curement markets related to the supply of advanced tech-
nologies like information systems and other kinds of
software. We organise our empirical material around a dis-
cussion of three aspects of how speci?c furniture – ‘the dot’
– is moved around a graph. The ?rst section focuses on
how the ranking helps create a ‘competitive space’ in rela-
tion to the shaping of the visible market of players.
3
It dis-
cusses how new expertise, practices and routines are created
and emerge as vendors attempt to improve their placing in
the competitive space (what actors call ‘moving the dot
activities’). The second section investigates how the compet-
itive space is shaped not only by ‘people moving dots’ but
also by sociomaterial constraints. In particular, the affor-
dances and limitations found within the ranking device
(here the focus is on how ‘dots move people’). Speci?cally
these are material affordances (for instance how players in
a market can be brought together and compared in one
space) and social constraints (not all players can be included
on one graph). The ?nal section discusses how these con-
straints encourage rankers to make interventions in the
competitive space (how ‘dots move markets’).
Setting and method
The Magic Quadrant
The ranking discussed here is produced by the industry
analyst ?rm Gartner Inc. (hereafter Gartner). Founded in
1979 by Gideon Gartner, the ?rm operates (almost exclu-
sively) within the information technology domain.
4
Whilst
Gartner is just one of a number of such research organisa-
tions within this area, it is widely recognised as the largest
and most in?uential. Despite not having a monopoly over
the production of IT analysis, commentators suggest it has
something close (Hopkins, 2007).
5
Gartner’s strap line is that
it ‘‘wants to be involved in every IT decision’’ (interview,
Gartner Analyst A). The Magic Quadrant is by far the most
well-known of Gartner’s research tools. This attempts to
compare and rank software vendors according to a number
of prede?ned measures. It comes in the form of a box with
an X and Y-axis (labelled as ‘completeness of vision’ and
‘ability to execute’) dimensioning a two-by-two matrix, with
four segments into which one can see placed the names of
several vendors (see Fig. 1). Vendors are not randomly
placed. Each segment is individually labelled (niche player,
challenger, visionary and leader). The position of a vendor
in a particular segment signi?es something regarding its
current and future performance as well as its behaviour
within markets (Burton & Aston, 2004). Those placed further
to the right are seen to have more ‘complete visions’, whilst
those placed towards the top an elevated ability ‘to execute’
on that vision.
Gartner are proli?c in the production of Magic Quad-
rants: they author nearly 150 for different IT markets (Dro-
bik, 2010); this number changes all the time as Gartner
continually creates new Magic Quadrants to re?ect the
development of new types of technology markets and
occasionally ‘retire’ older ones to represent the fact certain
markets have matured. Authorship of Magic Quadrants is
not a one-off process. They are updated and released each
year. This means how vendors are placed within the matrix
will change over time. There may also be the introduction
or exit of players onto the Magic Quadrant.
In the IT domain there are a number of visual rankings
(examples include the ‘Forrester Wave’, the ‘Gartner Hype
Cycle’, the ‘Gartner Clock’, the ‘Ovum Decision Matrix’, to
name but a few). The Magic Quadrant is, by far, the most
referenced of these (Violino & Levin, 1997). One Gartner
Analyst we interviewed describes how: ‘‘[a] good Magic
Quadrant will get ?fteen hundred downloads every
month’’ whereas a ‘‘Hype Cycle will get around six or seven
hundred’’ (interview, Gartner Analyst B). These are down-
loads from the Gartner website (accessible only by fee-
paying clients). Magic Quadrants are also often posted on
the Internet (meaning they are normally available to a
much wider audience).
Decision makers apparently draw on these rankings to
help facilitate choices when procuring IT equipment and
software. It has become part of IT folklore that those
looking to buy solutions invite only those in the top right
quadrant to tender. This leads some to suggest that a
high-ranking guarantees a vendor more attention than its
rivals (Hind, 2004) or that the ranking has the power to
‘make or break’ a vendor (Violino, 1997). It is perhaps no surprise then that vendors seek to influence the shaping of the ranking. Some are even said to construct aspects of their business (marketing and product development strategies) in line with the ranking’s underlying assumptions (Hopkins, 2007).

Footnote 3: We define a ‘competitive space’ as the space of confrontation and struggle that is created between various economic players in a specific technological field, often through the use of various social and material strategies linked to a ranking.

Footnote 4: Gartner runs ‘executive programs’, has an established consultancy wing, organises regular themed conferences and symposiums on emerging technological topics, and produces research for the IT market. This latter activity forms the bulk of its enterprise, and it is where 80% of revenues are generated (Drobik, 2010). Gartner has over 4000 employees and offices in 80 countries around the world. It is reported to have over 60,000 clients from 10,000 different organisations (Drobik, 2010). For further information about Gartner’s activities, see Pollock and Williams (2010).

Footnote 5: This point about monopoly is important for what is described below. It is clear that rankers are stronger when there is only one dominant evaluator in an area. Kwon and Easton (2010, p. 124) note how an individual ranker ‘‘. . . can become powerful to the point where they are able to monopolize the information required for the efficient functioning of markets and thereby influence the behaviour of other market actors’’.
Research on the Magic Quadrant
We have been studying the Magic Quadrant for several years now. Our attention was alerted to its significance whilst carrying out an ethnographic study of IT procurement in a large municipal council at the turn of the century (Pollock & Williams, 2007) and then a couple of years later during a study of how users bring influence to bear on ERP vendors (Pollock & Williams, 2009). These initial dealings prompted us to plan and develop a research project that would enquire into the production of this ranking and the nature of the expertise surrounding it. The fact that our project was funded filled us with both excitement and (it must be said) a certain amount of dread! There is a perception that it is difficult to gain access to Gartner (a point said to be true of rankers more generally (Kwon & Easton, 2010), which perhaps explains the paucity of studies on the production of rankings). Nevertheless, we set out to conduct fieldwork in the hope that we would get lucky (and ‘fortune’ does seem to feature in a lot of research). In our initial attempts to gain access, we wrote to one particular analyst whom we had come across in previous fieldwork. He agreed straightaway to an interview, which meant we were able to visit Gartner’s European headquarters in London and begin what turned out to be a highly productive period of fieldwork.
Data collection
Since this particular analyst worked in the area of ‘Customer Relationship Management’ (CRM) technologies and was able to provide specific details on how the CRM Magic Quadrants were constructed, we devoted most of our time to following events and people in this area. We attended two symposiums organised by the Gartner CRM team. Here we could observe the formal presentations made by analysts but also approach them informally afterwards. These occasions turned out to be a particularly fertile ground for studying rankings. Since the meetings were run in a similar fashion to academic seminars it was easy to engage analysts in conversations or to simply hang around and listen whilst others quizzed them about their thinking behind the placing of vendors. Whilst we benefited from these spontaneous discussions, we were also able to conduct interviews with analysts. We carried out seven formal interviews with Gartner analysts: three of these were over the telephone, and four took place face to face.
We circulated an early research paper within Gartner, which not only served to validate our findings but also led to further episodes of fieldwork. One analyst, who had been forwarded the article by a colleague and with whom we had previously interacted, contacted us to tell us that he thought we had produced a ‘critical but fair’ analysis of Gartner’s work. He also reflected on how we had missed some of the more ‘internal’ aspects by which Magic Quadrants were constructed. Later, in a hastily arranged interview, he would tell us about these aspects. These form part of the material presented here.
Our study is further informed and contextualised by
interviews and discussions we conducted with other actors
involved in and around the ranking. This includes four cat-
egories of player: (1) we conducted two formal interviews
with some of the vendors subject to Gartner’s assessment;
(2) we held informal discussions, especially during our
attendance at Gartner conferences, with the IT managers
and practitioners who consume this kind of knowledge;
[Fig. 1. The Magic Quadrant: a two-by-two matrix plotting vendors as dots against ‘Completeness of Vision’ (horizontal axis) and ‘Ability to Execute’ (vertical axis), divided into four segments labelled Niche Players, Challengers, Visionaries and Leaders.]
(3) we interviewed analysts from five rival firms to
ascertain their view on Gartner’s ranking process and its
wider effects on the market; (4) we also interviewed and
observed the activities of a new breed of professional that
has emerged to offer advice to vendors on how to interact
with ranking organisations like Gartner.
Within the larger IT vendors there are now commonly
‘analyst relations’ (ARs) departments which contain ex-
perts whose role is to liaise with and represent the vendors
to industry analysts, consultants and other commentators.
These experts attempt to understand the details of how industry analyst firms work and what kinds of influence they can wield. They will be particularly keen to identify how the analyst organisation currently views their particular firm and what they might do to influence that opinion. Moreover, there are now hundreds of independent firms of ‘AR consultants’ operating in and around the IT marketplace. During our research, we were able to interview one of these consultants.
Overall we conducted fifteen formal interviews, carried out over 50 h of observation at conferences, listened to and participated in more than 20 ‘webinars’, and engaged in dozens of informal discussions. All the interviews were taped and fully or partially transcribed. During participation in Gartner conferences we took extensive notes. The collection of data at these venues was facilitated by the fact that Gartner video record all sessions and make these available to participants after the event (for a further fee!). This meant we could re-listen to presentations whilst back in our university offices.
Dot-ology [6]
How rankings shape the practices of those ranked (people
moving dots)
Rankings wield significant influence over a field of activity (Sauder & Espeland, 2006). However, those groups and organisations subject to these measures have not stood still. A market has been created that sells information on the details of how major rankings are constructed, together with strategies for the improvement of placings. Below we report on our interactions with a number of Analyst Relations (ARs) consultants who produce and trade in this kind of knowledge. We show how one effect of their work has been to establish the ranking as a space of confrontation and struggle between competing vendors (Kornberger & Carter, 2010).
Moving the dot activities: a social affair
In this first set of quotes a consultant has prepared a presentation to AR professionals. Having previously worked as a Gartner analyst, this expert now offers advice to others on how to interact with ranking bodies. His presentation is organised around various ‘moving the dot activities’. He is careful to tell the audience that if they are to be successful in shaping a ranking then there will be a significant amount of work to do:

Now, these activities that we’re going to talk about, although we’re going to call them out and highlight them as specific ‘Moving the Dot activities’, they should be part of your overall AR Strategic and Tactical Plan . . . I’m going to remind you, tremendous effort is required to influence the Magic Quadrant. The data that we’ve gathered indicates that our clients spend anywhere from 60 to 200 h on a single Magic Quadrant . . . understand that this is not an insignificant amount of work (presentation, AR consultant A).
In terms of the type of work necessary, firstly, this includes gathering insights about the makeup of the Magic Quadrant and, then secondly, feeding information back to the ranker about a vendor’s products, strategy and specifically ‘thought leadership’. Vendors are encouraged to do the latter through building personal relationships with individual rankers, often through engineering periods of ‘social time’ between them and particular analysts (conducting discussions ‘over a meal’ being one of the favoured methods) (presentation, AR consultant A). Thus, there appear to be rich and direct interactions between rankers and those they rank (albeit mediated by these new kinds of intermediaries).
Another AR consultant interviewed described how he
had engaged in a similar process when one of his own cli-
ents had received a negative placing:
We used enquires with specific analysts in the channel to understand who they should be approaching to help go to market with specific vertical analysts at Gartner to understand the best approach to solve the business problems in that particular industry. And we focused on specific analysts to help us make sure our message and our persistent focus directly for that individual, that individual market (interview, AR consultant B).
The consultant goes on to describe how the key reason for these ‘briefings’, ‘enquiries’, ‘touches’, or ‘deep dives’ was to bridge the ‘gap’ in knowledge between the ranker and the vendor. To evidence this he gives an example of a successful set of interactions:
[O]ne of our clients was getting involved in a Magic Quadrant and . . . we tried to understand what the analyst thought about our company, and we realised that there were several areas where there was a gap. So we made sure we filled those gaps . . . we did enquires to understand whether what we believed the message should have got across, whether the analyst got that across, and if it wasn’t we tried to fill that gap. So when the Magic Quadrant finally came out we positioned, we knew the analyst had sufficient information, we knew where we had weak points and we addressed those, so it wasn’t a shock. In fact, we were positioned in the top right hand corner. It was fantastic! (interview, AR consultant B).
Footnote 6: What could be more banal than a ‘dot’? However, if we want to understand the constitutive nature of a visual ranking then we have no choice but to focus attention on this particular graphic furniture. Dots form the basis of every conversation and consideration with regard to the Magic Quadrant. Everything that happens typically occurs around the dot. Dot-ology, which is a development of an actors’ category, attempts to capture how this mundane furniture can offer new possibilities, place limitations on actors, and encourage processes of co-production between graphs and settings.
Both consultants describe how the rationale for these briefings and meetings should be for the vendor to understand the ‘evaluative criteria’ the ranking organisation applies when assessing vendors/products. These are the specifics of how individual rankers conceive of the nature and characteristics of the various technologies covered by their particular Magic Quadrant:

I need to understand the criteria and current opinion and the publishing schedule, and I need to see what I can do to influence that criteria and that opinion. Now we’re going to use the analysts by doing inquiry to find out this information, like what is changing in the criteria . . . consulting with them, perhaps even use some of their information and criteria to influence the way in which my product roadmap is going to go (presentation, AR consultant A).
The suggestion given is that once a vendor understands the ranker’s evaluative criteria, they should then use this information to influence their own product development strategies. In other words, they should develop products and strategies in a way that more closely resembles the ranker’s description of the technology/market (this is reported to be a common strategy amongst many IT vendors (Hopkins, 2007)). If it is not possible (or desirable) to realign product development around the ranking then another solution is to attempt to modify the criteria of the ranking:

. . . we might even give consideration to trying to change the character of the Magic Quadrant [through] influencing the definition of exactly what this Magic Quadrant is. That’s part of changing the criteria. If I can sort of say ‘Look, this is not the same Magic Quadrant as it used to be, now it has a new set of objectives and a new set of criteria because the market has changed’, that has an interesting possibility of radically changing the position of all the dots (presentation, AR consultant A).
What is being recommended is that vendors should attempt to move the ranker’s conception of the technology assessed. In so doing, there will be obvious advantages for the vendor that is able to help set the criteria by which products in a particular market are judged. The AR consultant then closes this particular segment by giving some practical examples of what kinds of benefits might be gained from (re)setting criteria.
Bringing vendors into the same competitive space
The issue of competition – and shaping of the competitive landscape – is a key theme surrounding the Magic Quadrant. The AR consultant suggests that if a vendor has a product that is significantly different from those of competitors then it may be possible to suggest to Gartner that it needs to create a new Magic Quadrant. This they can do through feeding analysts their thoughts on how particular technologies and technology markets are developing. Alternatively, through similar kinds of interactions and briefings, there may also be the possibility of ‘killing’ a Magic Quadrant where a vendor is not doing so well:

Alternatively, there’s the chance of creating a completely new Magic Quadrant. Gartner does retire old ones and create new ones. Working with an analyst that doesn’t have a Magic Quadrant, you might be able to create a new one. Working with the analyst that has two Magic Quadrants, you might be able to alter the characteristics. Working with an analyst that has lots of Magic Quadrants, you might be able to kill a Magic Quadrant (presentation, AR consultant A).
The suggestion is that a vendor may be able to create a
Magic Quadrant for an area where it is the ‘leader’. It may
even be able to help retire a Magic Quadrant where its
competitors are doing particularly well by comparison.
The consultant suggests that whilst a firm may not always
be able to move its dot up it should nonetheless give con-
sideration as to how it might be able to move its compet-
itor’s dot down:
An alternate objective is to move your competitor dot
down, to the left . . . So that might be an interesting
approach . . . if I had the ability to push my competitor
down then by inference I’ve pushed myself up. I might
look at an objective as increasing the distance between
you and the competitors, or preventing a competitor
from leapfrogging over you (presentation, AR consul-
tant A).
What is being described here is how it is the ranking it-
self that mediates and constitutes competition. Even
though a vendor may not necessarily have thought of itself
as directly competing with specific others, through place-
ment on the Magic Quadrant, the competitive space has
been mapped out. Vendors are seen (and increasingly trea-
ted) as direct rivals (Kornberger & Carter, 2010). In the con-
sultant’s view, the Magic Quadrant clearly indicates a
vendor’s standing in relation to those immediately sur-
rounding it. And whilst vendors could not previously rank
their performance against others, they can now measure
the dots on a graph (and the use of a ruler by executives
to capture even slight movements appears to be common
– see Pollock & Williams, 2009). Interestingly, whilst ven-
dors have been brought together in the same competitive
space, the consultant is advocating that a vendor should
not simply accept but potentially attempt to reconfigure
this space. Vendors are given advice on how to shape the
boundaries surrounding the competitive space; they are
encouraged to develop tactics and strategies to push them-
selves up and to the right, which, by default, will push their
competitors down and to the left.
To summarise, we see how dots have come to mediate a
vendor’s interaction not only with the ranking organisation
but also with other vendors. Some have gone as far as to
develop strategies and plan for modes of interaction with
the rankers to help move places and shape spaces. Thus
at a basic level dot-ology captures the practices and rou-
tines that develop as actors focus attention around the de-
tails of a ranking in order to influence, firstly, their own
position in relation to competitors and, secondly, the
boundaries of the competitive space. However, we want
the notion to capture more than these ‘social’ strategies
at play. It is not simply about how people contrive to move
dots but how the competitive space is being (re)shaped in
other ways too. In particular, we want to introduce the idea
of sociomaterial agency, by which we mean that the field is influenced by the various affordances and constraints contained within the ranking. It is not simply people moving dots but also ‘dots moving people’. To demonstrate this, we begin by discussing how dots are placed on the matrix in the first instance.
Individual rankers and the ranking organisation (dots moving
people)
The production of the ranking is not static
The calculation of the Magic Quadrant has generated
much discussion within IT practitioner circles. During
fieldwork, we had the opportunity to interview a number
of Gartner employees about how Magic Quadrants were
developed: ‘‘The accusation we were always given’’, re-
sponded one to our question, ‘‘was that we threw darts
at the chart’’ (interview, Gartner analyst A). Here the ana-
lyst is responding to a widely held belief that the calcula-
tion of places lacks any form of process or systemization
(see for instance Violino, 1997). One issue that apparently
vexed practitioners was the thought that placings were
plotted by hand. Presumably this was problematic because
it lent the ranking a discretionary quality (Violino, 1997).
Another was the fact that Gartner described the Magic Quadrant as resulting from predominately ‘qualitative research’ (Soejarto & Karamouzis, 2005). One Gartner report describes how: ‘‘During the research process, we may ask for new information and briefings from vendors. We often gather information from vendor-provided references, from industry contacts, from unnamed clients, from public sources . . . and from other Gartner analysts’’ (Burton, 2004, p. 4). It was the idea that rankings could be influenced by ‘unnamed clients’ that caused much discussion (Violino, 1997). Gartner would informally solicit the opinions of customers of those vendors being assessed. But this was seen as ‘flawed’ since it gave a paramount role to analysts who could choose which customers to listen to (and this raised the issue of ‘bias’ and ‘partiality’; for more details see Pollock & Williams, 2009).
In our interviews with Gartner analysts, however, they went to great efforts to dispel the idea that rankings were judgmental or approximate. They pointed to how the production of rankings, whilst it did rely on a range of sources including informal discussions with customers, was also circumscribed by standardised measures and technology: ‘‘The actual dot scoring, there is a standardised spreadsheet we have to use [and] standardised scoring mechanism’’ (interview, Gartner analyst A). Dots are plotted within a ‘spreadsheet’ and populated with numbers from a ‘standardised scoring mechanism’. Scorings derive from a number of ‘evaluation criteria’ that have been divided along the two axes of the Magic Quadrant. These break down to reveal a number of further standard criteria (see Table 1).
Set criteria are then given a weighting (‘high’, ‘standard’, ‘low’, or ‘no rating’). If ‘no rating’ is applied this means that this particular factor will not be counted in the calculation. However, whilst individual rankers had the flexibility to choose whether to apply a criterion or not, it was reported that the bulk of analysts would use most of them:

So for example, of the standard, I think it is eight criteria on the two dimensions, eight criteria on each [sic], you could theoretically get rid of four or five of them, and just weight it on three – so you could weight something zero if you want to – but most analysts are using most, if not all of those criteria, and weighting them to different degrees, on every single Magic Quadrant (interview, Gartner analyst A).
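To illustrate the kind of ‘standardised scoring mechanism’ the analyst describes, the sketch below computes a single axis position as a weighted average of criterion scores, with ‘no rating’ criteria dropping out of the calculation. The 1–5 scale, the numeric weight values and the example figures are our own assumptions; Gartner’s actual spreadsheet and weightings are not public.

```python
# Illustrative only: a weighted scoring of evaluation criteria along one axis,
# assuming criterion scores on a 1-5 scale and weightings mapped to simple
# multipliers. Gartner's real spreadsheet, scales and weights are not public.

WEIGHT_VALUES = {"high": 3.0, "standard": 2.0, "low": 1.0, "no rating": 0.0}

def axis_score(scores: dict, weights: dict) -> float:
    """Weighted average of criterion scores; 'no rating' criteria drop out."""
    total = sum(scores[c] * WEIGHT_VALUES[weights[c]] for c in scores)
    weight_sum = sum(WEIGHT_VALUES[weights[c]] for c in scores)
    return total / weight_sum if weight_sum else 0.0

# Example for the 'ability to execute' axis (criteria taken from Table 1;
# the scores and weight choices are hypothetical)
scores = {"product or service": 4, "overall viability": 3,
          "sales execution, pricing": 5, "market responsiveness": 4,
          "marketing execution": 2, "customer experience": 4, "operations": 3}
weights = {"product or service": "high", "overall viability": "standard",
           "sales execution, pricing": "standard", "market responsiveness": "low",
           "marketing execution": "no rating", "customer experience": "high",
           "operations": "low"}
print(round(axis_score(scores, weights), 2))  # -> 3.92
```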
The primary reason for these changes in the calculation of places was the increasing pressure exerted by AR consultants and others who were probing ranking bodies – through ‘briefings’, ‘enquiries’, ‘touches’, etc. – to understand the detailed practice of ranking construction. Another reason was the fear of ‘litigation’ [7]. As a result the production of the Magic Quadrants is now more regulated so as to create an ‘audit trail’ (see Free et al. (2009) for a discussion of the auditing of rankings):
. . . individual analysts have to follow the same proce-
dure, and we have to document that, and you have to
have an audit trail of how it was created, and usually
you have to have scoring sheets to demonstrate how
you got to that point but on the actual spreadsheet that
creates the quadrant there is a scoring, a whole scoring
system which is standardised across the whole com-
pany (interview, Gartner analyst A).
Gartner had even gone as far as setting up a ‘Methodology Team’ to ensure that the standards for plotting the graph were maintained across the entire organisation. A former Director of the Methodology Team describes how this did bring a certain amount of systemisation to the work of individual analysts: ‘‘. . . there is some leeway in the methodology but [the Methodology] team is responsible for making sure that their methodology is sound and that it is followed, and that it is updated as technology changes and as we see things unfold in the marketplace’’ (interview, Gartner analyst C).
An analyst notes that this is a more regulated and standardised process than it was just a couple of years ago. Apparently, individuals had more freedom in the past to plot graphs in different ways.

Table 1
Evaluation criteria for the Magic Quadrant.
Completeness of vision: Market understanding; Marketing strategy; Sales strategy; Product strategy; Business model; Industry strategy; Innovation; Geographic strategy.
Ability to execute: Product or service; Overall viability; Sales execution, pricing; Market responsiveness; Marketing execution; Customer experience; Operations.

Footnote 7: Gartner has been the subject of a number of high-profile litigation cases. The most recent was the 2009–2010 case presented by ZL Technologies Inc., who argued that because of a low ranking received on a Magic Quadrant they had been ‘defamed’. The case, whilst gaining much publicity, was ultimately unsuccessful.

He describes how the old
way of calculating Magic Quadrants had both advantages
and drawbacks:
. . . they were more comprehensive in those days but
they weren’t consistent. So the way I would have my cri-
teria would be nothing like my colleague sitting next to
me. We weight in a very different way and the dots are
arrived at very differently. And the vendors didn’t like
that. The vendors didn’t like being top right in one and
bottom left in another and not knowing why. Often that
was because they were trying to negotiate about how
they were treated (interview, Gartner analyst A).
Magic Quadrants were more comprehensive because vendors could be scored according to criteria the individual ranker felt were important at the time or relevant to the specific circumstances. However, this meant the process of plotting the dots differed widely across the ranking organisation. This seemingly caused problems for Gartner’s relationship with vendors, who wanted greater clarity and uniformity around scoring mechanisms. One analyst notes that because the process of placing dots was now similar across Gartner, certain aspects of the ranking construction process had ‘improved’. However, he was also of the view that not all these changes in production were leading to improvements in the overall ‘quality’ of the Magic Quadrant:
. . . the purpose of the Methodology Team, and the purpose of all these extra steps, and more rigorous procedures, is to improve quality. The question really is about what quality means? And I would argue that the definition of quality being used there is about consistency, repeatability and audit trail. It is that level of quality. In other words, we have a process, we’re following it, no one is getting out of the process (interview, Gartner analyst A).
Improvements, in his view, were related to control over the process and the repeatability of the same evaluative measures. He then goes on to describe why he thought Magic Quadrants were better in previous years:
So I would argue that the value of the Magic Quadrants’ ten years was actually better, even though they were less accurate in some ways . . . there were bigger movements on Magic Quadrants from year to year. But the point being made was that analysts were changing the weightings much more dramatically to reflect what the customers were telling them. Now we reflect the customers . . . less well, because we have to go through a lot more steps to reflect what the customers are asking. So it is an interesting trade-off really. Who is the value for? (interview, Gartner analyst A).
His point is that there used to be more ‘movement’ on the ranking at each new release. Since individual rankers had the freedom to set criteria and plot dots, the ranking reflected what these ‘unnamed clients’ were actually telling them about vendors. By contrast, today, even though an analyst might hear critical comments about a vendor, these may not be so easily reflected within the Magic Quadrant (they may fall outside of the publicly available criteria). The clear impression we gained from our interviewees was that in recounting these moves towards transparency and standardisation they were also describing a decrease in their own discretion. In order to attempt to remove the idea of bias and partiality from the ranking, individual analysts were now increasingly circumscribed by a new material and organisational reality (increasingly explicit assessment criteria, a methodology team scrutinising their work, the need to provide explicit evidence for choices, a spreadsheet that plotted dots, etc.). We now turn to look in more detail at these constraints.
Actors are constrained in producing rankings
We want to show how dot-ology relies on an extensive
organisational apparatus that patterns the activities of
individual rankers in placing dots. Below we focus on
two particular aspects: technology and bureaucracy.
Technology
The spreadsheet has become a central feature of the production of Magic Quadrants. Law (2001) argues that spreadsheets are among those technologies that help create powerful actors (through allowing them to manipulate data so as to see and project things that others cannot). However, at Gartner, the spreadsheet appeared not to be a malleable tool but one that placed limitations on individual rankers. For instance, when information had been input into the spreadsheet and the graph plotted, it was then difficult, if not impossible, to move a vendor: ‘‘. . . you just can’t put the dots where you want. The dots are all related to each other. So if you move one score up it impacts all the dots on the chart’’ (interview, Gartner analyst A). A vendor might be moved if the analyst thought the calculative apparatus had failed to position a dot in the way s/he considered ‘fair’. Fair meant a placing that reflected the individual ranker’s own knowledge as opposed to that which results from the ‘organisational machinery’. However, moving a vendor once a graph had been generated would create further movement across the ranking. One small change could affect the position of all vendors and this would almost certainly attract the attention of colleagues elsewhere in the organisation.
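One way to read the analyst’s remark that ‘‘the dots are all related to each other’’ is that plotted positions are relative rather than absolute: if axis positions are rescaled against the best and worst scores in the set, raising one vendor’s score shifts where every other vendor sits in the box. The min–max rescaling below is our own illustrative reading, not a description of Gartner’s spreadsheet.

```python
# Illustrative only: if plotted positions are rescaled relative to the highest
# and lowest scores in the set, changing one vendor's raw score moves every
# other dot on the chart. This min-max rescaling is our own reading of the
# analyst's remark, not a description of Gartner's actual spreadsheet.

def plot_positions(raw_scores: dict) -> dict:
    """Rescale raw axis scores to 0-100 positions relative to the whole set."""
    lo, hi = min(raw_scores.values()), max(raw_scores.values())
    span = (hi - lo) or 1.0
    return {name: round(100 * (s - lo) / span, 1) for name, s in raw_scores.items()}

before = {"VendorA": 3.0, "VendorB": 3.5, "VendorC": 4.0}
after = dict(before, VendorA=4.5)   # only VendorA's raw score changes...
print(plot_positions(before))       # {'VendorA': 0.0, 'VendorB': 50.0, 'VendorC': 100.0}
print(plot_positions(after))        # ...but VendorB and VendorC move on the chart too
```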
For this particular analyst, this was further evidence that dots were not arbitrarily placed but that individuals were constrained by the scoring mechanism and technology. The analyst then goes on to describe how one of the few changes they could actually make to the graph was to:

. . . move the box around a bit. So, in other words, if all the dots are clustered in the centre you can reset the axes to get the box more spread out so they look more attractive. Otherwise, you would have a scale where all the dots are clustered around the centre or clustered around one spot. The idea there is just to make them spread out so you can actually read who compares to whom. So, there is a little bit of flexibility on the edges, but frankly, you can’t really rig it anymore (interview, Gartner analyst A).
Analysts had the freedom to adjust the scale within the spreadsheet but not specific dots. If vendors were all clustered together, it was possible to adjust the box to create distance between them. That is, to enhance or develop a greater distinction between the entities ranked than was initially revealed in the spreadsheet. This was apparently an attempt to make the rankings more ‘attractive’ (a point we develop in detail below).
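The ‘little bit of flexibility on the edges’ described here amounts to rescaling the axes rather than moving dots. The sketch below illustrates that idea: the axis range is refitted around the plotted values so that clustered vendors spread across the box while their relative positions stay fixed. The 0–100 scale and the 10% margin are illustrative assumptions only.

```python
# Illustrative only: 'moving the box around' by resetting the axis range to the
# span of the plotted dots (plus a margin), so clustered vendors spread out
# visually without any individual score being changed. The 0-100 scale and the
# 10% margin are our assumptions, not Gartner's.

def reset_axis(values, margin_frac=0.10, floor=0.0, ceiling=100.0):
    """Return (lo, hi) axis limits fitted around the dots with a small margin."""
    lo, hi = min(values), max(values)
    margin = (hi - lo) * margin_frac or 1.0   # avoid a zero-width axis
    return max(floor, lo - margin), min(ceiling, hi + margin)

# Dots clustered near the middle of a 0-100 scale:
vision_scores = [48, 51, 53, 55, 50, 47]
print(reset_axis(vision_scores))  # axis tightened to roughly (46.2, 55.8)
```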
Bureaucracy: the review process
There was reportedly increased scrutiny of the work of
the rankers. The Methodology Team dictated that rankings
should pass through various kinds of review. This includes,
firstly, the discussions analysts would have amongst them-
selves. Most Magic Quadrants were produced by more than
one individual, meaning that the ranking emerged from a
consensus amongst a group of authors. There was also a
‘peer review committee’ where analysts from the same
technology area would scrutinise the calculation. Accord-
ing to one analyst, it was now practically impossible to
‘rig’ Magic Quadrants because they were subject to so
much scrutiny:
If you have sat down and set the criteria out – I suppose
mentally you could if you sat down – but there is a lot of
heart felt discussion that goes on between usually a
couple of the authors and, there is usually two authors,
one author, sometimes two on each, and then there is a
team of maybe three or four who are very closely
involved (interview, Gartner analyst A).
Moreover, in recent years, a further check was introduced whereby the placement of the larger vendors was given an additional round of review. It was inspected by what was called a ‘lead analyst’ within Gartner. This was someone who had overall responsibility for research produced on specific vendors:

But now there is something else that happens as well. Say there is fifteen vendors on the Magic Quadrant,
you might have lead analysts on some of the biggest
vendors out there. So for the biggest vendors we tend
to have a lead analyst on them to keep a consistent
viewpoint of the whole vendor. So they might be in
ten different areas of technology and one analyst will
have an overview across the whole lot. So if there is
any form of escalation or, you want to go to one person
and say ‘give me an overview of that whole vendor’.
And they are a sixty billion dollar company or some-
thing, you’ve got somebody with a view across the
whole company. Those people have to review where
the dot is and what the wording of the text is (inter-
view, Gartner analyst A).
One final part of the review process was that graphs were also sent out to vendors themselves prior to publication who, in turn, were free to comment. A consequence of this, according to an analyst with responsibilities for the Gartner Ombudsman office, was that this often led to ‘thorny’ interactions between Gartner and the vendors:

. . . a thorny one would be a vendor is dissatisfied or believe that they haven’t been treated objectively in a . . . Magic Quadrant . . . So a typical issue might be well I am too far down and to the left and I deserve for my dot to be higher and more to the right. So they’ll come to us and say I haven’t been treated fairly (interview, Gartner analyst C).
Interestingly, it was not only in the management of
existing Magic Quadrants where various new kinds of
bureaucratic measures could be found. They were also vis-
ible in other aspects of the ranking. In particular, this was
in the creation of new Magic Quadrants. Developing a new
ranking turned out to be more difficult than in the past be-
cause a ‘committee’ had now been put in place to approve
them:
Before you could just do it. 10 years ago you could just
create one if you wanted to. You just had to negotiate
with the boss. But now you have to go to a committee.
There is a senior research committee that has to
approve all new proposals for Magic Quadrants. So
you have to justify there is a market, it’s big enough,
it’s growing at this rate, there’s lot of market clients,
here’s the enquiry volume coming from the customers,
‘OK then, you’ve got a Magic Quadrant’ (interview, Gart-
ner Analyst A).
Asked whether this particular analyst had been involved in or seen such a committee, he replied that he had observed the workings of a number of them from nearby. In particular, in recent months, he had seen a committee for a type of development called ‘Social Software’ (discussed in more detail below): ‘‘I didn’t go through the committee but I saw the forms you have to fill in, and you have to go to a meeting, and you have to in effect propose it and negotiate why it has a right to exist’’ (interview, Gartner analyst A). Added to this, and this is where we get to the substance of our argument, there was a further reason as to why setting up a new Magic Quadrant had become difficult. It appeared that the affordances and constraints of the device itself were a mediating feature.
Affordances and constraints of the ranking
Creating a Magic Quadrant was reported by those we interviewed to be ineffective at certain key times in a technological lifecycle. It was said to be difficult to set a ranking up at the outset and then during the more mature stages of the career of a technology. There could be difficulties in the initial stages of the launch of a new technological field because there might simply be too many vendors. An analyst describes how:

When there is a 100 [vendors], that’s not very good for us . . . because then [the market] is not mature enough for us to actually say, so what we are doing is watching that very carefully, and going, I will give you an example, Social Media Monitoring devices. There is tonnes of them at the moment (interview, Gartner analyst A).
When asked to explain why the presence of too many
vendors was problematic our respondent replies:
‘‘. . . graphically, you can’t, [. . .] we’ve done it, you can have
a 100 dots on the chart but it is unreadable. It is just gar-
bage. It is just a bunch of dots’’ (interview, Gartner analyst
A). In other words, if all players producing (or claiming to
produce a) new technology were to be included then this
would mean graphs would be too cluttered. There would
just be too many dots and vendor names on the device.
This would presumably create confusion for those
attempting to consume and make sense of the ranking
(see Fig. 2).
Another analyst notes that, at the outset therefore, Magic Quadrants may not be very useful for those seeking insights into developing trends: ‘‘possibly if you have 200 vendors in the space that is probably not the right time to do a Magic Quadrant’’ (interview, Gartner analyst B). The first analyst goes on to describe how, equally, too few vendors is also a problem: ‘‘And likewise when there is 3 dots on it, it is meaningless. What’s the point of having a Magic Quadrant with 3 dots?’’ (interview, Gartner analyst A). Too few dots meant that little is being described in terms of how the market is developing (see Fig. 3). The analyst gives a recent example:
[Fig. 2. Too cluttered: a Magic Quadrant whose four segments are crowded with dozens of vendor dots.]
[Fig. 3. Too empty: a Magic Quadrant containing only a handful of vendor dots.]
. . . we used to do things like operating systems. . . But
when Microsoft started dominating operating systems
on desktop or desktop applications it was pointless hav-
ing 4 dots on a chart . . . But the ones that I have seen
that have gone, have basically just dwindled to a point
where through mergers and acquisitions they are down
to less than 8 vendors, and the colleagues all turn
around and go ‘what was the point in that?’. The clients
don’t read them anymore, they are not so interesting.
The only people who read them then are clients who
want to justify what they are already doing – it is an
insurance policy kind of thing. But their value is very,
very low. The dots hardly move. And nobody is very
interested (interview, Gartner analyst A).
In contrast to the situation where there were too many
or too few vendors, those analysts that we had interviewed
had come to realise that there was an ideal number of dots
that could be pictured at any one time:
So, I would argue that Magic Quadrants are almost like,
if you imagine a market always going theoretically
going a 100 down to 10, to 5 vendors or something as
it consolidates and the barriers to entry get put up by
the incumbent. Gartner’s Magic Quadrant is the beauti-
ful picture when you have gone down to about 20, 25 to
15, or 10, and then once you go below that it ceases to
be useful. And before that it is not particularly useful
(interview, Gartner analyst A).
The ideal number is somewhere between 10 and 25
dots. This is what this individual ranker identifies as the
‘beautiful picture’. Another analyst makes the same point:
‘‘Typically, we would cream off all the vendors by inclusion
criteria, and we work that in a way so that there is 20, 25
dots’’ (interview, Gartner analyst B). It is seemingly a beau-
tiful picture because the graph is neither too crowded nor
too empty. It is also a beautiful picture because it appar-
ently keeps Gartner in the ‘game’ so to speak:
So, while it is in that sort of state between about 25
down to maybe 10 vendors, there is a choice, there’s a
multiple different dimensions to it, and different ways
of evaluating, how you write each vendor up. There is
complexity in it, and therefore there is a game for us
to play (interview, Gartner analyst A).
To summarise, dot-ology captures some of the interaction between the social and material aspects of producing a ranking. For instance, whilst (technically) it might have been possible to move individual placings on the spreadsheet, the analysts were constrained by the (social) review process where a moving dot would have to be explained and justified. Alongside this, the affordances of the Magic Quadrant meant that creating the figuration was difficult both at the outset and at the end of a technological evolution. At the outset, there were simply too many players and at the end, because the market has consolidated, there were too few. The individuals we interviewed appeared to agree that their experience had shown them that there was an optimal number of vendors that could be represented. In other words, the Magic Quadrant set limits on the kind of competitive space that could be created – and this was what one individual called the ‘beautiful picture’. In terms of teasing out what the rankers were attempting to achieve we find Miller and O’Leary’s (2007) ‘programmes’ and ‘technologies’ framework useful. Programmes refer to the conceptualisation and envisioning of a domain so that it might become open to calculation (the ‘beautiful picture’), whereas technology refers to the various interventions that are made through a range of devices so as to bring about such ordering. We now turn to look at such interventions.
How the ranking encourages actors to intervene in the wider
economy (dots moving markets)
Capturing the beautiful picture
The constraints dictated by the matrix appeared not only to have a spatial but also a time-related dimension. Although Gartner had identified the picture that furthered their interests and those of the market, this particular competitive space appeared temporally bound. At times, the number of players in an emerging field was changing so fast that Gartner could not capture the picture. Sometimes they were simply too slow to react to it, or, by the time they had reacted, the beautiful picture had long gone. To illustrate this point we include the comments of an analyst talking about the case of ‘Web Analytics’:
Sometimes they move through so fast that . . . Gartner’s
Magic Quadrant never quite . . . hits it. And a good
example of that would be Web Analytics where . . . it
was 68 vendors about 4 years ago and now there is
about 20 or so. But there is only 3 big ones who control
a vast majority of the market, followed by Google which
is free and then there’s a couple of specialists. So really
to have a Magic Quadrant with about 5 or 6 on, there is
not much point anymore. So it went from 68 to 6 in
about 3 years and so there was little window there
where Gartner could have managed to get a snapshot
of the market when there was 20 in, but then it was
gone (interview, Gartner analyst A).
In this case there were initially too many vendors and then later too few for them to ‘get a snapshot’ of the market (Web Analytics just passed them by). The ranking organisation was unable to capture the beautiful picture. This was because the particular technology field was too fast moving for Gartner to mobilise its large organisational machinery in a timely fashion (these were the standardised processes, committees, review cycles described above). If this was the case for Web Analytics, it seems also to be true for a new kind of technology called ‘Social Software’:
So a classic example is Social Software at the moment
where there is a team of 7 or 8 analysts in Gartner
now on that area . . . But Social has been around for –
you know Facebook and all that stuff – has been around
for quite a few years now . . . What happened was they
went: ‘Wait a minute people are making money in that
area’ . . ..I don’t mean Linked-in and that, they are not
making money, but the stuff companies are buying to
manage social networks or to deal with social networks.
They are starting to invest and there is companies piling
into that area and Gartner is going, at some point Gart-
ner – I think it was 18 months ago – Gartner went ‘Oh
my god. We’re late. Go. Boom!’ (interview, Gartner ana-
lyst A).
Here the analyst finishes the conversation by noting
how, in contrast to other smaller industry analysts and
market commentators, Gartner were typically ‘late’ with
their ranking:
An analyst will take it upon themselves and say ‘that’s
mine’, and they will go leap after it. Then a couple will
follow them and they will go after it. So we are, that’s
why I say that . . . we are not setting the pace. The only
time we do set the pace is when we are quick followers I
think is the best way I would describe it and we are use-
ful in that we bless things (interview, Gartner analyst
A).
Capturing the beautiful picture was also difficult because the grouping could simply no longer exist. That is, there was once a vibrant competitive space but now, because of mergers and takeovers, failures and collapses, and so on, there remained only a few competing players within a market. When this happened, the only solution apparently was to withdraw a Magic Quadrant:

I haven’t seen many [retired] recently because analysts don’t like giving up turf but, it tends to be where you have got down to just a handful like 5 vendors in a market . . . So, there is no formal process that says we review them and anyone with less than ‘x’ dots gets shot. It is more that the analyst knows that and goes and finds a new market to go cover and research, if they are bright, which they usually are. So often you find an analyst has 2 Magic Quadrants: one old one that is dying; and then they got another one with a slightly different definition which has a newer and more buoyant market. And then eventually they stop doing that one, but there is no formal process as far as I understand it (interview, Gartner analyst A).
If a Magic Quadrant is ‘old and dying’, an analyst may
then decide to ‘retire’ it. What all of this suggests is that
the ranking organisation was not completely passive in
searching for the beautiful picture. If the beautiful picture
was not there then the Magic Quadrant prompted them
to set about trying to create one.
Creating the beautiful picture
The affordances and constraints of the Magic Quadrant were such that it could encourage rankers to attempt to make interventions in/to markets. During our research, for instance, we noted how the ranking organisation appeared to have at least two strategies for creating beautiful pictures. The first of these is related to the standardised evaluation criteria described above. When there are too many vendors to be included in a Magic Quadrant, for instance, an individual ranker will continually set and reset these criteria in order to reduce the competitive space. One analyst describes this by talking through the example of Social Software:

There is a lot of discussion [internally within Gartner] about . . . what stage do Magic Quadrants have in a lifecycle of a market? And they are not good at the start of a market; they are hopeless! When a market is in its first couple of years and there is, Social Software and I’m looking at Social CRM at the moment and we’ve identified 92 vendors in the last three days. Can’t put 92 dots on a chart! So, it is pretty clear that we will set some high criteria to cut people out. And that is what the big debate will be about is how you set those criteria. But two years ago there was probably more than that. It all depends on how you define that market (interview, Gartner analyst A).
To paraphrase the words from above, these criteria are usually set around ‘quantitative’ aspects as well as more ‘qualitative’ elements. These will then be set and reset to ‘cut people out’. The second strategy is to divide spaces up to get the required picture. An analyst describes how this is done: ‘‘[c]learly there is a kind of optimal number of dots on a chart which Gartner kind of ends up almost dividing markets up in order to get that number of dots on a chart, which is readable, which is about 15 to 25’’ (interview, Gartner analyst A). The analyst acknowledges not only that Gartner reduce the market down, but that they reduce it down to a particular size: ‘‘So in effect you’ll find almost every analyst is setting the criteria, the bounds – not consciously really but we are doing it – to get 15 to 25 dots. Because if it drops to 5 dots, there’s 5 vendors in this market, it’s highly consolidated, so why would they ring us?’’ (interview, Gartner analyst A).
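The logic the analysts describe – tightening the inclusion criteria until a ‘readable’ number of dots remains – can be sketched as follows. The minimum-revenue bar, the stepping rule and the hypothetical vendor list are our own illustrative assumptions and not Gartner’s actual inclusion criteria.

```python
# Illustrative only: tightening an inclusion threshold until the number of
# vendors left falls into the 'readable' band of roughly 15-25 dots that the
# analysts describe. The revenue figures and the stepping rule are our own
# illustrative assumptions, not Gartner's actual inclusion criteria.

def shortlist(vendors, target_max=25, step=1.0):
    """Raise a minimum-revenue bar (in $m) until at most target_max vendors remain."""
    threshold = 0.0
    included = list(vendors)
    while len(included) > target_max:
        threshold += step
        included = [v for v in vendors if v[1] >= threshold]
    return threshold, included

# 90 hypothetical vendors with revenues between $1m and $90m
candidates = [(f"Vendor{i}", float(i)) for i in range(1, 91)]
bar, shortlisted = shortlist(candidates)
print(bar, len(shortlisted))  # -> 66.0 25 (a $66m bar leaving 25 vendors)
```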
Let us unpack more carefully the implications of what is being described here. Gartner set the bounds of the competitive space so as to arrive at what it thinks is an optimal number of vendors. Because there are too many vendors in an area – and since the emerging field cannot be captured in its entirety on a single Magic Quadrant – analysts will literally divide markets up. This means Gartner will attempt to create new competitive spaces and distinctions between technologies. The easiest way to do this appears to be through the introduction of alternative nomenclatures (Pollock & Williams, 2011). During the period of our research, for instance, we observed how Gartner introduced three new terminologies within the category of ‘Social Software’.
Social Software
Social Software is a relatively new area where there is
currently a great deal of activity and interest as well as
uncertainty. Gartner describe Social Software as the area
where they are fielding most questions from clients and
prospective purchasers. One key issue is that Social Soft-
ware is something of an ‘umbrella term’ (also described
as ‘Social CRM’ or ‘Social Media’). The problem is that large
numbers of vendors are rebranding their products as ‘So-
cial’ in some way. We attended a Gartner conference in
London, for instance, where an analyst makes this point
to the audience:
Social CRM is a huge topic. There has been tonnes of
calls about it. I am tracking currently about 90 vendors
who have some area of Social CRM. Some vendors are
calling themselves that and they are not. Some people
are that. Some people don’t know that they have it when
they have it. So there is a lot of movement going on as
people try to make sense of just Social Media in the ?rst
place, and that is a hard nut to crack: ‘What is Social
Media?’ (conference presentation, Gartner analyst D).
In the last couple of days alone, Gartner had identified nearly a hundred new players claiming to offer some kind of Social Software. There appears within the market a need for some form of clarity. Gartner’s response therefore has been to break this technological field down into further sub-segments. They have defined Social Software as containing: ‘Social CRM’, ‘Social Software in the Workplace’, and ‘Externally Facing Social Software’ (EFSS). Another Gartner analyst presents the rationale for these splits during a presentation:
ing a presentation:
. . . we initially had one Magic Quadrant for Social Soft-
ware and it really covered quite a few different technol-
ogies. Increasingly . . . we have been looking to split that
up because, as the market matures, we start to see some
of the kind of submarkets or other kinds of segmenta-
tion . . . these Magic Quadrants that are being issued in
2010, we’re building on the Social Software in the Work-
place which is looking at how these kinds of ideas can
be used behind the firewall . . . [t]he newest one that was released was EFSS or Externally Facing Social Software. What that is essentially doing is going beyond the firewall . . . Now we also see the public social media,
and I will also be talking about in a moment the Social
CRM Magic Quadrant, that is the third one which we
are releasing (conference presentation, Gartner analyst
E).
Out of one category, and because of the difficulty of representing all the possible vendors in the Social Software Magic Quadrant, they had crafted three new (sub)spaces. Creating these new kinds of technological categories turned out not to be a straightforward process, as we show in the final empirical section.
The pragmatics of making meaningful distinctions [8]
One way to bring a new competitive space to life seems to be to create Magic Quadrants for them. However, during a presentation, a Gartner analyst notes some of the difficulties surrounding the pragmatics of doing this – particularly in separating out the Social Software category and making clear distinctions between the vendors operating within it. The three new categories are presented on a slide as circles that overlap with each other:

Across these different segments you can see some examples of the kinds of vendors that we see. You can also see that these circles do kind of overlap. We do see that there are some vendors that are active in several different markets and that is reflected also when we start looking at the Magic Quadrant. There are vendors that are present on several of the Magic Quadrants and a couple who really are active on all three. Now . . . when we first started doing this analysis and we first started looking at the criteria we actually were . . . a little afraid that [we] would see a great deal of overlap (webinar, Gartner analyst F).
The analyst notes how there were vendors producing software that could be counted as belonging to all three categories. Their fear was that there would be a great deal of overlap. However, he goes on to say, there turned out to be fewer than anticipated: ‘‘. . . the overlap we had in the final publication is really quite small. There is only a couple really that appear on several different ones’’ (webinar, Gartner analyst F). The reason for this was how Gartner defined the evaluation criteria: ‘‘And parts of that is down to how we defined the criteria and what were the criteria and qualifications for being included in each Magic Quadrant’’ (webinar, Gartner analyst F). Setting and resetting the criteria meant that the rankings plotted exactly as they should do!
This pragmatics of making meaningful distinctions can be seen more specifically in the creation of the Social CRM Magic Quadrant. Here an analyst describes the difficulty Gartner have had in producing this particular ranking: ‘‘We’re in the process of creating a Magic Quadrant for this. There isn’t one yet . . . It is a very onerous task because so many of these vendors are very new and hard to define’’ (conference presentation, Gartner analyst G). Some months before the release of the Social CRM Magic Quadrant, an analyst speculated about how many vendors would be included. He shows the audience not the Magic Quadrant but a ‘list’ of some representative vendors:

Again this is a representative list – we are checking out 80 or 90. I think we are probably going to come out to 25 to 30 based on the criteria. One thing that we are looking over is vend over five million and putting in things like ‘Are we being asked about you?’ So, there is a lot of things in here . . . (conference presentation, Gartner analyst G).
He makes clear the quantitative and qualitative evaluative criteria to be used. He also notes the use of ‘the list’, which he views as a stand-in for the real ranking, which has yet to be devised. When, a few months later, the Magic Quadrant is published, the same analyst describes the final number:

Gartner just got finished with a Social CRM Magic Quadrant. We started with about a 120 vendors that we looked at. Many vendors had some sort of social aspect included in their CRM – Social CRM aspects to it. We, finally, we were left with around ‘19’ for various reasons that I will discuss (webinar, Gartner analyst G).
To summarise, evidence shows that when faced with a large number of vendors claiming to work in a new technological field, in order to create a competitive space, Gartner set the evaluation criteria to reduce the numbers of vendors included within each space; this is done by dividing up the field into new competitive groupings. If the beautiful picture that Gartner desire is not there then they set about trying to create it. Dot-ology therefore also captures the strategies deployed to influence the setting that the ranking describes. This pragmatic work is complex. The rankers struggle to differentiate between vendors within classifications; this is because they are imposing boundaries onto the market and this can create difficulties. Many vendors, for instance, could be included in more than one specific ranking. Deciding where a particular instance sits across a number of technology classifications therefore requires taking an explicit decision, which often proves to be an ambiguous process.

Footnote 8: We thank Robin Williams for suggesting this formulation.
Discussion
According to Espeland and Sauder (2007, pp. 36–37) the ‘proliferation of public measures of performance’ is one of the most ‘important and challenging trends of our time’ (see Jeacle and Carter (2011) who relate this point, through a discussion of rankings, to the core concerns of Accounting scholarship). The starting point for this paper was the suggestion that these measures wield forms of influence that have yet to be identified by existing forms of analysis. Whilst there are a growing number of studies that analyse the power of rankings, some from within Accounting research (Free et al., 2009; Jeacle & Carter, 2011; Kornberger & Carter, 2010; Scott & Orlikowski, 2012), others from outside this area (Pollock & Williams, 2009; Blank, 2007; Espeland & Sauder, 2007; Karpik, 2010; Kwon & Easton, 2010; Shrum, 1996; Wedlin, 2006), very few have provided insights into their makeup and minutiae (but see Schultz et al. (2001) who point to some aspects of their construction). One implication when a crucial market mechanism is black-boxed is that we only ever develop a partial understanding of its constitutive capacity. A tendency when faced with an incomplete vantage point is to raise the importance of those aspects of the phenomena that can be studied (Pollock & Williams, 2009). Specifically, rankings are seen to influence domains through changing the way actors make sense of and interpret the world (Espeland & Sauder, 2007; Kornberger & Carter, 2010; Wedlin, 2006).
We have worked up the idea of a ‘ranking device’ to capture how, alongside the way rankings cause people to adapt behaviour, graphic format and furniture can also be significant. Taking the example of an influential performance measure from within the information technology sector, we have shown how, in ways that are both social and material, this ranking has shaped the market for various technologies. Through describing how the ranking brought together and counterposed players in a ‘competitive space’, the paper considered three related aspects of the sociomaterial shaping of that space. Firstly, we focused on attempts by those technology vendors ranked by the assessment to affect the shape of the competitive terrain. Our evidence suggested that, because the ranking created the space by which various players could compete with each other (Kornberger & Carter, 2010), vendors were advised to adapt and orient themselves to the nuances and measures of the ranking. These included employing strategies to help improve their position and weaken that of competitors. The players were therefore brought together into one space, and, importantly, with the help of new forms of expertise, this space appeared tractable.
Secondly, whilst our initial discussion emphasised the social strategies at play (‘people moving dots’), we later introduced the theme of material agency. We demonstrated the sociomaterial constraints surrounding the shaping of the competitive space (‘dots were moving people’). We saw this in relations between individuals and the ranking organisation and then between the ranking organisation and the market. Until recently within the ranking organisation, individual rankers could wield notable amounts of discretion in placing vendors. More recently, however, because of moves towards transparency and standardization, there had been changes in ranking practices (the discretion of individual rankers had become entangled in and increasingly stifled by layers of technology and bureaucracy). Added to this, the graph itself (its affordances and constraints) also placed limitations on how the competitive space could be captured and represented. The rankers could not capture and represent all the players in a market on one graph. This meant they were forced to adopt alternative strategies.
Thirdly, we showed in particular how the rankers, as a result, were required to intervene directly in the market to attempt to shape the competitive space to account for the limitations of the two-by-two matrix. This meant they did not use the graph to represent a competitive space conceived prior to its inclusion in the ranking. Rather, they conceived of new competitive spaces – better still, were forced to conceive of these spaces – through taking the capacities of the ranking into consideration. We could say that the ranking prompted such an intervention and that this was a prompt that individual rankers appeared willing to accept. Rankers would thus attempt to modify the competitive space to fit the ranking (rather than the other way around). It is specifically this aspect – a situation we conceive of as ‘dots moving markets’ – that identifies one of the main contributions of the paper.
New visual and temporal dynamics
We propose that graphical performance measures (and figurations more generally) contribute a powerful instance of the process by which markets and material things mutually constitute one another (Callon et al., 2007; MacKenzie, 2009; Miller & O’Leary, 2007; Pinch & Swedberg, 2008). We attempted to get at this through analysing the interactions between ‘programmes’ and ‘technologies’. These refer to the imaginings and conceptualisations of an arena and the various devices and inscriptions that mediate and shape these envisionings such that a domain may be acted upon and calculated (Miller, 1998; Miller & O’Leary, 2007). We studied the production of the ranking not as ‘knowledge’ but as a ‘practice’. This is to consider the idea of a ranking not in an abstract representational idiom (Espeland & Sauder, 2007; Kornberger & Carter, 2010), but one which captures the nuanced interplay involved between the conceptualisation of a market domain and its incorporation within various format and furniture. What our analysis sought to show was how these devices both shaped and were shaped by the market. In particular, the format and furniture helped create a new visual and temporal dynamic within the IT domain.
Visual dynamic
We say visual dynamic because the ranking organisation attempted to specify what a market should look like. They sought a conceptualisation that made the information technology domain amenable to calculation (Miller, 1998; Miller & O’Leary, 2007). This meant they strove to produce a ranking that would allow everyone to see and compare how one technology vendor was performing in relation to another, in the most straightforward manner, where there were neither too many nor too few players in the competitive space. They apparently found the optimal number that could be included and this represented the ‘beautiful picture’.
What is the beautiful picture?
The beautiful picture is part of what we might think of as an ‘aesthetic economy’ operating within the ranking organisation. This is not to say that it is the picture of an ideal or perfect market (cf. Garcia-Parpet, 2007). Rather, it is the result of a negotiated, devised and contrived intervention. The beautiful picture was a set of compromises negotiated between the imaginings and conceptualisations of the ranker and the sociomaterial possibilities of the ranking. Material affordances potentially allowed for the placing of many vendors on a graph but (conventional) constraints meant that the rankers could not overburden the picture (Quattrone, Puyou, McLean, & Thrift, 2012). This would not only produce a figuration that would be difficult for clients to understand, it would also give the impression of an overly complex market (and this would have adversely affected the aesthetic economy deemed crucial by the rankers). Thus, the ranking was also conventionally devised (Espeland & Stevens, 1998): there were not only material aspects limiting the construction of the competitive space but also ‘social’ ones (David & Pinch, 2008).
The ranking was also a contrived figuration for bringing about certain kinds of (potentially contradictory) results. It was necessary to reduce the level of ‘confusion’ for decision makers and practitioners (there could not be too many dots). However, there could not be too few players on a graph either, because this would simplify the market to the point of undermining the need for further consultancy advice. If everything appeared straightforward, why would people continue to seek the ranker’s expertise? The beautiful picture was one that kept this ranker in ‘the game’ so to speak (for a discussion of the problems of creating and maintaining a market for expertise see Barrett and Gendron (2006)). (We owe our thanks to one of the anonymous reviewers for encouraging us to develop this point.)
Attempts to engineer the beautiful picture were consequential for the shaping of the market. This meant the ranking was not neutral with regard to what constituted a competitive space. It appeared ill-suited to new, fast-moving areas, for instance, where there were many new entrants in the technological area. Whilst individual rankers could spot vendors entering an emerging category, in practice, they could not capture or represent them within the ranking (the figuration lacked the affordances of a list in this respect). This issue resembles what Lynch (1985, p. 43), talking about scientific graphs, has called the ‘problem of visibility’. Scientists determine what is ‘natural’ based on what their graphs are able to depict. Translated to our concerns, this means that the rankers decided what a market ‘is’ – the competitive space: which players make up the market, the boundaries of the field, etc. – partially based on what the ranking was able to capture and communicate. This clearly evidences how information technology markets today are a product of format and furniture as much as any other calculative aspect of this particular ranking.
What was also salient about our study was the finding that, if the beautiful picture could not be captured, then the ranking organisation would try to create it. Because the graph was seen to embody key features of the markets under analysis, efforts were made to intervene in competitive spaces, so that the characteristics of these spaces were congruent with the affordances of the ranking. From fieldwork, we saw how rankers performed this in one of two ways: through limiting the number of vendors operating in a particular competitive space or by creating entirely new spaces. They performed the former through setting ‘inclusion criteria’ and the latter by attempting to divide technological fields into new designated areas of activity (with their own unique nomenclature, definition, inclusion criteria, Magic Quadrant, etc.). The designation of a new technological field of activity, or ‘competitive space’ as we have called it here, is not trivial. It can draw boundaries around a set of artefacts and their suppliers and create a space in which sorting and ranking becomes possible. If taken up, it can go on to provide crucial resources and constraints within which vendors and management and technology consultants articulate offerings. It can, in other words, become a fully-fledged market in its own right (Pollock & Williams, 2009, 2011). (To give one example, Gartner coined and went on to shape the Enterprise Resource Planning (ERP) terminology, which subsequently became one of the new paradigms of modern-day information systems; see Chapman (2005) for a review of ERP in the accounting area.)
One problem the ranking organisation now faces in competitive-spaces-constructed-according-to-the-affordances-of-a-ranking is the pragmatics of making meaningful distinctions. Since new boundaries were imposed onto the space, individual rankers struggled to differentiate between vendors in these new groupings. This was evidenced by the fact that certain vendors appeared in all three of the new Magic Quadrants. This outcome was thought less than ideal because it suggested a lack of distinction within the ranking. Similar issues were apparent when the ranker was forced to intervene because vendors clustered together. This occurred because the market was converging or, over time, vendors were conforming to the evaluative criteria (Espeland & Sauder, 2007), or, as in the case above, because there was no meaningful distinction to be made. Clustering was thought problematic because it suggested that all those on the graph had the same or similar qualities. This was counterproductive because, as in the case of the oversimplified market, there would be little value found in the ranking. Decision-makers required the vendors to be graded in a way that signalled a distinction. Without this, why would people contact the ranking organisation, to paraphrase one respondent? A further feature of this pragmatics therefore was the process whereby rankers were forced to devise distinctions by means of manipulating organisational machinery (i.e., resetting the axes of the spreadsheet to increase distance between dots).
Temporal dynamics
We say temporal dynamics because, during fieldwork, we were alerted to the fact that the affordances of the ranking were not static but evolving over time. Espeland and Sauder (2007, p. 36) discuss how rankings are a ‘moving target’: as people learn to ‘game’ them, their authors are forced to update evaluative criteria more or less on a continuous basis. Whilst this was also a factor in our case, we note how the ranking was similarly surrounded by a ‘moving organisational apparatus’ (Pollock & Williams, 2009). The Magic Quadrant had begun its career as a relatively informal, subjective ranking but there had been later (quite vigorous) demands placed on the ranker to recreate it as a formal assessment subject to auditing (see Free et al., 2009 for a discussion of these processes whereby rankings are audited). This meant individual rankers could no longer grade vendors exactly as they wished. It also limited their capacity to respond (rapidly) to innovation.

Today, the provision and administration of the ranking is circumscribed by new technology and bureaucracy. This has affected the ranker’s ability to produce ‘snapshots’. The ranking organisation cannot react in time to capture specific innovations. Some beautiful pictures disappear even before these experts can mobilise their committees, spreadsheets, etc. The pictures are there for a moment and then they are gone, to paraphrase one respondent. This meant that certain technological innovations could completely pass the ranker by. Pockets of the market can remain unranked in what is typically a highly graded arena. We think the instances where ranking devices and organisational apparatus create situations of ‘unrankability’ deserve further attention. It is a situation where the market escapes dots (thanks to one of the anonymous reviewers for suggesting this point). This raises the questions: were the markets for these products adversely (or positively) affected? Were the vendors who remained outside the competitive space punished (or rewarded) in some way?
Our evidence also showed how the affordances of the ranking created cyclical pressures on the ranking organisation to intervene at certain key moments. The beautiful pictures they sought were time-limited. They were not there at the outset of an innovation (there were too many dots to be represented), nor were they there as the technology matured (either there were too few dots to allow anything meaningful to be said, or all the players had clustered in the same box). This prompted the ranking organisation to engineer interventions not arbitrarily but at certain key points in the lifespan of a technology. This included, for instance, the moment when a new technological field first appeared to emerge and then later as it matured.
What does a focus on graphic format and furniture show?
Our paper has developed some of the analytical tools to consider the sociomaterial influence of a ranking. This raises the question of whether a focus on format and furniture draws attention to aspects not visible under social approaches. Existing modes of analysis give particular emphasis to how rankings influence people’s behaviour. The ‘mechanisms of reactivity’ concept (Espeland & Sauder, 2007), for instance, explicitly captures this through showing how rankings evoke self-fulfilling prophecies that encourage people to adapt their behaviour towards the calculation. Extending this, we have emphasised how ranking devices can also play a role through offering specific affordances and constraints and encouraging others to modify the settings within which action takes place. For example, we have shown how the graphical ranking came to suggest a particular order for a market, prioritising one market view over another (a beautiful rather than a cluttered or sparse picture), which the rankers then set about creating. The corollary is that a ranking can influence a setting differently, and perhaps more fundamentally, than previously thought.
Whereas the point above is about the shape of the landscape within which actions take place, we have seen that there is also a temporal issue. In this respect, our approach raises the question as to whether a sociomaterial influence, as opposed to simply a social one, is a more enduring form of influence. It could be argued that a ranking located ‘‘in the back of everybody’s head’’, as Espeland and Sauder describe (Espeland & Sauder, 2007, p. 11), may only have a fleeting influence whereas one residing in a specific format and furniture can endure indefinitely. As long as the ranker retains this particular format and furniture, the order described in the device above may continue to produce a particular shape to the market with little regard to the actions of individual players at specific times.
What we are foregrounding is how processes of market making are inscribed in and flow from the sociomaterial negotiations surrounding a ranking. Clearly the episodes of market (re)construction described here are very different from those formal accounts preferred by economists, where supply and demand come together to form a price (Callon & Muniesa, 2005). The ranking organisation described in the paper has a long tradition of creating new markets through ‘naming interventions’ (see Pollock & Williams, 2011). Many, though by no means all, of these go on to become functioning and independent markets. We thus offer an example of how new markets are constituted by the seemingly mundane constraints of a graph. This also contrasts with those Accounting scholars who view market creation as the result of primarily ‘social interactions’. Kornberger and Carter (2010, p. 330) write that ‘‘competition is something that is created out of interaction between market players’’. Our work, by contrast, has shown how devices are also party to these interactions (see also Miller and O’Leary (2007), Robson (1992) and Quattrone et al. (2012) who similarly highlight the link between devices and processes of market making). A future line of inquiry would be to see whether the arguments set out in this paper hold true for other areas. Does format and furniture hold similar implications for other kinds of performance measures?
Implications for accounting research
Accountancy firms will potentially play an increasing role in the provision and administration of formal and impersonal reputational indices (Free et al., 2009). The last 30 years have seen the emergence of a powerful range of consultancy and professional services organisations that produce rankings of various kinds. Many of these assessments are also being integrated into the ‘advisory’ (i.e. consulting) elements of the large accounting firms. Whilst we know that the demand for rankings is expanding, we still understand little about the detailed processes by which consultancy firms produce, administer and create a market for these assessments. We have produced a detailed study of how one global consultancy and research organisation constructs a highly successful performance measurement product. Our study, in this respect, meets Qu and Cooper’s (2011) recent call for more research examining the work of consultants – specifically how they acquire, commodify and apply their knowledge. Our aim, in this respect, was to assess the potential for an empirically grounded characterisation of the process by which such knowledge was produced and communicated. A popular conception of consultants is to see their assessments as based on the vagaries of individual discretion, whereas our recently conducted and ongoing fieldwork suggests that the origins of assessments lie in more observable sociomaterial and distributed processes. Above, for instance, we have drawn attention to the large machineries of ranking that are in place.
Accounting firms have also been important shapers of the consultancy industry (Christensen & Skærbæk, 2010). However, they have in the main unproblematically adopted many of the innovations generated from within this industry. Qu and Cooper (2011) highlight this specifically in relation to graphic inscriptions. Innovations in figurations will potentially have a number of implications for Accounting research. In particular, whilst there has been a good understanding and theorisation of 20th Century accounting representational devices (see for instance Chua (1995) on ‘accounting images’, and Ezzamel (2004) on factory performance indicators), those of 21st Century accounting are still being formulated (thanks to Chris Carter for suggesting this point). In this respect, Qu and Cooper (2011, p. 345) talk of new forms of inscriptions ‘‘materialized through different media with different qualities’’ and give as examples PowerPoint slides, flip chart pages, emails, strategy maps, graphics such as bullet points and checklists, and so on. These new kinds of inscriptions – another of which is described here: the two-by-two matrix – may well require scholars to update characteristic analytical framings and/or to draw on insights from allied disciplinary approaches.
Our work, which sits at the interstices between a number of different disciplinary schools (see Vollmer et al. (2009) for a review of the evolving intellectual interdependencies between Accounting, STS and Economic Sociology), potentially provides insights into how both the graphic inscriptions of accounting and the practices that surround them might change. The capture of business by the two-by-two matrix (Lowy & Hood, 2004), in particular, suggests that figurations are no longer a supplement but an intrinsic and constitutive part of market settings. Whereas calculative practices have predominantly been conceived of as ‘numerical operations’ (Miller, 2001), Quattrone et al. (2012, p. 9) argue that there will need to be more attention devoted to the ‘visual nature of numbers’ (see also Justesen & Mouritsen, 2008). We believe our paper meets elements of this call. Calculative practices turn ‘qualities into quantities’ (Miller, 2001). In our case, this would be the translation of a subjective opinion about a vendor – rendered through a large-scale ranking apparatus – into a quantity, such as placing a dot on a graph. We suggest that the form of dot-ology described here represents a unique instance of these kinds of calculative practices. On the one hand, this is how a calculation can come to be shaped by mundane graphic resources (and vice versa), and, on the other, how there is an aesthetic element to the construction of visual numbers. In terms of the former, those producing visual numbers may come to determine what is ‘calculable’ based on what graphs are able to depict. It is not simply that corporate and market performance are related to dots (stars, lines, waves, ticks, etc.) in order to reveal and order that performance; it is rather that the format and furniture of graphs interact and merge with the calculations. Visual resources constitute calculative practices, such that any numbers that result bear the imprint of graphic sociomateriality.
This latter element is also important because, as Quattrone et al. (2012, p. 9) note, little attention has been given to the ‘imaginative power’ of an inscription. This is their ability to envision what business and markets could and should look like. In this respect, we speculate that the two-by-two matrix is different from other formats, such as lists (Cardinaels, 2008), because it creates a particular way of representing and intervening in situations. As one of the premier modes of representing business activities – one only has to think of the ‘cost benefit matrix’, the ‘product and market matrix’, the ‘BCG Product Portfolio Matrix’, etc. – it creates a particular kind of aesthetic economy (Espeland & Stevens, 1998). Through visualising the elements of a competitive situation, one alters the way in which that situation is thought about and acted upon or practised. Their allure is such that the situation appears amenable to intervention. They encourage various forms of co-production such that settings are modified to become congruent with graphic affordances and vice versa. Ultimately, the predominance of figurations across industries means that their sociomateriality should become a feature of academic study. We call for serious and detailed study of the format and furniture of the major business and accounting visualisations, for it is not simply engines but beautiful pictures that shape economic life.
Acknowledgements
Neil Pollock would like to acknowledge the support of the Economic and Social Research Council (ESRC), which funded the research presented in this article. It forms part of work conducted under an ESRC Fellowship entitled ‘The Social Study of the Information Technology Marketplace’. We would like to thank those industry analysts and others who were kind enough to make themselves available for interview. We gratefully acknowledge the help and advice of the Editor and anonymous referees who provided very helpful comments on drafts of this paper. Thanks also must go to the following people for providing useful suggestions and ideas during the writing process: Chris Carter, Sampsa Hyysalo, Ingrid Jeacle, Jannis Kallinikos, Christian Koch, Irvine Lapsley, Eric Laurier, Donald MacKenzie, Peter Miller, Eric Monteiro, Susan Scott and Robin Williams.
References
Akrich, M., & Latour, B. (1992). A summary of a convenient vocabulary for
the semiotics of human and nonhuman assemblies. In W. Bijker & J.
Law (Eds.), Shaping technology/building society. Cambridge, MA: MIT
Press.
Aldridge, A. (1994). The construction of rational consumption in Which? Magazine: The more blobs the better? Sociology, 28, 899–912.
Anand, N., & Peterson, R. (2000). When market information constitutes fields: Sensemaking of markets in the commercial music industry. Organization Science, 11(3), 270–284.
Argyris, C. (1954). The impact of budgets on people. New York:
Controllership Foundation.
Barrett, M., & Gendron, Y. (2006). WebTrust and the ‘commercialistic
auditor’: The unrealized vision of developing auditor trustworthiness
in cyberspace. Accounting, Auditing and Accountability Journal, 19,
631–662.
Becker, H. S. (1982). Art worlds. Berkeley, CA: University of California
Press.
Blank, G. (2007). Critics, ratings, and society: The sociology of reviews. Lanham, MD: Rowman & Littlefield.
Bloomfield, B., & Vurdubakis, T. (1997). Visions of organization and organizations of vision: The representational practices of information systems development. Accounting, Organizations and Society, 22(7), 639–668.
Burton, B. & Aston, T. (2004). How Gartner evaluates vendors in a market.
Document ID Number: G00123716.
Callon, M., Millo, Y., & Muniesa, F. (Eds.). (2007). Market devices. London:
Wiley-Blackwell.
Callon, M., & Muniesa, F. (2005). Economic markets as calculative collective devices. Organization Studies, 26(8), 1229–1250.
Cardinaels, E. (2008). The interplay between cost accounting knowledge
and presentation formats in cost-based decision making. Accounting,
Organizations and Society, 33, 582–602.
Carroll-Burke, P. (2001). Tools, instruments and engines: Getting a handle on the specificity of engine science. Social Studies of Science, 31(4), 593–625.
Chapman, C. (2005). Not because they are new: Developing the contribution of enterprise resource planning systems to management control research. Accounting, Organizations and Society, 30(7–8), 685–689.
Christensen, M., & Skærbæk, P. (2010). Consultancy outputs and the purification of accounting technologies. Accounting, Organizations and Society, 35, 524–545.
Chua, W. F. (1995). Experts, networks and inscriptions in the fabrication of accounting images: A story of the representation of three public hospitals. Accounting, Organizations and Society, 20(2/3), 111–145.
Cooper, D., & Hopper, T. (Eds.). (1989). Critical accounts. Basingstoke:
Macmillan.
Dambrin, C., & Robson, K. (2011). Tracing performance in the pharmaceutical industry: Ambivalence, opacity and the performativity of flawed measures. Accounting, Organizations and Society, 36, 428–455.
David, S., & Pinch, T. (2008). Six degrees of reputation: The use and abuse
of online review and recommendation. In T. Pinch & R. Swedberg
(Eds.), Living in a material world: Economic sociology meets science and
technology studies. MIT Press.
Drobik, A. (2010). Getting Gartner: How to understand what we are talking about. Presentation given to the Customer Relationship Management Summit, London, 16th March.
Espeland, W., & Sauder, M. (2007). Rankings and reactivity: How public
measures recreate social worlds. American Journal of Sociology, 113(1),
1–40.
Espeland, W., & Stevens, M. (1998). Commensuration as a social process.
Annual Review of Sociology, 24, 313–343.
Ezzamel, M. (2004). Accounting representation and the road to
commercial salvation. Accounting, Organizations and Society, 29,
783–813.
Free, C., Salterio, S., & Shearer, T. (2009). The construction of auditability: MBA rankings and assurance in practice. Accounting, Organizations and Society, 34, 119–140.
Garcia-Parpet, M. F. (2007). The social construction of a perfect market: The strawberry auction at Fontaines-en-Sologne. In D. MacKenzie, F. Muniesa, & L. Siu (Eds.), Do economists make markets? Princeton University Press.
Gibson, J. J. (1979). The ecological approach to visual perception. Erlbaum.
Goody, J. (1977). The domestication of the savage mind. Cambridge:
Cambridge University Press.
Hacking, I. (1983). Representing and intervening: Introductory topics in the
philosophy of natural science. Cambridge: Cambridge University Press.
Hacking, I. (1992). The self-vindication of the laboratory sciences. In A.
Pickering (Ed.), Science as practice and culture. Chicago: University of
Chicago Press.
Hind, P. (2004). Self-fulfilling prophecies. CIO, 12 July. Accessed 29.03.06.
Hopkins, W. (2007). Influencing the influencers: Best practice for building valuable relationships with technology industry analysts. Austin, TX: Knowledge Capital Group.
Hopwood, A. (2007). Whither accounting research? The Accounting
Review, 82(5), 1365–1374.
Hutchby, I. (2001). Technologies, texts and affordances. Sociology, 35(2),
441–456.
Ingold, T. (2007). Lines: A brief history. Abingdon, Oxon: Routledge.
Jeacle, I., & Carter, C. (2011). In TripAdvisor we trust: Calculative regimes and abstract systems. Accounting, Organizations and Society, 36, 293–309.
Justesen, L., & Mouritsen, J. (2008). The triple visual: Translations between
photographs, 3-D visualizations and calculations. Accounting, Auditing
& Accountability Journal, 22(6), 973–990.
Karpik, L. (2010). Valuing the unique: The economics of singularities.
Princeton University Press.
Kornberger, M., & Carter, C. (2010). Manufacturing competition: How
accounting practices shape strategy making in cities. Accounting,
Auditing & Accountability Journal, 23(3), 325–349.
Kwon, W., & Easton, G. (2010). Conceptualizing the role of evaluation
systems in markets: The case of dominant evaluators. Marketing
Theory, 10(2), 123–143.
Lapsley, I., & Mitchell, F. (Eds.). (1996). Accounting and performance measurement: Issues in the private and public sectors. London: Paul Chapman Publishing.
Latour, B. (1986). Visualization and cognition: Thinking with eyes and hands. In H. Kucklick (Ed.), Knowledge and society: Studies in the sociology of culture, past and present. Greenwich, Connecticut: JAI Press.
Latour, B. (2005). Reassembling the social: An introduction to actor-network
theory. Oxford: Oxford University Press.
Law, J. (2001). Economics as interference. In P. du Gay & M. Pryke (Eds.),
Cultural economy: Cultural analysis and commercial life. London: Sage.
Lowy, A., & Hood, P. (2004). The power of the 2 × 2 matrix: Using 2 × 2 thinking to solve business problems and make better decisions. San Francisco: Jossey-Bass.
Lynch, M. (1985). Discipline and the material form of images: An analysis of scientific visibility. Social Studies of Science, 15, 37–66.
Lynch, M. (1988). The externalized retina: Selection and mathematization
in the visual documentation of objects in the life sciences. Human
Studies, 11, 201–234.
MacKenzie, D. (2006). An engine, not a camera: How financial models shape markets. Cambridge, MA: MIT Press.
MacKenzie, D. (2009). Material markets: How economic agents are
constructed. Oxford: Oxford University Press.
Miller, P. (1998). The margins of accounting. The European Accounting
Review, 7, 605–621.
Miller, P. (2001). Governing by numbers: Why calculative practices
matter. Social Research, 68, 379–396.
Miller, P., & O’Leary, T. (2007). Mediating instruments and making markets: Capital budgeting, science and the economy. Accounting, Organizations and Society, 32, 701–734.
Orlikowski, W. J. (2007). Sociomaterial practices: Exploring technology at
work. Organization Studies, 28(9), 1435–1448.
Pinch, T., & Swedberg, R. (Eds.). (2008). Living in a material world.
Cambridge, Mass: MIT Press.
Pollock, N., & Williams, R. (2007). Technology choice and its performance:
Towards a sociology of software package procurement. Information
and Organization, 17, 131–161.
Pollock, N., & Williams, R. (2009). The sociology of a market analysis tool:
How industry analysts sort and organise markets. Information and
Organization, 19, 129–151.
Pollock, N., & Williams, R. (2010). The business of expectations: How
promissory organizations shape technology and innovation. Social
Studies of Science, 40, 525–548.
Pollock, N., & Williams, R. (2011). Who decides the shape of product
markets? The knowledge institutions that name and categorise new
technologies. Information and Organization, 21, 194–217.
Preda, A. (2008). Technology, agency, and ?nancial price data. In T. Pinch
& R. Swedberg (Eds.), Living in a material world. Cambridge, Mass: MIT
Press.
Qu, S., & Cooper, D. (2011). The role of inscriptions in producing a
balanced scorecard. Accounting, Organizations and Society, 36,
344–362.
Quattrone, P. (2009). Books to be practiced: Memory, the power of the visual and the success of accounting. Accounting, Organizations and Society, 34, 85–118.
Quattrone, P., Puyou, F., McLean, C., & Thrift, N. (2012). Imagining
organizations: An introduction. In F. Puyou, P. Quattrone, C. McLean,
& N. Thrift (Eds.), Imagining organizations: Performative imagery in
business and beyond. London: Routledge.
Robson, K. (1992). Accounting numbers as ‘inscriptions’: Action at a
distance and the development of accounting. Accounting,
Organizations and Society, 17(7), 685–708.
Sauder, M., & Espeland, W. (2006). Strength in numbers? The advantages
of multiple rankings. Indiana Law Journal, 81, 205–217.
Schultz, M., Mouritsen, J., & Grabielsen, G. (2001). Sticky reputation:
Analyzing a ranking system. Corporate Reputation Review, 22, 24–41.
Scott, S., & Orlikowski, W. (2012). Reconfiguring relations of accountability: Materialization of the social media in the travel sector. Accounting, Organizations and Society.
Shrum, W. M. (1996). Fringe and fortune: The role of critics in high and popular art. Princeton, NJ: Princeton University Press.
Soejarto, A., & Karamouzis, F. (2005). Magic Quadrants for North American
ERP service providers. Gartner Document, ID Number: G00127206.
Stark, D. (2011). What’s valuable? In P. Aspers & J. Beckert (Eds.), The
worth of goods: Valuation and pricing in the economy. Oxford: Oxford
University Press.
Strathern, M. (2000). The tyranny of transparency. British Educational
Research Journal, 26(3), 309–321.
Tufte, E. R. (2001). The visual display of quantitative information. Cheshire,
Conn.: Graphics Press.
Violino, B., & Levin, R. (1997). Analyzing the analysts. Information Week, 17 November. Accessed 29.03.06.
Vollmer, H., Mennicken, A., & Preda, A. (2009). Tracking the numbers: Across accounting and finance, organisations and markets. Accounting, Organizations and Society, 34, 619–634.
Wedlin, L. (2006). Ranking business schools: Forming fields, identities and boundaries in international management education. Chichester: Edward Elgar.