From New Deal institutions to capital markets: Commercial consumer risk scores and the making of subprime mortgage finance

Martha Poon *

Science Studies Program, University of California San Diego, Department of Sociology, 401 Social Science Building, La Jolla, CA 92093-0533, United States
Center for the Sociology of Innovation, Ecole Nationale Supérieure des Mines de Paris, 60 Boulevard St. Michel, 75272 Cedex 06, Paris, France

* Address: Science Studies Program, University of California San Diego, Department of Sociology, 401 Social Science Building, La Jolla, CA 92093-0533, United States. E-mail address: [email protected]

Accounting, Organizations and Society 34 (2009) 654–674. doi:10.1016/j.aos.2009.02.003

Abstract
The investment fueled US mortgage market has traditionally been sustained by New Deal institutions called government sponsored enterprises (GSEs). Known as Freddie Mac and Fannie Mae, the GSEs once dominated mortgage backed securities underwriting. The recent subprime mortgage crisis has drawn attention to the fact that during the real estate boom, these agencies were temporarily overtaken by risk tolerant channels of lending, securitization, and investment, driven by investment banks and private capital players. This research traces the movement of a specific brand of commercial consumer credit analytics into mortgage underwriting. It demonstrates that what might look like the spontaneous rise (and fall) of a 'free' market divested of direct government intervention has been thoroughly embedded in the concerted movement of calculative risk management technologies. The transformations began with a sequence of GSE decisions taken in the mid-1990's to implement a consumer risk score called a FICO® into automated underwriting systems. Having been endorsed by the GSEs, this scoring tool was gradually hardwired throughout the industry to become a distributed and collective 'market device'. As the paper will show, once modified by specific GSE interpretations the calculative properties generated by these credit bureau scores reconfigured mortgage finance into two parts: the conventional, risk-averse, GSE conforming 'prime' and an infrastructurally distinct, risk-avaricious, investment grade 'subprime'.

© 2009 Elsevier Ltd. All rights reserved.
"The shift from reliance on specialized portfolio lenders financed by deposits to a greater use of capital markets represented the second great sea change in mortgage finance, equaled in importance only by the events of the New Deal."
FRB Chairman Ben Bernanke, August 31, 2007[1]

Introduction: From New Deal institutions to capital markets
At the tail end of 2006, the 'subprime' hit the news with a bang when default rates shot up in a segment of mortgage finance that had previously received little attention in mainstream reporting. Against rising central bank interest rates, and following the collapse of the housing bubble, borrowers bearing certain high-risk classes of loans ceased to maintain their repayment schedules. By the turn of 2007, the unanticipated inability of lenders to raise enough capital from borrowers impeded their own instalment payments to international residential mortgage backed securities (RMBS) holders. Major subprime lenders declared bankruptcy and several high profile hedge funds imploded. As regularized transnational circuits of capital flow broke down in the space of only a few months, the problem escalated into a financial credit crunch that soon took on global proportions. This series of all too recent and as yet ongoing events has made evident the long chain of financial connections that have come to co-ordinate the economic agencies of ordinary US homeowners with those of international capital investors.

[1] Remarks made by Chairman Ben S. Bernanke at the Federal Reserve Bank of Kansas City's Economic Symposium, Jackson Hole, Wyoming, August 31, 2007. The text is available online at http://www.federalreserve.gov/boarddocs/speeches/2007/20070831/default.htm.
Those working at the intersection of 'social studies of finance' and 'social studies of accounting' (Miller, 2008) might immediately suspect that instabilities in the segment named 'subprime' have been accompanied by important organizational and infrastructural changes whose underlying significance, through disruption, is perhaps only now coming to light. One of the most dramatic of these transformations has occurred in the business of mortgage finance, which sits at the nexus between the markets for real estate and those for asset backed securities. As emphasized by Federal Reserve Board Chairman Ben Bernanke in a speech responding to current events (quoted above), in the last ten years US mortgage finance has shifted from an industry driven by government sponsored enterprises (GSEs) and specialized deposit-funded lenders, to an industry fuelled in large part by high-risk investment capital. No longer the purview of local banks and savings co-operatives, consumer mortgages have become the asset class feeding some of the most popular debt securities for sale on Wall Street.
The shift towards the unfettered involvement of private capital in mortgage lending and its downstream effects are becoming widely recognized in the US. A New York Times Magazine contributor who had just received a letter informing him that his mortgage obligations were being transferred to another financial group expressed his personal sense of shock in this way: "...it came to me as a thunderous revelation: my debts were some other people's assets" (Kirn, 2006). In this spirit, the movement towards big capital has been tied to many of the most cited reasons in mainstream commentary for how mortgage credit became unsustainably amplified in the last few years. The profit driven interests of investment banks and hedge funds have ostensibly encouraged unscrupulous and irrational lending, fraudulent income reporting, and a reduced responsibility towards the personal situation of borrowers. This was compounded by naïve borrowing in the face of increasingly complex financing options and negligence on the part of the federal agencies who should have been protecting consumers from predatory lending. Critiques such as these have been deployed in the style of a classic 'sociology of errors' (Bloor, 1991), in which deviations from a retrospectively appropriate course of action are rooted out and condemned.
Analyses of technical systems that focus on (human) error are fundamentally 'asymmetric' because they are confined to situations of breakdown or crisis. This is why the post hoc denunciation of deleterious actions triggered by this new brand of mortgage finance reads like a stale list of 'the usual suspects' – the ones that are routinely rolled out whenever there is an issue with crushing consumer indebtedness (Black, 1961). This kind of reasoning leaves us open to two popular poles of argumentation: either to the ideologically driven conclusion that the current financial crisis is due to the natural excesses of free-marketeering run amok; or to a moralistic accusation that investment bankers allowed themselves to be seized by a greed-induced passion, a 'contagious' psychology of 'irrational exuberance' (Shiller, 2005, 2008), that temporarily overcame their otherwise sound economic good sense. Either way, these perspectives sidestep the pressing contemporary question of how a financial network for lending so freely has come into being. Crisis or no crisis, they fail to provide a compelling account of how these private capital players have managed to encroach, in practice, upon a marketplace the federal government has had to actively sustain, through specialized government sponsored agencies, since the New Deal. If government charters were once necessary to make the connections for liquid mortgage finance to exist – and in particular for making mortgage funding available to credit strapped populations – a move towards financial markets that sidesteps these entities cannot be sufficiently explained by a spontaneous ramping up of credit volume through supply and demand; and even less so by some kind of natural willingness among capital investors to cater to a consumer segment called the 'subprime'.
How has mortgage finance been rendered open to the practices of high-risk investment that appeal to big capital players? Surely, something might be said about the genesis and development[2] of subprime finance as a novel network of investment grade lending in and of itself. It is perhaps of interest, then, to take a step back from the collapse and to investigate the implementation of new calculative infrastructures and their consequences on how mortgage finance is arranged. To track such a change means taking up the painstaking search into the most mundane of details so familiar to social studies of science (Bowker & Star, 2000; Star, 1999) and of accountancy (Hopwood, 1987; Hopwood & Miller, 1994); it means exploring the innovations that have re-configured markets, their machineries and their places (Beunza & Stark, 2004; Guala, 2001; Muniesa, 2000; Zaloom, 2006; Çalışkan, 2007). In the case of the diffused industry of mortgage finance it means prying into the everyday apparatuses of underwriting and into the rise of consumer risk management techniques that have permitted a dramatic production of increased liquidity. Such an analysis would conclude that understanding subprime lending is less about unravelling the motivations and psychologies that might lead to financial overextension, than it is about understanding the development of technical apparatuses that have supported the practical activities of a new cadre of financial agents (Hopwood, 2000).
Instead of questioning why so much mortgage credit was extended to borrowers at a high risk of defaulting; instead of conflating the crisis with a set of culturally familiar categories such as the 'poor' or the 'economically vulnerable'; instead of presuming to know what it is that is collapsing and offering calculatively empty, off-the-shelf reasons for why, this research traces the technical constitution of an investment subprime – at once a class of consumers, a set of 'exotic' mortgage products, and a class of mortgage backed securities – as a viable and fluid network of high-stakes financial action. It may be helpful to note that generating financial action of this type is a substantially more complicated problem than single market formation (see, for example (Garcia-Papet, 2007)). In the case of mortgages, making debts fungible involves numerous transactions crosscutting what might be considered four distinct market arenas: First, there is the market for real estate where home buyers and sellers meet to exchange property. Next, there is the market for loans, where homebuyers receive credit from financial institutions. Third, there is the point of exchange between mortgage brokers and wholesalers who pool loans.[3] Finally, there is the secondary market where pools of these mortgages are packaged by securitizing bodies and sold off to international investors as financial products. For the full circuit to function, money or credit flows transversally in one direction while what is known as 'paper' in the industry, or debt, flows in the other. This is an extraordinary problem of coordination that demands much more than a single interface where buyers and sellers meet.

[2] The term 'genesis and development' is borrowed from the work of Ludwig Fleck, a classic text on the establishment of scientific facts in science studies (Fleck, 1981).
Consistent assessment is central to framing financial exchanges. In the absence of sustained calculation no financial action is possible, and there is little or no secondary mortgage market. To create liquidity in any circuit of mortgage finance – government sponsored or otherwise – numerous agents must come to similar understandings of the value of the asset backed paper so that it can be successively transferred between market participants. If the overarching problem is to organize heterogeneous actors to agree upon the qualities of goods (Callon, 1998a; Callon, Méadel, & Rabeharisoa, 2002), then there is strong reason to suspect that the recent explosion of secondary subprime financial activity is the result of a process through which a novel chain of mortgage valuation has been put into place. Rather than assuming that calculation is a monolithic means to market organization, however, this research takes for granted that calculative activities are by nature disorderly – that is, that at the outset, there are as many potential solutions to a problem of valuation as there are participating agents. From this position, stories about paradigmatic shifts towards quantification, models, or risk management are inadequate explanations, for even if such movements could spontaneously occur, it is unlikely that agents working on a calculative problem independently, from different fields, would spontaneously come to the same evaluative results.
To understand unprecedented subprime liquidity the empirical concern is to document the work that has been done to selectively reduce calculative multiplicity, particularly with regards to low quality loans. Instead of taking the uniformity of calculative frames from real estate to the secondary markets for granted, this paper will explore the importation of a distributed calculative (Hutchins, 1995) analytic apparatus into mortgage origination. In 1995, the GSE known by its nickname, 'Freddie Mac', adopted a commercially available consumer risk assessment tool called a FICO® credit bureau score which was originally designed to control risk in consumer credit (credit cards, small loans etc.). At that time, Freddie's goal was simple and clear: it wanted to standardize underwriting practices in federally sanctioned, prime mortgage lending by introducing a consistent means of screening credit risk into its newly automated system. The paper follows the gradual, sequential and material movement[4] of this specific risk management tool, the FICO®, as it spread from the GSEs throughout mortgage finance. It documents how, in redefining the calculation of prime quality, commercial scores simultaneously provided an expression of non-prime quality whose quantitative granularity was unprecedented.
What this account intriguingly suggests is that the displacement of the New Deal institutions through the activation of capital players is not a result of inaction or inattention on the part of GSE managers. To the contrary, the intensification of high-risk lending has been built out of the GSEs' very own initiatives to wrest calculative control over mortgage finance.[5] The key word is 'built'. The GSEs' authoritative endorsement of a particular commercial solution to the problem of consumer credit risk assessment created the conditions of its widespread adoption. But this alone did not guarantee that all players would resort to the same risk management tool. Once marked by the government agencies' authoritative interpretation and entrenched in their newly automated underwriting software, continuous infrastructural investment had to be made to ensure that FICO® scores would be taken up and used in similar ways across the industry. Ratings agencies such as Standard & Poor's would, in turn, play an active role in stifling calculative diversity by translating the FICO® into non-government channels for securitization.
The establishment of FICO® as a common calculative tool in mortgage making led to clear changes in lending practices. As the paper will further show, once a common interpretation of these scores was achieved, a gradual shift away from traditional, exclusionary practices of credit control-by-screening and towards gradated practices of credit control-by-risk occurred. Where subprime lending required overriding the very judgment that was central to control-by-screening (since by definition a subprime loan was a mortgage that had been screened out), in a regime of control-by-risk, subprime lending became an exercise in risk management within a newly created space of calculative possibility. Under control-by-risk, managerial decision making was no longer confined to approving or withholding loans, but was extended to the exploitation of stabilized grades of credit quality accessed through scores to create multiple borrowing options tailored to accommodate varying levels of risk. This point is pivotal. It is through this calculative shift, enacted through FICO®, that the original GSE markets were circumvented by the development of a second, infrastructurally distinct circuit of high-risk mortgage investment known as the 'subprime'.

[3] For a detailed account of this particular market interface see (Bitner, 2008).
[4] In an era where information transmission can seem effortless, the term 'material' is used to emphasize that the transfer of consumer credit scores packaged and sold as a commercial product comes with important monetary as well as organizational costs. It further signals that this movement is sequential rather than instantaneous; that it passes through physical media, rather than through a generalized culture or human cognitive capacity; and that it leaves behind traces that can be empirically followed.
[5] For a key statement on how the state and accounting might be analyzed as mutually constitutive see (Miller & Rose, 1997).
Tracking FICO® credit bureau scores
Work on financial markets is only one part of a broader research movement towards an anthropology of markets that considers exchange as the outcome of intensive processes of economic formatting[6] (Callon, 1991; Callon, 1998b; Callon & Çalışkan, forthcoming). Although the social studies of finance[7] as a movement is perhaps broader than influential science and technology studies renditions would have it (compare (Godechot, 2001) to (MacKenzie, 2006)), an attentiveness to technologies of calculation – both mathematical and non-mathematical variants (Callon & Muniesa, 2002) – has certainly been central to work in this field. Calculation comes into play in retail markets (Cochoy & Grandclément, 2006; Lave, 1988; Lave, Murtaugh, & de la Rocha, 1984) or labor markets (Godechot, 2006), among others, but it plays a special role in finance where the products being exchanged are not only the objects of calculation, but are in and of themselves (as with securities and derivative products) mathematical derivations based on underlying commodities, risk estimates or indices (Lépinay, 2007; Millo, 2007). This is why the investigation of a financial market is often enmeshed with an anthropology of calculation, an exercise in tracking the history and circulation of facts, figures and formulas.[8]
Tracking calculative objects can be an extremely fruitful method for following the constitution of the financial products as well as the coordinated assessment of their qualities, around which are configured market forms (Lépinay, 2002; MacKenzie, 2003; Muniesa, 2000). The case of consumer credit scoring in the US is a case in point. Credit scoring originally referred to a number of statistical techniques used for predicting credit risk that produced a credit score: the punctual empirical assessment of the odds that a consumer might default on a loan expressed as a probability. Over time the term has been diffracted in two directions: scoring techniques have been extended beyond default predictions to address such questions as the likelihood that a consumer might respond to a marketing campaign or generate revenue by making use of the revolving function on a credit card; and secondly, among credit data analysts, scoring has come to loosely refer to any system that produces a rank ordering of a population of credit consumers even if this does not involve strict probabilities or numerical scores.
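To make the calculative object concrete, the following sketch illustrates the kind of additive scorecard logic that produces such a score, in which banded attributes earn points and the point total is calibrated to odds of default. It is a hypothetical illustration only: the attributes, point values and odds-to-score calibration are invented for the example and are not the FICO® model or any bureau's proprietary algorithm.

```python
# Illustrative only: a toy additive "scorecard" of the kind described above.
# The attributes, point values, and odds-to-score calibration are invented;
# this is not the FICO(R) model or any real bureau algorithm.

SCORECARD = {
    # attribute: list of (lower bound, upper bound, points)
    "years_on_file": [(0, 2, -20), (2, 7, 10), (7, 100, 35)],
    "utilization":   [(0.0, 0.3, 30), (0.3, 0.7, 0), (0.7, 1.01, -40)],
}
DELINQUENCY_POINTS = {"none": 40, "minor": 0, "serious": -60}
BASE_POINTS = 600  # arbitrary anchor so totals land on a familiar-looking scale

def band_points(bands, value):
    """Return the points of the band containing `value`."""
    for lower, upper, points in bands:
        if lower <= value < upper:
            return points
    return 0

def score(applicant):
    """Sum the points earned on each attribute, as on a printed scorecard."""
    total = BASE_POINTS
    total += band_points(SCORECARD["years_on_file"], applicant["years_on_file"])
    total += band_points(SCORECARD["utilization"], applicant["utilization"])
    total += DELINQUENCY_POINTS[applicant["past_delinquency"]]
    return total

def default_probability(points, anchor=600, anchor_odds=20.0, pdo=40.0):
    """Convert points to a default probability: `anchor_odds`:1 good/bad odds
    at `anchor` points, doubling every `pdo` points (all invented numbers)."""
    odds = anchor_odds * 2 ** ((points - anchor) / pdo)
    return 1.0 / (1.0 + odds)

applicant = {"years_on_file": 4, "utilization": 0.25, "past_delinquency": "none"}
points = score(applicant)
print(points, round(default_probability(points), 4))  # e.g. 680 0.0123
```

Whatever the real parameters, the output is the same kind of object the paper tracks: a single, portable number standing in for a probability of default.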
The proliferation of credit scoring activity in backstage banking has come to the attention of several social scientists concerned about a paradigmatic shift towards quantitative risk management in consumer finance (Guseva & Rona-Tas, 2001; Leyshon & Thrift, 1999; Marron, 2007).[9] But what distinguishes these studies from those in the social studies of finance is that they do not treat scoring pragmatically as a set of concrete systems worthy of detailed exploration so much as they exploit it as a terrain on which to theorize grander themes such as rationalization, quantification, discipline and governance. Because credit scoring is portrayed as an example of a larger movement, these studies tend to put aside the formal properties of technical systems. Analysing technologies in terms of how they fit into bigger pictures means taking for granted the significance of a trajectory of innovation that shapes specific tools. Yet, from a science and technology studies inspired perspective, it is within the details of these processes that the formal calculative properties of technical systems – in and of themselves the potential agents of change – are created and established.
As I have discussed elsewhere, the distinctive properties of the credit scoring system in the US are very much a product of idiosyncratically unfolding processes (Poon, 2007). Credit scoring is not only a body of statistical methods that is being applied within financial institutions to assemble and digest consumer credit information into a decision making tool; it is also a thriving industry for 'analytics' in which a range of consumer risk management products designed and marketed by specialized firms circulate with stabilized contents as commercial goods. These firms may have little or no ability to generate consumer data on their own, but each one possesses a delicate savoir-faire (De Certeau, Giard, & Mayol, 1998), a prized 'way of doing' based on accumulated experience, artisanal skills, and in-house software that allows practitioners to exploit credit information and fix the results of their analysis into applications suited for business decision-making. The broader research project from which this research is taken traces the US origins of an industry for credit analytics. This industry began in the late 1950's with the pioneering efforts of a single firm – Fair, Isaac & Company Incorporated (today, Fair Isaac Corporation)[10] – to sell 'custom application scorecards', a statistical tool originally adapted to the needs of finance companies.[11]

[6] The term 'economic formatting' might be thought of as a less cumbersome term for what Callon has also called 'economization'. It refers to the process through which activities, arrangements and behaviours are qualified as 'economic'. Because Callon argues that there are multiple definitions of what is economic and that these are perpetually under construction, controversy and maintenance, cases of economic formatting can only be identified empirically according to the definitions that actors themselves deploy for what constitutes an economic situation.
[7] SSF is also much narrower than the field of economic sociology (Smelser & Swedberg, 1994; Swedberg, 2003) although it might be brought into relationship with the sociology of markets (Fourcade-Gourinchas, 2007). For a statement on the 'sociology of financial markets', see (Knorr Cetina & Preda, 2005). For an early sociological take on financial markets that predates the SSF movement see (Adler & Adler, 1984; Baker, 1984). Several research networks have been organized to support work in SSF. Donald MacKenzie's ESRC professorial fellowship sponsors a researcher's list and conferences for the U.K. (see: http://www.sociology.ed.ac.uk/finance/index.html). The Social Studies of Finance Network (see: http://www.ssfn.org/), run out of the LSE's Department of Information Systems, is partnered to the French network 'Association d'Etudes Sociales de la Finance' (see: http://ssfa.free.fr/) at the Centre for the Sociology of Innovation, Ecole Nationale Supérieure des Mines de Paris. In the US, the website for the Social Studies of Finance conference hosted by David Stark at the Center on Organizational Innovation (COI, New York, 3-4 May, 2002) has also served as an important resource.
[8] Much excellent work in science and technology studies has been devoted to tracing the history and circulations of things, tools and technologies (Clarke & Fujimura, 1992; Daston, 2000; Kohler, 1994; Latour, 1987; Levinson, 2006; Rheinberger, 1997).
[9] For an exploration of the equivalent practices of risk calculation in corporate finance see (Kalthoff, 2005). It is interesting to note that commercial lending is much less quantified than US consumer lending.
The commercial basis of credit scoring provides a unique opportunity for understanding the material transfer, that is, the step-by-step movement from one location to the next, of risk management practices and information. Similar to the way in which formulas issued from academic scientists might bear the signature of their author(s) – the Black-Scholes-Merton option pricing formula is a key example of this – proprietary credit scoring models made by credit analytics providers will bear the brand mark of their maker. This means that many of the tools for the statistical analysis of credit data have an independent and distinctly traceable origin from the more diffuse and maverick methods for data mining into which credit scoring as a practice is currently being subsumed. The most celebrated invention issued from this fruitful circumstance of corporate innovation is called a 'credit bureau score'. A US bureau score is any consumer credit risk estimate that is calculated using individual level credit (and repayment) information compiled and periodically refreshed from a number of sources, such as revolving credit card lines, small personal loans and auto financing.[12] Financial institutions issuing credit, regardless of their contribution to the data pool, can purchase commercial risk scores, available in several distinct brands from each bureau, as a generic tool that aids in evaluating the overall credit risk of an individual borrower.
The strength of the bureau scores as risk management aids is that they give competitive lending firms equal access to general snapshots of the consumer that are continuously recalculated as new data is amassed from participating lenders. Such scores are by no means produced from an 'ideal' data set. They are parasitic and pragmatic constructions that make the most of information that is readily available at the bureaus as a resource for manufacturing pre-packaged analytic products. These black-boxed statistical figures are in large part 'behavioural scores'. They do not seek to qualify static qualities of the person so much as they constitute a temporally responsive picture of consumer risk that is useful for tracking a person's ongoing relationship to credit. Unlike classic 'application scores' which use data provided directly by a consumer on a form, it is noteworthy that bureau scores are calculated in the absence of input on income, occupation, or socio-demographic characteristics, even the ones that may legally be considered, because this kind of data is simply too costly to be accessed and reliably maintained by the bureaus.
Beyond the fact that bureau scores exist, there is an additional and important peculiarity about the US market for scores. Through an unexpected business configuration achieved by Fair Isaac, three statistically distinct proprietary scoring algorithms were put in place at Trans Union, Equifax, and Experian, the three major credit bureaus. As a result of these joint ventures, similar scores are manufactured by these otherwise highly competitive organizations under a common FICO® brand-label. The FICO® line of scores numerically tags an estimated 75% of the US population eligible for consumer credit on a linear scale of 300-850 units, trademarked by Fair Isaac. The robustness and penetrance of the pan-bureau 'product', with its high substitutability and low switching costs, explains why, in a situation where product proliferation and heavy competition among multiple, sui generis statistical solutions would otherwise be expected, there exists instead a single analytic product that saturates the market for scores. The co-ordinating effects of this widely circulating piece of 'economic information' (Callon, 2002) are significant: the overwhelming commercial success of this tool is arguably what has given recent US consumer credit markets their coherence, confluence and vibrancy.[13]
As the FICO® has travelled across financial institutions it has become a distinctive market device (Callon & Muniesa, 2002; Callon, Muniesa, & Millo, 2007a), that is, a traceable technological system involved in aligning the decision-making of lenders with regards to the qualities of borrowers. A market device is any distributed technological arrangement that participates in the production of calculative agencies that are firm enough to render a singular qualification of market goods and therefore sustain the coming together of agents in acts of exchange (for a number of concise case studies see (Callon, Muniesa, & Millo, 2007b)). In short, a market device is a social scientific concept for identifying objects of investigation whose analysis can demonstrate that "Calculation is neither a universally homogeneous attribute of humankind, nor an anthropological fiction" (Callon et al., 2007a, p. 5). The implication of this provocative phrase is that market devices are by no means technologically determined, that is to say, they do not exist prior to their own implementation in actual practice. Nor can devices be reduced to discrete technologies. That technologies become market devices is achieved by active translation (Callon, 1986) through which they are adjusted, interpreted, modified, reworked, extended and distributed to become the bedrock of a collective calculative infrastructure.

[10] Research for the author's PhD dissertation on the history of credit scoring technology (University of California San Diego, expected 2009) began with a series of over thirty open-ended, face-to-face interviews carried out between June 2004 and October 2005. Respondents were (predominantly) former Fair Isaac personnel, including executives, analysts, project managers, sales people, and technical and administrative support staff contacted through snowball sampling. Only two of the interviews are quoted directly in this piece. The remaining data presented here were collected between January and August 2007 from a variety of trade journals, government documents, regulatory manuals, newspapers, and online sources.
[11] The original Fair Isaac scorecards were custom crafted algorithmic tools designed to capture patterns of default in firm-level consumer credit data. The tool rendered scoring possible at the point of retail sale by representing the algorithm as sets of figures to be added on a printed card. Today, scorecards are no longer visible as they have been embedded into electronic systems. Although Fair Isaac continues to be a leader in the field, they face increasing competitive pressure from rival providers as well as from in-house analytics groups. For a general account of their methods by a former company executive, see (Lewis, 1992).
[12] Credit bureau data can be negative (default information) or positive (repayment information). US bureaus keep both kinds. There are other major data gathering operations in business that compile consumer credit histories and provide other marketing services (such as preparing direct solicitation mailing lists), but by strict definition a US bureau sells actual credit histories and is subject to the Fair Credit Reporting Act (FCRA), 15 USC § 1681 et seq.
[13] This argument is made in the author's PhD dissertation. Moving through several iterations of the technology as it emerged from the activities of Fair, Isaac and Company Incorporated beginning in 1957, this research shows how consumer credit scoring has gradually become the information infrastructure sustaining multiple US consumer credit markets. The work culminates with an analysis of how scoring has participated in generating the current credit crisis, largely triggered by calculative overflows of risk within the consumer credit sector.
Before their use in mortgage making, the FICO® scores had already become a genuine market device in the wider US consumer credit markets (personal loans, credit cards, retailer credit). Their circulation had singularized calculations of consumer risk and had considerably reified the position of the consumer into a highly governable person (Miller & O'Leary, 1987, 1994) in those markets. Commercial scores give lending institutions access to a common viewpoint on the consumer; they assign individuals a routinely updated placement in a shared cartography of the marketplace. It is in large part through these scores (assisted by a smattering of other scoring tools) that the competitive basis of consumer credit has undergone a dramatic turn from one set of calculative agencies into quite another. Over the past few decades, consumer credit markets have progressively moved away from blunt forms of profitability based on tighter consumer selection – credit control-by-screening, characterized by simple but rigid barriers of exclusion designed to sift for acceptable credit quality – and towards razor sharp segmentation games that demand superior product matching – credit control-by-risk, characterized by a segmented accommodation of varying credit qualities. To remain competitive, consumer finance operations must do additional statistical work to refine the risk estimates produced by FICO®, supplementing these with in-house data and subtle re-calculation. But this does not undermine the fundamental effect that shared commercial risk scores have had on co-ordinating lenders' overall vision of an accessible population, as well as for stimulating strategies of product design and targeted marketing. The result is a risk segmented and saturated US market for consumer credit.
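The contrast between the two regimes can be made concrete with a short sketch. The cutoff, score bands and rate premia below are invented for illustration and do not reproduce any lender's or GSE's actual policy; they only schematize the difference between exclusionary screening and gradated, risk-priced accommodation.

```python
# Illustrative contrast between the two regimes discussed above. The cutoff,
# score bands, and rate premia are invented; they do not reproduce any
# lender's or GSE's actual policy.

def control_by_screening(score, cutoff=660):
    """Exclusionary screening: a single barrier that accepts or rejects."""
    return "accept" if score >= cutoff else "reject"

def control_by_risk(score, base_rate=6.0):
    """Gradated accommodation: every band gets an offer priced for its risk."""
    bands = [
        (720, "prime", base_rate + 0.0),
        (660, "near-prime", base_rate + 1.0),
        (620, "subprime A-", base_rate + 2.5),
        (580, "subprime B/C", base_rate + 4.0),
    ]
    for floor, segment, rate in bands:
        if score >= floor:
            return segment, rate
    return "subprime D", base_rate + 6.0  # even the riskiest band is served

for s in (750, 640, 565):
    print(s, control_by_screening(s), control_by_risk(s))
```

Under the first function a low score ends the transaction; under the second it merely routes the borrower to a differently priced product, which is precisely the newly created space of calculative possibility described above.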
Credit scoring is a prime example of how numbers might matter to market activity not so much because of what they represent and whether they represent accurately, but because of what they enable agents to do (Vollmer, 2007). From a perspective that is sensitive to the generative capacities[14] of calculative tools in action, it should come as no surprise that the movement of a tool such as the FICO®, from consumer credit into mortgage finance,[15] might provoke the configuration of a specific set of economic agencies heretofore unseen in mortgage making. A method that has therefore proved useful for making the emergence of these agencies visible is to track the details of the scores' movement through their uptake by the government sponsored agencies and out into mortgage making infrastructures. (For clarity, the handful of institutions involved is described in Table 1.) As this research will show, the government agencies' interpretation of how to use the tool, once impressed upon the scores, has led to the bipartite organization of today's US mortgage markets into the conventional prime and high-risk subprime. Grasping the scores' bubbling potential to reconfigure the calculative underpinnings of the mortgage markets, however, first requires an understanding of how credit quality was previously assessed by the GSEs in the absence of circulating numerical consumer credit scores.
Government sponsored mortgage market making
In the US, homeownership is not just a part of the 'American Dream'; it is also actively facilitated by specialized state initiated institutions. Since the Great Depression, the US federal government has played an important role in making a liquid and stable mortgage market (Carruthers & Stinchcombe, 1999). As part of the New Deal, the Federal Housing Administration (FHA) was started in 1934 to provide guaranteed insurance for mortgages, and the Federal National Mortgage Association (FNMA) in 1938 to create a government assisted market for loans. In 1968, the FNMA was transformed from a government owned body into a government sponsored enterprise (GSE), changing its name to 'Fannie Mae'. A second GSE, 'Freddie Mac' (Federal Home Loan Mortgage Corporation, FHLMC), was created in 1970.[16] Freddie's charter demanded that it "promote access to mortgage credit throughout the Nation (including central cities, rural areas, and underserved areas)".

[14] A distinction should be made here between the notion of 'capacities' and that of 'generative capacities' with regard to technology. Generative capacities are possibilities that inhere in technical systems, but they are not developed without continued enrolment and innovation. In the current case, the possibility of risk based pricing inheres within credit scoring but is not necessarily expressed if users do not develop this capability through additional innovation. The GSEs, for instance, do not.
[15] In US economic reporting loans secured by real estate have traditionally been treated separately from consumer credit, the latter referring to retail credit, credit cards, small loans, and car financing. The distinction reflects the different institutional pathways through which these kinds of credit are originated.
[16] Through a statutory process the GSEs were placed in conservatorship on September 7, 2008 by the freshly created regulatory body, the Federal Housing Finance Agency (FHFA). The move was an effort to stem the systemic impact of their increasing weakness on the ongoing credit crisis. Treasury Secretary Paulson has argued that because they are federally chartered but publicly traded, profit-oriented corporations, "only Congress can address the inherent conflict of attempting to serve both shareholders and a public mission" (Secretary Henry M. Paulson Jr. on Treasury and Federal Housing Finance Agency Action to Protect Financial Markets and Taxpayers, September 7, 2008; the text is available at http://www.ustreas.gov/press/releases/hp1129.htm). At the time they were created the GSEs were intended "to overcome then-existing legal and institutional impediments to the flow of funds for housing" (Congressional Budget Office, 2003, p. 1). Initially they did so by issuing debt securities, but they also became major investors in private mortgage securities, purchasing 13% of all products produced in 2006 and 2007. For a detailed description of how these agencies have operated as well as of the recent spate of challenges they have faced see (Frame & White, 2007).
The enterprises were created to fulfil an equalizing and democratizing function. From the 1970's on, the stated mechanism by which they were to accomplish this mission was "by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing".[17] The federal government's intention was that the GSEs would 'attract private capital for public purpose', serving as a kind of 'institutional market maker'[18] by liaising homeowners borrowing funds to buy houses in the primary markets with capital holders seeking investment opportunities in the secondary markets. The GSEs were not intended to make loans like banks. Rather, their purpose was to facilitate the movement of debts in one direction in order to generate renewed funds in the other, either by purchasing and holding, or packaging and selling, financial instruments called mortgage-backed securities (MBS). Considered a type of bond, the original GSE-MBS was a simple pool of conforming mortgages called a 'single class pass-through' (Adelson, 2004), which was calculated to yield a certain percentage as the loans matured.[19]
To understand the reasons for the GSEs it is important to recognize that the default state of debts is inertial. As a part of their production, debts are entangled in managerial rules, institutional relationships, and local processes of decision making. A recent handbook on asset securitization by the Office of the Comptroller of the Currency (OCC) explains: "in the days before securities, banks were essentially portfolio lenders; they held loans until they matured or were paid off". Under this arrangement loans, including mortgages, "were funded by deposits, and sometimes by debt, which was a direct obligation of the bank (rather than a claim on specific assets)" (Comptroller of the Currency, 1997, p. 2). A securities market only works, then, providing that debts can be converted into mobile and transferable goods whose qualities buyers and sellers can come to agree upon in the present, even though these qualities will only be expressed in the future. The value of a simple MBS, its quality, depended on the credit risk (estimated rate of default) and the prepayment risk (estimated rate of payment in advance of the due date) of the pooled assets, as either event could decrease the eventual return to the investor. The need to assess these qualities explains why specialized agencies have been required to provide the production function necessary to bring securitization and the liquidity advantages that accompany it into being.[20]

Table 1. Overview of the major institutions and technological systems featured in the paper.

Government sponsored agencies (GSEs): Freddie Mac (FHLMC, Federal Home Loan Mortgage Corporation) and Fannie Mae (FNMA, Federal National Mortgage Association).
Role in mortgage market: Formed to purchase and assemble pools of loans into investment grade securities, the GSEs created the traditional guidelines and letter grades for rating mortgages and securities. They spearheaded efforts to automate the mortgage industry in the mid-1990's, adopting the FICO® scores and benchmarking the prime market at FICO® 660.
Relevant technological contributions: RMBS, the original simple pool, residential mortgage backed securities. Prime market automated underwriting software: Loan Prospector® (Freddie Mac) and Desktop Underwriter® (Fannie Mae).

Consumer credit bureaus: Experian, Equifax and Trans Union.
Role in mortgage market: Defined by and subject to special laws, these are competitive repositories of data on consumers collected from lenders and the public record. The business, which was started by 'mom and pops', has slowly been consolidated. Today, these three firms hold statistically significant information on an estimated 75% of the US population eligible for credit (i.e. over age 18).
Relevant technological contributions: Originally providers of credit reports, by 1991 the big three had independently entered into joint ventures with Fair Isaac to produce and sell a statistical consumer analytic product called a FICO® score, a risk management tool for credit solicitations and account control. Each has since implemented multiple scoring algorithms and sells brands of competing credit scores.

Consumer analytics firm: Fair Isaac.
Role in mortgage market: A pioneering credit analytics firm started by Bill Fair and Earl Isaac in 1956; developer of the 'scorecard', a basic credit scoring tool. In the mid-1980's they engineered the statistical algorithms implemented at the bureaus to produce FICO® scores.
Relevant technological contributions: The group attempted and failed to create a commercially viable 'mortgage score' out of RMCR (residential merged credit reports), the traditional data of mortgage underwriting. When the GSEs adopted the FICO®, Fair Isaac turned their support towards that product.

Ratings agencies: e.g. Standard & Poor's (S&P).
Role in mortgage market: S&P worked with the GSEs to test statistical underwriting in the non-prime market and offered a model validation service that actively hardwired the GSE interpretation of the FICO® into numerous alternative underwriting systems.
Relevant technological contributions: LEVELS®, S&P's proprietary mortgage securities evaluation program that issues a letter rating for investors to indicate the credit quality of pools of loans. The system prefers loans tagged with FICO®.

Subprime specialists: e.g. Countrywide, 1st Franklin Financial.
Role in mortgage market: Specialized subprime operations. Some created proprietary underwriting software and packed mortgages into subprime securities issued through the ratings agencies while retaining servicing rights over the loans.
Relevant technological contributions: e.g. CLUES® (Countrywide's Loan Underwriting Expert System). This system relies on FICO® scores.

[17] Federal Home Loan Mortgage Corporation Act, January 2005, 12 USC 1451 Sec. 301 4. It is noteworthy that the original FNMA did not securitize loans but purchased and held them (Sarah Quinn, personal communication).
[18] The technical definition of a 'market maker' in finance is an exchange member who is positioned to take responsibility for making the market. These figures are obligated to buy and sell from their own account whenever there is an excess of orders in either direction. For a detailed description of this profession see (Abolafia, 1997). The term is employed only loosely here.
[19] For a lively description of the early problems in organizing a mortgage backed bond market in the 1980's see (Lewis, 1990), an engaging memoir of the writer's days in the employment of Salomon Brothers.
In the securities markets, this function has belonged in large part to third party ratings providers such as Standard & Poor's, alongside Fitch's and Moody's (Sinclair, 2005). The system of credit rating they manufacture is an important financial indicator. Ratings describe the overall quality of pools of loans underlying debt securities such as bonds and other financial instruments issued from private companies or even from nation states (treasury bonds). Like information printed on packaging (Cochoy, 2002), performance testing to report on products in a consumer magazine (Mallard, 2007), or classifications of grain that allow different growers to merge their stocks (Cronon, 1992, pp. 97–119), ratings are what allow investors to know something about the contents of investments so they can decide what to buy. Providing standardizing information about mortgage holders in the days before individual level credit bureau scores was a challenge, and "investors and other market participants faced greater difficulties in comparing the riskiness of loans from different lenders" (Adelson, 2004, p. 5). While the ratings agencies are experts in the process of evaluating the credit risk of million dollar asset pools, nation states, and large corporations, they have traditionally not been attuned to fine processes of rating individual mortgage consumers. For this reason, even they have had to follow behind the authoritative market making guidelines set by the GSEs.[21]
The government agencies have therefore had to serve as an all-in-one expert and organizational solution to both the problem of standardizing underwriting (quality control of individual loans) and the downstream problem of certifying securities (quality control of aggregated loan pools). In the absence of competing market forces and with the weight of the federal government behind them, they have filled their function by keeping a firm hand on the micro-organization of loan origination. The GSEs calculated a value for loans and loan pools, but their original methods were not quantitative. Instead, prior to the advent of scoring, their main strategy was to issue thick books of underwriting guidelines, stringently designed to screen for acceptable quality loans. The GSEs' independently devised ratings grades, carved through their thicket of rules, became recognized across the industry: A for prime investment and A-, B, C, and D for non-investment grade, or less-than-prime. The ratings agencies did provide their own systems for rating RMBS, but for the most part they confined their efforts to certifying asset pools outside of GSE control. Nevertheless, although those pools might have been excluded by the Agencies due to a variation in the underlying loan configurations, "until the mid 1990's all loans included in securitized pools in the non-conforming market were assumed to meet agency prime loan credit standards" (Raiter & Parisi, 2004). The privately securitized loans were 'non-prime' (as distinguished from subprime), because they were considered acceptable from a credit risk standpoint according to the official GSE rulebooks.[22]
Each of the thousands of lenders around the country could use the GSE classification system for loan origination. But the limiting property of a rule-based form[23] of rating, its Achilles weakness, was that the interpretation of the rules on the ground "differed from one company to the next" (Adelson, 2004, p. 5). Due to the imperfect transmission of a standard meaning of the rule, what ended up happening in practice was that "one lender's 'A-' looked a lot like another lender's 'B'" (Raiter & Parisi, 2004, p. 3). Given the wide margins of uncertainty in the resulting grades, the GSEs rendered their debt products attractive by investing exclusively in 'A' quality loans and offering only a modest return on investment. They further sweetened the deal by offering to share the risk burden with investors, guaranteeing the value of the principal (although not of the interest). The Agencies "promise the security holders that the latter will receive timely payment of interest and principal on the underlying mortgages", and for their services they claim "an annual 'guarantee fee' of about 20 basis points on the remaining principal" (Frame & White, 2007, p. 85).

Under these conditions, in a market dominated by long term 15- and 30-year fixed interest rate loan products, it is easy to see why mortgage securitization was an unappealing proposition to the fast-paced, high-return world of private equity.
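A rough, hypothetical calculation helps show why the returns were modest. In the sketch below, only the guarantee fee of about 20 basis points is taken from the text; the pool size, note rate, servicing fee, prepayment rate and default rate are assumptions chosen purely to make the arithmetic visible.

```python
# Back-of-the-envelope sketch of one year of a guaranteed single-class
# pass-through. Only the roughly 20 basis point guarantee fee comes from the
# text (Frame & White, 2007); the pool size, note rate, servicing fee,
# prepayment rate, and default rate are invented assumptions.

def passthrough_year(principal, note_rate=0.065, servicing_fee=0.0025,
                     guarantee_fee=0.0020, prepay_rate=0.06, default_rate=0.01):
    """Return (interest passed to investors, principal returned, ending balance)."""
    interest_to_investors = principal * (note_rate - servicing_fee - guarantee_fee)
    prepaid_or_scheduled = principal * prepay_rate
    defaulted = principal * default_rate
    # Under the agency guarantee, defaulted principal is still passed through;
    # the GSE absorbs the credit loss in exchange for its guarantee fee.
    principal_returned = prepaid_or_scheduled + defaulted
    return interest_to_investors, principal_returned, principal - principal_returned

balance = 100_000_000.0
interest, returned, balance = passthrough_year(balance)
print(interest, returned, balance)  # 6050000.0 7000000.0 93000000.0
# Investors earn the note rate minus roughly 45 basis points of fees on a
# shrinking balance: a stable but modest yield, with prepayment rather than
# credit loss as the main residual uncertainty.
```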
Automating mortgage underwriting and the importation of bureau scores
The shift away from rule-based rating towards a system of score-based rating for RMBS marked a fundamental change in mortgage underwriting. However, this shift need not have passed through the FICO® scores, and indeed this was not Fair Isaac's original inclination. By the early 1990's the company's success at making and marketing bureau scores for the consumer credit markets was nearing its pinnacle, and the company was seeking new opportunities for expansion. According to oral history,[24] they set their sights on the mortgage industry, hiring professionals from the field with the idea of developing a specialized credit risk score for home loan underwriting. Their analytic scouts soon discovered that the way credit data was brought into mortgage underwriting was through an 'RMCR' – a residential mortgage credit report. The practice of merged reporting, a system of gathering the personal data that would be fed through the GSE guidelines, had grown out of the days when the bureaus were small, geographically scattered operations and when an individual might have reports lodged in several places, all containing relevant information. Fair Isaac's first instinct, therefore, was to try and partner with report merging firms to develop a scoring system for RMCR data.

[20] The advantage of MBSs for lenders is that they provide more liquidity than keeping primary loans on the books. Today, securities have come to be seen as increasingly financially desirable because these carry lower capital requirements under Basel, which in turn "improves return on capital by converting an on-balance-sheet lending business into an off-balance-sheet fee income stream that is less capital intensive" (Comptroller of the Currency, 1997, p. 4). While this paper considers the translation of default risk into commercial numerical scores, it is noteworthy that uncertainty surrounding prepayment risk was controlled contractually in subprime finance through the imposition of heavy penalty fees.
[21] It was not until 2001 that Freddie's products began to be independently rated by S&P. This move is part of the increasingly complex intertwinement of the government sponsored and private investment mortgage markets described in the last section of the paper. The testimony of Leland Brendsel, then Chairman and CEO of Freddie Mac, before the US Senate Committee on Banking, Housing and Urban Affairs, Subcommittee on Housing and Transportation on this topic is available online at http://banking.senate.gov/01_05hrg/050801/brendsel.htm.
[22] For example, a non-prime loan "conforms to traditional prime credit guidelines, although the LTV, loan documentation, occupancy status or property type, etc. may cause the loan not to qualify under standard underwriting programs" (Raiter & Parisi, 2004, p. 2). Another example is 'jumbo loans', which are loans that exceeded the government imposed size caps that were placed on the GSEs until this year.
[23] For a theoretical discussion of the notion of a form see (Thévenot, 1984).
The problem with scoring the RMCRs was that the reports were infamous for being inherently unreliable. To create an RMCR a mortgage broker would assemble data from several credit bureaus and "bring in other elements that might not necessarily be part of the credit bureau. So they would do a verification of employment, or verification of income". However, the process of merging reports provided commission motivated mortgage brokers with "the wiggle room, [...] to manipulate the system to get a mortgage loan through". In addition to merging data, the other "service [brokers] did was to 'cleanse' the credit report. They formatted it a certain way, and then if the mortgage worker said, 'this information is wrong' they would manually fix it on their merged credit report". GSEs were aware of these kinds of procedural loopholes, which they tried to close by passing more and more supplementary rules. So as time went on, the mortgage underwriting guidelines became "so rigid that if you followed them by the letter no one would ever originate a loan"! The situation only reinforced the brokers' motivation to engage in tactics that are as old as the industry, to invent resourceful ways to drive loans forward and to keep the system moving. This meant that Fair Isaac's business strategy (an isomorphic imitation of the bureau scoring project) would eventually stall. The GSEs, which fixed the rules for the secondary market, would not agree to purchase loans underwritten by a novel score calculated from merged reports whose content they knew was subject to manipulation.[25]
In the same period, the GSEs had begun their own search for automated solutions to tighten the system. Expected to balance a complex set of objectives – promoting flexible and affordable housing, all while maintaining their reputation for investment quality products, rewarding their shareholders, and adequately controlling risk[26] – the Agencies were facing considerable pressure from all sides to gain consistent knowledge of the quality of loans they were purchasing from mortgage originators. Numerous efforts were being made, in particular at Fannie Mae, to produce automated underwriting programs based on mentored artificial intelligence (AI).[27] In their original conception, these kinds of systems "simply converted existing underwriting standards to an electronic format" (Freddie Mac, 1996, Chapter 1, Improving the World's Best Housing Finance System).[28] They were attempts "to try to train a system to reproduce the credit decisions of a human underwriter (or group)". While simple automation "brought speed and consistency to the underwriting process", such systems could not, however, 'optimally predict defaults'. Industry reports seem to agree that by the mid-1990's "mentored AI systems had largely lost out to or begun to progress to statistical mortgage scoring—which brought the key advantage of modeling the actual likelihood of mortgage default" (Straka, 2000, p. 214).
A genuine 'mortgage score' was a statistical undertaking considerably more ambitious than anything a free standing analytics firm with no way to generate empirical data of its own could have undertaken. Such a score would be made from a model in which credit data (such as bureau data) figure alongside industry specific data on the characteristics of the property and the type of loan being considered, as well as information on income and personal finance. With their massive stores of historical mortgage data the GSEs were the only institutions in a position to envisage and implement such a project. It was at this point that Freddie Mac made a series of crucial decisions that would lay down the calculative foundations for dramatic change. Not only did Freddie decide to pursue statistical underwriting to the detriment of the traditional rule-based methods, but, secondly, rather than testing the bureau holdings for the most predictive combination of consumer credit data for mortgage lending, they opted to insert consumer credit data, pre-digested in the form of numerical commercial bureau scores, into their nascent systems.29
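The paragraph above describes the general shape of such a model: bureau-derived credit information combined with loan- and property-level variables and borrower finances, producing a single estimate of default likelihood. As a purely illustrative aid, the following minimal sketch shows one way a statistical mortgage score of this kind could be structured; the variable names, weights, and logistic form are hypothetical assumptions for exposition and are not drawn from any GSE system described in this paper.

```python
import math
from dataclasses import dataclass

@dataclass
class LoanApplication:
    """Hypothetical inputs a statistical mortgage score might combine."""
    fico: int              # commercial bureau score (credit data, pre-digested)
    ltv: float             # loan-to-value ratio, e.g. 0.80
    dti: float             # debt-to-income ratio, e.g. 0.35
    cash_out_refi: bool    # example of a loan-type characteristic

# Illustrative weights only; real models are estimated from historical loan data.
WEIGHTS = {"intercept": -4.0, "fico": -0.015, "ltv": 3.0, "dti": 2.0, "cash_out_refi": 0.4}

def default_probability(app: LoanApplication) -> float:
    """Toy logistic model: combines credit, loan, and income data into one risk number."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["fico"] * (app.fico - 700)   # bureau score enters as a single factor
         + WEIGHTS["ltv"] * (app.ltv - 0.80)
         + WEIGHTS["dti"] * (app.dti - 0.35)
         + WEIGHTS["cash_out_refi"] * app.cash_out_refi)
    return 1.0 / (1.0 + math.exp(-z))

print(round(default_probability(LoanApplication(640, 0.95, 0.45, True)), 3))   # higher risk
print(round(default_probability(LoanApplication(720, 0.70, 0.30, False)), 3))  # lower risk
```

The point of the sketch is simply that, in such a model, the bureau score is only one factor among several, which is what makes the GSE decision to insert the pre-packaged commercial score, rather than raw bureau data, analytically consequential.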
Inspiration or caprice, the exact reasoning behind the decision to adopt the general commercial risk scores was not reported even to the makers of the FICO® scores, whose own ambition was to design and market a new consumer risk calculation specifically adapted to mortgage risk.

What is certain is that Freddie Mac's primary objective was to include a reliable selection of consumer credit data into their automated systems in a form that could not be locally manipulated by the brokers.
24 As part of this research the author has carried out interviews with two former Fair Isaac bureau score specialists who worked almost exclusively with the mortgage industry throughout the 1990's. Both were conducted in September, 2006.
25 All quotations in this paragraph are taken from the two interviews cited above.
26 One way of keeping housing affordable has been to offer loans to less solvent borrowers but to distribute risk by arranging an appropriate amount of mortgage insurance from a network of other federally mandated institutions such as the Federal Housing Administration (FHA) or the Department of Veterans Affairs (VA).
27 General Electric Capital, a financial subsidiary of General Electric (GE), also came out with an AI based system (automated but not statistical) called GENIUS in the same period.
28 In March 1996, Leland C. Brendsel testified before a subcommittee of the Senate Banking Committee on HUD oversight. Part of the purpose of his appearance was to discuss "the extraordinary benefits that automated underwriting is bringing to home mortgage lending". Following the presentation, Senator Carol Moseley-Braun (D-IL) commissioned the agency to prepare a report "on automated underwriting and credit scoring and their impacts on the wide range of American families who borrow money to purchase homes" (Freddie Mac, 1996). The document is available online at http://www.freddiemac.com/corporate/reports/moseley/mosehome.htm. In the absence of page numbers I have indicated chapter titles.
29 In an industry review article, John Straka, then Director of Consumer Modeling at Freddie Mac, reveals that Freddie originally endorsed both FICO® default risk scores and the competitive CNN-MDS bankruptcy risk scores. But since the predictor of bankruptcy was narrower in scope than default and was only available from a limited number of bureaus, it seems to have fallen out of the picture.
Coming from the giants in the field, the agencies' gesture was designed to simultaneously restrain the artful brokers, to provide a way to monitor credit standards (Schorin, Heins, & Arasad, 2003), and to create a criterion of commensurability (Espeland, 1998) for assembling and describing prime, GSE quality MBSs. It should be noted, however, that these goals might have been equally achieved by employing electronically transferred raw credit data purchased from the bureaus, and dissolving them seamlessly into the proprietary algorithms the GSEs were assembling from scratch.30 The astounding result was that although 'credit data' was only one category of information included in mortgage scores, it was now reduced to a discrete factor whose composition could potentially become invariable between automated systems. While the estimates of property value, the loan-to-value ratio, personal income, and any number of other factors included in the mortgage might be calculated in many different ways, provided the industry followed Freddie's guidelines for FICO® scores, the interpretation of credit risk could potentially be the same across all automated systems.31
Treated on the same footing as the rest of the mortgage industry, Fair Isaac received a letter from Freddie Mac dated July 11, 1995.32 Firmly grounded in the tradition of credit control-by-screening – that is, of seeking to lend only to those of a credit quality that made them highly unlikely to default – Freddie announced its decisions, including a third significant stipulation: that a FICO® score of 660 was the eyeball threshold for their definition of loans eligible for prime investment. Within months Fannie Mae swiftly followed suit, adopting the identical convention in October to demarcate their prime loans. Industry insiders suggest that Fannie had no choice because they suddenly found themselves besieged by bad paper – that is, by loans that passed through their rule-based guidelines but which were adversely selected because many had already been picked over and rejected by Freddie.33 The decision to use FICO® scores, as well as the GSE manner of interpreting them, was materially hardwired into the system through the release of proprietary, agency designed, automated underwriting software that would henceforth be used to underwrite all loans destined for agency purchase. The first system in circulation was Freddie Mac's Loan Prospector®, which had become commercially available to all Freddie Mac lenders in February 1995. Fannie Mae's Desktop Underwriter® would soon follow suit.
The FICO® feature of automated system design was politically useful when the software was showcased to legislators.34 A score within a score, FICO® could be neatly pulled out of the formula as a discrete factor in both systems; it could be isolated and independently interpreted as having meaning. For example, to explain statistical automation, discrete, individualized FICO® scores conveniently substituted for the quality of 'creditworthiness' which government officials and the public had come to recognize as being an essential part of loan evaluation. In a report to a subcommittee of the Senate Banking Committee, the section devoted to explaining the use of commercial credit bureau scores made an explicit equivalence between the use of FICO® scores and an evaluation of 'creditworthiness', even though the former is a shifting quality assigned statistically with respect to the aggregate and the latter has traditionally been considered a personal property of the individual often thought to be interchangeable with 'character'. Through this analogy with known concepts (even though the commonalities were thin35) FICO® helped circumvent some of the technical difficulties in explaining statistical underwriting to lay audiences.
The effect of bureau scores was not only to facilitate the evaluation and pooling of loans, but also to introduce a common lexicon into the industry. The same Senate Banking report took great pains to explain the demarcation of a categorical break at FICO® 660. Freddie Mac's independent studies showed that this score corresponded to their existing standards, such that "borrowers possessing weak credit profiles [...] as FICO scores under 620" were found to be "18 times more likely to enter foreclosure than borrowers with FICO scores above 660" (Freddie Mac, 1996, Chapter 3 Looking Inside Loan Prospector). Given the GSE mandate to help and not hinder homeownership, 660 was intended to be a soft minimum score and not a firm cutoff, since the ultimate evaluation depended on the contribution of all of the other factors that could be weighted in through the larger mortgage scoring algorithm. In this regard statistical analysis made the distinction between acceptable and not acceptable less immediately clear to the system user (Standard & Poor's, 1999, p. 10). Nevertheless, FICO® 660 rapidly became a free standing benchmark of prime investment grade status, recognizable among underwriters, securitizing bodies, investors, regulators, and later (after 2001)36 to informed consumers as well.
30 In fact, this is precisely what Fannie Mae has done in an attempt to remove FICO® scores from their models when the issue of their non-transparency became a heated political issue (Quinn, 2000). Nevertheless, although Fannie Mae's Desktop Underwriter system no longer uses FICO® scores as part of its internal risk assessment of individual loans, lenders must still submit scores with loan applications. This strongly suggests that the scores are essential to the process of securitization, that is, to describing the quality of securities products to the secondary markets, even if they are not employed in the loan underwriting process. The incident confirms that there are many ways to adequately calculate consumer credit risk in mortgage origination, but only one calculation that allows buyers to compare the quality of Fannie's products to other offerings. This is essentially what it means to say that the FICO® constitutes a 'market device'.
31 This is true with a couple of caveats. Firstly, since the contents of the bureaus are not exactly the same, score calculations for an individual file vary between the three providers. Secondly, since the score shifts over time as new information is accumulated, it can change within the period of loan underwriting.
32 Freddie Mac Industry Letter from Michael K. Stamper, "The Predictive Power of Selected Credit Scores", July 11, 1995, as referenced in (Avery, Bostic, Calem & Canner, July, 1996).
33 Former Fair Isaac mortgage and bureau score specialist A, interview, September, 2006. A similar story is reported in (Dallas, 1996).
34 See footnote 28.
35 As mentioned earlier, FICO® scores are behavioural scores, which means that they fluctuate according to changing credit behaviour. They are not based on a fixed quality of the person such as 'character', even though they were cast as a substitute for this traditional quality of the person in loan underwriting.
The overall effect was that a 'prime lender' could now identify as catering to consumers with 660 FICO® scores and above. By default, anyone willing to develop products that catered to risk scores lower than a FICO® 660 would become a high-risk or 'subprime' lender.37
The ratings agencies adopt the scores
It is important to note that the demarcation of subprime lending by FICO® scores is a distinct moment from its amplification into a functioning financial circuit. The development of the subprime into a coherent network of mortgage finance in which securitization could take place was not a given. It would itself have to be materialized. To create a circuit of subprime finance would require a proliferation of specialized underwriting software equally grounded around and further reinforcing the use of the specific brand name credit scores elected and interpreted by the GSEs. If at any moment another solution for evaluating consumer risk had been incorporated into private software, then when faced with the consumer, lenders would have produced a series of disconnected risk assessments. While this situation would not have precluded the emergence of subprime finance, it would have demanded a patchwork of solutions to the problem of commensuration, which would have complicated the calculative picture and, much like the previous system of letter grades, considerably weakened the transferability of risk into the secondary markets.
The GSEs continued to play an active role in the project of statistical automation. Given the mortgage industry's growing appetite for the swiftness of automation (although not necessarily for statistical underwriting38), as well as the propensity of the industry to follow the government agencies' every lead, the effects of the new GSE systems would not stop at the borders of government sanctioned mortgage finance. Reports to government officials confirm that Freddie was eager "[t]o address lender demand for an automated underwriting service capable of evaluating loans in any mortgage market" and not only in the conventional, conforming one. Freddie soon "joined forces with [Standard and Poor's], a rating agency with significant experience evaluating subprime loans" (Freddie Mac, 1996, Chapter 5 Expanding Markets, Lowering Interest Rates Across Markets). Standard and Poor's (S&P) interest in Loan Prospector® was to test how this system for underwriting, a pre-packaged algorithm from their point of view, might further contribute to rating securities in a secondary market that had been interchangeably referred to as non-conforming or non-GSE.
Under manual underwriting, most forms of rating were done at the level of the portfolio (at the level of a lender's pool of loans). In the absence of automation and scores, the secondary market had learned to rely on indicators designed to describe the risk level of the aggregated pool, such as a calculation of the average interest rate (WAC39), or the geographical distribution of loans across regionally distinct housing markets. Until 1995, the description of the risk of each individual loan through underwriting was done with an entirely separate set of tools, metrics, and vocabulary from those used to describe a securitized pool of loans as a composite whole. In other words, before the introduction of commercial bureau scores, securitizing bodies "weren't used to looking at metrics that allowed you to drill so deeply into an individual consumer credit profile so effectively". Individualized consumer risk scores interpreted by the GSEs and funnelled through their automated underwriting systems introduced a substantially "different view than what [the ratings agencies, securities firms and bond issuers] were accustomed to evaluating".40
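To make the pool-level indicator mentioned above concrete, the following minimal sketch computes a weighted average coupon in the usual way, weighting each loan's stated interest rate (its coupon, see footnote 39) by its outstanding balance; the example loans are invented for illustration and do not come from any dataset cited in this paper.

```python
def weighted_average_coupon(loans):
    """Compute the WAC of a pool: each loan's coupon (stated interest rate)
    weighted by its outstanding balance."""
    total_balance = sum(balance for balance, _ in loans)
    return sum(balance * coupon for balance, coupon in loans) / total_balance

# Hypothetical pool: (outstanding balance, coupon rate)
pool = [(200_000, 0.065), (150_000, 0.080), (100_000, 0.0725)]
print(f"WAC = {weighted_average_coupon(pool):.4%}")  # prints roughly 7.17%
```

A figure of this kind describes the pool as a composite whole; it says nothing about the risk of any individual borrower, which is precisely the gap the individualized bureau scores came to fill.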
Work had to be done to educate each of the securitization and ratings agencies 'about how credit scores worked'. Once Fair Isaac caught wind of the direction of change, the scorecard makers actively went out and "urged them to use [bureau scores] as components in their analysis". Some securitizing bodies were harder to convince, but from Fair Isaac's standpoint S&P was an ally that 'got it right away'. Score-supported, statistically based underwriting programs began to flow into and merge with the rating phase of securitization. The rating agency regarded the result of these changes as positive in that "For the first time, a totally integrated risk management capability is available to loan originators, portfolio managers, investors, traders and regulators" (Raiter et al., 1997, p. 1). For S&P the implications of automated underwriting extended well beyond the moment of underwriting, because as their research would show, "the use of credit and mortgages scores is not limited to the origination process" (Raiter et al., 1997, p. 13). A 1997 S&P report on innovations in mortgage underwriting enthusiastically affirmed that in addition to rendering underwriting faster and more consistent, statistical automation could go one step further, giving rise "to the introduction of standardized risk grades" (Raiter et al., 1997, p. 13).
36 Brokers were quick to inform consumers whose loan applications were rejected that the 'reason' was the weakness of their FICO® scores. The discovery of the scores on the eve of the refinancing boom and housing bubble led to protests by consumer advocates, who argued that the scores should be released to the public. In 2000 the California State legislature ruled that consumers would have a right to be told their scores. Rather than risk further regulation, Fair Isaac conceded and hastily created a dot.com that made individuals' scores available to their subjects for a fee.
37 Beyond the distinction between prime and subprime, FICO® scores are considered basic descriptors in mortgage finance. In addition to front end pricing sheets, scores are a ubiquitous component in the representation of a firm's holdings in investor presentations, annual reports, and SEC 8-K as well as 10-K filings that fulfil the pool level disclosure reporting requirements of the SEC (1111(b)(11) of Regulation AB). Finally, they are being used by economists as an analytic tool for visualising and evaluating the trajectory of current events. For one example of this kind of work by Federal Reserve researchers see (Chomsisengphet & Pennington-Cross, 2006), which traces the 'evolution of the subprime mortgage market' by recording the volume of loan origination by score, but not the origins of the technical practices that sustain these increases.
38 In the mortgage industry the changes brought about by 'automation' are frequently conflated with the introduction of 'analytics' (statistical analysis) because these occurred simultaneously. As the paper has described, the automation of traditional rule-based underwriting could have occurred without the introduction of statistical underwriting. Automation favours the introduction of statistical analysis but does not determine it. There was a process of translation to bring automation and statistical underwriting together.
39 WAC refers to Weighted Average Coupon. The term coupon refers to the stated percentage rate of interest paid out to a security.
40 Former Fair Isaac mortgage and bureau score specialist B, personal communication, May 24th, 2007.
In sharing metrics for risk quantification, the primary and secondary markets were to be placed on the same calculative platform. A recent fact sheet for S&P's mortgage security rating system called LEVELS® (c1996) reflects the taken-for-granted nature of this change. The program is said to combine "the power of automation with Standard & Poor's time-tested ratings criteria to assess the credit risk of individual or pooled residential mortgage loans" (emphasis added).41 So while LEVELS® was developed to rate pools of securities, in a statistical regime it can equally be used to evaluate individual loans. This is, in fact, how LEVELS® was designed to work: it performs a loan-by-loan analysis as a means of assembling an investment quality asset pool (Raiter et al., 1997, p. 28). Through a common use of FICO® scores the calculative field could be vertically integrated,42 even though the chain of institutional intermediaries between borrowers and lenders (brokers, lenders, ratings agencies, underwriting systems, investors and so on) remained populated by heterogeneous and diverse economic agents. If access to a rich source of mortgage data was secured, and then supplemented by commercially accessible consumer risk scores, a system of risk estimation could be devised that held its meaning as products moved fluidly from the level of individual loans up into that of aggregated asset pools.
Several competing automated statistical underwriting tools were soon in the works beyond the GSE models.43 While using Freddie's Loan Prospector® or Fannie's Desktop Underwriter® would facilitate the sale of loans to one of the GSEs, distinctive models built off of data from non-conforming, non-conventional loan specialists became available on the commercial market for automated systems, or simply for use in-house. Even though the valuation made of individual mortgages at the moment of underwriting could be methodologically aligned with the valuation of the asset pool (not to mention the calculation of mortgage insurance), the existence of separate, competing systems to carry out this work for non-GSE destined loans impeded horizontal market integration. Outside of the GSE controlled market, it was open season on innovation. The hardwiring of other brands of bureau scores, or at the very least, other interpretations of the FICO®, became distinct possibilities. Private label securitization tools cropping up all over – each based on proprietary databases, built by in-house analytics teams, with preferences for certain statistical methods, a unique take on variables, and a distinctive statistical savoir faire – could be expected to produce a diverse set of algorithms and therefore a different set of risk calculations.
Controlling the problems that flourishing calculative diversity posed was S&P's business. As a certifying body, a calculating expert and a gateway to the secondary markets, it initiated a service to validate underwriting systems. For system developers willing to submit their software creations to external evaluation, an initial development review was "intended to validate the soundness and statistical validity of the process used to build the predictive system". Once the data used to develop the system was received from the vendor, S&P would perform "a series of statistical analyses that determine how well the system measures risk relative to actual loan performance, what key predictive variables have the most influence on the system's score, and finally the observed default rates associated with various scores." At its most basic level, validation checked the internal soundness of models. With regard to solving the problem of horizontal coordination, however, these results were "then compared with those of other automated underwriting systems and discussed with the issuer" (Raiter et al., 1997, pp. 3–12). Acting to produce coordination in financial markets, S&P aligned the risk outcomes of various models, by imposing definitions or by modifying the factors they took into account.44
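As a purely illustrative aid to the validation step quoted above, the sketch below computes observed default rates by score band from a hypothetical sample of loan outcomes; the band edges and the data are invented for exposition, and the snippet is not a reconstruction of S&P's actual procedure.

```python
from collections import defaultdict

# Hypothetical (score, defaulted) observations standing in for a vendor's sample file.
loans = [(705, False), (688, False), (655, True), (640, False),
         (610, True), (598, True), (672, False), (630, True)]

BANDS = [(680, "680+"), (660, "660-679"), (620, "620-659"), (0, "<620")]

def observed_default_rates(observations):
    """Group loans into score bands and report the share that defaulted in each."""
    counts = defaultdict(lambda: [0, 0])          # band -> [defaults, total]
    for score, defaulted in observations:
        band = next(label for floor, label in BANDS if score >= floor)
        counts[band][0] += int(defaulted)
        counts[band][1] += 1
    return {band: defaults / total for band, (defaults, total) in counts.items()}

print(observed_default_rates(loans))
```

Comparing tables of this kind across competing systems is one concrete way a certifying body could check whether different vendors' scores rank risk in a mutually commensurable way.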
Because FICO® was a standard ranking criterion that S&P itself used to test the soundness of an underwriting model, this effectively put pressure on vendors to include FICO® scores in their models. This was not merely a suggestion. A key incentive to adopt FICO® then was that pools of loans tagged with an S&P validated 'mortgage score' could be more easily rated for securitization by S&P's proprietary securities rating system. As a final part of validation S&P offered to "calibrate each system against a model portfolio of credit reports and mortgage application information to facilitate use of scores by Standard & Poor's LEVELS™ [sic] model" (Raiter et al., 1997, p. 9). In 1998, "only 50% of Prime [...] and 30% of Non-prime mortgages incorporated a credit score in their underwriting data file" (Raiter & Parisi, 2004). By 2003 this had increased to virtually 100%. What is more, a 1999 document to update the industry on the methods of rating in a post-automation era crisply announced that having "reviewed the guidelines established by Fannie Mae and Freddie Mac", S&P would endorse "similar guidelines for selecting FICO scores included with new loans submitted for rating" (Standard & Poor's, 1999, p. 14).
41 S&P 2006 Product Fact Sheet: LEVELS®, Loan Evaluation and Estimates Loss System. The document is available at http://www2.standardandpoors.com/spf/pdf/fixedincome/LEVELS2006.pdf.
42 Vertical integration refers to the ability to communicate the quality of the loans with calculative continuity as they are converted from single mortgages, into pools of paper, and on into securities. At every stage in the chain of transfer FICO® plays a role in calculation even though the content of what the actors are calculating (whether to grant a mortgage, how to price a pool of loans, whether to invest in a security. . .) is different. Vertical integration constitutes a chain of production. This is distinct from horizontal integration (see next paragraph, main text), which denotes the ability to compare the financial products originating from different competitive producers.
43 Examples of early subprime underwriting systems included 'CLUES' (Countrywide's Loan Underwriting Expert System). Countrywide Financial was one of the top 10 subprime lenders in the US, which flourished and then declined with the collapse of the recent real estate bubble. There were several other systems produced by mortgage insurance firms, such as GE Capital's 'Mortgage Insurance Omniscore', and Mortgage Guaranty Insurance Corporation's plainly named 'Mortgage Score'.
44 This document explained that the process of validation and testing would begin when S&P received "a sample data file of a pool of approximately 10,000–15,000 loans randomly selected over three years of origination" (sent in Salomon 400 data format). In addition, they required "1,000 bad loans specifically selected to augment this randomly selected group". The process of validation required a commitment to a deep fix. The document emphasized that "for a system to enjoy validation benefits, Standard & Poor's requires the vendor to agree contractually not to make any modifications to its system without first notifying Standard & Poor's and to provide Standard & Poor's with sufficient information to determine the impact of such modification" (Raiter et al., 1997, p. 10). The system would be re-validated by S&P semi-annually with fresh data, on a continuous basis.
So once S&P had implemented credit bureau scores as "an integral factor in our underwriting review", validation and rating gave S&P the opportunity not only to push the FICO® scores, but also to transmit the specific interpretations of them that it had absorbed from its earlier collaboration with the GSEs.
The Freddie Mac–S&P connection was not the only means through which the FICO® scores have been extended beyond the GSE market. The FICO® scores had already generated a lot of momentum following their implementation at the GSEs, and S&P would admit it was in large part "[d]ue to the overwhelming utilization of credit scores" seeping into the industry that it became "a factor in our current credit risk analysis" (Standard & Poor's, 1999, p. 20). The point of this account has been simply to demonstrate one channel through which bureau score-supported underwriting passed out of the GSE market into the non-GSE market. The S&P endorsement had specific consequences in opening up an alternative passage point to securitization that piggy-backed on GSE risk management practices, but moved them into alternative software systems, outside of the government sanctioned market and of GSE control. Within a proliferation of underwriting programs, algorithms, mortgage scores, ratings agencies, and lenders, for all practical intents and purposes there are, in the mortgage industry, two independently functioning circuits of mortgage finance: the government sanctioned prime and the private label subprime. What divides them are information systems, their regard for risk, and product development; what unites them is a common reliance on, and baseline interpretation of, FICO® scores (see Fig. 1).
The calculative shift from screening to risk
The difficulty of precisely evaluating individual mortgage quality – that is, of stating credit risk as a firm expression transferable across domains – is the reason why, for half a century, there was only weak investment activity outside of a slow and steady, federally chartered prime investment market. The government sponsored agencies were a quasi-obligatory passage point to the production and sale of investment quality residential mortgage backed securities because they were the only institutions in a position to certify the quality of loans and securities. Held together by these institutions in their active role of building and implementing sets of guidelines as market devices, this non-quantitative but nonetheless calculative arrangement (Callon & Muniesa, 2002) worked to stabilize the quality of securities and to produce a steady secondary market. It was on the authority of the institutions' guidelines, their initiatives in interface design, as well as their dirty, hands-on involvement as a driver of RMBS production that the market was made. This paper has described how the market coordination provided by the institutionally made and managed guidelines (rule-based market devices) was supplanted by the coordination provided by commercial consumer scores (statistical market devices). What remains to be shown is the mechanism through which this created an avalanche of subprime securities investment.
Fig. 1. Summary of the overall argument. (The dotted lines indicate moments of interpretive flexibility in underwriting practices.)
The GSEs' guidelines embodied traditional credit production practices in which lending was reserved for arrangements where borrowers could be considered 'creditworthy', and all cases that failed to make this standard were rejected. As ethnographic studies have shown, however, establishing creditworthiness under traditional lending was subject to subtle negotiations in which numerous forms of justification could come into play (Wissler, 1989). What was considered 'manipulation' of the RMCR reports by brokers grew out of the permissiveness of this type of practice. Such activities were able to occur because the definition of creditworthiness, even when filtered through rules and guidelines, was being flexibly assembled in the moment of loan production rather than being taken from fixed criteria. It is precisely this aspect of traditional consumer lending that demanded the stabilizing force of the GSEs in quality assessment. Nevertheless, despite its local and practical multiplicity, in the practice of control-by-screening lenders tended to act as though they faced two (and only two) kinds of people – those who deserved to be worked with and those who did not. The credit manager's mandate was to minimize risk by distinguishing as clearly as possible between these binary groups.
Empirically derived credit scoring techniques have created a new kind of consumer whose calculability defies conventional assumptions about the binary nature of creditworthiness. Individuals viewed through statistics no longer need to be classified as either 'in' or 'out' of the market. Armed with a gradated sliding scale, people all along a spectrum of risk can be offered specially designed products at alternative terms and prices. There is nothing that precludes the scale from being used conservatively to screen for high quality borrowers, as the GSEs clearly intended.45 But once in place, the score scale is a generator of calculative possibility. It became a platform for creative design work that brought lines of risk calibrated products, both mortgages and securities, into existence. The introduction of a numerical scale of consumer credit quality into mortgage origination permitted calculative actions that were simply unanticipated from within the conventional frameworks of the GSEs. This is how control-by-screening was concretely edged out in the non-GSE circuit by the productivity of credit control-by-risk, whose characteristic is to act at the level of population, harnessing a variety of credit qualities through a proliferation of financial goods.
In both screening and risk forms of lending there is elasticity in credit arrangements, a multiplicity of configurations under which lending can occur. The first tends to create loan paper on a case-by-case basis, while the second distributes a variety of standardized products to market segments. Although they achieve this fit in different ways, in both types of credit practice the terms and the property type must be appropriately matched to the borrower in order to make the loan. The difference that is most relevant to this paper, however, is as follows: once 'creditworthiness' is expressed through a statistical scale of gradated risk, a loan can be arranged for people who are of low credit quality; that is, for those who would not be considered particularly 'creditworthy' from a screening point of view. Screening is a risk minimizing strategy; statistical lending is a risk management strategy, that is, one that embraces risk (Baker & Simon, 2002). It is this displacement, the result of an innovative fusion between FICO® and the ratings agencies, that catapulted the 'subprime' from a specialized low profile area of non-conforming lending into a burgeoning financial market. It is through the rise of this risk management apparatus that subprime loans escaped the books to become the raw materials for mass produced financial products destined for mainstream consumption.
That the subprime has developed as a distinct financial space, yet one positioned with a high degree of congruence to the prime, is an historical phenomenon produced by the particularities of the commercial technology whose history has been presented here. Private label sources did not invade GSE territory. Instead, by borrowing but modifying the GSEs' very own market making tool kits, they have built their endeavour up beside it. Specialized lenders can and do underwrite conventional loans to prime eligible individuals,46 yet they have clearly preferred to exploit more lucrative subprime lending opportunities. So although the existence of information that provides an incremental and linear ranking of risk could theoretically have given rise to a confluent market space, open to an infinite variety of competitive decisions on how to segment the mortgage market, what we find instead is the entrenchment of a fairly tangible break. The binary partition is the conservative imprint of the GSEs upon the FICO® technology for the purposes of screening for prime market candidates. Once the institutional benchmark for how the scores should be used was hardwired into the material infrastructure of underwriting and rating software, it ran deeply enough in the infrastructure to cleave the lending space in two.
These spaces are distinguished by their distinctive risk management practices. While the GSEs have tended to stick to their 'plain vanilla' prime market loans after the adoption of bureau scores, a new breed of lending outfits continued to work with the scores to innovate techniques of granular risk-based pricing with hundreds of potential price levels (Collins, Belsky, & Case, 2004). In 1996, a full ten years before the onset of the contemporary credit crisis, First Franklin Financial47 CEO and co-founder, William D. Dallas, published an ambitious article entitled 'A time of innovation' in the trade journal Mortgage Banker.
45 It is not incidental that the original Fair, Isaac scorecard was actually designed to do control-by-screening. In its original conception the flexibility of the scoring tool acted as a switch that allowed credit managers at finance companies to adjust the risk level at which floor personnel were screening.
46 As the crisis has unfolded, the consumer fairness issue is to assess when a subprime loan is justified and when it is predatory. Many prime eligible borrowers did take out subprime loan products during the bubble. While consumers in disadvantaged areas may have done so because they had greater geographical access to subprime lenders, others were attracted to these loans by their lower monthly payment schemes, which could be advantageous especially when making multiple property investments. Treasury Secretary Paulson's proposed plan (unveiled in December 2007), which included freezing interest rates on adjustable rate mortgages, but only for individuals with credit scores of 660 or less, is a perfect example of how FICO® scores are being redeployed to refine and justify the distinctions between prime and subprime treatment through ongoing policy intervention. Such decisions reduce ambiguities in the definition of subprime by strictly aligning a category of loan products with a category of borrowers. The consequences of this on market mobility have yet to be discussed.
He stated that it was clearly "unsuitable for lenders to sell what is truly a subprime loan (loans that fail to meet secondary market agency standards) to the secondary market corporations" (Dallas, 1996). Having engaged with Fair Isaac, Freddie Mac, and Standard & Poor's, he enthusiastically predicted the growth of a subprime business, arguing that "there are much higher margins and reduced risk when you properly price a subprime loan instead of mispricing it and jamming it into the prime pipeline" (Dallas, 1996). First Franklin's slogan – 'Score it, price it, close it' – captures the élan of score infused private label finance. With FICO® poised to act as a vertical bridge between the primary and secondary markets, it was a short step from systematically originating subprime mortgage loans, to moving these up through the ratings agencies and into investment portfolios.
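To illustrate what granular, score-driven risk-based pricing amounts to in practice, the sketch below implements a toy "rate sheet" that adds coupon premiums by credit score band and loan-to-value ratio. The bands, add-ons, and base rate are invented for exposition and are not taken from First Franklin or any lender discussed in this paper; the point is only the contrast with a single rate offered to all qualifying prime borrowers.

```python
# Illustrative only: a toy rate sheet in the spirit of score-driven risk-based pricing.
BASE_RATE = 0.065                      # hypothetical prime benchmark coupon
FICO_ADJUSTMENTS = [                   # (minimum score, rate add-on)
    (660, 0.000),                      # prime territory: no add-on
    (620, 0.015),
    (580, 0.030),
    (0,   0.045),                      # deep subprime: largest add-on
]

def price_loan(fico: int, ltv: float) -> float:
    """Return a coupon rate: base rate plus add-ons for credit score and high LTV."""
    score_add_on = next(add for floor, add in FICO_ADJUSTMENTS if fico >= floor)
    ltv_add_on = 0.005 if ltv > 0.90 else 0.0
    return BASE_RATE + score_add_on + ltv_add_on

print(f"{price_loan(700, 0.80):.3%}")  # prime borrower: 6.500%
print(f"{price_loan(600, 0.95):.3%}")  # subprime, high LTV: 10.000%
```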
A 2004 Joint Center for Housing Studies (Harvard University) working paper, by two employees of Standard & Poor's, has provided evidence that this separation is empirically meaningful (Raiter & Parisi, 2004). Examining the relationship between FICO® scores and mortgage coupons (interest rates) from data in S&P's proprietary database of 9.3 million residential mortgages, the study concluded that rational risk based pricing had become more refined and more expansive since 1998. By 'rational' they meant that the interest rate of the loans increased as the FICO® scores decreased, but also that "the coupon rate charged on the loan at origination [...] translated into true dollar costs over the life of the loan" (p. 20). What is perhaps more significant is the finding that "risk-based pricing is more efficiently applied in the nonprime arena" (emphasis added), which implied that "lenders are more concerned about accurately pricing default risk in a market segment that is perceived to be of higher risk than in the prime" (p. 18). Vaunting the qualities of LEVELS®, the paper drew attention to the fact that while the GSEs "would provide one mortgage rate for all borrowers qualifying for a particular product, originators in the non-conforming market could provide a range of coupons dependent on their ability to stratify risk" (p. 6).48
These alternative dynamics of subprime lending are now taken to be matters-of-fact in the banking industry. The FRB's Commercial Bank Examination Manual and the Bank Holding Company Supervision Manual both observe that a FICO of 660 is the reported industry benchmark for the subprime (consumer credit and mortgages), although they are careful to indicate that the government guidance does not endorse any "single definitive cutoff point for subprime lending" (Federal Reserve Board, 1994a, p. 11; Federal Reserve Board, 1994b, p. 11).49 The Commercial Manual goes on to frankly state that "Subprime loans command higher interest rates and loan fees than those offered to standard-risk borrowers" (Federal Reserve Board, 1994b, p. 2). As long as lenders charge prices that are high enough to cover the higher loan-loss rates and overhead costs associated with this business, the subprime can be expected to be profitable. Moreover, this manual points out that "The ability to securitize and sell subprime portfolios at a profit while retaining the servicing rights makes subprime lending attractive to a larger number of institutions, further increasing the number of subprime lenders and loans" (p. 2). Indeed, under contemporary conditions, whose achievement has been traced in this paper, it does not seem at all astounding that securitization would be infinitely more attractive in the subprime than in the prime.
The GSE paradox50
During the housing bubble the mortgage market grew due to a proliferation of lending that did not meet the Agencies' risk management criteria, because the GSEs' 'non-accepts' – the very loan categories they eliminated and deemed hors du marché (outside the market) – became private-label's main market. The result is a startling paradox: "Fannie and Freddie have become the opposite of what they were. They are now lenders to safe markets, while private institutions serve markets that were once liquidity-deprived" (Thornbert, 2007). So although the GSEs exist to better attract capital to the market, as Richard Syron, Chairman and CEO of Freddie Mac, pointed out to the recent Congressional Committee on Financial Services inquiry, today, "Numerous investors compete vigorously for mortgage assets" (Syron, 2007). The record of Syron's testimony indicates that mortgage risk is, in all actuality, widely dispersed among many investors. For the duration of the housing boom, it was investment capital that generously funded a proliferation of mortgage options and attended to the very groups that are arguably most in need of ownership assistance according to the mandate of the GSEs.
47 1st Franklin Financial Corporation operates in Georgia, Alabama, South Carolina, Mississippi and Louisiana. In the heart of the real estate bubble First Franklin was bought by Merrill Lynch as a 'subprime specialist' for $1.3 billion, an acquisition that would weigh heavily on the firm only a year later as the market collapsed (Keoun, 2007).
48 It is perhaps not incidental that the first author, Frank Raiter, was, as managing director of the Residential Mortgage Group within S&P's Structured Finance unit, a key advocate of credit scoring during the period of industry automation. The study discussed here is perhaps somewhat tautological in that it uses FICO® scores to show that the market is rational, when it is arguably the rationale of the FICO® that has made the market able to perform this rationality.
49 The FRB examination manuals provide guidance to supervisory personnel in planning and conducting bank inspections, although they are not legal documents.
50 The account of GSE involvement in the crisis has taken a dramatic turn since these institutions were placed under conservatorship (see footnote 16). The paradox described here has been somewhat overshadowed. Propagated by the publicity of federal hearings ("Federal Responses to Market Turmoil", Committee on the Budget, US House of Representatives, September 24, 2008), and heightened by the drama of electoral politics, new arguments emerged charging that it was in fact the GSEs that underwrote the mass of subprime loans at the root of the default crisis. These claims place fault squarely on Democrats for resisting greater oversight during the 2004 Congressional Hearings into accounting practices at Fannie Mae, as well as for supporting policies that encouraged GSE involvement in the project of affordable housing. It is important to note, however, that Fannie Mae's direct involvement in underwriting subprime lending began late in the game, in 2006, as an effort to stem the erosion of their market share (Hilzenrath, 2008). Caught up in the dynamics of the new market configuration looping back upon them, it is arguably at this moment that the GSEs absorbed some forms of control-by-risk developed in the parallel subprime markets.
The central observation is that "the issuers of private-label residential MBS are holding the aces that were once held by government-sponsored enterprises (GSEs), Fannie Mae and Freddie Mac" (England, 2006). "Once a junior – but powerful – player in the market, private-label residential mortgage backed securities (RMBS) are now the leading force driving product innovation and the net overall volume of mortgage origination" (England, 2006). As the composition of loan originations moves towards non-standard products and as the secondary market attracts less risk restricted firms willing to fund those loans, "[the GSEs'] share of US residential mortgage debt outstanding (MDO) has dropped significantly, while the MDO share for competing investors has grown dramatically" (Syron, 2007, p. 30).51 Freddie Mac and Fannie Mae continue to be "large forces in the mortgage market", but it is becoming widely recognized that they are playing "a small and diminishing role in the subprime business as large Wall Street institutions and hedge funds have become more active" (Bajaj, 2007).52
Some recent production figures from the heart of the real estate boom drive home the magnitude and acceleration of these changes. By 2003, private-label accounted for 24 percent, or some $586 billion, of RMBSs. At that time, most of the loans involved were 'jumbo prime mortgages', that is, mortgages considered to involve a low credit risk but whose size would exceed the purchasing limits imposed on the GSEs in their charters. In only the first two quarters of 2006, however, private-label issuance had grown to nearly the same amount as in all of 2003 – to $577 billion – and their percentage share of the market had leapt up to 57 percent.53 What is even more striking is how these figures are distributed by type of market or market segment. While the issuance showed a healthy increase from $57 billion (Q1-03) to $67 billion (Q1-06) in the prime segment, it had more than tripled – from $37 billion (Q1-03) to $114 billion (Q1-06) – in the 'subprime'. It has further been reported that in 2003, "62 percent of originations were conventional, conforming loans underwritten to GSE guidelines. By contrast, in the first half of 2006, only 35 percent of mortgages were conventional, conforming loans" (England, 2006).
In the last decade private capital has been tripping over itself – or so it appeared – to become a handmaiden to the American Dream. The subprime collapse has turned the tables back again, and the GSEs are now taking a sound scolding from their masters in Congress for having left vulnerable populations, the very groups most in need of temperate government assistance, to the Wall Street wolves. In its defense, Syron has diplomatically pointed out that "Freddie Mac's business is confined to the residential mortgage market – in good times and bad. We can't diminish our support for this market when there are more profitable investments to be had elsewhere". Unlike private equity funds, hedge funds, and non-bank financial institutions, the GSEs need to maintain more conservative portfolios because they have a "statutory requirement to provide liquidity to the nation's mortgage market" (Syron, 2007). Perhaps the final blow of irony is that as the crisis began, the GSEs themselves were caught holding some $170 billion in private-label subprime securities,54 products which they would never have underwritten themselves. Like so many others, they had purchased these as investments because they were triple-A rated by the ratings agencies.55
Discussion: Market devices as agents of change
It seems to make obvious sense today that lenders should be moving all kinds of loans into the capital markets. High-risk loans flying off the books – this is indeed, as Ben Bernanke has put it, a 'great sea change' from the days when the GSEs were the chartered institutions necessary to facilitate mortgage finance in a risk minimizing fashion. Rather than taking simplified dynamics of 'supply and demand' or 'risk versus return' as naturalized backdrops of this type of change, this paper has proposed that we take the practical configuration of these economic principles in distributed material devices as an object of investigation. Instead of searching for accelerations of financial activity in the ideas and motivations of market participants, this means examining the moments when the material content of industry practices has changed. These changes have generated novel pathways of microeconomic market participation which have gradually become amplified, through continuous ongoing innovation, into macroeconomic circuits of capital flow.

Adapting tools from science and technology studies and the social studies of accounting to the social study of finance, this paper has presented a calculatively sensitive account of the origins of the subprime mortgage market. It has traced the movement of commercial consumer credit analytics into mortgage underwriting as a means of demonstrating that what might look like the spontaneous rise of a 'free' capital market divested of direct government intervention has been thoroughly embedded in the concerted movement of technological apparatuses. When dealing with the recent breakdown of this financial circuit, the approach replaces 'transgressions of economic common sense'56 with the 'generative calculative practices of economic agency' (Callon & Law, 2003; Preda, 2006; Rose & Miller, 1992).
51 Document available online at http://www.freddiemac.com/corporate/about/policy/pdf/syron3-15-07finaltestimonypdf.pdf.
52 In addition to facing new sources of competition, the GSEs have been besieged by harsh accusations of 'creative bookkeeping'. In response to these affairs, H.R. 1461, the Federal Housing Reform Act of 2005, was passed on October 26, 2005. The Federal Housing Reform Act of 2007, introduced March 9 (H.R. 1427), was being debated at the time of writing. Bills have included provisions to force the Agencies to raise capital reserves and to divert funds towards affordable housing in high-risk groups. Although the bill does not specify how high-risk will be assessed, an educated guess is that this will be determined at least in part by the participation of FICO® scores or some other bureau tool. The potential repercussions of these and other capital requirements for the agencies' hold on even the prime market are clearly discussed in (Frame & White, 2007).
53 Source: Inside Mortgage Finance Database, reported in (England, 2006).
54 This figure was reported in an Office of Federal Housing Enterprise Oversight (OFHEO) news release available online at: http://www.ofheo.gov.
55 A statement to this effect was made by James B. Lockhart III, director of the OFHEO, at the Federal Reserve Bank of Chicago's 44th Annual Conference on Bank Structure & Competition, luncheon address, May 16, 2008 (author's field notes). The increasingly complex relationship between the GSEs and the ratings agencies manifests itself in numerous ways, as indicated earlier in footnote 21.
In this view, financial phenomena are no longer categorised as the results of correctness or falsity, of rationality or irrationality, so much as they are analysed symmetrically according to how financial activities are framed, constituted and brought into being – until, as it may happen, their own internal consistency brings them to the point of overflow and collapse (MacKenzie, 2006).

FICO® scores can therefore be said to have reconfigured mortgage markets, putting into place a space of potential high-risk investment action. The intriguing plot twist is that these scores were introduced into the mortgage industry by risk-adverse government agencies. When the GSEs adopted the FICO® they interpreted the scores conservatively, assuming they could be used to reinforce the binary spirit of the traditional form of credit control-by-screening. But because the tool had inscribed within it the possibility of making financially meaningful risk management calculations, it enabled the rise of a new form of financial activity: credit control-by-risk. As FICO® scores were hardwired across a number of independent information processing infrastructures they aligned the calculative activities of distinct groups of actors. The new control was not exploited uniformly; it proliferated outside of the government facilitated market through developments in private automated loan evaluation software, giving rise to a vibrant and invested subprime.
What the exercise of tracking shows is that the scores have not achieved these effects abstractly or from a distance. Shifting from one form of market calculation to another requires a gradual and continuous process of material extension in which scores have travelled long distances, lodged themselves in many places, and participated in traceable processes. Thus it is not quantification, model building, or numerical expression as information per se that should be linked to increased channels for high-risk investment in the mortgage industry. Nor can responsibility for the changes be flatly pinned on the GSEs for having adopted the scores in the first place. It is the pioneering journey of FICO® scores throughout the industry that has integrated, assembled, and aligned different market agents. The integrity of the chain – which might have been truncated at any point along its length had an alternative solution or even another interpretation of these scores been adopted – is what has rendered these diverse agents capable of engaging together in a distinctive and coherent, globe spanning circuit of productive subprime real estate finance.
This is not a story of technological diffusion, because continuous distribution, adaptation, discovery and innovation have mattered. The scores did not diffuse unhindered, but passed through and were adjusted at several institutional passage points. Nor is it a story about technology selection, where a technical method is purposively promoted by overtly politicized actors because it coheres to the needs of a greater movement or political program (for this kind of account see (Burchell, Clubb, & Hopwood, 1985)). Instead, the political outcomes of this case (broadly speaking) have unfolded within the messy and uncertain process of constituting the scores as appropriate tools for mortgage finance. Political change results from the multiple local movements that remake the technology into a market device. In this story a risk management apparatus becomes in and of itself the diffused principle of coordination between groups with different interests and objectives. This is why an overarching or driving 'discourse' or preexisting 'rationality' is notably absent – because actors who are not discursively aligned at the outset end up being organized through shared risk management practices.57
A technological platform for common calculation can be the carrier of profound political displacement and of astounding economic change. But since statistical solutions are naturally multiple, the achievement of such a platform has to be taken as an analytic puzzle, not as a causal force. Because calculation is a form of modelling that simplifies and disambiguates through a process of abstraction (Rosenblueth & Wiener, 1945), calculative problems can be framed in multiple ways, and calculative solutions are constantly threatened by the introduction of alternative possibilities (Callon, 1998b; Callon & Law, 2003). From Fair Isaac's point of view, the impetus for selecting their product across the board is its scientific superiority within a competitive market for scores. Yet as we have seen, the constitution of this staying power is deeply entangled with the activities of government and ratings agencies, whose endorsements, independent research initiatives, interpretations and automated systems greatly contributed to re-qualifying and singularising (Callon et al., 2002) this particular brand of consumer risk scores such that it became a calculatively effective risk management product for the mortgage underwriting situation. Once this calculative tool was stabilized in and as infrastructure, it intensified and generated downstream complexity;[58] it made an alternative form of co-ordinated and coherent collective decision making possible.
As devices, both the GSEs' exclusionary rulebooks and rank-bearing FICO® scores have proved workable solutions to the problem of rendering financial action possible. What is remarkable is that in achieving their objectives through the assembly of different tools, methods and organizational arrangements, each one assembles mortgage markets with distinctive qualities of financial action. Agency guidelines are one distributed market making device whose way of achieving common calculability is actively reinforced by the GSEs, which take responsibility for checking behind the application of guidelines and which reassure investors by taking up the central position in the securitization process. Moreover, the GSEs have a direct stake in financial outcomes as holders of their own as well as private label securities. Knotted together in this way, GSE devices have performed a concentrated low-risk mortgage market with a limited set of explicitly and implicitly guaranteed investment products.

[56] The reference to 'transgressions' points both to the errors attributed to having followed economic ideas too closely and to those said to result from overriding a naturalized economics.

[57] The insightful observation that accounting systems can participate in the creation of their own organizational contexts is discussed in Hopwood (1983).

[58] A topic entirely omitted in this paper, but crucial to the unfolding of the eventual subprime-induced crash, is the rise of structured finance: credit-enhanced securities designed with what are called 'senior subordinated structures'. These investment vehicles are built with tiers of mutually insuring, differentially graded tranches that layer risk unequally at different rates of return in the design of the product. In the crisis it was the junior classes of these products, held by hedge funds, that degraded first, as they are built to do, but not as rapidly as they did. That the single-class pass-through gave way to these structured securities after 1997 (Adelson, 2004) strongly suggests that the adoption of commercial credit scores played a role in the advancement of structuring. This paper touches upon only the immediate innovation that followed behind the introduction of FICO® scores.
The shift towards circulating credit bureau FICO® scores, on the other hand, has performed high-risk markets with differentiated and structured products. Like the GSE guidelines, commercial bureau scores are also constituted by institutional arrangements (Poon, 2007). Yet, unlike the guidelines, whose efficacy is intimately tied to their ongoing association with the authority of the GSEs, bureau scores enter exchange activity as detached pieces of scientific calculation that circulate independently of their makers. Through the commercial transactions in which they are bought and sold, scores are emancipated from the conditions of their own production, an effect that contributes to their very appeal (Latour, 1987). The result is a curious distinction: despite the fact that distributed market devices play a crucial role in generating the qualities of both circuits of mortgage finance, prime lending, facilitated by the visible hand of accountable government sponsored enterprises, is considered to be 'regulated' or 'managed', while subprime lending, sustained by the invisible hand of economic information, is described as the culmination of independent decision making by economic agents who are 'dispersed' and 'free'.
Conclusion
This research is part of a broader project that seeks to draw attention to the introduction of default risk, established through new calculative apparatuses, in changing the nature of US consumer finance.[59] By engaging with empirical details of how risk management tools are transmitted on the ground, the work emphasizes that shared forms of calculation do not arise spontaneously but must be established progressively through their insertion into local practices. Some may find it a strange conclusion, but the consequence of this observation is as follows: inherently superior qualities are not necessarily what allow some calculations to rise above the many other solutions to the problem of assessing risk. It is the idiosyncratic process of being reworked and implemented which might enable specific calculations to acquire a unique positioning that renders them effective agents of collective financial action.
In the case discussed here, the infrastructural qualities of FICO® scores in mortgage finance were engineered through successive movement and translation as they spread across the industry. It is important to remember that at the outset, credit bureau scores were considered a sub-optimal, if not inappropriate, tool for mortgage underwriting by scoring experts at Fair Isaac. Nonetheless they were a convenient solution to the problem of controlling credit quality, one that was perhaps cheaper and faster to implement than doing R&D. Adopted by Freddie Mac, commercially available credit scores entered into the mortgage industry to do a humble job of reinforcing extant practices of control-by-screening. The distinctive mark of 660 is a testimony to these limited intentions. Subsequently taken up by Fannie Mae, FICO® became part of a united GSE solution to evaluating credit quality. Scores were hardwired into proprietary automated underwriting software and rapidly became a recognized piece of loan-making machinery. Facilitated, for example, by an enthusiastic partnership between Freddie Mac and S&P, FICO® was also hardwired into private automated underwriting software. In both financial circuits bureau scores smoothed out production. They provided vertical integration by allowing the quality of single loans and pools of loans to be expressed by the same risk metric. They also provided horizontal integration in that investors could now use descriptions in terms of FICO® to compare the value of complexly constructed securities.
The scores bubbled with generative capacities, providing fresh material for financial innovation as they propagated throughout the industry. In an empirical demonstration that the qualities of calculation are not deterministic, but must be acted upon and developed, the paper further describes how this potential was taken up differentially by the GSE and private label players. In the hands of the GSEs, statistical scores continued to be used as a conservative screening device for selecting prime quality loans; in the hands of private label, however, they were used to develop risk-managed products that exploited the newly risk-quantified space of non-GSE lending. Coordinated by FICO®, a new regime of control-by-risk emerged. As exotic mortgage products and increasingly structured securities proliferated, the 'non-prime', by definition excluded from investment, was transformed into 'the subprime', a place of elevated return on investment. In the subprime, an alternative circuit of mortgage production supported by the rise of direct retail channels to consumers[60] and by the bond rating agencies, capital players could now circumvent authoritative government sponsored apparatuses. They calculatingly poured money directly into asset-backed paper based on consumer real estate.
As investment capital flooded into housing, it crashed into two pillars: the fabled American Dream of homeownership, and the reputation of real estate as a safe and stable sector. These golden images, forged in the days when the GSEs' rule-based market making apparatus dominated mortgage finance, carried over untarnished even as information infrastructure was changing the nature of the lending industry under everyone's feet. Given that the mandate of the GSEs was to facilitate home ownership, it should come as no surprise that the success of subprime was initially heralded as a solution to the problem of affordable housing. The tensions that make democratic lending a puzzle in a regime of control-by-screening seemed to dissolve away in a regime of control-by-risk. Yet what was overlooked was that the transition from low-risk exclusionary to high-risk inclusionary lending practices had transformed the very nature of homeownership. It intensified competition and raised property prices by equipping more home buyers across the nation with immediate purchasing capability. Moreover, it demanded that everyday people, faced with complex choice sets, exercise degrees of financial judgment that had heretofore not been required of them.

[59] It is noteworthy that the UK's commercial bureau scoring system is the most similar to that of the US, largely due to the influence of Fair Isaac.

[60] As of October 2008, twenty-two of the thirty largest specialized subprime operations had been shut down, gone bankrupt, or been seized by the FDIC (e.g. IndyMac and WaMu). It is noteworthy that other casualties of subprime involvement, such as Bear Stearns, were too small as players to make this list.
Readers searching for a smoking gun will no doubt find this account of the origins of subprime finance tremendously disappointing. It is admittedly counterintuitive to consider the onset of crisis from anything but the perspective of fault or error. But although it may be true, to take one example, that lax income statements ran rampant in the subprime business, it could also be expected that the age-old tactics of brokers would take on a renewed fervour as lending boomed. Misstated income is not new; what is new are the infrastructural conditions under which these misstatements have occurred. To belabour the point of underwriting error is to forget that the rationale of statistical automation was to minimize and overcome the virulence of precisely this kind of well-recognized ground level activity. A provocative hypothesis would be that such error could be expected to proceed unchecked, and to increase exactly as it ostensibly did, once muted at the systemic level.[61] In a world where multiple calculations and multiple frames of meaning are possible, what is an error at one moment can quickly become a non-error by the criteria of another, and vice versa. It is only by retreating to the rigid view of worthiness in control-by-screening that actions occurring in a regime of control-by-risk can be criticized as fundamental 'errors'. This is the flaw of 'error' as a social scientific concept in situations that are in motion: it can only be fixed retrospectively and defined from an analytically external point of view.

[61] For a detailed enumeration, by a practitioner, of the underwriting battles that went on between brokers and subprime mortgage wholesalers, see Bitner (2008).
This paper has taken an altogether different approach to the subprime crisis. It has suggested that the explosion of the subprime was not caused by a sheer increase in lending volume stemming from irrational, fraudulent, or extra-governmental activity, but by the super-coordination of market actors' decision-making around stabilized frames of risk provided by third party commercial consumer analytics companies. If risk is tied to the capacity to make decisions, as Millo and Holzer (2005) have cogently suggested (that is to say, a decision not to lend at all is a zero-risk decision), then the unfolding volatility of subprime finance, as well as its amplified supply and demand, would not be related to having misjudged or underestimated risk so much as it would be generated by economic agents acting upon newly constituted risk-bearing entities materialized, shaped and described by FICO® credit bureau scores. It was not from a dearth of information (information asymmetry), but from the presence of innovative forms of digitized consumer risk scores, that the infamous model of originate-to-distribute, of creating profit by pushing loans in volume onto the secondary markets, was put into practice. In this view, the protracted globe-spanning credit crisis beginning in 2007 should be studied first and foremost as the temporary achievement of a tightly calculated system of financial order, not as disorder. The contemporary financial turbulence is the empirical result of having engaged with novel conditions of calculative possibility.
Acknowledgements
Acknowledgements are extended to Michel Callon, Emmanuel Didier, Steven Epstein, Horacio Ortiz, Onur Ozgode, Zsuzsanna Vargha and two rigorous anonymous reviewers for their assistance in the preparation of this paper. An earlier version of this work benefited from a graduate writing seminar hosted by Bruno Latour at Sciences Po (Paris, France). The final version was fine-tuned at a CODES meeting of David Stark's Center on Organizational Innovation, Columbia University (New York, USA). The research reported here has been supported by NSF award no. SES-0451139. This paper was awarded the 2008 Sally Hacker-Nicholas Mullins Award from the American Sociological Association, Science, Knowledge & Technology Section (ASA-SKAT).
References
Abolafia, M. (1997). Making markets: Opportunism and restraint on Wall
Street. Cambridge, MA: Harvard University Press.
Adelson, M. (2004). Home equity ABS basics. New York: Nomura Fixed
Income Research. p. 24.
Adler, M., & Adler, P. (Eds.). (1984). The social dynamics of financial
markets. Greenwich: JAI Press.
Avery, R., Bostic, R., Calem, P., & Canner, G. (1996). Credit risk, credit
scoring, and the performance of home mortgages. Federal Reserve
Bulletin, 621–648.
Bajaj, V. (2007). Freddie Mac tightens standards. New York Times. New
York.
Baker, T., & Simon, J. (Eds.). (2002). Embracing risk: The changing culture of
insurance and responsibility. Chicago: University of Chicago Press.
Baker, W. (1984). The social structure of a national securities market.
American Journal of Sociology, 89, 775–811.
Beunza, D., & Stark, D. (2004). Tools of the trade: The socio-technology of
arbitrage in a Wall Street trading room. Industrial and Corporate
Change, 13, 369–400.
Bitner, R. (2008). Confessions of a subprime lender, an insider’s tale of greed,
fraud, and ignorance. Hoboken: John Wiley & Sons Inc..
Black, H. (1961). Buy now, pay later. New York: William Morrow and
Company.
Bloor, D. (1991). Knowledge and social imagery. Chicago: University of
Chicago Press.
Bowker, G. C., & Star, S. L. (2000). Sorting things out: Classification and its
consequences. Cambridge, MA: The MIT Press.
Burchell, S., Clubb, C., & Hopwood, A. (1985). Accounting in its social
context: Towards a history of the value added in the United Kingdom.
Accounting, Organization and Society, 10, 381–413.
Çalışkan, K. (2007). Price as a market device: Cotton trading in Izmir
mercantile exchange. In The sociological review. In F. Muniesa, M.
Callon, & Y. Millo (Eds.). Market devices (Vol. 55, pp. 241–260).
Oxford: Blackwell Publishers.
Callon, M. (1986). Some elements of a sociology of translation:
Domestication of the scallops and the fishermen of St. Brieuc Bay.
Power Action and Belief: Sociological Review Monographs, 32,
197–233.
Callon, M. (1991). Techno-economic networks and irreversibility. In J.
Law (Ed.), A sociology of monsters (pp. 132–161). London:
Routledge.
Callon, M. (1998a). Actor-network theory – The market test. In J. Law & J.
Hassard (Eds.), Actor network theory and after (pp. 181–195). Oxford:
Blackwell.
Callon, M. (1998b). An essay on framing and overflowing: Economic
externalities revisited by sociology. In M. Callon (Ed.), The laws of the
market (pp. 244–269). Oxford: Blackwell.
Callon, M. (2002). From science as an economic activity to the
socioeconomics of scientific research. In P. Mirowski & E. M. Sent
(Eds.), Science bought and sold: Essays on the economics of science
(pp. 277–317). Chicago: University of Chicago Press.
Callon, M., & Çalışkan, K. (forthcoming). Economization: New directions in
the studies of markets. Economy and Society.
Callon, M., & Law, J. (2003). On qualculation, agency and otherness.
Environment and Planning D: Society and Space, 23, 717–733.
Callon, M., Méadel, C., & Rabeharisoa, V. (2002). The economy of qualities.
Economy and Society, 31, 194–217.
Callon, M., & Muniesa, F. (2002). Economic markets as calculative and
calculated collective devices. Organization Studies, 26, 1229–1250.
Callon, M., Muniesa, F., & Millo, Y. (2007a). An introduction to market
devices. In The sociological review. In F. Muniesa, M. Callon, & Y. Millo
(Eds.). Market devices (Vol. 55, pp. 1–12). Oxford: Blackwell Publishers.
Callon, M., Muniesa, F., & Millo, Y. (Eds.). (2007b). Market devices. Oxford:
Blackwell Publishers.
Carruthers, B., & Stinchcombe, A. (1999). The social structure of liquidity:
Flexibility, markets, and states. Theory and Society, 28, 353–382.
Chomsisengphet, S., & Pennington-Cross, A. (2006). The evolution of the
subprime mortgage market. Federal Reserve Bank of St. Louis Review,
88, 31–56.
Clarke, A., & Fujimura, J. (Eds.). (1992). The right tools for the job. Princeton:
Princeton University Press.
Cochoy, F. (2002). Une sociologie du packaging ou l’âne de Buridan face au
marché. Paris: Presses Universitaires de France – PUF.
Cochoy, F., & Grandclément, C. (2006). Histoires du chariot de
supermarché. Ou comment emboîter le pas de la consommation de
masse. Vingtième siècle, 91, 77–93.
Collins, M., Belsky, E., & Case, K. E. (2004). Exploring the welfare effects of
risk-based pricing in the subprime mortgage market. Joint center for
housing working paper series. Cambridge, MA: Harvard University.
Comptroller of the Currency. (1997). Asset Securitization, Comptroller’s
Handbook. Liquidity and Funds Management, p. 89.
Congressional Budget Office. (2003). Effects of Repealing Fannie Mae's
and Freddie Mac’s SEC Exemptions. The Congress of the United States.
Cronon, W. (1992). Nature’s metropolis: Chicago and the Great West. New
York: W. W. Norton & Company.
Dallas, W. D. (1996). A time of innovation. Mortgage Banking, 57.
Daston, L. (2000). The coming into being of scientific objects. The biographies
of scientific objects. Chicago: Chicago University Press.
De Certeau, M., Giard, L., & Mayol, P. (1998). The practice of everyday living:
Living and cooking. Minneapolis: University of Minnesota Press.
England, R.S. (2006). Cover report, industry trends: The rise of private
label. Mortgage Banking.
Espeland, W. N. (1998). Commensuration as a social process. Annual
Review of Sociology, 24, 313–343.
Federal Reserve Board (1994a). Section 2128.08, December 2002,
Subprime lending (risk management and internal controls). Bank
holding company supervisory manual.
Federal Reserve Board (1994b). Section 2133.1, November 2002,
Subprime lending: Supervisory guidance for subprime lending.
Commercial bank examination manual.
Fleck, L. (1981). Genesis and development of a scientific fact. Chicago:
University of Chicago Press.
Fourcade-Gourinchas, M. (2007). Theories of markets, theories of society.
American Behavioral Scientist, 50, 1015.
Frame, S. W., & White, L. J. (2007). Charter value, risk-taking incentives,
and emerging competition for Fannie Mae and Freddie Mac. Journal of
Money, Credit and Banking, 39, 83–103.
Freddie Mac (1996). Automated underwriting report: Making mortgage
lending simpler and fairer for America’s families. McLean, Virginia:
Freddie Mac.
Garcia-Parpet, M.-F. (2007). The social construction of a perfect market:
The strawberry auction at Fontaines-en-Sologne. In D. MacKenzie, F.
Muniesa, & L. Siu (Eds.). Do economists make markets? (pp 20–53).
Princeton: Princeton University Press.
Godechot, O. (2001). Les traders, essai de sociologie des marchés financiers.
Paris: La Découverte.
Godechot, O. (2006). 'What's the market wage?' How compensation
surveys give form to the financial labor market. Genèses, 63, 108–127.
Guala, F. (2001). Building economic machines: The FCC auctions. Studies in
the History and Philosophy of Science, 32, 453–477.
Guseva, A., & Rona-Tas, A. (2001). Uncertainty, risk and trust: Russian and
American credit card markets compared. American Sociological Review,
66, 623–646.
Hilzenrath, D.S. (2008). Fannie's perilous pursuit of subprime loans, as it
tried to increase its business, company gave risks short shrift,
documents show. Washington Post.
Hopwood, A. (1983). On trying to study accounting in the contexts in
which it operates. Accounting, Organization and Society, 8, 287–305.
Hopwood, A. (1987). The archaeology of accounting systems. Accounting,
Organizations and Society, 12, 207–234.
Hopwood, A. (2000). Understanding financial accounting practice.
Accounting, Organizations and Society, 25, 763–766.
Hopwood, A., & Miller, P. (Eds.). (1994). Accounting as social and
institutional practice. Cambridge: Cambridge University Press.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Kalthoff, H. (2005). Practices of calculation: Economic representations
and risk management. Theory, Culture and Society, 22, 69–97.
Keoun, B. (2007). First Franklin purchase weighs on Merrill chief.
International Herald Tribune.
Kirn, W. (2006). My debt, their asset. New York Times Magazine, 26.
Knorr Cetina, K., & Preda, A. (2005). The sociology of financial markets.
Oxford University Press.
Kohler, R. (1994). Lords of the fly: Drosophila genetics and experimental life.
Chicago: University of Chicago Press.
Latour, B. (1987). Science in action: How to follow scientists and engineers
through society. Cambridge, MA: Harvard University Press.
Lave, J. (1988). Cognition in practice: Mind, mathematics and culture in
everyday life. Cambridge University Press.
Lave, J., Murtaugh, M., & de la Rocha, O. (1984). The dialectic of arithmetic
in grocery shopping. In B. Rogoff & J. Lave (Eds.), Everyday cognition
(pp. 67–94). Cambridge, MA: Harvard University Press.
Lépinay, V.-A. (2002). Finance as circulating formulas. In Conference on
social studies of finance. Center on Organizational Innovation, Columbia
University: Social Science Research Council.
Lépinay, V.-A. (2007). Parasitic formulae: The case of capital guarantee
products. In The sociological review. In F. Muniesa, M. Callon, & Y. Millo
(Eds.). Market devices (Vol. 55, pp. 261–283). Oxford: Blackwell
Publishers.
Levinson, M. (2006). The box: How the shipping container made the world
smaller and the world economy bigger. Princeton: Princeton University
Press.
Lewis, E. (1992). An introduction to credit scoring. San Rafael: Fair, Isaac
and Co..
Lewis, M. (1990). Liar's poker: Rising through the wreckage on Wall Street.
Markham: Penguin Books.
Leyshon, A., & Thrift, N. (1999). Lists come alive: Electronic systems of
knowledge and the rise of credit-scoring in retail banking. Economy
and Society, 28, 434–466.
MacKenzie, D. (2003). An equation and its worlds: Bricolage, exemplars,
disunity and performativity in financial economics. Social Studies of
Science, 33, 831–868.
MacKenzie, D. (2006). An engine not a camera. Cambridge, MA: The MIT
Press.
Mallard, A. (2007). Performance testing: Dissection of a consumerist
experiment. In The sociological review. In F. Muniesa, M. Callon, & Y.
Millo (Eds.). Market devices (Vol. 55, pp. 152–172). Oxford: Blackwell
Publishers.
Marron, D. (2007). ‘Lending by numbers’: Credit scoring and the
constitution of risk within American consumer credit. Economy and
Society, 36, 103–133.
Miller, P. (2008). Calculating economic life. Journal of Cultural Economy, 1,
51–64.
Miller, P., & O’Leary, T. (1987). Accounting and the construction of the
governable person. Accounting, Organization and Society, 12, 235–265.
Miller, P., & O’Leary, T. (1994). Governing the calculable person. In A.
Hopwood & P. Miller (Eds.), Accounting as social and institutional
practice (pp. 98–115). Cambridge: Cambridge University Press.
Miller, P., & Rose, N. (1997). Mobilising the consumer: Assembling the
subject of consumption. Theory, Culture and Society, 14, 1–36.
Millo, Y. (2007). Making things deliverable: The origins of index-based
derivatives. In The sociological review. In F. Muniesa, M. Callon, & Y.
Millo (Eds.). Market devices (Vol. 55, pp. 196–294). Oxford: Blackwell
Publishers.
Millo, Y., & Holzer, B. (2005). From risks to second-order dangers in
financial markets: Unintended consequences of risk management
systems. New Political Economy, 10, 223–246.
Muniesa, F. (2000). Un robot walrasien: Cotation électronique et justesse
de la découverte des prix. Politix, 13, 121–154.
Poon, M. (2007). Scorecards as devices for consumer credit: The case of
Fair, Isaac & Company incorporated. In The sociological review. In F.
Muniesa, M. Callon, & Y. Millo (Eds.). Market devices (Vol. 55,
pp. 284–306). Oxford: Blackwell Publishers.
Preda, A. (2006). Socio-technical agency in financial markets: The case of
the stock ticker. Social Studies of Science, 36, 753–782.
Quinn, L. R. (2000). Credit score scrutiny. Mortgage Banking,
50–55.
Raiter, F., Gillis, T., Parisi, F., Barnes, S., Meziani, S., & Albergo, L. (1997).
Innovations in mortgage risk management. Standard & Poor’s Rating
Services.
Raiter, F., & Parisi, F. (2004). Mortgage credit and the evolution of risk-based
pricing. Joint center for housing studies working paper series. Cambridge,
MA: Harvard University.
Rheinberger, H.-J. (1997). Toward a history of epistemic things: Synthesizing
proteins in the test tube. Stanford: Stanford University Press.
Rose, N., & Miller, P. (1992). Political power beyond the state:
Problematics of Government. The British Journal of Sociology, 43,
173–205.
Rosenblueth, A., & Wiener, N. (1945). The role of models in science.
Philosophy of Science, 12, 316–321.
Schorin, C., Heins, L., & Arasad, A. (2003). Home equity handbook. Morgan
Stanley.
Shiller, R. J. (2005). Irrational exuberance. Princeton: Princeton University
Press.
Shiller, R. J. (2008). The subprime solution. Princeton: Princeton University
Press.
Sinclair, T. J. (2005). The new masters of capitalism: American bond rating
agencies and the politics of creditworthiness. Ithaca: Cornell University
Press.
Smelser, N., & Swedberg, R. (1994). The handbook of economic sociology.
Princeton: Princeton University Press.
Standard & Poor's (1999). Structured finance: US residential subprime
mortgage criteria. New York: The McGraw-Hill Companies.
Star, S. L. (1999). The ethnography of infrastructure. American Behavioral
Scientist, 43, 377–391.
Straka, J. W. (2000). A shift in the mortgage landscape: The 1990s move to
automated credit evaluations. Journal of Housing Research, 11,
207–232.
Swedberg, R. (2003). The principles of economic sociology. Princeton:
Princeton University Press.
Syron, R. (2007). Testimony of Richard F. Syron. Committee on Financial
Services, US House of Representatives.
Thévenot, L. (1984). Rules and implementations: Investment in forms.
Social Science Information, 23, 1–45.
Thornbert, C. (2007). Fannie and Freddie, old and new: In an uncertain
market, perhaps the lenders’ future lies in their past. Los Angeles
Times. Los Angeles.
Vollmer, H. (2007). How to do more with numbers: Elementary stakes,
framing, keying, and the three-dimensional character of numerical
signs. Accounting, Organizations and Society, 32.
Wissler, A. (1989). Les jugements dans l’octroi de crédit. In L. Boltanski &
L. Thévenot (Eds.). Justesse et justice dans le travail (Vol. 33,
pp. 67–120). Paris: Presses Universitaires de France.
Zaloom, C. (2006). Out of the pits: Traders and technology from Chicago to
London. The University of Chicago Press.