TRENDS & CONTROVERSIES

The changing relationship between information technology and society

By Marti A. Hearst
University of California, Berkeley
[email protected]

Society and information technology are rapidly co-evolving, and often in surprising ways. In this installment of "Trends and Controversies," we hear three different views on how society and networked information technology are changing one another.

Becoming socialized means learning what kinds of behavior are appropriate in a given social situation. The increasing trend of digitizing and storing our social and intellectual interactions opens the door to new ways of gathering and synthesizing information that was previously disconnected. In the first essay, Jonathan Grudin—a leading thinker in the field of computer-supported cooperative work—points out that, like a naive child, information technology often ignores important contextual cues and tactlessly places people into potentially embarrassing situations. He suggests that as we continue to allow computation into the more personal and sensitive aspects of our lives, we must consider how to make information technology more sophisticated about social expectations, and become more sophisticated ourselves in understanding the nature of computer-mediated services.

In the second essay, I discuss a related issue—how newly internetworked information technology allows people acting in their own self-interest to indirectly affect the experiences of other people. It is to be expected that people will try to trick or deceive systems that support intrinsically social activities, such as running auctions. What is surprising here is that technologies that do not obviously have a social aspect, such as information-retrieval ranking algorithms, are nevertheless being manipulated in unexpected ways once they "go social."

In our third essay, Barry Wellman—a sociologist and an expert in social network theory—explains how the structure of social networks affects the ways we live and work. He describes the move away from a hierarchical society into a society in which boundaries are more permeable and people are members of many loosely knit groups. He introduces the notion of glocalization: simultaneously being intensely global and intensely local. Wellman describes how computer-mediated communication is contributing to this glocalization transition in social habits and infrastructure. As networked information technology continues to provide us with new views of ourselves, we hope that these essays will help designers of information technology better understand the broader impact of the work they do.

Has the ice man arrived? Tact on
the Internet
Jonathan Grudin, UC Irvine and Microsoft
Research
Several years ago at Bellcore, researchers
thought it would be great to access news-
group contributions by people they admired.
They wrote a program to archive and search
newsgroups. They tested it by entering the
names of a few colleagues. “We soon
found,” one recounted, “that we were dis-
covering things about our friends that we
didn’t want to know.”
The Internet has created a new focus of
computation: computer-mediated communi-
cation and interaction. Most of what is com-
municated is received indirectly. On the
Web, above all else we see what people pro-
duce and make available; also, we read what
people say and how others respond, receive
indications of what people have done or are
doing, and so on. The Internet’s greatness
resides in this extremely efficient spread of
information. It is efficient, but it is not dis-
creet, not tactful. Even when communicat-
ing directly on the Internet, we often neglect
tact for brusqueness or flaming. Indirect
communication and awareness, the focus of
this essay, are unsoftened by the technology.
A word to the wise
Human communication is marked by
tact. Knowing when and how to be tactful
requires knowledge of the communication
context, which is often lost or altered in
computer-mediated interaction. Newsgroup
messages are written in a context that ap-
pears to participants to be “chatting in a
room,” an ephemeral conversation among a
group of like-minded people. But of course
what is said can later be read outside that
context, by anyone, anytime, anywhere. It
can even end up being read in court.
Is anything wrong with openness? Is tact
necessary? Well, yes, it is. The candor of
children, who don’t fully understand a con-
versation’s social context, can be refresh-
ing in small doses, but we all learn that tact
is essential in most communication. We
constantly observe social conventions,
avoid social taboos, minimize needless
embarrassment, and allow people to pre-
serve the gentle myths that make life more
pleasant. Eugene O’Neill’s play The Iceman
Cometh outlines a series of calamities
that occur when his characters are briefly
forced to abandon these myths.
Consider another example, in which
technology removed an illusion of fairness.
A programming class instructor proposed
that students submit homework solutions
and receive the graded corrections via e-
mail. The students produced a counter-
proposal: After grading an exercise, the
instructor posts all of the graded solutions
for everyone to see! In this way, the stu-
dents can discover what has been tried,
what worked and what didn’t, and which
solutions are most elegant. They can learn
from each other.
It sounds great. But those who have
graded papers probably recall that after
working through the entire set, you might
regrade the first few, because it took a while
to work out how many points to subtract for
this or that kind of error. Grading is not per-
fectly consistent. In this class, the grading is
visible to everyone. The instructor works
harder than usual to be consistent, but stu-
dents still detect inconsistencies, complain,
and might conclude that a previously ad-
mired instructor is careless or unfair. The
students’ illusion, their belief
in the consistency of grading, is undermined
by the technology. It is tempting to welcome
a dose of reality, but in these examples, no
one is happy about the outcome.
Another example: Compliment a confer-
ence organizer on the smoothness of the
event and you might be told, “If you could
see the chaos and near-catastrophe behind
the scenes....” Now, technology can reveal
what had been “behind the scenes.” In the
Web’s early days, I participated in two con-
ferences in which much of the activity was
made visible to all program committee
members. For example, once reviews of
submissions were written, we could read
them online and begin resolving differ-
ences of opinion by e-mail, prior to the
program committee meeting. Very effi-
cient, but problems arose.
Administrative errors in handling the
database were immediately seen by every-
one and led to confusion or embarrassment.
Reviewers could scan the reviews and
observe patterns: for example, you were
invariably easy, I was invariably harsh; she
was a careful reviewer, he was pretty casual
about it. In addition, some reviewers felt
uneasy about their reviews of a paper being
read “out of context” by people who had
not read the paper. Assumptions of smooth
management and comparable reviewing
performance were demolished. The plan-
ning of these conferences seemed chaotic
to me, but one of the organizers remarked
that in his experience, it was in fact unusu-
ally smooth, because the organizers knew
that all slipups would be visible and thus
“we felt we were on stage at all times; we
had to be careful.” Our difference in per-
ception arose because the technology made
visible more of the underlying reality.
The underlying reality
What is the underlying reality? Ethnogra-
phers or anthropologists have studied work-
places and repeatedly shown that behavior
is far less routine than people believe.
Exception-handling, corner-cutting, and
problem-solving are rampant, but are
smoothed over in our reports and even in
our memories, whether out of tact or simply
to get on with the job. People normally
maintain an illusion of relative orderliness.
Technology is changing that. The more
accurately and widely it disperses informa-
tion about the activities of others, the more
efficiently we can work, but at a price: irreg-
ularity, inconsistency, and rule breaking that
were always present are now exposed and
more difficult to ignore. In a well-known
example, technology could detect all auto-
mobile speeding violations. If we don’t use
it, how do we decide when and against
whom to selectively enforce the law?
A police officer might use context to
guide enforcement—weather and traffic
conditions, perhaps. We might tactfully
overlook a colleague’s occasional tardi-
ness. But technology is poor at assessing
context; it does not tactfully alter a time
stamp. We once could imagine a colleague
as an imposing person who pays attention
to detail, but e-mail reveals his careless
spelling, his outdated Web site instantly
reveals a relative lack of organization or
concern for his image, and a video portal
catches him picking his nose. None of this
negates the huge benefits of these technolo-
gies, but it creates a challenge. Many chal-
lenges, in fact: in our computer-mediated
interactions during the days and years to
come, we will have to address this issue
over and over, as individuals, as members
of teams and organizations, and as mem-
bers of society.
What to do?
How can we address technology’s lack
of tact, its inability to leave harmless illu-
sions untouched?
Can we build more tact into our sys-
tems? Spelling correctors help. Perhaps
the video portal, detecting a colleague
changing clothes for a tennis match and
having forgotten about the camera, could
recognize what is happening and dis-
creetly blur the focus. Perhaps a virtual
Miss Manners could proofread my e-mail,
or a virtual lawyer could scan an automat-
ically archived meeting and flag sensitive
words. But realistically, these are exceed-
ingly subtle, complex, human matters
involving knowledge of an interaction’s
context, tacit awareness of social conven-
tions and taboos, and appreciation of
which illusions and corner cutting are
harmless or even beneficial and which are
problematic. It is a worthy goal, but intel-
ligent systems will only slowly start to
carry some of the weight.
Another possibility is to retreat. In some
cases, we will decide the increased effi-
ciency isn’t worth it. In the examples I’ve
cited, the newsgroup scanner was aban-
doned, the conferences stopped making as
much information visible in subsequent
years, and posting graded exercises has not
become a custom. But these were inten-
tionally extreme examples. Examples
abound in the daily use of the Internet and
Web, from which there will be no retreat.
Our actions are becoming much more visi-
ble; the global village is arriving. And, in
general, I believe there are tremendous
benefits in efficiency, in the fairness that
visibility promotes, and in the ability to
detect significant problems and inconsis-
tencies. We might be too worried, too cau-
tious in embracing these new technologies.
A third approach seems inevitable: We
will find new ways to work, to organize
ourselves, and to understand ourselves. The
solutions might not be obvious. I have fre-
quently described the case of the program-
ming class instructor, who works harder
but has a more dissatisfied class, as an ap-
parently insoluble dilemma. I recently pre-
sented it to Douglas Engelbart. He thought
for several seconds, then said, “The class
could develop a collective approach to
grading assignments.”
When information technology
“goes social”
Marti Hearst, UC Berkeley
In everyday life we often observe the un-
intended consequences of the actions of
individuals on society as a whole. If I intend
to go to San Francisco from Marin County,
I might well get in my car and drive to the
Golden Gate Bridge. Although I certainly
do not have the goal of slowing down some-
one else’s trip to the city, my action might
indeed contribute to this result. I can even
unintentionally add hours to the travel time
of thousands of fellow motorists if my car
stalls on the bridge. Most people do not
ever consider deliberately blocking traffic,
but there are exceptions. Protestors can
exploit the vulnerability of the freeway sys-
tem to tie up the rush-hour commute, and
youths can deliberately disrupt local traffic
patterns by “cruising” suburban streets.
The rise of the Web and other networked
information technologies has brought
about new, sometimes surprising, ways for
the actions of individuals and small groups
to have impact on other people with whom
they otherwise have no relationship. Many
of these new opportunities are exciting and
promise great benefits. For example, after I
purchase a book from Amazon.com, I am
shown suggestions of books bought by
other people who also bought my new
book. If I want to find out how to fix an
electrical problem with my car, someone I
have never met may have written up a
solution and placed it on the Web.
However, the interconnectivity and global
accessibility of the Web have also given rise
to some unexpected ways in which people
can take advantage of the technology at the
expense of other people. Applications that
heretofore would not have been assumed to
have social ramifications are in fact allowing
unexpected interactions among their users.
This essay presents the case that information
scientists need to begin thinking about de-
sign in a new way—one that incorporates
the potential consequences if the output of
their systems is likely to “go social.” Infor-
mation technology “goes social” when the
exposure of its output makes a transition
from individuals or small groups to large
numbers of interconnected users.
Gaming Web search engines
Let’s look at a few examples. The first is
a field I know well—information retrieval.
The standard problem in IR is that of
helping users find documents that (partial-
ly) fulfill an information need. If there
were only a few documents to choose from,
finding the relevant ones would be a simple
process of elimination. However, there are
millions of valuable documents as well as
myriad documents of questionable general
worth (for those who think the Web con-
tains mainly junk, the Library of Congress
alone catalogs over 17 million books, and a
trend toward moving materials online will
ensure large amounts of high-quality online
material). Given many equally valid pieces
of information coexisting simultaneously,
the problem becomes that of pushing aside
those that are not relevant, or pulling out
the few that are relevant to the current
need. Thus it is not so much a problem of
finding a needle in a haystack, as finding a
needle in a “needlestack.”
IR differs from retrieval in a stan-
dard database-management system. In
DBMSs, all information is entered in a pre-
cisely controlled format, and for a given
query there is one and only one correct
answer. By contrast, IR systems must make
do with only an approximation to an accu-
rate query, ranking documents according to
an estimate of relevance. This fuzzy behav-
ior is an unfortunate consequence of the
fact that automated understanding of nat-
ural language is still a distant dream.
Instead of understanding the text, an IR
algorithm takes as input a representation of
the user’s information need, usually ex-
pressed as words, and matches this repre-
sentation against the words in the docu-
ment collection. In practice, if the query
contains a relatively large number of words
(say, a paragraph’s worth), then documents
that also contain a large proportion of the
query words will tend to be relevant to the
query. This works because there tends to be
overlap in the words used to express simi-
lar concepts. For example, the sentence
“The Mars probe Pathfinder is NASA’s
main planetary explorer” will tend to share
words with a newspaper account of the
same topic. However, this strategy is not
infallible; if an inappropriate subset of
query words overlaps, nonrelevant docu-
ments may be retrieved. For example, an
article containing the sentence “A vandal
easily mars the paint job of the Pathfinder,
the Explorer, and the Probe” shares four
terms with the previous sentence, although
their meanings are quite different.
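To make the overlap strategy concrete, here is a minimal Python sketch of it (the toy scoring function and example sentences are illustrative, not the algorithm of any particular engine, and refinements such as term weighting are ignored). Note that the vandalism sentence scores nearly as high as the genuinely relevant one.

    import re

    def words(text):
        # Lowercase and keep alphabetic runs so punctuation does not block matches.
        return set(re.findall(r"[a-z]+", text.lower()))

    def overlap_score(query, document):
        # Fraction of the query's words that also appear in the document.
        q = words(query)
        return len(q & words(document)) / len(q)

    query = "Mars probe Pathfinder NASA planetary explorer"
    relevant = "The Mars probe Pathfinder is NASA's main planetary explorer"
    vandal = ("A vandal easily mars the paint job of the Pathfinder, "
              "the Explorer, and the Probe")
    print(overlap_score(query, relevant))  # 1.0
    print(overlap_score(query, vandal))    # roughly 0.67, despite the different meaning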
Additionally, the short length (1-2 words)
of queries submitted to search engines could
cause IR systems to retrieve documents un-
related to the user’s information need. For
example, a user searching for articles on
Theodore Roosevelt might find information
about a football team located at a school
named after this US president.
Thus IR systems circumvent the need for
automated text understanding by capitaliz-
ing on the fact that the representation of a
document’s contents can be matched
against the representation of the query’s
contents, yielding inexact but somewhat
usable results. For over 30 years, IR re-
search has focused on refining algorithms
of this type. However, in the course of
those 30 years, no one had the faintest
glimmer of what would happen when IR
technology went social.
What had never been imagined was that
authors would deliberately doctor the con-
tent of their documents to deceive the rank-
ing algorithms. Yet this is just what hap-
pened once the Web became widespread
enough to be attractive to competing busi-
nesses, and once search engines began
reporting that thousands of documents
could be found in response to queries.
Web-page authors began gaming the
search-engine algorithms using a variety of
methods. One technique is to embed the
contents of the wordlist of an entire dictio-
nary in the Web page of interest. (The words
are hidden using the HTML comment tag—
comments are invisible to humans reading
the page, but are indexed by some Web
search engines. A similar effect can be
achieved by formatting the text in the same
color as the page background.) For the rea-
sons I’ve described, the inclusion of addi-
tional words, whether or not they have any-
thing to do with the content of the page,
increases the likelihood of a match between
a user’s query and a Web page.
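A toy Python sketch (a hypothetical page and a deliberately naive indexer, not a description of how any real search engine works) shows why the hidden-word trick pays off, and why stripping comments before indexing defeats it.

    import re

    def naive_terms(html):
        # Indexes every word, including text hidden inside HTML comments.
        return set(re.findall(r"[a-z]+", html.lower()))

    def careful_terms(html):
        # Strips comments and tags before indexing, defeating this particular trick.
        visible = re.sub(r"<!--.*?-->", " ", html, flags=re.S)
        visible = re.sub(r"<[^>]+>", " ", visible)
        return set(re.findall(r"[a-z]+", visible.lower()))

    page = ("<html><body>Discount widgets, order today."
            "<!-- mars probe pathfinder nasa planetary explorer -->"
            "</body></html>")
    query = {"mars", "probe", "pathfinder", "nasa", "planetary", "explorer"}
    print(len(query & naive_terms(page)))    # 6: the stuffed page matches every query word
    print(len(query & careful_terms(page)))  # 0: nothing a reader can see matches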
There are also cases of authors placing
words that are known to be of interest to
many information seekers (“sex” or “bug-
free code,” for example) into a Web page’s
meta tag field, because some search en-
gines assign high weight to meta tag con-
tent. A variation on this theme is to use a
word that really is relevant to the content of
the Web page, but repeat the word
hundreds of times, exploiting the fact that
some search engines increase a document’s
ranking if a query term occurs frequently
within that document. Listing the names of
one’s competitors in the Web page’s com-
ments section can also mislead a search
engine; if a user searches on a competitor’s
name, the search engine will retrieve one’s
own Web page but no information about
the competitor will be visible.
These techniques could be seen as mod-
ern-day equivalents of naming businesses
in such a manner as to get them listed first
in the phone book— AAA Dry Cleaners,
for example. This doctoring of the content
of documents might also be considered an
entirely new way of using words as weap-
ons; a new way to make words mean other
than what they say; something we might
call subliminal authoring.
Search-engine administrators quickly
catch on to these techniques. Ranking algo-
rithms can be adjusted to ignore long lists
of repeated words, and some search en-
gines do not index comments or meta tags
because of the potential for abuse. This can
quickly devolve into a series of moves and
counter-moves. For example, users can
submit Web-page URLs to search engines
to get the pages reindexed and thus have
the index reflect changes more rapidly.
Some Web-page doctorers (incorrectly)
assumed that multiple submissions of a
page would cause its ranking to increase,
and so tried submitting their pages thou-
sands of times over. Search-engine admin-
istrators noticed this behavior and started
taking punitive action against repeat resub-
mitters. In response, some people have
considered repetitively resubmitting the
Web pages of their competitors in the
hopes of getting these pages eliminated
from the search engine indexes.[1]
Of course, search-engine providers
aren’t all innocent in this. It is claimed that
some will rank Web pages higher than oth-
ers for a fee. This kind of behavior is also
something that simply would not have been
thought of in the earlier, pre-social days of
information retrieval.
System design for social
interactions
The lower levels of networking software
allow computers to send and receive data
from one another. The difficulties with such
software reside in the design of systems that
work accurately, reliably, and efficiently.
However, it has become apparent that the
difficulties in the design of systems that
support interaction among groups of people
or on behalf of people lie not so much in the
creation of efficient, reliable algorithms.
Instead, these systems must be designed to
take into account fuzzier concerns relating
to the social practices, assumptions, and
behaviors of people. Computer-supported
cooperative work (CSCW) researchers have
shown that groupware applications such as
shared calendars and meeting tools must be
sensitive to the various conflicting goals of
the group participants. For example, admin-
istrative assistants, engineers, and managers
disagree on what the important features of a
calendar/scheduling system are.[2]
Information systems that take actions on
behalf of human users must take into ac-
count how users might try to manipulate the
system. Designers of auction or voting sys-
tems must consider how users might try to
deceive the system by voting multiple times
or preventing others from voting. Designers
of agents that negotiate prices for goods
must consider the potential for bait-and-
switch pricing tactics, pricing collusion
between competitors, and general fraudu-
lent business practices. Because these sys-
tems perform actions traditionally done by
people interacting with one another, it is
perhaps unsurprising (in retrospect) that
social considerations must be taken into
account to make these systems succeed.
The new phenomenon we observe here
is that even systems whose underlying goal
is not that of supporting social interactions
are nevertheless being used in this manner.
We might need to accede that when infor-
mation technology goes social, informa-
tion-system developers must learn to adopt
defensive strategies, just as neophyte dri-
vers have to learn about defensive driving.
Defensive driving is not necessary if there
are no other drivers on the road; similarly
we do not need this type of defensive strat-
egy with information technologies unless
they are networked together.
Jonathan Grudin is a professor in the Information and Computer Science
Department of the University of California, Irvine, and a member of the
Collaboration and Education Group at Microsoft Research. His research
interests include human-computer interaction and computer-supported coop-
erative work. He earned a BA in mathematics and physics from Reed Col-
lege, an MS in mathematics from Purdue University, and a PhD in cognitive
psychology from the University of California, San Diego. He is Editor-In-
Chief of ACM Transactions on Computer-Human Interaction and was co-
chair of the CSCW’98 Conference on Computer Supported Cooperative
Work. Contact him at Microsoft Research, One Microsoft Way, Redmond
WA 98052-6399; [email protected]; www.ics.uci.edu/~grudin.
Barry Wellman is a professor of sociology at the University of Toronto, the
founder of the International Network for Social Network Analysis, the chair
of the Community section of the American Sociological Association, and
the virtual community focus area advisor of ACM Siggroup. He attended
the Bronx High School of Science in the days of rotary calculators and
learned how to keypunch and wire a counter-sorter while getting his PhD
from Harvard. He has coedited Social Structures: A Network Approach (2nd
ed., JAI Press, London, 1997) and has edited Networks in the Global Village
(Westview Press, Boulder, Colo., 1999). He is spending January through
May as a visiting professor at the School of Information Management and
Systems, University of California, Berkeley. Contact him at the Centre for
Urban & Community Studies, Univ. of Toronto, 455 Spadina Ave., Toronto, M5S 2G8 Canada;
[email protected]; http://www.chass.utoronto.ca/~wellman.
Marti Hearst is an assistant professor in the School of Information Management & Systems at
the University of California, Berkeley. She is also coeditor of “Trends & Controversies.” Her re-
search interests focus on user interfaces and robust language analysis for building information-
access systems, and on furthering the understanding of how people use and understand such sys-
tems. She received her BA, MS, and PhD in computer science from the University of California,
Berkeley. Contact her at the School of Information Management & Systems, South Hall, Rm.
102, Univ. of California, Berkeley, CA 94720-4600; [email protected]; http://www.sims.
berkeley.edu/~hearst/.
What’s in a domain name?
Let’s now consider another example. A
Web-page server’s “real” network address
is represented as a string of digits sepa-
rated by periods. These serve as identifiers
to allow computers on the network to dis-
tinguish one from another.
However, Web servers also support
URLs that contain domain names, which
act as mnemonic pseudonyms for the nu-
meric IDs. Usually, a domain name reflects
the name of the institution to which it be-
longs. For example, www.berkeley.edu
refers to the UC Berkeley home page, and
www.whitehouse.gov refers to the US
White House’s home page.
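The name-to-number mapping is easy to see from a program. Here is a two-line Python sketch using the standard socket library; the address printed is simply whatever DNS returns at that moment, and the call fails without network access.

    import socket

    # A domain name is a mnemonic for a numeric address; DNS performs the mapping.
    print(socket.gethostbyname("www.berkeley.edu"))  # prints a dotted-quad numeric address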
An entirely unexpected and opportunistic
exploitation of these naming conventions
has arisen, relying on the fact that people
tend to make spelling errors. Web sites have
been created whose domain names have no
resemblance to the content they contain, or
whose domain names are common mis-
spellings of the names of popular sites. For
example, www.whitehouse.com contains
pornographic material; conversely, www.
playby.com consists solely of advertise-
ments for technical products.
Names are not particularly important
when a computer is communicating with
another computer. Within computer sys-
tems, ID strings serve simply to distinguish
one entity from another and do not have
intrinsic meaning. However, once exposed
to and used by people, the symbols take on
meaning. People will interpret and interact
with the identifiers in ways impossible to
imagine a computer doing. Most likely the
creation of mendacious domain names
would not have been thought of, much less
considered important, until large numbers
of people became interconnected, using not
only the same technology but also viewing
the same information.
This situation stems in part from the
rather egalitarian manner in which domain
names were originally assigned. In fact,
domain names were allocated in a manner
similar to how the Department of Motor
Vehicles assigns vanity license plate names.
Pretty much anyone can have pretty much
any license plate as long as it isn’t already
taken by someone else and fits within the
prescribed length limitations and uses the
standard alphanumeric characters. License-
plate names are also subject to certain
restrictions about what constitutes good
taste, and it has long been a game of the
public versus the DMV to try to fool the
censors into accepting license plates with
questionable interpretations.
The difference between URLs and li-
cense plates, of course, is that only a few
people can see a license plate at any one
time, and they are not particularly useful
for business on a large scale. Also, a car
cannot be instantly retrieved just by invok-
ing the name on its license plate.
Hypertext
I am a member of an interdisciplinary
program whose faculty include computer
scientists, law professors, economists, and
other social scientists, and whose mission
is to integrate the social and the technical
in the study of the organization, manage-
ment, and use of information.
One day in lecture last semester, I men-
tioned to our interdisciplinary masters stu-
dents that HTML and the Web ignored
much of what had been learned about
hypertext in the preceding decade, includ-
ing such things as link types and bidirec-
tional links. One student asked what would
happen if the Web allowed bidirectional
links. I did what all smart professors do
when posed with a difficult question in
class: instead of answering, I made it into a
homework assignment question.
I asked the students to perform a ge-
danken experiment, and discuss what
would happen if the Web supported bidi-
rectional links. They were to consider a
scenario in which, if a link was made from
A to B on any page, a reverse link could be
forced to appear from B to A.
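As a purely hypothetical sketch of that scenario (not a proposal, and not how the Web actually works), a shared link store with forced reverse links might look like the following in Python; the page names are invented.

    from collections import defaultdict

    forward = defaultdict(set)   # page -> pages it chooses to link to
    backward = defaultdict(set)  # page -> pages that link to it (shown on the page)

    def add_link(src, dst):
        forward[src].add(dst)
        backward[dst].add(src)   # the forced reverse link appears on dst automatically

    add_link("http://anyone.example/some-page", "http://victim.example/home")
    print(backward["http://victim.example/home"])
    # The target page now carries a back-link its author never chose to display.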
In my computer scientist naivete, I as-
sumed this would be a good thing, allowing
me to easily show citations at the end of my
text and have the citations point back to the
place in the text from which they were ref-
erenced, make it easier to make tables of
contents, and generally make it easier to
find related information.
However, the socially savvy students’
answers surprised me. Out of 19 students,
only one thought bidirectional links would
be an inherently good thing. Instead, they
foresaw all manner of disastrous outcomes,
including
• Link spamming: for example, people
could damage a company by flooding
its home page with spurious backlinks,
or people could force someone’s per-
sonal home page to link back to an
offensive page about themselves (such
as “babes of the Web”).
• False endorsements: people could make
it look as if some entity endorsed their
Web page by linking to that entity;
pages could be forced to link to adver-
tisers’ pages.
• Loss of control of information: If bidi-
rectional links were the only type of
link available, their use could prevent
the ability to hide internal information,
as in the case in which a link internal to
a firewall pointed to a page in the exter-
nal world.
Of course, no one has suggested imple-
menting forced bidirectional links in this
way (the standard technical solution is to
store all links in a separate link database,
rather than place them within the page
itself). On the Web, standard read/write
restrictions on file systems prevent this kind
of activity. However, discussions of why
bidirectional links were not used in the
design of HTML and HTTP do not mention
these kinds of concerns. In the design notes
for the WWW, Tim Berners-Lee writes:
Should the links be monodirectional or bidi-
rectional?
If they are bidirectional, a link always
exists in the reverse direction. A disadvantage
of this being enforced is that it might constrain
the author of a hypertext—he might want to
constrain the reader. However, an advantage is
that often, when a link is made between two
nodes, it is made in one direction in the mind
of its author, but another reader may be more
interested in the reverse link. Put another way,
bidirectional linking allows the system to
deduce the inverse relationship, that if A
includes B, for example, that B is part of A.
This effectively adds information for free. ... [3]
Here, Berners-Lee expresses concern
about a lack of control by the author over the
reader’s experience, but none of the poten-
tially negative social impacts considered by
my students is taken into account.
Before going social via the Web, most
hypertext linking happened within a single
document, project, or small user group. In
the late 1980s, before the rise of the
Web, there were many competing
technologies, none compatible with
the others. Since going social,
hypertext has become useful for
linking information in far-flung
places, assembled by people who
don’t know each other or have
access to one another’s data. Links
outside of given projects can be
more useful than internal ones,
because they lead to resources less
likely to be known to the internal
group members. However, this kind
of interaction was not on the radar
screen in hypertext thought and
research.
For example, in the ACM Hyper-
text 89 proceedings,[4]
the authors were gen-
erally concerned with semantics of link
types, navigation paths, how not to get lost
(still a big problem!), and how to author
hypermedia documents. Only two papers
discuss the possibility of cross-project
links. The first, a systems paper by Amy
Pearl describing how documents on differ-
ent systems might be interlinked, simply
assumed bidirectional links as the only link
type. The other paper, called “Design Issues
for Multi-Document Hypertexts” by Bob
Glushko, shows clearly that at the time the
notion of inter-document linking in real
systems was a radical one.
In his closing plenary address at Hyper-
text 91,[5]
Frank Halasz revisited the issues
he had raised in his landmark 1987 paper
“Reflections on NoteCards: Seven Issues for
the Next Generation of Hypermedia Sys-
tems.” These issues related to searching,
link structure, and various computational
concerns. Halasz also discussed supporting
social interactions over a hypermedia net-
work, but focused on Randy Trigg and
Lucy Suchman’s notion of mutual intelligi-
bility[6]
(making sure participants can under-
stand what each person is doing) and how
to write readable hypertext (which in retro-
spect he realized did not belong in the so-
cial category). Halasz also introduced four
new issues, one of which was the need for
open systems to allow cross-system link-
ing, and another which he called the prob-
lem of very large hypertexts. The problems
he foresaw in this category had to do with
scaling large systems and disorientation in
large information spaces. He did not men-
tion potential social concerns.
A book called Society of Text[7]
published
in 1989 contains a collection of 23 research
papers on hypertext, multimedia, and their
use. However, no papers discuss the conse-
quences of many simultaneous users, or
even begin to hint at the possibility of
deceitful or ill-intentioned linking. Rather,
hypertext was discussed in terms of how it
might bring about a new way of thinking, a
way of modeling the mind in the computer,
or a new way of reading. Most of the con-
cern was about how to design hypertext
layout to eliminate confusion and clutter.
The social concerns pertained to how the
writing profession might change and how
users collaborate when authoring together.
Given that it still wasn’t clear if hypertext
would even be intelligible to most people, it
is perhaps not surprising that researchers
were not considering what would happen
when millions of people were linking hun-
dreds of millions of documents.
Ted Nelson, who coined the term “hyper-
text” in 1965 and who since then has been
an evangelist for its execution in his vision
of the Xanadu system, did worry about cer-
tain social issues, namely copyright and
how to handle payments for access (this
system was the subject of a critical legal
analysis by Pamela Samuelson and Robert
Glushko, which brought up additional
social issues[8]). In the Xanadu system,
authors were to pay to put their writings in
the system, and readers were to pay to read
these works. Readers could also add hyper-
links to improve the findability of informa-
tion within the system, and would receive
payment when other readers used these
links. Link creators would only be compen-
sated if their links were traversed by others,
thus motivating authors to create high-qual-
ity links. However, pernicious links
like those anticipated by the SIMS
students were not considered, per-
haps because Xanadu was to be a
closed system over which its ad-
ministrators could exert control.[8]
A true exception can be found in
Jakob Nielsen’s 1990 book Hyper-
text & Hypermedia.[9] On page 197 of
this book of 201 pages, under the
heading “Long Term Future: Ten to
Twenty Years,” he cautiously pre-
dicts large shared information
spaces at universities and some com-
panies. In this context, he points out
some potential social consequences
of shared information spaces.
If thousands, or even millions of people add
information to a hypertext, then it is likely
that some of the links will be “perverted” and
not be useful for other readers. As a simple
example, think of somebody who has inserted
a link from every occurrence of the term
“Federal Reserve Bank” to a picture of Uncle
Scrooge’s money bin. ...
These perverted links might have been
inserted simply as jokes or by actual vandals.
In any case, the “structure” of the resulting
hypertext would end up being what Jef Raskin
has compared to the New York City subway
cars painted over by graffiti in multiple unco-
ordinated layers.[10]
Interestingly, three paragraphs later, he
also proposes using the popularity of a
hyperlink (how often it is followed) as a
measure of its usefulness, but does not
consider how such a measure might be
gamed, as I discuss next.
Collaborative ratings
Information technology going social can
open up new opportunities. Many re-
searchers and developers have noted that
information technology allows for the
tracking and logging of the information-
seeking behavior of masses of users. One
oft-stated suggestion is to gather informa-
tion about preferences from users’ implicit
choices, by keeping track of which hyper-
links are followed, which documents are
read, and how long users spend reading
documents. It is hypothesized that this
information can be used to assess the popu-
larity, importance, and quality of the infor-
mation being accessed, and used to improve
Web-site structure and search-engine rank-
ing algorithms. Again, unanticipated behav-
ior might undermine the integrity of these
systems. If the results of these algorithms
lead to commercially important conse-
quences, such as changing a site’s ranking
within search results, then people will be
likely to write programs to simulate users’
visiting the Web pages of interest, and
countermeasures will be required.
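A small Python sketch (invented log data) shows why raw click counts are so easy to distort once they carry commercial weight.

    from collections import Counter

    click_log = ["page_a", "page_b", "page_a", "page_c"]   # genuine visits
    click_log += ["page_c"] * 50                            # scripted visits to one's own page
    print(Counter(click_log).most_common(1))                # page_c now looks the most useful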
Researchers are also making use of ex-
plicit rating information, most notably in
what is known as collaborative-filtering
systems or recommender systems.[11] Collab-
orative-filtering systems are based on the
commonsense notion that people value the
recommendations of people whose recom-
mendations they have agreed with in the
past. When new users register with a collabo-
rative-filtering system, they are asked to
assign ratings to a set of items (such as
movies, recipes, or jokes). Their opinions are
then matched against those of others using
the system, and similar users are identified.
After this, the system can recommend addi-
tional items to the new users, based on those
that have already been rated by similar users.
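Here is a minimal sketch of this matching step in Python, with toy ratings and a deliberately simple similarity measure (not the algorithm of any deployed recommender system).

    ratings = {
        "alice": {"movie_a": 5, "movie_b": 1, "movie_c": 4},
        "bob":   {"movie_a": 4, "movie_b": 2, "movie_c": 5, "movie_d": 5},
        "carol": {"movie_a": 1, "movie_b": 5, "movie_d": 2},
    }

    def similarity(u, v):
        shared = set(ratings[u]) & set(ratings[v])
        if not shared:
            return 0.0
        # Inverse of the mean absolute rating difference on co-rated items.
        diff = sum(abs(ratings[u][i] - ratings[v][i]) for i in shared) / len(shared)
        return 1.0 / (1.0 + diff)

    def recommend(user):
        # Recommend items the most similar other user liked that this user has not rated.
        nearest = max((v for v in ratings if v != user), key=lambda v: similarity(user, v))
        return [i for i, r in ratings[nearest].items()
                if i not in ratings[user] and r >= 4]

    print(recommend("alice"))  # ['movie_d'], because bob's tastes most resemble alice's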
Collaborative filtering is a social phe-
nomenon. Researchers have discussed
some of the social dilemmas that can work
to the detriment of such systems, especially
issues having to do with motivating people
to be initial reviewers rather than waiting
for others to create the ratings.[11]
However, as we’ve seen, there are less
obvious kinds of interactions that can de-
grade the system’s behavior, which arise
only because large masses of people use
the same system.
In a recent manuscript, Brent Chun
points out the motivations people might
have for deceiving the system and some
ways in which they might carry out this
deceit.[12]
He proposes that companies
whose services are being rated might
attempt to affect the ratings they receive
or downgrade the ratings of their competi-
tors, specific interest groups might try to
further their causes by giving negative
ratings to companies or products that con-
flict with their beliefs, and collaborative-
filtering companies themselves might try
to sabotage the ratings of their competi-
tors. Chun suggests ways people might
attack the ratings databases, including
conventional security threats such as
breaking into the system to steal or mod-
ify the database. He goes on to discuss
more ingenious means for defrauding
these systems, such as rating the same
item multiple times using large numbers
of pseudonym identities, borrowing other
users’ identities, and collusion within
groups of authentic users to downgrade an
item’s rating.
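The pseudonym attack, in particular, needs nothing more sophisticated than patience, as a toy calculation shows (the numbers are invented; real systems use more robust aggregation, which is exactly the kind of arms race such attacks set off).

    honest_ratings = [4, 5, 4, 5, 4]     # genuine users rating a competitor's item
    shill_ratings = [1] * 20             # one attacker, twenty pseudonymous identities
    print(sum(honest_ratings) / len(honest_ratings))   # 4.4 before the attack
    combined = honest_ratings + shill_ratings
    print(sum(combined) / len(combined))               # 1.68: the shills dominate the average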
Why does this happen?
These behaviors seem to occur only when
a large cross-section of society uses the
same technology and information simulta-
neously, and when that information is of
general interest. As I’ve noted, defensive
driving is not necessary if there are no other
drivers on the road.
What is interesting about the phenomena
described here is that social interactions
occur with technology whose use does not
obviously result in such interactions. It was
not obvious that the Web would lead to
gaming of information-retrieval systems,
nor that the domain-name facility would
lead to deceptive naming practices.
Which kinds of technology are suscepti-
ble to this kind of behavior? We can draw
distinctions between technologies that are
self-consciously about interactions among
individuals and groups, and those that os-
tensibly have no reason to consider collu-
sion and arms races.
Here I will venture a classification.
Three conditions must hold for this situa-
tion to arise.
• First, the system must network a large
cross-section of society, the members of
which have partially conflicting goals.
• Second, there must be value associated
with use of the system: power, prestige,
financial gain, and so on.
• Third, and least obvious, the technology
must involve human use of information
in some human-understandable form.
Ramifications for information
systems design
The introduction of social forces onto
the landscape of information technology
brings up issues that are foreign to tradi-
tional computer-science training.
Computer scientists are taught to antici-
pate and handle all possible kinds of input,
but not at the level of granularity necessary
to address these considerations. Program-
mers check the data type (string, integer,
object pointer) and the ranges that can be
taken on by these data types. A program-
mer learns to test for very long strings and
empty strings, and perhaps whether or not a
string matches a string that has been de-
fined internally, but does not consider
inquiring into whether or not the content of
the string represents something socially
unacceptable, something deceitful, or some-
thing fraudulent. Notions of read/write
protection and computer security control
who has access to machines and data, but
do not attempt to control fraudulent or
deceitful use of the technology.
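A short Python sketch of the contrast (the function and its checks are hypothetical, but representative of ordinary input validation):

    def validate_title(title):
        # The checks a programmer is trained to make: type, length, printable form.
        if not isinstance(title, str):
            return False
        if not 1 <= len(title) <= 200:
            return False
        return title.isprintable()

    # Nothing here can tell whether the title is deceitful: a competitor's name,
    # a misleading claim, a hidden keyword list. That judgment lies outside what
    # types and ranges can express.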
Perhaps, however, this is not the proper
role of the designer of an information tech-
nology system. After all, we don’t want
word processors that censor what a user is
typing. It could be argued that most of the
interesting behaviors discussed above arise
because documents on the Web are not
monitored and controlled by the social
norms that are usually associated with pub-
lishing. It might also be the case that it
should be left to the legal system to prevent
certain forms of unfair business practices
that result from networked information
systems. (This has already begun to happen
in some cases.[13])
The interdisciplinary field of human-
computer interaction is gradually achieving
increased recognition within
traditional computer science. HCI advocates
the design of computer systems from a hu-
man-centric viewpoint, and advises us on
how to create systems that “generate posi-
tive feelings of success, competence, mas-
tery, and clarity in the user community.”[14]
Clare-Marie Karat has gone so far as to pro-
claim a User’s Bill of Rights that
underscores the design goals of HCI.[15] Ben
Shneiderman and Anne Rose suggest that
designers of information systems create
“Social Impact Statements,” modeled after
Environmental Impact Statements, to help
ensure that the technology we create
achieves its intended goals while at the same
time serving human needs and protecting
individual rights.[16]
Their framework empha-
sizes the importance of defining the stake-
holders of the system—not just who will use
it directly, but also who will be indirectly
affected by its use. Now that a wider range
of information technology is going social,
designers should begin to consider whether
the stakeholders are, in fact, everyone.
Acknowledgments
This essay has benefited from conversations
with Hal Varian, Doug Tygar, Bob Glushko,
Jonathan Grudin, Brent Chun, Jef Raskin, and
the SIMS masters students. Thanks also to Don
Kimber.
References
1. “Zdnet: Search Engines Battle the New
Spam: Make Sure Your Meta Tags Don’t
Get You Disqualified,” ZD Net Internet
MegaSite Magazine, Jan. 19, 1998; http://
www.zdnet.com/icom/content/anchors/
199801/19/new.spam/index.html
2. J. Grudin and L. Palen, “Emerging Group-
ware Successes in Major Corporations:
Studies of Adoption and Adaptation,” Proc.
First Int’l Conf. Worldwide Computing & Its
Applications (WWCA97): Emerging Tech-
nologies for Network Computing, ACM
Press, N.Y., 1997, pp. 142-183.
3. T. Berners-Lee, “Design Issues,” http://
www.w3.org/DesignIssues/.
4. Proc. ACM Hypertext 89, ACM Press, 1989.
5. F.G. Halasz, “Seven Issues: Revisited,”
Hypertext ’91 Closing Plenary; http://
www.parc.xerox.com/spl/projects/
halasz-keynote/transcript.html.
6. R. Trigg, L. Suchman, and F. Halasz. “Sup-
porting Collaboration in NoteCards,” Proc.
Computer-Supported Cooperative Work
Conf. (CSCW’86), ACM Press, 1986, pp.
147-153.
7. E. Barrett, ed., Society of Text, MIT Press,
Cambridge, Mass., 1989.
8. P. Samuelson and R.J. Glushko, “Intellec-
tual Property Rights for Digital Library and
Hypertext Publishing Systems: An Analysis
of Xanadu,” Hypertext, 1991, pp. 39-50.
9. J. Nielsen, Hypertext & Hypermedia, Acad-
emic Press, New York, 1990.
10. P. Resnick and H.R. Varian, “Recommender
Systems—Introduction to the Special Sec-
tion,” CACM, Vol. 40, No. 3, 1997, pp. 56-58.
11. C. Avery, P. Resnick, and R. Zeckhauser,
“The Market for Evaluations,” to appear in
the American Economic Review;
http://www.si.umich.edu/~presnick.
12. B. Chun, “Security in Collaborative Filter-
ing Systems,” 1998, http://www.cs.berkeley.
edu/~bnc
13. S.M. Abel and C.L. Ellerbach, “Trademark
Issues in Cyberspace: The Brave New Fron-
tier,” Fenwick and West LLP, San Francis-
co, 1998; http://www.fenwick.com/pub/
publications.htm.
14. B. Shneiderman, Designing the User Inter-
face (3rd Ed.), Addison Wesley, Reading,
Mass., 1998.
15. C.M. Karat, “Guaranteeing Rights for the
User,” Comm. ACM, Vol. 41, No. 12, Dec.
1998, pp. 29-31.
16. B. Shneiderman and A. Rose, Social Impact
Statements: Engaging Public Participation
in Information Technology Design, Tech.
Report CS-TR-3537, Univ. of Maryland,
College Park, Md., 1995; http://www.cs.
umd.edu/TRs/authors/Ben_Shneiderman.
html.
Living networked in a wired world
Barry Wellman, University of Toronto
The world is composed of networks—
not groups—both computer networks and
social networks. When computer networks
connect people and organizations, they are
the infrastructure of social networks. Just
as a computer network is a set of machines
connected by a set of cables (or airwaves),
a social network is a set of people (or orga-
nizations or other social entities) connected
by a set of socially meaningful relation-
ships (see Figure 1). Although this might
be obvious to many computer scientists,
the implications of living in a networked
world are nonobvious.
Computer scientists have been centrally
involved in a paradigm shift, not only in the
way we think about things but in the way
that society is organized. I call it the shift
from living in “little boxes” to living in net-
worked societies.[1]
I am going to describe its
implications for how we work, commune,
and keep house, using the neologism
glocalization. Members of little-box soci-
eties only deal with fellow members of each
of the few groups to which they belong:
usually their homes,
neighborhoods, work-
groups, and organiza-
tions. We are moving
away from a group-
based society to a soci-
ety in which boundaries
are more permeable,
interactions are with di-
verse others, linkages
switch between multi-
ple networks, and hier-
archies (when they
exist) are flatter and
sometimes recursive.
The little-boxes
metaphor is that people
are socially and cogni-
tively encapsulated by
all-confining, socially
conforming groups.
Most people think of
the world in terms of
groups, boundaries and
hierarchies.[2] They see
themselves as belong-
ing to a single work
group in a single orga-
nization; they live in a
household in a neigh-
borhood; they belong to a kinship group
(one each for themselves and their spouses)
and to voluntary organizations such as
churches, bowling leagues, and the Com-
puter Society. All of these social structures
appear to be bodies with precise boundaries
for inclusion (and therefore exclusion).
Each has an internal organization that is
often hierarchically structured: supervisors
and employees, parents and children, pas-
tors and churchgoers, the Computer Society
executive and its members. In such a little-
box society, we only deal with the people in
each of our bounded groups when we are
participating as members of that group.
We have moved from hierarchically
arranged, densely knit, bounded groups to
less bounded and more sparsely knit social
networks. (Actually, a group is a type of
social network, one that is tightly bounded
and densely knit, but it is cognitively easier
to compare groups with more loosely
bounded and sparsely knit networks.) Em-
pirical observation has shown this shift in
many milieus. Instead of hierarchical trees,
management by network has people report-
ing to shifting sets of supervisors, peers,
and even nominal subordinates.

Figure 1. A social network. The labeled boxes represent individuals (people or organizations) and the lines represent relations between them (which could be love, money, or influence, for example). Note that some of the boxes (such as F and W) have two lines connecting them; they are tied by two relations. The graph shows two densely knit clusters with crosscutting ties (B and O, for example). (Courtesy of Cathleen McGrath, Jim Blythe, and David Krackhardt; http://www.andrew.cmu.edu/user/cm3t/groups.html.)

Unless
people are tethered to assembly lines, their
work relations often spill over their work
group’s boundaries, and may connect them
to outside organizations.[3]
Organizations
sometimes distrust salespeople and pur-
chasing agents because their jobs entail
more contact outside the organization than
within it: they are what sociologists call
marginal people who are not totally com-
mitted to one group.
Rather than belonging to a single family,
people often have complex household rela-
tions, with stepchildren, ex-marital partners
(and their progeny), and multiple sets of in-
laws. People in the western world usually
have more friends outside their neighbor-
hood than within it: indeed, many people
have more ties outside their metropolitan
areas than within them.[4]
People might attend
several churches, and they may have to
decide between going bowling or attending
a Computer Society meeting. In short,
while people think they are members of
social groups, they more often function as
operators of their social networks. (See the
sidebar for pointers to more information
about social network analysis.)
Although I have constructed my argu-
ment as a then/now contrast, people have
always functioned in social networks to
some extent. Consider the comings-and-
goings in Jane Austen’s preindustrial
England. Whatever their “sense and sen-
sibility,” her novel’s characters are for-
ever galloping past their neighbors to
visit their far-flung friends and relatives.[5]
Kenneth Scherzer has shown that guests
traveled considerable distances to attend
New York City weddings in the mid-19th
century: they provided us with empirical
proof by signing the wedding registers as
witnesses.[6]
The telegraph enabled alert
businesspeople by the middle of the 19th
century to manage their affairs at a dis-
tance,[7]
intensifying a tradition of spatially
dispersed business empires that traders
had heretofore held together through kin-
ship loyalties and written correspond-
ence.[8]
Even before the coming of com-
puter-mediated communication, cars,
planes and phones maintained far-flung
relations. For example, in the 1960s and
1970s, North Americans’ important ties
of sociability and support rarely were
confined to their neighborhoods. Many
were on the other side of the metropolitan
area; some were across the continent or
the ocean.[4,9]
Characteristics of computer
networks as social networks
CMC—such as the Internet, news-
groups, and videoconferencing—makes it
easier to be socially networked.[10]
• CMC is usually asynchronous, allowing
people in different time zones or on
different schedules to communicate. For
example, although the computer scien-
tists our group has studied work in the
same office, their different work sched-
ules lead them to use e-mail.[11]
• CMC is rapid, fostering a high velocity of
exchanges, sometimes ill-considered.[12]
• CMC supports emotional, nuanced, and
complex interactions, belying early
fears that it would be useful only for
simple, instrumental exchanges.
• CMC has taken on its own norms, pro-
cedures and ethos, with CMC partici-
pants showing greater creativity and
emotional swings than those talking
face to face.[13]
• The absence of direct feedback in most
CMC encourages more extreme forms
of communication. People input mes-
sages to screens that they would never
say to another person palpably present
in person or on the telephone.
• The ability of communications to be
forwarded supports transitivity, as when
messages get forwarded to friends of
friends. The inclusion of headers in for-
warded messages allows indirect ties to
become direct relationships. This aids
the exchange of information that cuts
across group boundaries. Such crosscut-
ting ties link and integrate social
groups, instead of such groups being
isolated in tightly bounded little boxes.
• E-mail, the only widely available form
of CMC, supports easy accessibility.
This has led to a leveling of perceived
hierarchies, with all feeling they have
access to all. E-mail is not unique in
this. Telephone networks also support
easy accessibility, so much so that busy
and reclusive people have constructed
social (secretaries) and technical (voice-
mail) barriers to access. CMC will
probably engender the same reaction,
once techno-euphoria fades, with agents
both providing background detail about
callers and keeping unwanted callers at
a distance.
• The ease of sending messages to large
numbers of recipients allows partici-
pants to remain in contact with multiple
social milieus.
• E-mail is especially useful for maintain-
ing contact with “weak ties”—persons
and groups with whom one does not
strong relationships of work, kinship,
sociability, support, or information
exchange. Because weak ties are more
socially heterogeneous than strong ties,
they connect people to diverse social
milieus and provide a wider range of
information.[14]
• CMC’s accessibility, velocity and multi-
ple-message characteristics indirectly
connect the entire world in five steps or
less.[15]
Yet, unlike computer networks, in which all nodes are ultimately connected, there is significant decoupling in social networks. Hence, information diffuses rapidly through computer-supported social networks, but neither universally nor uniformly.[16,17]
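The same breadth-first idea can illustrate both halves of this point: path lengths through acquaintances are short, yet diffusion is neither universal nor uniform. The sketch below is again only an invented toy example, not data from the studies cited.

    from collections import deque

    def hops_from(start, graph):
        """Number of steps from start to everyone reachable through ties."""
        dist, queue = {start: 0}, deque([start])
        while queue:
            person = queue.popleft()
            for contact in graph.get(person, ()):
                if contact not in dist:
                    dist[contact] = dist[person] + 1
                    queue.append(contact)
        return dist

    # Invented example: a short chain of acquaintanceship plus one person
    # who is online but has no ties to anyone in this particular network.
    world = {
        "ann": {"bob"}, "bob": {"ann", "cat"}, "cat": {"bob", "dan"},
        "dan": {"cat", "eve"}, "eve": {"dan"},
        "zoe": set(),
    }

    hops = hops_from("ann", world)
    print(hops)                       # eve is only four steps away ...
    print(set(world) - set(hops))     # ... but the news never reaches zoe

Short paths are not the same as hearing the news at the same time, or at all: distances differ from person to person, and socially decoupled people are never reached, even though the underlying computer network could deliver a message to any of them.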
Although most experimental studies of
CMC look only at screen-to-screen rela-
tionships, people who relate to each other
online often relate to each other offline:
face-to-face, by phone, or even on paper.[12] Many studies have focused on CMC without realizing that such interactions are only part of the life that extends beyond screens.
We can only comprehend the role of CMC
if the total tie is taken into account, and not
just the on-screen relationship. For exam-
ple, our group’s study of a wired suburb
has found that extremely fast and accessi-
ble Internet access spurs neighborly inter-
action as well as far-flung ties. Neighbors
use the Internet to arrange get-togethers
and to organize in opposition to real estate
developers. This neighborly interaction is
not surprising: Until wireless CMC be-
comes prevalent, people are largely tied to
their computers at their office or home
desktops. This is the phenomenon we call
glocalization: the situation of being
intensely global as a Net surfer while being
firmly rooted to the area around the computer screen and keyboard.[18,19]
Computer-mediated interactions are
socially situated as well as spatially situ-
ated. Thinking of computer networks as
social networks can move the study of
human-computer interactions beyond look-
ing only at the standard HCI concerns of
person-screen or person-screen-screen-
person interactions. Even when only two
persons communicate, they are not dancing
duets in isolation. Their interactions are
conditioned by the availability of others to
supply resources, cause problems, or
enforce norms.
Moreover, unlike laboratory experiments
of CMC, in real life CMC is often between
people who have different social character-
istics—such as gender and lifestyle—and
different social positions—such as supervisor and subordinate, or core and periphery. For example, our group’s study of
desktop videoconferencing saw supervisors
initiating more contact than subordinates.
Some coworkers in a separate office 100
kilometers away maintained autonomy
when their videoconferencing equipment
frequently “broke down.”
Even when there is unfettered computer
connectivity, not all persons or organiza-
tions are directly connected. A computer
network is not in itself a social network: it
is the technological infrastructure that
enhances the ability of people and organi-
zations to communicate, for better or worse.
How does living in networks differ from
living in groups?
(1) It enhances the ability to connect with
a large number of social milieus, while
decreasing involvement in any one
milieu.
(2) It decreases the control that any one
social milieu can have over us, while
decreasing the commitment of any one
milieu to a person’s well-being.
(3) It shifts interactions from those based
on characteristics people are born
with—such as age, gender, race and
ethnicity—to characteristics that they
have adopted throughout the life
course—such as lifestyles, shared
norms and voluntary interests.
(4) It fosters “cross-cutting” ties that link
and integrate social groups, instead of
such groups being isolated in tightly
bounded little boxes.
(5) It has increased choices while reduc-
ing the palpable group memberships
that provide a sense of belonging.
(6) In short, it has reduced the identity and
pressures of belonging to groups while
increasing opportunity, contingency,
globalization, and uncertainty through
participation in social networks.
References
1. M. Reynolds, Little Boxes, Schroeder Music, New York, 1963.
2. L. Freeman, “The Sociological Concept of Group: An Empirical Test of Two Models,” Am. J. Sociology, Vol. 98, 1992, pp. 152–166.
3. B. Wellman et al., “Computer Networks as Social Networks: Virtual Community, Computer Supported Cooperative Work and Telework,” Ann. Rev. Sociology, Vol. 22, 1996, pp. 213–238.
4. B. Wellman, ed., Networks in the Global Village, Westview Press, Boulder, Colo., 1999.
5. J. Austen, Sense and Sensibility, Penguin, Harmondsworth, UK, 1811 (1969).
6. K. Scherzer, The Unbounded Community: Neighborhood Life and Social Structure in New York City, 1830–1875, Duke Univ. Press, Durham, N.C., 1992.
7. A. Pred, Urban Growth and the Circulation of Information: The United States System of Cities, 1790–1840, Harvard Univ. Press, Cambridge, Mass., 1973.
8. N. Ferguson, The World’s Banker, Weidenfeld & Nicolson, London, 1998.
9. C. Fischer, To Dwell Among Friends, Univ. of California Press, Berkeley, Calif., 1982.
10. L. Garton and B. Wellman, “Social Impacts of Electronic Mail in Organizations: A Review of the Research Literature,” Communication Yearbook, Vol. 18, 1995, pp. 434–453.
11. C. Haythornthwaite and B. Wellman, “Work, Friendship and Media Use for Information Exchange in a Networked Organization,” J. Am. Soc. Information Science, Vol. 49, 1998, pp. 1101–1114.
12. J.B. Walther, J.F. Anderson, and D.W. Park, “Interpersonal Effects in Computer-Mediated Interaction: A Meta-Analysis of Social and Antisocial Communication,” Comm. Research, Vol. 21, 1994, pp. 460–487.
13. L. Sproull and S. Kiesler, Connections, MIT Press, 1991.
14. M. Granovetter, “The Strength of Weak Ties: A Network Theory Revisited,” Social Structure and Network Analysis, P. Marsden and N. Lin, eds., Sage, Beverly Hills, Calif., 1982, pp. 105–130.
15. H. White, “Search Parameters for the Small World Problem,” Social Forces, Vol. 49, 1970, pp. 259–264.
16. T. Valente, Network Models of the Diffusion of Innovations, Hampton Press, Cresskill, N.J., 1995.
17. B. Wellman and S.D. Berkowitz, eds., Social Structures: A Network Approach, 2nd ed., JAI Press, Greenwich, Conn., 1997.
18. K.N. Hampton and B. Wellman, “Netville: Glocalization and Civic Society,” presented to the American Sociological Association, San Francisco, 1998 (contact author for copies).
19. B. Wellman and M. Gulia, “Net Surfers Don’t Ride Alone,” Networks in the Global Village, B. Wellman, ed., Westview Press, 1999, pp. 331–366.
20. J. Scott, Social Network Analysis, Sage, London, 1991.
21. S. Wasserman and K. Faust, Social Network Analysis: Methods and Applications, Cambridge Univ. Press, Cambridge, UK, 1993.
22. B. Wellman, “An Electronic Group is Virtually a Social Network,” Culture of the Internet, S. Kiesler, ed., Lawrence Erlbaum, Mahwah, N.J., 1997, pp. 179–205.
Social network analysis
To learn more about the discipline of social network analysis, see the following sources:
• S. Wasserman and K. Faust, Social Network Analysis: Methods and Applications, Cam-
bridge Univ. Press, Cambridge, UK, 1993.
• J. Scott, Social Network Analysis, Sage, London, 1991.
• B. Wellman, “An Electronic Group is Virtually a Social Network,” Culture of the Internet,
S. Kiesler, ed., Lawrence Erlbaum, Mahwah, N.J., 1997, pp. 179–205.
• B. Wellman and S. D. Berkowitz, eds. Social Structures: A Network Approach, 2nd ed.,
JAI Press, Greenwich, Conn., 1997.
The International Network for Social Network Analysis (INSNA) has a Website: http://www.heinz.cmu.edu/project/INSNA/.