ABSTRACT
Title of dissertation: DISTRIBUTED TRUST MANAGEMENT
IN AUTONOMIC NETWORKS
Tao Jiang, Doctor of Philosophy, 2007
Dissertation directed by: Professor John S. Baras
Department of Electrical and Computer Engineering
The management of autonomic networks has attracted increasing attention
because of their wide applications and the difficulty of controlling them. Autonomic
networks are decentralized and self-organized. Without global knowledge of the
states of autonomic networks, it is difficult to predict the behavior of such networks
and thus to conduct proper network management and control. This dissertation is
the starting point of my effort to theoretically understand the complex characteristics
of autonomic networks. In particular, I focus on a specific application: distributed
trust management.
We view trust among users as a set of relations established on the basis of
trust credentials and as required by specified policies. Two important components
of a distributed trust management system are studied in this work: trust credential
distribution and trust evaluation. In autonomic networks, trust credentials are
distributed throughout the network. Given the mobility and dynamics of the networks,
it is important to distribute trust credentials properly, so that users are able to
efficiently obtain required credentials and update existing ones. I present a trust
credential distribution scheme based on network coding. After obtaining the needed
credentials, users require policies to evaluate the trustworthiness of targets in a
distributed way. In this dissertation, I model distributed trust evaluation as an
estimation problem and study trust evaluation policies based on local interactions.
I investigate the convergence of both deterministic and stochastic voting rules and
prove their effectiveness in the presence of misbehaving users.
Autonomic networks rely on collaboration among users. The conflict between
the benefit gained from collaboration and the cost collaboration requires naturally
leads to game-theoretic studies. I study collaboration based on cooperative games
with communication constraints and give the conditions under which users are willing
to collaborate. The results in this dissertation show that a well-designed trust
management system helps to enforce collaboration. Besides collaboration, I show
that trust can be applied to utility optimization problems as well. The effect of
trust values is that, in routing and scheduling problems, the trustworthiness of a
node is automatically considered and used. For example, packets will rarely be
routed to suspicious nodes.
DISTRIBUTED TRUST MANAGEMENT IN AUTONOMIC
NETWORKS
by
Tao Jiang
Dissertation submitted to the Faculty of the Graduate School of the
University of Maryland, College Park in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
2007
Advisory Committee:
Professor John S. Baras, Chair/Advisor
Professor Armand M. Makowski
Professor Virgil D. Gligor
Professor Gang Qu
Professor Nick Roussopoulos
© Copyright by
Tao Jiang
2007
DEDICATION
To my parents Xiancheng and Qingfang,
to my sister Rong,
and to Peng
for their invaluable love and support
ACKNOWLEDGMENTS
I owe my gratitude to all the people who have made this thesis possible and
because of whom my graduate experience has been one that I will cherish forever.
First and foremost, I would like to thank my advisor, Dr. John S. Baras, for giving me
the opportunity to work on challenging and extremely interesting problems. His deep
insight into amazingly broad areas of research, his endless enthusiasm for challenging
problems and his timely encouragement have been the most important support for
my work. I would also like to thank Dr. Armand M. Makowski, Dr. Virgil D. Gligor,
Dr. Gang Qu and Dr. Nick Roussopoulos for agreeing to serve on my committee and
for providing feedback.
Sincere thanks to my colleagues in the HyNet center and SEIL lab. Special
thanks to Georgios Theodorakopoulos, Alvaro Cardenas, Huigang Chen, Maben
Rabi, Svetlana Radosavac, Pedram Hovareshti and Punyaslok Purkayastha, with
whom I always had inspiring and fruitful discussions. I would also like to thank
Kim Edwards for her great administrative support.
This dissertation is prepared through collaborative participation in the
Communications and Networks Consortium sponsored by the U.S. Army Research
Laboratory under the Collaborative Technology Alliance Program, Cooperative
Agreement DAAD19-01-2-0011. It is also supported by the U.S. Army Research
Office under grant No. DAAD19-01-1-0494.
TABLE OF CONTENTS

List of Tables
List of Figures
1 Introduction
  1.1 Autonomic Networks
  1.2 Notions of Trust
  1.3 Context of Trust
  1.4 Global Trust or Local Trust
  1.5 Trust Metrics and Uncertainty
  1.6 Examples of Trust and Reputation Systems
    1.6.1 Reputation management in eBay
    1.6.2 Product review sites
    1.6.3 Expert sites
    1.6.4 Google's PageRank
    1.6.5 PGP web of trust
    1.6.6 Trust and reputation systems in P2P networks
  1.7 Trust Management in Autonomic Networks
2 Trust Credential Distribution
  2.1 Overview
  2.2 Types of Trust Credentials
  2.3 Network Coding
  2.4 Network Coding Based Scheme
  2.5 Comparison of Distribution Schemes
3 Trust Evaluation
  3.1 Overview
  3.2 An Example on Distributed Trust Evaluation
  3.3 Global Trust Evaluation
    3.3.1 Network Model
    3.3.2 Direct Trust
    3.3.3 Voting Rule
  3.4 Deterministic Voting Rule
    3.4.1 Convergence
      3.4.1.1 Voting in a virtuous network
      3.4.1.2 Voting with leaders
    3.4.2 Network Topology
  3.5 Stochastic Voting Rule
    3.5.1 Stochastic Model
    3.5.2 Estimation
    3.5.3 Bayesian Network
    3.5.4 Markov Random Field
    3.5.5 Stochastic Voting
      3.5.5.1 Update Sequence
      3.5.5.2 Markov Chain Interpretation
      3.5.5.3 Convergence
    3.5.6 Binary Example
      3.5.6.1 Ising Model and Spin Glasses
      3.5.6.2 Virtuous Network
      3.5.6.3 Adversary Model
      3.5.6.4 Network Topology
4 Applications of Trust
  4.1 Trust and Cooperation: the Game Theoretic View
    4.1.1 Problem Formulation
    4.1.2 Coalitional Games
    4.1.3 Cooperation in games
      4.1.3.1 Cooperative games with negotiation
      4.1.3.2 Trust mechanism
    4.1.4 Dynamics of Cooperation
      4.1.4.1 System model
      4.1.4.2 Game evolution
  4.2 Trust Aware Cross-Layer Optimization
    4.2.1 System Model
      4.2.1.1 Trust
      4.2.1.2 Data flow and utility function
      4.2.1.3 Interference model and stability
    4.2.2 Utility Optimization and Dual Decomposition
    4.2.3 Distributed Algorithm
Bibliography
LIST OF TABLES

3.1 The Conditional Probability Pr(d_ij | s_i, s_j)
LIST OF FIGURES

1.1 Trust management system in autonomic networks
2.1 Alice and Cathy want to obtain credentials on Bob
2.2 A Simple Network Coding Example
2.3 Operations of our network coding based scheme
2.4 Number of nodes finishing over time
2.5 Network load over time
3.1 Procedure of trust evaluation
3.2 A trust graph
3.3 The second largest eigenvalue λ_2
3.4 Convergence time
3.5 Bipartite Bayesian Network for the Trust Graph 3.2
3.6 General Factor Graph for Trust Graphs
3.7 Factor Graph for the Trust Graph 3.2
3.8 A Tree Trust Graph
3.9 The Tree Factor Graph of Figure 3.8
3.10 Message Passing for the Tree Factor Graph
3.11 Part of the Markov Chain. Suppose S(k) = [1, −1, 1, 1]; then S(k+1) either flips one element of S(k) or stays at the state S(k).
3.12 Transitions between S and R: p_{S,R} = q_i Pr[s̄_i | S] and p_{R,S} = q_i Pr[s_i | R]
3.13 P_correct vs. b with β < 0 and β > 0
3.14 P_correct vs. b with β = 0
3.15 P_c vs. b with link errors p_e; β = 0
3.16 P_c vs. the percentage of adversaries P_m, with link error p_e = 0.05 and degree of certainty b = 1
3.17 The effect of network topology. P_rw is the percentage of shortcuts in the WS small-world model; a network with P_rw ∈ [0.01, 0.1] has the small-world property. p_e = 0.1
3.18 The effect of average degree with p_e = 0.1
4.1 System operation block-graph for a typical node
4.2 Algorithm for game evolution modeling trust revocation
4.3 Percentage of cooperating pairs vs. negative links
4.4 Average payoffs vs. negative links
4.5 An example network with 2 routes for flow from s to d
Chapter 1
Introduction
1.1 Autonomic Networks
With the explosive growth of networking techniques in the last decade, connecting
to the world from any place, at any time and by anybody is no longer just a dream.
Meanwhile, the fast proliferation of networked devices and applications, such as
sensor networks and pervasive computing, integrates information technology into
our environments. These dramatic changes create unique challenges for network
management and control. Innovative solutions are required for managing network
mobility and dynamics, enormous volumes of data and massive information
exchanges.
The traditional centralized server-based management can no longer satisfy the
requirements of next generation networks, so new concepts of network infrastructure
and management have been proposed. For instance, mobile ad hoc networks
(MANETs) [47] aim to provide wireless network services without relying on any
infrastructure. The wireless mesh network, which has been implemented by a number
of wireless network groups [2, 19], is essentially a MANET over an 802.11 wireless
LAN, which can be set up at almost "zero cost" in highly mobile environments.
Another example is peer-to-peer (P2P) networks [28, 67], where large amounts of
data are shared among millions of network users. All the aforementioned new types
of networks share a common characteristic: they are distributed and self-organized,
and thus they are sometimes called autonomic networks [5] in the literature. In this
work, our focus is on the fundamental principles and properties of these networks,
rather than on a particular network prototype.
An autonomic network is one that is self-configuring, self-optimizing, self-healing
and self-protecting. Such a network requires minimal administration, mostly
involving policy-level management. Entities¹ in autonomic networks all participate
in network control through individual interactions. Achieving desired network
management goals under such "anarchy" is not an easy job. A small misbehavior by
an individual might lead to a network-wide "avalanche". The goal of this dissertation
is to understand and analyze the behavior and properties of these "anarchical"
autonomic networks.
Autonomic systems have been studied in various scientific fields: in biological
systems, swarms of bacteria, insects and animals yield sophisticated collective
behaviors based on simple individual interactions; in physics, groups of particles
interact with their neighbors to form a magnet; even in human society, economists
have modeled individual human interactions using iterated games. In this
dissertation, some results are inspired by those studies, such as the Ising model and
the spin glass model from statistical physics, which will be investigated in detail in
Chapter 3.
As a case study, we employ a specific application in autonomic networks –
distributed trust management. Trust is important and critical for network security.
It integrates with several components of network management, such as risk
management, access control and authentication. Trust management is to collect,
analyze and present trust-related evidence and to make assessments and decisions
regarding trust relationships between entities in a network [10].

¹ In this dissertation, the terms node, user and entity are used interchangeably.
1.2 Notions of Trust
In order to understand the phenomenon of trust relationships, we ?rst need
to understand the meaning of trust. Various di?erent de?nitions of the term trust
have been proposed.
In an informal way, trust in an entity can be explained as the belief that under
certain circumstances this entity will perform in a certain way. More formally,
Merrian-Websters Collegiate Dictionary states, that trust is the
“assured reliance on the character, ability, strength, or truth of someone
or something.”
One definition popular in the networking field is from Diego Gambetta [24]: "trust
(or, symmetrically, distrust) is a particular level of the subjective probability with
which an agent assesses that another agent or group of agents will perform a
particular action...". We identify two relevant points in this definition: firstly, trust is
used to predict an entity's future behavior; secondly, trust is subjective. A trust
relation is usually specified between two parties: a trustor, the subject that trusts
a target entity, and the trustee, i.e. the entity that is trusted. The trustor is an
entity capable of thinking and making decisions, whereas the trustee can be
anything from a person or physical entity to abstract notions such as software or
a cryptographic key [39]. In the following, we identify various dimensions of trust
relationships in addition to trustor and trustee:
Context Trust has a very broad meaning, and there is no need to trust an
entity completely if one expects it only to perform a limited task. Different forms of
trust have been identified in the literature. Besides those related to security, such
as access control, authentication and integrity, trust can be related to providing
service, delegation, correctness and so on. Here are some examples:
• I trust my doctor to competently perform an operation.
• I trust my database manager to decide who has access to my database.
• A Trusted Computing Base (TCB) is trusted to execute on a machine to support
the required security policy.
Metric A trust metric captures the quality of a trust relationship, ranging
from complete distrust, through a neutral measure, to full trust. The more
a trustee is trusted, the higher the trust metric is supposed to be. Trust metrics
may take uncertainty into consideration. The certainty of a trust metric specifies the
confidence of the trustor in his or her estimation of the trustee. If this estimation is
gained from only a few personal experiences, the certainty is supposed to be low. If,
on the other hand, the estimation is based on certificates signed by trusted third
parties, the certainty is usually high.
Transitivity Transitivity of trust does not always hold. In most cases,
however, we accept transitivity in a restricted manner. Suppose Alice wants to
evaluate entity X, but she can only get Bob's evaluation of X. Alice accepts this
evaluation only if she believes the policy Bob used is a subset of her policy. Similar
discussions can be found in [9, 21]. For instance, Alice only trusts board-certified
doctors as her Primary Care Physicians (PCPs), and her health insurance company
recommends her a couple of doctors as candidates. Alice believes the company only
recommends doctors who are providers of the insurance plan and board certified.
So she trusts those recommended doctors, because the policy of the company is a
subset of (stronger than) her policy.
Dynamics A trust relationship is not static, but changes dynamically with
various incidents. If, for instance, a patient has a good experience with his
doctor, the trust of the patient in his doctor will increase. In addition to one's own
experiences, trust estimations received from others influence one's own trust
assessment as well. Lastly, trust relationships may also change over time even when no
new experiences have been made. For instance, the trustworthiness of a password
decreases dramatically once its validity period has passed.
Trust, risk and reputation Risk and reputation are two factors that
contribute significantly to the trust decision. Risk refers to the probability of failure of
a transaction with respect to a specific context. Reputation refers to the cumulative
view of the interactions with a party within a context. Both concepts are
subjective and can be used in making a trust decision. A very risky venture has a
higher chance of leading to a decision not to trust, while a less risky transaction
will normally lead to a positive trust decision. Entities with a positive reputation
stand a higher chance of being trusted for future interactions, while entities with a
negative reputation will normally lead to a distrust decision. Generally, reputation
only partly affects trust. In this dissertation, however, we do not explicitly
distinguish trust and reputation.
In the following three sections, we clarify some of the above dimensions in
detail.
1.3 Context of Trust
As pointed out in the previous section, applications can define various context
areas in which entities can be trusted. It is important to note that these areas are
not necessarily independent of each other. Different kinds of dependencies can
exist among the context areas: "instance-of" relationships are one-level relationships
for classification, "is-a" relationships provide generalization, "part-of" relationships
enable aggregation, and surely many other – potentially application-specific – forms
of dependencies between context areas can be imagined. For the sake of the trust
model, however, we do not need to model all these relationships in detail. Instead,
we model the asymmetric semantic distance between the context areas and thereby
abstract from the kind of dependency. The metric chosen for the semantic distance
is a real number in the interval [0, 1]. A distance close to 1 represents a high
dependency; a distance of 0 refers to no dependency. We therefore organize the
trust context areas as a weighted directed graph. This allows us to spread the
impact of a trust update in one area to related areas. The context areas and the
semantic distances between the areas are specified by the applications to be
supported by trust management. However, due to the subjectivity of trust, each user
may locally modify the distances to suit his or her personal views.
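To make the idea concrete, the sketch below spreads a trust update in one context area to related areas, weighted by the semantic distance on the directed context graph. It is a minimal illustration under our own assumptions: the example graph, the names `distances` and `propagate_update`, and the one-hop multiplicative damping are ours, not prescribed by the model above.

```python
# Minimal sketch: propagating a trust update across context areas.
# Asymmetric semantic distances in [0, 1]: distances[a][b] says how strongly
# an update in area a should influence area b (illustrative data).
distances = {
    "routing":        {"forwarding": 0.9, "key_exchange": 0.2},
    "forwarding":     {"routing": 0.8},
    "key_exchange":   {"authentication": 0.7},
    "authentication": {},
}

def propagate_update(area, delta, trust):
    """Apply a trust update `delta` to `area`, then spread a distance-weighted
    fraction of it to each dependent area (one hop, for simplicity)."""
    clamp = lambda x: max(-1.0, min(1.0, x))          # keep values in [-1, 1]
    trust[area] = clamp(trust.get(area, 0.0) + delta)
    for neighbor, dist in distances.get(area, {}).items():
        trust[neighbor] = clamp(trust.get(neighbor, 0.0) + dist * delta)

trust = {}
propagate_update("routing", +0.4, trust)
print(trust)  # routing rises by 0.4; forwarding and key_exchange by less
```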
1.4 Global Trust or Local Trust
Global trust metrics assign to a given user a unique trust score, independently
of the user that is evaluating the other user’s trustworthiness. On the other hand,
a local trust metric (personalized trust metric) provides a personalized trust score
that depends on the point of view of the evaluating user [49].
Global trust metrics are appropriate when users share the same criterion for
trustworthiness. For instance, in the context of routing, a node is trusted if it forwards
others' packets, and distrusted if it drops packets. Local trust, in contrast, depends
on individual preferences.
1.5 Trust Metrics and Uncertainty
Various representations of trust values exist in the related work. Trust
values can be depicted as real numbers in an interval such as [−1, +1], as done by
Jonker and Treur [36] and Sabater [60], or as probabilities in [0, 1], as proposed
among others by Jøsang and Ismail [37], Yu and Singh [71], and Kinateder
and Rothermel [40]. Others propose discrete values, such as the binary representation
by Blaze and Feigenbaum [11] or the four discrete values in PGP [73].
Not all the aforementioned algorithms support the computation of an uncertainty
value, which states the quality of the trust assessment represented in the trust
metric. Where uncertainty is supported [71, 37, 60], it is specified in the interval [0, 1],
where 0 describes complete uncertainty and 1 total certainty.
The metric used in this dissertation is a real number in the interval [−1, 1].
Complete distrust is represented by −1, whereas 1 corresponds to full trust.
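As a small illustration of the convention just stated, the snippet below wraps a trust value in [−1, 1] together with an optional certainty in [0, 1]. The class name and fields are our own illustrative choices, not part of the dissertation's formal model.

```python
from dataclasses import dataclass

@dataclass
class TrustOpinion:
    """A trust value in [-1, 1] (-1 = complete distrust, 1 = full trust)
    with a certainty in [0, 1] (0 = complete uncertainty, 1 = total certainty)."""
    value: float
    certainty: float = 1.0

    def __post_init__(self):
        if not -1.0 <= self.value <= 1.0:
            raise ValueError("trust value must lie in [-1, 1]")
        if not 0.0 <= self.certainty <= 1.0:
            raise ValueError("certainty must lie in [0, 1]")

# Example: a weakly positive opinion based on only a few personal experiences.
opinion = TrustOpinion(value=0.3, certainty=0.2)
```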
1.6 Examples of Trust and Reputation Systems
Trust and reputation systems are widely used to provide a basis for the choice
of transaction items and partners in online service provision. The basic idea is
to let participating nodes rate each other, for example after the completion of a
transaction, and use the aggregated ratings about a given node to derive a trust or
reputation score, which can assist other nodes in deciding whether or not to transact
with that node in the future. Trust and reputation systems have proven useful in
online auctioning systems such as eBay or online book stores such as Amazon, and
can be viewed as a substitute for the 'word-of-mouth' mechanisms observed in offline
personal encounters. As a result, it has recently been proposed that trust management
systems be extended to autonomic systems such as Internet-based peer-to-peer
systems and mobile ad-hoc networks. The need here is predominantly to provide
incentives for cooperation and to protect the network from node misbehavior.
In this section, we describe the most well-known applications of online trust
and reputation systems². We start with systems having centralized network
architectures, and then describe existing distributed systems.

² Some of the examples are from the work by Jøsang et al. [38].
1.6.1 Reputation management in eBay
eBay (http://www.ebay.com) uses feedback profiles for rating its members and
establishing the members' reputation. Members rate their trading partners with
Positive, Negative or Neutral feedback, and briefly explain why, after completion of
a transaction. The reputation score is calculated by assigning 1 point for each
positive comment, 0 points for each neutral comment and −1 point for each negative
comment. This feedback mechanism is a centralized reputation system, where eBay
collects all the ratings and computes the scores. The reputation score of each
participant is the sum of all feedback points (from unique users).
The eBay reputation system is very primitive and can be quite misleading.
We list the following problems with the system:
• It suffers from a single point of failure because it is centralized.
• A participant with 100 positive and 10 negative ratings should intuitively
appear much less reputable than a participant with 90 positives and no negatives,
but on eBay both would have the same total reputation score, as the sketch
below illustrates.
• A seller may participate in many small transactions in order to build up a high
positive reputation, and then defraud one or more buyers on a high-priced
item.
• No special mechanism is provided to detect members that lie in their feedback.
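The following sketch, our own illustration rather than eBay's actual implementation, computes the summed score described above and contrasts it with a simple positive-fraction alternative, making the second problem concrete.

```python
def ebay_score(positive, neutral, negative):
    """Summed feedback score as described above: +1 / 0 / -1 per rating."""
    return positive * 1 + neutral * 0 + negative * (-1)

def positive_fraction(positive, negative):
    """A simple alternative: the fraction of positive ratings."""
    total = positive + negative
    return positive / total if total else 0.0

# Two sellers that the summed score cannot tell apart:
print(ebay_score(100, 0, 10))        # 90
print(ebay_score(90, 0, 0))          # 90  -- same score
print(positive_fraction(100, 10))    # ~0.909
print(positive_fraction(90, 0))      # 1.0 -- the fraction separates them
```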
Despite its drawbacks and primitive nature, the eBay reputation system seems
to have a strong positive impact on eBay as a marketplace.
1.6.2 Product review sites
Individual reviewers on product review sites provide information to help consumers
make better purchase decisions. The trust systems on those sites apply to products
as well as to the reviewers themselves.
Epinions (http://www.epinions.com) is a product and shop review site. The
reviews written by members consist of text and quantitative ratings from 1 to 5 stars
for a set of aspects, such as Ease of Use for products and Customer Service for
shops. Other members can rate reviews as Not Helpful, Somewhat Helpful, Helpful,
or Very Helpful, and thereby contribute to determining the rank of the review, as
well as to giving the reviewer a higher status. A member can obtain the status
Advisor, Top Reviewer or Category Lead as a function of the accumulated ratings on
all his or her reviews over a period. Epinions also makes use of a "Web of Trust":
members can decide to either trust or block another member. A member's list of
trusted members represents that member's personal Web of Trust. The Web of
Trust influences the automated selection of Top Reviewers and Advisors. Epinions
has an incentive system for reviewers called the Income Share Program, where
members can earn money based on the utility of their reviews. Reviewers can
potentially earn as much for helping someone make a buying decision with a positive
review as for helping someone avoid a purchase with a negative review. This is
important in order not to give an incentive to write biased reviews just for profit.
The trust system of Epinions is highly sophisticated compared to other trust and
reputation systems on the Internet. It has been shown to provide high-quality
reviews. Notice that the trust system of Epinions is still centralized.
Amazon (http://www.amazon.com) is another example of a product review
site. Reviews consist of text and a rating in the range of 1 to 5 stars. The average
of all ratings gives a book its average rating. Users can vote on reviews as being
helpful or not helpful. The number of helpful votes as well as the total number of
votes are displayed with each review. As a function of the number of helpful votes
each reviewer has received, as well as other parameters not publicly revealed,
Amazon determines each reviewer's rank, and those reviewers who are among the 1000
highest are assigned the status of Top 1000, Top 500, Top 100, Top 50, Top 10 or
#1 Reviewer. Amazon has a system of Favorite People, where each member can
choose other members as favorite reviewers, and the number of other members who
have a specific reviewer listed as a favorite person also influences that reviewer's
rank. There are many reports of attacks on the Amazon review scheme, where
various types of ballot stuffing have artificially elevated reviewers to top reviewer,
or various types of "bad mouthing" have dethroned top reviewers. For example,
the Amazon #1 Reviewer is usually somebody who posts more reviews than any
living person could possibly write if it required that person to read each book, thus
indicating that the combined effort of a group of people, presented as a single
person's work, is needed to get to the top. Also, reviewers who have reached the Top
100 rank have reported a sudden increase in negative votes, reflecting a "catfight"
taking place to get into the ranks of top reviewers.
1.6.3 Expert sites
Individuals on expert sites are willing to answer questions in their areas of
expertise, and the reputation systems on those sites are there to rate the experts.
AllExperts (http://www.allexperts.com) provides a free expert service for the
public on the Internet. The reputation system on AllExperts uses the aspects
Knowledgeable, Clarity of Response, Timeliness and Politeness, where ratings can
be given in the interval [1, 10]. The score in each aspect is simply the numerical
average of the ratings received. The number of questions an expert has received is
also displayed, in addition to a General Prestige score that is simply the sum of all
average ratings the expert has received.
Advogato (http://www.advogato.org) is a community of open-source
programmers. Members rank each other according to how skilled they perceive each
other to be, using Advogato's trust scheme, which in essence is a centralized
reputation system based on a flow model. The trust engine of Advogato computes the
trust flow through a network in which members constitute the nodes and the edges
constitute referrals between nodes. Each member node is assigned a capacity
between 800 and 1, depending on its distance from the source node, which is owned
by Raph Levien, the creator of Advogato. The source node has a capacity of 800,
and the further away from the source node, the smaller the capacity. Members can
refer to each other with the status of Apprentice (lowest), Journeyer (medium) or
Master (highest). A separate flow graph is computed for each type of referral. A
member gets the highest status for which there is a positive flow to his or her node.
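The sketch below illustrates only the capacity-assignment step just described: capacities shrink with breadth-first distance from the source node. The referral graph and the capacity schedule are illustrative assumptions; Advogato's full metric then runs a maximum-flow computation over these capacities, which we omit here.

```python
from collections import deque

# Toy referral graph: node -> nodes it refers (illustrative data).
referrals = {
    "raph": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol", "dave"],
    "carol": ["eve"],
    "dave": [],
    "eve": [],
}

# Illustrative capacity schedule: large at the source, shrinking with distance.
capacity_by_distance = [800, 200, 50, 12, 4, 1]

def assign_capacities(source):
    """Breadth-first search from the source; capacity decreases with distance."""
    capacity = {source: capacity_by_distance[0]}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in referrals.get(node, []):
            if nxt not in capacity:
                idx = min(dist + 1, len(capacity_by_distance) - 1)
                capacity[nxt] = capacity_by_distance[idx]
                queue.append((nxt, dist + 1))
    return capacity

print(assign_capacities("raph"))
# {'raph': 800, 'alice': 200, 'bob': 200, 'carol': 50, 'dave': 50, 'eve': 12}
```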
1.6.4 Google’s PageRank
PageRank, proposed by Page et al. [14], represents a way of ranking search
results based on a page's reputation. Roughly speaking, PageRank ranks a
page according to how many other pages point to it. This can be described as
a reputation system, because the collection of hyperlinks to a given page can be seen
as public information that can be combined to derive a reputation score. A single
hyperlink to a given web page can be seen as a positive rating of that web page.
Google's search engine is based on the PageRank algorithm, and the rapidly rising
popularity of Google was clearly driven by the quality of the search results that the
PageRank algorithm delivered.
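As a concrete illustration, here is a minimal power-iteration computation of PageRank scores on a toy link graph. The damping factor 0.85 is the value commonly cited for PageRank; the graph, tolerance and function name are our own illustrative choices, and dangling pages are not handled.

```python
def pagerank(links, damping=0.85, tol=1e-8):
    """links: page -> list of pages it links to. Returns page -> score."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    while True:
        new_rank = {}
        for p in pages:
            # A page's score is fed by the pages that link to it.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        if max(abs(new_rank[p] - rank[p]) for p in pages) < tol:
            return new_rank
        rank = new_rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(links))  # C collects links from both A and B and ranks highest
```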
The trust and reputation systems mentioned above are all centralized. All
data for these systems are stored in a trusted centralized database, which is not
ideal for autonomic networks and leads to the usual issues of scalability and a single
point of failure. In the rest of this section, we present distributed systems, which
are the main interest of this thesis.
1.6.5 PGP web of trust
The PGP web of trust is probably the most widely used decentralized trust
system. PGP focuses on proving the identity of key holders. PGP uses user-defined
thresholds to decide whether a given key is trusted, and different introducers
can be trusted at a finite set of trust levels (completely trusted, marginally
trusted, marginally distrusted and completely distrusted). Trust in PGP is only followed
through one level of indirection; i.e., if A is trying to decide the trust of B, there
can be at most one person, C, in the trust path between A and B.
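The sketch below mimics this one-level threshold rule: a key is accepted if it is signed by enough trusted introducers. The thresholds (1 completely trusted signature, or 2 marginally trusted ones) follow PGP's commonly described defaults, but the function and data layout are our own simplification of the scheme.

```python
# Introducer trust levels assigned by the user (Alice), as in PGP.
COMPLETE, MARGINAL, UNTRUSTED = "complete", "marginal", "untrusted"

def key_is_valid(signers, introducer_trust,
                 need_complete=1, need_marginal=2):
    """One level of indirection only: count the direct signers' trust levels.
    Accept if signed by >= 1 completely trusted or >= 2 marginally trusted
    introducers (PGP's commonly described default thresholds)."""
    levels = [introducer_trust.get(s, UNTRUSTED) for s in signers]
    completes = sum(1 for lv in levels if lv == COMPLETE)
    marginals = sum(1 for lv in levels if lv == MARGINAL)
    return completes >= need_complete or marginals >= need_marginal

alice_trust = {"carol": COMPLETE, "dan": MARGINAL, "erin": MARGINAL}
print(key_is_valid(["dan", "erin"], alice_trust))  # True: two marginals
print(key_is_valid(["dan"], alice_trust))          # False: one marginal only
```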
1.6.6 Trust and reputation systems in P2P networks
The peer-to-peer (P2P) paradigm allows participants to exchange resources
such as content, processing power, and storage capacity without the need for a
centralized authority. The P2P networks that have been widely deployed to date
share several distinctive features. They are essentially goodwill networks of
mostly unknown peers, and the peer population is highly transient. As a result,
these networks are open to abuse and misuse by their peers, and the issues
of trust and reputation are becoming critical to their continued success.
KaZaA (http://www.kazaa.com) uses Integrity Rating Files for rating files and
a Participation Level for rating peers. Users can rate the files they share based on
technical quality and the accuracy of their descriptive file data. A file can be given
the following ratings: excellent, average, poor, or delete file. In KaZaA, a
Participation Level is assigned to each user in the network, based on the files that are
uploaded from the user and the files the user downloads from other users. The
Participation Level is defined as (Uploads in MB / Downloads in MB) × 100.
Reputation systems can be used to motivate peers to contribute positively to
the network. For instance, agents that contribute resources to the network may be
rewarded with faster download speeds or reduced response latency for their requests
and queries. An example of this incentive can be seen in Bittorrent
(http://www.bittorent.com). Applying the principle of "tit-for-tat", the client
application throttles upload speeds to a peer based on the download speed it is receiving
from that peer. Therefore, peers that are willing to devote more upload bandwidth
are rewarded with higher download speeds. In both KaZaA and Bittorrent, malicious
peers can still upload inauthentic files, which reduces the peers' satisfaction and
wastes network resources.
As we have seen, the existing distributed trust and reputation systems are
very primitive. They suffer from many attacks, and their potentially complex
behavior is not yet understood. This understanding is necessary to investigate whether
distributed trust management systems might prove as useful as their centralized
counterparts. The focus of this dissertation is exactly on the theoretical
analysis of distributed trust management.
thus credential distribution is needed for users to obtain credentials they need. In
Chapter 2, we study distribution of trust credentials and propose a distribution
scheme which is inspired from P2P networks. Most of trust or reputation policies
used in the above examples are designed in an ad hoc way. Therefore, various
vulnerabilities are discovered after these policies have been used in the real life.
In Chapter 3, we model distributed trust evaluation in a more general form and
theoretically analyze performance of trust evaluation policies. By analyzing certain
evaluation policies, we are able to derive the proper region for network design, such
as requirements on the network hierarchy and parameter settings. Some of our
theoretic results are even surprising, such as phase transition phenomena. Those
results provide a way to properly design feasible evaluation policies and proper
network structure. As we mentioned, one predominant need for trust and reputation
systems in autonomic networks is to provide incentives for cooperation. For instance,
Bittorrent uses reputation to encourage cooperation. In Chapter 4 , we formally
model the cooperation based on game theory and provide necessary conditions that
can encourage cooperation among all nodes in the network.
1.7 Trust Management in Autonomic Networks
Before getting into the details of our solutions, we first clarify the unique
properties of trust management in autonomic networks. By Jøsang et al.'s definition
([39]), trust management is: "[t]he activity of creating systems and methods that
allow relying parties to make assessments and decisions regarding the dependability
of potential transactions involving risk, and that also allow players and system
owners to increase and correctly represent the reliability of themselves and their
systems."
The essence of trust management, on this definition, is to provide a better
informational basis for analyzing the risks and benefits of a transaction. Typical
examples would be the collection of relevant information about the reputation of
the other party, the systematic analysis of one's own experiences from previous
encounters with her, and the communication of one's own policies in situations of
this kind. The same understanding of the point of trust management emerges from
the survey of trust management provided by Ruohomaa et al. [59]. Our focus for
distributed trust management is on its theoretical analysis, rather than on providing
a strict definition or a system model for trust management.
In traditional networks, such as the Internet, the sources of trust evidence are
centralized control servers, such as trusted third parties (TTPs) and authentication
servers (ASs). Those servers are trusted and available all the time. Most prior
research efforts in this framework [66, 43, 26] assume an underlying hierarchical
structure within which trust relationships between ASs are constrained.
In contrast, autonomic networks have neither fixed infrastructures nor
centralized control servers. In these networks, the sources of trust evidence are peers,
i.e. the entities that form the network. We summarize the essential and unique
properties of distributed trust management in autonomic networks as follows:
[Figure 1.1: Trust management system in autonomic networks — components: trust credentials (from prior trust relations, local observations, local key exchanges), credential distribution, evaluation policy, trust decision, applications.]
• Uncertainty and incompleteness: trust evidence is provided by peers, and
can be incomplete or even incorrect.
• Locality: trust information is exchanged locally through individual
interactions.
• Distributed computation: trust evaluation is performed in a distributed
manner.
The objective of this dissertation is to use mathematical analysis to help us
understand and predict the emergent behaviors [27] of trust management systems
in autonomic networks. Figure 1.1 depicts the components of a trust management
system in autonomic networks and their interactions. Chapter 2 focuses on credential
distribution. The evaluation policy is studied in Chapter 3. In Chapter 4, we present
two applications where the trust decisions that the trust management system provides
can be used.
Chapter 2
Trust Credential Distribution
2.1 Overview
As we have discussed, in autonomic networks, trust credentials are typically
issued and often stored in a distributed manner. Most existing work ([10, 9, 58, 50])
assumes that one has already gathered all the potentially relevant credentials in one
place and does not consider how to gather these credentials. The assumption that
all credentials are stored in one place is inconsistent with the idea of decentralized
trust management. As shown in Figure 2.1, credentials on Bob are scattered in the
network. Both Alice and Cathy want to evaluate the trustworthiness of Bob.

[Figure 2.1: Alice and Cathy want to obtain credentials on Bob — credentials about Bob are scattered across the network.]

Credential distribution raises many interesting questions. Where should Alice
and Cathy look for credentials? Since they often cannot look everywhere, how can
they efficiently obtain the requested credentials? In addition, what are the best places
to store credentials, such that they can be easily located, well protected and updated
in a timely manner? Trust credential distribution is the foundation of trust evaluation:
it provides the input for the evaluation model. To date, the credential management
and retrieval problem in distributed environments has not been well addressed. The
problem of trust credential distribution shares many characteristics with distributed
peer-to-peer (P2P) file-sharing systems ([67, 18]): trust credentials are the files to be
shared for trust management.
Thus, many distributed trust management systems consist of components that are
inspired by P2P systems. For instance, the trust management scheme P-Grid ([1])
is based on scalable replication of tree structures, which is derived from P2P
systems. In one of the first works in the literature, Eschenauer [21] proposed a
trust credential distribution scheme based on Freenet [18], in which trust credentials
are routed and searched using a distributed hash table (DHT). More specifically,
the ID of a particular trust credential is mapped to a value using a universally
defined hash function; this hash value also corresponds to a unique node ID in the
network, and the credential is routed to and stored at that node. When searching
for this particular credential, the requester computes the hash value using the same
hash function and sets its request destination to the corresponding node. In this
way, trust credentials are routed by hash-based routing instead of flooding. Request
routing in Freenet avoids flooding and improves with time. Files, or trust credentials
in this context, are replicated by caching at every node, which causes information
to converge to where it is most needed.
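A minimal sketch of the hash-based placement just described, under our own simplifications: node IDs live in a small integer space, and a credential is stored at (and requested from) the node whose ID its hashed credential ID maps to. The node list, key space and function names are illustrative, not taken from [21].

```python
import hashlib

NODE_IDS = [3, 17, 42, 99, 140, 201]  # illustrative node identifier space

def credential_home(credential_id: str) -> int:
    """Map a credential ID to the node responsible for storing it."""
    digest = hashlib.sha1(credential_id.encode()).digest()
    key = int.from_bytes(digest[:4], "big") % 256
    # Store at the first node whose ID is >= key (wrapping around),
    # in the style of consistent-hashing DHTs.
    for node in sorted(NODE_IDS):
        if node >= key:
            return node
    return min(NODE_IDS)

# Both the storer and any later requester compute the same home node,
# so no flooding is needed to locate the credential.
print(credential_home("credential:bob:passport"))
```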
Despite the similarities between trust credential distribution and P2P file
sharing, trust credential distribution has several unique properties. Trust credentials
are digital documents whose size is much smaller than that of the files of hundreds
of millions or billions of bits shared in P2P systems. Requests in trust credential
distribution usually aim at multiple pieces; for instance, a request may ask for all trust
credentials about the trustworthiness of node A, whereas in P2P one request explicitly
points to a single file. Furthermore, trust credentials are updated much more
frequently than files in P2P networks. We propose a new trust credential distribution
scheme that uses network coding, and we show that the new scheme handles the
above unique properties and performs well in terms of efficiency and security [34].
Network coding has been used in P2P file sharing as well. Gkantsidis and
Rodriguez designed a P2P file-sharing system named Avalanche, in which
intermediate peers produce linear combinations of file blocks as in network coding ([25]).
However, they again focus on downloading large files from file servers, whereas
our interest is in distributing scattered trust credentials.
In this chapter, we propose a trust credential distribution scheme that uses
linear network coding to combine multiple credentials during transmission. Our
approach is inherently different from the commonly used request-response approach.
We show that our scheme is extremely easy to implement and fully distributed,
which nicely handles the dynamic nature of the problem.
2.2 Types of Trust Credentials
Different trust contexts require different types of credentials. Examples might
be your driver's license, your social security card, or your birth certificate. Each
of these carries some information identifying you and some authorization stating that
someone else has confirmed your identity. Some credentials, such as your passport,
are important enough confirmations of your identity that you would not want to lose
them, lest someone use them to impersonate you.
In cyberspace, users rely on digital trust documents. A digital trust document
is data that functions much like a physical credential. We focus on digital
trust documents, which we call trust documents, or documents for short. Here are
some examples of digital trust documents:
• Digital certificate: issued by a certificate authority or entity, it verifies
that a public key is owned by a particular entity. Digital certificates
are used to thwart attempts to substitute one person's key for another. A
digital certificate consists of three things: a public key, certificate information
("identity" information about the user, such as name, user ID, and so on) and
one or more digital signatures. Digital certificates are widely used in PGP and
X.509.
• IAFIS (Integrated Automated Fingerprint Identification System): uses
human fingerprints as identities. It has enabled law enforcement officials
to easily share information about criminals and quickly compare a suspect's
fingerprint image with millions of similar imprints.
Trust documents can be generated by a trusted party as in centralized
networks (commanders of an army), by friends who know each other (soldiers in
the same group), or by certain monitoring schemes ([48, 72]). The creation of trust
documents is not within the scope of this work.
In autonomic networks, information is usually partial and incomplete.
Uncertainty about trust must therefore be reflected in the trust documents, and it is
usually represented by a value. This value denotes the degree of trust the issuer of
the document has in the target user. The value can be binary, either trust or distrust;
multiple-valued, such as the four levels of trust in PGP [73]; or even continuous in
an interval, say [−1, 1].
In dynamic environments, the validity and value of trust documents change
over time and space. Every user holds confidence values for the documents he stores.
The confidence value depends on several factors, for instance, the time elapsed since
the document was issued, and the communication distance the document traveled to
reach the user. Normally, the confidence value is a monotonically non-increasing
function of the elapsed time and communication distance. When the confidence
value falls below a certain threshold, the corresponding document is considered
invalid.
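A minimal sketch of such a confidence value, under assumed functional forms: exponential decay in elapsed time and in hop count, with a validity threshold. The decay rates and the threshold are illustrative parameters, not values prescribed by the dissertation.

```python
import math

def confidence(age_seconds: float, hops: int,
               time_scale: float = 3600.0, hop_factor: float = 0.9,
               threshold: float = 0.2):
    """Monotonically non-increasing in both elapsed time and distance.
    Returns (confidence value, is_valid)."""
    value = math.exp(-age_seconds / time_scale) * (hop_factor ** hops)
    return value, value >= threshold

print(confidence(age_seconds=600, hops=2))    # fresh, nearby: valid
print(confidence(age_seconds=86400, hops=5))  # old, far away: invalid
```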
As an important security concern, we also need to consider the integrity and
authenticity of trust documents. Digital signatures based on public-key cryptography
have been studied by Maurer ([50]) and employed in PGP ([73]). However,
there are several difficulties with this approach: first, we have to assume that an
extensive public key infrastructure is in place, which is usually not available in a
self-organized network; second, there has to be a reliable way to find the identities
of entities and to make sure the identities have not been tampered with. Those
problems are especially important in networks with no fixed infrastructure. One
solution is to make use of side channels. For instance, in [6] and [65], users utilize a
location-limited side channel, such as physical contact, to first exchange a small
amount of cryptographic information. This information can be used to authenticate
standard key exchange protocols performed over the wireless link. In the rest of the
chapter, we assume that the authenticity and confidentiality of documents have
been achieved by applying appropriate cryptographic primitives.

[Figure 2.2: A Simple Network Coding Example — (a) traditional method: A and B exchange packets a and b via S in four transmissions; (b) network coding: S broadcasts a xor b.]
2.3 Network Coding
Network coding ([3]) is a recent field in information theory proposed to improve
the throughput utilization of a given network topology. In today's communication
networks, packets are kept separate from each other during transmission. The
principle behind network coding is to allow intermediate nodes to encode packets:
instead of simply forwarding data, intermediate nodes may recombine several input
packets into one or several output packets. An overview of network coding and a
discussion of possible Internet applications are given in [17].
To illustrate how network coding improves the propagation of information
without global coordination, we consider a simple example in a wireless context
with a three-node topology. In Figure 2.2, nodes A and B want to exchange packets
via a wireless base station, node S. In the traditional way, nodes A and B send
packets a and b to node S, respectively. Then node S sends packet b to node A and
packet a to node B. Because of interference in wireless environments, this procedure
takes four transmissions, as shown on the left side of Figure 2.2.
If network coding is used, node S will first linearly combine packets a and b,
e.g. by binary adding the two packets: a ⊕ b. Instead of forwarding packets a and
b separately, S broadcasts a ⊕ b. After receiving the encoded packet, both A and
B can recover the packet of interest, while the number of transmissions is reduced
to three. This simple example is akin to linear network coding, where the ⊕ operation
is replaced by linear combinations of data over some finite field. As we will see
later, linear network coding makes the propagation of trust documents efficient, and
the associated operations are distributed.
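The XOR exchange is easy to verify in a few lines; the sketch below replays the three-transmission exchange with byte strings (our own toy illustration).

```python
def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

a = b"pkt-from-A"        # held by A, wanted by B
b = b"pkt-from-B"        # held by B, wanted by A

# Transmissions 1 and 2: A and B each send their packet to base station S.
coded = xor_bytes(a, b)  # transmission 3: S broadcasts a XOR b

# Each node XORs the broadcast with the packet it already holds.
recovered_at_A = xor_bytes(coded, a)   # == b
recovered_at_B = xor_bytes(coded, b)   # == a
assert recovered_at_A == b and recovered_at_B == a
```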
2.4 Network Coding Based Scheme
In this section, we describe our proposed scheme for trust document
distribution. We outline the basic operations of the system, emphasizing some
algorithmic parameters that affect its performance.
We assume a population of users that are interested in the trustworthiness of
a particular user, the target user. The trust documents about the target user
are initially stored at a number of users scattered throughout the network. These
users are the issuers of the trust documents, or they have retrieved these documents
from others. In the rest of the section, we only consider trust documents about one
single target user; all the operations are applied to the documents about this
particular target user. Each trust document has a unique ID, which is obtained by
applying a universally defined hash function to the document.
In autonomic networks, users usually cannot directly communicate with all
other users; they only communicate with a small subset of users, which we call the
neighborhood. We assume that the neighboring relation is symmetric, i.e. if node A
is in the neighborhood of B, then B is also in the neighborhood of A. Symmetric
channels are natural in wireless networks, since reliable transmission requires that
the two nodes exchange information to avoid collisions and interference, such
as the RTS/CTS messages [30]. In our scheme, users frequently check with their
neighbors for new documents. If there are new documents, these documents are
forwarded.
Whenever a user wants to forward trust documents to another node, it
produces a linear combination of all the documents it currently stores. The operation
of the system is best described by the example of Figure 2.3. Assume that initially
user A stores several trust documents about a particular target user. User A will
combine all the documents to create an encoded document as follows. It picks
some random coefficients c_i, multiplies each trust document Doc_i by c_i, and adds
the results of the multiplications together:

    EncodedDoc_1 = (c_1 · Doc_1) + (c_2 · Doc_2) + (c_3 · Doc_3).    (2.1)
All these operations take place in a finite field.

[Figure 2.3: Operations of our network coding based scheme — users A, B, C and D exchange documents Doc_1 ... Doc_4 and encoded documents EncodedDoc_1 ... EncodedDoc_3 with coefficients c_i, c'_i and d_i.]

If more than one user requests trust documents from A, A randomly picks different
sets of coefficients for these users. Observe that the probability that two distinct
users get the same set of coefficients depends on the size of the field. If the field is
very small, such "collisions" may happen, and they will reduce the performance of
the system. In most practical cases a field of size 2^16 should be enough.
User A then transmits to user B the result of the addition, EncodedDoc_1, the
IDs of the encoded trust documents, and the coefficient vector c = (c_i). User A
also has another encoded document, EncodedDoc_2, with its associated coefficients
c'. User A transmits it to user C, who stores another trust document, Doc_4. User
D is a neighbor of both user B and user C. He requests trust documents from both
B and C. User B transmits EncodedDoc_1 directly to user D, and user C picks two
random coefficients d_1 and d_2 and transmits EncodedDoc_3 to D. We have that

    EncodedDoc_3 = (d_1 · EncodedDoc_2) + (d_2 · Doc_4).    (2.2)

The coefficient vectors user D now has are

    [c_1, c_2, c_3, 0]  and  d_1 · [c'_1, c'_2, c'_3, 0] + d_2 · [0, 0, 0, 1],

and the corresponding document IDs are Doc_1, Doc_2, Doc_3 and Doc_4.
Observe that given n distinct trust documents, a user can recover them after
receiving n encoded documents whose associated coefficient vectors are linearly
independent of each other. The reconstruction process amounts to solving a system
of linear equations.
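To make the encode/decode cycle concrete, here is a self-contained sketch over the prime field GF(257) — our choice for readability; the scheme above suggests a field of size 2^16 in practice. Documents are vectors of field elements; decoding is Gaussian elimination modulo the prime, and it succeeds whenever the random coefficient vectors happen to be linearly independent (which holds with high probability).

```python
import random

P = 257  # a small prime field for illustration

def encode(docs):
    """Return (coefficient vector, encoded document) for a random combination."""
    coeffs = [random.randrange(P) for _ in docs]
    length = len(docs[0])
    combo = [sum(c * d[k] for c, d in zip(coeffs, docs)) % P
             for k in range(length)]
    return coeffs, combo

def decode(coeff_rows, encoded_docs):
    """Recover the original documents by Gaussian elimination mod P."""
    n = len(coeff_rows)
    # Augmented matrix [coefficients | encoded symbols]
    rows = [list(c) + list(e) for c, e in zip(coeff_rows, encoded_docs)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)          # modular inverse
        rows[col] = [(x * inv) % P for x in rows[col]]
        for r in range(n):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(x - f * y) % P for x, y in zip(rows[r], rows[col])]
    return [row[n:] for row in rows]

docs = [[10, 20, 30], [4, 5, 6], [7, 8, 9]]      # three "trust documents"
received = [encode(docs) for _ in range(3)]      # three encoded documents
coeffs, payloads = zip(*received)
print(decode(list(coeffs), list(payloads)))      # == docs (w.h.p.)
```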
Our network coding scheme involves only local interactions. Users need not
be aware of the existence or location of any trust document. They only contact
their neighbors to check whether there are new documents or new encoded
documents. This is a great advantage compared to any request-response scheme, such
as the Freenet-based scheme. Request-response schemes require routing tables; if
the topology of the network changes, because new nodes join or some nodes move
away, the routing tables need to be updated immediately. Our scheme, in contrast,
adapts quickly to topology changes, since users only contact their current neighbors.
In addition, request-response schemes require that users know the document IDs
before sending out requests, which needs global information exchange. Our scheme
operates without knowing any document ID.
We have mentioned that there are different types of trust documents with
various confidence values, and that the importance of these documents varies
dramatically. In other words, the trust documents in the network are heterogeneous.
A good trust document distribution scheme should take such heterogeneity into
consideration. Credentials with high confidence values and of high importance
should have high priority in transmission, and they should be recovered before other
documents. The simplest solution is to transmit these high priority documents
separately; however, this essentially doubles the resources used for transmitting
them. In our system, we mark these high priority documents, and they are always
considered to be new documents. Therefore, whenever a user with high priority
documents wants to forward his documents, he always combines the high priority
documents with all other documents. In this way, the high priority documents have
a much higher chance of being recovered early.
2.5 Comparison of Distribution Schemes
In this section, we study the performance of our trust document distribution
scheme based on network coding and compare it with the Freenet-based approach
in [21]. To evaluate the performance, we measure the time it takes for each user to
obtain all trust documents for the target user, and the load on the network.
To study the performance with a relatively large number of users, we
implemented both our network coding based scheme and the Freenet based scheme in
MATLAB. There are 100 nodes randomly placed in a 1 × 1 square. If the distance
between two nodes is less than or equal to 0.2, they are considered neighbors. Assume
that 20 distinct documents are randomly stored at these nodes. All users want to
obtain these 20 documents.
[Figure 2.4: Number of nodes finishing over time. x-axis: simulation time (rounds); y-axis: number of users finished; curves: network coding, Freenet.]
The simulation is round based. For our network coding based scheme, at the
beginning of each round, each user contacts its neighbors to discover whether there
are new documents or encoded documents. When new documents are obtained,
users encode them according to the operations described in Section 2.4. Once a
user can recover all the trust documents, we say that he has finished, and the
number of rounds it takes is called the finish time. For the Freenet based scheme,
each user sends out a request for one of the documents he does not store. The
routing scheme of the requests is described in [21]. Once the requested document
is found and sent back, a new request is generated for another document the user
does not store. The user stops sending requests once he obtains all documents.
Documents are replicated by caching at every node they pass.
We start by comparing finish times. In Figure 2.4, we plot the finish time of each node, where the x-axis represents the simulation time and the y-axis the number of nodes that have finished. The network coding based scheme performs better than the Freenet based scheme, because by combining the documents with network coding, documents can be spread throughout the network within a few rounds, whereas the Freenet based scheme transmits only one document per round and its initial search phase takes a long time.

Figure 2.5: Network load over time (x-axis: simulation time in rounds; y-axis: network load in number of data packets; one curve each for network coding and Freenet)

Figure 2.5 gives the load of the network over time. Our network coding based scheme has a relatively higher network load, due to the controlled flooding nature of network coding.
Chapter 3
Trust Evaluation
3.1 Overview
After distributing trust documents to the proper nodes, the next and probably most important problem is to evaluate target nodes based on these documents. The objective of trust evaluation is to estimate the trustworthiness of targets and thus help predict their future behaviors. In this chapter, nodes that conduct the evaluation are called evaluating nodes, and nodes that are the targets of the evaluation are called target nodes.

In the literature, two types of trust have been studied: global trust and local trust [49]. In global trust, a given target node has a unique trust value, independent of the evaluating node. A local trust value (also known as a personalized trust value), on the other hand, depends on the point of view of the evaluating node. In this chapter, we focus on global trust evaluation; some of our methods can be applied to local trust evaluation as well.
3.2 An Example on Distributed Trust Evaluation
Consider an example where Alice needs to identify Bob, whom she meets for the first time. There are several credentials that Alice can get from different sources: the passport Bob shows to Alice, and the voice of Bob that Alice heard in phone conversations before they met. Obviously, Alice has different confidence in these credentials; usually she has stronger confidence in Bob's passport than in Bob's voice. However, the passport may be a weaker credential, for instance if it has expired. Suppose Cathy also obtains some credentials on the identity of Bob, such as pictures of Bob from the Web.

Given that we are considering autonomic networks, these credentials cannot be gathered in one place, and there is no centralized server to evaluate trustworthiness. However, Alice and Cathy are able to derive their direct trust in the identity of Bob separately, based on the credentials they obtain (the passport and voice of Bob for Alice, and the pictures of Bob for Cathy). Suppose that Alice and Cathy can exchange their direct trust information either directly or through other people, by way of recommendations. The problems we are interested in are how to exchange trust information and how to combine all available information, such that the system can derive a proper trust value on the identity of Bob in a distributed manner. Figure 3.1 shows the procedure of trust evaluation for this example. The rest of the chapter studies policies for the derivation of direct trust and evaluated trust in detail.
Figure 3.1: Procedure of trust evaluation (Alice's and Cathy's credentials on Bob yield their respective direct trust on Bob, which is combined into the evaluated trust on Bob)
3.3 Global Trust Evaluation
In autonomic networks, the trust documents a node obtains on a target node are usually not enough to infer the trust value of the target. However, there may be more than one node holding different trust documents about the target. The system should integrate these trust documents to achieve a better estimate of the trustworthiness of targets.

As we discussed in Chapter 1, trust states that a node behaves in a reliable way; but since there are many different things the node can or may do (offer a service, issue various types of credentials, etc.), it may be that the node is reliable in doing one thing but not another. To take this into account, Maurer in [50] distinguishes different types of trust. A more complete attempt to classify various forms of trust has been proposed in [29, 38]. In global trust evaluation, however, we make the assumption that trust is a general proposition about anything the node may do, i.e. nodes are simply good or bad to a certain degree.
3.3.1 Network Model
We model the network as a dynamic directed graph G(k)(V(k), E(k)), in which nodes are the entities/peers in the network and arcs represent trust relations. We consider discrete time, and k denotes a specific time slot. V(k) is the set of nodes in the network at time k. If arc (i, j) ∈ E(k), node i has direct trust on node j at time k. In what follows, we may omit the time index k to simplify notation when suitable.

Graph G is called the trust graph. It is a logical graph, as opposed to the physical graph, in which nodes are connected if they are one hop away in terms of physical transmissions. Suppose that the number of nodes in the network is N, i.e. |V| = N, and that nodes are labeled with indices V = {1, . . . , N}.
In Chapter 2, nodes obtained the trust documents needed for evaluating the trustworthiness of certain target nodes. Based on these documents, each node can derive direct trust values on the target nodes for which it holds trust documents. If (i, j) is an arc in graph G, i.e. (i, j) ∈ E, node i has direct trust on j. As we have discussed, there are various types of trust documents, and the evaluating node may have different confidence in them. Therefore, the direct trust value is no longer a binary value (trust or distrust). We take trust values to be continuous in the range [−1, 1], where −1 stands for complete distrust, 1 for complete trust, and 0 for neutral. Define d_ij as the direct trust value i has on j. The trust graph G is thus a weighted graph, where d_ij is the weight on the arc (i, j). If i has no direct trust relation on j, i.e. (i, j) ∉ E, d_ij may be set to 0.

Figure 3.2: A trust graph (4 nodes; arc weights 0.7, 0.5, 1, 0.8, −0.5, −0.2)

Figure 3.2 is an example of a trust graph with 4 nodes. We define the neighbor set of node i as

    N_i = {j | (j, i) ∈ E} ⊆ V \ {i},

which is the set of nodes that have direct trust values on i.
There are three types of trust we need to distinguish: real, direct and evaluated trust.

• Real trust: the real state of the target node, denoted t_i for node i.

• Direct trust: the trust values each node obtains from the trust documents in hand, denoted d_ij for the direct trust node i has on node j. The policies used to set the direct trust values are discussed in detail in the next subsection.

• Evaluated trust: the value derived from the evaluation rule. Most of the time the evaluated trust values are simply called the trust values, denoted s_i for node i.

The vector T = [t_1, . . . , t_N] is called the real trust vector, and S = [s_1, . . . , s_N] is the evaluated trust vector, or trust vector. The matrix D = {d_ij} is the direct trust matrix. Notice that in this dissertation we consider two types of trust metrics: a deterministic trust metric and a stochastic trust metric. If the deterministic trust metric is used, trust values are scalars between −1 and 1. If the system uses the stochastic metric, trust values are random variables with range [−1, 1].

The purpose of global trust evaluation is to find the best estimates s_i of the t_i's, based on all the d_ij's.
3.3.2 Direct Trust
Direct trust is derived from the trust documents each node has in hand. Values of direct trust depend on the value of the trust documents and their issuers. Users design their own policies for deriving direct trust; the design of such policies is outside the scope of this dissertation. In this subsection, we just give some example policies for both deterministic and stochastic trust.

We first consider deterministic trust. Suppose Alice wants to identify Bob, and she has obtained Bob's valid passport and recognized Bob's voice from their phone conversations before they met. Alice may have very high confidence in the passport; with the passport alone she identifies Bob with trust, say, 0.9, where the value 0.9 is determined by Alice's policy. Furthermore, by adding the voice of Bob, Alice may increase the trust value to 0.95.
Now we give an example of stochastic trust. Consider a binary case where nodes are either GOOD or BAD, i.e. t_i = 1 or −1, and d_ij is equal to 1 or −1 as well. (The capital letters emphasize that nodes are either completely good or completely bad; in the case of uncertainty, t_i ∈ [−1, 1], we say, with small letters, that a node is good if t_i > 0 and bad if t_i < 0.) Suppose a GOOD node i obtains m trust documents about node j, and the confidence i has in each of these m documents is the same. In terms of probability, i believes that the probability of the issuers of these m documents making wrong statements is p_e:

    Pr(document states that j is GOOD | t_j = 1) = 1 − p_e.   (3.1)

The policy that node i uses to derive d_ij is the majority rule. Suppose k out of these m documents state that j is GOOD and the other m − k state the opposite. If k ≥ m/2, then d_ij = 1, and if k < m/2, then d_ij = −1. Node i can then derive the conditional probability of d_ij, given that i and j are GOOD and that i obtains the m trust documents, as follows:

    Pr(d_ij = 1 | t_i = t_j = 1, the m trust documents with confidence 1 − p_e, majority policy)
        = Σ_{k=0}^{⌊m/2⌋} (m choose k) p_e^k (1 − p_e)^(m−k).   (3.2)

For the sake of simplicity, we omit the trust documents and the direct trust policy from the conditioning in the rest of the chapter. Notice that the direct trust policy is usually fixed, but the trust documents may change over time, in which case the corresponding conditional probability changes as well. The conditional probability of d_ij is denoted P(d_ij | t_i, t_j). We have

    Pr(d_ij = −1 | t_i = t_j = 1) = 1 − Pr(d_ij = 1 | t_i = t_j = 1).   (3.3)
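As a quick numerical check of Eqn. (3.2), the following sketch (our illustration) evaluates the probability that the majority rule yields d_ij = 1 for GOOD i and j:

    from math import comb, floor

    def prob_majority_correct(m: int, p_e: float) -> float:
        # d_ij = 1 iff at most floor(m/2) of the m documents are wrong,
        # which is exactly the sum in Eqn. (3.2).
        return sum(comb(m, k) * p_e**k * (1 - p_e)**(m - k)
                   for k in range(floor(m / 2) + 1))

    print(prob_majority_correct(5, 0.2))   # about 0.942 with five documents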
Regarding the BAD nodes, if we know their policies for setting direct trust values, the conditional probability can be derived accordingly. In most cases, however, we may not know their policies, and then the uniform distribution is chosen for the conditional probability of d_ij when node i is BAD. The reason is that the uniform distribution achieves the maximum entropy. Therefore,

    Pr(d_ij = 1 | t_i = −1, t_j = 1) = Pr(d_ij = 1 | t_i = −1, t_j = −1) = 1/2.   (3.4)
Another, more complicated form of P(d_ij | t_i, t_j) is the truncated normal distribution, used when nodes in the network are good or bad to some extent, i.e., t_i is continuous in [−1, 1]. Specifically, suppose X ~ N(µ, σ²) has a normal distribution and lies within the interval (a, b), −∞ ≤ a < b ≤ ∞. Then the probability density function of X is

    f(x; µ, σ, a, b) = φ((x − µ)/σ) / [Φ((b − µ)/σ) − Φ((a − µ)/σ)],   (3.5)

where φ(·) is the probability density function of the standard normal distribution and Φ(·) is the corresponding cumulative distribution function. In the case of P(d_ij | t_i, t_j), d_ij is the estimate i has of the trustworthiness of j, so it is reasonable to assume that µ = t_j. The error of this estimate depends on the capability of node i, which is a function of t_i. According to our definition of trust, the truncation boundary is set as a = −1 and b = 1. Therefore, we have the following definition of the conditional PDF:

    P(d_ij | t_i, t_j) = (1/z) exp(−(d_ij − t_j)² t_i²)   if t_i > 0,
                         1/2                              if t_i ≤ 0,   (3.6)

where z = ∫_{−1}^{1} exp(−(x − t_j)² t_i²) dx. For a good node with positive real trust value (t_i > 0), the direct trust value it has on another node j follows a truncated normal distribution on [−1, 1]. Before the truncation, the mean is the real trust value t_j and the variance is inversely related to t_i: the more trusted i is, the more accurate its direct trust value on j. The conditional probability is still chosen to be uniform for bad nodes.
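The following sketch evaluates this conditional density numerically; the exponent follows the form of Eqn. (3.6) as reconstructed above, and the normalizer z is approximated by a Riemann sum on [−1, 1].

    import numpy as np

    def direct_trust_pdf(d, t_i, t_j, grid=2001):
        if t_i <= 0:
            return 0.5                                  # uniform for non-good i
        x = np.linspace(-1.0, 1.0, grid)
        dx = x[1] - x[0]
        z = np.sum(np.exp(-(x - t_j) ** 2 * t_i ** 2)) * dx   # normalizer
        return float(np.exp(-(d - t_j) ** 2 * t_i ** 2) / z)

    print(direct_trust_pdf(0.7, t_i=0.9, t_j=0.8))      # high density near t_j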
We have described example policies for direct trust. In the rest of this chapter, we assume that direct trust has already been derived and is given to the evaluation rule.
3.3.3 Voting Rule
To evaluate a particular node in the network, say node i, all the direct trust values that i's neighbors have on i should be taken into account. The natural approach is to aggregate all the neighbors' direct trust values. We call this approach the voting rule, in which the votes are the neighbors' direct trust values and those neighbors are called voters. However, a naïve average is not a good estimator. Votes from voters with high (evaluated) trust values are more credible, so they should carry larger weights. On the other hand, a vote from a distrusted or neutral voter (i.e. a node j with s_j ≤ 0) should not be counted. Based on the above arguments, we define the voting rule as

    s_i = f(d_ji s_j 1(s_j > 0) | j ∈ N_i),   (3.7)

where f : R^|N_i| → [−1, 1] and 1(·) is the indicator function.
The trust value s_j in turn depends on the trust values of j's neighbors and their votes on j. Therefore s_j is evaluated at the same time, and so are the trust values of j's neighbors. The whole evaluation evolves as the voting rule iterates throughout the network, and Eqn. (3.7) can be written as

    s_i(k + 1) = f(d_ji(k) s_j(k) 1(s_j(k) > 0) | j ∈ N_i).   (3.8)

Thus the trust evaluation can be considered as a dynamic process which evolves with time.
The voting rule evolves in discrete time. Even if a distributed algorithm is asynchronous and communication delays are real (i.e., not integer) variables, the events of interest may be indexed by a discrete variable. In Eqn. (3.8), each time step includes an evaluation of trust values followed by an update of the direct trust values (the d_ji's); so the restriction to discrete time entails no loss of generality.

Our interest is to study the evolution of the evaluated trust vector S and its values when the process converges. The motivation of trust evaluation is, of course, to be able to detect bad nodes and trust good nodes. It is important to investigate whether S can correctly estimate the trust vector T at the steady state.
3.4 Deterministic Voting Rule
There are several choices for the voting rule. For instance, it can be the average, maximum or minimum of all votes. In this section, we investigate a simple rule: the weighted average of all votes [31]. The voting rule for node i is the following:

    s_i(k + 1) = (1 / z_i(k)) Σ_{j ∈ N_i(k)} d_ji(k) s_j(k) 1(s_j(k) > 0),   (3.9)

where z_i(k) = Σ_{j ∈ N_i(k)} 1(s_j(k) > 0).
As mentioned in Sec. 3.3.3, the voting rule must satisfy f : R^|N_i| → [−1, 1]. We first prove the following lemma.
Lemma 1 In Eqn. (3.9), s_i(k) ∈ [−1, 1] for all 1 ≤ i ≤ N and k = 0, 1, . . .

Proof For any k = 0, 1, . . ., we have from Eqn. (3.9)

    |s_i(k + 1)| = | (1/z_i(k)) Σ_{j ∈ N_i(k)} d_ji(k) s_j(k) 1(s_j(k) > 0) |

                 ≤ (1/|z_i(k)|) Σ_{j ∈ N_i(k)} |d_ji(k) s_j(k) 1(s_j(k) > 0)|   (3.10)

                 = [ Σ_{j ∈ N_i(k): s_j(k) > 0} |d_ji(k)| |s_j(k)| ] / [ Σ_{j ∈ N_i(k): s_j(k) > 0} 1 ]   (3.11)

                 ≤ [ Σ_{j ∈ N_i(k): s_j(k) > 0} |s_j(k)| ] / [ Σ_{j ∈ N_i(k): s_j(k) > 0} 1 ]   (3.12)

                 ≤ 1.

Inequality (3.10) comes from the triangle inequality for sums. By removing all terms with s_j(k) ≤ 0, we get Eqn. (3.11). The last two inequalities follow from the facts that |d_ji(k)| ≤ 1 and |s_j(k)| ≤ 1. Therefore, −1 ≤ s_i(k + 1) ≤ 1.
Thus the weighted average rule defined in Eqn. (3.9) is a legitimate voting rule. Furthermore, the rule can be written in state form. Define Z(k) to be the diagonal matrix whose i-th diagonal element is z_i(k), and let the matrix D(k) represent the values of all votes, with d_ji = 0 if node j has no direct trust relation on i. Then we have

    S(k + 1) = Z(k)^(−1) D^T(k) S^+(k),   (3.13)

where S^+(k) is a column vector with s^+_i(k) = s_i(k) 1(s_i(k) > 0).
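A minimal sketch of this iteration (our illustration; the direct trust matrix below is hypothetical, in the style of Figure 3.2):

    import numpy as np

    def vote_step(D, S):
        # One step of Eqn. (3.13): weighted average over currently positive voters.
        S_plus = np.where(S > 0, S, 0.0)
        votes = D.T @ S_plus                              # sums of d_ji * s_j^+
        voters = (D.T != 0).astype(float) @ (S > 0).astype(float)
        return votes / np.where(voters > 0, voters, 1.0)  # avoid division by zero

    # D[i, j] is node i's direct trust value on node j (illustrative values).
    D = np.array([[0.0, 0.7, 0.0, 0.0],
                  [0.5, 0.0, 0.8, 0.0],
                  [1.0, 0.0, 0.0, -0.5],
                  [0.0, -0.2, 0.0, 0.0]])
    S = np.full(4, 0.5)                                   # initial trust values
    for _ in range(50):
        S = vote_step(D, S)
    print(S)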
3.4.1 Convergence
In this section, we regard the trust establishment process as a dynamic system. We discuss the convergence property of the system and investigate the spreading of trust as the system reaches the steady state.

The voting rule of Eqn. (3.13) can be written as a mapping

    S(k + 1) = T^(k)(S(k)),   (3.14)

where T^(k) : [−1, 1]^N → [−1, 1]^N.
We assume that after a certain time instant K, all nodes have obtained sufficient trust documents and the d_ij's remain constant. The function T^(k) then no longer changes over time; set T = T^(k) for k > K. We want to prove that the iterated rule converges to a unique fixed point.

Theorem 2 If there exist i and j such that 0 < |d_ij(k)| < 1, then T is a contraction mapping, that is, |T(S) − T(R)| ≤ l |S − R|, where l is a constant and 0 < l < 1.
Proof Define a square matrix F = Z^(−1) D^T, and denote the element of F in row i, column j by f_ij. Since Z might differ for different S, we use F_S (and Z_S) to denote the matrix F (and Z) corresponding to vector S. We first consider the case where the elements of both S and R are positive. Then F_R = F_S, and

    |T(S) − T(R)|² − |S − R|² = (S − R)^T F^T F (S − R) − (S − R)^T (S − R)
                              = (S − R)^T (F^T F − I)(S − R).   (3.15)

Since for each row of F

    Σ_j |f_ij| < 1,   (3.16)

all the diagonal elements of F^T F are strictly less than 1, and F^T F is a real symmetric matrix. Therefore F^T F − I is a negative definite matrix, and hence

    |T(S) − T(R)|² < |S − R|².   (3.17)

Now consider R and S with negative elements. If s_i < 0, then r_i < 0 for any i ∈ {1, . . . , N}; thus F_R = F_S, and the proof is the same as above. The mapping T is therefore a contraction mapping.
The Banach fixed point theorem states that every contraction mapping on a nonempty complete metric space has a unique fixed point, and that for any initial vector S(0) the iterated function sequence converges to the fixed point. Suppose the voting rule converges to a vector S; then

    S = T(S),

i.e., S is the fixed point of T. The following inequality describes the speed of convergence:

    |S − T^k(S(0))| ≤ (l^k / (1 − l)) |T(S(0)) − S(0)|.   (3.18)
3.4.1.1 Voting in a virtuous network
We start with a virtuous network with no adversaries, i.e. all nodes behave rationally and all votes are “reasonable”: we admit uncertainty in the voting values, but no one votes maliciously, so d_ij ∈ [0, 1]. Furthermore, nodes start with trust values no less than 0; if a node had an initial trust value less than 0, it would be excluded by the others immediately. Therefore, s_i(k) ≥ 0 for all i ∈ V and k ≥ 0. Our goal is to show that as k → ∞ the vector sequence {S(k)} converges, and to find the steady-state value S.
To determine the trustworthiness of nodes, we apply a threshold rule on the steady-state trust values, defined as s_i = lim_{k→∞} s_i(k). The threshold rule depends on a system-defined parameter η as follows:

    Node i is trusted if s_i ≥ η, and neutral if s_i < η.
Define the matrix F = Z^(−1) D^T; then the state equation (3.13) can be written as S(k) = F^k S(0). First, consider a very simple scenario where all votes have value 1, which means all nodes are able to correctly verify their neighbors as trusted. Then D = A, where A is the adjacency matrix of graph G, and F is the normalized adjacency matrix. It is trivial to verify that F is a stochastic matrix (a matrix is called stochastic if the sum of the elements of each row is 1). We show in the following that the convergence behavior of S(k) depends on whether F is reducible or not, which reflects the connectivity of the network.

• G is connected, i.e. F is irreducible.
Since F is a stochastic matrix, the largest eigenvalue of F is 1. Let π be the left eigenvector corresponding to eigenvalue 1, an N-dimensional normalized row vector (i.e., its elements sum to 1), so that πF = π. By ergodicity [61],

    F^k → the matrix whose N rows all equal π.

Thus for all i ∈ V, s_i = lim_{k→∞} s_i(k) = π S(0) = Σ_{j=1}^N π_j s_j(0). Therefore, every node reaches the same trust value at the steady state. We can see that whether a node is trusted or not depends on the initial configuration S(0). Obviously, if Σ_{j=1}^N π_j s_j(0) ≥ η, all nodes are trusted, and none is trusted otherwise. The initial trust values are therefore crucial: it is easier to establish a completely trusted network if a large number of nodes have high initial trust values; otherwise no node will become trusted.
• G is not connected, i.e. F can be decomposed as F = blockdiag[F_1, F_2, . . . , F_L], where the F_l, l = 1, . . . , L, are irreducible matrices of order N_l with Σ_{l=1}^L N_l = N. Thus the graph G has L components, disconnected from each other. Let C_l, l = 1, . . . , L, denote those components, where C_l has normalized adjacency matrix F_l. Similarly, since each F_l is irreducible, we can find the vector π^l, the left eigenvector of F_l corresponding to eigenvalue 1, such that s_i = lim_{k→∞} s_i(k) = Σ_{j ∈ C_l} π^l_j s_j(0) if i ∈ C_l. Therefore, the trustworthiness of a node is related to the initial configuration within its connected component.

It is easy to extend the results for irreducible matrices to reducible ones by applying the method above. From now on, we will assume G is connected.
More generally, the voting value d_ji is less than or equal to 1 instead of always being 1, i.e., uncertainty is considered. Then F is a semi-stochastic matrix. It is known that for a semi-stochastic matrix F, F^k → 0 as k → ∞, and thus S also goes to 0. Therefore, nodes cannot be trusted at all if there is uncertainty in the votes.
We have shown that with the simple voting scheme, trust can only be established under rather strict conditions: all voting values are 1 and the initial configuration must satisfy Σ_{j=1}^N π_j s_j(0) ≥ η. A single vote with value less than 1 results in failure of the trust evaluation. This result is not obvious from the definition of the voting rule, and it illustrates the difficulty of designing algorithms for autonomic networks. We want to ensure trust evaluation in virtuous networks without depending on the initial states. Therefore we introduce the notion of leaders, which are agents that are always trusted with trust value 1.
3.4.1.2 Voting with leaders
As just mentioned, leaders are pre-trusted agents. For instance, they can be the leader of a cluster, or agents holding a certificate signed by an authority. To simplify the discussion, we also assume that leaders only vote for nodes that they fully trust. Define b_i as the number of leaders that fully trust node i; node i then gets b_i additional votes with value 1. Let B be the diagonal matrix with i-th diagonal element equal to b_i, and let 1 = [1 . . . 1]^T. Then the updating rule in (3.13) becomes

    S(k) = (Z + B)^(−1) (D S(k − 1) + B 1).   (3.19)
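A small sketch of the leader-augmented update in the all-1-votes case D = A, which is analyzed next (the adjacency matrix, leader counts, and starting vector are all illustrative assumptions):

    import numpy as np

    A = np.array([[0, 1, 1, 0],        # adjacency matrix of a 4-node trust graph
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    b = np.array([1.0, 0.0, 0.0, 0.0]) # one leader fully trusts node 0
    z = A.sum(axis=0)                  # number of voters for each node

    S = np.zeros(4)                    # even an all-zero start converges
    for _ in range(200):
        S = (A.T @ S + b) / (z + b)    # Eqn. (3.19) with D = A
    print(S)                           # approaches all ones (Corollary 3)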
Again, as in Sect. 3.4.1.1, we first consider the situation with all 1-valued votes, i.e., D = A. Define S̄(k) = 1 − S(k). By Eqn. (3.19), we can deduce that S̄ satisfies

    S̄(k) = (Z + B)^(−1) A S̄(k − 1) = F̄ S̄(k − 1),   (3.20)

where F̄ = (Z + B)^(−1) A. Thus S̄(k) = F̄^k S̄(0). It is easy to observe that if there is at least one node i with b_i > 0, then F̄ is a semi-stochastic matrix, so F̄^k → 0 as k → ∞, and therefore S(k) → 1. It follows that

Corollary 3 For a connected virtuous network with all 1-valued votes, all nodes become fully trusted in the steady state if and only if at least one node connects to one or more leaders.
Thus adding just one leader guarantees a fully trusted network. Now consider votes with uncertainty. We have the following theorem.

Theorem 4 Let η be the threshold of trustworthiness. We have the following conditions on the number of leaders:

1. Define an N-dimensional column vector V with v_i = 1 if node i is trusted at the steady state and v_i = 0 otherwise. Then

    B(1 − ηV) > η(Z − D)V.

2. If all nodes are trusted at the steady state, the number of leaders for each node must satisfy

    B1 > (η / (1 − η)) (Z − D)1.
Proof Using a similar technique as above, let S̄(k) = γ − S(k), where γ is a constant vector. Substituting into Eqn. (3.19), we have

    S̄(k) = (Z + B)^(−1) D S̄(k − 1) + (Z + B)^(−1) ((Z + B − D)γ − B1).   (3.21)

Let the last term on the right-hand side of Eqn. (3.21) be 0. Then S̄(k) → 0 as k → ∞, so S(k) → γ.

According to the decision rule, we want S = lim_{k→∞} S(k) > η1, and therefore γ > η1. Consider the case γ = η1; since (Z + B)^(−1)((Z + B − D)γ − B1) = 0, we have

    B1 = (η / (1 − η)) (Z − D)1.

Notice that the function f(x) = x/(1 − x) is strictly increasing for x ∈ [0, 1). If we want γ > η1, then

    B1 > (η / (1 − η)) (Z − D)1.

The first condition can be proved similarly.

For graphs that are not connected, the theorem holds for each connected component separately.
Theorem 4 both proves the result and provides a network design method: a fully trusted network can be established by introducing a sufficient number of leaders. Moreover, this method employs only local interactions, and it converges to the desired result without depending on any initial configuration.
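As a sanity check of the second condition in Theorem 4, the following sketch computes, for an illustrative graph and threshold, the lower bound on the number of leader votes that each node needs (all values are assumptions for the example):

    import numpy as np

    eta = 0.5                                   # trust threshold
    # D[i, j] is node i's direct trust on j (illustrative, all in [0, 1]).
    D = np.array([[0.0, 0.9, 0.9, 0.0],
                  [0.9, 0.0, 0.9, 0.9],
                  [0.9, 0.9, 0.0, 0.0],
                  [0.0, 0.9, 0.0, 0.0]])
    Z = np.diag((D != 0).sum(axis=0).astype(float))   # z_i = number of voters of i
    bound = eta / (1 - eta) * (Z - D.T) @ np.ones(4)
    print("each node i needs b_i >", bound)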
3.4.2 Network Topology
Having found a simple and lightweight trust evaluation method, the next concern is the dynamics of the procedure. In particular, we investigate the time it takes to reach the steady state, in other words how fast the trust values converge.

We use an important result, the Perron-Frobenius theorem [13], which states that for a stochastic matrix A,

    A^n = λ_1^n v_1 u_1^T + O(n^(m_2 − 1) |λ_2|^n),

where λ_1 is the largest eigenvalue (equal to 1 for a stochastic matrix), with left and right eigenvectors u_1 and v_1 respectively, λ_2 is the second largest eigenvalue, and m_2 is the algebraic multiplicity of λ_2. Thus the convergence rate of A^n is of order n^(m_2 − 1) |λ_2|^n. Normalized adjacency matrices are stochastic matrices, and therefore those with smaller |λ_2| converge faster.
The question then becomes: what kind of network, or which network topology, has smaller λ_2? Since the well-known small-world paper by Watts and Strogatz in 1998 ([69]), research on network topology has attracted enormous attention. Small-world models have two prominent properties: a high clustering coefficient and a small average graphical distance between pairs of nodes. The small average distance essentially indicates that nodes can communicate with other nodes in a few hops even in a very large network. In the past five years, there has been substantial research on the small-world model in various complex networks, such as the Internet and biological systems. As a social concept, distributed trust networks exhibit small-world properties too. In [15], it was shown that the PGP certificate graph has the small-world property. Therefore, studying network topology using the small-world model will help us understand practical trust systems.

In this work, we consider one of many small-world models, the so-called α-model ([69]), which adds a small number of new edges to a regular lattice. The network starts as a two-dimensional lattice with periodic boundary, with neighbors being the nodes one hop away. New edges are then added by randomly choosing two unlinked nodes. In our simulations, we consider a network with 400 nodes on the lattice. In the original lattice, each node has 4 neighbors, and the total number of edges is 800. Each simulation consists of several rounds; in each round, 2 new edges are added randomly to the network, and the second largest eigenvalue and the convergence time are computed. Figure 3.3 shows the second largest eigenvalue λ_2 as a function of the number of new edges added: with more edges added, the second largest eigenvalue of the graph decreases. So, generally speaking, small-world networks have a smaller second largest eigenvalue than the original lattice. Figure 3.4 illustrates the substantial change in convergence time as new edges are added. Notice that when 8 new edges are added, which is just 1% of the total number of edges, the convergence time drops from 5000 rounds to 500 rounds. Thus trust is established much faster in a network with the small-world property than in a regular lattice. This conclusion also provides a direction for network management to achieve good performance.
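A reduced-scale sketch of this experiment (a 10 × 10 torus instead of the 400-node lattice; variable names are ours):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 10                                    # 10 x 10 torus, i.e. 100 nodes
    N = n * n
    A = np.zeros((N, N))
    for x in range(n):
        for y in range(n):
            i = x * n + y
            for j in (((x + 1) % n) * n + y, x * n + (y + 1) % n):
                A[i, j] = A[j, i] = 1         # 4-neighbor periodic lattice

    def lambda2(A):
        F = A / A.sum(axis=1, keepdims=True)  # normalized adjacency (stochastic)
        ev = np.sort(np.abs(np.linalg.eigvals(F)))
        return ev[-2]                         # second largest eigenvalue modulus

    for added in range(0, 21, 4):
        print(added, "edges added, lambda2 =", round(lambda2(A), 4))
        for _ in range(4):                    # add 4 random new edges
            i, j = rng.integers(N, size=2)
            while i == j or A[i, j]:
                i, j = rng.integers(N, size=2)
            A[i, j] = A[j, i] = 1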
Figure 3.3: The second largest eigenvalue λ_2 as a function of the number of edges added (x-axis: number of edges added, 0-20; y-axis: λ_2, from 0.95 to 1)
Figure 3.4: Convergence time (x-axis: number of edges added, 0-20; y-axis: convergence time in rounds, up to 5000)
3.5 Stochastic Voting Rule
3.5.1 Stochastic Model
In the stochastic model, the trust evaluation problem is interpreted as an estimation problem: T is the vector to be estimated and S is the estimate. The direct trust values are the observations used for the estimation. Accordingly, the direct trust value d_ij is modeled as a random variable conditioned on the real states of i and j and on the policy node i uses to derive the direct trust value.

One important issue for estimation is to correctly model the conditional probability of the direct trust value d_ij, which depends on the real trust values of i and j and on the policy used to derive it. The model of this conditional probability varies across trust evaluation systems. In this work, we study the general evaluation rule and assume the conditional probability is known to the evaluating node.
Because the direct trust value d_ij depends only on t_i and t_j, we have

    Pr[d_ij | t_l, l ∈ V; d_kl, (k, l) ∈ E and (k, l) ≠ (i, j)] = Pr[d_ij | t_i, t_j].   (3.22)

For four distinct nodes i, j, k, l in the network, we have

    Pr[d_ij, d_kl | t_i, t_j, t_k, t_l] = Pr[d_ij | t_i, t_j] Pr[d_kl | t_k, t_l].   (3.23)
We further assume that the a priori probabilities of the real trust values are known, i.e. P(t_i) is known, and that they are independent of each other. The value of P(t_i) depends on the environment node i is in. For instance, a node that is physically well-protected has a high probability of being good. In some cases, the P(t_i)'s for all nodes i ∈ V are independent and identically distributed. For instance, in a network with only GOOD and BAD nodes, the percentage of GOOD nodes in the network may be known in advance, denoted p_G. Thus P(t_i = 1) = p_G, ∀i ∈ V.
3.5.2 Estimation
Trust evaluation is considered as an estimation problem, where the observations are the d_ij's and the trust values are the estimates. We first consider an ideal case where all direct trust values (i.e., observations) are collected in one place. The posterior of T given the direct trust values is

    Pr[T | D] = Pr[D | T] Pr[T] / Σ_T Pr[D | T] Pr[T],   (3.24)
Figure 3.5: Bipartite Bayesian network for the trust graph of Figure 3.2 (variable nodes s_1, . . . , s_4; observation nodes d_12, d_21, d_31, d_23, d_24, d_43)
where D = {d_ij, ∀(i, j) ∈ E} is the set of all direct trust values and Pr[T] is the prior probability of the nodes' real trust values, which are assumed to be independent of each other. Therefore, the distribution of the estimated trust vector S is

    Pr[S] = Pr[T = S | D]
          = Pr[d_ij, (i, j) ∈ E | s_i, i ∈ V] Π_{i ∈ V} Pr[s_i] / Z,   (3.25)

where Z = Σ_S Pr[d_ij, (i, j) ∈ E | s_i, i ∈ V] Π_{i ∈ V} Pr[s_i] is the normalization factor.
3.5.3 Bayesian Network
According to the independence assumption in Eqn. (3.23), the estimation can be modeled as a bipartite Bayesian network. Fig. 3.5 gives the corresponding Bayesian network for the trust graph of Fig. 3.2.
The Bayesian network belongs to the family of graphical models. Graphical models have been used in a variety of fields, for example Markov random fields in signal processing and the Viterbi decoding algorithm in coding. The algorithms for these graphical models may be viewed as instances of the sum-product algorithm (also called belief propagation or probability propagation) and the max-product (or min-sum) algorithm. These algorithms operate by message passing on graphical models. Specific instances include Kalman filtering and smoothing, the forward-backward algorithm for hidden Markov models, probability propagation in Bayesian networks, and decoding algorithms for error correcting codes such as the Viterbi algorithm and the iterative decoding of LDPC codes. In recent years, researchers have unified all these algorithms in terms of the factor graph [42, 46].

Figure 3.6: General factor graph for trust graphs (variable nodes s_1, . . . , s_N; factor nodes f_ij, f_kl and their connections)

Figure 3.7: Factor graph for the trust graph of Figure 3.2 (variables s_1, . . . , s_4; factors f_12, f_21, f_31, f_23, f_24, f_43)
The trust estimation can also be interpreted via factor graphs, introduced in [23] (and there called “normal graphs”). The variables are the s_i's and the factor nodes are Pr[d_ij | s_i, s_j], denoted f_ij(s_i, s_j). The factor graph of a general trust graph is shown in Fig. 3.6, and Figure 3.7 shows the factor graph of Figure 3.2. For autonomic networks, the corresponding trust graph is usually sparse: each node is connected only to a small number of other nodes.

We are interested in the marginal probability distribution Pr[s_i] for each node i:
    Pr[s_i] = Σ_{∼{s_i}} Pr[S] = Σ_{∼{s_i}} Π_{(i,j) ∈ E} Pr[d_ij | s_i, s_j] Π_{i ∈ V} Pr[s_i] / Z,   (3.26)
where ∼{s_i} = S \ {s_i}. Notice that in Eqn. (3.26) and the rest of this section we assume discrete random variables; our arguments extend easily to continuous random variables by replacing summations with integrals. Now define f_ij(s_i, s_j) = Pr[d_ij | s_i, s_j]; then

    Pr[s_i] = Σ_{∼{s_i}} Π_{(i,j) ∈ E} f_ij(s_i, s_j) Π_{i ∈ V} Pr[s_i] / Z.   (3.27)
The computation of the marginal probability distributions is an instance of the sum-product algorithm, which can be implemented by message passing. Let the message sent from the factor node f_ij to the variable s_i be µ_{f_ij→i}; this message is the function

    µ_{f_ij→i}(s_i) = Σ_{s_j} f_ij(s_i, s_j) µ_{j→f_ij}(s_j),   (3.28)

where µ_{j→f_ij} (a function of s_j) is the message that arrives at f_ij from the variable s_j, defined as

    µ_{j→f_ij}(s_j) = Π_{k ∈ N̂_j \ {i}} µ_{f_jk→j}(s_j),   (3.29)

where N̂_j = {k | (j, k) ∈ E or (k, j) ∈ E}.

Figure 3.8: A tree trust graph (4 nodes; arc weights 1, 1, −1)

Figure 3.9: The tree factor graph of Figure 3.8
If the factor graph has no cycles, it is efficient to begin the message computation from the leaves and to successively compute messages as their required “input” messages become available. In this way, each message is computed exactly once, and we obtain the a posteriori marginal probabilities of all the s_i's simultaneously. Figure 3.8 is an example of a trust graph whose corresponding factor graph is a tree, as shown in Figure 3.9. Suppose the trust values are binary, i.e. s_i ∈ {−1, 1}, the a priori probability of a node being GOOD is known to be 0.75, and the conditional probability of d_ij is defined as in the binary example of Sec. 3.5.1, shown in Table 3.1, where we assume the right-hand side of Eqn. (3.2) is equal to 0.8.

Table 3.1: The conditional probability Pr(d_ij | s_i, s_j)

    (s_i, s_j):   (1, 1)   (1, −1)   (−1, 1)   (−1, −1)
    d_ij = 1:      0.8      0.2       0.5       0.5
    d_ij = −1:     0.2      0.8       0.5       0.5
The message passing procedure for this tree example is shown in Figure 3.10. Each message µ is represented as the pair [µ(1), µ(−1)], scaled such that µ(1) + µ(−1) = 1. The final result in Step 6 is the a posteriori probability Pr(s_i | d_ij, (i, j) ∈ E) for i = 1, . . . , 4.
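For a graph this small, the marginals of Eqn. (3.27) can be checked by brute-force enumeration, which is what the sketch below does (the edge set, observed d values, and prior are illustrative; on a tree, message passing reproduces these values efficiently):

    from itertools import product

    p_G = 0.75
    prior = {1: p_G, -1: 1 - p_G}

    def factor(d):
        # Pr(d_ij | s_i, s_j) as in Table 3.1, for an observed value d.
        return {(1, 1): 0.8 if d == 1 else 0.2,
                (1, -1): 0.2 if d == 1 else 0.8,
                (-1, 1): 0.5, (-1, -1): 0.5}

    edges = {(1, 0): factor(1), (2, 0): factor(1), (1, 3): factor(-1)}

    def marginals(n_nodes):
        weight = {}
        for s in product([1, -1], repeat=n_nodes):
            w = 1.0
            for (i, j), f in edges.items():
                w *= f[(s[i], s[j])]
            for si in s:
                w *= prior[si]
            weight[s] = w
        Z = sum(weight.values())
        return [sum(w for s, w in weight.items() if s[i] == 1) / Z
                for i in range(n_nodes)]

    print(marginals(4))   # Pr(s_i = 1 | observations) for each node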
However, trust graphs are usually cyclic, and thus the factor graph generally has cycles. In this case we can use iterative algorithms: all messages are repeatedly updated according to a certain schedule, and the computation stops when the available time is over or when some other stopping condition is satisfied (e.g., the trust values change by less than a threshold). The results in the presence of cycles are usually only approximations. Nevertheless, it has been shown that the message passing algorithm can achieve near-optimal results in sparse graphs without short cycles (such as sparse graphs with girth greater than or equal to 6).

The message passing approach discussed in this subsection is a good approximation. In the following, we introduce a stochastic voting scheme which provides the exact optimal solution [7, 32].
Figure 3.10: Message passing for the tree factor graph (Steps 1-6; each message is a pair [µ(1), µ(−1)], starting from the priors [0.75, 0.25] in Step 1 and ending with the a posteriori probabilities in Step 6)
3.5.4 Markov Random Field
The distribution of S in fact has a Markovian type property, as stated in the following lemma.

Lemma 5 The distribution of the random vector S satisfies the property that the probability of the estimated trust value of a given node i, s_i, given the estimated trust values of all the other nodes in the network, is the same as the probability of s_i given only the estimated trust values of the neighbors of i:

    Pr[s_i | S \ {s_i}] = Pr[s_i | s_j, ∀j ∈ N̂_i].   (3.30)
Proof According to the definition of conditional probability,

    Pr[s_i | S \ {s_i}] = Pr[S] / Pr[S \ {s_i}].

Substituting from Eqn. (3.25), we get

    Pr[s_i | S \ {s_i}] = Pr[d_kl, (k, l) ∈ E | s_k, k ∈ V] Π_{k ∈ V} Pr[s_k]
                          / Σ_{s_i} Pr[d_kl, (k, l) ∈ E | s_k, k ∈ V] Π_{k ∈ V} Pr[s_k].   (3.31)

According to the definition of conditional probability, we have

    Pr[d_kl, (k, l) ∈ E | s_k, k ∈ V] =
        Pr[d_ij, d_ji, j ∈ N̂_i | s_k, k ∈ V]
        · Pr[d_kl, (k, l) ∈ E, k ≠ i, l ≠ i | s_k, k ∈ V; d_ij, d_ji, j ∈ N̂_i].   (3.32)
Because of the independence assumption in Eqn. (3.23), we have

    Pr[d_kl, (k, l) ∈ E | s_k, k ∈ V] =
        Pr[d_ij, d_ji, j ∈ N̂_i | s_i, {s_j, j ∈ N̂_i}]
        · Pr[d_kl, (k, l) ∈ E, k ≠ i, l ≠ i | s_k, k ∈ V, k ≠ i].   (3.33)
Therefore, Eqn. (3.31) can be written as

    Pr[s_i | S \ {s_i}] = Pr[d_ij, d_ji, j ∈ N̂_i | s_i, {s_j, j ∈ N̂_i}] Pr[s_i] / Z_i,   (3.34)

where Z_i = Σ_{s_i} Pr[d_ij, d_ji, j ∈ N̂_i | s_i, {s_j, j ∈ N̂_i}] Pr[s_i]. The last equation shows that the conditional probability depends only on the trust values of the neighbors of node i.
As opposed to a Markov chain, which has the Markov property with respect to time, Eqn. (3.34) exhibits the Markovian property in the space dimension. A distribution with such a property is called a Markov random field (MRF).

The well-known Hammersley-Clifford theorem [41] proves the equivalence between an MRF on a graph and the Gibbs distribution. We can write the distribution of S in Gibbs form as follows:

    Pr[S] = exp{ Σ_{(i,j) ∈ E} ln Pr[d_ij, d_ji | s_i, s_j] + Σ_{i ∈ V} ln Pr[s_i] } / Z.   (3.35)
3.5.5 Stochastic Voting
We have proved that the PMF of S is a Markov random field. Thus we define the stochastic rule as follows:

    Pr[s_i(k + 1) | s_j(k), j ∈ N̂_i] = Π_{j ∈ N̂_i} Pr[d_ij, d_ji | s_i(k + 1), s_j(k)] · Pr[s_i(k + 1)] / Z_i(k),   (3.36)

where Z_i(k) is the normalization factor

    Z_i(k) = Σ_{s_i(k+1)} Π_{j ∈ N̂_i} Pr[d_ij, d_ji | s_i(k + 1), s_j(k)] Pr[s_i(k + 1)].   (3.37)
3.5.5.1 Update Sequence
Our evaluation rule is essentially an updating rule. Another component of an updating rule is the update sequence. We list three possible types of update sequences:

• Synchronous updates: trust values of all nodes are updated at the same time.

• Ordered asynchronous updates: trust values are updated one at a time, in a predefined order.

• Random asynchronous updates: trust values are updated one at a time, and the updated node is chosen randomly according to some distribution.

In the autonomic environment, it is very difficult to achieve synchronization, which requires complicated algorithms and substantial control messaging; the system should therefore use asynchronous updates. Ordered updates require extra control as well, so we study only random asynchronous updates, although the following analyses are also valid for ordered asynchronous updates with only a few modifications. In random asynchronous updates, the probability that node i is chosen as the target is denoted q_i, with Σ_{i ∈ V} q_i = 1.
3.5.5.2 Markov Chain Interpretation
Our trust evaluation rule in the form of Eqn. (3.36) has the obvious Markov property: the value of s_i at time k + 1 depends only on s_j, j ∈ N̂_i, at time k, and is independent of all earlier history.

The state of the Markov chain at time k is a configuration of S; at time k + 1, one s_i changes while all other entries remain unchanged. Combining Eqn. (3.36) with random asynchronous updates, the stochastic voting rule can be viewed as a Markov chain whose states are all the possible configurations of the vector S. Suppose at time k we are at state S = [s_1, . . . , s_i, . . . , s_N]. Denote the transition probability from state S to state S̄^i = [s_1, . . . , s̄_i, . . . , s_N], where s̄_i = −s_i, by p_{S,S̄^i}; then we have

    p_{S,S̄^i} = q_i Pr[s̄_i | S].   (3.38)

The probability of staying at S at time k + 1 is then

    p_{S,S} = 1 − Σ_{i ∈ V} p_{S,S̄^i}.   (3.39)
Figure 3.11: Part of the Markov chain. Suppose S(k) = [1, −1, 1, 1]; then S(k + 1) either flips one element of S(k) or stays at S(k).
Figure 3.11 is an example of the Markov chain for a network with four nodes, such as the one in Figure 3.2. We depict only one state transition due to space constraints. Suppose at time k, S(k) = [1, −1, 1, 1], i.e. the state at the top of Figure 3.11. Each outgoing edge represents a state transition; the one labeled s_i means a transition from S to S̄^i. For instance, the leftmost transition flips the value of s_1.
3.5.5.3 Convergence
Having developed the probabilistic interpretation of our local voting rule in terms of a Markov chain, in this subsection we study the convergence of the Markov chain. We will show that the Markov chain converges, and we give an explicit formula for the stationary distribution. First, we give a lemma used in the proof of convergence.

Lemma 6 The Markov chain with transition probabilities defined in Eqns. (3.38) and (3.39) is irreducible and aperiodic, given q_i > 0, ∀i ∈ V.

This lemma is easy to verify by showing that all states have a self-loop with nonzero probability and that all states communicate with each other under the condition on the q_i's.
Furthermore, the Markov chain is finite with 2^N states, so it is a regular Markov chain. It is known that a regular Markov chain has a unique stationary distribution π. Therefore, our voting rule does converge under the mild condition q_i > 0, ∀i ∈ V. In order to derive the stationary distribution, we introduce the notion of a reversible Markov chain.

Definition 1 A Markov chain is reversible if

    π_S p_{S,R} = π_R p_{R,S}  for all states R, S,   (3.40)

where π_S is the probability of being in state S at the steady state.
In other words, for a reversible Markov chain at the steady state, the process looks the same forward as backward; hence it is said to be reversible (notice that not all regular Markov chains are reversible). Furthermore, the following lemma, which is the converse of Definition 1, provides a way to compute the stationary distribution [4].

Lemma 7 Suppose that π is a probability distribution satisfying Eqn. (3.40); then π is the unique stationary distribution and the chain is reversible.
Proof This is true because Eqn. (3.40), sometimes called the detailed balance equation, implies

    Σ_S π_S p_{S,R} = π_R Σ_S p_{R,S} = π_R  for all R,   (3.41)

and therefore π satisfies the balance equation of the stationary distribution.
Define

    M(S) = Π_{(i,j) ∈ E} Pr[d_ij | s_i, s_j] Π_{i ∈ V} Pr[s_i]   (3.42)

and a distribution π on the states of the Markov chain as

    π_S = M(S) / Z.   (3.43)

Then we have the following theorem.
Theorem 8 For the stochastic voting rule defined by Eqn. (3.36), using random asynchronous updates with q_i > 0, ∀i ∈ V:

1. the stochastic voting rule converges to a steady state with a unique stationary distribution;

2. the distribution π_S = M(S)/Z is this unique stationary distribution.

Proof Convergence has been verified based on Lemma 6. To prove that π is the stationary distribution, we just need to check that π satisfies Eqn. (3.40), according to Lemma 7.
Figure 3.12: Transitions between S and R: p_{S,R} = q_i Pr[s̄_i | S] and p_{R,S} = q_i Pr[s_i | R].
Consider now two adjacent states S and R in the Markov chain:

    S = [s_1, . . . , s_i, . . . , s_N]  and  R = [s_1, . . . , s̄_i, . . . , s_N].

We know that

    p_{R,S} = q_i Pr[s_i | R] = q_i Pr[d_ij, d_ji, j ∈ N̂_i | s_i, {s_j, j ∈ N̂_i}] Pr[s_i] / Z_i,   (3.44)

    p_{S,R} = q_i Pr[s̄_i | S] = q_i Pr[d_ij, d_ji, j ∈ N̂_i | s̄_i, {s_j, j ∈ N̂_i}] Pr[s̄_i] / Z_i.   (3.45)

Notice that since the transitions between R and S just flip s_i, the normalization factor Z_i is the same for p_{R,S} and p_{S,R}. Fig. 3.12 shows the transitions between R and S. We need only verify Eqn. (3.40), i.e.

    (M(S)/Z) p_{S,R} = (M(R)/Z) p_{R,S},   (3.46)

or

    p_{S,R} / p_{R,S} = M(R) / M(S).   (3.47)

We know that

    p_{S,R} / p_{R,S} = Pr[d_ij, d_ji, j ∈ N̂_i | s̄_i, {s_j, j ∈ N̂_i}] Pr[s̄_i]
                        / (Pr[d_ij, d_ji, j ∈ N̂_i | s_i, {s_j, j ∈ N̂_i}] Pr[s_i])
                      = M(R) / M(S).

Thus Eqn. (3.40) is satisfied and π is the stationary distribution of the stochastic voting rule.
Let the vector SS denote the trust vector S at the steady state. The objective of trust evaluation is to find an SS that is as close as possible to the real trust vector T.
3.5.6 Binary Example
In this section, we consider a special model: the binary-error model. Assume that t_i, s_i ∈ {1, −1}, i.e., a node in the network is either GOOD or BAD. Assume the a priori probability of a node being good is known as p_G, i.e., Pr[s_i = 1] = p_G, and the probability that a GOOD node has a wrong direct trust value on a neighbor is p_e. For BAD nodes, we assume that they always provide the opposite direct trust, with probability of error p_e as well. In other words, if node i is BAD and j is GOOD, then d_ij = −1 with probability 1 − p_e. Therefore, we obtain a generalized direct trust model:

    Pr[d_ij ≠ t_i t_j] = p_e   (3.48)

for both GOOD and BAD nodes.
If i has a correct direct trust value on j, then d_ij t_i t_j = 1, and d_ij t_i t_j = −1 if i has a wrong direct trust value on j. We define parameters a, b, α and β such that

    a + b · 1 = ln(1 − p_e),
    a + b · (−1) = ln p_e,   (3.49)

and

    α + β · 1 = ln p_G,
    α + β · (−1) = ln(1 − p_G).   (3.50)
The stochastic voting rule Eqn. (3.36) can then be written as

    Pr[s_i(k + 1) | s_j(k), j ∈ N̂_i] =
        exp{ Σ_{j ∈ N̂_i} (a + b (d_ij + d_ji) s_i(k + 1) s_j(k)) + α + β s_i(k + 1) } / Z_i(k),   (3.51)

where Z_i(k) is the normalization factor. Solving the linear equations (3.49) and (3.50) gives

    a = (ln(1 − p_e) + ln p_e)/2,  b = (ln(1 − p_e) − ln p_e)/2,

and

    α = (ln(1 − p_G) + ln p_G)/2,  β = (ln(1 − p_G) − ln p_G)/2.
Assume that |N̂_i| is identical for every node i; for instance, if G is an undirected 2-D lattice, then |N̂_i| = 4, ∀i ∈ V. The stochastic voting rule Eqn. (3.51) can then be written as

    Pr[s_i(k + 1) | s_j(k), j ∈ N̂_i] =
        exp{ Σ_{j ∈ N̂_i} b (d_ij + d_ji) s_i(k + 1) s_j(k) + β s_i(k + 1) } / Z_i(k).   (3.52)
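A minimal sketch of Eqn. (3.52) with random asynchronous updates, run as a Gibbs-sampler-style simulation (the ring topology and parameter values are our illustrative assumptions, not the dissertation's 2-D lattice experiments):

    import numpy as np

    rng = np.random.default_rng(2)
    N, b, beta, steps = 20, 0.8, 0.0, 20000
    nbrs = [((i - 1) % N, (i + 1) % N) for i in range(N)]   # ring neighborhoods
    d = np.ones((N, N))                     # virtuous network: all votes equal 1
    s = rng.choice([-1, 1], size=N)         # random initial trust values

    for _ in range(steps):
        i = rng.integers(N)                 # random asynchronous update
        field = sum(b * (d[i, j] + d[j, i]) * s[j] for j in nbrs[i]) + beta
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # Pr[s_i = +1 | neighbors]
        s[i] = 1 if rng.random() < p_plus else -1

    print("fraction of nodes with s_i = 1:", (s == 1).mean())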
We investigate properties of the estimated trust values when the voting rule reaches the steady state. In the binary case, the performance criterion can be defined in a more direct way, as the percentage of correct estimates P_correct:

    P_correct = E[# of i with SS_i = T_i] / N = E[ 1 − ||SS − T||_1 / (2N) ],

where ||SS − T||_1 = Σ_{i ∈ V} |SS_i − T_i|. The last equality holds because SS_i, T_i ∈ {1, −1}. Before discussing properties of the steady state, we introduce an important model of local interactions of magnets in physics: the Ising model.
3.5.6.1 Ising Model and Spin Glasses
For those familiar with statistical physics, it is easy to link the stochastic voting rule Eqn. (3.52) with the Ising model [70]. The Ising model describes the interaction of magnetic moments or “spins” of particles, where some particles seek to align with one another (ferromagnetism), while others try to anti-align (antiferromagnetism). In the Ising model, s_i is the orientation of the spin at particle i: s_i = 1 or −1 indicates that the spin at i is “up” or “down” respectively. A Hamiltonian, or energy, for a configuration S is given by

    H(S) = − Σ_{(i,j)} J_ij s_i s_j − mH Σ_i s_i.   (3.53)
The first term represents the interaction between spins; the second term represents the effect of the external (applied) magnetic field. The probability of configuration S is given by

    Pr[S] = e^(−H(S)/(kT)) / Z,   (3.54)

where T is the temperature and k is the Boltzmann constant. In the Ising model, the local interaction “strengths” J_ij are all equal to a constant J, which is either 1 or −1. The Ising model has been extensively studied ever since Ernst Ising published his results in the 1920s. In recent years, an extension of the Ising model called the Edwards-Anderson model of spin glasses has been used to study local interactions with independently random J_ij [55], which corresponds to d_ij + d_ji in our voting rule. The rich literature in statistical physics will no doubt help us understand our voting model at the steady state.
3.5.6.2 Virtuous Network
Now we return to our discussion of trust at the steady state. We start with the simplest case: a virtuous network, where all nodes are good and always have full confidence in their neighbors, so t_i = 1, ∀i ∈ V and d_ij = 1, ∀(i, j) ∈ E. Then the stationary distribution π is exactly the same as that of the Ising model with

    b = 1/(2kT)  and  β = mH/(kT).   (3.55)

Since all nodes are good with t_i = 1 and SS_i is either 1 or −1, the percentage of correct estimates can also be written as

    P_correct = (E[⟨SS⟩] + N) / (2N),

where ⟨SS⟩ = Σ_{i ∈ V} SS_i. In the terminology of physics, ⟨SS⟩ is called the total magnetization. It is known that when the external field H > 0, E[⟨SS⟩] is positive, and when H < 0, it is negative.
We use simulations to study the value of P_correct with respect to the parameters β and b. In this section, the network topology for all simulations is a two-dimensional lattice with periodic boundary. The number of nodes is 100, and each node takes its four nearest nodes as neighbors. We chose the lattice because most theoretical results for the Ising model are for the 2-D lattice. In Sec. 3.5.6.4, we will discuss the effect of network topology.
Fig. 3.13 and Fig. 3.14 show the percentage of correct estimation as a function of b for β negative, positive, and zero. When β > 0 (p_G < 0.5), the percentage of correct estimation is less than one half. Such a result is obviously undesirable, since it is even worse than choosing the trust values at random, which achieves a mean of 0.5. It is therefore infeasible to use a positive threshold. This conclusion is rather surprising, given the natural thought of setting a threshold when using a voting rule. On the other hand, for β = 0 (p_G = 0.5) or β < 0 (p_G > 0.5), if b > 0.6 then P_correct is close to 1. Therefore, the threshold β must be non-positive.

Figure 3.13: P_correct vs. b with β < 0 and β > 0 (curves for β = 1 and β = −1; x-axis: degree of certainty b; y-axis: correct probability P_correct)

Figure 3.14: P_correct vs. b with β = 0 (x-axis: degree of certainty b; y-axis: correct probability P_correct; random phase for small b, deterministic phase for large b)
Without our effort of deriving the stationary distribution π, the conclusion that β cannot be greater than zero would not be obvious from the local voting rule alone. This simple result elucidates the power and necessity of analytical studies.
The other interesting property is the phase transition phenomenon observed in Fig. 3.14 when b is in [0.4, 0.5]. Phase transitions in the Ising model have been extensively studied by physicists. For the Ising model on a two-dimensional lattice with H = 0, there exists a so-called critical temperature T_c. When the temperature T is above T_c, all spins behave nearly independently (no long-range correlation), whereas when the temperature is below T_c, all spins tend to stay the same (i.e., cooperative behavior). In terms of the total magnetization ⟨SS⟩, this observation is expressed as

    E[⟨SS⟩] = 0 if T > T_c,  and  E[⟨SS⟩] ≠ 0 if T < T_c.   (3.56)

For a 2-D lattice, the critical temperature can be accurately calculated as 2kT_c = 2/ln(1 + √2) = 2.269. By the relation b = 1/(2kT) in Eqn. (3.55), this corresponds to a critical value b_c = ln(1 + √2)/2 ≈ 0.44, which is consistent with our simulation result, where b_c lies in [0.4, 0.5].

If we look more closely at the interval b < 0.4, the estimated trust value of each node keeps changing all the time and looks like random jumping, while when b is above the critical value, all values converge steadily to 1. We therefore call the first interval the random phase and the second the deterministic phase.
The discovery of a phase transition in our voting rule is quite surprising, given that the rule itself is very simple. More importantly, the fact that a small change in a parameter may result in totally opposite performance of the voting rule proves the necessity of analysis before applying any distributed algorithm. A rule with arbitrary or unverified parameters may ruin the performance of the whole system.

Figure 3.15: P_correct vs. b with link errors p_e (curves for p_e = 0.01, 0.05, 0.15; β = 0; x-axis: degree of certainty b; y-axis: correct probability P_correct)
As we have discussed, due to the uncertainty and incompleteness of trust evidence, d_ij should be modeled as a random variable rather than always being 1. Thus in a virtuous network, the distribution of d_ij is

    Pr(d_ij = 1) = 1 − p_e;  Pr(d_ij = −1) = p_e,   (3.57)

abbreviated as d ~ (1 − p_e, p_e). This model corresponds to the ±J model in spin glasses, where J ~ (p, 1 − p).
We again investigate the phase transition. As shown in Fig. 3.15, the phase transition still occurs when β = 0. However, as p_e increases, the wrong votes with value −1 gradually destabilize the votes of value 1. It then becomes harder to keep the s_i's equal to 1, which means that b_c becomes larger and the system more probably stays in the random phase under a high link error p_e. When p_e is large enough, as shown in the figure for p_e = 0.15, the system always stays in the random phase.
In [55], the authors theoretically studied phase transitions between the random and deterministic phases and introduced the replica symmetry method to solve them analytically. Based on this method, very good approximations of quantities such as E[⟨SS⟩] and E[SS_i²] can be derived. The mathematical machinery of the replica symmetry method is beyond the scope of this work, but it is definitely a very good direction for future work; explicit expressions for these quantities would provide even better guidance for network management and control design.
3.5.6.3 Adversary Model
The motivation of trust evaluation is to be able to identify good nodes and detect bad nodes. In this section, we study a network in the presence of adversaries, i.e., bad nodes. Three types of adversaries are considered:

• Independent: adversaries do not collude with each other and have normal power. We assume their probability of wrongly identifying whether a node is good or bad is p_e, the same as for good nodes. Once they identify a node as a friend, i.e. another bad node, they rate it with value 1, and vice versa.

• Collusive: adversaries know each other. They always vote for their friends with value 1 and vote for good nodes with value −1.

• Random: to confuse the evaluation rule, they randomly assign confidence values to others.
The voting rule used here is the one with threshold 0, while the following results can be extended to the case of a negative threshold. First consider independent adversaries. Define a new network $G'$ which has the same topology as the current network G and the same probability of error $p_e$, but with all nodes being good. Suppose the percentage of correct estimation for $G'$ is $P'_{correct}$; we then have the following theorem:

Theorem 9 The percentage of correct estimation in the presence of independent adversaries, $P_{correct}$, is equal to the percentage of correct estimation $P'_{correct}$ for the network $G'$ with all good nodes. $P_{correct}$ is independent of the number of adversaries.
Proof We know that the distribution of $d'_{ij}$ is $d'_{ij} \sim (1 - p_e, p_e)$, while the distribution of $d_{ij}$ is

$$d_{ij} \sim \begin{cases} (1 - p_e,\; p_e) & \text{if } t_i t_j = 1, \\ (p_e,\; 1 - p_e) & \text{if } t_i t_j = -1. \end{cases}$$

Therefore we have

$$d'_{ij} \sim d_{ij} t_i t_j. \qquad (3.58)$$

By definition,

$$P_{correct} = E[1 - |S - T|_1 / (2N)].$$

In probability theory, we know that

$$E[X] = E[E[X|Y]].$$

Now let us set $X = |S - T|_1$, $Y = \{d_{ij} t_i t_j\}_{(i,j) \in E}$ and $X' = |S' - T'|_1$, $Y' = \{d'_{ij}\}_{(i,j) \in E}$. So we are trying to prove

$$E[E[X|Y]] = E[E[X'|Y']]. \qquad (3.59)$$

From Eqn. (3.58), we have $Y \sim Y'$, so if we can prove

$$E[X|Y] = E[X'|Y'],$$

given $Y = Y'$, then Eqn. (3.59) is true. First consider the left-hand side,

$$E[X|Y] = \sum_S |S - T|_1\, e^{b \sum d_{ij} s_i s_j} = \sum_S \sum_i |s_i - t_i|\, e^{b \sum (d_{ij} t_i t_j)(s_i t_i)(s_j t_j)}.$$

The last equation holds because $t_i^2 = 1$, $\forall i$. Notice that

$$|s_i - t_i| = |t_i|\,|s_i - t_i| = |s_i t_i - t_i^2|$$

because $|t_i| = 1$ and $t_i^2 = 1$. If for all i we take $s'_i = s_i t_i$, and $d_{ij} t_i t_j = d'_{ij}$ (which is the assumption), we have

$$E[X|Y] = \sum_{S'} \sum_i |s'_i - t'_i|\, e^{b \sum d'_{ij} s'_i s'_j} = E[X'|Y'].$$
Figure 3.16: $P_c$ vs. the percentage of adversaries $P_m$, for all good nodes and for independent, collusive, and random adversaries. The link error $p_e = 0.05$ and the degree of certainty b = 1.
Therefore, the performance of our voting rule is independent of the number of adversaries in the case where all adversaries are independent. This is a very valuable property for a trust evaluation rule, and we can apply all the analyses of the last two subsections to a network with independent adversaries.
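The gauge transformation at the heart of the proof above is easy to verify numerically; the sketch below (with an arbitrary random graph and made-up parameters, purely for illustration) checks that the quantity $b \sum d_{ij} s_i s_j$ is invariant under $s'_i = s_i t_i$, $d'_{ij} = d_{ij} t_i t_j$:

    import numpy as np

    # Numerical check of the gauge argument in Theorem 9: transforming
    # observations by d'_ij = d_ij * t_i * t_j and estimates by s'_i = s_i * t_i
    # leaves sum d_ij s_i s_j unchanged, since t_i^2 = 1.
    rng = np.random.default_rng(2)
    N, p_e = 50, 0.05
    t = rng.choice([1, -1], size=N)                      # node types (-1: adversary)
    s = rng.choice([1, -1], size=N)                      # arbitrary estimates
    edges = [(i, j) for i in range(N) for j in range(i + 1, N) if rng.random() < 0.1]
    # an observation is correct (d_ij = t_i * t_j) w.p. 1 - p_e, flipped w.p. p_e
    d = {e: t[e[0]] * t[e[1]] * rng.choice([1, -1], p=[1 - p_e, p_e]) for e in edges}

    energy = sum(d[(i, j)] * s[i] * s[j] for (i, j) in edges)
    s_g = s * t                                          # gauge-transformed estimates
    d_g = {(i, j): d[(i, j)] * t[i] * t[j] for (i, j) in edges}
    energy_g = sum(d_g[(i, j)] * s_g[i] * s_g[j] for (i, j) in edges)
    assert energy == energy_g                            # identical, as in the proof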
We use simulations to compare the three types of adversaries. In each simulation, the only difference is the behavior of the bad nodes. Fig. 3.16 shows the simulation results. In order to verify Theorem 9, the curve for all good nodes is plotted as well. The curves for all good nodes and for independent adversaries nearly overlap except for some small random perturbations, and the performance with independent adversaries does not change with respect to the percentage of adversaries in the network. Both observations verify the conclusion of Theorem 9.
The curve for collusive adversaries performs better than the ones for all good nodes and independent adversaries. This is because, when we define the direct trust model for bad nodes, we assume independent adversaries. The worst performance is that of random adversaries, because our voting rule does not capture such malicious behavior.
3.5.6.4 Network Topology
So far the network topology used has been the two-dimensional lattice. In reality, however, a trust graph is not just a lattice. In this section we further investigate our trust evaluation rule with respect to network topology.
We use the rewiring small-world model proposed by Watts and Strogatz in [69] (the WS model). In the WS model, we start from a ring lattice with N = 100 nodes, each of degree k = 4. Each edge is rewired at random so as to create shortcuts, with the percentage of shortcuts being the parameter $P_{rw}$. This construction models the graph transition between regular lattices ($P_{rw} = 0$) and completely random graphs ($P_{rw} = 1$).
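For reference, the construction is available in standard graph libraries. The sketch below uses networkx's generator, in which each edge is rewired independently with probability p (used here in place of the shortcut fraction $P_{rw}$); it also reports the average path length, the quantity that drives the results below:

    import networkx as nx

    # Sketch of the WS construction; the connected variant retries until the
    # graph is connected so that path lengths are well defined.
    N, k = 100, 4
    for p_rw in [0.0, 0.01, 0.1, 0.5, 1.0]:
        G = nx.connected_watts_strogatz_graph(n=N, k=k, p=p_rw, tries=200, seed=42)
        # a shorter average path length is what speeds up trust spreading
        print(p_rw, nx.average_shortest_path_length(G))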
Figure 3.17: The effect of network topology. $P_{rw}$ is the percentage of shortcuts in the WS small-world model; the network with $P_{rw} \in [0.01, 0.1]$ has the small-world property. $p_e = 0.1$.

Our simulation results are shown in Fig. 3.17, where different curves represent different shortcut percentages $P_{rw}$. We observe that the performance improves as the model changes from regular lattices to random graphs. For instance, at b = 0.4, $P_{correct} = 0.55$ for regular lattices, while $P_{correct} = 0.85$ for random graphs. This is because a more random network has a shorter average distance between any two nodes; therefore, the number of hops trust information takes to reach any node in the network is small. The accuracy of trust information degrades with path length, so short spreading paths carry more accurate information and lead to better results. In particular, the most obvious improvement happens when $P_{rw}$ increases from 0.01 to 0.1, which corresponds to the small-world topology. Therefore, a few shortcuts in the network greatly improve the performance of the trust evaluation rule.

Figure 3.18: The effect of average degree (degree = 2, 6, 18) with $p_e = 0.1$.
We also investigated the effect of node degree, as shown in Fig. 3.18. The network is a 100-node ring lattice without shortcuts. The neighbors of a node are those within k hops, where the degree k varies. The simulations show that the performance improves as the degree increases. One reason is again that increasing the number of neighbors reduces the path length between any two nodes.
From the above discussion, the network topology clearly has a great influence on system performance. As future work, it would be interesting to study our trust evaluation rule under real trust network topologies, and to investigate what kind of network topology yields the best performance in terms of trust evaluation.
Chapter 4
Applications of Trust
What is it that trust management is supposed to achieve? It is uncontroversial, I believe, to claim that the ultimate goal is to enable the development of mutually beneficial cooperation, and it is natural to suggest that the role of trust management is to maximize trust between the parties and thereby provide a basis for cooperation to develop. The aim of trust management must be to facilitate beneficial cooperation and to avoid interactions that are not beneficial. Moreover, trust is often an important basis for cooperation. This idea is found in much of the trust literature. For example, in social science, Misztal points out three basic things that trust does in the lives of people: it makes social life predictable, it creates a sense of community, and it makes it easier for people to work together [51]. Trust also plays an important role in the cooperation of biological systems. The work by Nowak [56] states that cooperation among cells and genes is essential for life to evolve to a new level of organization. He modeled cooperation as a Prisoner's Dilemma-like game in which players acquire reputations, and found that if reputations spread quickly enough, they could increase the chances of cooperation. In this chapter, we investigate the evolution of cooperation in self-interested autonomic networks with the help of trust.
Besides cooperation, trust can also be applied to utility optimization problems. Many resource allocation and functionality allocation problems can be formulated as a constrained maximization of some utility function. The key mechanism of recent advances in Network Utility Maximization (NUM) applies optimization techniques so as to achieve decomposition by duality or the primal-dual method. The NUM approach is particularly useful for problems in multihop wireless networks: it helps design the most appropriate distributed algorithm and provides the analytical foundation for cross-layer protocol design. In this dissertation, we apply nodes' trust values in the optimizations. The effect of these trust values on the resulting protocols is that the trustworthiness of a node is automatically taken into account in the routing and scheduling problems. For example, packets will not be routed as frequently to suspicious nodes, or suspicious nodes will not be scheduled by the MAC protocol.
4.1 Trust and Cooperation: the Game Theoretic View
Autonomic networks rely on the collaboration of participating nodes for almost all their functions, for instance, to route data between source and destination pairs that are outside each other's communication range. However, because nodes are resource constrained, we deal with networks composed of users who are trying to maximize their own benefit from participation in the network. In the case of packet forwarding, the fundamental user decision is between forwarding or not forwarding data packets sent by other users. Given the constraints (mostly related to energy) that the user faces, there is a very real cost incurred when choosing to forward. So all users would like to send their own data packets, but not forward those of other users. Unfortunately, if all users were to do that, the network would collapse.
We assume that users want to be connected to as many other users as possible, directly (one-hop) or indirectly (multi-hop, through other users). In other words, by activating a communication link towards one of their neighbors, they gain access to the users with which that neighbor has activated his links, and so on, recursively. Meanwhile, activating links introduces cost: the more neighbors a user is connected to, the more packets he has to forward, which results in more communication cost. Our work is inspired by this fundamental tradeoff between gain and cost in the context of user collaboration in autonomic networks [33].
The conflict between the benefit from collaboration and the required cost of collaboration naturally leads to game-theoretic studies, where each node strategically decides the degree to which it volunteers its resources for the common good of the network. A user's payoff depends not only on whether he decides to collaborate, but also on whether his neighbors decide to collaborate. For instance, Srinivasan et al. [64] address the problem of cooperation among energy-constrained nodes and devise behavior strategies of nodes that constitute a Nash equilibrium. In [35], there is a link between two nodes if they agree to cooperate; these links are formed through one-to-one bargaining and negotiation.

Differently from previous work in the literature, we study collaboration based on the notion of coalitions [8]. The concept of users being connected to each other, and acquiring access to all the other users that each of them had access to so far, can be well captured by coalitional game theory (also known as cooperative game theory [57]). In coalitional game theory, the central concept is coalition formation, i.e., subsets of users that join forces and decide to act together. Players form coalitions to obtain the optimum payoffs. The key assumption that distinguishes cooperative game theory from non-cooperative game theory is that players can negotiate collectively [53].
4.1.1 Problem Formulation
Suppose there are N nodes¹ in the network. Define the set of nodes V = {1, 2, ..., N}. The communication structure of the network is represented by an undirected graph G(V, E), where a link between two nodes implies that they are able to communicate directly. Undirected links model the symmetric communications between neighboring nodes, because the willingness of both nodes is necessary to establish and maintain a link; thus the undirected links are also called pairwise links. For instance, in wireless networks, reliable transmissions require that the two nodes interact in order to avoid collisions and interference.

A communication link is established only if the two end nodes agree to collaborate with each other, i.e., they are directly connected with each other in G. Once the link is added, the two end nodes join one coalition and agree to forward all the traffic from each other. Note that indirect communication between two players requires a path connecting them. A path in G connecting $i_1$ and $i_m$ is a set of distinct nodes $\{i_1, i_2, \ldots, i_m\} \subseteq V$ such that $\{i_1 i_2, i_2 i_3, \ldots, i_{m-1} i_m\} \subseteq E$.

¹In this chapter, the terms node, player and user are interchangeable.
4.1.2 Coalitional Games
A coalition is a subset of nodes that is connected in the subgraph induced by the active edges. In other words, two users are in the same coalition if and only if there exists a path of active edges between them. The communication structure G gives rise to a partition of the node set into groups of nodes who can communicate with each other. A coalition of G is a subgraph $G' \subseteq G$ where $\forall i \in V(G')$ and $j \in V(G')$, $i \neq j$, there is a path in $G'$ connecting i and j, and for $i, j \in V(G')$, $ij \in G$ implies $ij \in G'$.

If two users of separate coalitions join, then the two coalitions merge into one. We are interested in the total productivity of the coalition formed by selfish nodes, how it is allocated among the individual nodes, and the stability of the coalition. These notions are captured by coalitional games.
Coalition formation has been widely studied in economics and sociology in the context of coalitional games [63, 20]. In our game, some nodes are not directly connected with each other; therefore, the game we consider has to take the communication constraints into account. Myerson [52] was the first to introduce a game associated with communication constraints, the constrained coalitional game, which incorporates both the possible gains from cooperation, as modeled by the coalitional game, and the restrictions on communication, reflected by the communication network.
An important concept in coalitional games is the characteristic function v [22]. Since the game we study has communication constraints, the characteristic function v is defined on a particular network rather than on a set of nodes as in general coalitional games, i.e., $v : G \to \mathbb{R}$ with the convention $v(\emptyset) = 0$. Notice that in our work the empty set $\emptyset$ represents a graph in which there is no link between any two nodes. v(G) is interpreted as the maximum payoff the network G can obtain given the network structure.

In our case, the value of v is the maximum aggregate of the net payoffs of all nodes in the graph:

$$v(G) = \sum_{i \in G} v_i(G), \qquad (4.1)$$

where $v_i(G)$ is the payoff node i gets when the network is G.
A payoff allocation rule $x : G \to \mathbb{R}^N$ describes how the value v(G) associated with each network is distributed to the individual nodes. $x_i(G)$ is the payoff of node i from network G under the characteristic function v. For a graph $G'$, a subgraph of G, define

$$x(G') = \sum_{i \in G'} x_i(G).$$

The payoff allocation is feasible if $x(G) \leq v(G)$ and efficient if $x(G) = v(G)$. In our case, the payoff may not be transferable, so the payoff allocation rule represents the payoff that each node receives from the network, i.e., $x_i(G) = v_i(G)$. It is easy to show that such a payoff allocation rule is feasible and efficient. We will discuss the stability of the constrained coalitional game in detail in Sect. 4.1.3.
As we have discussed, users obtain benefits by joining a coalition. A user's gain is explicitly defined by how he is connected to other users in the coalition. We assume that nodes always have information to send to other nodes in the network. Thus we assume node i potentially offers benefit $x_{ij}$ to node j if $(i, j) \in E$. Then we can easily write the characteristic function of the coalitional game as

$$v(G) = \sum_{(i,j) \in E} (x_{ij} + x_{ji}). \qquad (4.2)$$

Notice that $\forall i$, $v(\{i\}) = 0$. We denote the cooperative game defined from (4.2) as $\Gamma = (G, v)$.
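As a concrete illustration of Eqn. (4.2), the following sketch computes v(G) for a small network; the pairwise benefits $x_{ij}$ are hypothetical numbers used only to exercise the function:

    # A direct transcription of Eqn. (4.2): the value of a network is the sum
    # of x_ij + x_ji over its links; a singleton coalition is worth 0.
    def network_value(edges, x):
        """edges: iterable of (i, j) pairs; x: dict mapping (i, j) -> benefit x_ij."""
        return sum(x[(i, j)] + x[(j, i)] for (i, j) in edges)

    # three nodes on a path 1 - 2 - 3
    edges = [(1, 2), (2, 3)]
    x = {(1, 2): 2.0, (2, 1): 1.0, (2, 3): -0.5, (3, 2): 1.5}
    print(network_value(edges, x))  # (2 + 1) + (-0.5 + 1.5) = 4.0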
In the next section, we will investigate stable solutions for enforcing cooperation among nodes, and demonstrate two efficient methods for achieving such cooperation: negotiation and a trust mechanism.
4.1.3 Cooperation in Games
4.1.3.1 Cooperative games with negotiation
In this section, we investigate the impact of the games on collaboration in a network. First we start with a simple fact.

Lemma 10 If $\forall i, j$, $x_{ij} + x_{ji} \geq 0$, then $\Gamma = (G, v)$ is a superadditive game.
Proof Suppose S and T are two disjoint sets ($S \cap T = \emptyset$); then

$$v(S \cup T) = \sum_{i,j \in S \cup T} x_{ij} = \sum_{i,j \in S} x_{ij} + \sum_{i,j \in T} x_{ij} + \sum_{i \in S, j \in T} (x_{ij} + x_{ji}) = v(S) + v(T) + \sum_{i \in S, j \in T} (x_{ij} + x_{ji}) \geq v(S) + v(T).$$

The last inequality holds by our assumption that $x_{ij} + x_{ji} \geq 0$.
The main concern in cooperative games is how the total payoff from a partial or complete cooperation of the players is divided among the players. A payoff allocation is a vector $x = (x_i)_{i \in N}$ in $\mathbb{R}^N$, where each component $x_i$ is interpreted as the payoff allocated to player i. We say that an allocation x is feasible for a coalition S iff $\sum_{i \in S} x_i \leq v(S)$.
When we think of a reasonable and stable payoff, the first thing that comes to mind is a payoff that gives each coalition at least as much as the coalition could enforce by itself, without the support of the rest of the players. In this case, players cannot obtain better payoffs by forming separate coalitions different from the grand coalition N. The set of all such payoff allocations of the game $\Gamma = (G, v)$ is called the core, formally defined as the set of all n-vectors x satisfying the linear inequalities

$$x(S) \geq v(S) \quad \forall S \subseteq G, \qquad (4.3)$$
$$x(G) = v(G), \qquad (4.4)$$

where $x(S) = \sum_{i \in S} x_i$ for all $S \subseteq G$. If $\Gamma$ is a game, we denote its core by $C(\Gamma)$. It is known that the core may be empty; therefore, it is necessary to discuss the existence of the core for the game $\Gamma$. We first give the definition of a common family of games: convex games [22]. The convexity of a game can be defined in terms of the marginal contribution of each player, which plays the role of the first difference of the characteristic function v; convexity of v is defined in terms of the monotonicity of its first differences. The first difference (or the marginal contribution of player i) $d_i : 2^N \to \mathbb{R}$ of v with respect to player i is

$$d_i(S) = \begin{cases} v(S \cup \{i\}) - v(S) & \text{if } i \notin S, \\ v(S) - v(S \setminus \{i\}) & \text{if } i \in S. \end{cases}$$

A game is said to be convex if, for each $i \in N$, $d_i(S) \leq d_i(T)$ holds for any coalitions $S \subseteq T$.
Lemma 11 $\Gamma = (G, v)$ is a convex game.

Proof For $\Gamma$, $d_i(S) = \sum_{j \in S, j \neq i} (x_{ji} + x_{ij})$. Take two sets $S \subseteq T$:

$$d_i(T) - d_i(S) = \sum_{j \in T \cap S^c} (x_{ji} + x_{ij}) \geq 0.$$

The core of a convex game is nonempty [22], thus $C(\Gamma) \neq \emptyset$. By Lemma 11, we have the following theorem:

Theorem 12 $\Gamma = (G, v)$ has a nonempty core.
Now let us find one of the payoff allocations that lie in the core. For any pair of players (i, j), suppose the payoff allocation of the game between i and j is $(\hat{x}_{ij}, \hat{x}_{ji})$. Then we have the following:

Corollary 13 If the payoff allocation satisfies $\hat{x}_{ij} \geq 0$ and $\hat{x}_{ji} \geq 0$, then the payoff allocation $\hat{x}_i = \sum_{j \in \mathcal{N}_i} \hat{x}_{ij}$ is in the core $C(\Gamma)$.
Proof Take an arbitrary subset $S \subseteq N$:

$$\hat{x}(S) = \sum_{i \in S} \hat{x}_i = \sum_{i,j \in S} \hat{x}_{ij} + \sum_{i \in S, j \notin S} \hat{x}_{ij} \geq \sum_{i,j \in S} \hat{x}_{ij} = v(S).$$

The inequality holds because $\hat{x}_{ij} \geq 0$, $\forall i, j \in N$.
Because we only consider transferable utility games, $\hat{x}_{ij} + \hat{x}_{ji} = x_{ij} + x_{ji} \geq 0$. Therefore $(\hat{x}_{ij}, \hat{x}_{ji})$ can be constructed in the following way:

$$\hat{x}_{ij} = \begin{cases} x_{ij} & \text{if } x_{ij} \geq 0,\ x_{ji} \geq 0, \\ x_{ij} + \theta_{ij} x_{ji} & \text{if } x_{ij} < 0,\ x_{ji} > 0, \\ (1 - \theta_{ij}) x_{ij} & \text{if } x_{ij} > 0,\ x_{ji} < 0, \end{cases}$$

where $0 \leq \theta_{ij} = \theta_{ji} \leq 1$, and $\hat{x}_{ij} \geq 0$ is achieved by carefully choosing $\theta_{ij}$.
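The construction above is easy to implement. In the sketch below, $\theta_{ij}$ is chosen as the smallest value that lifts the negative side of the pair to zero; this is one admissible choice among many, and it assumes $x_{ij} + x_{ji} \geq 0$ as in Lemma 10:

    # Sketch of the split in Corollary 13; returns (x_hat_ij, x_hat_ji) with both
    # parts nonnegative whenever x_ij + x_ji >= 0 (assumed, per Lemma 10).
    def negotiated_split(x_ij, x_ji):
        if x_ij >= 0 and x_ji >= 0:
            return x_ij, x_ji                  # no transfer needed
        if x_ij < 0:                           # j compensates i
            theta = -x_ij / x_ji               # smallest theta with x_ij + theta*x_ji = 0
            return x_ij + theta * x_ji, (1 - theta) * x_ji
        theta = -x_ji / x_ij                   # symmetric case: i compensates j
        return (1 - theta) * x_ij, x_ji + theta * x_ij

    print(negotiated_split(-1.0, 3.0))  # (0.0, 2.0): sum preserved, both >= 0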
Obviously, the payoff allocation provided in Corollary 13 is a set of points in the core, while there generally exist more points in the core that are not covered by the corollary. However, this solution indicates a way to encourage cooperation in the whole network. The players who have positive gain can negotiate with their neighbors by sacrificing a certain gain (offering the partial gain $\theta_{ij} x_{ji}$). Though they cannot achieve their best possible payoff, they can set up a cooperative relation with their neighbors. This is definitely beneficial for the players who negotiate and sacrifice, since without cooperation they cannot get anything. This solution is also efficient and scalable, because players only need to negotiate with their direct neighbors.

Thus we have established cooperative games among nodes in the network, and described an efficient way to achieve cooperation throughout the network. In the next section, we discuss solutions employing trust mechanisms, which do not require negotiation and for which the assumption $x_{ij} + x_{ji} \geq 0$ can be relaxed.
4.1.3.2 Trust mechanism
Trust is a useful incentive for encouraging nodes to collaborate. Nodes who refrain from cooperation get lower trust values and will eventually be penalized, because other nodes tend to cooperate only with highly trusted ones. From Fig. 4.1 and the corresponding system equations, the trust values of each node eventually influence its payoff. Let us assume, for node i, that the loss of not cooperating with node j is a nondecreasing function of $x_{ji}$, because the more j loses, the more effort j undertakes to reduce the trust value of i. Denote the loss for i of being non-cooperative with j as $l_{ij} = f(x_{ji})$, with f(0) = 0. For simplicity, assume the characteristic function is a linear combination of the original payoff and the loss:

$$v_t(S) = \sum_{i,j \in S} x_{ij} - \sum_{i \in S, j \notin S} f(x_{ji}). \qquad (4.5)$$

The game with characteristic function $v_t$ is denoted $\Gamma_t = (G, v_t)$. We then have

Theorem 14 If $\forall i, j$, $x_{ij} + f(x_{ji}) \geq 0$, then $C(\Gamma_t) \neq \emptyset$ and $x_i = \sum_{j \in V} x_{ij}$ is a point in $C(\Gamma_t)$.
Proof First we prove $\Gamma_t$ is a convex game, given $x_{ij} + f(x_{ji}) \geq 0$. $\forall i \in V$ in $\Gamma_t$,

$$d_i(S) = \sum_{j \in S, j \neq i} (x_{ij} + x_{ji}) - \sum_{k \notin S} f(x_{ki}) + \sum_{j \in S, j \neq i} f(x_{ij}).$$

Let $S \subseteq T$:

$$d_i(T) - d_i(S) = \sum_{j \in T \cap S^c} (x_{ij} + x_{ji}) + \sum_{k \in T \cap S^c} f(x_{ki}) + \sum_{j \in T \cap S^c} f(x_{ij}) = \sum_{j \in T \cap S^c} \big[ (x_{ij} + f(x_{ji})) + (x_{ji} + f(x_{ij})) \big] \geq 0.$$

Therefore $C(\Gamma_t)$ is nonempty. Next, we verify that $x_i = \sum_{j \in \mathcal{N}_i} x_{ij}$ is in the core. For any $S \subseteq G$,

$$\sum_{i \in S} x_i - v_t(S) = \sum_{i \in S} \sum_{j \in V} x_{ij} - \Big( \sum_{i,j \in S} x_{ij} - \sum_{i \in S, k \notin S} f(x_{ki}) \Big) = \sum_{i \in S, j \notin S} (x_{ij} + f(x_{ji})) \geq 0.$$

Apparently, the payoff $x_i = \sum_{j \in \mathcal{N}_i} x_{ij}$ does not need any payoff negotiation. Thus we showed that by introducing a trust mechanism, all nodes are induced to collaborate with their neighbors without any negotiation.
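The following sketch checks Theorem 14 numerically on a three-node example. The loss function f and the pairwise payoffs are hypothetical choices made only so that the hypothesis $x_{ij} + f(x_{ji}) \geq 0$ holds:

    from itertools import combinations

    # f(x) = max(x, 0) is one nondecreasing loss with f(0) = 0 (an assumption).
    def f(x):
        return max(x, 0.0)

    # v_t(S) from Eqn. (4.5): payoffs inside S minus losses toward outsiders.
    def v_t(S, nodes, x):
        inside = sum(x.get((i, j), 0.0) for i in S for j in S)
        loss = sum(f(x.get((j, i), 0.0)) for i in S for j in nodes if j not in S)
        return inside - loss

    nodes = [1, 2, 3]
    x = {(1, 2): 2.0, (2, 1): -1.0, (2, 3): 1.0, (3, 2): 0.5}  # x_ij + f(x_ji) >= 0
    alloc = {i: sum(x.get((i, j), 0.0) for j in nodes) for i in nodes}
    for r in range(1, len(nodes) + 1):
        for S in combinations(nodes, r):
            assert sum(alloc[i] for i in S) >= v_t(set(S), nodes, x)
    print("allocation", alloc, "lies in the core of the trust game")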
In this section, we introduced two approaches that encourage all nodes in the network to cooperate with each other: 1) negotiation among neighbors; 2) a trust mechanism. We proved that both approaches lead to a non-empty core for the cooperative game played in the network. However, we have only considered the two approaches separately, and the results are based on static settings. The more interesting problems are how the two intertwine and how the dynamics between them converge to a cooperative network; these are discussed in the next section.
4.1.4 Dynamics of Cooperation
We have analyzed the effect of a trust mechanism on the formation of cooperation. However, what we concentrated on in the previous section was the final impact of trust on the payoffs at steady state. In this section, we discuss two dynamic behaviors in the system: trust evaluation and game evolution.
4.1.4.1 System model
In our model, each node has a self-defined playing strategy, denoted $\sigma_i$ for node i. Another characteristic of each node is its trust values, which depend on the opinions of other nodes. Trust values of a node can be different for different players: for instance, $t_{ji}$ and $t_{ki}$ are the trust values of i provided by distinct players j and k, and possibly $t_{ji} \neq t_{ki}$. Fig. 4.1 is a block diagram demonstrating how nodes interact with their neighbors, where the payoff of node i after playing games is denoted $x_i$. The procedure is summarized by the following three rules:

Figure 4.1: System operation block diagram for a typical node i: neighbors' votes determine the trust values $t_{1i}, \ldots, t_{Ni}$; the strategies of i and its neighbors determine the payoff $x_i$; and the payoff drives the update of the strategy $\sigma_i$.

• Strategy updating rule: as shown in Figure 4.1, nodes update strategies based on their own payoffs. They tend to choose strategies that obtain the maximum payoffs.
• Payoff function: the payoffs are functions of the strategies of all participants. For a specific node, the payoff depends only on the strategies of its neighbors and itself.

• Trust evaluation rule: trust values are computed based on votes, which are provided by neighbors and are related to the history (strategies and trust values) of the target node. Since trust values eventually have an impact on the payoff of the node, there is a dotted line in Figure 4.1 from trust values to payoff to represent this implicit relation.
For simplicity, we assume the system is memoryless: all values depend only on parameter values at most one time step in the past. Therefore, the system can be modeled as a discrete-time system:

$$\sigma_i(t+1) = f_i(x_i(t), \sigma_i(t), \sigma_j(t), t_{ij}(t)) \qquad (4.6)$$
$$t_{ik}(t) = g_i(t_{ij}(t), v_{jk}(t)) \quad \forall k \in N \qquad (4.7)$$
$$x_i(t) = h_i(\sigma_i(t), \sigma_j(t)) \qquad (4.8)$$
$$v_{ij}(t) = p_i(\sigma_j(t), t_{ji}(t)) \qquad (4.9)$$

where j ranges over all neighbors of i, and $v_{ij}$ is the value node i votes for node j.
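Since the maps $f_i$, $g_i$, $h_i$ and $p_i$ are left abstract here, the following toy instantiation of one synchronous step of (4.6)-(4.9) is only a sketch; every concrete choice in it (signed votes, trust-weighted averaging of votes, a cooperate-iff-nonnegative-payoff rule) is an assumption made purely for illustration:

    # One synchronous step of the discrete-time system (4.6)-(4.9), with toy maps.
    def time_step(sigma, trust, nbrs):
        # (4.9) v_ij: node i's vote on neighbor j, from j's strategy and t_ji
        votes = {(i, j): (1.0 if sigma[j] == 1 else -1.0) * trust[(j, i)]
                 for i in nbrs for j in nbrs[i]}
        # (4.7) t_ik: i aggregates neighbors' votes on k, weighted by t_ij
        new_trust = {}
        for i in nbrs:
            for k in nbrs[i]:
                reports = [trust[(i, j)] * votes[(j, k)]
                           for j in nbrs[i] if (j, k) in votes]
                new_trust[(i, k)] = (sum(reports) / len(reports)
                                     if reports else trust[(i, k)])
        # (4.8) x_i: payoff counts mutual cooperation with neighbors
        payoff = {i: sum(sigma[i] * sigma[j] for j in nbrs[i]) for i in nbrs}
        # (4.6) f_i: keep cooperating only while the payoff stays nonnegative
        new_sigma = {i: 1 if payoff[i] >= 0 else 0 for i in nbrs}
        return new_sigma, new_trust, payoff

    # example: three nodes on a triangle, everyone initially trusted
    nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    sigma = {0: 1, 1: 1, 2: 0}
    trust = {(i, j): 1.0 for i in nbrs for j in nbrs[i]}
    print(time_step(sigma, trust, nbrs))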
4.1.4.2 Game evolution
As shown in Sect. 4.1.3.2, the trust mechanism drives selfish nodes to sacrifice part of their benefits and thus promotes cooperation. In this section, the procedure and dynamics of such cooperation evolution are studied.

Here we assume that nodes either cooperate or do not cooperate with their neighbors: $\sigma_{ij} = 1$ denotes that node i cooperates with its neighbor j, and $\sigma_{ij} = 0$ denotes that it does not. We assume that the payoff when one of them does not cooperate is fixed at (0, 0), and is $(x_{ij}, x_{ji})$ when both cooperate. If $x_{ij} < 0$, we call the link (i, j) a negative link for node i, and when the opposite holds, a positive link. Since all nodes are selfish, nodes tend to cooperate with neighbors on positive links, while they do not wish to cooperate with neighbors on negative links. Meanwhile, the trust mechanism is employed, which aims to serve as the incentive for cooperation. In this part, we assume that revocation and nullification of revocation can propagate throughout the network.
In our evolution algorithm, each node maintains a record of its past experience using the variable $\delta_i(t)$. First define $x_{a,i}(t)$ as the payoff i gains at time t and $x_{e,i}(t)$ as the expected payoff i would get at time t if i always chose cooperation with all neighbors. Notice that the expected payoff can be different each time, since it depends on whether the neighbors cooperate at that specific time. Then compute the cumulative difference

$$\delta_i(t) = \delta_i(t-1) + (x_{a,i}(t) - x_{e,i}(t)), \qquad (4.10)$$

i.e., the total payoff in the past minus the expected payoff if the node always cooperates. Each node chooses its strategy on the negative links by the following rule:

• if $\delta_i(t) < 0$, node i chooses to cooperate, i.e., $\sigma_{ij} = 1$, $\forall j \in \mathcal{N}_i$;
• if $\delta_i(t) \geq 0$, $\sigma_{ij} = 0$ for $j \in \mathcal{N}_i$ with $x_{ij} < 0$.

Notice that at time 0, $\delta_i(0) = 0$. That is to say, initially all nodes choose not to cooperate on the negative links, since they are inherently selfish. There are two other conditions that force non-cooperation strategies:

• nodes do not cooperate with neighbors that have been revoked;
• nodes do not cooperate with non-cooperative neighbors.

To summarize, as long as one of the aforementioned conditions is satisfied, nodes choose not to cooperate.
Consider node i, with the following initial settings:

• all the trust states are set to $s_{ij} = 1$, $\forall j \in N$;
• the variable $\delta_i(0) = 0$.

Node i chooses strategies and updates variables in each time step $t = 1, 2, \ldots$:

1. The strategy in the game with neighbor j is set according to the following rule:
• for negative links ($x_{ij} < 0$), choose the non-cooperation strategy ($\sigma_{ij} = 0$) if $\delta_i(t-1) \geq 0$;
• if $s_{ij} = 0$, $\sigma_{ij} = 0$;
• for all neighbors, $\sigma_{ij} = 0$ iff $\sigma_{ji} = 0$ (cooperation is bilateral);
• otherwise $\sigma_{ij} = 1$.

2. For all $j \in \mathcal{N}_i$, update the trust state $s_{ij}$ if one of the following three conditions is satisfied; otherwise keep the previous state:
• if i accepts a revocation on node j, $s_{ij} = 0$;
• if the revocation has been nullified for more than $\tau$ consecutive steps, set $s_{ij} = 1$;
• if $\sigma_{ji} = 0$, set $s_{ij} = 0$.

3. Compute the actual payoff $x_{a,i}(t)$ and expected payoff $x_{e,i}(t)$, then obtain the cumulative difference $\delta_i(t)$ by Eqn. (4.10).

Figure 4.2: Algorithm for game evolution modeling trust revocation
Since we allow and encourage nodes to rectify, i.e., to change their strategies from non-cooperation to cooperation, we define a temporal threshold $\tau$ in the trust propagation rule. Instead of keeping a state at 0 forever once it is switched to 0, we allow the nullification of revocation (switching back to state 1) under the condition that the revocation has been nullified for $\tau$ consecutive time steps. $\tau$ also represents the penalty for being non-cooperative: $\tau$ needs to be large enough that non-cooperative nodes would rather switch to cooperation than remain penalized. However, a large $\tau$ also reduces the payoff.

The detailed algorithm is shown in Fig. 4.2.
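A compact sketch of the loop of Fig. 4.2 is given below. It is illustrative only: the propagation delay is compressed to one step, the expected payoff is approximated by the all-cooperate payoff $x_i$, and the payoffs $x_{ij}$ are random numbers. With a sufficiently large $\tau$ the population converges to all-cooperation, as Theorem 15 predicts:

    import numpy as np

    # Simplified game-evolution loop; clean_since plays the nullification clock.
    rng = np.random.default_rng(3)
    N, T, tau = 20, 600, 50
    x = rng.uniform(-0.5, 1.5, size=(N, N))    # pairwise payoffs x_ij
    np.fill_diagonal(x, 0.0)
    delta = np.zeros(N)                        # cumulative difference, Eqn. (4.10)
    clean_since = np.full(N, tau)              # steps since a node last defected

    for t in range(T):
        intent = np.ones((N, N), dtype=int)            # declared strategies sigma_ij
        intent[(x < 0) & (delta[:, None] >= 0)] = 0    # defect on negative links
        defected = intent.min(axis=1) == 0
        trusted = clean_since >= tau                   # trust state restored after tau
        active = intent & intent.T & trusted[None, :] & trusted[:, None]
        clean_since = np.where(defected, 0, clean_since + 1)
        # expected payoff approximated by x_i, the all-cooperate payoff
        delta += (active * x).sum(axis=1) - x.sum(axis=1)

    frac = (active.sum() - np.trace(active)) / (N * (N - 1))
    print("fraction of active links at the end:", frac)  # converges to 1.0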
Suppose the total payoff of node i, if every node cooperates, is $x_i = \sum_{j \in \mathcal{N}_i} x_{ij}$. We have the following:

Theorem 15 If $\forall i \in N$, $x_i > 0$, then there exists $\tau_0$ such that for a fixed $\tau > \tau_0$:

1. The iterated game converges to a Nash equilibrium.
2. $\delta_i(t)/t \to 0$ as $t \to \infty$.
3. i cooperates with all its neighbors for t large enough.
Proof Nodes without negative links will always cooperate, thus $\delta_i \equiv 0$. Therefore, we only consider nodes with negative links. First we prove that, for t large enough, $\delta_i(t) < 0$. Define, for node i, the absolute sums of positive and negative payoffs as $x_i^{(p)}$ and $x_i^{(n)}$ respectively. Then

$$x_i = x_i^{(p)} - x_i^{(n)}.$$

Therefore the first payoff for node i is $x_{a,i}(1) = x_i^{(p)} > 0$ and $\delta_i = x_i^{(n)}$. Define $T_{max}$ as the maximum propagation delay in the network. Then at $t = T_{max}$ all of i's neighbors revoke i, because at time t = 1, i did not cooperate; the payoff is now $x_{a,i}(T_{max}) = 0$ and $\delta_i(T_{max}) = \delta_i(T_{max} - 1) - x_i$. Node i continues to receive 0 payoff until all neighboring nodes have used up the penalty interval $\tau$. It is easy to show that if $\tau$ is set large enough, i eventually gets a negative $\delta_i$.

If i follows the strategy rules in Fig. 4.2, i starts to cooperate with all neighbors. The difference between the actual payoff and the expected payoff is 0 from then on. Therefore $\delta_i(t)/t \to 0$ as $t \to \infty$.

Assume node i deviates to non-cooperation; then it will accumulate a negative cumulative payoff difference, as discussed above. So node i has no incentive to deviate from cooperation. Therefore the game converges to its Nash equilibrium with all nodes cooperating.
We have also performed simulation experiments with our evolution algorithm. In the simulations, we did not assume the condition $\forall i$, $x_i > 0$; instead, the percentage of negative links is the simulation parameter. We can report that without this condition, our iterated game with the trust scheme still achieves very good performance. Figure 4.3 shows that cooperation is highly promoted under the trust mechanism. In Figure 4.4, the average payoffs of the algorithm with and without strategy update are compared, which explains why nodes converge to cooperation.

Figure 4.3: Percentage of cooperating pairs vs. the probability of non-cooperative links, with and without trust.

Figure 4.4: Average payoffs vs. the percentage of non-cooperative links: initial setting, with strategy update, and without strategy update.
4.2 Trust Aware Cross-Layer Optimization
The problem of resource allocation in wireless networks has been a growing area of interest over the past decade. Recent advances in the area of NUM-driven cross-layer design [16] have led to current efforts on top-down development of next-generation wireless network architectures. By linking the decomposition of the NUM problem to different layers of the network stack, we are able to design protocols, based on the optimal NUM-derived algorithms, that provide much better performance than current network protocols.
There are new challenges in applying such approaches to wireless networks compared to wireline networks. Wireless spectrum is scarce, thus it is important to use the wireless channel efficiently. The wireless channel is a shared medium where the transmissions of users interfere with each other. The channel capacity is "elastic" (time-varying), and the contention over such shared network resources imposes a fundamental constraint on resource allocation. Furthermore, for multihop wireless networks, it is critical to perform resource control in a distributed manner. However, because of the interference among wireless channels, a scheduling policy has to resolve the contention between the various attempted transmissions, which requires global information. All these challenges cause interdependencies across users and network layers. In spite of these difficulties, there have been significant recent developments in optimization-based approaches that result in loosely coupled cross-layer solutions [45].
In recent years, network security has become more and more important. Security in wireless networks is even more critical, because communication channels are shared media. We will use "trust weights" in the optimizations and show that trust helps to achieve better performance in the presence of adversaries. These trust weights are developed by our community-based monitoring approach and are disseminated via efficient methods so that they are available in a timely manner to all nodes that need them.

We consider a set of flows that share the resources of a fixed wireless network. Each flow is described by its source-destination node pair, with no a priori established routes. The effect of the trust weights on the resulting protocols is that in the scheduling problems (whether at the MAC or the routing protocol) the trustworthiness of a node is automatically considered and used. For example, packets will not be routed as frequently to suspicious nodes, or suspicious nodes will not be scheduled by the MAC protocol.

We present decentralized algorithms which obtain a set of feasible solutions for the NUM problem, and compare the decentralized solutions with the results of a centralized algorithm which is optimal and solves the NUM problem. Furthermore, we compare our results with schemes which do not take trust into consideration.
4.2.1 System Model
We consider a multihop wireless network with N nodes. Let $\mathcal{L}$ denote the set of links (i, j) such that transmission from node i to node j is possible. The interference constraints among transmission links will be described later. The wireless network can be described as a graph $(\mathcal{N}, \mathcal{L})$, where $\mathcal{N}$ is the set of N nodes.
4.2.1.1 Trust
We study the utility optimization problem with consideration of trust. All previous work on NUM problems assumes that nodes function correctly. For instance, intermediate nodes always successfully forward all packets, and they follow the routing and scheduling protocols they are given. However, nodes do not always function correctly in reality. They may be compromised by attackers, their communications may be blocked or interfered with by attackers, or they may simply malfunction. Wireless networks are especially vulnerable to attacks because of the inherent properties of the shared wireless medium. Therefore, we believe it is important to take trust into consideration in NUM problems. We define the trust value of node i as $v_i$.
4.2.1.2 Data flow and utility function
There are F flows that share the network resources, and each flow f is associated with a source node $s_f$ and a destination node $d_f$. The set of all F flows is denoted $\mathcal{F}$. Let $x_f$ be the rate at which data is sent from $s_f$ to $d_f$, over possibly multiple paths and multiple hops.

In our model, a flow can use different routes in the network. Suppose the set of routes for flow f is $R_f$. A route $r \in R_f$ is a set of nodes on the route, i.e., $i \in r$ means that node i is on route r. We define the trustworthiness of route r as the product of the trust values of the nodes on the route, that is,

$$v_r = \prod_{i \in r} v_i.$$

Suppose the percentage of data transmitted on route r is denoted $p_r$.
Figure 4.5: An example network with 2 routes for the flow from s to d: 1/3 of the data passes through nodes 1, 2, 3 and 2/3 through nodes 1, 4, 5.
The aggregate trust value that flow f sees is defined as

$$g_f = \sum_{r \in R_f} p_r v_r. \qquad (4.11)$$

For instance, Fig. 4.5 shows flow f from node s to d. The flow uses two routes: one third of the data passes through nodes 1, 2 and 3, while the rest passes through nodes 1, 4 and 5. The aggregate trust value of flow f is

$$g_f = v_1 \Big( \frac{1}{3} v_2 v_3 + \frac{2}{3} v_4 v_5 \Big). \qquad (4.12)$$

$g_f$ represents the quality of the routes that flow f traverses.
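The computation of $g_f$ in Eqns. (4.11)-(4.12) can be transcribed directly. In the sketch below, the trust values $v_i$ are made-up numbers for the topology of Fig. 4.5:

    from math import prod

    # v_r: product of trust values of the nodes on a route (per the definition above)
    def route_trust(route, v):
        return prod(v[i] for i in route)

    # g_f: traffic-share-weighted sum of route trusts, Eqn. (4.11)
    def aggregate_trust(routes, v):
        return sum(p_r * route_trust(r, v) for (r, p_r) in routes)

    v = {1: 0.9, 2: 0.8, 3: 0.95, 4: 0.6, 5: 0.7}
    routes = [([1, 2, 3], 1/3), ([1, 4, 5], 2/3)]  # (nodes on route, share p_r)
    g_f = aggregate_trust(routes, v)
    # matches Eqn. (4.12): v_1 * (1/3 * v_2 * v_3 + 2/3 * v_4 * v_5)
    assert abs(g_f - 0.9 * ((0.8 * 0.95) / 3 + 2 * (0.6 * 0.7) / 3)) < 1e-12
    print(g_f)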
Users prefer data to be transmitted on paths with high trust values. Therefore, we define the utility function for flow f as a function of the data rate $x_f$ and the trust values of the paths it travels. Each flow is associated with a utility function $U_f(x_f g_f)$, which reflects the "utility" to flow f when it can successfully transmit at data rate $x_f$. We assume that $U_f(\cdot)$ is strictly concave, nondecreasing and continuously differentiable. For two flows of the same data rate, the one passing paths with better trust values has higher utility. Define $\hat{x}_f = x_f g_f$ as the effective data rate of flow f.
4.2.1.3 Interference model and stability
In this subsection, we describe the interference model. The data rate of each link depends on the transmission power and the interference from other transmissions. We denote by $u(G, P, BW, N_0)$ the rate function, which models the rate at which a particular link transmits, given the power assignment $P = [P_{ij}, (i, j) \in \mathcal{L}]$, the gain matrix G, the noise power $N_0$, and the link bandwidth BW. We assume G, BW and $N_0$ are constants, so the rate function depends only on P.

We let $\mu = \{\mu_{ij}\}_{(i,j) \in \mathcal{L}}$ denote the rate vector at which data can be transferred over each link $(i, j) \in \mathcal{L}$; then $\mu = u(P)$. We let $\Gamma$ denote a bounded region of $|\mathcal{L}|$ dimensions, representing the set of $\mu$ that can be achieved in a given time slot under the interference constraints. Notice that $\Gamma$ may not be convex. We let $\hat{\Gamma} := \mathrm{Co}\{\Gamma\}$, the convex hull of $\Gamma$; $\hat{\Gamma}$ can be achieved by timesharing between different rate vectors in $\Gamma$, and is convex, closed and bounded.

We define the capacity region $\Lambda$ of the network as the largest set of rate vectors x such that $x \in \Lambda$ is a necessary condition for network stability under any scheduling policy. We assume that nodes keep a queue for each flow. Let $\mu^f_{ij}$ denote the amount of capacity on link (i, j) allocated to flow f. We have the following definition of the capacity region.
Definition 2 (Capacity Region) The capacity region $\Lambda$ of the network contains the set of flow rates x for which there exists $\mu$ satisfying the following [54]:

1. $\mu^f_{ij} \geq 0$ for all $(i, j) \in \mathcal{L}$ and for all $f \in \mathcal{F}$.
2. For all $f \in \mathcal{F}$ and for all $i \in \mathcal{N}$, we have
$$\sum_{j:(i,j) \in \mathcal{L}} \mu^f_{ij} - \sum_{j:(j,i) \in \mathcal{L}} \mu^f_{ji} - x_f \mathbb{1}(i = s_f) \geq 0. \qquad (4.13)$$
3. $\{\sum_f \mu^f_{ij}\} \in \hat{\Gamma}$.

$\mathbb{1}(i = s_f)$ is an indicator function: $\mathbb{1}(i = s_f) = 1$ if i is the source of flow f, and $\mathbb{1}(i = s_f) = 0$ otherwise. Condition 2) is the flow constraint that must hold at each node, and condition 3) captures the interference constraints.
4.2.2 Utility Optimization and Dual Decomposition
Our goal is to design a scheduling mechanism such that the flow rates $x_f$ solve the following optimization problem:

$$\text{maximize}_{\hat{x}} \quad \sum_f U_f(\hat{x}_f) \qquad (4.14)$$
$$\text{subject to} \quad x \in \Lambda, \qquad (4.15)$$
$$\hat{x}_f = g_f x_f \quad \text{for all } f \in \mathcal{F}. \qquad (4.16)$$

We refer to the above as the primal problem. Due to the strict concavity of $U_f(\cdot)$ and the convexity of the capacity region, there exists a unique optimizer of the primal problem, which we denote $\hat{x}^*$.
In order to provide a distributed solution to the problem, we use the technique of dual decomposition [12]. By decomposing the optimization problem, we provide decentralized algorithms which solve the underlying optimization problem. As we will see, some of the optimal algorithms require centralized information and thus are not feasible to implement in a distributed way.
Notice that the variables $g_f$ and $\hat{x}_f$ are coupled by the second constraint, Eqn. (4.16). To decouple them, we take a log change of variables and constants: $\hat{x}'_f = \log \hat{x}_f$, $g'_f = \log g_f$, $x'_f = \log x_f$, and $U'_f(\hat{x}'_f) = U_f(e^{\hat{x}'_f})$. Now the primal problem is separable.
After the log change, we decompose the primal problem by defining Lagrange multipliers $\lambda^f_i$ and $\gamma_f$ associated with the stability constraints stated in Eqn. (4.15) and (4.16). We get the following dual function:

$$L(\lambda, \gamma, \hat{x}, x, \mu, g) = \sum_f \max_{x'_f} \big[ \gamma_f x'_f - \lambda^f_{s_f} e^{x'_f} \big] + \sum_f \max_{\hat{x}'_f} \big[ U'_f(\hat{x}'_f) - \gamma_f \hat{x}'_f \big] \qquad (4.17)$$
$$\quad + \max_g \sum_f \gamma_f g'_f \qquad (4.18)$$
$$\quad + \max_{\mu \in \hat{\Gamma}} \sum_{(i,j) \in \mathcal{L}} \sum_{f \in \mathcal{F}} \mu^f_{ij} (\lambda^f_i - \lambda^f_j). \qquad (4.19)$$

The dual objective function is

$$h(\lambda, \gamma) = \sup_{x, \hat{x}, \mu, g} L(\lambda, \gamma, \hat{x}, x, \mu, g). \qquad (4.20)$$

For given $\lambda$ and $\gamma$, there are now three decoupled maximization problems we can solve separately: source rate control, routing and scheduling. By solving these independently, we can produce $\hat{x}'^*(\lambda, \gamma)$, $x'^*(\lambda, \gamma)$, $\mu^*(\lambda, \gamma)$ and $g^*(\lambda, \gamma)$. Given these values, we can then solve the dual problem by minimizing $h(\lambda, \gamma)$ over $\lambda, \gamma$. Because the capacity region $\Lambda$ is a convex set, there is no duality gap between the primal and the dual. In the rest of this section, we discuss the dual decomposition techniques in detail.
The data rate of flow f is determined by the maximization over $x'$ in Eqn. (4.17). It is important to note that each source node adjusts its rate using only the local information $\lambda^f_{s_f}$ and $\gamma_f$; thus the source rate control can be distributed across source nodes.

Eqn. (4.18) determines the route:

$$g' = \arg\max_g \sum_f \gamma_f g'_f. \qquad (4.21)$$

It follows that the optimal routing for flow f is to always choose the route with the highest trust value. Since the trust values are fixed, routing can be decided off-line.
Eqn. (4.19) determines the schedule:

$$\mu = \arg\max_{\mu \in \hat{\Gamma}} \sum_{(i,j) \in \mathcal{L}} \sum_{f \in \mathcal{F}} \mu^f_{ij} (\lambda^f_i - \lambda^f_j). \qquad (4.22)$$

To maximize Eqn. (4.22), the term inside the second summation should be assigned to the single flow f for which $(\lambda^f_i - \lambda^f_j)$ is maximized. Therefore we have

$$\mu^f_{ij} = \mu_{ij} \quad \text{if } f = \arg\max_f (\lambda^f_i - \lambda^f_j), \qquad (4.23)$$

and $\mu^f_{ij} = 0$ otherwise. The schedule of link rates in Eqn. (4.23) is the same as the back-pressure scheduler introduced by Tassiulas [68]. Notice that the maximization in Eqn. (4.23) is performed over $\hat{\Gamma}$, which requires centralized knowledge. In the next section, we propose a distributed algorithm which optimizes the network utility.
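As a small illustration of the per-link decision in Eqn. (4.23), the sketch below picks the flow with the largest multiplier differential on a link; the multiplier values are made up, and the full scheduler must additionally respect the interference region $\hat{\Gamma}$:

    # Per-link part of the back-pressure rule of Eqn. (4.23): serve the flow with
    # the largest differential lambda_i^f - lambda_j^f, if it is positive.
    def backpressure_flow(lmbda, i, j):
        """lmbda[f][n]: multiplier of flow f at node n. Returns flow to serve or None."""
        diffs = {f: l[i] - l[j] for f, l in lmbda.items()}
        f_star = max(diffs, key=diffs.get)
        return f_star if diffs[f_star] > 0 else None

    lmbda = {"f1": [3.0, 1.0, 0.0], "f2": [2.0, 2.5, 0.5]}
    print(backpressure_flow(lmbda, 0, 1))  # 'f1': differential 2.0 beats -0.5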
4.2.3 Distributed Algorithm
The scheduling sub-problem discussed in the last section requires global knowledge of the rate vector, which becomes the bottleneck for solutions in wireless networks. In this section, we study a distributed implementation of the maximization problem. We assume that time is divided into slots. At each time slot, source nodes choose the flow data rates and the scheduling policy selects the data to be forwarded on each link.
x
t
f
[t + 1] = argmax
x

f
_
?
f
[t]x
t
f
??
f
s
f
[t]e
x

f
_
, (4.24)
ˆ x
t
f
[t + 1] = argmax
ˆ x

f
_
U
t
f
(ˆ x
t
f
) ??
f
[t]ˆ x
t
f
_
. (4.25)
112
The subgradient of $h(\lambda, \gamma)$ is given by

$$\frac{\partial h}{\partial \lambda^f_i} = \sum_{j:(i,j) \in \mathcal{L}} \mu^f_{ij} - \sum_{j:(j,i) \in \mathcal{L}} \mu^f_{ji} - e^{x'_f}\, \mathbb{1}(i = s_f), \qquad (4.26)$$
$$\frac{\partial h}{\partial \gamma_f} = g'_f + x'_f - \hat{x}'_f. \qquad (4.27)$$

We can then use the following subgradient method to solve the dual problem:

$$\lambda^f_i[t+1] = \Big[ \lambda^f_i[t] - \alpha[t] \Big( \sum_{j:(i,j) \in \mathcal{L}} \mu^f_{ij}[t] - \sum_{j:(j,i) \in \mathcal{L}} \mu^f_{ji}[t] - e^{x'_f[t]}\, \mathbb{1}(i = s_f) \Big) \Big]^+, \qquad (4.28)$$
$$\gamma_f[t+1] = \Big[ \gamma_f[t] - \alpha[t] \big( g'_f[t] + x'_f[t] - \hat{x}'_f[t] \big) \Big]^+, \qquad (4.29)$$

where $\alpha[t]$, $t = 1, 2, \ldots$, is a sequence of positive step sizes, and $\mu^f_{ij}[t]$, $g'_f[t]$, $\hat{x}'_f[t]$ and $x'_f[t]$ are defined as earlier for time slot t. According to Theorem 2.3 in [62], since there is no duality gap, the converged value of $x'$ solves the maximization problem and is the unique optimal solution.
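The following sketch shows one iteration of the updates (4.28)-(4.29) for a single flow. All numbers and the helper names are illustrative assumptions; in the real algorithm $\mu$, $x'$, $\hat{x}'$ and $g'$ would come from the scheduler and the primal maximizations (4.24)-(4.25):

    import numpy as np

    # One dual iteration of Eqns. (4.28)-(4.29) for one flow f.
    def dual_step(lmbda, gamma, mu_out, mu_in, x_log, x_hat_log, g_log, src, alpha):
        # lmbda[i]: multiplier at node i for flow f, projected onto [0, inf)
        grad_l = mu_out - mu_in - np.exp(x_log) * (np.arange(len(lmbda)) == src)
        lmbda = np.maximum(lmbda - alpha * grad_l, 0.0)
        gamma = max(gamma - alpha * (g_log + x_log - x_hat_log), 0.0)
        return lmbda, gamma

    N = 5
    lmbda = np.ones(N); gamma = 1.0
    mu_out = np.array([1.0, 0.5, 0.2, 0.0, 0.0])   # per-node service toward the sink
    mu_in = np.array([0.0, 0.8, 0.4, 0.3, 0.2])
    lmbda, gamma = dual_step(lmbda, gamma, mu_out, mu_in,
                             x_log=np.log(0.6), x_hat_log=np.log(0.5),
                             g_log=np.log(0.9), src=0, alpha=0.05)
    print(lmbda, gamma)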
Notice that the scheduling problem of Eqn. (4.22) requires global information. We consider a solution to the scheduling problem that is coupled to the power control problem, as in Lin and Shroff [44]. The link transmission powers determine an explicit form for the allowable rate region $\hat{\Gamma}$. The scheduling problem is posed as an optimization problem whose solution gives the transmission powers. At optimality, each node should transmit at full power or shut off, and should transmit to at most one neighbor; the solution is thus distributed.
BIBLIOGRAPHY
[1] K. Aberer. P-Grid: A self-organized access structure for P2P information systems. In Proceedings of the 9th International Conference on Cooperative Information Systems, 2001.
[2] Daniel Aguayo, John Bicket, Sanjit Biswas, Glenn Judd, and Robert Morris. Link-level measurements from an 802.11b mesh network. In SIGCOMM '04: Proceedings of the 2004 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 121-132, Portland, Oregon, USA, 2004.
[3] R. Ahlswede, N. Cai, S. R. Li, and R. W. Yeung. Network information flow. IEEE Transactions on Information Theory, 46(4):1204-1216, 2000.
[4] David Aldous and James Allen Fill. Reversible Markov Chains and Random Walks on Graphs. Monograph in preparation.
[5] Autonomic communication. http://www.autonomic-communication.org/.
[6] Dirk Balfanz, D. K. Smetters, Paul Stewart, and H. Chi Wong. Talking to
strangers: Authentication in ad-hoc wireless networks. In Proceedings of Sym-
posium on Network and Distributed Systems Security (NDSS’02), San Diego,
California, February 2002.
[7] John S. Baras and Tao Jiang. Cooperative games, phase transitions on graphs and distributed trust in MANET. In Proceedings of the 2004 IEEE Conference on Decision and Control (CDC), pages 93-98, Bahamas, December 2004.
[8] John S. Baras and Tao Jiang. Cooperation, Trust and Games in Wireless Networks, pages 183-202. Birkhäuser, June 2005.
[9] Thomas Beth, Malte Borcherding, and Birgit Klein. Valuation of trust in open networks. In Proceedings of the 3rd European Symposium on Research in Computer Security - ESORICS'94, pages 3-18, 1994.
[10] Matt Blaze, Joan Feigenbaum, John Ioannidis, and Angelos D. Keromytis. The role of trust management in distributed systems security. Secure Internet Programming: Security Issues for Mobile and Distributed Objects, pages 185-210, 1999.
[11] Matt Blaze, Joan Feigenbaum, and Jack Lacy. Decentralized trust management. In Proceedings of the 1996 IEEE Symposium on Security and Privacy, pages 164-173, May 1996.
[12] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 1st edition, 2004.
[13] Pierre Brémaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Texts in Applied Mathematics 31. Springer-Verlag New York, Inc., 1999.
[14] Sergey Brin, Lawrence Page, R. Motwani, and Terry Winograd. The PageRank citation ranking: bringing order to the web. Technical Report 1999-0120, Computer Science Department, Stanford University, 1999.
[15] S. Capkun, L. Buttyán, and J. P. Hubaux. Small worlds in security systems: an analysis of the PGP certificate graph. In Proceedings of the ACM New Security Paradigms Workshop 2002, pages 28-35, Norfolk, Virginia Beach, USA, September 2002.
[16] Mung Chiang, Steven H. Low, A. Robert Calderbank, and John C. Doyle.
Layering as optimization decomposition: A mathematical theory of network
architectures. Proceedings of the IEEE, 95(1):255–312, January 2007.
[17] Philip Chou, Yunnan Wu, and Kamal Jain. Network coding for the internet. In Proceedings of the IEEE Communication Theory Workshop, Capri, 2004.
[18] Ian Clarke, Oskar Sandberg, Brandon Wiley, and Theodore W. Hong. Freenet:
A distributed anonymous information storage and retrieval system. Lecture
Notes in Computer Science, 2009:46, 2001.
[19] Richard Draves, Jitendra Padhye, and Brian Zill. Routing in multi-radio, multi-
hop wireless mesh networks. In MobiCom ’04: Proceedings of the 10th annual
international conference on Mobile computing and networking, pages 114–128,
Philadelphia, PA, USA, 2004.
[20] Bhaskar Dutta and Matthew O. Jackson, editors. Networks and Groups: Models
of Strategic Formation. Studies in Economic Design. Springer-Verlag Berlin
Heidelberg, 2003.
[21] Laurent Eschenauer. On trust establishment in mobile ad-hoc networks. Mas-
ter’s thesis, University of Maryland, College Park, 2002.
[22] Ferenc Forgo, Jeno Szep, and Ferenc Szidarovszky. Introduction to the Theory of
Games: Concepts, Methods, Applications. Kluwer Academic Publishers, 1999.
[23] G. David Forney, Jr. Codes on graphs: Normal realizations. IEEE Transactions
on Information Theory, 47(2):520–548, February 2001.
[24] Diego Gambetta. Can we trust trust? Electronic edition, http://www.sociology.ox.ac.uk/papers/trustbook.html, 2000. In Trust: Making and Breaking Cooperative Relations, Gambetta, Diego (ed.), Department of Sociology, University of Oxford.
[25] Christos Gkantsidis and Pablo Rodriguez. Network coding for large scale con-
tent distribution. In Proceedings of IEEE INFOCOM, Miami, FL, March 2005.
[26] V. D. Gligor, S.-W. Luan, and J.N. Pato. On inter-realm authentication in
large distributed systems. In Proceedings of the IEEE Conference on Security
and Privacy, pages 2–17, 1992.
[27] Virgil D. Gligor. Security of emergent properties in ad-hoc networks. In Proceedings of the Security Protocols Workshop, Sidney Sussex College, Cambridge, UK, April 2004.
[28] Gnutella. http://www.gnutella.com.
[29] Tyrone Grandison and Morris Sloman. A survey of trust in internet applica-
tions. IEEE Communications Surveys and Tutorials, 3(4):2–16, 2000.
[30] IEEE Std 802.11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications, 1999.
[31] Tao Jiang and John S. Baras. Autonomous trust establishment. In Proceed-
ings of 2nd International Network Optimization Conference (INOC), Lisbon,
Portugal, February 2005.
[32] Tao Jiang and John S. Baras. Trust evaluation in anarchy: A case study on
autonomous networks. In Proceedings of 2006 INFOCOM, Barcelona, Spain,
April 2006.
[33] Tao Jiang and John S. Baras. Fundamental tradeo?s and constrained coali-
tional games in autonomic wireless networks. In Proceedings of 5th Intl. Sympo-
sium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks
(WiOpt), April 2007.
[34] Tao Jiang and John S. Baras. Trust document distribution in MANETs. Submitted to MILCOM, 2007.
[35] R. Johari, S. Mannor, and J.N. Tsitsiklis. A contract-based model for directed
network formation. Games and Economic Behavior, 2006.
[36] C.M. Jonker and J. Treur. Formal analysis of models for the dynamics of trust based on experiences. In Proceedings of the 9th European Workshop on Modelling Autonomous Agents in a Multi-Agent World: Multi-Agent System Engineering, pages 221-231, 1999.
[37] A. Jøsang and R. Ismail. The beta reputation system. In Proceedings of the
15th Bled Conference on Electronic Commerce, Bled, Slovenia, 2002.
[38] Audun Jøsang, Roslan Ismail, and Colin Boyd. A survey of trust and reputation
systems for online service provision. Decision Support Systems, 43(2):618–644,
March 2007.
[39] Audun Jøsang, Claudia Keser, and Theo Dimitrakos. Can we manage trust?
In Proceedings of the Third International Conference on Trust Management
(iTrust’05), pages 93–107, 2005.
[40] M. Kinateder and K. Rothermel. Architecture and algorithms for a distributed reputation system. In P. Nixon and S. Terzis, editors, Proceedings of the First International Conference on Trust Management, volume 2692 of LNCS, pages 1-16, Crete, Greece, 2003. Springer-Verlag.
[41] Ross Kindermann and J. Laurie Snell. Markov Random Fields and Their Ap-
plications, volume 1 of Contemporary mathematics. American Mathematical
Society, 1980.
[42] Frank R. Kschischang, Brendan J. Frey, and Hans-Andrea Loeliger. Factor
graphs and the sum-product algorithm. IEEE Transactions on Information
Theory, 47(2):498–519, February 2001.
[43] Butler Lampson, Martín Abadi, Michael Burrows, and Edward Wobber. Authentication in distributed systems: Theory and practice. In Proceedings of the 13th ACM Symposium on Operating Systems Principles, pages 265-310, October 1991.
[44] Xiaojun Lin and Ness B. Shroff. Joint rate control and scheduling in multihop wireless networks. In Proceedings of the 43rd IEEE Conference on Decision and Control, pages 1484-1489, Paradise Island, Bahamas, December 2004.
[45] Xiaojun Lin, Ness B. Shroff, and R. Srikant. A tutorial on cross-layer optimization in wireless networks. IEEE Journal on Selected Areas in Communications, 24(8):1452-1463, August 2006.
[46] Hans-Andrea Loeliger. An introduction to factor graphs. IEEE Signal Process-
ing Magazine, 21(1):28–41, January 2004.
[47] IETF Mobile Ad-hoc Networks (MANET) working group.
http://www.ietf.org/html.charters/manet-charter.html.
[48] Sergio Marti, T. J. Giuli, Kevin Lai, and Mary Baker. Mitigating routing
misbehavior in mobile ad hoc networks. In Proceedings of the 6th Annual In-
ternational Conference on Mobile Computing and Networking, pages 255–265,
Boston, Massachusetts, United States, 2000. ACM Press.
[49] Paolo Massa and Paolo Avesani. Controversial users demand local trust metrics: an experimental study on the epinions.com community. In Proceedings of the 25th AAAI Conference, 2005.
[50] Ueli Maurer. Modelling a public-key infrastructure. In Proceedings of 1996
European Symposium on Research in Computer Security – ESORICS’96, pages
325–350, 1996.
[51] Barbara Misztal. Trust in Modern Societies: The Search for the Bases of Social
Order. Polity Press, 1995.
[52] Roger B. Myerson. Graphs and cooperation in games. Mathematics of Opera-
tions Research, 2:225–229, 1977.
[53] Roger B. Myerson. Game Theory: Analysis of Con?ict. Harvard University
Press, 1991.
[54] Michael J. Neely, Eytan Modiano, and Charles E. Rohrs. Dynamic power
allocation and routing for time varying wireless networks. In Proceedings of
IEEE INFOCOM, volume 1, pages 745–755, April 2003.
[55] Hidetoshi Nishimori. Statistical Physics of Spin Glasses and Information Pro-
cessing: An Introduction. Oxford University Press, 2001.
[56] Martin A. Nowak. Evolutionary Dynamics: Exploring the Equations of Life.
Harvard University Press, 2006.
[57] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT
Press, 1994.
[58] Michael K. Reiter and Stuart G. Stubblebine. Authentication metric analysis
and design. ACM Transactions on Information and System Security, 2(2):138–
158, 1999.
[59] Sini Ruohomaa and Lea Kutvonen. Trust management survey. In P. Herrmann
et al., editor, Proceedings of the 3rd International Conference on Trust Man-
agement (iTrust 2005), volume 3477 of LNCS, pages 77–92. Springer-Verlag,
2005.
[60] J. Sabater. Trust and Reputation for Agent Societies. PhD thesis, Institut d'Investigació en Intel·ligència Artificial, Bellaterra, 2003.
[61] Eugene Seneta. Non-negative matrices and Markov chains. Springer series in
statistics. Springer-Verlag New York Inc., 2nd edition, 1981.
[62] Naum Zuselevich Shor. Minimization Methods for Non-Differentiable Functions. Springer-Verlag, Berlin, 1985.
[63] Marco Slikker and Ann van den Nouweland. Social and Economic Networks in
Cooperative Game Theory, volume 27 of SERIES C: GAME THEORY, MATH-
EMATICAL PROGRAMMING AND OPERATIONS RESEARCH. Kluwer
Academic Publishers, 2001.
[64] V. Srinivasan, P. Nuggehalli, C. F. Chiasserini, and R. R. Rao. Cooperation
in wireless ad hoc networks. In Proceedings of IEEE INFOCOM’03, pages
808–817, San Francisco, March 30 - April 3 2003.
[65] Frank Stajano and Ross J. Anderson. The resurrecting duckling: Security issues
for ad-hoc wireless networks. In Proceedings of the 7th International Workshop
on Security Protocols, pages 172–194, London, UK, 2000. Springer-Verlag.
[66] Jennifer Steiner, Clifford Neuman, and Jeffrey I. Schiller. Kerberos: An authentication service for open network systems. In USENIX Workshop Proceedings, UNIX Security Workshop, pages 191-202, 1988.
[67] Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, and Hari Bal-
akrishnan. Chord: A scalable peer-to-peer lookup service for internet applica-
tions. In Proceedings of the ACM SIGCOMM ’01 Conference, pages 149–160,
San Diego, California, August 2001.
[68] L. Tassiulas. Scheduling and performance limits of networks with constantly varying topology. IEEE Transactions on Information Theory, 43(3):1067-1073, May 1997.
[69] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of ’small-world’
networks. Nature, 393:440–442, 1998.
[70] S.T. Wierzchon. Ising model. Eric Weisstein's World of Physics, http://scienceworld.wolfram.com/physics/IsingModel.html, 2002.
[71] B. Yu and M.P. Singh. An evidential model of distributed reputation management. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 294-301, Bologna, Italy, 2002. ACM Press.
[72] Yongguang Zhang, Wenke Lee, and Yi-an Huang. Intrusion detection tech-
niques for mobile wireless networks. ACM/Kluwer Wireless Networks Journal
(ACM WINET), 9(5), September 2003.
[73] P. Zimmermann. PGP User’s Guide. MIT press, 1994.