On Data-Centric Trust Establishment in Ephemeral Ad Hoc Networks
I. INTRODUCTION
In all traditional notions of trust, data trust (e.g., trust in the
identity or access/attribute certificates) was based exclusively
on a priori trust relations established with the network entities
producing these data (e.g., certification authorities, network
nodes) [9], [16], [17]. This was also the case when trust
was derived via fairly lengthy interactions among nodes, as
in reputation systems [4], [8], [18], [27]. Moreover, any new
data trust relationships that needed to be established required
only trust in the entity that produced those data. All trust
establishment logics proposed to date have been based on
entities (e.g., principals such as nodes) making statements
on data [4], [7], [9], [12], [16], [17], [24], [25]. Furthermore,
traditional trust relations generally evolved slowly over time:
This work is partially funded by the EU project SEVECOM
(http://www.sevecom.org).
Virgil Gligor's research was supported in part by the US Army Research
Laboratory and the UK Ministry of Defence under Agreement Number W911NF-06-3-0001 and by the US Army Research Office under Contract W911NF-07-1-0287 at the University of Maryland. The views and conclusions contained
in this document are those of the authors and should not be interpreted as
representing the official policies, either expressed or implied, of the US Army
Research Laboratory, US Army Research Office, the U.S. Government, the
UK Ministry of Defence, or the UK Government.
and traffic jam. If the ice on the road causes a traffic jam,
this becomes the composite event ice on the road and traffic
jam ahead. Each basic event αi is a perceivable event generated
by the environment, the network, or an application running on vehicles.
There may be multiple applications, each having its own set
of relevant events. These sets are overlapping, as their events
belong to the pool of basic events.
We consider V, the set of nodes vk, classified according to
a system-specific set of node types Γ = {γ1, γ2, . . . , γN}.
We define a function τ : V → Γ returning the type of
node vk. Reports are statements by nodes on events, including
related time and geographic coordinates where applicable. For
simplicity, we consider reports on basic events, as reports on
composite events are straightforward. We do not dwell on the
exact method for report generation, as this is specific to the
application.
B. Default Trustworthiness
We define the default trustworthiness of a node vk of type
γn as a real value tn that depends on the attributes related to the
designated type of node vk. For all node types, there exists a
trustworthiness ranking 0 < t1 < t2 < . . . < tN < 1. For
example, some nodes are better protected from attacks, more
closely monitored and frequently reinforced, and, overall,
more adequately equipped, e.g., with reliable components.
As they are less likely to exhibit faulty behavior, they are
considered more trustworthy.
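The type-based ranking can be captured as a simple lookup; a minimal sketch in Python, where the node type names and trust values are illustrative assumptions rather than values from this paper:

```python
# Default trustworthiness t_n per node type: a strictly increasing,
# system-specific ranking 0 < t_1 < ... < t_N < 1.
# Type names and values are illustrative assumptions.
DEFAULT_TRUST = {
    "private_vehicle": 0.5,  # ordinary, loosely monitored nodes
    "public_vehicle": 0.7,   # better maintained and monitored
    "police_vehicle": 0.9,   # closely monitored and well protected
}

def default_trust(node_type: str) -> float:
    """Return the default trustworthiness t_n of a node type."""
    return DEFAULT_TRUST[node_type]
```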
We stress here that the data-centric trust establishment
framework does not aim to replace or amend source authentication, as in reputation systems, but uses it as an input to
the data trust evaluation function. In fact, if a node reputation
system were in place, its output scores could also be used as
input to the data trust function. Hence, data trust builds on the
information provided by source authentication and reputation
systems without trying to supplant them. The choice of the
entity trust establishment system is orthogonal to the scope of
this paper and has been extensively addressed in the literature
(Sec. VI).
C. Event- or Task-Specific Trustworthiness
Nodes in general perform multiple tasks, that is, system-,
node-, and protocol-specific actions. Let Λ be the set of all
relevant system tasks. Then for some nodes v1 and v2 with
types τ(v1) = γ1 and τ(v2) = γ2 and default trustworthiness
rankings t1 < t2, it is possible that v1 is more trustworthy
than v2 with respect to a task λ ∈ Λ.
Reporting data on events is clearly one of the node tasks.
For the sake of simplicity, we talk here about event-specific
trustworthiness, implying that it is actually task-specific
trustworthiness. Nevertheless, the two can be easily distinguished
when necessary, e.g., when tasks include any other protocol-specific
action such as communication.
With the above considerations in mind, we define the event-specific
trustworthiness function f : Γ × Λ → [0, 1]. f has
two arguments: the type τ(vk) of the reporting node vk and
the task λj. f does differentiate among any two or more nodes
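A minimal sketch of f as a lookup over (node type, task) pairs, illustrating the point above that a type with a lower default trustworthiness can still be more trustworthy for a particular task; all type names, task names, and values are hypothetical:

```python
# Event-specific trustworthiness f : Gamma x Lambda -> [0, 1].
# All type names, task names, and values are illustrative assumptions.
DEFAULT_TRUST = {"private_vehicle": 0.5, "police_vehicle": 0.9}

EVENT_TRUST = {
    # A police car is highly trusted when reporting an accident...
    ("police_vehicle", "report_accident"): 0.95,
    ("private_vehicle", "report_accident"): 0.55,
    # ...but a private car inside a jam may observe it better.
    ("police_vehicle", "report_traffic_jam"): 0.60,
    ("private_vehicle", "report_traffic_jam"): 0.75,
}

def f(node_type: str, task: str) -> float:
    """f(tau(v_k), lambda_j); falls back to the default trustworthiness."""
    return EVENT_TRUST.get((node_type, task), DEFAULT_TRUST[node_type])
```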
[Fig. 1. The trust establishment framework: reports on events of type αi from nodes vk carry report contents and node ids; the node type and event type determine the event-specific trust f(τ(vk), λj), and the node id and event type determine the security status s(vk); together these yield weights (trust levels) l(vk, λj) that the decision logic combines into decisions, with weights d_i, on reported events.]
module outputs the event that has the highest combined trust
level, i.e., max_i d_i.
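The selection step can be sketched as follows; the dictionary representation of the combined trust levels d_i is an assumption for illustration:

```python
def decide(combined_trust: dict) -> str:
    """Decision module: return the event alpha_i with the highest
    combined trust level d_i, i.e., argmax_i d_i."""
    return max(combined_trust, key=combined_trust.get)

# Example with three candidate events and their combined trust levels.
d = {"ice_on_road": 0.62, "no_ice": 0.30, "traffic_jam": 0.45}
assert decide(d) == "ice_on_road"
```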
A. Basic Techniques
The following two techniques are used for reference and
serve as a basis of comparison for the remaining three techniques.
1) Majority Voting: In this technique, the majority wins
(e.g., [19]). The combined trust level corresponding to event
αi is defined by:

    d_i = (1/K) Σ_{k=1}^{K} e_i^k                        (1)

where e_i^k = 1 if node vk reports event αi and 0 otherwise,
so that all K reports count equally regardless of the
trustworthiness of their sources.
2) Most Trusted Report: The report with the highest trust
level determines the decision:

    d_i = max_{k ∈ {1,...,K}} F(e_i^k)                   (2)
B. Weighted Voting
As its name implies, Weighted Voting (WV) sums up all
the votes supporting an event, with each vote weighted by the
corresponding trust level, to output the combined trust level:

    d_i = (1/K) Σ_{k=1}^{K} F(e_i^k)                     (3)
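The voting-style techniques can be sketched directly from their definitions; the representation of a report as a (supports_event, trust_level) pair is an assumption, with the Most Trusted Report (MTR) rule taking the maximum trust level among supporting reports:

```python
def majority_voting(reports):
    """Eq. (1): every report counts equally, ignoring trust levels."""
    return sum(1 for supports, _trust in reports if supports) / len(reports)

def most_trusted_report(reports):
    """Eq. (2): the single most trusted supporting report decides."""
    return max((trust for supports, trust in reports if supports), default=0.0)

def weighted_voting(reports):
    """Eq. (3): supporting votes weighted by their trust levels F(e_i^k)."""
    return sum(trust for supports, trust in reports if supports) / len(reports)

# Three reports on event alpha_i: two support it, one contradicts it.
reports = [(True, 0.8), (True, 0.4), (False, 0.9)]
d_mv = majority_voting(reports)       # 2/3: two of three reports agree
d_mtr = most_trusted_report(reports)  # 0.8: strongest supporting report
d_wv = weighted_voting(reports)       # (0.8 + 0.4) / 3, i.e., about 0.4
```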
C. Bayesian Inference
With Bayesian Inference (BI), the combined trust level d_i
is the posterior probability of event αi given the K collected
reports:

    d_i = ( P[αi] Π_{k=1}^{K} P[e_j^k | αi] ) / ( Σ_{h=1}^{I} P[αh] Π_{k=1}^{K} P[e_j^k | αh] )    (4)
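Eq. (4) can be sketched as follows, assuming the per-report likelihoods P[e_j^k | αh] and the priors P[αh] are available as plain lists (the numeric values are illustrative):

```python
import math

def bayesian_inference(priors, likelihoods, i):
    """Posterior d_i of event alpha_i, per Eq. (4).

    priors[h]         -- P[alpha_h] for each of the I candidate events
    likelihoods[k][h] -- P[e_j^k | alpha_h] for report k
    """
    def joint(h):
        return priors[h] * math.prod(lk[h] for lk in likelihoods)
    return joint(i) / sum(joint(h) for h in range(len(priors)))

# Two candidate events with a uniform prior; both reports favor event 0.
priors = [0.5, 0.5]
likelihoods = [[0.8, 0.2], [0.7, 0.3]]
d0 = bayesian_inference(priors, likelihoods, 0)
# d0 = 0.5*0.8*0.7 / (0.5*0.8*0.7 + 0.5*0.2*0.3), about 0.90
```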
D. Dempster-Shafer Theory
With Dempster-Shafer Theory (DST), each report k is
assigned a basic belief mass m_k, and the combined trust level
is obtained by combining the K belief assignments:

    d_i = (m_1 ⊕ m_2 ⊕ . . . ⊕ m_K)(αi)                  (5)

where Dempster's rule for combining two belief assignments
m_1 and m_2 is

    (m_1 ⊕ m_2)(αi) = [ Σ_{q,r : αq ∩ αr = αi} m_1(αq) m_2(αr) ] / [ 1 − Σ_{q,r : αq ∩ αr = ∅} m_1(αq) m_2(αr) ]
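Dempster's rule can be sketched for the common simplified case where each belief assignment puts mass on singleton events plus the whole frame of discernment (written here as the string "THETA"); masses and event names are illustrative:

```python
def dempster_combine(m1, m2, frame="THETA"):
    """Combine two basic belief assignments with Dempster's rule.

    Masses are dicts over singleton hypotheses plus the whole frame
    (the string "THETA"), a common simplification.
    """
    combined, conflict = {}, 0.0
    for q, mq in m1.items():
        for r, mr in m2.items():
            # Intersection for singletons plus frame: THETA is neutral.
            if q == frame:
                inter = r
            elif r == frame or q == r:
                inter = q
            else:
                inter = None  # empty intersection contributes to conflict
            if inter is None:
                conflict += mq * mr
            else:
                combined[inter] = combined.get(inter, 0.0) + mq * mr
    norm = 1.0 - conflict  # normalize away the conflicting mass
    return {k: v / norm for k, v in combined.items()}

# Two reports assigning mass to "ice" with residual uncertainty:
m1 = {"ice": 0.6, "THETA": 0.4}
m2 = {"ice": 0.5, "THETA": 0.5}
m = dempster_combine(m1, m2)
# m["ice"] = 0.6*0.5 + 0.6*0.5 + 0.4*0.5 = 0.8; no conflict in this case.
```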
TABLE I
Simulation scenario parameters. E[F(false)] denotes the average trust level of false reports and E[F(correct)] is the average trust level of correct reports. N is the number of reports. In Scenarios 5 and 6, the value of N is determined by the ns-2 simulations, as further explained in Sec. V-D.

Scenario number | E[F(false)] | E[F(correct)] | N
       1        |     0.6     |      0.8      | 50
       2        |     0.8     |      0.6      | 50
       3        |     0.6     |      0.8      | 10
       4        |     0.2     |      0.4      | 50
      5,6       |     0.6     |      0.8      | 17
[Figure: Performance of the decision logics with respect to the percentage of false reports, prior knowledge, uncertainty, and time. Curves: MTR, WV, BI (prior = 0.5), BI (prior = 0.001), DST; y-axis: probability of attack success; x-axes: percentage of false reports and time (s). (a) Correct reports are more trustworthy than false ones. (b) False reports are more trustworthy than correct ones. (e) Evolution in time; evenly distributed false reports. (f) Evolution in time; false reports received first.]
D. Evolution in Time
E. Discussion
REFERENCES
[1] Standard Specification for Telecommunications and Information Exchange Between Roadside and Vehicle Systems – 5 GHz Band Dedicated Short Range Communications (DSRC) Medium Access Control (MAC) and Physical Layer (PHY) Specifications. ASTM E2213-03, 2003.
[2] IEEE P1609.2 Version 1 – Standard for Wireless Access in Vehicular Environments – Security Services for Applications and Management Messages. In development, 2006.
[3] M. Abdel-Aty, A. Pande, C. Lee, V. Gayah, and C. Dos Santos. Crash risk assessment using intelligent transportation systems data and real-time intervention strategies to improve safety on freeways. Journal of Intelligent Transportation Systems, 11(3):107–120, Jul. 2007.
[4] S. Buchegger and J.-Y. Le Boudec. A robust reputation system for P2P and mobile ad-hoc networks. In Proceedings of P2PEcon '04, 2004.
[5] T. M. Chen and V. Venkataramanan. Dempster-Shafer Theory for intrusion detection in ad hoc networks. IEEE Internet Computing, 9(6):35–41, Nov.–Dec. 2005.
[6] B. Cobb and P. Shenoy. On the plausibility transformation method for translating belief function models to probability models. International Journal of Approximate Reasoning, 41(3):314–330, Apr. 2006.
[7] L. Eschenauer, V. D. Gligor, and J. Baras. On trust establishment in mobile ad hoc networks. In Proceedings of the 10th Int. Security Protocols Workshop, 2002.
[8] S. Ganeriwal and M. Srivastava. Reputation-based framework for high integrity sensor networks. In Proceedings of SASN '04, 2004.
[9] V. Gligor, S. Luan, and J. Pato. On inter-realm authentication in large distributed systems. In Proceedings of the IEEE Symposium on Security and Privacy, 1992.
[10] P. Golle, D. Greene, and J. Staddon. Detecting and correcting malicious data in VANETs. In Proceedings of VANET '04, 2004.
[11] D. L. Hall and J. Llinas. An introduction to multisensor data fusion. Proceedings of the IEEE, 85(1):6–23, Jan. 1997.
[12] T. Jiang and J. S. Baras. Trust evaluation in anarchy: A case study on autonomous networks. In Proceedings of IEEE Infocom '06, 2006.
[13] A. Jøsang. An algebra for assessing trust in certification chains. In Proceedings of NDSS '99, 1999.
[14] W. Kiess, J. Rybicki, and M. Mauve. On the nature of Inter-Vehicle Communication. In Proceedings of WMAN '07, 2007.
[15] L. A. Klein. Sensor Technologies and Data Requirements for ITS Applications. Artech House Publishers, 2001.
[16] R. Kohlas and U. Maurer. Confidence valuation in a public-key infrastructure based on uncertain evidence. In Proceedings of PKC '00, volume 1751 of Lecture Notes in Computer Science, pages 93–112. Springer-Verlag, 2000.
[17] B. Lampson, M. Abadi, M. Burrows, and E. Wobber. Authentication in distributed systems: theory and practice. SIGOPS Oper. Syst. Rev., 25(5):165–182, 1991.
[18] J. Mundinger and J.-Y. Le Boudec. Reputation in self-organized communication systems and beyond. In Proceedings of Inter-Perf '06 (Invited Paper), 2006.
[19] B. Ostermaier, F. Dotzer, and M. Strassberger. Enhancing the security of local danger warnings in VANETs – a simulative analysis of voting schemes. In Proceedings of ARES '07, 2007.
[20] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[21] M. Raya, P. Papadimitratos, and J.-P. Hubaux. Securing vehicular communications. IEEE Wireless Comm. Magazine, Special Issue on Inter-Vehicular Comm., 13(5):8–15, Oct. 2006.
[22] G. Shafer. A Mathematical Theory of Evidence. Princeton University Press, 1976.
[23] C. Siaterlis and B. Maglaris. Towards multisensor data fusion for DoS detection. In Proceedings of SAC '04, 2004.
[24] Y. Sun, W. Yu, Z. Han, and K. J. Ray Liu. Information theoretic framework of trust modeling and evaluation for ad hoc networks. IEEE Journal on Selected Areas in Communications, 24(2):305–317, 2006.
[25] G. Theodorakopoulos and J. S. Baras. On trust models and trust evaluation metrics for ad hoc networks. IEEE Journal on Selected Areas in Communications, 24(2):318–328, Feb. 2006.
[26] Q. Xu, T. Mak, J. Ko, and R. Sengupta. Vehicle-to-vehicle safety messaging in DSRC. In Proceedings of VANET '04, 2004.
[27] C. Zouridaki, B. L. Mark, M. Hejmo, and R. K. Thomas. Robust cooperative trust establishment for MANETs. In Proceedings of SASN '06, 2006.