Original Articles

Using trust for detecting deceitful agents in artificial societies

Pages 825-848 | Published online: 26 Nov 2010
 

Trust is one of the most important concepts guiding decision-making and contracting in human societies. In artificial societies, this concept was neglected until recently. The benevolence assumption built into many multiagent systems can have hazardous consequences when deceitful agents enter open systems. The aim of this paper is to establish a mechanism that helps agents cope with environments inhabited by both selfish and cooperative entities. This is achieved by enabling agents to evaluate their trust in others. A formalization of trust and an algorithm for computing it are presented, so that agents can autonomously deal with deception and identify trustworthy parties in open systems. The approach is twofold: agents observe the behavior of others and thereby collect information for establishing an initial trust model; to adapt quickly to a new or rapidly changing environment, they can also draw on observations made by other agents. The practical relevance of these ideas is demonstrated by a direct mapping from the scenario to electronic commerce.
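
To make the twofold approach concrete, the following is a minimal Python sketch of a trust model that combines an agent's own interaction outcomes with reports received from other agents. The class name, the weighted-average combination rule, and the 0.7/0.3 weighting are illustrative assumptions, not the paper's actual formalization.

    # Illustrative sketch: trust from direct experience plus witness reports.
    # The update and combination rules below are assumptions for illustration.

    from dataclasses import dataclass, field


    @dataclass
    class TrustModel:
        """Tracks trust in other agents from two evidence sources."""
        # Direct experience: agent id -> list of interaction outcomes in [0, 1]
        direct: dict = field(default_factory=dict)
        # Witness reports: agent id -> trust values reported by other agents
        reported: dict = field(default_factory=dict)
        # Weight of own experience vs. third-party reports (assumed value)
        direct_weight: float = 0.7

        def observe(self, agent_id: str, outcome: float) -> None:
            """Record the outcome of a direct interaction (1.0 = fully honest)."""
            self.direct.setdefault(agent_id, []).append(outcome)

        def receive_report(self, agent_id: str, value: float) -> None:
            """Record another agent's reported trust in agent_id."""
            self.reported.setdefault(agent_id, []).append(value)

        def trust(self, agent_id: str, default: float = 0.5) -> float:
            """Combine both evidence sources into a single trust estimate."""
            own = self.direct.get(agent_id)
            others = self.reported.get(agent_id)
            own_avg = sum(own) / len(own) if own else None
            rep_avg = sum(others) / len(others) if others else None
            if own_avg is None and rep_avg is None:
                return default      # no evidence yet: neutral prior
            if own_avg is None:
                return rep_avg      # bootstrap from witness reports alone
            if rep_avg is None:
                return own_avg
            w = self.direct_weight
            return w * own_avg + (1 - w) * rep_avg


    # Example: identify the more trustworthy of two trading partners.
    model = TrustModel()
    model.observe("seller_a", 1.0)        # contract fulfilled
    model.observe("seller_a", 1.0)
    model.observe("seller_b", 0.0)        # deceit: goods never delivered
    model.receive_report("seller_b", 0.2) # a witness also distrusts seller_b
    print(model.trust("seller_a"))        # high trust
    print(model.trust("seller_b"))        # low trust: avoid this partner

Weighting direct experience above witness reports reflects the intuition in the abstract: an agent's own observations are the primary evidence, while reports from others mainly serve to bootstrap trust in new or rapidly changing environments.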
