Relative Trust

From Wikipedia, the free encyclopedia

Relative trust is the trust an agent places in a trustee that depends on the agent's experiences with that trustee in comparison to its experiences with all of the other trustees available.

Ambient systems are increasingly being deployed to support humans effectively, for example an ambient system with a personal agent that monitors the behaviour of a human executing certain complex tasks and gives dedicated support. Such support could include advising the use of a particular information source, system or agent to enable proper execution of the task, or even involving such a system or agent pro-actively. For these personal agents to be accepted and useful, the personal agent should be well aware of the habits and preferences of the human it is supporting. If the human dislikes a particular system or agent, for example, and several more preferred alternatives are available, the personal agent would not be supporting effectively if it advised, or even pro-actively initiated, the disliked option.

An aspect that plays a crucial role in giving such tailored advice is representing the trust levels the human has in the available options. Knowing these trust values allows the personal assistant to reason about them and give the best possible support in accordance with the habits and preferences of the human. Since there is no selection problem when only one way of supporting the human exists, the problem of choosing the right support method arises only when the options are substitutable. A notion of relative trust in these options is therefore more realistic than a separate, independent trust value for each option. For instance, if three systems or agents can contribute to a task and two of them perform badly, whereas the third also performs badly but somewhat better than the others, trust in that third option may still be relatively high, since in the context of the other options it is the best alternative.
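The idea above can be sketched in code. This is a minimal illustrative model, not a model from the literature: each option's experience history is reduced to a balance in [-1, 1], and its relative trust is taken (by assumption) as that balance minus the group average, so a poor option can still come out with positive relative trust when every alternative is worse.

```python
# Illustrative sketch of relative trust: trust in an option depends on its
# experience balance compared with all substitutable alternatives.
# The names and the normalization scheme here are assumptions for the example.

def experience_balance(experiences):
    """Map a list of boolean experiences to a balance in [-1, 1]."""
    if not experiences:
        return 0.0
    positive = sum(1 for e in experiences if e)
    return 2 * positive / len(experiences) - 1

def relative_trust(histories):
    """Compute a relative trust value for each option.

    `histories` maps option name -> list of boolean experiences.
    Relative trust is the option's own balance minus the average
    balance of all options, clipped to [-1, 1].
    """
    balances = {name: experience_balance(h) for name, h in histories.items()}
    mean = sum(balances.values()) / len(balances)
    return {name: max(-1.0, min(1.0, b - mean))
            for name, b in balances.items()}

# Three agents that can all contribute to the task; A and B perform badly,
# and C also performs badly but somewhat better than the others.
histories = {
    "A": [True] * 1 + [False] * 9,   # 10% positive experiences
    "B": [True] * 2 + [False] * 8,   # 20% positive experiences
    "C": [True] * 4 + [False] * 6,   # 40% positive experiences
}
print(relative_trust(histories))
```

In this sketch C's absolute experience balance is negative (-0.2), yet its relative trust is positive, because it is the best of the available alternatives.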

Dynamics of Relative Trust in Different Cultures

The degree of reliability of available information sources may differ strongly between types of societies or cultures. In some societies it may be exceptional for an information source to provide 10% or more false information, whereas in others it is more or less normal that around 50% of the outcomes of information sources are false. If the percentage of positive experiences given by the information agents varies significantly, the human's total relative trust in these information agents may differ as well. Simulations have shown that in every culture, whatever the relative percentage of positive experiences may be (except when all information agents give negative experiences all of the time), the information agent that gives more positive experiences to the human gains more trust. Furthermore, the information agent that gives more positive experiences at least secures neutral trust in the long run, even if its percentage of positive experiences is very low.[1]
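A simulation of these dynamics can be sketched as follows. The update rule, the parameter values and the agent names are assumptions made for this example (the cited simulations may use a different trust model): each agent's absolute trust follows an exponential smoothing of +1/-1 experiences, and relative trust compares it against the group average, as in a "low-reliability culture" where every source is mostly wrong.

```python
import random

def simulate(percent_positive, steps=10000, rate=0.005, seed=0):
    """Simulate absolute trust dynamics for several information agents.

    Trust follows t <- t + rate * (e - t), where e = +1 for a positive
    experience and -1 for a negative one, so absolute trust drifts
    toward 2p - 1 for an agent with positive-experience probability p.
    Rule and parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    trust = {name: 0.0 for name in percent_positive}
    for _ in range(steps):
        for name, p in percent_positive.items():
            e = 1.0 if rng.random() < p else -1.0
            trust[name] += rate * (e - trust[name])
    return trust

def relative_trust(trust):
    """Relative trust: each agent's trust compared to the group average."""
    mean = sum(trust.values()) / len(trust)
    return {name: t - mean for name, t in trust.items()}

# A culture where all information agents are mostly wrong, but agent C
# gives positive experiences somewhat more often than A and B.
absolute = simulate({"A": 0.10, "B": 0.20, "C": 0.40})
relative = relative_trust(absolute)
print(absolute)
print(relative)
```

Under this model the agent with the highest percentage of positive experiences ends up with the highest trust, and its relative trust settles above neutral even though its absolute trust stays negative, matching the qualitative behaviour described above.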
