Privacy Protection and Trust Models
by Olle Olsson
The SAITS project, a cooperation between SICS, Stockholm University and others, will investigate technical aspects of privacy protection. Trust models are one of the techniques to be evaluated.
We are investigating the problem of how computational components can acquire and use knowledge about the reliability of other components. This problem domain is conceptualised in terms of agents that interact with each other: some agents (consumers) may need services, and other agents (producers) can offer such services. Consumers need to select which producer to enter into a contract with, and they should make this choice with the aim of optimising their accumulated utility, in a short-term or a long-term perspective. In open computational environments, a consumer can only observe the external behaviour of producers, ie, a producer can only be selected on the basis of hard facts about its past performance.
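As a concrete illustration, the consumer-producer setting can be sketched in a few lines of Python. The class names, the pseudo-count prior, and the greedy selection rule below are illustrative assumptions, not part of our model; the point is only that the consumer sees delivered outcomes, never a producer's hidden reliability.

```python
import random

class Producer:
    def __init__(self, name, reliability):
        self.name = name
        self.reliability = reliability  # hidden true quality in [0, 1]

    def deliver(self, rng):
        # The consumer observes only this outcome, not `reliability`.
        return 1.0 if rng.random() < self.reliability else 0.0

class Consumer:
    def __init__(self, producers):
        # Prior of one pseudo-success in two pseudo-trials, so an
        # unknown producer starts with an estimated quality of 0.5.
        self.stats = {p.name: [1.0, 2.0] for p in producers}  # [successes, trials]

    def estimate(self, name):
        s, n = self.stats[name]
        return s / n

    def select(self, producers):
        # Greedy choice: the producer with the best observed record so far.
        return max(producers, key=lambda p: self.estimate(p.name))

    def record(self, name, outcome):
        self.stats[name][0] += outcome
        self.stats[name][1] += 1

rng = random.Random(42)
producers = [Producer("A", 0.9), Producer("B", 0.4)]
consumer = Consumer(producers)
total_utility = 0.0
for _ in range(200):
    p = consumer.select(producers)
    outcome = p.deliver(rng)
    consumer.record(p.name, outcome)
    total_utility += outcome
```

A purely greedy rule like this can lock onto a mediocre producer after an unlucky start; real trust-based methods must trade off exploiting known producers against exploring unknown ones.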
How well does this model match properties of real-world problems? One example is electronic shopping on the Web. We consumers have to select which retailer to use, and often we only encounter these retailers on the Web. Which one should we choose? Experiences, good or bad, may indicate that some retailers should be preferred while others should be avoided. But it is also important to take into account the potential gains we may make: cheap but dubious vs expensive but reliable. Furthermore, we can base our decisions on our own experiences, as well as on what we know about other consumers' experiences of the producers. This scenario is an example of a wider class of applications. Important characteristics of this class are that components can profit from using services provided by 'alien' components, that such alien components have externally observable behaviour, that they may deliver services of a priori unknown quality, and that the quality of a service can be established after delivery.
Computational Models of Trust
A rational approach to trust concerns identification of the theoretical basis of trust. Key elements are incorporated from related disciplines, eg, decision theory and utility theory. As trust is a concept with meaning in a societal context, the theory of social choice offers important scientific underpinnings. Theoretical frameworks such as those mentioned provide alternative models of fundamental concepts, as well as proofs of the limits of what can be achieved.
A core problem is the ontological status of the concept of trust. Agents that communicate trust knowledge must conceptualise trust in compatible ways. To some extent trust can be avoided as an explicit concept, by making agents disseminate information about their experiences in terms of their concrete interactions with other agents. The drawback of this approach is that an agent may thereby disclose private information that could compromise its privacy. By abstracting experiences into statements about trust, an agent can prevent sensitive personal information from becoming publicly accessible. An intermediate solution is to identify 'regions of trust', where the amount of detailed information communicated depends on the proximity of the trust regions to which the sender and the receiver belong.
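The intermediate 'regions of trust' solution might be sketched as follows. The region names, the toy distance metric, and the three abstraction levels are hypothetical choices made for illustration only: the closer the receiver's region, the more raw detail is disclosed.

```python
from statistics import mean

def region_distance(a, b):
    # Toy metric: 0 = same region, 1 = neighbouring, anything else = distant.
    neighbours = {("home", "partner"), ("partner", "home")}
    if a == b:
        return 0
    if (a, b) in neighbours:
        return 1
    return 2

def share_experiences(experiences, sender_region, receiver_region):
    """Abstract raw interaction records according to region proximity.

    experiences: list of (producer, outcome) tuples, outcome in [0, 1].
    """
    d = region_distance(sender_region, receiver_region)
    if d == 0:
        # Same region: the full interaction history is disclosed.
        return {"detail": "full", "records": list(experiences)}
    if d == 1:
        # Neighbouring region: only per-producer averages are disclosed.
        producers = {p for p, _ in experiences}
        return {"detail": "aggregate",
                "records": {p: mean(o for q, o in experiences if q == p)
                            for p in producers}}
    # Distant region: a single coarse trust statement, no private detail.
    overall = mean(o for _, o in experiences)
    return {"detail": "abstract",
            "records": "trusted" if overall >= 0.5 else "distrusted"}

log = [("A", 1.0), ("A", 1.0), ("B", 0.0), ("A", 1.0)]
```

For example, a receiver in the sender's own region gets `log` verbatim, a neighbouring region gets only the averages per producer, and a distant region learns nothing more than an overall trust verdict.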
A practical engineering methodology must take a number of practical issues into account, eg, how to choose between alternative algorithmic methods for making trust-based decisions. As there are infinitely many such algorithms, it is important to understand in what way a specific class of algorithms contributes to the utility of the user. There may be real-world properties that have a critical impact on how well certain algorithms succeed in optimising the utility of the user, and, from an engineering point of view, it is critical to understand how such environmental factors may influence the usefulness of specific trust-based methods.
We have developed a generic model of trust, at the level of generic problem-solving methods. A workbench based on this model has been developed, in which systems built on configurable and parametric trust models can be simulated. Preliminary results have been obtained in terms of sensitivity analysis, eg, how strong or weak the correlation is between utility and some environmental factor.
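The kind of sensitivity analysis mentioned can be illustrated by a minimal simulation: vary one environmental factor across runs and measure its correlation with accumulated utility. The 'noise' factor (the probability that an observed outcome is flipped) and all parameter values below are illustrative assumptions, not outputs of our workbench.

```python
import random

def run_simulation(noise, rounds=500, rng=None):
    rng = rng or random.Random(0)
    reliability = 0.8  # true quality of the producer being used
    utility = 0.0
    for _ in range(rounds):
        delivered = rng.random() < reliability
        # Environmental factor: with probability `noise`, the
        # observed outcome is flipped.
        observed = (not delivered) if rng.random() < noise else delivered
        utility += 1.0 if observed else 0.0
    return utility

def pearson(xs, ys):
    # Pearson correlation coefficient, computed directly.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

noise_levels = [i / 10 for i in range(6)]  # 0.0 .. 0.5
utilities = [run_simulation(z, rng=random.Random(7)) for z in noise_levels]
r = pearson(noise_levels, utilities)
```

Here the correlation comes out strongly negative, since flipping outcomes of a reliable producer mostly destroys utility; a weak correlation would instead suggest that the factor is not critical for this class of method.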
The trust model will be evaluated within a project focussing on privacy in the information society (SAITS). From a social point of view, individual citizens may protect themselves by being co-operative members of their society, informing their peers about their experiences from earlier interactions with others. This 'gossiping' model is fundamentally about establishing a societal knowledge base of trust experiences, and about the use of this base in individual decision making. Important questions are: how far can privacy protection be strengthened through such means; how can 'exchange rates' between different value/preference domains be established; and what are the threats to such societal mechanisms (eg, how can knowledge bases be protected from compromise or manipulation)?
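A toy sketch of the gossiping model, assuming each citizen publishes only abstracted trust scores while raw interaction records stay private; the class names and the simple averaging rule are hypothetical choices for illustration.

```python
class Citizen:
    def __init__(self):
        self._private = {}  # producer -> [successes, trials], never shared

    def experience(self, producer, success):
        s = self._private.setdefault(producer, [0, 0])
        s[0] += 1 if success else 0
        s[1] += 1

    def gossip(self):
        # Only abstracted scores leave the citizen, not raw records.
        return {p: s / n for p, (s, n) in self._private.items()}

def societal_trust(citizens):
    # Merge gossip into a societal knowledge base by simple averaging;
    # a deployed mechanism would also need to weigh sources and resist
    # manipulated reports.
    pooled = {}
    for c in citizens:
        for producer, score in c.gossip().items():
            pooled.setdefault(producer, []).append(score)
    return {p: sum(v) / len(v) for p, v in pooled.items()}
```

The unweighted average also makes the vulnerability concrete: a single citizen reporting fabricated scores shifts the pooled estimate, which is exactly the kind of manipulation threat raised above.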