Cognitive Science is not a unitary discipline but a cross-disciplinary research
domain. Accordingly, there is no single accepted definition of trust in cognitive
science, and we will draw on quite distinct literatures, from neuroscience to philosophy,
from Artificial Intelligence (AI) and agent theories to psychology and
sociology. Our paradigm is Socio-Cognitive AI, in particular Agent and
Multi-Agent modeling. On the one hand, we use formal modeling of AI architectures
for a clear scientific characterization of cognitive representations and their
processing, and we endow AI Agents with cognitive and social minds. On the
other hand, we use Multi-Agent Systems (MAS) for the experimental simulation of
interaction and of emergent social phenomena.
By arguing for the following claims, we address some of the most controversial issues
in this domain: (a) trust does not involve a single, unitary mental state; (b) trust is
an evaluation that implies a motivational aspect; (c) trust is a way to exploit ignorance;
(d) trust is, and is used as, a signal; (e) trust cannot be reduced to reciprocity; (f) trust
combines rationality and feeling; (g) trust is not only directed at other persons but can
also be applied to instruments, technologies, and so on.
The basic message of this chapter is that "trust" is a complex object of inquiry and
must be treated as such. It thus deserves a non-reductive definition and modeling.
Trust: Perspectives in Cognitive Science
Publication type:
Book chapter
Source:
THE ROUTLEDGE HANDBOOK OF TRUST AND PHILOSOPHY, edited by Judith Simon, pp. 214–228, 2020
Date:
2020
Resource Identifier:
http://www.cnr.it/prodotto/i/423386
Language:
English