The visual encoding of tool-object affordances

The perception of tool-object pairs involves understanding their action relationships (affordances). Here, we sought to evaluate how an observer visually encodes tool-object affordances. Eye movements were recorded as right-handed participants freely viewed static, right-handed, egocentric tool-object images across three contexts (correct, e.g., hammer-nail; incorrect, e.g., hammer-paper; spatial/ambiguous, e.g., hammer-wood) and three grasp types (no hand; functional grasp posture, e.g., grasping the hammer handle; non-functional/manipulative grasp posture, e.g., grasping the hammer head). There were three areas of interest (AOIs): the object (nail), the operant tool-end (hammer head), and the graspable tool-end (hammer handle). Participants passively evaluated whether tool-object pairs were functionally correct or incorrect. Clustering of gaze scanpaths and AOI weightings grouped conditions into three distinct grasp-specific clusters, especially within the correct and spatial tool-object contexts and to a lesser extent within the incorrect context. These grasp-specific gaze scanpath clusters were reasonably robust to the temporal order of gaze scanpaths. Gaze was therefore automatically primed to grasp affordances even though the task required evaluating tool-object context. Participants also focused primarily on the object and the operant tool-end, and sparsely attended to the graspable tool-end, even in images with functional grasp postures. In fact, in the absence of a grasp, the object was foveally weighted the most, indicative of a possible object-oriented action-priming effect wherein the observer may be evaluating how the tool engages the object. Unlike the functional grasp posture, the manipulative grasp posture caused the greatest disruption of this object-oriented priming effect, ostensibly because it does not afford tool-object action: it is a non-functional interaction with the operant tool-end, the end that actually engages the object (e.g., hammer head to nail). The enhanced attention toward the manipulative grasp posture may serve to encode grasp intent. These results shed new light on how an observer gathers action information when evaluating static tool-object scenes and reveal how contextual and grasp-specific affordances directly modulate visuospatial attention.
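The clustering step lends itself to a small illustration. The Python sketch below (NumPy/SciPy) groups per-condition AOI fixation-weight vectors with k-means; the fixation values, condition labels, and the choice of k-means are illustrative assumptions for exposition only, not the paper's data or analysis pipeline (the study clustered gaze scanpaths together with AOI weightings).

    import numpy as np
    from scipy.cluster.vq import kmeans2

    # Rows: the nine viewing conditions (3 tool-object contexts x 3 grasp types).
    # Columns: hypothetical fixation proportions on the three AOIs
    # [object, operant tool-end, graspable tool-end]; values are invented.
    conditions = [
        "correct/no-hand",   "correct/functional",   "correct/manipulative",
        "incorrect/no-hand", "incorrect/functional", "incorrect/manipulative",
        "spatial/no-hand",   "spatial/functional",   "spatial/manipulative",
    ]
    aoi_weights = np.array([
        [0.70, 0.25, 0.05],
        [0.55, 0.35, 0.10],
        [0.40, 0.45, 0.15],
        [0.65, 0.28, 0.07],
        [0.52, 0.36, 0.12],
        [0.42, 0.42, 0.16],
        [0.68, 0.26, 0.06],
        [0.54, 0.34, 0.12],
        [0.41, 0.44, 0.15],
    ])

    # Partition conditions into three clusters. If gaze is primed by grasp
    # affordances, conditions sharing a grasp type should co-cluster
    # regardless of tool-object context.
    centroids, labels = kmeans2(aoi_weights, 3, minit="++", seed=0)
    for cond, lab in zip(conditions, labels):
        print(f"{cond:25s} -> cluster {lab}")

Under these assumed weightings, conditions group by grasp type rather than by tool-object context, mirroring the grasp-specific clustering reported above.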

Publication type: 
Article
Author or Creator: 
Natraj, N.
Pella, Y. M.
Borghi, A. M.
Wheaton, L. A.
Publisher: 
Pergamon Press, New York, United Kingdom
Source: 
Neuroscience 310 (2015): 512–527. doi:10.1016/j.neuroscience.2015.09.060
Date: 
2015
Resource Identifier: 
http://www.cnr.it/prodotto/i/343359
https://dx.doi.org/10.1016/j.neuroscience.2015.09.060
http://www.scopus.com/record/display.url?eid=2-s2.0-84944097519&origin=inward
Language: 
English