MAIA (Multifunctional, adaptive and interactive AI system for Acting in multiple contexts)

Abstract: Deteriorated motor abilities due to stroke or accident represent a serious societal challenge for which adequate assistive technologies that are trustworthy, adaptive and interactive, i.e., intelligent, are still missing. MAIA proposes a paradigm shift in which human-centric AI controls prosthetic and assistive devices. MAIA will investigate critical steps towards the rapid development of human-centric control: a radically novel intention decoder, a novel concept for trustworthy human-AI interactions, and new types of datasets. The AI technology will decode human intentions and communicate them to assistive devices and to the users to ensure compliance and develop trust. The multifunctional human-centric AI controller will embed characteristics suitable for integration in robotic arms, wheelchairs and exoskeletons using natural, fast and lean communication methods. MAIA’s approach will be guided by the real needs of patients and caregivers and by beyond state-of-the-art knowledge in the neuro-, cognitive, and social sciences. Potential applications across healthcare, industry, and space exploration will stimulate the development of a European innovation ecosystem and innovative enterprises.

 

Objectives: (1) To design an adaptive, semi-autonomous and bi-directionally interactive AI interface for intelligent actuators, capable of implementing desired actions in a transparent, efficient, and natural way that is acceptable and trustworthy to the user. (2) To decode action intentions from neural signals and behavioural data in various contexts, including stationary and body-motion conditions, using a predictive coding approach. (3) To enhance quality of life and autonomy in patients who have lost motor functions due to stroke, tumor surgery or accident. (4) To demonstrate the socio-technical, organizational, and ethical benefits of AI in healthcare and beyond.

 

Methods: (WP1) Communicate decisions bi-directionally between users and the AI system in a natural, multimodal and adaptive way by formulating a natural, lean bi-directional communication scheme using Augmented Reality. (WP2) Read action intentions by formulating robust, adaptive, model-based decoding methods that leverage neural and behavioural signals with predictive machine learning approaches such as Deep Learning, hierarchical Active Inference, and dynamic Bayesian models (see the sketch below). (WP3) Reliably and quantitatively assess the trust in and acceptance of the AI system in the user’s mind through focus-group interviews. (WP4) Create multidimensional decoding models for reading human neural signals and behavioural intentions by exploiting primate methods and a deep neural network interface with the motor cortex. (WP5) Produce a prototype of a control algorithm for intelligent actuators in ROS environments and validate the methods in real time. Organize an ecosystem and advance the debate on the implications of brain-derived signals in healthcare and in more general contexts.
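
The WP2 decoding work rests on recursive probabilistic inference over candidate action intentions. As a purely illustrative sketch (not MAIA’s actual models, signals or software), the Python snippet below implements a minimal discrete dynamic Bayesian filter that fuses a toy two-dimensional observation (e.g., a gaze feature and an EEG band-power feature) into a posterior belief over a small set of intentions; the class name, intention labels, transition probabilities and feature prototypes are assumptions made for the example only.

```python
# Minimal sketch of a dynamic Bayesian intention decoder (illustrative only).
# Names, intentions and numbers are assumptions, not MAIA's actual models or data.
import numpy as np

class IntentionDecoder:
    """Recursive Bayesian filter over a discrete set of action intentions."""

    def __init__(self, intentions, transition, likelihood_fn):
        self.intentions = intentions                  # e.g. ["reach", "grasp", "rest"]
        self.transition = np.asarray(transition)      # row-stochastic P(intent_t | intent_{t-1})
        self.likelihood_fn = likelihood_fn            # maps observation -> P(obs | intent)
        self.belief = np.full(len(intentions), 1.0 / len(intentions))

    def update(self, observation):
        # Predict step: propagate the belief through the intention dynamics.
        predicted = self.transition.T @ self.belief
        # Correct step: weight by how well each intention explains the observation.
        posterior = predicted * self.likelihood_fn(observation)
        self.belief = posterior / posterior.sum()
        return dict(zip(self.intentions, self.belief))


# Toy likelihood: observation is a 2-D feature vector (e.g. gaze dwell, EEG band power).
def toy_likelihood(obs):
    prototypes = np.array([[0.8, 0.7],    # "reach"
                           [0.3, 0.9],    # "grasp"
                           [0.1, 0.2]])   # "rest"
    return np.exp(-np.sum((prototypes - obs) ** 2, axis=1))

decoder = IntentionDecoder(
    intentions=["reach", "grasp", "rest"],
    transition=[[0.90, 0.08, 0.02],
                [0.05, 0.90, 0.05],
                [0.10, 0.10, 0.80]],
    likelihood_fn=toy_likelihood,
)

for _ in range(5):
    belief = decoder.update(np.array([0.78, 0.71]))  # features near the "reach" prototype
print(belief)  # posterior concentrates on "reach" after repeated consistent observations
```

In the project itself, such a filter would be replaced by the hierarchical Active Inference and deep-learning decoders named above, operating on real neural and behavioural data and feeding the ROS-based actuator control prototype of WP5.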

Project Timeframe: 
01 Jan 2021 to 31 Dec 2024


Partners:
  • WESTFAELISCHE WILHELMS-UNIVERSITAET MUENSTER, Germany
  • FUNDACION TECNALIA RESEARCH & INNOVATION, Spain
  • CARL ZEISS VISION INTERNATIONAL GMBH, Germany
  • CONSIGLIO NAZIONALE DELLE RICERCHE, Italy
  • AZIENDA UNITA' SANITARIA LOCALE DI BOLOGNA, Italy
  • STAM SRL, Italy