Explainable Artificial Intelligence for Decoding and Modulating Behaviorally-Activated Brain Circuits

Presenter:

Michele Ferrante, Ph.D.
Division of Neuroscience and Basic Behavioral Science & Division of Translational Research

Goal:

The goal of this initiative is to solicit applications in the area of eXplainable Artificial Intelligence (XAI) applied to mental health priorities (NIMH Strategic Objective 1, Strategy 1.1: Describe the molecules, cells, and neural circuits associated with complex behaviors; and Strategic Objective 3, Strategy 3.2: Develop ways to tailor interventions to optimize outcomes). Current machine learning approaches focus on classifying and predicting brain and behavioral signals, but their solutions remain uninterpretable. XAI would retain prediction accuracy while endowing these models with explanatory features.

Rationale:

Physiological markers of complex behaviors are difficult to identify because each unit of analysis (e.g., genes, neural circuit activity, behavior) explains only a small part of an individual's pathological predisposition. Integrative data-driven models that dynamically inform invasive and non-invasive brain manipulations would provide more comprehensive explanations of the causal links between brain activity and complex behaviors. These models would seamlessly integrate high-resolution behavioral measures (from cameras, accelerometers, GPS, eye trackers, etc.) with innovative neurotechnologies able to simultaneously record and stimulate brain activity. More importantly, these models could be paired with state-of-the-art human-computer interfaces able to translate the output of the model (e.g., stimulate the brain with a precise spatiotemporal pattern) into understandable and useful explanation dialogues. For example: “In this behavioral task to improve function X, stimulate layer # of this brain region at time point Y (adaptively changing the stimulation protocol as follows […]). The following alternatives were also tested [...] and they were sub-optimal compared to the one proposed for the following reasons [...].” These XAIs would provide: 1) new understanding of the circuit-level determinants of complex behaviors; 2) breakthroughs in multimodal data analytics; and 3) unbiased theories of brain function tested at the individual level.
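
To illustrate, a single cycle of such a closed-loop controller could be sketched as follows. This is a minimal toy example, assuming Python/NumPy; the sensor streams, protocol names, and the linear “benefit” model are all invented for illustration and are not part of this initiative.

import numpy as np

rng = np.random.default_rng(0)

def read_sensors():
    """Toy stand-ins for camera, accelerometer, and eye-tracker features."""
    return rng.normal(size=4), rng.normal(size=3), rng.normal(size=2)

def fuse(camera, accel, eyes):
    """Fuse per-modality features into one behavioral state vector."""
    return np.concatenate([camera, accel, eyes])

# Candidate stimulation protocols; names (region, layer, onset) are invented.
PROTOCOLS = {name: rng.normal(size=9)
             for name in ("PFC_layer5_t50ms", "PFC_layer2_t100ms", "striatum_t50ms")}

def explain(scores, chosen):
    """Render the controller's choice as an explanation dialogue."""
    lines = [f"Stimulate with '{chosen}' (predicted benefit {scores[chosen]:+.2f})."]
    for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if name != chosen:
            lines.append(f"Alternative '{name}' scored {s:+.2f} and was rejected as sub-optimal.")
    return "\n".join(lines)

# One closed-loop iteration: sense behavior -> score protocols -> act and explain.
state = fuse(*read_sensors())
scores = {name: float(state @ w) for name, w in PROTOCOLS.items()}
chosen = max(scores, key=scores.get)
print(explain(scores, chosen))

Each cycle fuses the behavioral sensors into a state estimate, scores every candidate stimulation protocol, applies the best one, and reports why the alternatives were rejected, mirroring the explanation dialogue above.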

Multimodal data fusion has recently been used to predict brain and behavioral activity (from previously recorded neuro-behavioral patterns), to accurately identify biological differences within patient populations, and to perform unbiased diagnostic assessments. Because current deep-learning approaches cannot explain their inferences, it is often unclear why a model makes certain decisions rather than others, or when the algorithm will succeed or fail. XAIs can be created by: 1) integrating data-driven and theory-driven models; 2) labeling features of the model with semantic information (see the sketch below); and 3) learning from complementary models or designing models for the purpose of explanation. This initiative would promote the development of XAIs able to causally explain the link between neural activity and behavior by autonomously performing closed-loop neuromodulation.
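
To make the second route concrete, the following minimal sketch (again assuming Python/NumPy) attaches semantic labels to multimodal features and fits a simple, interpretable surrogate whose weights can be read out as an explanation. The feature names and data are hypothetical stand-ins for the outputs of an opaque model.

import numpy as np

rng = np.random.default_rng(1)

# Semantically labeled multimodal features; the labels are hypothetical.
FEATURES = ["theta_power_hippocampus", "gamma_power_PFC",
            "gaze_dwell_time", "locomotion_speed"]

# Toy data: recorded features X and a behavioral outcome y that stands in
# for the predictions of the black-box model we want to explain.
X = rng.normal(size=(200, len(FEATURES)))
y = X @ np.array([1.5, -0.8, 0.3, 0.0]) + rng.normal(scale=0.1, size=200)

# Interpretable surrogate: ordinary least squares over the labeled features.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Explanation: rank features by the magnitude of their fitted contribution.
for name, coef in sorted(zip(FEATURES, weights), key=lambda p: -abs(p[1])):
    print(f"{name:26s} weight {coef:+.2f}")

In practice the surrogate would be fit to the outputs of a trained deep network rather than to synthetic data, and richer attribution methods could replace the linear fit.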