Bruno Averbeck, Ph.D., Chief
Section on Learning and Decision Making (SLDM)
Dr. Averbeck received a B.S. in Electrical Engineering from the University of Minnesota in 1994. After working three years in industry, he returned to the University of Minnesota and completed a Ph.D. in Neuroscience in 2001 in the lab of Dr. Apostolos Georgopoulos. His thesis was titled "Neural Mechanisms of Copying Geometrical Shapes." Following his thesis work, Dr. Averbeck carried out postdoctoral studies at the University of Rochester with Dr. Daeyeol Lee, during which he studied the neural mechanisms underlying sequential learning, the coding of vocalizations, and population coding. In 2006, Dr. Averbeck moved to University College London as a Senior Lecturer, where he began experiments on the role of frontal-striatal circuits in learning, combining neurophysiology, brain imaging, and patient studies. In 2009, he moved to the NIMH and established the Unit on Learning and Decision Making in the Laboratory of Neuropsychology.
The Section on Learning and Decision Making studies the neural circuitry that underlies reinforcement learning. Reinforcement learning (RL) is the behavioral process of learning to make advantageous choices. While some preferences are innate, many are learned over time: how do we learn what we like and what we want to avoid? The lab uses a combination of experiments in in-vivo model systems, studies in human participants (including patients), and computational modeling. We examine several facets of the learning problem, including learning from gains vs. losses, learning to select rewarding actions vs. learning to select rewarding objects, and the explore-exploit trade-off. The explore-exploit trade-off describes a fundamental problem in learning: when visiting a new city, should you try a different restaurant every night (explore), or sample a few and then return to your favorite (exploit)?
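The explore-exploit trade-off can be illustrated with a standard multi-armed bandit sketch. This is a minimal, generic epsilon-greedy agent, not the specific task or model used in the lab's experiments; the arm probabilities, epsilon, and trial count are illustrative assumptions.

```python
import random

def epsilon_greedy_bandit(arm_probs, epsilon=0.1, n_trials=1000, seed=0):
    """Epsilon-greedy agent on a multi-armed bandit.

    arm_probs: reward probability for each arm (the "restaurants").
    With probability epsilon the agent explores (picks a random arm);
    otherwise it exploits the arm with the highest estimated value.
    """
    rng = random.Random(seed)
    n_arms = len(arm_probs)
    values = [0.0] * n_arms   # estimated value of each arm
    counts = [0] * n_arms     # times each arm was chosen
    total = 0
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1 if rng.random() < arm_probs[arm] else 0
        counts[arm] += 1
        # incremental sample-average estimate of the arm's value
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return values, counts, total

# Hypothetical three-restaurant city: one arm is clearly best (p = 0.8)
values, counts, total = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

Setting epsilon higher buys more exploration at the cost of immediate reward; setting it to zero can leave the agent stuck exploiting an inferior arm it happened to try first.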
Standard models of RL assume that dopamine neurons code reward prediction errors (RPEs; the difference between the size of the reward received and the reward that was expected following a choice). These RPEs are then communicated to the basal ganglia, specifically the striatum, which receives substantial dopamine innervation. This dopamine signal drives learning in frontal-striatal and amygdala-striatal circuits, such that choices that have previously been rewarded lead to larger neural responses in the striatum, and choices that have previously not been rewarded (or have been punished) lead to smaller responses. Thus, the striatal neurons come to represent the values of choices: they signal a high-value choice with higher activity, and this higher activity drives decision processes. These models often mention a potential role for the amygdala without formally incorporating it. They further suggest a general role for the ventral striatum (VS) in representing the values of decisions, whether those are decisions about actions or about objects, and independently of whether the values reflect reward magnitude or reward probability.
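The RPE-driven learning described above can be written as a delta-rule (Rescorla-Wagner-style) update. This is a textbook sketch of that class of models, not the precise model fit in the lab's publications; the learning rate alpha is an assumed free parameter.

```python
def update_value(value, reward, alpha=0.2):
    """One step of RPE-driven value learning (delta rule).

    RPE = reward received minus reward expected. A positive RPE
    (outcome better than expected) increases the stored value;
    a negative RPE decreases it, mirroring phasic dopamine signals.
    alpha is an assumed learning rate controlling update speed.
    """
    rpe = reward - value          # reward prediction error
    return value + alpha * rpe    # value moves toward the outcome

# A choice rewarded on every trial: its value climbs toward 1.0,
# with each step closing a fixed fraction (alpha) of the remaining gap.
v = 0.0
for _ in range(10):
    v = update_value(v, reward=1.0)
```

After n consistently rewarded trials, the value is 1 - (1 - alpha)^n, so learning is fast at first and slows as the prediction error shrinks.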
In contrast to the standard model, we have recently shown that the amygdala has a larger role in RL than the VS (Costa VD et al., Neuron, 2016). In addition, the role of the VS depends strongly on the reward environment: when rewards are predictable, the VS has almost no role in learning, whereas when rewards are less predictable the VS plays a larger role. These data outline a more specific role for the VS in RL than current models attribute to it. Given that the VS has been implicated in depression, particularly adolescent depression, this delineation of the VS's contribution to normal behavior may help inform hypotheses about the mechanisms and circuitry underlying depression.
Motivational neural systems underlying reinforcement learning. Averbeck, B.B. and Costa, V.D. Nature Neuroscience, in press.
Prediction error representation in individuals with generalized anxiety disorder during passive avoidance. White, S.F., Geraci, M., Lewis, E., Leshin, J., Teng, C., Averbeck, B., Meffert, H., Ernst, M., Blair, J.R., Grillon, C. and Blair, K.S. Am J Psychiatry, 174:110-117, 2017. PMID: 27631963.
Using model systems to understand errant plasticity mechanisms in psychiatric disorders. Averbeck, B.B. and Chafee, M.V. Nature Neuroscience, 19:1418-1425, 2016. PMID: 27786180.
Amygdala and ventral striatum make distinct contributions to reinforcement learning. Costa, V.D., Dal Monte, O., Lucas, D.R., Murray, E.A. and Averbeck, B.B. Neuron, 92:505-517, 2016. PMID: 27720488.
The role of frontal cortical and medial temporal lobe brain areas in learning a Bayesian prior belief on reversals. Jang, A.I., Costa, V.D., Rudebeck, P.H., Chudasama, Y., Murray, E.A. and Averbeck, B.B. J Neurosci, 34:11751-11760, 2015. PMID: 26290251.