Bruno Averbeck, Ph.D.
Dr. Averbeck received a B.S. in Electrical Engineering from the University of Minnesota in 1994. After working for three years in industry, he returned to the University of Minnesota and completed a Ph.D. in Neuroscience in 2001 in the lab of Dr. Apostolos Georgopoulos. His thesis was titled "Neural Mechanisms of Copying Geometrical Shapes." Following his thesis work, Dr. Averbeck carried out postdoctoral studies at the University of Rochester with Dr. Daeyeol Lee, during which he studied the neural mechanisms underlying sequential learning, the coding of vocalizations, and population coding. In 2006 he moved to University College London as a Senior Lecturer, where he began experiments on the role of frontal-striatal circuits in learning, combining neurophysiology, brain imaging, and patient studies. In 2009, Dr. Averbeck moved to the NIMH and established the Unit on Learning and Decision Making in the Laboratory of Neuropsychology.
The Section on Learning and Decision Making studies the neural circuitry that underlies reinforcement learning. Reinforcement learning (RL) is the behavioral process of learning to make advantageous choices. While some preferences are innate, many are learned over time: how do we learn what we like and what we want to avoid? The lab uses a combination of experiments in in vivo model systems and in human participants (including patients), together with computational modeling. We examine several facets of the learning problem, including learning from gains vs. losses, learning to select rewarding actions vs. learning to select rewarding objects, and the explore-exploit trade-off. The explore-exploit trade-off describes a fundamental problem in learning: when visiting a new city, should you try every restaurant (explore), or sample a few and then return to your favorite several times (exploit)?
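The explore-exploit trade-off can be made concrete with a simple choice rule. The sketch below uses an epsilon-greedy policy, one common textbook formalization chosen here for illustration; it is not a model specific to the lab's work, and the option values and epsilon are invented for the example.

```python
import random

def choose(values, epsilon=0.1):
    """Epsilon-greedy choice over estimated option values.

    With probability epsilon, explore a random option; otherwise
    exploit the option with the highest current value estimate.
    """
    if random.random() < epsilon:
        return random.randrange(len(values))                 # explore: any option
    return max(range(len(values)), key=lambda i: values[i])  # exploit: best option

# Illustrative example: three "restaurants" with learned value estimates.
values = [0.2, 0.9, 0.4]
print(choose(values, epsilon=0.0))  # pure exploitation always picks index 1
```

Raising epsilon shifts behavior toward trying every restaurant; lowering it shifts behavior toward returning to the current favorite.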
Standard models of RL assume that dopamine neurons code reward prediction errors (RPEs; the difference between the size of the reward received and the reward that was expected following a choice). These RPEs are then communicated to the basal ganglia, specifically the striatum, because of its substantial dopamine innervation. This dopamine signal drives learning in frontal-striatal and amygdala-striatal circuits, such that choices that have previously been rewarded lead to larger neural responses in the striatum, and choices that have previously not been rewarded (or have been punished) lead to smaller responses. Thus, the striatal neurons come to represent the values of choices. They signal a high-value choice with higher activity, and this higher activity drives decision processes. These models often mention a potential role for the amygdala without formally incorporating it. They further suggest a general role for the ventral striatum (VS) in representing the values of decisions, whether those decisions are about actions or about objects, and independent of whether values reflect reward magnitude or reward probability.
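The RPE-driven value learning described above can be sketched with a simple Rescorla-Wagner-style update, a standard formalization in the RL literature; the learning rate and reward values below are illustrative, not parameters from the lab's studies.

```python
def update_value(value, reward, alpha=0.1):
    """One Rescorla-Wagner / temporal-difference style value update.

    rpe is the reward prediction error: reward received minus reward
    expected. A dopamine-like teaching signal (alpha * rpe) scales
    the change in the stored value estimate.
    """
    rpe = reward - value
    return value + alpha * rpe

# A repeatedly rewarded choice acquires a high value estimate,
# which in the standard model drives larger striatal responses.
v = 0.0
for _ in range(50):
    v = update_value(v, reward=1.0)
print(round(v, 2))  # converges toward 1.0
```

When rewards stop arriving (reward = 0), the same rule drives the value estimate back down, matching the smaller striatal responses described for unrewarded or punished choices.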
In contrast to the standard model, we have recently shown that the amygdala has a larger role in RL than the VS (Costa VD et al., Neuron, 2016). In addition, the role of the VS depends strongly on the reward environment: when rewards are predictable, the VS has almost no role in learning, whereas when rewards are less predictable, the VS plays a larger role. These data outline a more specific role for the VS in RL than is attributed to it by current models. Given that the VS has been implicated in depression, particularly adolescent depression, this delineation of the contribution of the VS to normal behavior may help inform hypotheses about the mechanisms and circuitry underlying depression.
Costa VD, Mitz AR, Averbeck BB (2019). Subcortical Substrates of Explore-Exploit Decisions in Primates. Neuron 103, 533-545.e5. https://doi.org/10.1016/j.neuron.2019.05.017
Averbeck BB, Costa VD (2017). Motivational neural circuits underlying reinforcement learning. Nat Neurosci 20, 505-512. https://doi.org/10.1038/nn.4506
Taswell CA, Costa VD, Murray EA, Averbeck BB (2018). Ventral striatum's role in learning from gains and losses. Proc Natl Acad Sci U S A 115, E12398-E12406. https://doi.org/10.1073/pnas.1809833115
Rothenhoefer KM, Costa VD, Bartolo R, Vicario-Feliciano R, Murray EA, Averbeck BB (2017). Effects of Ventral Striatum Lesions on Stimulus-Based versus Action-Based Reinforcement Learning. J Neurosci 37, 6902-6914. https://doi.org/10.1523/JNEUROSCI.0631-17.2017
Costa VD, Dal Monte O, Lucas DR, Murray EA, Averbeck BB (2016). Amygdala and Ventral Striatum Make Distinct Contributions to Reinforcement Learning. Neuron 92, 505-517. https://doi.org/10.1016/j.neuron.2016.09.025