
Archived Content

The National Institute of Mental Health archives materials that are over 4 years old and no longer being updated. The content on this page is provided for historical reference purposes only and may not reflect current knowledge or information.



Dr. Niko Kriegeskorte: How can computer models help us better understand the brain?


Dr. Niko Kriegeskorte, a computational neuroscientist from the Zuckerman Institute at Columbia University, discusses the challenges of deriving insight into the principles of brain function using fMRI and other neuroimaging methods.


>> PETER BANDETTINI: Welcome to the Brain Experts podcast, where we meet neuroscience experts and talk about their work, the field in general, and where it's going. We hope to provide both education and inspiration. I am Peter Bandettini with the National Institute of Mental Health. Please note that the views expressed by the guests do not reflect NIMH policy. This is episode three with Niko Kriegeskorte. We will discuss, among other things, how might brain imaging help us to truly understand the brain. Let's chat.

Dr. Niko Kriegeskorte is a computational neuroscientist who studies how our brains enable us to see and understand the world around us. Kriegeskorte's lab uses deep neural networks to build computer models that can see and recognize objects in ways that are similar to biological visual systems. Niko received his PhD in cognitive neuroscience from Maastricht University, held postdoctoral positions at the University of Minnesota, as well as at the National Institute of Mental Health here in Bethesda - actually, he was in my group - and was a program leader at the UK Medical Research Council Cognition and Brain Sciences Unit at the University of Cambridge. Niko is currently a professor at Columbia University affiliated with the Departments of Psychology and Neuroscience. He is principal investigator and director of cognitive imaging at the Zuckerman Mind Brain Behavior Institute at Columbia University. Niko is also a co-founder of the Conference on Cognitive Computational Neuroscience, which had its inaugural meeting in September 2017 at Columbia University.

You've pioneered a few techniques, and now you're a professor at the Zuckerman Institute at Columbia. Why don't you tell me a little bit about what motivated you to get started in this area, how your interests have moved you along, and where you're at right now?

>> NIKO KRIEGESKORTE: My initial interest was in computer science and psychology. I read a book by Paul Watzlawick and others called Human Communication that inspired me a little bit, so I started analyzing my parents' relationship. I went to university, and I studied psychology initially, and I did some computer science on the side. The program was actually quite broad, so it allowed me to explore a lot. I looked around a little bit in Germany, and I found a lot of summer schools in cognitive science. I found the Max Planck institutes, started doing internships there, and went to these schools. That got me into cognitive science initially. And studying computer science on the side, I got into machine learning. When I graduated and it came time to choose a lab to do my PhD in, I realized that I wanted to study the brain. This was in a period in the late '90s where brain imaging was still quite new. There was this whole revolution going on. We could measure brain activity in humans noninvasively. That was super exciting to me. I was not interested initially in doing empirical work, but everyone at the Max Planck Institute was measuring brains, and I did some rotations helping with that. And then, in Rainer's lab, people were doing fMRI, and I got drawn into that. For the next dozen years or so, I was working with fMRI and thinking about how to analyze fMRI data. So that was quite a transition for me.

>> PETER BANDETTINI: There is always this tension between collecting this messy data at a very specific spatial and temporal scale and then trying to model the data itself, or trying to actually model the underlying mechanisms behind the data. So that's really hard work. How would you describe your research now?

>> NIKO KRIEGESKORTE: It's interesting how it's evolved from the very beginning of my PhD, where I wanted to study visual representations. I noticed that there was this concept of the population code, that the information was encoded in a distributed fashion across a population of neurons in an area. And at the same time, we were using fMRI, and we had a significantly higher resolution than we did earlier with PET and with early versions of fMRI. So we could measure these fine-scale patterns. But then the dominant mode of analysis was to pass an eight-millimeter smoothing kernel over the data and filter out all the fine-scale structure. So I saw there a kind of tension between theory and experiment, where in theory we think of the representations as these fine-grained patterns, but then in the analysis we treat those patterns as though they were noise, and we look just at the overall activation of entire regions. There seemed to be something wrong with that. And then in terms of the experimental design, similarly, in fMRI studies of vision, people classically grouped stimuli into blocks - for example, a block of faces and a block of places - and then averaged the responses to all these different visual stimuli. In the space of the stimuli, it was a similar thing, where you're averaging across lots of different things that are uniquely represented in the brain. Every image looks entirely different, and the subjective experience is entirely different. So I formed this overall conviction that we need to change these two things: we need to make every stimulus a condition in its own right, and we need a lot of different stimuli at the same time.
And we need to analyze the information in every single voxel. We don't want to lose any of this precious information that we can capture with our measurement technologies, which could be array recordings or, in fMRI, these voxels. I worked a lot in my thesis on pushing the resolution and using high-field fMRI at 7 Tesla to get more detailed measurements. So these were two things that I became quite interested in during my PhD that have stuck with me even today - I'm still building on them, in a sense, right? And then during my postdoc came the idea that we don't only want to decode these patterns of activity and see whether we can distinguish particular stimuli, but we want to explore the geometry of the representation, if you will. We want to look at all the dimensions of the representational space, not just the couple of dimensions that decoders would focus on. With a decoder, you say, "I'm interested in this kind of stimulus information. Let me see if I can decode that." If you generalize that, you could ask, "Well, there are many different properties of the stimuli, so why don't we just fit decoders for all of them?" And the limiting case of that is being interested in the entire geometry of the representation.
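The "geometry of the representation" idea is commonly summarized with a representational dissimilarity matrix (RDM), which compares activity patterns across all pairs of stimuli. Here is a minimal sketch; the function name and the choice of correlation distance are illustrative assumptions, not code from the lab:

```python
import numpy as np

def representational_dissimilarity_matrix(responses):
    """Compute an RDM from an activity-pattern matrix.

    responses: (n_stimuli, n_channels) array - one row per stimulus,
    one column per measurement channel (voxel or neuron).
    Dissimilarity here is correlation distance (1 - Pearson r)."""
    # np.corrcoef treats each row as a variable, so the result
    # is a stimulus-by-stimulus correlation matrix.
    return 1.0 - np.corrcoef(responses)

# Toy example: 4 stimuli measured across 50 voxels.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((4, 50))
rdm = representational_dissimilarity_matrix(patterns)

print(rdm.shape)                        # (4, 4)
print(np.allclose(np.diag(rdm), 0.0))   # a pattern is identical to itself
```

The RDM abstracts away from the particular channels: two systems (a model and a brain, say) can be compared by correlating their RDMs even when their measurement units don't correspond one-to-one.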

>> PETER BANDETTINI: So your work pulled out these patterns that were different and much more informative than just blobs. And then you had this beautiful work comparing behavioral responses or preferences of people - how they categorized objects. You basically had a circular array where you had a bunch of objects, and people sorted them. And the pattern differentiations seemed to match the sorting to some degree. But then, you know, you mentioned computational models. Could you unpack what you mean by that? What do you define as a computational model?

>> NIKO KRIEGESKORTE: Well, "computational model" is a term that's used in many different senses. When I say computational model, I mean primarily what could be called a task-performing brain-computational model. What I mean by that is that the model should be interpretable as a process model of what's going on in the brain at some level of abstraction. So it doesn't need to be biologically detailed. It doesn't need to have spiking neurons. It doesn't need to correspond one-to-one to the neurons in the brain. It doesn't even need to be a neural network model. In psychophysics, there's a long history of more abstract models of information processing that still have the flavor of wanting to capture the information processing that is going on in the brain in a way that captures task performance. The idea is that in order to link cognition to the brain, we need a model that implements a hypothesis about how the required information processing might work.

>> PETER BANDETTINI: Okay. So that leads actually to the next question, which is kind of the prevailing question over all of this. We have the abstraction, the computational model. We have the data. And we have ever more sophisticated data and ever more sophisticated models - everything from linear models to nonlinear models to neural networks and convolutional neural networks. So they're, hopefully, going to meet in the middle. Well, the goal here is to actually understand the brain, and we need to define that. How would you define understanding the brain, first of all? What do you think that would look like? Would we be able to simulate it or emulate it or make one? What would that mean?

>> NIKO KRIEGESKORTE: That's a great question and, I think, a very important question. We need to carve up all of cognition with different tasks. The tasks bring the information processing and the behavior into the lab, make the performance measurable, and allow us to quantify how well a given system - a human brain or an animal brain or a computational model - can perform the task. They also allow us to measure under what conditions task performance suffers - under what conditions the system makes mistakes or takes longer, or things like that. This gives us behavioral characterizations of task performance that we can compare between models and brains. And at the same time, we want these models to relate to the brain itself - not just to produce the behavior and perform the tasks, but also to match the patterns of errors. We're not primarily interested in the engineering objective of doing the task as well as possible; we also want to match the situations in which the human brain fails, for example, or the situations in which a subject might take much longer to recognize an image. So that's the behavioral level.
And then in addition, we want to be able to relate the dynamics inside the model to the dynamics in the brain of a human or animal performing the same task. That's a very interesting methodological challenge - not a purely technical and methodological challenge, but one that gets to the core of some fundamental theoretical questions of neuroscience, including: at what level of detail can we hope to make this comparison and find correspondence? How should the correspondence be defined between the dynamics in the brain and the dynamics in the model? In this way, we want to be able to compare the models to human brains in terms of their behavior and in terms of their internal activity. And we have succeeded when it is no longer possible for cognitive scientists to come up with tasks at which humans outperform our biologically plausible neural models.


>> NIKO KRIEGESKORTE: And so it's going to be a somewhat fun adversarial cooperation, I think, among cognitive scientists - many of whom will be multidisciplinary cognitive scientists, I'm guessing, but some in our community will focus more on the behavioral aspect of it. I see it as their job to design these tasks, program them, and share them with the community - tasks that highlight exactly what neural network models, or cognitive models more generally, can't match in human behavior.


>> NIKO KRIEGESKORTE: At the moment, it's still easy for them to do that. But when we get to the point where this is dwindling, and there's nothing you can come up with anymore for which we can't find a biologically plausible neural model that matches the task performance, that's when we're done.

>> PETER BANDETTINI: To use one quick example: something like playing chess or Go. Computers can outmatch humans. They can outperform cognition, and it's very likely that they use a different algorithm. If you were to model that process computationally, it's an extremely different process, at all spatial and temporal scales. And it's defined by its architecture as well.


>> PETER BANDETTINI: So are we really gaining insight into the brain by designing better AI algorithms that might outperform humans, but for very different reasons? It's not simply about matching performance or outperforming them. They're doing something that exploits the fact that the machine can compute so much faster, even though the algorithm might be much less efficient. There might be many variables in play that don't add up to understanding the human brain.

>> NIKO KRIEGESKORTE: Absolutely. I think this is a great example. Chess and Go - these games actually have a long history in cognitive science as well. Allen Newell, in his essay You Can't Play 20 Questions with Nature and Win, if I remember correctly, proposed chess as a possible test case for cognitive models, right? But if we were done with that task, we wouldn't understand how the brain works. We would just understand how it works for that task, right? And that's why I'm saying we need all the different tasks.


>> NIKO KRIEGESKORTE: Do we understand that now? For chess, definitely not, right? For chess, engineering has surpassed human ability. However, that doesn't mean that we know how humans play chess.


>> NIKO KRIEGESKORTE: Of course, the fact that it is within our reach in terms of engineering helps us a lot in modeling how humans play chess. But it would be a very interesting and cutting-edge project, now that we are at this stage with the engineering, to revisit the question of how humans play chess.


>> NIKO KRIEGESKORTE: And you already brought up all the ways in which the programs that have superhuman performance are different from the human brain, right? That's why it's important. We want to match not only the general level of performance; we also want to match the limits of performance - the kinds of mistakes that humans make - and we want to match the dynamics in the system. So we want to be able to relate the representations in a chess-playing, biologically plausible neural model to the dynamics in the brain while playing chess. And we want to be able to explain how humans acquire the skill - for example, how much do they have to learn, right? A lot of these models - for example, AlphaGo Zero - do a lot of self-play, right?


>> NIKO KRIEGESKORTE: And they do an amount of self-play that is impossible for a human to perform, right? Something like billions of iterations. Understanding the brain is fundamentally different from the AI engineering challenge, although the AI engineering challenge is a key component of all of this, right, which is why AI is a key component of cognitive computational neuroscience.

>> PETER BANDETTINI: So what do you think about neuroimaging data - how limited is it in terms of informing really true models of how the brain is actually working?

>> NIKO KRIEGESKORTE: All our data are limited, right?


>> NIKO KRIEGESKORTE: I think someone said that behavioral data, like reaction times, are like having a single measurement for the entire system. It's like having one voxel. And despite being just a unidimensional measure of the entire system, it gives you a lot of constraints, right, if you combine it with other things. All of cognitive science is based on human behavioral data, right? It's not just reaction times, obviously - you can have much more complex data, but it's usually low-dimensional data, and it's thought to be extremely powerful, in combination with the constraint that your model has to be able to perform the task, for adjudicating between models. And that's a special case, of course, of empirical inference, where you have a little clue and a lot of assumptions, and when you bring them together, you can make surprisingly deep inferences. Maybe you can find out who committed a murder when you have only, like, three different clues, plus some prior knowledge about who it might've been. And then, by elimination, you can find out who it was, right?


>> NIKO KRIEGESKORTE: So that's potentially powerful. But, of course, it's also true that you need strong data. And my belief, more along the lines of your original argument, is that behavioral data alone are definitely not sufficient. I'm interested in the brain because I think that we need the constraints of thinking about how the brain is organized, and of massive multivariate brain-activity data and anatomical data as well, in order to constrain our theoretical and modeling efforts appropriately. But this was just to illustrate that even this sort of overall measure can constrain your theory. And, of course, if you have fMRI data, that's a wonderfully rich source of data. It's tens of thousands of channels, right? So it's always a matter of how you look at it. If you look at it in terms of coverage, it's perfect - you see the whole brain. If you look at it in terms of the number of channels, well, it's tens of thousands of channels. That's also extremely rich as an informational sample, per unit of time, of this dynamic system that you want to understand. But, of course, if you look at it from the other perspective: within each voxel, we have tens of thousands or hundreds of thousands of neurons that we're averaging across. So if we're thinking about brain function at the very detailed level of single neurons, then it's very unsatisfying. The reason why we can still use those data is that we can interpret them in the light of strong theory. For example, when we have a computational model, and the computational model has much more fine-scale structure, we can still predict the coarse-scale dynamics from the fine-scale model. If we get to the point where we have the engineering principles that put us in a position to build something that performs any task we can come up with and that is consistent with all the data we have, that's not to say that our data uniquely identify one solution.
There might be an infinite space - we should expect there to be an infinite space - of equivalent solutions. But there's also a large population of humans, and everyone's brain is different. Still, at the level that I'm interested in, our brains are identical. What I want to understand is not exactly what a particular neuron in your brain does; I want to understand what all those neurons do together. And those in your brain do fundamentally the same thing - they use the same algorithms as those in my brain. And that's what I want to understand.
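The point about predicting coarse-scale dynamics from a fine-scale model can be made concrete: pool a model's unit activations into simulated voxels before comparing them with fMRI measurements. This is only an illustrative toy under my own assumptions (random disjoint-subset pooling; real voxel sampling of neurons is more structured):

```python
import numpy as np

def simulate_voxels(unit_responses, n_voxels, rng=None):
    """Average fine-scale model units into coarse simulated voxels.

    unit_responses: (n_stimuli, n_units) array of model activations.
    Each simulated voxel pools a random subset of units, mimicking
    the fact that one fMRI voxel averages over many thousands of
    neurons. The pooled voxel responses can then be compared with
    measured voxel responses."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_stimuli, n_units = unit_responses.shape
    voxels = np.empty((n_stimuli, n_voxels))
    for v in range(n_voxels):
        # Pick which units this voxel averages over.
        pooled = rng.choice(n_units, size=n_units // n_voxels, replace=False)
        voxels[:, v] = unit_responses[:, pooled].mean(axis=1)
    return voxels

# 10 stimuli, 10,000 model units pooled into 100 simulated voxels.
rng = np.random.default_rng(1)
units = rng.standard_normal((10, 10_000))
vox = simulate_voxels(units, n_voxels=100, rng=rng)
print(vox.shape)  # (10, 100)
```

The key design point is that the comparison with data happens at the measurement's own scale: the fine-scale model is not discarded, but its predictions are coarsened to match what the instrument can actually see.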


>> NIKO KRIEGESKORTE: I want to go back to these scales, right? We went from behavior to the level of neuroimaging, and we said that even though both of these are very coarse-scale, in a sense, when you think at the scale of single neurons, they can still help us constrain theory and adjudicate between different models. But, of course, there are also increasingly high-quality fine-scale data, so increasingly we can also get constraints at that level, and that's essential as well. At the same time, if we take those incredibly rich and beautiful data and analyze them only with data-driven approaches and with insufficient theoretical constraints, we also don't make a lot of progress, right?


>> NIKO KRIEGESKORTE: It's all about combining the strong theoretical perspective with rich data, and linking the two up very well so that we can combine the theoretical constraints and the data in an optimal way, so as to bridge the huge canyon between them.

>> PETER BANDETTINI: Right. The data only have meaning in the context of a model, in some sense. Otherwise, it's just measurements. So you look at yourself now as basically a model builder, in some sense.

>> NIKO KRIEGESKORTE: It's true that I kind of came full circle, from the initial interest in neural networks and machine learning in the '90s, through this empirical science, and now we're doing a lot of modeling again.

>> PETER BANDETTINI: How do you see that actually ratcheting forward? I mean, every single model is falsifiable. It's all testable. But the data are maybe not good enough to falsify the model.

>> NIKO KRIEGESKORTE: Our data do tend to be limited, but they're often good enough to eliminate our models. We compute a number that characterizes how well a model explains the data. And then we also compute what we call a noise ceiling, which is the upper bound on the performance of the true model, if it were the true model. Given the noise in the data, we expect the true model to have some level of performance, right? We also expect that a lot of untrue models will have that same level of performance. But the noise ceiling helps us eliminate models that fall short, even given the limitations of our data.


>> NIKO KRIEGESKORTE: This is how our data can drive progress.
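The noise-ceiling logic can be sketched roughly as follows; this is a simplified illustration of the general idea, not the lab's exact estimator, and the choice of RDM correlation as the performance measure is an assumption:

```python
import numpy as np

def noise_ceiling(subject_rdms):
    """Estimate bounds on the performance achievable by the true model.

    subject_rdms: (n_subjects, n_pairs) array, one vectorized RDM per
    subject. Performance is Pearson correlation with the data.
    Upper bound: correlate each subject with the group mean, which
    includes that subject and therefore slightly overfits.
    Lower bound: leave-one-out - correlate each subject with the mean
    of the remaining subjects."""
    n = len(subject_rdms)
    grand_mean = subject_rdms.mean(axis=0)
    upper = np.mean([np.corrcoef(s, grand_mean)[0, 1] for s in subject_rdms])
    lower = np.mean([
        np.corrcoef(subject_rdms[i],
                    np.delete(subject_rdms, i, axis=0).mean(axis=0))[0, 1]
        for i in range(n)
    ])
    return lower, upper

# Toy data: 8 subjects sharing one true RDM (45 pairs) plus noise.
rng = np.random.default_rng(2)
true_rdm = rng.random(45)
subjects = true_rdm + 0.3 * rng.standard_normal((8, 45))
lo, hi = noise_ceiling(subjects)
print(lo <= hi)  # the lower bound should not exceed the upper bound
```

A model whose RDM correlation with the data falls below the lower bound can be rejected: even given the noise, the true model would be expected to do better.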

>> PETER BANDETTINI: Okay. All right. That's a path that more people should probably try to understand and embrace. I think it's useful, as opposed to just being in their silo of either collecting data or modeling or trying to make AI. You started this meeting a couple of years ago, and I've been really fascinated with it - I've gone twice - the Cognitive Computational Neuroscience meeting. Could you just talk a little bit about the meeting?

>> NIKO KRIEGESKORTE: Yeah, sure. So this meeting was not my idea. It was the idea of Kendrick Kay and Thomas Naselaris - two very good computational neuroscientists - and they approached me about it. I agreed that it's really a necessary meeting, because all the other meetings that I go to relate to a subset of the fields that need to come together, but don't really put all these fields together and get these communities to interact with each other. So we argued quite a bit about what the name of the meeting should be. The two candidates were computational cognitive neuroscience and cognitive computational neuroscience--


>> NIKO KRIEGESKORTE: --which, in my mind, are totally different. They have nothing to do with each other. I argued strongly in favor of cognitive computational neuroscience because the other just doesn't make any sense.


>> NIKO KRIEGESKORTE: So the reason is, from my perspective, the cognitive level is the top level, and computation is the glue that relates cognition to the brain. That's one reason why "computational" should be in the middle there. Another perspective is that it's cognitive science plus computational neuroscience - so we wanted to keep "computational neuroscience" intact in the second half, right? And the other perspective is that it emphasizes something that's missing in the current scene, and that's the top-down component: it starts with cognition. It starts with the task. It starts with the idea that we want to understand how cognition works, and then we use those top-down constraints to interpret our data, which can be behavioral data or brain data, or can be at a very detailed level as well, right? You can think of it as cognitive science finding roots in neuroscience. Or you could think of it as computational neuroscience - which has always been about understanding components of computation that might be useful in the context of cognition, but hasn't really fully related them to cognition - growing up towards cognition.

>> PETER BANDETTINI: is there anything that you'd like to highlight as, as a recent advancement that you're excited about?

>> NIKO KRIEGESKORTE: There have been huge advances in brain-inspired AI in the last couple of years, and deep neural networks are a big and famous one. I think this is really a revolution, and not just for AI but also for brain science, because it vindicates the old intuition that this intermediate level of brain-inspired computation, which abstracts from a lot of the details of the biology, is already very useful. It gives us a common modeling language that links cognitive science and computational neuroscience and AI. And it gives us technology and software tools that enable us to implement our theories in these task-performing brain-computational models. What that means is that there's really no excuse anymore, in a way, right? When this started a few years ago, when I thought about how I think vision works - and vision is what I'm trying to understand every day - I didn't really have a very good understanding of my own intuitions. That's because, for a dozen years, I had been thinking almost exclusively about how to analyze my brain data. I had been thinking about multivariate data, about modeling the noise in those data, about being statistically efficient in my analyses, and things like that. But I was not thinking on a daily basis about how the brain achieves these amazing things - how it computes. And now that's totally changed. What I think about when I fall asleep is neural networks and computational neural networks, and I like that very much. If someone has a theory and thinks they understand some aspect - for example, visual recognition - there is no excuse for not implementing that in a neural network model and then showing, on the one hand, that it really is capable of performing the task, but also that it predicts behavioral patterns - reaction times and errors - and brain activity data, right?
So this kind of brings to the fore the central challenge: how does the brain's information processing work? It's very exciting that we're meeting that challenge head-on. And in engineering, too, there is massive creativity in terms of architectures for neural networks and in inventing new kinds of tasks that were never before thought to be tasks for machines.


>> NIKO KRIEGESKORTE: Think, for example, of the task of creating an image, right? The task is: give me an image. It's supposed to look like a photo, but it could be anything. This is not the kind of task that, a few years ago, we would have associated with a machine. It's more of a creative task. But with neural networks, engineers are thinking about these kinds of tasks.


>> NIKO KRIEGESKORTE: And, of course, many of them are not thinking about how the brain does this, and that's as it should be. But in brain science and in cognitive science, people are using the same kinds of models to explain how the human mind and the human brain achieve these feats.

>> PETER BANDETTINI: Okay. All right, final question. If someone were either established in the field or just starting out - maybe as a graduate student - are there any pieces of advice that you would give them to help navigate?

>> NIKO KRIEGESKORTE: I guess two things come to mind. The list is long, so take this with a grain of salt. But the two things I think of - the first is: choose good advisors. I chose two great advisors, Rainer Goebel and you.

>> PETER BANDETTINI: Oh, thanks.

>> NIKO KRIEGESKORTE: They enabled me to make my way. They gave me their wisdom, and they gave me the freedom that I needed to explore my ideas and follow my intuitions. And they always supported me. That was, I think, absolutely key. The second thing, maybe, is to trust your intuition about what's interesting. For me, it's happened time and again that I had an intuition that something was somehow deep or interesting or attracted me, and I couldn't always immediately, fully, rationally explain it, or explain it to others. And I also met others who said, "That's a bad idea for an experiment." Some of these things I didn't do, and others I did do. But when I look back at maybe a dozen or so such ideas, usually when I was very excited about something, I'd say today there was a reason for that. In some cases, I realized the idea and later noticed that it was important in ways I hadn't anticipated. In other cases, I was scared off - maybe there was a project meeting, and people had good arguments against it, and then I didn't do it. But in more cases than not, years later, I found other people pursuing it and doing really interesting things. So I think we should trust our intuition and follow what we feel is interesting, where we sense some mystery. That's good.

>> PETER BANDETTINI: I think that's great advice. And I appreciate the compliment regarding the advising. I'd just like to thank you and wish you the best of luck in the future.

>> NIKO KRIEGESKORTE: Thanks a lot.