 Archived Content

The National Institute of Mental Health archives materials that are over 4 years old and no longer being updated. The content on this page is provided for historical reference purposes only and may not reflect current knowledge or information.

Francisco Pereira, Ph.D.: How can machine learning help brain imaging? (NIMH Podcast)

Ask the NIMH Experts podcast - Tech, Tools and the Brain

Summary

NIMH’s Dr. Peter Bandettini interviews Dr. Francisco Pereira about how researchers might address challenges in neuroscience using machine learning approaches.

Transcript

Time: 00:30:53 | Size: 42 MB

Title: Francisco Pereira, Ph.D.: How can machine learning help brain imaging?

Description: How close are we to understanding the brain? Today there are tools to probe into the basic elements of computation at the level of microscopic architecture and the dynamic network properties of the brain. Are these enough? Or is the field awaiting another breakthrough? NIMH’s Dr. Peter Bandettini interviews Dr. Francisco Pereira, director of NIMH’s Machine Learning Team and Functional Magnetic Resonance Imaging Core Facility. Dr. Pereira talks about how researchers might address challenges in neuroscience using machine learning approaches.

>> PETER BANDETTINI: Welcome to the Brain Experts podcast, where we meet neuroscience experts and talk about their work, the field in general, and where it’s going. We hope to provide both education and inspiration. I’m Peter Bandettini with the National Institute of Mental Health, which is part of the National Institutes of Health, Department of Health and Human Services. Views expressed by the guests do not reflect NIMH policy. This is episode two. Francisco Pereira: How can machine learning help brain imaging? Let’s chat.

My guest today is Francisco Pereira. He is head of the Machine Learning Team at the National Institute of Mental Health. Machine learning is an increasingly popular analysis approach by which systems learn to identify subtle, complex, and meaningful patterns in data. Specifically, he applies machine learning to functional MRI data with the goal of decoding brain function, as well as identifying functional biomarkers from large patient population data sets. The ultimate goal is to apply these biomarkers to inform diagnosis and treatment. His PhD is from Carnegie Mellon University, where he wrote one of the first dissertations on machine learning applied to cognitive neuroscience. He then went on to do his postdoc at Princeton University. All right, Francisco Piera. Is that the right way to pronounce your name?

>> FRANCISCO PEREIRA: Pereira.

>> PETER BANDETTINI: [laughter] Okay, Perera. Pereira. I'll keep on working on that. I've known you for long enough that I should know how to pronounce your last name. So okay. Well, thanks for coming on the podcast. So first of all, just to set it up a little bit, aside from the introduction, you're here at the NIH. You're heading up the Machine Learning Team. What made you interested in machine learning and neuroscience?

>> FRANCISCO PEREIRA: I started out studying applied mathematics and computer science. Towards the end of the degree, the topics that I found most interesting were artificial intelligence and machine learning. If we were to pick the textbook definition of machine learning-- That's actually from the '50s, and it's the study of computer programs that learn to do something from data. And the something they could do, in this case, was actually a program that learned how to play checkers. But it could be any sort of prediction that you'd want to make. For instance, you're looking at information about someone that you want to give a loan to, and you predict whether they will default or not. Someone might come up with a rule for what they should take into account, or you could have a program look at millions upon millions of records. So it's in that sense that, yes, there might be connections between pieces of information in a very large data set that would allow you to make a certain prediction. And machine learning methods help you identify what those patterns are.
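
To make the "learning from records" idea concrete, here is a minimal Python sketch. The data, features, and labels below are entirely synthetic and invented for illustration; this is not any real lending data set or the checkers program mentioned above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "loan records": three made-up features per applicant and a
# 0/1 label for "defaulted". Everything here is invented for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                       # e.g., income, debt, tenure
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The program "learns from the records" rather than using a hand-written rule.
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))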

>> PETER BANDETTINI: And also, it lends itself to problems that are messier and more multi-dimensional than humans typically can process. Machine learning sort of pulls out patterns and variables. The field itself obviously has taken off, in general. And it seems that there was a sort of a false start in the late '80s, early '90s in neural networks. And it was both-- The algorithms were not as optimized, and the computational power just wasn't there. And so it seems that it kind of worked, but it sort of petered out a little bit, and then later it took off. So here at the NIH there's the intramural program. It's a collection of-- A number of institutes, specifically NIMH and NINDS, focus on brain imaging. One is neurology. One is mental health. NINDS is the National Institute of Neurological Disorders and Stroke. NIMH is the National Institute of Mental Health. One studies physical problems, like tumors and stroke and things like that. The other one studies more psychiatric problems: anxiety, developmental problems, things like that. So you're the head of this machine learning team in the Functional MRI Core Facility, which is funded by both NINDS and NIMH. So this position and this group are new. What is your role now as far as creating this position? What is your goal in terms of helping the researchers?

>> FRANCISCO PEREIRA: The team, right now, is me as well as two other researchers, Charles Zheng and Patrick McClure, who have somewhat different, orthogonal areas of expertise. So Charles is a statistician. Patrick is an expert on neuroscience and deep learning. And we've been operational only since the beginning of 2018. So, in that time, we've also been trying to see what kinds of needs people had. And our activities are split into three parts as a function of that. So we do a lot of work just in consulting. When people have a particular data analysis problem, and they just want to say-- Well, usually, they say, "I want to use machine learning." And I would say that about a third of the time, they come out of our office saying, "No, I do not want [laughter] to use machine learning." Truth in advertising. But that's usually because we can solve the problem using something else.

>> PETER BANDETTINI: They would say they don't want to use machine learning. It's usually something simpler or--?

>> FRANCISCO PEREIRA: It's simpler or it might be an existing statistical method that already addresses what they want, and it's usually very well packaged and documented by someone. So if there's such a thing, it's actually much easier for them to solve the problem that way. If it has already been done, we don't insist on being the ones implementing the way to do it. We would just want to help you find a way of solving your problem. Beyond that, we have situations where people do want to use machine learning, and they might need some coaching on how to use a particular method or to be directed to a piece of software. Or, in many cases, they already read, figured things out, are trying to do it, and just want to know, "Am I doing this right? Would someone reviewing an article about this work object to these choices or those choices?" And then, beyond that - and we do this more rarely - we might actually get the data and analyze it ourselves, if it's somewhat open-ended: what might work here? We just cannot do this for every single data set that comes through the door because there are only three of us. And then, beyond that, there are situations where there is no known method for solving a problem. And it's a really juicy, interesting problem many people might have. In that case, it actually justifies thinking, "How would we find a way of solving this or analyzing this data set to answer a particular question that a researcher has in a way that's never been done?" And those problems tend to be solved by Charles or Patrick because they want to publish in machine learning venues.

>> PETER BANDETTINI: So it seems that there're several different levels. There's developing machine learning algorithms. There's also developing ways of using machine learning on the data. So you're more focused on sort of figuring out which dimensions of the data, which aspects of the data are most informative, which to use.

>> FRANCISCO PEREIRA: Yes. What I've often, I think, tried to explain to people is that there are many different techniques and methods for doing different things. Say you're looking at some measurements of the brain, and you want to say, "Is this person a patient, or is this person a control, who has no problem but is comparable in age, gender, et cetera?" And we certainly can look at existing data, where we scan someone, and say, "Is this person a patient or a control?" So we have the tool for answering this question. It's called a classifier. But maybe the question that a clinician has is not whether it's a patient or a control. In fact, they know because of other symptoms a person has whether they are a patient. In fact, that's probably what got them to go into the scanner in the first place. Instead, what they want to know is, "What is different between patients and controls?" Especially considering that there are many other things varying, such as age, gender, how long they've had the disease, what medication they're taking. So there are many characteristics that might be different between the group of patients that were scanned and the group of healthy people that were scanned. By using the tool that learns to distinguish patients from controls, we could also help you find out which characteristics are actually reliably different between these two groups. And depending on whether the person doing the research has a hypothesis about what the important, relevant differences would be, then we could help them find it that way. So a lot of what we do is actually match the tools and the questions they can ask to the questions that clinicians or scientists have and want to answer.

>> PETER BANDETTINI: If a person has a psychiatric disorder, they pretty much know they have a psychiatric disorder. It's just like the nuances of how much they'll respond to treatment or, like you said, is there some aspect--? Maybe they could use neuromodulation. So TMS, Transcranial Magnetic Stimulation, where they stimulate the brain in a certain spot based on what variables seem to show up in certain types of patients. Sort of things like that.

>> FRANCISCO PEREIRA: Right. And this might be also a good time to talk about what the data might look like so this is not so abstract. Let's say it's a situation where we have patients who are depressed and healthy people who are not, who are just there as a control group. You might have people do many different things while their brain is being scanned. 

You might also have a different sort of scan that actually looks at how different brain areas affect each other while the brain is functioning. It might be the case that what's really happening in the brain of someone who is depressed is that you see differences in what a certain brain area is doing. But the fact that that particular brain area is active might also mean that it's active because some other area did not deactivate it. And there are models and theories about what these different connections and mechanisms are. And so in order to use things like magnetic stimulation to affect a certain area, you need to know what are these mechanisms. What are the places where you have some purchase by using medication, by using an implant? Parkinson's disease is the one where people actually have implants. So that's a situation where you actually know very well what the mechanism is by which the disease affects a patient. It destroys a connection between certain brain areas. So you can use an implant to change how strongly one area talks to another based on how weakened the connection between them actually is.

>> PETER BANDETTINI: Okay. So the key to using machine learning here is that the brain map - it's usually a color-coded map that's superimposed on anatomy - is basically-- The set of activations that occur with various tasks are too complicated with an individual subject to tease out just by doing your standard statistical test. You basically have to put it into a classifier that looks at the interaction among all the variables.

>> FRANCISCO PEREIRA: Yeah. I think the place where you see that most easily is if you're looking at these scans that measure the connections between different brain areas. If you consider the brain divided into 100 areas, which is actually very coarse, you'd be looking at roughly 100 times 100 divided by two connections. So it's many thousands of them. And if you wanted to do this with the naked eye, you'd be looking at all the people that you've scanned. Every one of them has these thousands of connections, and you want to understand which ones are reliably different between the patients and the healthy controls. It's very hard to do this with the naked eye. But the classifier can do this really, really easily and tell you which ones are most important.
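
As a rough illustration of what Dr. Pereira describes, the sketch below trains a linear classifier on synthetic connectivity vectors, one value per pair of 100 brain regions, and then inspects which connections carried the most weight. The data are simulated, and the specific choices (logistic regression, five-fold cross-validation) are assumptions made for the sketch, not the team's actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 100 regions give roughly 100 * 99 / 2 = 4,950 unique region-to-region
# connections; each subject is one row of (synthetic) connectivity values.
n_regions = 100
n_connections = n_regions * (n_regions - 1) // 2
n_subjects = 200                                     # 100 patients, 100 controls

X = rng.normal(size=(n_subjects, n_connections))     # stand-in connectivity data
y = np.array([1] * 100 + [0] * 100)                  # 1 = patient, 0 = control

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Which connections drove the decision? Inspect the largest absolute weights.
clf.fit(X, y)
top = np.argsort(np.abs(clf.coef_[0]))[::-1][:10]
print("ten most influential connection indices:", top)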

>> PETER BANDETTINI: Right. It's really, really hard, even for a well-trained naked eye. It's hard to classify in that manner as well as a machine learning classifier can.

>> FRANCISCO PEREIRA: Traditionally, radiologists are looking at images of the structure of the brain. 

>> PETER BANDETTINI: Yeah, I was about to bring up radiologists.

>> FRANCISCO PEREIRA: Yeah. So they will have very good expectations for what's normal and abnormal and detect any very slight deviation. That's really what human expertise is like.

>> PETER BANDETTINI: Yeah. If you're a super expert, you're essentially a classifier of this information based on your experience.

>> FRANCISCO PEREIRA: Right. But the key point is that they can look at an image of the brain that might have 100 slices, and each slice has many thousands of pixels on it. And they can say, "This is really not millions of pixels. This is the brain, and the brain is somewhat elongated. It's symmetric. The structures kind of look like this. They usually vary across people in this way, but not in this other way. If they vary this much, there's probably a tumor in there, even if I don't even see it in this particular image I'm looking at." So it's very time consuming to train humans to do this, but it might actually be infeasible for some of the imaging modalities we collect now. So these images with tens of thousands of connections, you might see some broad aspects, but it's harder to be able to make sense of it the way you would out of a structural image. And that's why I think this method has become more important. It's not really a matter of replacing the human. It's rather being able to say, "Here's what, statistically, is more important. Now the human can make sense of this."

>> PETER BANDETTINI: Right. So, in some sense, it would augment a radiologist's expertise. We can collect different types of contrast weightings, like T1-weighting, T2-weighting, and others to highlight gray matter, white matter, and CSF in different ways. And then, usually, it's a radiologist looking at a scan, kind of adjusting the gray scale, and just looking at one dimension of that data … you can have a lot more scans and make these sort of multi-dimensional, synthetic contrasts that might be as accurate, if not more accurate, potentially, in the long run … you leave that to the machine learning algorithms. And then they spit out, like you were saying, some sort of statistical probability of something where they can then look at the images and maybe hone in.

>> FRANCISCO PEREIRA: I think where they could be most helpful-- So this is getting closer to clinical practice. So this is a situation where you want to say, "Yes, we kind of know what changes from one disease to another." Assuming we sort of do, the workflow, I think, more likely is going to be, yes, the radiologist still looks at everything and is a person who is responsible for what comes out, but the machine can help them flag something as unusual depending on details about this patient. And the reason why this is more important is that there might be corners of the population that, for whatever reason, the radiologist hasn't been exposed to before. So in principle, with large enough data sets, you could be able to say, "Yes, there's all these things in this scan that are a bit different from what you're used to looking at because people come from the other side of the world. And in those populations, whatever it is, this characteristic is somewhat different." The system would still be able to flag that, and the radiologists can still make up their mind.

Another way in which you would not interfere would be some situation where you have to do triage. Maybe it's an emergency department, and the radiologist who looks at brain scans of people coming in can only look at so many per hour. And some things actually require more urgent treatment. But you certainly can scan patients however they come in, unconscious or whatever. And the system could actually look at it and say, "Oh, I think this person should be looked at first just because there's something potentially more serious here." So there are all these ways in which you really are not replacing human agency, and I think that's much more likely for practical and legal reasons.

>> PETER BANDETTINI: You have radiologists looking at a scan, and they're usually looking for something. They usually kind of know, "This person had a stroke." And they're looking for a stroked-out region or something like that. And it seems that the use of MRI or other medical imaging for screening has not really been that incredibly successful. And maybe just because of the fact that, by the time it's seen, there are already symptoms. It almost occurs at the same time. So it's not really useful. But you could imagine, potentially, if you really developed a machine learning algorithm that just brute-force scanned for everything, and it didn't use the context of the patient to look for something, it could be that screening-- And it could, maybe, learn to start looking for more and more subtle things. And the screening could be useful in that regard.

>> FRANCISCO PEREIRA: It's entirely feasible with even what we have now, which are large databases of scans collected for many different medical reasons in lots of different places, as long as you have the patient information. So first, you kind of go in the direction of, say, all these various readings you can get out such as all those sets of connections between different parts of the brain. And you could say, "If I see these characteristics--" You could try to first say, "How much of this, of all the variability that I see, is due to characteristics of the patient that are not pathological in any way?" And the reason this is important is, to make it more concrete, you might say that if you see shrinking of this particular area, and you are 30, that's likely pathological. But if you see the shrinking, and you are 80 or 90, that's just the effect of time. And the only way you can obtain expectations such as these is to actually try to build a model of how, as a function of patient characteristics, all the things that you've seen in the images can change or fluctuate.
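
A minimal sketch of this age-adjusted expectation idea, on synthetic numbers: fit how a regional measure varies with age in healthy scans, then ask how far a new scan sits from what is expected at that age. The measure, the numbers, and the simple linear model are illustrative assumptions, not a clinical tool.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic "healthy" scans: a regional volume that shrinks slowly with age,
# plus ordinary person-to-person variation. Units and numbers are invented.
age = rng.uniform(20, 90, size=500)
volume = 10.0 - 0.02 * age + rng.normal(scale=0.4, size=500)

model = LinearRegression().fit(age.reshape(-1, 1), volume)
residual_sd = np.std(volume - model.predict(age.reshape(-1, 1)))

def deviation_z(new_age, new_volume):
    """How far (in SDs) is this scan from the age-adjusted expectation?"""
    expected = model.predict(np.array([[new_age]]))[0]
    return (new_volume - expected) / residual_sd

print(deviation_z(30, 8.5))   # strongly negative: unusual for a 30-year-old
print(deviation_z(85, 8.5))   # near zero: roughly what you expect at 85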

But if you do enough of these and then start throwing in diagnostics, effects of certain medications, and you have enough data, in principle, you could then reverse it. Data sets for doing this are split across many sites. You might also not have exactly the same measures. But it's actually one of the things we are interested in developing algorithms for: to try to build interesting models from data in different places without it ever needing to leave the places where it was collected, and then combine those models together to build more sophisticated models that are almost as good as what you'd have if you had all the data in one place.

>> PETER BANDETTINI: Okay. So you're sort of sampling or taking the data, keeping it where it is, but then informing some central model of what the data look like.

>> FRANCISCO PEREIRA: Right. And this is also important because of privacy concerns, which is one of the reasons why the data sets are not all put together in one location, aside from it taking a lot of resources. It's also something where, if it's data from specific patients in a particular place-- I mean, it can be anonymized. But that's a somewhat time-consuming process. So many data sets haven't been. But you would still be-- It's still possible to do machine learning and build machine learning models from the data inside--

>> PETER BANDETTINI: Keeping it dispersed, yeah.

>> FRANCISCO PEREIRA: --keeping it dispersed, and then bringing those models together into one location.
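
As a generic illustration of this keep-the-data-where-it-is idea, the sketch below fits a model at each simulated site and combines only the fitted parameters centrally, weighted by how much data each site contributed. This is a simple parameter-averaging sketch on synthetic data, not the specific method Dr. Pereira's team is developing, which is not described in detail in the interview.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def local_fit(n_subjects, n_features=50):
    """Simulate one site: fit locally, share only parameters and a count."""
    X = rng.normal(size=(n_subjects, n_features))
    y = (X[:, 0] + rng.normal(scale=0.5, size=n_subjects) > 0).astype(int)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.coef_[0], clf.intercept_[0], n_subjects

site_results = [local_fit(n) for n in (120, 300, 80)]     # three pretend sites

# Central step: combine only the parameters, weighted by each site's data size.
weights = np.array([n for _, _, n in site_results], dtype=float)
weights /= weights.sum()
combined_coef = sum(w * c for w, (c, _, _) in zip(weights, site_results))
combined_intercept = sum(w * b for w, (_, b, _) in zip(weights, site_results))
print(combined_coef[:5], combined_intercept)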

>> PETER BANDETTINI: Okay. And so you're working on that?

>> FRANCISCO PEREIRA: We're actually working on a method for doing this - Patrick McClure and Charles Zheng on my team.

>> PETER BANDETTINI: That's really useful. And it's so cumbersome to sort of have everyone feed the data into one repository. If they want to use the data, they have to take all that data and put it in their center, and you start getting tons and tons and tons of data.

>> FRANCISCO PEREIRA: There are, now, data sets that are engineered from-- They are being collected. So longitudinal studies like ABCD, where, I think, from the ground up, things have been thought through so that there would be a pathway to anonymize the data, to release it so that people could actually leverage it for all the applications they can think of. So, I think, in the future, more data sets will be like that. Collecting things in a way that's open is really, really important. And I'm glad that this is becoming more and more the default because I think, in the limit, we might already have enough data to do many of those things. 

>> PETER BANDETTINI: And then, ultimately, this is sort of like my theme of-- I'm trying to always push fMRI to more clinical relevance. And we already talked about the idea of trying to make biomarkers, but you definitely need to push it beyond just these studies making general statements about the brain.

So I'm going to totally switch gears because there's one other thing I want to talk about, the work that you did under Tom Mitchell and also what you did just before you came here - Can you talk a little bit about that?

>> FRANCISCO PEREIRA: Yeah, and I think-- But, I mean, there's actually a connection, and I'll get to it at the end. It goes back to what we talked about at the beginning, where we wanted to actually make tools that were closer to what a scientist would want to test. So, in this case, is-- Let's say a scientist is interested in what's being represented in the brain as a person reads different words. And they read one word. It could mean many things. You don't really know where it's going. But you read a few more. You read a verb in a sentence, and suddenly, the meaning that whoever wrote the sentence had in mind is very specific. And you build some mental representation of the scenario that the sentence describes. So the question we were interested in is, even if we had a model for what's represented in the brain as a sentence is read, how would we even know it's the right model? And so this is really an attempt to try to make the brain be less of a black box. And so what we try to do is say-- That's the basic idea, is to go from stimulus sentences to models of what the mental representations would be to then functional MRI data of people reading those sentences. And we're hoping to say, "Let's try to build a model that says, 'Given what they're reading, what should their brain activation look like?'"

>> PETER BANDETTINI: And how it varies from voxel to voxel. Voxel is like a three-dimensional pixel. It's like a volume of brain that's about, essentially, 2 millimeters by 2 millimeters by 2 millimeters. Something like that. And so you're chopping it up. You're looking at the fMRI pattern within those voxels.

>> FRANCISCO PEREIRA: Yeah. And, as you read, you see all of this pattern of changes at every location in a grid covering the brain. And let's say that for one sentence we get one pattern. So we try to build models that go from what's being read to what the brain activation looks like. And then we also work in the reverse direction. And this is how we know that we have a good model, is we say, "Now, here's brain imaging data." So if our model that goes from the stimulus to the mental representation to what you record with the fMRI scanner is good, then we should be able to see what-- For new sentences, as we read them, we should be able to predict what the brain activation looks like and see whether it's different from what you actually recorded. So this is one way to-- These are called encoding models.
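
A minimal sketch of an encoding model in this spirit, on entirely synthetic data: map a stand-in sentence representation to a voxel pattern with ridge regression, then check whether the predicted pattern for a held-out sentence resembles the recorded one. The embedding size, voxel count, and noise level are arbitrary assumptions.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: a 300-dimensional "sentence representation" and a
# 5,000-voxel activation pattern per sentence, related by an unknown mapping.
n_sentences, embed_dim, n_voxels = 240, 300, 5000
embeddings = rng.normal(size=(n_sentences, embed_dim))
true_map = rng.normal(size=(embed_dim, n_voxels)) / np.sqrt(embed_dim)
patterns = embeddings @ true_map + rng.normal(scale=0.5, size=(n_sentences, n_voxels))

E_tr, E_te, P_tr, P_te = train_test_split(embeddings, patterns, random_state=0)

# Encoding direction: predict the voxel pattern from the sentence representation.
encoder = Ridge(alpha=1.0).fit(E_tr, P_tr)
predicted = encoder.predict(E_te)

# Does the predicted pattern for a held-out sentence resemble the recorded one?
for i in range(3):
    r = np.corrcoef(predicted[i], P_te[i])[0, 1]
    print(f"held-out sentence {i}: correlation {r:.2f}")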

So this is a nice way of trying to evaluate whether the model's predictions would work for new stimuli, sort of the idea of generalizing to things you haven't seen before. Decoding is the opposite process. It's trying to say-- You start. You go the same way. You build a model that goes from stimulus to brain activation. But now, the way you evaluate it is by going in the reverse direction. So you start with a brain scan of someone reading something. You don't know what it is. And you look at it, and you extract the mental representation, what the mental contents are. And then you go from that to trying to reconstruct the concepts that were in the mind of the person as they were reading the sentence and were scanned, again, on something that we haven't really encountered before. So that's what a brain decoder is.
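
And a matching sketch of the decoding direction, again on synthetic data: learn a map from voxel patterns back to the stimulus representation, then identify each held-out sentence by finding the nearest candidate embedding. The identification score at the end reflects only this toy setup.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Same kind of synthetic setup as above, evaluated in the reverse direction.
n_sentences, embed_dim, n_voxels = 240, 300, 2000
embeddings = rng.normal(size=(n_sentences, embed_dim))
mapping = rng.normal(size=(embed_dim, n_voxels)) / np.sqrt(embed_dim)
patterns = embeddings @ mapping + rng.normal(scale=0.5, size=(n_sentences, n_voxels))

train, test = np.arange(200), np.arange(200, 240)

# Decoding direction: map brain patterns back to the stimulus representation.
decoder = Ridge(alpha=10.0).fit(patterns[train], embeddings[train])
decoded = decoder.predict(patterns[test])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Identify each held-out sentence by the nearest candidate embedding.
correct = sum(
    int(np.argmax([cosine(decoded[i], embeddings[test][j]) for j in range(len(test))]) == i)
    for i in range(len(test))
)
print("top-1 identification among held-out sentences:", correct, "/", len(test))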

>> PETER BANDETTINI: And that was pretty successful.

>> FRANCISCO PEREIRA: I don't like to overstate it [laughter]. So basically, we wanted to test various things at the same time. We wanted to see, if you learn to go in this encoding direction using brain images, not of people reading sentences, but of people reading individual concepts, can you build a brain decoder that still works when it sees imaging data of sentences? And the idea here is that there's an infinite number of possible sentences. Individual concepts, it's not a small number, but it's not infinite, in the sense that I could go to the Oxford English Dictionary, and it would have a very large number of words, but the typical vocabulary is about 30,000 words or thereabouts. Words have multiple meanings, so it's more concepts than that. It's still a huge number. But what we were trying to say was we actually had some idea of how to build a mental representation of meaning such that you don't really need to see that many concepts before you build a brain decoder that would work on almost anything. So we had a theory for which concepts we should build the model from. We had a theory about how you would represent each individual concept using models derived from very large corpora of many millions of text documents. So we're trying to just put all those pieces together. So the work is more a proof of concept that, indeed, you can build effective brain decoders using only a few hundred concepts that will work even when you are applying them to data of people reading sentences.

>> PETER BANDETTINI: Okay. So it not only informs you on the limits of the technique but also-- And in some sense, how-- Right, you have to do some amount of dimensionality reduction. You have to reduce the number of concepts, to cluster them in some sense, and then it becomes more successful. But also, in some ways, it could inform the opposite way, like how the brain actually organizes information. So if you--

>> FRANCISCO PEREIRA: Yes, yes. And other people have looked specifically at that. That's one of the things we want to do with this data set. It's just that in the first paper, we wanted to actually describe the data set, the whole overall idea. But now we actually have an interest in understanding exactly how different aspects of meaning are represented in the brain as you read, rather than just trying to decode. So that was more to show that the information was there and the model we were using was a good one. It goes back to what I was talking about at the beginning, that in the end, you're using this ability to find patterns and relationships between things in a large amount of data just to be able to help people say, "Is this a good model of what's happening in the brain?" And so the ability to find these relations and to predict is not the end goal. The end goal is to answer some question that the researcher has.

>> PETER BANDETTINI: Just in closing, in the short time I've also known you, I've also known that you like to cook [laughter]. You're also a chef or--

>> FRANCISCO PEREIRA: I would not go that far.

>> PETER BANDETTINI: --you're a hobbyist. Okay [laughter]. But is there any other thing that you enjoy?

>> FRANCISCO PEREIRA: My life is quite boring [laughter]. It's very exciting at work. But [not when I'm?] retired [laughter]. So the typical weekend, I might be cooking. I might be going to visit the pandas at the Smithsonian's Zoo, which I live very close to. So it's a privilege to be able to see that on a very frequent basis.

>> PETER BANDETTINI: Yeah. The zoo is free too. So it's all the better.

>> FRANCISCO PEREIRA: It's a fantastic place.

>> PETER BANDETTINI: Yeah. So I'd like to, maybe, just end with this question. If you're giving advice to somebody just starting out in the field-- Maybe they're computer scientists. Maybe they're neuroscientists, but they're just starting out. Or maybe they're potentially interested in getting into this more. What advice would you give them in terms of, maybe, the topic or the general approach to take? And what can I do? Do I go into computer science? Do I go to neuroscience? Is there some program that does both or--?

>> FRANCISCO PEREIRA: There are programs that do both. And the way they are structured tends to vary a bit from place to place. The one I'm most familiar with was from my grad school at Carnegie Mellon, which, together with the University of Pittsburgh, has something called the Center for the Neural Basis of Cognition. You are still admitted through your home department, which might be computer science; might be neuroscience. But everyone is cross-trained at pretty much every level, from studying anatomy to neurophysiology to computational models of neural systems to cognitive neuroscience. And, I think, what this ensures is that, A, you are always completely out of your depth [laughter] on something, which [builds nice?] [inaudible], and more importantly, the people you ask for help are the people who are going to ask you for help in the next course [laughter]. But it also puts you in contact with colleagues who have interesting problems to solve. So, I think, once I was in this, it was pretty obvious that there were way more projects we could do than I had time for.

So, I think, generic advice would be, apply to places that have that sort of joint program because then odds are you will find people that want to collaborate with you.

>> PETER BANDETTINI: Being at the interface of things seems like-- I've often found that that's usually where most of the opportunity is.

>> FRANCISCO PEREIRA: Yes. And kudos to both my advisers for being patient and to CMU for actually being completely on board with us doing that. I just think that it's also a situation where people often expect you to fit in boxes afterwards in terms of academic jobs, what you do. If you really end up between two areas that--

>> PETER BANDETTINI: Each area, each domain requires a lot more expertise to sort of be welcomed as a card-carrying member of that. And so you're sort of in between. 

>> FRANCISCO PEREIRA: Yeah, and this is why I didn't want to discourage people. And I have people who do machine learning asking me, "Are there interesting imaging data sets to look at? What are the real problems that people are facing?" And, at this point, they just want an introduction to someone in their university. So it's very easy to put them in touch, and they will find no problem working together, and vice versa. I also have friends asking, "Can you introduce me to someone in the computer science department?" So I think what-- I've been kind of thinking about ways of doing this, and if there's some way of fostering more contact between the people with methods and the people with the applications, that's probably what we want to push for.

>> PETER BANDETTINI: Yeah. Exactly. Okay. All right, so there's more bridges being built from one side to the other, as opposed to just people trying to be the expert on both in some sense.

>> FRANCISCO PEREIRA: Yeah, which, these days, is really-- I mean, I know a few people that are like that, some of them in academia, but I shudder to think of what they had to do, in terms of effort, to be that capable on two things at once.

>> PETER BANDETTINI: Yeah. It's interesting. It's interesting what a PhD does in some sense because, even though I got my PhD in biophysics and I've been working in neuroscience for the past 25 years, there was something formative about getting a PhD in biophysics, so I still feel more like a biophysicist than a neuroscientist. It's interesting [laughter]. But I'd still like to-- I still feel like I'm becoming more of a neuroscientist every day. But still, you're right. I mean, you have your domain, and you think out from it, but-- It's interesting.

>> FRANCISCO PEREIRA: I also do our system administration because once a computer scientist, always a computer scientist [laughter].

>> PETER BANDETTINI: [laughter] Okay. Well, I think with that, we'll wrap it up. Francisco, thank you for sparing the time to talk, and I'm sure that you'll continue to-- You'll have a lot of extremely interesting projects and good projects to be doing in the future. So thanks.

>> FRANCISCO PEREIRA: Thank you.