2022 High Throughput Imaging Characterization of Brain Cell Types & Connectivity-Day 2, Part 2
Transcript
ERIN GRAY: There are a couple of other notes in the chat, if you could possibly respond to those, because I would like to move on to our next talk, which will be presented by Drs. Doug Shepherd and Jayaram Chandrashekar. They moderated a session on optical and non-optical imaging platforms for brain cell atlases in human and other species.
DOUG SHEPHERD: Great, all right, so thank you very much. And let me go back to the first slide. So we had a session that focused mainly on scaling up optical and non-optical microscopy platforms, with the idea that we're going to have samples that have been prepared by some community consensus. Jayaram and I moderated it, and we had a number of panelists and a number of notetakers, so thank you to everyone for their input on the topics. What we kept coming back around to, as some of the previous sessions have also discussed, is that the problem definition and the sample preparation choices are key. Knowing what the goal is, is going to be critical for understanding what the imaging platform design should be. The goals could be that we want to survey an entire human brain with limited cell-type information, or that we want to survey slabs from multiple human brains with many cell types. The problem is that the existing methods we have, and where people are pushing the research at the moment, are well suited to either, say, limited markers, so lower content over large areas, or higher content over small areas, such as thin slices. Currently, bridging that gap is not impossible, but it is difficult, because it may involve storing petabytes of data or trying to keep samples stable for very long periods of time. Furthermore, a lot of work has been done, especially in the mouse, for example, on benchmarking different modalities and different approaches on the same area that's very well understood, so you have either the cell-type distributions you expect or some morphological features that should be the same. That kind of benchmarking standard for human brain tissue is lacking at the moment.
So those are kind of the kickoff things that we discussed. Some of the panelists gave brief presentations, a couple of minutes each, on different approaches they've used, and those focused on label-free methods, light sheet microscopy, and also survey methods that are a bit lower resolution. Then the discussion turned to how to either increase throughput and resolution or increase content. We saw the presentation yesterday from Adam Glaser, where he was discussing using cameras and optics from the electronics metrology community, and there's also the astronomy community that we can learn from as well. The question is that we're a very small market compared to, say, the semiconductor industry. So if we want apochromatic objectives with very flat fields of view that match some of the very large survey instruments they have, how do we convince them, as a community, to build things for what is, to them, a small market? For example, if we want CMOS cameras that have hundreds of megapixels but with controllable shutters, how do we interact with these companies such that they're willing to build small-market solutions? That led to an interesting discussion of whether we should be working our hardest to adopt what exists already, or whether we should be trying to negotiate as a community to get some things built that may be useful for all of us. And then, at the same time, just because you can increase your throughput doesn't mean that you're increasing the content per measurement. There are a lot of ways to increase the content you get from any given number of photons from your sample. You can think about spectral multiplexing. You can think about switching modalities and using something like Raman tags, so either introducing Raman labels or using the intrinsic vibrational modes of the sample. We can think about label-free imaging, so we have things like tensor imaging or refractive index imaging, ideas that are based on the intrinsic properties of the samples and a bit of computational work.
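As one concrete illustration of the spectral multiplexing idea mentioned above, the sketch below shows linear unmixing of a multichannel image into per-fluorophore abundance maps by least squares. All of the array names, shapes, and reference spectra are invented for illustration; this is not any specific instrument's pipeline.

```python
# Minimal linear spectral unmixing sketch with made-up, illustrative data.
# Given reference emission spectra for each fluorophore and a multichannel
# image, solve a least-squares problem per pixel to recover abundances.
import numpy as np

n_channels, n_fluors = 16, 5          # hypothetical detection channels / labels
height, width = 256, 256              # hypothetical field of view

rng = np.random.default_rng(0)
# Columns of A are the (assumed known) reference spectra, one per fluorophore.
A = np.abs(rng.normal(size=(n_channels, n_fluors)))
A /= A.sum(axis=0, keepdims=True)     # normalize each spectrum

# Simulated multichannel image, flattened to channels x pixels.
truth = np.abs(rng.normal(size=(n_fluors, height * width)))
Y = A @ truth + 0.01 * rng.normal(size=(n_channels, height * width))

# Least-squares unmixing for all pixels at once; clip negatives for display.
abundances, *_ = np.linalg.lstsq(A, Y, rcond=None)
abundances = np.clip(abundances, 0, None).reshape(n_fluors, height, width)
print(abundances.shape)  # (5, 256, 256): one abundance map per fluorophore
```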
The question there is how you maintain throughput. Many spectral methods, with traditional approaches at the moment, require point scanning, and if you use a camera, the question is how you efficiently do, say, Raman detection with a camera. Many label-free imaging approaches require multiple exposures of the same field of view, so how do you maintain throughput for those approaches? There are inherent signal-to-noise trade-offs, and some of these methods handle them better than others, so maybe part of the answer just comes from using industrial cameras to go fast. If you can solve that, so you have a much larger field of view and high content, then you can think about imaging very large samples, but then you have to think about sample mounting and stability. One point that was brought up was that if we are using high-resolution methods that require the samples to be mounted under coverslips or in a flow cell, and you have a huge sample, how do you maintain the structural integrity of such a large, thin coverslip over time? For example, if one wanted to do fluid exchange for some sort of multiplexing method, you would have the sample in a flow cell with a coverslip, with the objective either in contact with it or just below it, and any flow may flex that coverslip, leading to problems. So as you scale to these very large samples, there are sample mounting concerns that need to be addressed as well.
So if you can solve all of that, then the question is, how do we standardize within and across different brain samples? I'll start with the second point here. The astronomy community has actually done a great job with this; the James Webb Telescope was brought up as part of this discussion, but it was rightly pointed out that the exception there is they were all looking at the same thing, we all share the sky, whereas in this project we don't all share the same samples. So building one common platform is a bit difficult because we don't have quite the same paradigm of shared samples. That brings us back to: should we take a parallelized approach where we try to build similar instruments across institutes? Do we need to standardize some sort of sample prep across them? My question is, is this even possible given the way the community is structured at the moment? That is a much larger question that needs to be addressed. The other way one could take it, again the strategy that's been used more in the mouse, is to have a standard brain region that you evaluate morphology or cell types or molecular content against, and make that part of, I'll use the term, "bake-offs," where you compare a lot of different methods. I would say that this hasn't really led to standardization across the field before. So if we are hoping to do this, it would be important to think about what can be done differently if we go down that road again, because these types of efforts are a lot of work, they're very expensive, and if the end result is not any sort of consensus, then I question whether it was really a useful exercise. I'm happy to talk more with people about that. The last bit that we addressed as part of the panel was: if we do have these different microscopes, at different resolutions and different content, looking at brain samples, what do we do with this massive, petabyte-level multimodal data? We have to think about this within one brain. Let's say we did cut it into five-millimeter slabs; how do we register across those, or how do we take one brain slab imaged on microscope A and merge that with microscope B? And then across many brains, how do we come to some common coordinate system? We need some unified model to place the data against. So we have to bridge modalities.
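To illustrate the kind of cross-microscope registration being discussed, here is a minimal sketch that estimates the translational offset between two downsampled overview images of the same slab acquired on different instruments, using phase correlation. The arrays and the simulated offset are assumptions for illustration; a real pipeline would also have to handle rotation, scale, and nonrigid deformation.

```python
# Minimal sketch: coarse alignment of the same slab imaged on two microscopes.
# `overview_a` and `overview_b` stand in for heavily downsampled whole-slab
# overview images covering roughly the same field of view.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
overview_a = rng.random((512, 512))                    # stand-in for microscope A
overview_b = nd_shift(overview_a, shift=(12.0, -7.5))  # microscope B: offset copy

# Estimate the translation that registers B onto A (subpixel via upsampling).
estimated_shift, error, _ = phase_cross_correlation(
    overview_a, overview_b, upsample_factor=10
)
print("estimated (row, col) shift:", estimated_shift)

# Resample B into A's coordinate frame using the estimated shift.
overview_b_aligned = nd_shift(overview_b, shift=estimated_shift)
```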
There is this opportunity, though, to image one sample on multiple scopes if the sample prep is thought through appropriately, so that the more nondestructive methods can be done first, and then, say, if you have to tissue clear and then label, maybe you do that last. And then I would say there's an additional opportunity here to leverage computational imaging and inference, where you can attempt to impute features, or take high-content datasets whose information can be shared across multiple frameworks, so that those can be integrated into one gigantic dataset. So those are the main conversations that we had. I don't know, Jayaram, if there is anything you want to add to that.
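As one hedged illustration of the imputation idea mentioned above, the sketch below trains a simple regression model to predict a feature that was only measured in a high-content dataset from channels shared with a lower-content dataset. The data and variable names are made up for illustration; this is not a description of any specific BICAN pipeline.

```python
# Illustrative imputation sketch: predict an unmeasured, high-content feature
# from channels that are shared between two hypothetical datasets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_cells = 5000

# Shared, "low-content" channels measured on both platforms (e.g., a few stains).
shared = rng.normal(size=(n_cells, 4))
# A high-content feature measured only on platform A (simulated relationship).
high_content = shared @ np.array([1.5, -0.8, 0.3, 0.0]) + 0.2 * rng.normal(size=n_cells)

X_train, X_test, y_train, y_test = train_test_split(
    shared, high_content, test_size=0.25, random_state=0
)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# R^2 on held-out cells indicates how much of the high-content signal the
# shared channels can recover; imputation is only as good as this score.
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```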
JAYARAM CHANDRASHEKAR: Thank you. Yeah, I think you've covered all the points we discussed. And basically, for the two scenarios that Doug laid out, we do have the microscopy solutions to tackle high content in small samples and slightly lower content in largish samples.
DOUG SHEPHERD: Great. So thank you.
ERIN GRAY: Thank you very much for that nice presentation. I'm not seeing any questions in the chat just yet. But I'm seeing, let's see, "A benefit of imputation is that new analytes can be detected computationally."
DOUG SHEPHERD: So, my—
Go ahead. Sorry, I heard someone else talking.
DAVID KLEINFELD: Yeah. Hi, it's David Kleinfeld. I know Jayaram well, obviously. I mean, you raised a couple of things, but it seems that a kind of institute with a little bit of freedom to play around could push some simple modern tools into things like the proper cassette to hold the sample, right? I mean, you're worried about cover glass, but telephones use Gorilla Glass, as an example. There are a lot of little things out there with microfluidics and with new materials that haven't quite made their way into these sort of prosaic issues of sample holding and sample maintenance. Maybe this is a good time to introduce these more modern glasses and modern fluidic chambers. They're not terribly pricey, and if they were worked out at a larger institute, Allen or whatever, they might become more of a standard in the field. I mean, these are just appropriating materials from the electronics industry, in as much as Jayaram appropriated a 72-doctor scanning lens, yeah.
JAYARAM CHANDRASHEKAR: That's a very good point. I think the point to put out there is that this is a challenge, but it is certainly something that can be tackled. And similarly with going the expansion route: if you start expanding large samples, they are floppier, yep. But these are tractable problems; it's more engineering, as you point out, and it can be done.
ERIN GRAY: Dr. Wu, would you like to ask a question?
ZHUHAO WU: I just want to— I mean, we just heard this question of what sample to use to standardize the testing and imaging, I mean, to benchmark the whole process. We just heard that for the human brain it is very challenging, since it's so large, and different human brains may have so much variability in terms of age, preparation, and the region that's going to be taken out. A point that was raised, actually also by Peter So in our session, is: why not use a genetic model like a common mouse brain, since we can focus on inbred mouse lines, which humans are not? They can be raised under a controlled environment, and we can collect the tissue with either a standard procedure or a preferred procedure. I just wanted to raise this point.
DOUG SHEPHERD: So I think some of those efforts are largely underway for certain types of samples. I think there are scaling issues when you go to these extremely large samples. Using smaller organs is going to give us an idea about how some of the standardization can be done, but imaging something this large is a different effort, and doing it in a reliable, reproducible way will take some engineering work and, as David and other people brought up, is going to take adopting methods from other fields, or getting very stable stages for scanning, and things like this. So I think there are some things that can be standardized with small organs, but there are going to be other things that can only be done with very large samples, and we have to think about what that looks like.
ERIN GRAY: Thank you. We do have one more minute if there's another burning question. Otherwise, I'm hearing none. Thank you again; it was a very nice report back. And we will move on to the next talk, in which we'll hear from Drs. Bruce Fischl and Michael Hawrylycz, who moderated a session on common coordinate systems and probabilistic brain cell reference atlases.
MICHAEL HAWRYLYCZ: Can you see my slides?
ERIN GRAY: Put them in presenter mode. Okay, perfect. You're good to go.
MICHAEL HAWRYLYCZ: Yeah, I'm going to just talk through a few select slides from our presentation and then some summary points. Bruce Fischl, who was my co-moderator, I hope you'll feel free to chime in on any additional points that may not be here. These were our presenters, in addition to Bruce and myself: Paul Wang, Michael Miller, Jiangyang Zhang, Lydia Ng, David Van Essen, and Yongsoo Kim. The basic questions that we tackled were these four, in addition to the summary recommendations: what imaging data are needed to build a cellular atlas; issues across scale, registration, and technology; what does a Common Coordinate Framework look like; and what areas of science will benefit? These are evidently very broad areas, not possible by any means to address fully, and they've been discussed in innumerable other contexts, but I think there's a lot here. Basically, what we're after here— I've cherry-picked some slides from different presentations to make our summary. In essence, what we would like is a kind of living map of the brain across time and space and across species, with emphasis on the human. Many modalities will be required; in fact, you need cellular-level data to build a cellular-level atlas. You want to map things at different scales. This issue of multi-resolution is extremely important in the brain, in particular in this atlas context and through development. Some of the BICAN projects that are up now are trying to address this issue, which is what we are building here in particular. This is a slide from the proposal for the Lein UM1, which summarizes the integration of these transcriptomic and other imaging datasets to build a comprehensive atlas. Bruce, could you comment on this?
BRUCE FISCHL: Sure. I just wanted to point out that when we're thinking about imaging, one of the things that's important is not just being able to image the stuff we're interested in, but being able to image in coordinate systems that are undistorted, to facilitate registration back to macroscopic coordinate systems. So if we want to build an undistorted coordinate system from micron to centimeter, we may need to image things at the micron or 5-micron or 10-micron scale not because it's producing information directly, but because it facilitates the transfer of information from the microscopic to the macroscopic.
MICHAEL HAWRYLYCZ: Sorry. I guess that's an issue in—
BRUCE FISCHL: Yeah, this is just an example of some work from David Boas at BU and Wei Wang showing that you can generate images at the 10-micron scale that are completely undistorted. So you can slice it in any plane, and you don't see the imaging planes in the data because it's not distorted, because you're imaging before you cut.
MICHAEL HAWRYLYCZ: And there are various other aspects toward correction of this issue. MR is a very important modality in atlas generation, and this is just to show that these kinds of things can be done for denoising or correction of image artifacts. We also want to emphasize histology; this is a slide here from the BigBrain atlas of human histology. Further histology data, of course, will be important in understanding cellular texture. So, what imaging data do we need to build this atlas? Those were basically the summaries of that. Now, as far as registration technology goes, one of the things that we talked about and emphasized, presented by Michael Miller, was this fascinating new way of looking at microstructure through essentially particle and dynamical systems, an interaction of particle measurements. Essentially, you can span a brain through the use of these small entities and use methods from dynamical systems to control their resolution and their spanning of the anatomy. It gives you a good way to measure cell density and to measure probability distributions, as in, for example, our interest in mapping MERFISH data, and it helps you control the full transformations, including diffeomorphic manipulations. So what do these frameworks look like? Lydia Ng and David Van Essen added to this context quite a bit.
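As a small illustration of the cell-density and probability-distribution idea described above, the sketch below builds a kernel density estimate from hypothetical 2-D cell centroids (for instance, detected in a MERFISH section) and evaluates it on a grid. The particle framework Michael Miller described is far richer than this; the sketch only shows the basic "cells as points, density as a smooth field" step, with made-up coordinates.

```python
# Minimal sketch: turn detected cell centroids into a smooth density map.
# Coordinates are simulated; in practice they might come from segmented
# MERFISH or other spatially resolved data for one tissue section.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Hypothetical cell centroids (x, y) in millimeters: two clusters of cells.
cells = np.vstack([
    rng.normal(loc=(2.0, 3.0), scale=0.3, size=(400, 2)),
    rng.normal(loc=(5.0, 1.5), scale=0.5, size=(600, 2)),
])

# gaussian_kde expects shape (n_dims, n_points).
kde = gaussian_kde(cells.T)

# Evaluate the estimated density on a regular grid covering the section.
xs = np.linspace(0, 7, 140)
ys = np.linspace(0, 5, 100)
grid_x, grid_y = np.meshgrid(xs, ys)
density = kde(np.vstack([grid_x.ravel(), grid_y.ravel()])).reshape(grid_y.shape)
print(density.shape)  # (100, 140) density values, integrating to ~1 over the plane
```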
Here, we have built a very robust CCF for the mouse, and the goal is to have a community effort to understand how to extend this to the human. There are many components of this, as we noted. David pointed out that a great deal of work has been done through the Human Connectome Project on mapping data this way, and in particular on obtaining this really robust 180-region parcellation of the cortex, which would serve as a very important functional map bridging the small scale and the medium scale. Some points that David made on this are that this kind of approach can be used reliably to map between subjects, agrees with histological architecture, is sensitive and robust, and models cortical folding. And although much more is needed, this will integrate well with our genomic data and histological data, and this is something we're planning to do as part of the ongoing UM1. Our final topic regarded the benefit of this to the community and the field. These reference atlases provide a database of pathologies that can be used for clinical diagnosis, an important correlation with imaging methods, a knowledge hub to which data can be mapped to facilitate cross-species interactions, a bridge to preclinical studies, and basically a mapping to underlying cell types, to really understand this bridge between molecular, GWAS, and other kinds of data sets.
So just to summarize, I want to make a series of points here, collected more or less from our participants. There are fundamental challenges in bridging the vast range of resolutions, from functional areas down to individual cells, and in accounting for high variability. A human CCF must deal with cross-subject variability as well as within-subject variability in structure, function, connectivity, and so on. Multi-scale CCFs at macroscale, mesoscale, and microscale, with multi-modal imaging, MRI, and histology, are proposed. Ideally, you should be able to toggle through these different modalities and interact with them to understand the value proposition of each. Cortical geometry goes a long way, but not all the way, toward normalizing out cross-subject differences in the cortex. Knowledge about variation in the human brain to date is largely based on MRI methods. Looking forward, regarding reference brains, we need significantly more cellular-level data. You want to build a cellular-level atlas? You need to collect cells, right? And this is an important consideration. Regarding the developmental context, given the vast molecular, cellular, functional, and structural changes in early infant development, age-specific CCFs, with densely distributed ages covering that period, will be important. And we have a lot of data types we want to put into this context, so we need a paradigm for sharing that. This is some of the work that Lydia Ng is involved in right now at the Allen Institute with our BICAN initiatives; we need all the informatics-related components to support this. The desirable CCF is an application, something like a modern online map, the Google Maps kind of analogy. And the coordinate system should reflect the brain's natural geometry, with surface-space coordinates for the cortex, voxels for subcortical structures, and the right formats for each. As was pointed out earlier, imaging modalities such as MRI can be used to provide undistorted coordinate systems to map the data into, but the intrinsic distortions, as we noted, must be addressed.
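To make the "surface coordinates for cortex, voxels for subcortical structures" point concrete, here is a small, hypothetical data-structure sketch showing how a single location might be recorded in either representation and tagged with the volume it came from. The field names are invented for illustration and are not a proposed BICAN standard.

```python
# Hypothetical sketch of mixed coordinate representations in one CCF record.
# Field names are invented for illustration only.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SurfaceCoordinate:
    hemisphere: str                           # "left" or "right"
    vertex_id: int                            # index into a cortical surface mesh
    barycentric: Tuple[float, float, float]   # position within the mesh triangle

@dataclass
class VoxelCoordinate:
    volume_id: str                            # which reference volume / resolution level
    ijk: Tuple[int, int, int]                 # voxel indices in that volume

@dataclass
class AtlasLocation:
    structure: str                                  # named structure, e.g. "area V1"
    surface: Optional[SurfaceCoordinate] = None     # used for cortical locations
    voxel: Optional[VoxelCoordinate] = None         # used for subcortical locations

# Example: one cortical and one subcortical location in the same framework.
cortical = AtlasLocation(
    structure="area V1",
    surface=SurfaceCoordinate("left", vertex_id=102345, barycentric=(0.2, 0.5, 0.3)),
)
subcortical = AtlasLocation(
    structure="putamen",
    voxel=VoxelCoordinate(volume_id="template_100um", ijk=(412, 230, 188)),
)
print(cortical.structure, subcortical.voxel.ijk)
```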
And we need flexible, probabilistic information about geometry. It is important to understand the relationships between those modalities and to map that, as Michael Miller has proposed, building these variable representations of the geometry of the brain to understand the information content and value encoded by each modality. I personally have participated in at least two or three such events regarding proposals for the development of multi-resolution human atlases, but I still think there is probably work to be done on a proposal for how to integrate this information, what role each component will play, and what the community might do to deliver on that. So that kind of summarizes our session. If any of our other speakers or participants would like to add anything I may have missed here, feel free.
BRUCE FISCHL: Just one thing that I would add is that I think the Google Maps analogy is somewhat of a false analogy. That is, it's multiple maps, because the brain has multiple properties that we're interested in, and they may not line up with each other. And so we talked a lot about variability. Xiao Wang was talking about age-related variability. I know that David has shown a lot of variability in functional maps and that kind of thing. So we need to be able to embrace both the variability across subjects that Katrin showed, but also the variability within a subject, that is, structure-function relationships: functional maps in my brain may be shifted from the anatomy in a way that's somewhat different than they are in yours. Any CCF that we build has to embrace both of those things. It has to be able to account for the fact that a six-month-old baby's brain looks very different than a one-year-old baby's brain, but also that my brain looks different than Mike's brain. So any CCF we build has to be flexible and powerful enough to handle all of those sources of variability.
MICHAEL HAWRYLYCZ: Thank you. Thank you for the opportunity to participate in this. I think everybody enjoyed this part of our session.
ERIN GRAY: Great, thank you very much for that nice presentation. Are there any questions? I have not seen anything in the chat. Would anyone like to ask a question? We have a few minutes.
RICHARD LEVENSON: There is something in the chat. This sounds very much like GIS, geographical information systems, which are obviously widely used in geography. That basically involves mapping multiple different data sources to a single geographical orientation. I put in a link to what National Geographic says about it, but I wonder if there are analogies that might be useful going forward.
MICHAEL HAWRYLYCZ: Yeah. Trey, can you— I'm not quite sure of the question. Can you rephrase that just a little bit? I didn't quite get the gist of it. Hello?
RICHARD LEVENSON: Hi. Maybe I was muted the whole time. I was talking about GIS systems.
BRUCE FISCHL: Right. We heard that part.
RICHARD LEVENSON: Okay. I don't know how I got muted.
MICHAEL HAWRYLYCZ: Yeah, these potentially apply. A lot of the multi-resolution image mapping and image compression technology comes from this GIS field, and they've made very important contributions. I'm not sure that we've fully exploited all of those technologies, although there has been some work that has really addressed it. I don't know, maybe Michael Miller knows more about that; he might comment on it.
RICHARD LEVENSON: Well, the other thing about it is that I imagine the user interfaces have had decades to evolve, and so there may be tricks on how to actually handle and present the data.
MICHAEL HAWRYLYCZ: This is true, yeah. Undoubtedly, that's the case. These big systems have had major commercial, military, and other applications, which have put a lot of development into them. Lydia?
LYDIA NG: Those two questions sort of follow each other. As Bruce points out, there may not be a single common coordinate system; there are many, many maps, which leads to the point that it needs to be an application and not this one artifact that we produce. Things like the backbones of what GIS systems do, learning from them would be great, but we can't just take directly from it, because there is a high level of 2D-ness in it, and it's also the single Earth; that's the problem. There is a lot to be taken from this, especially how they can do computes in a fast way, but there also needs to be a layer that's more about 3D geometry, and there's already a really large community that does 3D geometry as well. So I think all these things are good, and they all need to be explored.
BRUCE FISCHL: Mike Miller, do you have anything you want to add?
MICHAEL MILLER: Well, I was going to make a complementary point to yours and Mike's and Lydia's. In high-dimensional spaces, it's often the case that we can't really average, so the idea of thinking about the average brain is much less informative than just using the brains themselves as points in the space to interpret against. And I'm thinking of Christos's beautiful work, Bruce. We were all building average templates and then building average statistics and coherences on top of that to represent the average. Then Christos essentially started to work on multi-atlas methods, where you just keep around all of the brains you have and you see if you can build a parsimonious representation by finding the closest exemplar. I think in high-dimensional spaces, this is what we really want to be doing. So it may not be that with 10 or 100 brains we can really build averages to represent variation; we may have to use the brains themselves as representatives and then look for the closest ones in the space, which of course is the complementary approach to using an average. So I think that's sort of what you're saying when you say there are just so many different coordinate systems that we have to use to represent the data. Each one carries its own power, and we have to interpret with each one.
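As a toy illustration of the contrast drawn here between averaging and multi-atlas, nearest-exemplar approaches, the sketch below compares a mean template with simply selecting the closest exemplar brain to a new subject in some feature space. The feature vectors are random stand-ins; real multi-atlas methods operate on registered images and deformations, not arbitrary vectors like these.

```python
# Toy comparison: average template vs. nearest exemplar (multi-atlas style).
# Each "brain" is a stand-in feature vector; real methods would use registered
# images or deformation fields rather than arbitrary vectors.
import numpy as np

rng = np.random.default_rng(4)
n_atlases, n_features = 20, 1000
atlases = rng.normal(size=(n_atlases, n_features))             # hypothetical atlas library
new_subject = atlases[7] + 0.3 * rng.normal(size=n_features)   # resembles atlas 7

# Strategy 1: compare the new subject to the average template.
average_template = atlases.mean(axis=0)
dist_to_average = np.linalg.norm(new_subject - average_template)

# Strategy 2: keep all exemplars and pick the closest one.
dists = np.linalg.norm(atlases - new_subject, axis=1)
closest = int(np.argmin(dists))

print(f"distance to average template: {dist_to_average:.1f}")
print(f"closest exemplar: atlas {closest}, distance {dists[closest]:.1f}")
# In this toy setup the nearest exemplar is far closer than the average,
# which is the point about using brains themselves as references.
```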
BRUCE FISCHL: Good points. Yes.
ERIN GRAY: Great. Thank you again very much. We're out of time for this presentation, so we will move on to our final report-out, from Drs. Elizabeth Hillman and Harry Haroutunian. They moderated the session on use cases of the brain cell atlas and unmet research and medical needs. We can see your slides, so you can go ahead. Thank you.
ELIZABETH HILLMAN: Great. So yeah, it really broke my heart to miss the microscopy session, but I'm really passionate about this one, so I was very happy to help lead this discussion with Harry Haroutunian and with our note-takers, Jason Stein and Mercedes. We had about 30 participants, although I suspect that some of them were also watching the football. The lead-off questions that we had were: what areas of science and medicine will most benefit from cell-type-resolved whole human brain data? How can we maximize the impact and accessibility of these data, and how can we better reach and engage these communities? To summarize, the goal was really to better define the use cases for the BICAN data that we're all going to be working so hard to collect. This is important to help guide the experimental design decisions that came up in a lot of the other sessions, such as the number of labels; to make sure that we engage these users and stakeholders during the project; to ensure that we provide data in ways that are accessible to those stakeholders; and therefore to maximize the overall impact of the BICAN, both for discovery and understanding the brain, and for brain health. A quick overview of the discussion points so that you know what's coming: the first main thing we talked about was inter-subject variation. Then we talked a little bit about structure versus function. Then we got into some clinical use cases and who our clinical stakeholders are, and talked about some further data uses. We got into a very good discussion about cause of death, age dependence, and selection bias, which I think echoes some of the discussions earlier, and then I'll finish with a summary of the action items. So the topic of inter-subject variation came up. Some people say, okay, variance is a problem, right, if we want to generate some sort of average brain, but I think there is also consensus that the fascinating part about studying the human brain is that everyone is different.
Now, it was suggested by Jason that one use case would be to understand the cellular basis for individual differences that might be guided by things like genetics, and it was also pointed out that we actually do already know quite a lot about variance from things like MRI datasets, so it's worth not just shrugging our shoulders and saying, "Well, there's going to be a lot of variance." We heard this from Katrin yesterday also about the human brain: it's pretty clear there's a lot of variance, and we shouldn't be surprised about that. But then the question came up: if our numbers are going to be low and the variance is going to be high, what can we do? It does seem clear we need to understand normal variance. There was a question of whether we are even ready to be looking at whole brains; maybe we should just focus on one brain region first and do many replicates, so that we can understand what we have to do to scale up and figure out what we can learn. We had a long discussion about twins and whether there was any way to get twin brains, although on that one, I'm not sure. And again, we need data to understand variance, right? Unless we start getting data, we're not going to be able to really know; we all know this from writing our animal protocols and so on. So the consensus was much as what I heard earlier: we kind of have to start doing something. But I do think it's very important that we are very careful about thinking through how we're going to analyze our first samples and how that's going to impact the subsequent samples. And there was a sense that the BICAN seems to want it all, but doing everything might not be possible, and that it would really be great as a consortium to make sure that all the different aspects are covered, even though no one single project can necessarily meet all of the requirements.
The other thing that came up a great deal was that no brain is normal, and if we think we're making some kind of a control dataset right now, the problem is that if we don't know what it's a control for, that's not so great. Also, there's always going to be massive variability between the brains that we're going to get here, unless we really constrain ourselves to a very small population group, which is also not going to meet the overall needs and goals. We got into talking a little bit about structure versus function, because if we're going to be getting brains from people who obviously are dying, we need to think about whether it's possible at all to get functional brain imaging data. It was mentioned that it is possible to do this if we base recruitment on hospice, end-of-life situations, so that's somewhat feasible. There are other opportunities as well, for example, monitoring functional imaging studies in your institution that are working with populations that may have terminal illness. There were some really nice ideas about epilepsy and opportunities to get fresh tissue from epilepsy patients intrasurgically, who may also have measurements done. But generally, the consensus was that almost all of these scenarios are going to have to address the fact that these brains are going to come from some kind of pathological state. The point was also made that if we're going to want functional imaging and cellular imaging data to try to understand this relationship between structure and function, there is a question of why that hasn't been done more fervently already in mouse brains, which are a little more tractable. Encouraging work in that space could set a good precedent, or a baseline, for what we expect to be able to understand from doing this in the human brain in a much more challenging way.
The clinical use cases came out really well; we organized them in terms of stakeholders, so I'm popping up quotes here. I think we had a very clinician-heavy group, and there was a general consensus that everybody there was really interested in thinking about disease. For neuroradiologists, some of the suggestions were being able to map between modalities or to better interpret in-vivo images, so to be able to interpret your MRI image in terms of, for example, potentially nuclear density. Neuropathologists were a no-brainer; they are here and they are very interested in all the different aspects of being able to understand disease, and in biomarkers to add to databases. One that came up that I hadn't really heard of before, and that I think is really exciting, is neuropharmacologists. The quote was, I think, "All psychoactive drug actions would be better understood with cell-type-resolved whole brain data. For example, are all NMDA ligands working on the same cell types?" This seems like a new avenue and could give us new insights into things that we might want to label, with really profound impact on understanding how both psychiatric disorders and their treatments are working, or not working, in the human brain. Comparative neuroanatomists, as Patrick said, all six of them, would greatly benefit from new atlas generation in comparison to existing atlases, understanding variability, and defining new cell types, morphologies, and region dependencies. Epileptologists, again, came up: a great opportunity to understand the epileptic brain from both a structural and a functional perspective. Neurologists, of course: we have many, many disorders and diseases, many of which we're going to end up, I think, probably inadvertently including, and things like aging, Alzheimer's, and Parkinson's, all kinds of disorders that could gain great value from these kinds of datasets. So again, the quote here was, "Is disease a BICAN interest? Yes. But we need normative data." Thinking about how best to get the normative data that will then help studies in disease states is really, really important.
Neurodevelopment, we didn't get into all that much, but it was suggested that autism brains for developmental studies could be helpful, and psychiatric disorders I'll talk about a little bit more in a minute. We talked about a few other areas, thinking about data uses, bringing out computational methods for digital pathology and artificial intelligence, which I think even in the last year have come of age: the ability to make predictive models from one format of data to another, for example, to predict a Nissl stain from cellular imaging data, or using models to predict things like cellular density from MRI. This is more and more feasible. The reason why I bring it up is because we would want to make sure that the data we generate are AI-ready and organized in a way where, maybe we don't have the computational power to do this on the scale of tens of petabytes of data yet, but that may well come reasonably soon, and it would be a shame to miss that opportunity. You could also use classic neuroanatomy to train models to get over some of the constraints of having to hand-annotate or visually inspect data. It was raised that artificial intelligence and machine learning require large sample sizes, and the point here is, yes, if we're really thinking about inter-individual models, taking the data from one individual and then trying to predict what diseases they have, we wouldn't have enough subjects; but for intra-individual models, for example, defining boundaries or looking at cell types across different areas or contexts, we will have a ton of data. And so there's a lot of opportunity there, I think. It was also brought up that if we ensure that the imaging systems we have are compatible, there's a possibility of really opening up the ability for people to add new data and new specimens to this database, which would then allow the number of Ns to increase for certain brain subregions or certain disease states, and that would be really helpful.
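As a small, hedged illustration of what "AI-ready" storage for very large imaging volumes can mean in practice, the sketch below writes a chunked, compressed array with zarr so that downstream training code can read arbitrary sub-blocks without loading whole sections. The shapes, chunk sizes, and store name are arbitrary examples, not a BICAN specification.

```python
# Sketch: chunked storage so sub-volumes can be read independently, which is
# one practical meaning of "AI-ready" for petabyte-scale imaging data.
# Shapes, chunk sizes, and the store name are arbitrary examples.
import numpy as np
import zarr

# A hypothetical image volume: (sections, rows, cols). Real data would be far
# larger and written section by section as it is acquired.
volume = zarr.open(
    "example_volume.zarr",
    mode="w",
    shape=(64, 4096, 4096),
    chunks=(1, 1024, 1024),     # one section, tiled into 1k x 1k chunks
    dtype="uint16",
)

# Write one simulated section; only the chunks it touches are stored on disk.
rng = np.random.default_rng(5)
volume[0] = rng.integers(0, 2**16, size=(4096, 4096), dtype=np.uint16)

# A training pipeline can later read a single tile without touching the rest.
tile = volume[0, 1024:2048, 1024:2048]
print(tile.shape)  # (1024, 1024)
```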
In terms of data accessibility to different groups, it was stated that we need to make the data understandable and translatable across modalities. We talked very briefly about clinical training and whether data on this scale can be accessible to clinicians. This was the only picture of an engineer and a doctor that I found online that I could tolerate, but I think it just speaks to the need for really strong encouragement of collaboration across disciplines here; we need a lot of different people with a lot of different skills all working together to make this a success. And then, as I said, we got into a very interesting discussion on cause of death, with the tagline being that all brains do come from some kind of cause of death. It was stated that in one of the banks that was represented, the younger brains are predominantly coming from things like drug overdose or suicide, which are referred to as deaths of despair. Current exclusion criteria reject suicide but can actually accept treated psychiatric disorders, which seems a little bit strange. So we felt that it's both a risk and an opportunity to actually consider these brains, but also to recognize that brains from different age groups may have different neuropsychological and neuropharmacological phenotypes, and that's really worth paying attention to. The counterpoint was that other banks don't see that pattern; they stated that their younger brains tended to come from heart disease. It was also stated that we get far fewer brains from rural areas compared to urban areas because of postmortem intervals, so there's always this potential for selection bias based on our criteria. And although it was stated that the new QB tracking portal will include all of this information, it's not necessarily going to give us insight into selection bias.
So it was actually proposed that we could do something now, which is to screen all of the current brain banks for this kind of information, to understand statistically which brains are likely to be available to us. This gives us a chance to think about how to come up with an experimental design for questions we could feasibly ask within certain subpopulations. It's a little bit like finding out all of the animal strains that you have in your building to see whether or not you can do an experiment more quickly or more slowly. I think this would be extraordinarily helpful to give us more insight into the kinds of inter-subject comparisons, and the different questions we may or may not be able to answer, with the data we could generate, given the fact that we have to take brains from people who have died. Then, in terms of action items, these are just things that stood out to me, and hopefully there will be a minute or two at the end for the rest of the group to pitch in. The relevance to neuropharmacology and neuropsychology I thought was really interesting; considering opportunities for intrasurgical brain specimens is also exciting; and better exploring how to acquire functional data in donors and better understanding the expected distribution of donor brains will let us better plan for the number of comparisons we might be able to do. We should also consider expanding inclusion criteria to embrace different causes of death that could provide insights, even at this fairly low N. And a wonderful person stepped in at the end and said, "Take a moment to consider what seems obvious now that you would never have dreamed of five years ago." I think we were getting a little bit depressed, but this was actually a very helpful thing to think about. I think we're going to generate incredible insight almost no matter what we do, and we should use that to inspire ideas. Things that I had maybe hoped we'd touch on that weren't really discussed were early brain development; relevance to neuroscience and neuro-theory, sort of the understanding of brain computation or function, which maybe needs to be a topic of another discussion; and also things like cross-species comparison and neuroethics. And I'll leave it there. Thank you.
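To illustrate the kind of brain-bank screening described above, here is a small, hypothetical pandas sketch that tabulates donor metadata by cause of death and age band to estimate how many brains might be available per subgroup. The column names and records are invented; real bank inventories and inclusion criteria would obviously differ.

```python
# Hypothetical sketch: summarize brain-bank metadata to estimate available N
# per subgroup before designing comparisons. All records are invented.
import pandas as pd

donors = pd.DataFrame({
    "bank": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "age": [34, 29, 71, 66, 45, 82, 38, 58, 24],
    "cause_of_death": [
        "overdose", "suicide", "cardiac", "cardiac", "cancer",
        "cardiac", "overdose", "cancer", "accident",
    ],
    "postmortem_interval_h": [14, 9, 22, 18, 30, 12, 16, 26, 11],
})

# Bin ages and count donors per (age band, cause of death) cell.
donors["age_band"] = pd.cut(donors["age"], bins=[0, 40, 65, 100],
                            labels=["<40", "40-65", ">65"])
available = (
    donors[donors["postmortem_interval_h"] <= 24]   # example inclusion criterion
    .groupby(["age_band", "cause_of_death"], observed=True)
    .size()
    .unstack(fill_value=0)
)
print(available)
```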
ERIN GRAY: Thank you very much. That was a nice summary. We do have some questions in the chat. One is related to the consideration of environmental toxin exposure data as a component of potential metadata.
ELIZABETH HILLMAN: Sure. Sorry.
ERIN GRAY: Yes. They could be imputed by geography, occupation, lead, pesticides—
ELIZABETH HILLMAN: You're thinking of farmers, right? Yeah.
HARRY HAROUTUNIAN: Some of that can be done retrospectively with the geo maps of exposed zones that are currently available. And we know where the donors come from. We know their zip codes and some of that can be surmised but obviously, it would be great to get real data on the actual donors.
RICHARD LEVENSON: And there's a discussion in the chat about whether you can actually measure any of these relevant things in the brain samples and cells, with, I'm guessing, mass spec or something like that. And there's the question of what crosses the blood-brain barrier; it turns out a lot of the bad factors do get into the brain and could be detected.
HARRY HAROUTUNIAN: Yes. Some of that work is already published. Certainly with heavy metals and things like zinc.
RICHARD LEVENSON: So I'm just proposing that as another piece of metadata for the dataset.
ERIN GRAY: There's also a question, could drug-receptor binding affect the use of stains for specific cells?
HARRY HAROUTUNIAN: Well, to some extent that's why the toxicology screens that are being done by the NeuroBioBank are so critical, and they will be available at least on NeuroBioBank samples. And that encompasses over 250 different commonly prescribed medications and psychoactive medications.
ERIN GRAY: Right. Thank you. We are a little over time. There is a discussion ongoing in the chat, and it looks very interesting, but I would like to give people their break. Here, let me. So, Laura, I believe we're giving a 30-minute break. Is that correct?
LAURA REYES: Yeah. Coming back at 1 o'clock.
ERIN GRAY: Okay. Great. But I would also like to again thank the panel of moderators for their nice presentations today. Many challenges were discussed, and potential avenues to solve those challenges were also posed. I sincerely hope that you all keep the conversation going after today, and that we can really begin to tackle these challenges as a collaborative community. I'm very encouraged by the discussions here today. So thank you again for those nice presentations. It's about 25 minutes until we resume the workshop, so I will see you back in 25 minutes.