
Sophomore Year

By Joshua A. Gordon, M.D., Ph.D.

Last year, just before Labor Day, we made a trial run. My daughter and I tested the commute she would take to her first day of high school. We figured out how long it would take, where to get off the train, and the best way to walk. We discovered a Starbucks on the way. Then we headed back home. The trip was quiet, both of us pensive, thinking about the year to come.

Not this year. Poring over her schedule this Labor Day weekend, my daughter exuberantly declared which teachers she was excited about and which she dreaded. She talked about where each course would lead and how she hoped to get into a special elective next semester. She zealously looked forward to the future, anticipating softball season in the spring and even planning for next year’s classes. A rising sophomore, she had a plan and was daring to share it.

As I enter my own sophomore year, I am feeling a bit like my daughter. I spent the last year finding my way, taking tentative first steps, and now I’m ready to share what I’ve learned and plan where we need to go. I tried to spend the bulk of my first year listening to all of you and learning what it means to try to accomplish the ambitious mission of the NIMH. I heard from established scientists and health professionals excited by their latest findings, and from trainees daunted by what lies ahead but determined to give it a go. I listened to impassioned self-advocates as they told me their stories of lived experience, as well as committed family members and friends frustrated by limited options for care. I met with countless proud, overburdened but steadfast NIMH employees and heard firsthand how deeply committed they are to working toward a greater understanding of mental illnesses and better treatments for those who suffer from them.

My freshman year experiences have convinced me more than ever that our field faces both significant challenges and tremendous opportunities. I’ve already articulated in prior messages my thoughts about research priorities that take advantage of some of these opportunities, in suicide prevention, neural circuits, and computational psychiatry. These remain priorities. Here and over the next few months, I want to share with you some additional thoughts and priorities I’ve developed over the course of the past year and hope to begin working on in the coming year.

Opportunities in disease-focused research

We are in the midst of a revolution in basic neuroscience, particularly in areas relevant to psychiatric disease. We must continue to support these discoveries, as they are the wellspring from which the transformative treatments of the future will emerge. But in key areas—biomarkers, genetics, and animal models, for example—we can and must think more translationally.

I wrote about two compelling biomarker studies over the summer. I want to underscore the main points of that post. We desperately need biomarkers in psychiatry, as aids in diagnosis, prognosis, and treatment selection. And computational methods combined with large, shared databases provide an unparalleled opportunity to develop brain-based biomarkers that could indeed help patients and doctors make crucial decisions. But these biomarkers will only be helpful if they address key clinical questions. What lies ahead for a particular patient? Which of the available treatments have the greatest chance of success? By how much? It is not good enough to test whether a particular biomarker describes a sub-group of patients accurately, or whether an individual belonging to that sub-group responds well to a particular treatment (though a good biomarker must also do this). Classifying patients by the biomarker must help answer one or more clinically relevant questions, and the design of the biomarker study should take this imperative into account. Biomarkers that differentiate between responses to two or more treatments, that differentially predict longitudinal course, that predict future clinical response early in the course of treatment, or that tell the clinician when it is ok to discontinue a course of treatment, would be incredibly useful to psychiatrists; studies to identify and evaluate such biomarkers should be prioritized.
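To make this concrete, here is a minimal sketch of the statistical question behind "differentiating between responses to two or more treatments": what matters is the treatment-by-biomarker interaction, not merely whether the biomarker defines a sub-group. The data, variable names, and effect sizes below are invented for illustration only and are not drawn from the studies mentioned above.

```python
# Illustrative sketch only: testing whether a (hypothetical) binary biomarker
# predicts *differential* response to two treatments, via the
# treatment-by-biomarker interaction in a simple regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical trial size

biomarker = rng.integers(0, 2, size=n)   # 0 = biomarker-negative, 1 = positive
treatment = rng.integers(0, 2, size=n)   # 0 = treatment A, 1 = treatment B

# Simulated symptom improvement: biomarker-positive patients respond better
# to treatment B (an interaction effect of 0.8 standard deviations, assumed).
improvement = 0.3 * treatment + 0.8 * treatment * biomarker + rng.normal(size=n)

df = pd.DataFrame({"improvement": improvement,
                   "treatment": treatment,
                   "biomarker": biomarker})

# The interaction term, not the biomarker's main effect, is what tells us
# whether the biomarker could guide treatment selection.
model = smf.ols("improvement ~ treatment * biomarker", data=df).fit()
print(model.summary().tables[1])
```

If the interaction coefficient is reliably different from zero, the biomarker is telling the clinician something about which treatment to choose; a significant main effect alone would not.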

Nowhere is the revolution in psychiatric neuroscience more obvious than in genetics. Five years ago, we knew of a handful of genes with bona fide links to psychiatric disorders. Now we know of hundreds of locations in the genome that are linked to schizophrenia, bipolar disorder, obsessive-compulsive disorder, post-traumatic stress disorder, attention-deficit/hyperactivity disorder, and/or depression. As we narrow down these loci to a list of genes, we need to increase our focus on understanding their biology, individually and collectively. I’ve asked a broadly representative group of psychiatric geneticists and neuroscientists, under the auspices of the National Advisory Mental Health Council, to advise me with regard to a path forward for harnessing the power of these genetic findings. Their report is in the final stages of preparation, and will be presented at the next Council meeting on January 25th. One of the key discussion points was the candidate gene approach, in which genes identified through directed studies or limited patient samples have been advanced despite insufficient statistical proof. Through these and other discussions, I have come to recognize that we can no longer use now-discredited genetic links to justify studying genes like DISC1, dysbindin, etc., in the absence of the harder proof that comes from unbiased genome-wide approaches. Instead, we must encourage investigators to pivot towards studying the collective neurobiological actions of genes that have achieved genome-wide significance, such as complement component 4 (in the MHC locus) for schizophrenia or CHD8 for autism.
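For readers less steeped in the statistics, the "harder proof" of an unbiased genome-wide approach comes from clearing the conventional genome-wide significance threshold of p < 5 × 10⁻⁸, which corrects for the roughly one million independent tests in such a study. The sketch below uses made-up loci and p-values purely to illustrate that filter.

```python
# Illustrative sketch with invented data: genome-wide approaches accept a locus
# only if it clears the conventional significance threshold of p < 5e-8,
# a far higher bar than the nominal p < 0.05 of many candidate gene studies.
import pandas as pd

GENOME_WIDE_THRESHOLD = 5e-8  # conventional GWAS significance threshold

# Hypothetical association results; a real study would test millions of variants.
results = pd.DataFrame({
    "locus": ["locus_A", "locus_B", "candidate_gene_C"],
    "p_value": [3.2e-12, 1.1e-9, 4.0e-3],
})

robust = results[results["p_value"] < GENOME_WIDE_THRESHOLD]
print(robust)  # only locus_A and locus_B survive the genome-wide bar
```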

The animal model space, too, has become more complex as we appreciate the need for more rigor and better translation to humans. We have recognized for some time the limitations of approaches such as face validity—to what extent a model manifests the apparent characteristics of a human disease—or the use of purely behavioral assays to measure the effects of single gene knockouts. The NIMH, as well as many journals and scientists, has begun to realize that we cannot and should not be talking about models of disorders but rather models for scientific inquiry—that is, models are used to ask specific questions that are hopefully translatable, rather than to represent a disorder as a whole.1 Genetic models are particularly problematic in this regard. When we think about “modeling” a genetic predisposition in an animal, what we are really doing is investigating how that genetic predisposition affects cellular and brain function, so as to trace potential pathways from the gene that might lead us to phenotypes relevant to the disorder. Demonstrating this relevance is challenging for psychiatric disorders, where the route between gene and phenotype is likely to be a multiplexed web rather than a linear path,2 where multiple genes of small effect sizes interact, and where key anatomical and physiological features of the brain are likely dramatically simplified or even missing in rodents. In order to focus on models for rather than models of, we prefer the term experimental systems, rather than animal models. This new terminology is inclusive of human cell systems, which address issues of genetic complexity and specific human biochemistries, as well as non-human primate models, which address inadequacies of rodents with regard to the structure and function of the intact brain. Finally, experimental systems should be thoughtfully integrated with parallel patient studies, in order to fashion a translational pipeline with the potential to confirm relevance and generate promising treatment targets.

Opportunities in clinical research

As soon as I arrived at NIMH, the issue of our portfolio balance was raised. Clinical scientists have been concerned by what they perceived as a shift in focus away from clinical research and towards basic neuroscience. Their concern is that basic science research prioritizes long-term timeframes while diminishing efforts that might benefit those who suffer now. My initial response to these concerns was to reiterate NIMH’s commitment to a portfolio of research efforts that prioritizes excellent science, with the secondary mandate of maintaining a balance of research with short-, medium-, and long-term timeframes. Of course, the reality is more nuanced. How do we define excellence in science when comparing efforts across such vastly different disciplines as molecular neuroscience and implementation science? What is the current balance of investments, and what should it be? And more importantly, how do we ensure that the dollars we do spend are spent wisely? There are no ground-truth answers to these questions, but we must consider them nonetheless if we are to fulfill our duty to current and future generations of individuals afflicted with mental illnesses.

With regard to scientific excellence, NIMH leadership, with significant input from the extramural staff, spent considerable time this summer coming up with what we hope are general criteria we can use to compare grant applications across the NIMH research portfolio. These criteria are meant to supplement, not circumvent or supersede, the evaluation of the quality of the science conducted by our peer review panels, which do an excellent job of judging science within a particular discipline. We plan to combine these complementary approaches to help us look across our divisions and offices and prioritize excellent science while assuring portfolio balance. The criteria fall into four categories—rigor, impact, innovation, and investigator—and each category can be summarized by a question that begins with the same phrase:

Assuming everything works…

Rigor: Will the study be definitive?
Impact: How will the study change the field?
Innovation: How will the study shake things up?
Investigator: What do the people add?

I’ll be expanding upon these ideas in a future director’s message, as we begin to use these criteria to ensure that discussions about balancing the portfolio take place within the context of ensuring excellent science.

But what about our portfolio balance? I asked the team in our Office of Science Policy, Planning, and Communications to provide me with hard numbers, but even this seemingly simple task turns out to be challenging. We categorize research efforts many ways at NIMH, but none of them correspond precisely to timeframes. This is understandable if you think about it, because it is incredibly difficult to predict how long it will take for a scientific advance to change clinical practice. Nonetheless, the team took on the challenge, and came up with a couple of different ways of categorizing research programs that roughly reflect these timeframes. I will show you the data and discuss its implications in an upcoming director’s message specifically on portfolio balance.

Finally, the NIH has placed increasing emphasis on addressing issues of rigor and reproducibility. The challenge of ensuring reliability in research studies is particularly evident in psychiatric research, where we have a long history of landmark initial findings failing to replicate. Genetic linkage, neuroimaging findings, and clinical trials have all suffered in this regard. Various reasons have been given for this difficulty in replicating, including the heterogeneity of the illnesses we study, changing methodology, placebo effects, improper statistical methods, and diagnostic uncertainty. Each of these may play a role, but the principal issue is how to move beyond the problem and develop a program of research studies with sufficiently rigorous designs and sufficient power that they will indeed replicate. This means increasing the size of our studies so that they are adequately powered for real-world effect sizes; ensuring that the statistical analysis plans included in grant applications contain sufficient information for reviewers to properly evaluate them; ensuring that review sections and program staff have the expertise to properly assess statistical plans; and encouraging and enforcing the use of data sharing platforms that will enable third-party confirmation and mega-analyses that consolidate data from multiple studies.
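To give a sense of what "adequately powered for real-world effect sizes" means in practice, here is a minimal power calculation sketch. The effect sizes, alpha, and power target below are conventional illustrative choices, not NIMH requirements.

```python
# Minimal sketch: required per-group sample size for a two-sample t-test
# at 80% power and a two-sided alpha of 0.05, across a range of effect sizes.
# The effect sizes are illustrative assumptions, not NIMH recommendations.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 0.5, 0.2):  # conventional large, medium, small effects (Cohen's d)
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                                       alternative="two-sided")
    print(f"d = {d:.1f}: about {n_per_group:.0f} participants per group")
```

With these assumptions, the calculation returns roughly 26, 64, and 394 participants per group for large, medium, and small effects, respectively, which is why studies built around realistic effect sizes must be substantially larger than many of those that failed to replicate.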

In sketching out these priorities, I hope it is clear that I’ve learned a lot from all of you, and I will continue to listen and learn as we move forward. In sharing some of the conclusions I have drawn, I realize that I’m sticking my neck out a bit, and there are bound to be some of you who disagree with what I propose. From listening to you all for the past year, I also know that you’ll be plenty willing to tell me when you do disagree. Nonetheless, I have just enough of my daughter’s exuberance to think that I’ve got at least a few things figured out going into my second year at NIMH.

Oh – and I’ve learned another thing. There’s a Starbucks on my way to work, too. 

References

1 Gordon JA. Testing the glutamate hypothesis of schizophrenia. Nature Neuroscience. 2010 Jan;13(1):2-4. doi: 10.1038/nn0110-2.

2 Bolkan S, Gordon JA. Neuroscience: Untangling autism. Nature. 2016 Apr 7;532(7597):45-6. doi: 10.1038/nature17311. Epub 2016 Mar 23.