
FY2024 Individually Measured Phenotypes to Advance Computational Translation in Mental Health (IMPACT-MH): U01 and U24 NOFOs Technical Assistance Teleconference

Transcript

JENNI PACHECO: Okay. And I'm going to try and share my screen. All right.

SARAH MORRIS: All right. Should I let people in?

JENNI PACHECO: Sure. Yeah. Sure. We'll probably wait. I mean, we can let them in, but wait a few minutes too to see if more people trickle in.

SARAH MORRIS: Yeah.

JENNI PACHECO: All right. Hi, everyone. We're starting to let some people in from the waiting room, but we will wait a few more minutes as people keep popping in ‘til we get started.

JENNI PACHECO: All right. I see that we're continuing to let people in, so I just wanted to let everyone know that we will wait a few more moments as people pop in, and we'll get started shortly.

JENNI PACHECO: All right. So, I'll go ahead and get started. Thank you to everyone who's joining. This is our technical assistance webinar for the IMPACT-MH initiative, Individually Measured Phenotypes to Advance Computational Translation in Mental Health. We've got two funding opportunities, a U01 and a U24, so we'll be answering questions about each of those today.

Just some general information: this call is being recorded, and the recording of this meeting and all the content that we present will be posted at the website listed here on this slide. This website is also listed in the funding announcement if you need to find it. We're asking that you please submit any questions you might have through the chat, and we will try to answer additional questions as we go.

We have a lot of questions that we've heard up to this point that we've pulled together into some slides that we'll be presenting. If we're unable to get to everyone's questions, we'll ask you to please email them, and in the few days after this, we'll pull those together with some answers and post them to this website as well. So as I said, we have two--

[inaudible].

JENNI PACHECO: Yeah.

SARAH MORRIS: Maybe go back one slide and pause for just a minute because there are still people joining.

JENNI PACHECO: True. Perfect.

SARAH MORRIS: Yeah. Maybe just for 30 seconds.

JENNI PACHECO: Sure. Oh, yeah. I see. I can't see the-- I can't see it when I'm sharing the screen.

SARAH MORRIS: And now that I've said that, they've all stopped. So never mind.

JENNI PACHECO: All right. Well, we'll keep going. But this will all be posted later. And if you use the Zoom chat for your questions, we will try to get to those as well.

And so as I said, we have two funding opportunities: one of them for a U01 and one of them for a U24. These are both cooperative agreements, so there will be substantial involvement from NIMH staff and the steering committee for the IMPACT-MH initiative after awards are made.

And we ask that you look carefully at the application instructions in Section 4, the review information in Section 5, and the terms and conditions in Section 6, which will give you a lot of information about the applications and what you'll need to include when you apply.

Of course, the purpose of this webinar is to provide information about these RFAs and to answer some general questions. We can't answer any specific questions about planned applications; for those, we'd want you to contact your program officer to discuss your specific application. If you aren't sure which program officer to contact, you can contact me directly at jenni.pacheco@nih.gov, and we'll be able to get you some feedback prior to your submission.

So just some background about IMPACT. Of course, our current approaches for diagnosing mental disorders can lead to a great deal of heterogeneity within the diagnostic groups. And they don't really provide a sufficient characterization of individual patients that can inform clinical decision-making.

So here we're really hoping that by using machine learning and other data-driven approaches, we can integrate data from behavioral assessments with clinically available data to generate more precise and objective clinical phenotypes. This work is consistent with, and sponsored by, the Research Domain Criteria (RDoC) initiative here at NIMH. And we're seeking studies that will enable these data-driven algorithms to generate clinical phenotypes that optimize evaluation at the level of the individual to inform clinical decisions.
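
To make that concrete, here is a minimal illustrative sketch of the kind of data-driven phenotyping described above. The feature names, values, and choice of clustering method are hypothetical examples, not anything prescribed by the RFAs.

```python
# Illustrative sketch only: derive candidate clinical phenotypes by clustering
# behavioral-task features together with clinical-record variables.
# All column names, values, and the choice of k-means are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical participant-level table: behavioral measures + clinical-record data
data = pd.DataFrame({
    "reaction_time_ms":  [512, 430, 610, 475, 590, 455],
    "task_accuracy":     [0.81, 0.92, 0.70, 0.88, 0.73, 0.90],
    "phq9_total":        [14, 4, 18, 6, 16, 5],   # from the clinical record
    "prior_visits_12mo": [6, 1, 9, 2, 7, 1],      # from the clinical record
})

# Put all features on a common scale before clustering
features = StandardScaler().fit_transform(data)

# Derive candidate phenotype labels (two clusters purely for illustration)
model = KMeans(n_clusters=2, n_init=10, random_state=0)
data["candidate_phenotype"] = model.fit_predict(features)

print(data)
```

In a real application, the choice of features, algorithm, and validation strategy would need to be justified against the scientific question being asked.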

This pair of notices of funding opportunity is intended to stimulate and support research that will use behavioral measures and computational methods to define novel clinical signatures that can be used for individual-level prediction and clinical decision-making. So our U01 projects will use a multi-component approach to identify or develop behavioral tasks, and other types of measures as appropriate, that are optimized for measurement of individual differences.

They'll collect data from novel clinical cohorts or identify existing datasets that include behavioral data. They'll derive novel clinical signatures that incorporate behavioral measures and information derived from the clinical record, and they'll partner with the Data Coordinating Center, or DCC, which is described in the companion U24 funding announcement. The DCC will coordinate harmonization and aggregation of data, analyze the combined data, and create a data infrastructure to store and share the data.

And we'll get to this in a little more detail, but I want to say upfront that applicants can propose to collect new cohorts or may leverage data from existing clinical cohorts, as long as those cohorts have the appropriate data structures. And you can use a combination of these two approaches, with new and existing data. The U24 grants, which will fund the Data Coordinating Center, will support the work of the IMPACT-MH U01 research projects.

And so the DCCs will really be responsible for facilitating regular communication and coordination among the IMPACT-MH projects. Where applicable, they'll support the use of common data elements, standard measures, and uniform data collection. They'll build the informatics infrastructure and pipelines to gather and store the participant-level data. They'll perform computational analyses on combined datasets, and they'll monitor the collected data to identify and address potential biases in data collection.

So, to go over the timeline real quick. I know this is a short timeline; that's probably the first thing most people notice. The letter of intent is due May 14th, which is in just a few weeks. The letter of intent is optional, but we do request it.

So if you know that you're pulling together an application, if you can submit a letter of intent, that would be great. It'll help make sure that program staff know what's coming and can give you some feedback prior to submission if needed. The applications are due June 14th; this is the same for both of the RFAs. The June 14th deadline will allow the scientific review to be held in the fall of 2023.

The review will be coordinated through NIMH, which will convene a group of peer reviewers to handle that. Applications will then go to the advisory council meeting in January 2024, with a start date of hopefully around April 2024.

So now I'll just go through some general questions about both or either RFA first. The first question we keep getting is: will the RFAs be reissued or submission dates added? Currently, there is no plan to reissue the RFAs or add any submission dates. On the budget caps: the U01 has a maximum annual direct cost of $2.5 million, the U24 has a maximum annual direct cost of $1 million, and the maximum grant duration is five years for both the U01 and the U24.

And there's no prior approval to exceed the budget caps. Those are the caps, and you'll have to work within those budgets. Are the set-aside amounts that are provided in the RFAs for the full duration of the grant or only the first year? The set-asides are for the first year of the grants. NIMH is intending to commit $30 million total in FY24 for both of these notices, to fund up to eight U01 awards and one or two U24 awards.

Are there special page limits for applications under these notices? No, the standard page limits apply, so that's a 12-page research strategy section for both the U01 and the U24. And you can follow the link here to see the NIH page limits table for more information.

Are subcontractors allowed to issue subcontracts; that is, are third-tier subcontracts allowed? We would highly discourage these, and they would only be allowed under some unusual circumstances. We would encourage you to contact Grants Management or your program officer to discuss strategies if you have a specific situation where you think this would be needed.

And are applications from non-US institutions allowable? For the U01, foreign institutions are eligible to apply, but for the U24, foreign institutions are not eligible to apply. You can see more information about this in Section 3 of the notices.

And what about appendices? The NIH policy regarding appendices applies to these notices, so only the permitted forms of appendices are allowed. Also, letters of support are not considered appendices, so you can submit those as part of your application.

You may have noticed that there is a Plan for Enhancing Diverse Perspectives, which is starting to be required in a number of applications and announcements coming from NIMH, and it is required in the IMPACT-MH applications. The Plan for Enhancing Diverse Perspectives is really a means to look at how your grant staff, your study staff, and the participants you're recruiting include populations that may not normally have been included, or that you haven't included in the past. And there are many different ways that you can include diverse perspectives.

You can include scientists at different stages of their careers, with different scientific expertise, from different locations, be it rural, urban, or more remote, or from different racial or ethnic backgrounds. The PEDP is just a summary of how you've included this through all aspects of your project. The PEDP is a one-page summary that's uploaded as an Other attachment, and you can see some information about that in Section 4 of the funding announcement.

There's a web page that I've put at the bottom of this slide that has some examples of what is included in the PEDP and how to incorporate it throughout the project. So here are a couple of questions that are all about whether someone can be a PI or have multiple roles across multiple different projects. They all have basically the same answer.

So, can an investigator be a PI on both a U01 and a U24? Can an investigator be a PI on a U01 or a U24 and a co-investigator or a consultant on a different U01 or U24? Can an investigator be a PI on more than one application of the same type, so two U01s or two U24s? The answer in all cases is yes. But if an investigator is included on both a U01 and a U24, or on more than one U01 or U24, the applications really should not mention or refer to any of the other applications. Every application should really be a stand-alone application, but you can have staff listed on more than one if needed. Following that question, should U01 or U24 applicants coordinate their applications with other U01s or U24s? It's not necessary for them to be coordinated. And actually, I might go a step further and say you probably don't need to coordinate and shouldn't.

As I said, each application should be a stand-alone application. They'll be reviewed independently on their own merits. And you should really avoid coordinating any budgets so that you don't have any problems with budget shortfalls if the application you coordinated with isn't funded. And again, you should not reference other submitted applications in your application. Once all the grants are awarded, then we'll start coordinating and harmonizing methods across all the projects and with the DCC.

What is the governance structure and decision-making process for the U01 projects, the DCC, and the steering committee? Once we've awarded all of the U01s and the U24, we'll form a steering committee that will have representation from every project. It'll include all the PIs from the U01s and the U24, will include some NIMH staff, as well as any external members that the steering committee thinks would be helpful to involve. The steering committee will then oversee a lot of the work that the DCC will organize, so it'll have some say in the harmonization and coordination efforts, as well as in the DCC's analysis of the combined datasets.

So what that means is that if you're a U01 project and you've collected a bunch of data and you've shared it with the DCC, they won't be doing any analysis with that data without your say and oversight. So no one's going to use the data in a way that you weren't intending or that it's not meant to be used; you'll have some say in that.

Following that question, will the U01 teams or the U24 teams conduct the data analyses? The data analysis specific to the aims of the U01 awards will be the responsibility of the U01 recipient. So when you submit your U01 application, you're proposing data collection, data analysis, and answering a scientific question, and you'll be responsible for carrying that out.

The DCC will collect data from all of the U01s, and to the extent that they're able, in some cases they'll perform analysis on those combined datasets. So they'll be responsible for any additional data analysis that can happen when we combine data. Similarly, each U01 project is responsible for validating and assuring the quality of the data when they transfer it to the DCC. The DCC will then verify the integrity of the data, monitor it for completeness, and upload it on a regular schedule to the NIMH Data Archive, or NDA.
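
As a purely illustrative sketch of the kind of completeness and integrity checking described above, a project might run something like the following before a transfer; the required columns and thresholds here are hypothetical, not NDA or DCC requirements.

```python
# Illustrative sketch only: simple completeness and integrity checks a project
# might run before transferring participant-level data to a coordinating center.
# Column names and thresholds are hypothetical.
import pandas as pd

REQUIRED_COLUMNS = ["subject_id", "visit_date", "task_accuracy", "phq9_total"]
MAX_MISSING_FRACTION = 0.05  # hypothetical tolerance for missing values

def check_submission(df: pd.DataFrame) -> list[str]:
    """Return a list of problems found in a data submission."""
    problems = []
    for col in REQUIRED_COLUMNS:
        if col not in df.columns:
            problems.append(f"missing required column: {col}")
        elif df[col].isna().mean() > MAX_MISSING_FRACTION:
            problems.append(f"too many missing values in: {col}")
    if "subject_id" in df.columns and df["subject_id"].duplicated().any():
        problems.append("duplicate subject_id values found")
    return problems

# Toy submission with one missing value and a duplicated ID
df = pd.DataFrame({
    "subject_id": ["S01", "S02", "S02"],
    "visit_date": ["2024-01-05", "2024-01-07", "2024-01-07"],
    "task_accuracy": [0.88, None, 0.91],
    "phq9_total": [7, 12, 12],
})
print(check_submission(df))
```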

So now we'll go through some questions that are specific just to the U01, and after that, I'll go through some that are specific just to the U24. The first question for the U01 is: will these applications be assessed for responsiveness before being reviewed, and what are the responsiveness criteria? Yes, they will be assessed for responsiveness before they're sent to peer review. The bullet points here list what is considered non-responsive; this is taken straight from the notice online.

So this is what we'll be using to evaluate applications for responsiveness. What is the optimal sample size? The sample size is really going to be driven by the type of data that you're using and the scientific question that you're asking. We certainly want everything to be well powered, so we'll need a statistical power analysis to justify the size that you're proposing, and that justification should be well detailed in the research strategy section. But we don't have any maximum or minimum limits; it's really driven by the questions that you're asking.
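
As one illustration of how a sample size might be justified, a conventional power calculation could be reported along these lines; the effect size here is a made-up example, not a recommended value.

```python
# Illustrative sketch only: a conventional power calculation of the sort that
# could support a sample-size justification. The effect size is hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.4,   # hypothetical Cohen's d
    alpha=0.05,        # two-sided significance level
    power=0.80,        # desired power
)
print(f"Required sample size per group: {n_per_group:.0f}")
```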

Should applicants expect the usual policy requiring a single IRB for multi-site studies? Yes, the multi-site U01s should plan to use a single IRB in accordance with the policy that can be found at the notice listed here.

Can a site be included in more than one U01 application? Yes, that's allowable. But if one site is included in more than one funded U01, we would probably need to review and perhaps revise the budgets to avoid any duplicative expenses and to address any concerns about recruitment, given the additional grant added to that site's purview.

Is it allowable to propose a clinical trial for the U01s? Yes, clinical trials are allowed, but they're not required. And we really want to limit these to studies that propose to use the trial to assess outcomes for deriving or validating a novel clinical signature. Any trials that are testing the efficacy or effectiveness of interventions are not considered responsive to this notice, and they will not be reviewed. And there's more information at the site listed here about NIH's definition of clinical trials and funding for those.

So, what specific data types should be proposed? One of the main things to focus on with IMPACT-MH is that we really want you to select your measurements with scalability in mind and think about their potential to be implemented in routine clinical practice. We're really trying to focus on clinical signatures that can be assessed at the level of the individual, so we want there to be some behavioral measures that can be used in conjunction with data that is available in the clinical record, which would maintain this low-cost, high-usability type of model.

Data types other than behavioral data, such as EEG, MRI, blood-based measures, or genomics, could definitely be used in conjunction with behavioral data, either to further refine or disambiguate the clinical phenotypes or to validate them further. But the primary goal is to focus on these highly accessible measures.

Will data collection measures and methods be harmonized across U01 projects? Yes, they will. When two or more funded U01s propose to measure the same information, we will probably be asking people to use the same measures when that's feasible. And when that doesn't make sense, when we do want to be collecting data from two different measures, we might ask projects to use the same measure as a different project for some subjects so that we can start to compare them more directly. The DCC will be responsible for facilitating the coordination and communication among the projects and figuring out where to best leverage opportunities for harmonization.
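
As a toy sketch of what reconciling two versions of a measure could look like, hypothetical scores from two projects might be placed on a common scale as below; the scale names, ranges, and rescaling rule are invented for illustration.

```python
# Illustrative sketch only: place two hypothetical versions of a symptom scale
# on a common 0-1 scale so combined analyses are possible. Names are invented.
import pandas as pd

project_a = pd.DataFrame({"subject_id": ["A1", "A2"], "scale_v1_total": [22, 35]})  # 0-60 range
project_b = pd.DataFrame({"subject_id": ["B1", "B2"], "scale_v2_total": [11, 19]})  # 0-30 range

# Rescale each version by its own maximum possible score
project_a["harmonized_score"] = project_a["scale_v1_total"] / 60
project_b["harmonized_score"] = project_b["scale_v2_total"] / 30

combined = pd.concat(
    [project_a[["subject_id", "harmonized_score"]],
     project_b[["subject_id", "harmonized_score"]]],
    ignore_index=True,
)
print(combined)
```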

So, given all of the likely delays in initiating data collection due to the post-award consultation, should U01 applicants indicate that the study will be delayed onset with regard to human subjects protection? No, the U01 studies should not be marked as delayed onset; they're not classified as delayed onset here. The human subjects and clinical trials information form should be completed based on the proposed research. There may be some startup delays, and you can build that into your project timeline, but it wouldn't be considered a delayed onset study.

Is there a preference for the U01s to use existing datasets, to collect new data, or both? There is no preference. The scientific goals of your project and the availability of appropriate datasets should guide your decision about whether using existing data or collecting new data is preferable. If you're proposing to use an existing dataset, we're really asking you to be sure that it has all of the data you would need in order to address your scientific question, and to be sure that it's compliant with the FAIR data principles: that the data are findable, accessible, interoperable, and reusable. And if there really isn't an existing dataset like that, then we would hope that you would build a new one, with new data collection, that answers the questions you need to address.

If the use of an existing dataset is proposed in a U01 application, should the specific dataset to be used be identified in the application? Yes, if you're proposing to use an existing dataset, we would want you to specify which dataset that is. And if you don't currently have access to the dataset, documentation that you will be granted access to that dataset should be provided in your grant application.

Should a U01 application be organized like a center grant with cores and projects? No. A U01 is structured basically exactly like an R01. The fact that it's a U01 just means that there will be some input from NIMH across projects to do this harmonization, but it doesn't need to be structured any differently than an R01; there's no need for cores or specific projects. And as I said before, the single research strategy section has a page limit of 12 pages, and that will be submitted with each application. So now on to the U24 questions; we have just a few here.

Again, for the U24s, will these applications be assessed for responsiveness before being reviewed? Yes, they will be assessed for responsiveness, and here are the responsiveness criteria, which are also listed in the notice itself. For the U24 application, should specific data types be anticipated? So again, we're asking the U24, the Data Coordinating Center, to come in with an application when we don't actually know which data will be collected or proposed in the U01s that they'll be working with. So we expect that you should anticipate behavioral data, including task-based and sensor or naturalistic data, as well as clinical records data.

And you can imagine that there would be at least one or maybe two U01s that might collect other types of data, including EEG, MRI, blood-based measures, or genomics. Since we won't know the data types that will be used until after the U01 grants are funded, the U24 application should describe, in general terms, the processes and tools necessary to gather, combine, store, and analyze the health information datasets they would expect.

Are the U24 teams expected to have expertise in the collection and storage of blood biomarkers such as metabolomics, proteomics, and others? The U24 applicants should include descriptions of their capacity to handle and process data and biosamples that they anticipate would be relevant to IMPACT-MH. They will not be expected to serve as the repository to store these biosamples; existing repositories such as the NIMH Repository and Genomics Resource will be used for storage. But if expertise in the specific data types to be collected is not available on the DCC team, individuals with that expertise can be invited to serve on the steering committee or be added to the DCC study staff as needed after we've seen what's in the U01s.

If genetic variables are included in U01 projects selected for funding, will the U01 projects arrange for DNA extraction, genotyping, and sequencing independently, or will this be coordinated centrally by the DCC? If there are enough funded U01 projects that include genomic data, the DCC will coordinate the choice of technology, the timing, and the facility that will conduct the analysis, so that we can minimize site-related variability and batch effects in the genetic analysis.

The cost for generating and analyzing the genetic data will be the responsibility of the U01 teams, with coordination of data analysis by the U24. And what type of expertise is optimal for the DCC study team? We imagine that the DCC must be experienced in coordination and management of multi-site clinical research studies, including having success in meeting various milestones and timelines.

Additional expertise on the DCC team should include proficiency with multimodal data, state-of-the-art computational and data analytics skills, and a few other things that we've listed both on this slide and in the funding announcement. So that brings us to the end of the pre-slotted questions. I think at this time we may have some questions in the chat. Sarah, are there some that we should address for everyone before we move on?

SARAH MORRIS: So far, all the questions that were posted in the chat have been answered in real time. We did receive a couple of questions through email, so maybe we can pivot to those.

JENNI PACHECO: Sure.

SARAH MORRIS: And then if anyone does have any questions that they haven't posted in the chat, please feel free to do that now. So by email, we received a question about clarifying the role of data analysis in the U01 application: should that be part of the aims and the research strategy, or should the aims focus purely on data collection and data extraction from clinical records?

JENNI PACHECO: Yeah. So I think there should be some data analysis in there. What we're really hoping for with IMPACT-MH is to identify these clinical phenotypes that would help with better individual-level prediction. And so while the data collection, having the right data, and having enough of it are really critically important, I think doing some analysis to identify the phenotype should also be central to those aims.

SARAH MORRIS: Yeah. And just to further emphasize that, we expect that the U01s will do their own local data analysis according to what they proposed. We're not trying to build a fully harmonized network of sites here. We're anticipating that there will be a lot of diversity in the measures and methods and participant groups to be enrolled, as proposed by the U01s.

And so we're not going to insist in a heavy-handed, top-down way that every U01 use the same measures and methods, because we just think the diversity of the science will be too great. Instead, we'll be asking the DCC to survey the measures and methods used by the U01s to find opportunities for harmonization. And if there are minor changes or version differences or differences in methods that could be reconciled across sites to really maximize the harmonization of data for centralized analyses, then the DCC will be investigating those opportunities.

JENNI PACHECO: Great.

SARAH MORRIS: Maybe I'll go through the rest of these emailed questions and then we can get to the ones in the chat.

JENNI PACHECO: Yeah, that sounds great.

SARAH MORRIS: Oh, I think I just answered the second one. The second one was: is NIMH envisioning multiple U01s, or one very much larger U01 like ProNET? I think that's what I just said: for those of you who are familiar with the ProNET grant, it's a large multi-site study, and every site is using the same measures. This will be much more diverse than that.

JENNI PACHECO: Yeah. And I think I should have mentioned, or I may have glossed over it, but we are envisioning several U01s and probably just one Data Coordinating Center, which, as I did mention, you don't need to coordinate with ahead of time. We will fund all of those independently, and then once those decisions have been made, we will start harmonizing and coordinating among the sites, knowing that there may be sites that propose populations and questions that have almost no overlap, and there might not be anything to harmonize between those projects.

SARAH MORRIS: Yeah. And just to add on to that, we left open the possibility of funding more than one DCC, in case the diversity of science is such that it would make sense, for example, to have one DCC focused on neurodevelopmental U01s and another DCC focused on U01s that involve mostly data from adults. So we left open the possibility of having two separate DCCs. Will there be a preference for grants that focus on behavioral tasks designed with particular parameters, such as reward processing, or will grants that focus on naturalistic behaviors, such as sleep patterns via Actiwatch, be equally considered?

JENNI PACHECO: So I think we don't necessarily have a preference for task-based behavior over something more naturalistic or passively collected. I think the bigger focus would be on how easy something might or might not be to implement in clinical care down the line. So really, we're hoping to identify clinical phenotypes that help us make better clinical predictions, but if that phenotype is derived from a measure that's really difficult to implement in the clinic, it doesn't become as useful or actionable as something else.

SARAH MORRIS: Great. Last one. There are many more types of computational modeling than AI, Bayesian approaches, and machine learning. Will those other approaches be viewed as equally appealing, or should we stick with those three, which I believe are the specific three mentioned in the funding announcement?

JENNI PACHECO: Yeah, I think we mentioned those as specific examples to give people a place to start, but we are not tied only to those. The scientific justification for the type of data analysis you're using is really going to drive what makes the most sense for your project, so we're open to other types.

SARAH MORRIS: Absolutely. Okay. Turning to the questions in the chat, can you clarify the discouragement of subcontracts mentioned at the beginning of this chat?

JENNI PACHECO: I can try. So, I think we're just trying not to have so many levels of different parts of the project, different oversight, or the need to send funds through multiple levels of institutions. Each project can make its own subcontracts, but having those subcontracts issue their own subcontracts is one tier too many, which we try to avoid.

SARAH MORRIS: Right. So subcontracts are fine; sub-subcontracts get problematic. And if you, for some reason, need to do that, let us know, and we can talk with Grants Management about how to make that work. But subcontracts are fine.

JENNI PACHECO: Yeah. Yeah.

SARAH MORRIS: Can we include optimization of computational phenotyping in nonclinical cohorts?

JENNI PACHECO: Sorry, I was trying to make sure I got all the words there. Yes, I think there is a place for a nonclinical cohort. We talk about this in RDoC: being sure to look at the whole spectrum. I think there's a place for looking at people who might be at risk for a clinical disorder or be prior to a diagnosis. But again, just keep in mind that the ultimate goal here is to try to improve clinical decision-making; if all we're doing is looking at people who are not at risk and don't currently have any problems, this might not become as actionable in the clinic for people who have a current diagnosis or current clinical issues that they need help with.

SARAH MORRIS: Great. I think that is all the questions that haven't been answered in the chat.

JENNI PACHECO: Fantastic. I won't sign us off just yet, so if people want to put more questions in the chat, please feel free. It looks like there's a small enough group that if anyone has a really burning question, or you asked a question and we didn't quite answer it correctly, you can raise your hand and we might be able to take a few of them that way if needed.

But I will just repeat what I said at the beginning. This whole webinar has been recorded. We will be posting the recording, as well as the materials presented, including any of these questions that have come up, on our website after the webinar ends. That website is listed in the Notice of Funding Opportunity, so you can find it there. And if you have any questions that you really wish you'd asked but didn't, email rdocadmin@mail.nih.gov. You can send a question there, and we'll post it and the answer to the website as well so everyone can see.

SARAH MORRIS: One additional question: to what extent should applications adhere to the RDoC matrix?

JENNI PACHECO: So this is an initiative sponsored by, or organized by, RDoC, so of course we want to think about all of these RDoC principles. I think the matrix really just provides exemplars, places to start thinking about things along different constructs and different units of analysis. So you are not tied to only what's in the matrix. If there are other constructs that you think are important that you want to include, by all means, feel free to do that.

I think, as with anything, you'd want to have scientific justification for whatever you're including in your study, but you certainly aren't bound by those. One thing that we've said a few times about this, as a bit of a departure from some of the ways we've talked about RDoC in the past: here we really are starting with the idea that a clinical diagnosis, or things in the clinical record, do have signal in them that is important. People are probably not given a diagnosis of depression that's completely meaningless. So we want to start with that signal, that information that's in the clinical record, and see what we can add to it to further refine what it can tell us about the individual. So we're not asking you to avoid mentioning any diagnoses in this, but keep in mind that we do want to look across the whole spectrum of functioning, as well as apply this to as many different domains and disorders as we can.

SARAH MORRIS: Yeah, it's a shift from RDoC being agnostic about diagnosis to, okay, let's not throw the baby out with the bathwater: what is the signal that can be gained from clinical diagnoses and other information in the clinical record? An additional question: biological measures seem not to be discouraged, just secondary. Are there any specific concerns with using them?

JENNI PACHECO: So I mean, I think the only concern is, as we mentioned, that the focus here is to really try to find phenotypes that could be actionable and maybe added to clinical care or collected in a routine clinical visit. And things like MRI or EEG or a full genotype or genomic workup are not always accessible or able to be implemented in all clinics, particularly those that may not be associated with a larger medical center. So we're trying to stay away from relying on those.

I think there could be cases where, if you think of this as a decision tree, you might be able to identify people for whom this more biological, expensive, or invasive procedure is actually going to tell us a lot more about the person and is really recommended at that point. So we just want to use those measures a little more carefully and not as the first line for every single patient that comes in through the clinic.

SARAH MORRIS: And there's also a role for biological measures, perhaps, in validating some of the novel clinical signatures that might be the focus of a U01 application. So I can imagine aims where the novel clinical signature would be derived from clinical data and behavioral data and then validated using a biological measure that wouldn't be necessary in order to establish that signature in a clinical setting but could be a way of validating it. Next question: by clinical record, are you referring to data from the Electronic Health Record, traditional clinical research assessments, structured or self-report, or both? Jenni, do you want me to?

JENNI PACHECO: Yes.

SARAH MORRIS: Okay. Let's give you a break; you've been talking for 40 minutes. So, okay, yeah, this is a tricky question. The spirit of the funding announcement is to take data from the Electronic Health Record.

Now, we know that there are existing datasets that were not drawn from the clinical record, the Electronic Health Record, but that have data one could find in the Electronic Health Record. So we sort of left that door a little bit open, to use data that you might typically find in the Electronic Health Record but not necessarily extracted directly from it.

But the spirit of the announcement is that we don't want to focus on deriving novel clinical signatures using data that are never going to be available in a regular clinical setting, right? Throwing in a bunch of expensive imaging, etc., to derive a clinical signature that nobody can then take into the clinic is just not the point. So I would advise you to steer clear of including clinical interviews that take two hours to administer, which nobody is ever going to be able to do clinically.

Certainly, self-report measures and brief clinical interviews are feasible to use in clinical settings. So those kinds of tools would be perfectly fine, even if those data aren't directly pulled from an Electronic Health Record but are gathered as part of the research project. George, raise your hand or use the chat if that doesn't answer your question, or if you or anybody else has a follow-up question.

JENNI PACHECO: All right. Do we have any more questions coming in, or?

SARAH MORRIS: Not now.

JENNI PACHECO: All right.

SARAH MORRIS: They're slowing down. [laughter]

JENNI PACHECO: I will pause for just a moment in case anyone else wants to get one last one in. But while we wait, I'll just thank everyone for joining in here live or watching it later. And feel free, like I said, to send any questions you have to this email address, rdocadmin@mail.nih.gov, or reach out to your current program officer or to me, jenni.pacheco@nih.gov, and we can help get you any answers or feedback prior to your submission, due on June 14th.

SARAH MORRIS: There's one additional question. Do we want to go ahead and answer that now or do it in writing? Okay. So the question is about whether a clinical trial could be about a diagnosis, for example, major depression, or could be more consistent with RDoC, for example, patients with high levels of mood or anxiety symptoms. Would either design be responsive to this NOFO?

We should talk about clinical trials separately. So we would encourage applications in the spirit of RDoC, that is to say, not making any assumptions about the validity of an existing diagnostic classification. Again, there might be information in that clinical diagnosis, but we don't want to make assumptions about the homogeneity and validity of any given diagnosis. So we would encourage you to lean more toward the RDoC side of things. And then as far as clinical trials: a U01 application should not include a clinical trial to test the efficacy of a novel intervention. Those kinds of treatment development applications really need to come in under the NIMH clinical trials series of funding announcements for treatment development. However, if you wanted to use an intervention with established efficacy, for example, to test the predictive utility of a novel clinical signature, that would be an allowable type of clinical trial under this announcement. Okay.

JENNI PACHECO: Great. All right. Well--

SARAH MORRIS: I do believe we are done. [laughter]

JENNI PACHECO: Seeing no more questions. Hopefully, that means we've answered absolutely everyone's questions. But again, thank you, everyone, for tuning in here and for sending in some questions ahead of time. And please be in contact if you have more questions before the submission deadline.

SARAH MORRIS: Thanks, everyone.

JENNI PACHECO: All right. Thank you, everyone.