
 Archived Content

The National Institute of Mental Health archives materials that are over 4 years old and no longer being updated. The content on this page is provided for historical reference purposes only and may not reflect current knowledge or information.

Webinar: RDoC - Fear & Anxiety: From Mechanisms to Implementation

Transcript

[music]

Speaker 1: Welcome, everyone. This is the Delaware Project RDoC ABCT Webinar Series, the inaugural webinar. We're going to be discussing fear and anxiety, moving all the way from mechanisms of pathology to implementation. And we're really excited about this webinar series. We really hope that this will accomplish a lot, particularly connecting some very disparate areas of intervention science, inspiring research, development, and implementation across areas, creating resources from this dialogue that can be used in future innovation, and generally creating a dialogue that is oriented towards solving big problems by fostering innovation and cross-cutting themes in research. And we feel like we have a great webinar to that end. To start off with, I'd like to talk a little bit about why all of this is so important. Those in the audience probably already know about the global burden of mental illness and neuropsychiatric disorders. This is the World Health Organization data on DALYs, or disability-adjusted life years. And as you can see, neuropsychiatric disorders have a huge impact on quality of life and on the global burden of disease. To address this, we need to do a lot. We need to look across areas. We need to identify mechanisms of pathology, treatments that target these mechanisms in an efficient way with precision, and then get these treatments out into the community. This was one of the main aims of the Delaware Project, which is all about rethinking the way intervention science works.

Speaker 1: Traditionally, intervention science was kind of a linear pipeline. We started with basic research, often very good research, that led to intervention development, thanks to translational thinkers, and then into packaged interventions that could be tested in randomized trials. But all too often, this was where intervention science ended, with efficacy trials in the lab. And the reason was pretty simple: beyond here, there are real challenges, especially for researchers who are worried about internal validity. But the past decade or so has seen some really innovative and exciting work in effectiveness research, showing that interventions in the community can be effective, and moving from small- to large-scale demonstrations of this. So there's great work in that direction, but there's still work to be done. And dissemination and implementation has been coming online, especially with the formation of the Dissemination and Implementation Science Special Interest Group at ABCT, SIRC, and other groups that are not only disseminating and implementing these interventions but studying the dissemination and implementation process. The problem has been that these areas are kind of their own little silos, and the linear pipeline doesn't work to address the problem the way we want it to. Accordingly, the Delaware Project really worked to think about ways to integrate these areas such that they can still inform one another in the linear fashion, but also so that dissemination and implementation can inform basic science, creating questions that can be answered in the lab that would be relevant to disseminating and implementing interventions. Likewise, basic science can inform dissemination and implementation, especially areas like social psychology and industrial-organizational psychology. Dissemination and implementation can feed back through intervention development, whereby translational thinkers can incorporate the real-world challenges in dissemination and implementation when designing, revising, and refining interventions.

Speaker 1: Efficacy trials can inform basic science as new information is brought to light that highlights questions that need to be answered about the mechanisms of pathology. There are some cautionary paths here. For example, moving directly from efficacy trials to dissemination and implementation may create premature hopes about an intervention that doesn't fit well in the community and therefore, while not prohibited outright, could create problems. This much more integrative vision is one of the foundational stones for this webinar. Our goal is to think about mechanisms of pathology, how those inform treatment, and then how those inform dissemination and implementation. And this webinar will address each of these areas with three different talks. However, there are some common themes as we move through these areas and back through these areas. And in our question-and-answer period at the end, we're hoping we're going to get a lot of great ideas and good questions whereby our three experts can talk about the cross-area themes that really move this forward. So we're very excited to have with us Dr. Stewart Shankman from the University of Illinois at Chicago. He's going to talk about mechanisms of pathology as related to fear and anxiety, followed by Dr. Alicia Meuret from Southern Methodist University, who is going to talk about how some of that research has translated into her work with interventions addressing the constructs of fear and anxiety, and then Dr. Shannon Stirman, who's going to talk about dissemination and implementation both from a broad perspective and as related to fear and anxiety. Throughout all of this, you'll notice that the theme of fear and anxiety is cross-diagnostic - and this is consistent with the innovative approach of RDoC - which we feel creates a unifying opportunity for this cross-area dialogue. Without further ado, I'm going to now pass the screen over to Dr. Shankman.

Speaker 2: Thanks, Tim. I'm going to share a screen. That was the second. Okay. So thanks for that introduction, Tim. So I'm going to be talking about response to unpredictable threat as a novel and specific target of treatment. So just to do a brief overview of the Research Domain Criteria Project, or RDoC Project: it's a research initiative put forth by NIMH with the goal of moving beyond studies of single categorical diagnoses and identifying trans[inaudible] constructs. And the idea is that you can study these constructs across multiple units of analysis. So the five, or now six, broad domains of RDoC: [inaudible] things like acute fear and potential threat; positive valence system constructs, things like reward learning; and individual differences in the [cognitive?] systems, things like attentional systems or memory. Social processes are things like attachment and affiliation. And arousal and modulatory systems are things like circadian rhythms and arousal functions. And a sixth domain that's pending is the motor systems domain. And the idea is that within these six domains, these constructs can be studied across these different units of analysis. So you can take a negative valence system construct, which is the main one we're going to be talking about today, and study it at the cellular level, at the circuit level, at the [physiological?] level, and so on, to get a sort of full understanding of the impact that these constructs can have on functioning and psychopathology. And just to be clear, RDoC is not biologically reductionist. Environment very much plays a role in our understanding of these constructs, as does development. So the idea is, in terms of development, that you can study these constructs throughout development - how they might change, or when they might come online or go offline, and so on.

Speaker 2: So RDoC is a framework that sort of guides, again, a lot of the talks that we're giving today and a lot of the work that my lab does on the co-occurrence of depression and anxiety. So depression and anxiety co-occur at very high rates. And just to give you an example from the National Comorbidity Survey, the lifetime prevalence of major depression is about 16%, and the lifetime prevalence of any anxiety disorder is about 28 to 30 percent, and the overlap is substantial. It varies depending on the anxiety and depressive disorder, but the comorbidity can range anywhere from 25 to 75 percent. So, again, very high co-occurrence rates of mood and anxiety disorders. That's what leads people like me and other folks to recognize the importance of trying to study the core mechanisms of these conditions in order to fully understand their co-occurrence. But it's also equally important to study the mechanisms that are more specific to anxiety versus those that might be more related to depression so we can then help identify more specific targets for treatment. And I think this relates to the goal of personalized medicine, which I think a lot of us want to work towards. Because some targeted interventions might be applied to some disorders, but it's likely that most disorders involve multiple mechanisms. And a comprehensive intervention might involve combinations of different targets, or targets that might be more specific to anxiety, or targets more specific to depression, that we might need to come up with in order to truly do personalized medicine - to really understand the mechanisms that are shared versus the ones that are specific. So the main point of my talk is that heightened sensitivity to unpredictable threat, an RDoC construct, relates to certain anxiety dimensions but not depression. And I'm also going to make the argument that this RDoC construct of heightened sensitivity to unpredictable threat is not just characteristic of anxiety dimensions but also connotes vulnerability for them as well.

Speaker 2: So individuals [with all?] anxiety disorders are highly sensitive to threat. Some individuals with OCD are sensitive to germs. People with social phobia are going to be sensitive to evaluation, perceiving that as threatening. Individuals with panic disorder have a heightened response to physiological arousal like heart rate and muscle tension. But more recently, researchers are starting to see that it's not just threat in general - the predictability of the threat makes a difference. Specifically, if the threat is unpredictable, that might be particularly anxiogenic for individuals. And this distinction between predictable threat and unpredictable threat continues to grow in importance in the field. So you can think about predictable threat as a situation where you need to mobilize for immediate action - it's this classic fight, flight, or freeze response - and it's akin to the RDoC construct of acute threat. So you're walking through the woods, and a bear jumps out at you, teeth bared. There's nothing to predict. There's no ambiguity. That's the fear response. That's predictable threat. Again, there's no uncertainty or unpredictability involved. Unpredictable threat, however, would be if you're walking through the woods and-- "Wait, is there a bear off in the distance? I'm not sure. Does it see me, if it is a bear? Is it coming towards me?" The danger is unpredictable. The danger is uncertain, so you need to prepare for a potential negative event. You need to be more hypervigilant. And, again, it's akin to the RDoC construct of potential threat. This distinction between predictable versus unpredictable threat has been shown to have different underlying neurobiology in animal and human studies and also a different response to pharmacological challenge. So in humans, in our lab and in other labs, we examine these two constructs - predictable and unpredictable threat - using Christian Grillon's NPU task.

Speaker 2: So in the NPU task, there are three experimental conditions: a no-shock, a predictable-shock, and an unpredictable-shock condition. In all three conditions, a shape appears and then disappears, appears and then disappears, for, say, 90 seconds. In the no-shock condition, people are safe from shock whether the shape is on screen or off. In the predictable-shock condition, people get shocked when the shape is on screen, but not when it's off. So now I'm in danger, now I'm not. Now the shape appears, I'm in danger. Now I'm not. I can predict when the danger is coming - that's the predictable condition. In the unpredictable condition, I can get shocked in between the shapes, at the beginning of the shape, maybe right before the shape, or not at all. I don't know when the shock is going to come. That's unpredictability. That's the condition that we really think is going to characterize those with anxiety disorders. So one of the dependent variables that we get from this task is the EMG startle response, obtained by placing two electrodes below the person's eye and measuring the strength of their blink response across these different conditions. And the basic task effect in startle looks something like this. You have a low but normal startle response in the no-shock condition, whether the shape is on the screen or off, but a heightened startle response when the shape is on the screen in the predictable condition, as well as whether the shape is on the screen or off in the unpredictable condition. Now, there are a lot of individual differences in how people react to unpredictable threat. [inaudible] we and other groups have shown that panic disorder, but not major depression, is associated with an elevated response to unpredictable threat. So whether the shape is on the screen or off the screen, individuals with panic disorder show an elevated startle response compared to individuals without panic disorder. We've also shown that startle to unpredictable threat is related to a higher familial rate of panic disorder. So individuals with panic disorder in their families also show a heightened startle response to unpredictable threat.
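As an illustration of how the task effect described above might be summarized, here is a minimal, hypothetical Python sketch; the column names, numbers, and scoring approach are assumptions for illustration only, not material from the study.

```python
# A minimal, hypothetical sketch of summarizing NPU-task startle data.
# The column names, values, and scoring approach are illustrative assumptions.
import pandas as pd

# Each row is one startle probe: condition (N = no shock, P = predictable,
# U = unpredictable), cue period ("on" = shape on screen, "off" = between shapes),
# and eyeblink EMG magnitude in microvolts.
trials = pd.DataFrame({
    "condition":  ["N", "N", "P", "P", "U", "U"],
    "cue":        ["on", "off", "on", "off", "on", "off"],
    "startle_uV": [18.0, 17.5, 42.0, 19.0, 39.0, 36.5],
})

# Average startle magnitude per condition x cue cell. (In practice one would
# standardize within subject before averaging across many probes.)
summary = trials.groupby(["condition", "cue"])["startle_uV"].mean().unstack()
print(summary)
# Expected NPU pattern: startle elevated during the cue in P, and elevated
# both during and between cues in U, relative to the N condition.
```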

Speaker 2: And, interestingly, this finding held over and above whether those individuals themselves had a diagnosis of depression and/or panic disorder. In other words, independent of whether they themselves had panic disorder, if they had a family history of panic disorder they showed an elevated startle response to unpredictable threat. So this led us to wonder, "Well, maybe this response to unpredictable threat is a vulnerability factor for at least panic disorder, given its association with family history." We also examined whether unpredictable threat responsivity relates to more than just panic disorder. So in a recent study with my colleagues Luan Phan and Stephanie Gorka, we looked across multiple anxiety disorders and showed that individuals with social phobia, specific phobia, and PTSD exhibited a heightened response to unpredictable threat, but those with major depression and GAD did not. And this got us thinking, "Well, I wonder if response to unpredictable threat characterizes fear disorders versus distress/misery disorders." There's a large literature by Bob Krueger and other folks that differentiates internalizing psychopathologies into those characterized by fear - things like social phobia, panic disorder, and specific phobia - versus distress/misery internalizing disorders, which are things like GAD and major depression. And interestingly, in this study we found no group differences on response to predictable threat. So this elevated startle response was very specific to the response to unpredictable threat. So, again, we want to return to this question: is this variable, heightened response to unpredictable threat, a concomitant of fear disorders or anxiety disorders in general, or does it reflect vulnerability as well? So in our current RDoC-funded R01, which is a family study, we divided our sample into those that had a family history of fear disorders - panic disorder, specific phobia, or social phobia - and compared them to those who didn't have a family history of panic disorder, specific phobia, or social phobia. And sure enough, we found those with a family history of these fear-based disorders showed a heightened startle response to unpredictable threat compared to those without a family history of fear disorders. If we divide the sample into those with a family history of distress/misery disorders - GAD and major depression - we see no group differences. And this held after adjusting for the proband's own diagnosis. So independent of whether the individual had, themselves, a fear-based disorder, if you had a family history of a fear disorder, you had a higher startle response to unpredictable threat compared to those without a family history of a fear disorder. And, again, there were no effects for predictable threat, showing the specificity for unpredictability.

Speaker 2: So to summarize, individual differences in the RDoC construct of unpredictable (aka potential) threat are an important mechanism for certain internalizing psychopathology, and maybe they're only important for fear-based disorders and not distress/misery disorders. And our results also suggest that this might not just be a characteristic of these fear-based disorders but may also reflect vulnerability, given its association with family history. So maybe preventive interventions could try to change this target of individuals' responsivity to unpredictable threat. How would this look? Say you're doing a prevention or intervention trial, so you randomize people to either your intervention or a control condition. Maybe the target that you're trying to change is the response to unpredictable threat - reducing it - and that becomes your dependent variable, which ultimately will then, hopefully, lead to a decreased onset or escalation of certain anxiety disorders. Again, think of this as almost a mediating variable, the intermediary variable that is the target of intervention. And startle as a measure - as far as biological measures go - can be administered relatively easily in clinics as an indicator, maybe of early treatment response, or of treatment response in general. So Alicia, in her talk, is going to be talking about how other targets could be used in intervention research. I just want to acknowledge the funding sources for my work, my lab team here at the University of Illinois at Chicago, and the Chicago Cubs for winning the World Series a few weeks ago. [Next?].

Speaker 1: Thank you very much, Stew. We're going to go on now to hear from Alicia Meuret. We really are appreciative of these experts thinking cross-diagnostically and cross-area. And some clear themes are emerging, for example, the importance of threat as a marker of psychopathological mechanisms. And the ways in which basic research can identify those mechanisms help us to understand them in a way that leads to treatment and treatment research. I do want to make one kind of housekeeping note and that is as we go along, we're going to save all questions for the end, but if you have a question in the moment go ahead and type it in. You'll notice at the bottom of the screen, there's a little Q&A icon, and you can click on that, type your question in, and then we'll get to that in the question-and-answer session at the end. Okay. Without further housekeeping, go ahead, Alicia.
[silence]

Speaker 1: Alicia, we're having trouble with your audio.
[silence]

Speaker 3: [Should we?] start again? Okay. So in my talk, titled Translation to Intervention: Targeting Threat Sensitivity in Anxiety and Depression, I would like to provide the audience with an example from my research on testing targets of threat sensitivity and how their engagement or change via targeted intervention can influence clinical functioning. Just briefly, I will discuss the principles of target engagement via intervention - what are targets, and how can we test whether their change, in turn, leads to or mediates therapeutic change? When we understand mechanisms of disorders across multiple units of analysis - genes, molecules, cells, circuits, physiology, behavior - we can test interventions designed to target those mechanisms. I will then provide two examples of mechanistically based interventions that are aimed at targeting assumed mediators of threat sensitivity, the first being CO2 as a target in panic disorder and the second being the testing of multiple mediators across several units of analysis in an ongoing pan-diagnostic intervention study conducted in collaboration between my lab and the lab of Michelle Craske at UCLA. So mechanism research in the context of psychotherapy research, or the question of what works and for whom, may have until recently felt to many like the famous quote by Mark Twain: "Everybody talks about the weather, but nobody does anything about it."

Speaker 3: Despite a mounting need for psychological services for individuals suffering from an anxiety disorder, only a small number receive, have access to, or can afford therapy. When they do receive treatment, it is rarely evidence-based. However, even in the most empirically supported psychological intervention, CBT, less than half of sufferers respond favorably. The key for improving efficacy and dissemination, in the ideal world, would be the identification and manipulation of mediators - biological and psychological markers that drive the disease - the identification of moderators, pre-treatment characteristics which determine the success or failure of a particular intervention, and then, lastly, interventions that are brief, cost-effective, and easy to train. However, in the majority of efficacy studies, there is a continued focus on whether an intervention is successful or not, as opposed to whether the intervention indeed successfully changes the underlying dysfunction or target - the mediator - which in turn then changes the outcome, which here would be the [inaudible]. So what are suitable targets? Suitable targets have to be scientifically grounded, which means that there should be an established link between the target and the functional and clinical effect. Let's use the example of salt, and this is a simplified example. Lifestyle interventions for hypertension suggest that there is a direct link between sodium, which would be the target, and blood pressure, which would be the clinical outcome. More sodium consumption should, therefore, lead to higher blood pressure. So if you eat less salt, you should have lower blood pressure.

Speaker 3: To test this assumption, we have to manipulate the target - sodium - by introducing a low-sodium diet, which then, in turn, should lead to lower blood pressure. So using a mediational model, the intervention, the low-salt diet, should lead to a change in the target, so lower levels of sodium, which then, in turn, should lead to lower blood pressure. If the effect of the low-salt diet on blood pressure were reduced when accounting for the change in sodium - with sodium as a partial mediator or as a full mediator - that would provide evidence for sodium playing an important role. We could also then assess the specificity of the intervention by including an alternative intervention, such as a generally healthy diet, which should not lead to a reduction in the target if sodium is indeed specific. And we could assess the specificity of the mediator by simultaneously testing competing mediators, such as weight loss, in this case a non-specific mediator. And finally, we could also assess the specificity of the targeted mediation. So, in this case, we would expect that people who are particularly high on sodium levels would benefit the most from an intervention targeting sodium levels. So with this model in mind, I will present the design and findings of an intervention that targets a biological index in panic disorder, carbon dioxide. In the context of the RDoC model of negative valence, respiration is really only one of the indices under the physiological units of analysis. I will describe how research into the negative valence system's threat sensitivity at the physiological level led to the identification of hyperventilation as a potential modifiable mechanism and target for intervention.
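To make the mediational logic concrete, here is a minimal, hypothetical sketch of the single-mediator salt example using simulated data; the variable names, effect sizes, and regression-based (product-of-coefficients) approach are illustrative assumptions rather than the speaker's actual analysis.

```python
# A minimal sketch of the single-mediator (salt) logic, assuming simulated data
# and a simple regression-based (product-of-coefficients) approach. Variable
# names and numbers are illustrative assumptions, not study data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
diet = rng.integers(0, 2, n)                       # 0 = usual diet, 1 = low-salt diet
sodium = 140.0 - 6.0 * diet + rng.normal(0, 3, n)  # target: lower under the intervention
bp = 90.0 + 0.3 * sodium + rng.normal(0, 4, n)     # outcome driven by the target

# Path a: intervention -> mediator (change in the target)
a = sm.OLS(sodium, sm.add_constant(diet)).fit().params[1]

# Path b: mediator -> outcome, adjusting for the intervention
X = sm.add_constant(np.column_stack([diet, sodium]))
b = sm.OLS(bp, X).fit().params[2]

print("indirect (mediated) effect a*b:", a * b)
# Inference on a*b is usually done by bootstrapping; a healthy-diet control arm
# and competing mediators (e.g., weight loss) would be added to test specificity.
```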

Speaker 3: This slide shows how reactivity to fear is an aspect of the negative valence system. Within that system, we're targeting panic/fear at a physiological level, based on scientific evidence for the role of hyperventilation in panic disorder. When we breathe normally, our carbon dioxide levels are in a population range of 37 to 45 mm Hg. However, if we breathe in excess of metabolic demands, CO2 decreases below normal levels, and we refer to that as hypocapnia, or hyperventilation. Likewise, if we breathe too little, our levels increase into an abnormally high range, and we refer to that as hypercapnia. Both hypercapnia and hypocapnia have been tied to panic hyperventilation symptoms such as dizziness, heart racing, and shortness of breath, as well as cognitive symptoms such as fear of dying. And these are symptoms that panic patients in particular are very sensitive to. Indeed, both basic and applied research has provided evidence for sustained and acutely lowered levels of CO2 in panic and also other phobic disorders. Phobic patients tend to hyperventilate, and so have particularly low levels of CO2, when confronted with their feared cues, such as blood for a blood phobia patient or driving for a driving phobia patient. There's evidence that panic patients have a delayed recovery during a hyperventilation test - they continue to hyperventilate after the test - whereas people with social phobia or healthy controls do not.

Speaker 3: And finally, there is evidence for variation in CO2 - hypercapnia as well as hypocapnia - preceding the onset of out-of-the-blue panic attacks. CO2 in these studies was associated with respiratory symptoms, chest pain, and dyspnea, but also cognitive symptoms such as fear of dying. Thus, if CO2 is indeed an important biomarker in anxiety/panic, interventions aimed at normalizing hyperventilation should prove effective. Capnometry-assisted respiratory training, or CART for short, was developed to examine systematically the merits and mechanisms of changing hypocapnia in patients with panic disorder. This brief four-week treatment uses portable capnography, which allows the testing and monitoring of end-tidal CO2 levels along with respiration rate, oxygen, and heart rate, and allows one to very systematically assess to what extent patients are indeed able to normalize CO2 levels if they are in fact in a hypocapnic range. The daily exercises are comprised of slower but, most importantly, more shallow breathing. Evidence from five randomized controlled trials in patients with panic disorder, but also asthma, shows normalization of CO2 along with significant improvements in clinical symptoms. But it was unclear whether the change in this mediator, CO2, was in fact driving the improvement. To test the specificity of CART in achieving reductions in panic symptoms by means of increasing CO2 levels, we expanded our design to include an alternative intervention, cognitive therapy. Whereas we assumed that cognitive therapy would lead to a reduction of panic symptoms, we did not assume that there would be a change in CO2 levels.

Speaker 3: We also tested the specificity of the mediator CO2 in successfully reducing panic symptoms by simultaneously including competing mediators, in this case misappraisal and perceived control. Using a multimediator analysis, we tested the impact of each mediator over and above the other mediators within the two active conditions. In the individuals randomized to CART, CO2 unidirectionally mediated the reductions in panic symptom severity over and above the competing mediators, perceived control and cognitive misappraisal. In patients assigned to cognitive therapy, misappraisal and perceived control bidirectionally mediated outcome. However, cognitive therapy did not lead to a normalization of CO2 levels. In fact, CO2 remained in a hypocapnic range. However, in some of the patients where CO2 did increase, CO2, again, unidirectionally drove the changes in clinical outcome over and above the therapy-specific mediators, even though CO2 was not actually targeted. As mentioned earlier, in the context of the RDoC model of negative valence, respiration is really only one of several physiological units of analysis under the negative valence system. The goal of our ongoing collaborative study is to examine target engagement across multiple units of analysis of acute threat, potential threat, and sustained threat, using an intervention that is specifically aimed at targeting multiple constructs of the negative valence system and that should lead to a more global reduction in threat sensitivity. In this ongoing intervention study, individuals with elevated levels of anxiety, depression, or stress and low clinical functioning are randomly assigned either to an intervention aimed at reducing threat sensitivity, called Negative Affect Treatment or NAT, or to an intervention aimed at improving reward sensitivity, called Positive Affect Treatment or PAT, which targets reward anticipation and motivation, reward consumption, and reward learning. By simultaneously testing multiple possible mediators of threat and reward sensitivity, including the two mediators mentioned today, CO2 and startle magnitude, we hope to identify the most sensitive mediators and moderators of threat and reward sensitivity. I want to thank the NIH for their generous support of this work, and my students, postdocs, and collaborators.
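As a rough illustration of testing each mediator over and above the others, here is a small hypothetical sketch in which treatment and both candidate mediators enter one outcome model; the coding, variable names, and simulated data are assumptions, not the study's actual analysis.

```python
# A rough sketch of the "over and above" idea in a multimediator analysis:
# enter all candidate mediators into one outcome model so that each one is
# adjusted for the others. Coding, names, and simulated data are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 150
tx = rng.integers(0, 2, n)                              # hypothetical coding: 0 = CT, 1 = CART
co2 = 33.0 + 7.0 * tx + rng.normal(0, 2, n)             # end-tidal CO2 (mm Hg)
control = 40.0 + 10.0 * (1 - tx) + rng.normal(0, 5, n)  # perceived-control score
panic = 30.0 - 0.8 * (co2 - 33.0) - 0.3 * (control - 40.0) + rng.normal(0, 3, n)

# Outcome model with treatment and both candidate mediators entered together.
X = sm.add_constant(np.column_stack([tx, co2, control]))
fit = sm.OLS(panic, X).fit()
print(fit.params)  # order: const, tx, co2, control - each mediator adjusted for the others
```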

Speaker 1: Thank you very much, Alicia. We're really excited by these developments because it's clear that targeting these mechanisms creates greater precision and, therefore, greater efficacy of interventions. The real challenge, of course, is how do we get them into the hands of community practitioners who can make use of them? That's going to be exactly what Dr. Stirman talks about, so we'll go next to her. Take it away, Shannon.

Speaker 4: How's that? Perfect. Okay. So I couldn't resist the title From RDoC to Our Docs. My colleague, Craig Rosen, gave me that, so cue the groan. But I also want to mention that increasingly we're learning that in the practice settings where we're trying to implement these interventions, providers are often not PhDs. They're MDs. They're social workers, counselors, and often lay health workers in some settings. And that's an important thing to consider when we think about how to implement these interventions. So the reason that implementation science has emerged in the last decade or two is that we started to realize that even if we develop an intervention, test it, publish the results in a journal, and give some conference presentations or workshops, it's not being implemented in routine care settings at the rate that we'd like to see. And there are a number of factors at play, which we'll go over. But what we're finding is that most newly developed interventions don't ever make it into routine care, and for those that do, there's an estimated 17 years from research to practice. Even when mental health systems shift policies and make significant investments in training and in preparing the workforce to deliver these interventions, we see significant variations in how often these treatments are being offered, how well they are being delivered, and also in the patient-level outcomes that we see. So this has really led to more investment and more research in the area of implementation science so that we can better understand how to bring effective treatment into routine care settings. And there are a number of factors that have been shown through research to influence the success of efforts to implement evidence-based treatments in routine care settings. These factors exist at multiple levels, and they can also influence one another.

Speaker 4: So, for example, policy changes in the outer context - for example, changes in reimbursement policies - can influence whether or not an organization decides to invest in training their workforce to offer the interventions. If it can be reimbursed through insurance, it's more likely to be implemented. So, for example, preventive care was emphasized in the Affordable Care Act, and that's something that more settings have demonstrated some openness to. So I'm going to talk through a little bit of how these different factors at different levels might influence whether or not these interventions that we talked about are successfully integrated into practice after they're developed and tested. One thing to mention is that RDoC was conceived as a research tool, and it really hasn't penetrated the practice world yet. So in some ways, it could amplify some barriers, but other [inaudible]-- or if interventions require special equipment, that could mean a shift in terms of routines, and workflow, and space needs. I'll illustrate that in a second, but first let's walk through some of these different influences on implementation success. So I talked a little bit about reimbursement policies, but there are also public perceptions of what effective treatment is, and for psychosocial interventions, what psychotherapy is. Some people still have the notion of therapy being lying on the couch and free-associating while somebody sits behind you, takes notes, and says very little. But we also have to think, when we're treating anxiety, about some of the perceptions around exposure. So, very recently, in The Atlantic there was an article titled The Only Cure for OCD is Expensive, Elusive, and Scary. And you see the picture that was used to illustrate this. So media can influence and be influenced by public perception of what psychotherapy is. And they can play a role in how open people are to implementing new practices.

Speaker 4: At the level of the inner context - so this is within an organization that might implement these treatments - factors like the support of the legal department and the compliance officer can play a role. So, for example, if the intervention required some form of exposure that would take people out of the office, the legal department might object to that. And that could influence whether and how certain types of exposure were done or not done in that setting. Other considerations include things like the structure of service delivery, space and staffing availability, as well as workflow. So, for example, suppose it were important to examine startle response to see whether a particular intervention is appropriate for someone, or to see if treatment is working. Decisions would need to be made about how much equipment could be afforded. So that would maybe be more at the outer context level, but also the inner context - whether the organization can invest in any type of equipment that would be needed. But then there would also be considerations about where it would be stored, whether there's a good space to store it and secure it, and who would administer the assessment. And decisions about that might depend on whether or not assessment could be reimbursed. If not, it might need to occur during treatment. And then that would mean that there would need to be training for all practitioners on how to use it, as well as coordination on the use of it so that it was available when people needed it, etc. And these all sound like relatively small things. But in the context of a busy and underfunded public mental health setting, they might add up to become somewhat difficult to overcome. So that's something to think about at the early stages, as we're developing assessments and interventions - whether or not there are scalable ways for those things to be rolled out into routine care settings, and what the most efficient way of getting the information we need would be.

Speaker 4: And then, in regard to treatment, we also have factors related to the therapist that could in some ways lead to openness to some RDoC concepts but also maybe resistance to others. So therapists don't tend to like the DSM. They tend not to think that it reflects what they see in practice. And they tend to think a little bit more about what individuals' goals are, and in a somewhat different way about conceptualizing the underlying disorder. So they might be open to the idea that there are some underlying factors that need to be addressed before improvement can be seen. But the nature of those underlying factors that we identify through a program of research that's informed by RDoC might not really fit with the way that they conceptualize treatment. So those are things that would need to be considered in terms of how we train and how we present the interventions to clinicians and to the people who would potentially be delivering them. There is some evidence, though-- Melanie Harned and Linda [Demask?], and others have done research on technology-based training and consultation strategies involving learning communities to increase the use of exposure. These provide opportunities for clinicians to see and experience the impact of the intervention and get support as they're learning to deliver it, and they also address perceptions as well as organizational barriers. So there are implementation strategies being developed that can address some of these challenges. Regarding the intervention itself, you'll see over in the innovation circle that concepts like relative advantage, compatibility, and the observability of the intervention are also relevant, because they're going to influence whether or not people think it's worth it to get trained, whether they continue doing the intervention, and whether they perceive that it has an advantage over what they're already doing. So establishing ways to track outcomes in routine care practice settings can allow those outcomes to be fed back to the treatment developers [inaudible] and inform refinement of the treatment. And this is what John Weisz advocates in the deployment-focused model of intervention development. And then also, principles of user-centered design, which we'll talk about in a few minutes, can be informed by the outcomes that we see in order to make the intervention more efficient. This sometimes already happens with things like cultural adaptation of existing interventions. But it can also be used to increase engagement and the efficiency of treatment.

Speaker 4: So I also want to give an example of technology-based assessment and interventions, because technology can actually facilitate many of these interventions that we're talking about. It can make exposure more controlled in terms of the amount of time and stimulus required. But it's important to consider factors like cost and the available technology. Over time, wearables might make some aspects of assessment more feasible. And as smartphones become more ubiquitous, and since these tools use very little data, they can also be used, potentially, to administer some of the different paradigms that we might find to be effective. One example of the use of technology is the CALM intervention that Michelle Craske and others worked on. This is an intervention that walks a patient and a non-mental-health-trained professional through the intervention, so it actually was a nice way to facilitate training and adequate levels of fidelity. And Geoff Curran and colleagues did a study to look at barriers and facilitators and found that the degree to which it was prioritized and the amount of buy-in, as well as factors like space and workforce, somewhat weakened enthusiasm for the intervention. But the fact that it was low burden and that they got good feedback from patients were positives that facilitated the ongoing use of the intervention. In terms of sustainability, one factor that needs to be considered is whether the interventionists can continue to be paid for within the budgets of the organizations that implement an intervention like this, or whether there's a way to integrate it into the regular workflow of someone who's already employed.

Speaker 4: So shifting gears a little bit, we've talked about potential barriers and facilitators. Understanding barriers can inform the selection of implementation strategies for interventions that have already been developed, but it can also be used to inform the design of interventions so as to minimize the challenges associated with getting treatments into routine practice. That way we're less likely to be sitting here in 20 years wondering how to get RDoC-informed interventions into routine practice settings. So user-centered design is a principle that's often used in web and app development and in consumer goods. And there was a recent article in Clinical Psychology: Science and Practice that suggested that it could be useful for developing interventions as well. One thing to stress is that this is not development by committee: you've identified targets, and you have an informed theory on how to produce change, and that has to be one of the guiding factors. But knowing the constraints of the environments into which you ultimately hope these interventions will be implemented allows you to build something that will work in the intended setting. So bringing stakeholders to the table - the people who will be delivering it, potential patients, administrators within organizations - can allow their input to increase the likelihood that interventions will ultimately be transported into treatment settings. And RDoC can facilitate this work by informing the development of the most efficient and precise forms of intervention possible, which can have an impact on things like the learnability and memorability that you see up on this chart. But it's also important to think about how the material might need to be presented to the audience - the providers and patients who might be less familiar with the concepts. So thinking even about what the psychoeducation would need to look like, and how the rationale would be presented, and finding compelling ways to build that into this or any [inaudible] intervention can be important to consider.

Speaker 4: And then also, when developing an intervention, learning about these potential barriers and facilitators at the outset allows the development of interventions that are more likely to be successfully integrated into the intended setting and reach the intended audience. So using an iterative process as you're developing and piloting the intervention with some of these factors in mind might ultimately increase the success of efforts to get these treatments into routine care. And finally, I want to talk about how implementation science can be integrated into research design. So formative research - like we talked about, getting stakeholder feedback and getting information about outcomes as well as things like feasibility - is certainly important. But hybrid research can also be important. Hybrid research is a combination of effectiveness and implementation research, and by integrating the two, it can inform and speed the translation of research into practice. So type one focuses primarily on effectiveness and pre-implementation factors. Type two places equal emphasis on effectiveness and on how to implement the intervention. And type three is maybe when you have a little bit more effectiveness data, so you're focusing on how to implement it, while also getting some measures of effectiveness to make sure it's having the desired impact once it's translated out into the routine care setting. But also, if some of the things that I talked about with intervention development aren't feasible - if there just really isn't a way currently to do something that's lean, and efficient, and scalable, and it's going to require changes in the workflow or changes in how assessment is done - then doing some cost analyses to justify those changes in routine care right at the outset can also lead to better buy-in at the organizational level and can help speed implementation.

Speaker 4: Other factors to consider are looking at the impact of different forms of adaptation on effectiveness so that when you're training, you can tell people what absolutely needs to be preserved and what can be changed a little bit if needed. And strategies to improve implementation and the long-term sustainability of the intervention are important to consider at the outset. So even factors when you're doing research - like, if you hire extra staff at a clinic to get your study done, will it survive when the funding ends? Or would it be better to train the existing staff and have a model to backfill, with rapidly trained new staff, if there is turnover? That might actually help inform efforts to ultimately spread the intervention and get it into routine care settings. Because if you rely on an interventionist who was paid for by the study, there's lots of evidence that it won't last, even in that clinic, after the study ends. So with that, I'm going to turn it back over to Tim, and we'll kind of integrate these three talks and, I think, have some discussion.

Speaker 1: Thanks very much, Shannon. We're really impressed with the work that has gone into preparing these kinds of cross-cutting themes and talks, and now is your chance to ask questions, either about a specific talk or, perhaps more importantly, about these kinds of cross-cutting themes and opportunities. We have some questions that were sent in ahead of time. We're going to start with some of those, and then we have some questions that have popped up and will continue to pop up as we go along. As a preface to this, I want to emphasize a couple of things. First of all, the presenters do not necessarily represent NIMH. However, we're thankful to be working very closely with NIMH. Also, if you have specific questions about RDoC and RDoC funding - whether or not your idea or your project will get funded or could get funded by NIMH and how it connects with RDoC - we'd encourage you to go directly to NIMH with those questions, and they're happy to field them. The first question I want to start out with is one that came in ahead of time, and that is about training implications. What are the implications for training at all levels? That includes community clinicians who are already practicing as well as graduate students, interns, and postdocs. Shannon, since you've got the screen, I'm wondering if you want to start and field that question.

Speaker 4: I think that training is a challenge when you're trying to span all three of these things. But when you're working in a graduate program, people do have the opportunity to start learning both the rationale and the science behind the way these interventions are developed, and then also to learn about how to make them implementable. So I do think that including training that at least informs people a little bit about all of these things is important. But also, given that a lot of the people delivering these interventions are not at the PhD level, thinking about training in master's programs and social work programs, I think, is also important. So figuring out ways to increase interdisciplinary exchange and get some of these ideas into master's-level programs would also be important.

Speaker 1: Thanks. Stew, do you have any thoughts on that issue? Training and how mechanisms can be trained or at least understood at all levels?

Speaker 2: Yeah, yeah. I agree with what Shannon was just saying. And it's not just PhD programs. I think for medical training in medical schools - and maybe residency programs might be a better fit - it would be valuable for residents to be exposed to an understanding of basic mechanisms and how that translates into interventions and then ultimately into dissemination and implementation. Across all aspects of mental health training programs, I think it's important.

Speaker 1: Thanks. How about you, Alicia? Can you comment on perhaps training in your interventions - you've certainly trained people - and how that applies across the training spectrum?

Speaker 3: Yeah, so I very much agree with Shannon and Stew. And, clearly, there are obstacles that have to be overcome. I do believe it is very important for people to understand the underlying mechanisms of the treatment, what is really intended to be targeted, and how this can best be targeted. And, of course, in that respect we will have to do a better job of making this more disseminable - less dependent on extensive training - and simplifying it, therefore making it more available to everyone.

Speaker 1: Thank you. Okay. We're going to go to another question that was submitted ahead of time and that is, why does the Delaware Project support RDoC? In what ways does thinking about mental illness dimensionally actually help dissemination implementation? The question is, it seems like it would be the other way around. That is, the more readily we can simplify diagnosis the more readily systems of care can manage their needs. And this is actually similar to some other questions that have been submitted about the potential for RDoC as a kind of unifying process, acknowledging that there are some who have other views of RDoC. Stew, do you want to field that question first?

Speaker 2: Yeah.

Speaker 1: That was a lot.

Speaker 2: Yeah. It's a great question. I mean, I think RDoC really is consistent with the goal of the Delaware Project. And while it's true that it might be easier to disseminate a system that people are already used to - i.e., the DSM, since people are already used to thinking categorically - just because something's easier to disseminate doesn't mean it's always the best thing for our clients and patients. And right now, for the anxiety interventions we're talking about today under negative valence, outcomes are sort of moderate at best. So ultimately-- right now, RDoC is a research initiative, but the next generation, I guess RDoC 2.0, I think is going to be interested more in its clinical utility and clinical application. And so, indeed, in the future it's going to be important for clinicians to - for lack of a better word - get with the times.

Speaker 1: Thanks. Alicia, do you have any thoughts on that?

Speaker 3: No, I very much agree. And I do believe that it will simplify matters. In fact, intervention research is already moving in this direction. There are unified protocols. There are transdiagnostic protocols. There will hopefully soon be pandiagnostic protocols. It will allow us to, rather than focusing on very specific diagnoses that may not exist in the real world, come up with techniques and methods that will apply more broadly.

Speaker 1: Thanks. Shannon?

Speaker 4: Yeah. One thing to add is that when people are trained in therapy and when they're doing treatment, they always have a reason for why they do what they're doing. And so if these concepts can be translated in an understandable way to the people who are ultimately going to be delivering them, I think people are happy to adopt them - if they have an understanding and a sense of why it's important to target this particular thing, or why this intervention might be important at this particular time. We talk about training people in case conceptualization right now because we don't want people to feel lost if they encounter something that they don't quite know how to handle in treatment, especially when they're delivering a new type of intervention. And so it might be a shift in how people conceptualize some of the things that they see. But by helping people understand these concepts, I think people can feel like they have tools and they understand--

Speaker 1: Shannon, we've lost your sound.

Speaker 4: --why. Oh.

Speaker 1: There you go. You're back.

Speaker 4: Okay. So even though there could be a shift in the idea of why and in what they do, I think as long as people feel like they've got that information and they've got some tools, it can actually help them feel more prepared in some ways to address some of the things that they're seeing. It's just a matter of figuring out how to handle that shift in terms of the way that people think about what they're doing when they're treating people which, depending on what we find as promising interventions, may or may not be a significant shift.

Speaker 1: Thanks. I think that makes a lot of sense. The next question is related. It's actually the first one that popped up in our Q&A this session, and that's how might treatment effects feed back and inform psychopathology research and other basic research. Stew, do you want to field that one?

Speaker 2: Say that again. You broke up for a second there.

Speaker 1: It's how treatment effects-- so what we're learning from randomized trials and mediator/moderator research like Alicia is doing, can that inform basic research on psychopathology and mechanisms? And if so, how?

Speaker 2: Gotcha. Yeah. Say, for example, a treatment study shows that a particular target - or maybe target's not the right word - a particular outcome variable changes very early during the course of the treatment. And then there are other aspects of the syndrome, or of the phenotype, that either don't change or change very, very late in the process. Then I think we need to try to understand which are, maybe, the core aspects of these illnesses. Because if the intervention is working, maybe things should get better all at the same time. But if there are different components of the illness that are changing at different times, or maybe not changing at all, then we need to understand that a little bit better - again, getting back and doing some of the basic experimental psychopathology research to try to decompose that syndrome into its different components. Because clearly, again, [inaudible] this actually happens - if different components of the syndrome are changing differently in response to the intervention, then maybe we're dealing with a really heterogeneous syndrome here.

Speaker 1: I think that that's a great answer, and there's some good ideas and innovation in there. Alicia, do you want to comment on the same question? You probably thought about it before.

Speaker 3: Well, I would just add to that that I would, actually, very strongly encourage practitioners and therapists to, for example, submit case studies or case descriptions. Because we are thinking about a set of mediators that oftentimes derive from basic research, but it's very much possible that there are many other mediators and risk factors out there that we have not thought about. And the person that is interacting with the client directly, with the patient directly, is in the best position to observe them and share them with researchers. And one way of doing that is, for example, by publishing these observations as a case study. This can be very, very valuable.

Speaker 1: I think that makes sense. Shannon, I'm going to give you a chance to respond in a minute if you want to. But there's a question out there that actually is related to what you just said, Alicia. And that is, in your work, specifically, how do you decide on the measurement resolution and timing of the mediator assessment in relation to outcome?

Speaker 3: So, I mean, that is a little bit more statistically based. Ideally, we want to assess temporal precedence - that is, the idea of assessing to what extent an assumed mediator changes the outcome. We would ideally not want to do this only at a pre-post level. To get a better picture, it is very advisable to assess multiple times throughout the treatment. And that can also allow an aspect that was mentioned earlier: if we see that some of these targeted mediators change particularly early and subsequently predict greater outcome, these are the particularly strong mediators that we want to focus on. But preferably, for a statistical model of mediation, we have multiple assessment time points. In the study that I was just talking about, we had five assessment time points, at every session. And that can, of course, make it sometimes much more difficult, or basically unrealistic, to assess certain mediators, such as neural mediators. We cannot, of course, put a person in a scanner multiple times throughout an intervention, and so sometimes the designs and their application can be more difficult.

Speaker 1: Thank you. Shannon, any thoughts going back to that connection between how treatment research can feed back to basic research?

Speaker 4: Yeah. I think feeding back outcomes, whether through treatment research or even sort of practice-based research, as well as information about when the clinician thought they needed to adapt the intervention and why, or things that came up that interfered, might actually provide some really useful information. For example, maybe you're learning that for some people, their anxiety got so high they couldn't engage in a particular aspect of the treatment. And maybe you start to see that in a subset of people-- this is something that either you're learning about by watching the sessions when you're doing fidelity assessments, or the therapist is reporting it, or maybe there's even something in the assessment that's being provided that lets you determine it. That might start to give you ideas about some individual differences that would need to be explored back in basic research before you can figure out how to intervene. Or it might give you ideas about a different type of intervention that could be built into the treatment when people, for example, have such a high level of anxiety that they can't focus or participate in whatever particular intervention is being used. So I think that's one way that feedback could actually inform both basic research and the refinement of the intervention itself.

Speaker 1: Thank you. I think those are all great ideas and very consistent with the aims of integrating intervention science. We have another question along those lines that is also very consistent with the Delaware Project goals: how can science-oriented programs begin to implement RDoC measures in training clinics? As a clinic director myself, I'm interested in this question, and clearly, others in the audience are. The question is clarified as, specifically, what is the low-hanging fruit that we can implement now? Are there measures out there that could be put into training clinics? Stew, do you want to start on that one?

Speaker 2: Yeah. Like Shannon was talking about before, I think it boils down to the unit of analysis that you're interested in. So in a training clinic, it's probably not feasible at this point in time to do fMRI scans on people. Maybe at another point in time. And even some biological, psychophysiological measures might not work as well. But there's some really interesting work, for example, by Chris Patrick, down at Florida State, where he's looking at sort of latent variables of RDoC constructs and finds that even if you can't collect a biological variable, you could have a self-report indicator of that latent biological construct. And maybe you can take that self-report measure and implement it in a training clinic. So that's useful on the assessment side of things. The other place where I think RDoC can play a role in a training clinic-- so we've got the assessment side-- is in terms of didactics. While RDoC is a new initiative, the concept has its origins in just basic functional analysis that's been around for 20 or 30 years. Where instead of assessing a person and saying, "Well, here's their DSM category," and, "Let's pull the manual off the shelf," a lot of us said, "Well, what's their functional deficit? Do they have a deficit in how they respond to reward? Do they have a deficit in how they respond to threat?" and then tailoring your intervention based on that. That's kind of the origin of RDoC, and so I think educating people in programs about this basis of doing a functional assessment is important as well.

Speaker 1: Thanks. Alicia, I know you've done training, at least with your interventions, and there are new measures that you're using, especially when you're looking at things like CO2. Can you implement those in training clinics? And if so, what's the best approach?

Speaker 3: So a measure like CO2 is relatively easy to implement. It would take seconds, minutes, to assess CO2. Of course, it is a device that is expensive; a portable capnometer, which is a medical device, currently runs at probably $2,500. So that is a reality. But I do believe very strongly-- and I want to add to what Stew said-- that mediators, or units of analysis, are not just neural ones, or physiological ones, or [cellular?] ones. They also include self-report, which is probably where we end up [culling?] most information. So even just assessing symptoms on a weekly basis, using maybe some more diagnosis-independent assessment tools, some [inaudible] tools, is extremely valuable for tracking them and observing whether a person is stagnating, based on the assessment of their reward sensitivity measures or threat sensitivity measures, so that we can note that and intervene and see, "Well, can we shift gears and use a different treatment technique at that point?"

Speaker 1: Thank you. That's helpful for me as a clinic director and hopefully for others as well. Shannon, you can address the clinic director question if you want to, but there's kind of an analogous question that might be useful, and you could field this one: what is the thinking now on the best way to train practitioners on newer treatment models, principles, or approaches? For example, inhibitory learning and expectancy violation for panic: what's the best way to get these ideas out in front of the clinicians on the ground?

Speaker 4: Well, there's more and more research suggesting that training has to be multi-faceted. So just doing, for example, a workshop is not going to work. Basically, what we seem to be finding is that most anyone can be trained to do these interventions with some sort of a workshop and intensive consultation. But whether or not they then go on to use it really depends-- training and consultation alone don't seem to be quite enough. But there is some evidence that some form of ongoing consultation where people bring in information about their cases, maybe bring in some treatment materials--

Speaker 1: Oh, we lost your audio again, Shannon.

Speaker 4: You lose my audio? I'm sorry.

Speaker 1: Now you're back.

Speaker 4: So I'm going to just hold the phone up to my ear here. So training and consultation that addresses barriers at the level of attitudes and perceptions of the intervention, as well as organizational challenges, seems to be important. And I would add that the information has to be presented in a way that's compelling, very user-friendly, and usable. So if you're doing a workshop, having lengthy didactics about the theory behind it is less likely to really engage people. Finding ways to break up what people need to know about the theory with experiential exercises, and then having some tools that people can use when they go back and try to practice it, like very brief clips of videos, or podcasts, or something they can refer to quickly before they go in and do an intervention, or help sheets and tip sheets to walk them through what they need to do and why, can make it a little bit easier for people to start integrating these practices into their treatment. It can also make it a little bit more digestible; it makes the concepts a little more user-friendly. Just remember to eliminate jargon and present things in a simple way. And have it so that people can access the information they need, even between consultations, so that if they need a quick refresher before they go into a session, they've got it. So those are some different ways that training can be enhanced to increase the chances of people starting to use these interventions.

Speaker 1: Thanks. Those are all really great ideas. And I know, because I know your work, that you've seen that in action and are speaking from experience. I want to go now to a question that has actually surfaced a number of different times from the audience, and that is: what are the policy implications? Are there ways in which the things that have been discussed, or their delivery, dissemination, and implementation, can be enhanced through policy? Shannon, since you've got the screen, do you want to tackle that question first? I know you've come up against policy in lots of different arenas. Do you have thoughts on how what we've discussed has implications for policy?

Speaker 4: Well, I think that at some point there could be enough evidence about the importance of certain forms of assessment, and eventually also the cost benefits of doing some assessment that might not currently be reimbursed, that it could become reimbursable. If we can demonstrate that getting information at the outset about who's going to be most appropriate for certain types of treatment, or how to target treatment more efficiently, actually pays off, that could lead to changes in the way assessment is reimbursed. And I think that would be one important policy shift, especially if the assessment takes more than just a few minutes at the start of a session. Right now, something like a quick self-report could obviously be integrated pretty easily into a therapy session. But if there's a test that would need to be done prior to figuring out what treatment is going to be most beneficial, we would probably need to do some research and demonstrate that the [up front?] costs would be recouped in terms of treatment efficiency and reduction in likelihood of relapse, etc. But that's one thing that comes to mind. There are also some healthcare systems that implement policies around the types of interventions that need to be implemented, and those are largely based on things like evidence of effectiveness, perceived need, and relative priority within those different systems. So there could be policies around making certain forms of intervention or assessment available within different organizations or settings. But, again, in the case, for example, where they implemented cognitive behavioral therapy throughout a healthcare system, things like that were done after a cost analysis as well as a demonstration project showing cost benefits as well as effectiveness.

Speaker 1: [inaudible]. Thanks. Stew, it sounds like you've got thoughts on this too?

Speaker 2: Yeah. I think another policy issue that we were talking about before has to do with training. With the oversight agencies that accredit, whether it's psychology programs, or social work programs, and so on, sometimes there's a [inaudible] in some of them, a lack of emphasis or maybe a lack of recognition around the importance of science, whether it's science at the basic psychopathology level or any of these aspects we're talking about: dissemination science, interventions, and so on. So in terms of the policies of the accreditation bodies, I think it's really important for things like the Delaware Project and RDoC to try to influence those types of policies as well.

Speaker 1: Yeah. And accreditation's a huge issue, especially when it comes to training, and that's one of the core tenets of the Delaware Project. Alicia, do you have thoughts on policy, either as it relates to training, dissemination, and implementation, or to RDoC?

Speaker 3: Well, I mean, the only thing I would add is that if it can be shown that certain assessments, even though maybe more costly in the beginning, will help to identify who will benefit most from a specific treatment, then ultimately this will have a very positive effect not only, of course, on the outcome for that particular person, but also in terms of reducing overall cost and, in turn, being less taxing on society.

Speaker 1: Thanks. I think that's really important and sheds some key light on this. I'm going to shift gears a little bit and go back to a question that was submitted ahead of time but has come up in lots of settings, not just here but in some [inaudible] I've been to and in private conversations. And they're kind of two twin questions. The first question is, do you think that RDoC is a finished product? And the second one is, does RDoC replace the DSM or other diagnostic categorization systems? Stew, you mentioned us--

Speaker 3: Tim, your screen just froze.

Speaker 2: I think I heard the question, though. The question was, is RDoC a finished product, and will it replace the DSM? I think that's what he said. And it's not a finished product. I mean, I don't speak for it; I don't work for NIMH. But from everything that I've read about RDoC and every presentation that I've seen from individuals in the RDoC unit, RDoC is designed to be kind of a living, breathing nomenclature and really more of, I guess, a guide to how to understand psychopathology. For example, the RDoC Matrix, which is on the website right now, is not supposed to be the definitive, final set of constructs that one is supposed to study or identify. I think at some point there'll be some RDoC constructs that have a little more [legs?] than others, but I know that NIMH is interested in other constructs that are not just in the RDoC Matrix. And in terms of whether it will replace the DSM? I don't have a crystal ball; I'm not sure what the future holds. But I think, at least in the foreseeable future, we're going to start to see the influence of taking an RDoCian perspective in clinics. So instead of people in clinics talking about, "This person has major depressive disorder," or "This person has panic disorder," I think we'll start to see more discussion that, "This person has a really heightened response to potential threat," or "This person has blunted receipt of reward." And I think that type of nomenclature is going to start to pop its head up in clinical practice.

Speaker 1: Thanks. Shannon, you might see some dissemination implementation implications for that question. How does RDoC fit in all of the factors you discussed in your talk, and do you have any thoughts on its relationship to the DSM and how it's being used or might be used by clinicians in the community?

Speaker 4: Yeah. I mean, I think we're probably a little ways away from the shift that Stew was talking about, in part because the DSM, in some ways, has made it into a lot of routine care settings only because a diagnosis is required for reimbursement. So some people do talk about things like depression or PTSD, and certainly people know when people have panic attacks. I think if people start to have a better sense of the underlying mechanisms, what we will see is people maybe starting to talk about, "Okay, this is how I'm going to treat this phenomenon that I see." But I think more often, people in a lot of practice settings are really talking about the clinical phenomena that they see. And sometimes they're talking about things like anger or relationship problems, not even something related to DSM diagnoses. But if they develop a way of understanding these things that is informed by RDoC, then it could certainly inform the way they target those things. And so we might see a shift in how people target things like panic, and maybe even in the way they conceptualize it. But it's interesting to think about replacing the DSM, because in some practice settings there's already resistance to the DSM, so it wouldn't be hard to replace, as long as people find what's replacing it compelling and have a good understanding of how it can actually inform their practice.

Speaker 1: Thanks. Alicia, I don't know if you want to add anything on the question of RDoC, its role, its kind of expandability, etc.?

Speaker 3: Yeah. I mean, again, I very much agree with what Shannon and Stew said, and yes, I think we should be looking at RDoC as a living system. The DSM right now still provides, I think, a wonderful opportunity for communication across different healthcare providers. But I think that RDoC in particular will really help to encourage experts in these different units of analysis to test their hypotheses in a less categorical way, and so I see that as a wonderful opportunity and motivator.

Speaker 4: One thing to add is that I think there is openness to things like unified or transdiagnostic protocols, and that people will respond, I think, positively to the idea of, "I'm going to learn kind of a suite of interventions and how and where to apply them, so that instead of being wedded to a diagnostic system I can really work to most efficiently target what I see in the patients I have." As for how people respond to the language of RDoC - and I think everyone's [pressed?] into the idea of a more transdiagnostic or pandiagnostic type of intervention - I think that's something toward which a lot of practice settings would have a lot of receptivity.

Speaker 1: That's a really good point. There are a lot of options for how we move forward. We really only have time for one more question. So the last question is going to be one submitted ahead of time, and that is: where the rubber hits the road, what about costs? Does addressing mechanisms and utilizing evidence-based strategies cost more, or does it ultimately reduce cost by making interventions more efficient? Alicia, I know you have some experience with this, in fact, some data, at least on your interventions. Do you want to start with this one?

Speaker 3: I mean, I do strongly believe that if we end up being able to hit the targets that are most essential in the different pathological phenomena the hardest, that will be the most effective and efficient way of treating disorders. However, it is true that in the example of CART, it does require a device that is rather costly. On the other hand, to use it as an example, it is a very brief treatment; it's only 5 sessions long, whereas, currently, CBT is approximately 13 sessions. So if you assume that one therapy session would cost $100, and the capnometry device would cost about $1,100, then already after the full course of therapy CART would actually be $800 cheaper in session costs, because it is only one-third of the length of regular CBT. But again, CART is only a model right now, and it only applies to a very small aspect of the units of analysis within the threat category. To what extent it really transfers to any other ones remains to be seen, but I do think it provides at least a model of how we can test it and then determine, ultimately, will this actually reduce cost and therefore make it, maybe, more disseminable and more attractive.
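[Editor's note: To make the arithmetic explicit, here is a minimal sketch, not from the webinar, using the figures mentioned in this answer. The $800 figure reflects session costs alone; treating the capnometer as a reusable purchase amortized across patients is an added assumption, not something the speaker stated.]

```python
# Illustrative cost comparison using the figures from the discussion.
# Amortizing the device across patients is an added assumption.
SESSION_COST = 100    # dollars per therapy session (speaker's assumption)
CBT_SESSIONS = 13     # approximate length of standard CBT
CART_SESSIONS = 5     # length of CART
DEVICE_COST = 1100    # one-time capnometer purchase (figure used in the answer)

session_savings = SESSION_COST * (CBT_SESSIONS - CART_SESSIONS)
print(f"Per-patient session savings: ${session_savings}")  # $800

# If the device is reusable, its cost per patient shrinks as more patients are treated.
for n_patients in (1, 2, 5, 20):
    net_savings = session_savings - DEVICE_COST / n_patients
    print(f"{n_patients:>3} patients: net savings per patient = ${net_savings:,.0f}")
```

On these assumptions, CART is slightly more expensive for the very first patient but cheaper once the device is reused, which is the kind of break-even point a cost analysis would need to establish.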

Speaker 1: Thanks. Shannon, you've talked about costs already in terms of the model you proposed. Any specific thoughts in response to this question?

Speaker 4: Yeah. I think it depends on who incurs the cost. So, for example, in a healthcare system like the VA, which is a more self-contained organization, they can invest in and buy the equipment they need, knowing that instead of having to provide a dozen sessions they could provide three or five. I think there could be a good argument made for the cost benefit as well as treatment efficiency. Because when you consider that the modal number of therapy sessions that people attend is one, if we can convince people to come and stay for three or five sessions, especially when they have long drives and other barriers to getting to this type of intervention, efficiency is obviously going to be a good sell. When it's an organization that relies on reimbursement, and they have to make the investment in that piece of equipment without necessarily being reimbursed for it, I think it would be a matter of figuring out whether there is a way to make that cost back, because they wouldn't be seeing the benefits in the same way that a more closed healthcare system would. However, there could be other benefits: for example, if they can get more people to stay for three to five sessions, and if they see fewer dropouts, then ultimately they might be able to bill more, and eventually they would be able to make up those costs.

Speaker 1: Thanks. Stew, any thoughts on the cost question as it comes to--?

Speaker 2: I think Alicia and Shannon really answered it well. Just one small thing to add: whether it's at the agency level or even at a macro cost level, if we're thinking about insurance companies or third-party providers, we should think not just about the short-term costs but about the long-term costs; I think those are [crosstalk] different things. So while it might be an investment on the front end - forgetting about the cost to patients; just the dollar cost, I'm talking about - in the long run things might be considerably cheaper. I think it's about balancing those two, sort of short-term cost versus long-term cost.

Speaker 1: Thank you. So we are unfortunately out of time. We have a lot of great questions still but won't be able to address them. I do, in conclusion, want to thank our presenters, Stew Shankman, Alicia Meuret, and Shannon Stirman. Also, thank you to NIMH and the RDoC unit specifically, as well as ABCT and the Delaware Project folks. Thank you all for attending and for submitting great questions. This will be posted online, so check back on the website. And this is a webinar series, so we'll be moving forward with our next webinar hopefully soon. Thanks, everyone.

[music]