Advancing Training in Suicide Prevention Clinical Care - Day 2
STEPHEN O’CONNOR: Hello. I'm Stephen O'Connor, and I am chief of the Suicide Prevention Research Program in the Division of Services and Intervention Research at the National Institute of Mental Health. Welcome to day two of the NIMH workshop on Advancing Training in Suicide Prevention Clinical Care. We're delighted that you are joining us for a second day of exciting presentations and discussion to assess the state of the science related to suicide prevention training in clinical care, with the ultimate goal of preventing suicidal behavior through improved training of providers. This workshop is sponsored by the NIMH's Division of Services and Intervention Research and the NIMH Suicide Research Team.
You will find the agenda and brief biographical statements for each of our presenters, discussants, and invited attendees on the Eventbrite registration website. Today's event will go until four p.m. Eastern Time. Just as on day one, today's workshop will consist of four separate sessions. Presenters have 10 minutes to speak on their topic, and when all presenters are done, we will have approximately 30 minutes of discussion facilitated by the session moderator. All panelists are invited to participate in the discussion of each session. These workshops are being recorded and will be made available in the future on the NIMH website.
We look forward to today's session building upon themes identified in day one, including the need for greater integration of tools and technologies to support clinician training; designing training strategies that better incorporate characteristics of local communities, clinics, service users, and service providers; partnering with lived experience experts to ensure strategies match individual preferences; addressing organizational factors to better support clinicians in the context of suicide prevention programming; and the need to leverage opportunities to improve the likelihood that training programs graduate students competent in suicide prevention clinical care best practices. We encourage the audience to submit questions through the Zoom Q&A Tool. Please note that we will not be able to answer every question, but we will do our best to address themes presented in questions posed.
Let's go ahead and get started. I'm pleased to introduce Dr. Rinad Beidas from the University of Pennsylvania who will moderate our first session of the day.
RINAD BEIDAS: Thank you so much. My name is Rinad Beidas, and it is a true honor and pleasure to be moderating this session on Barriers and Facilitators of Successful Training in Suicide Prevention Clinical Best Practices. We have a set of esteemed speakers who I am very much looking forward to hearing from. Our session will start with Dr. Edwin Boudreaux from the University of Massachusetts. Second, we'll have Dr. Anthony Pisani from the University of Rochester. Third, we will have Dr. Michael Lindsey from New York University, and last we will have Dr. Dana Prince from Case Western Reserve University. And then, as Dr. O'Connor noted, we will have time for Q&A towards the end of the presentation session. Please send your questions as they come up through the Q&A so that we can be sure to have a very rich discussion. Thank you very much. Dr. Boudreaux?
EDWIN BOUDREAUX: Excellent. Thank you, and I appreciate the introduction. Today we're going to be talking about facilitators and barriers for training emergency department clinicians, using a methodology and an approach to understanding the barriers and gaps that my team has been working to develop. We're going to look at this from a biopsychosocial case formulation approach that we've tried to apply to systems-based analysis: rather than analyzing an individual patient, we use the same types of thinking that go into biopsychosocial case formulation to analyze systems of care and where they fall short. We've also superimposed upon that an implementation science model that you might be familiar with, the EPIS Model: Exploration, Preparation, Implementation, and Sustainment. And we've tried to look at the problem through these two lenses and draw parallels between the two approaches.
Part of the parallel we are making here is that whenever you look at the EPIS framework, we really focus on inner context. The inner context of the organization, or of the unit where we're trying to implement improved protocols and trainings, is often very influential in the success of those implementations, so these are some of the constructs that typically fall underneath the EPIS inner context. And you see to the right the behavioral case formulation factors. When we think about behavioral case formulation and we're trying to understand what might be happening within an individual client, we might identify these proximal factors.
So inner context from EPIS and proximal factors from case conceptualization have some resonance with one another. They're often considered the driving events for the problem that we're trying to evaluate, and these are near-term factors. In the EPIS Framework we also have outer context factors, which are often a bit more distal, so you can see that the outer context from the EPIS Framework lines up with the distal factors. And we'll show how we've been able to combine these two approaches to understand what's going on with patients who have suicide risk.
So we're going to start with the problem. The problem is a 34-year-old male who presented for belly pain and died by suicide within 48 hours of discharge from the emergency department. So we have this adverse outcome, this bad outcome, and we want to ask ourselves why. What was the reason this occurred? We're going to use a case example that's an amalgam of a bunch of different experiences we've had through the studies that I've completed. So this is not one specific case study, but it's emblematic of the types of things that we have observed.
So why did this gentleman die so shortly after his emergency department visit? Well, the most proximal cause, or an inner context factor, is that he was not screened for suicide risk during his ED visit, so his risk was never identified. You ask, well, why wasn't he screened? The nurse who was treating the individual was not trained on the ED's universal screening protocol. So in this scenario, the ED did have a universal screening protocol in place, but the nurse was not properly trained on that screening protocol.
So we take the next step and ask what additional inner context or proximal factors might have influenced the fact that the nurse was not trained on the screening protocol. We see that the emergency department does not pay for off-duty training, which means that training has to be completed during the person's clinical shift. The organization relies on self-paced virtual training during shifts, so there's no skills-based training on screening, especially for behavioral health and prevention efforts, and there's no performance auditing with feedback. These are strategies that we know can improve training, but they're not being implemented in this particular facility.
Why aren't they being implemented in this particular facility? You can see some of the distal factors, or what we might consider outer context factors, for this particular unit. The health system that this ED is in is struggling financially, so that's one reason they don't pay for off-duty training. Their nurse managers are overworked and therefore aren't able to complete performance auditing with feedback for their line-level nurses. The health system focuses on medical procedures and throughput; this is very common in integrated systems, where procedure-driven care and medical care are prioritized. This influences the type of trainings that are emphasized for the staff. Why is this the case? All the way to the left here is our most distal factor, or outer context factor: we simply have a fee-for-service system in America. And if you have a fee-for-service system, what's going to drive prioritization of training and care is what you get paid for.
So how does this help us with uncovering facilitators and challenges? Well, as you can see, a really important proximal driver of the fact that the person was not trained on the universal screening protocol was the reliance on self-paced virtual training. Everyone has probably had this experience. If you're working within a health system, you're going to get a lot of your training through an e-learning platform of some kind that provides virtual training. And as you probably know if you've been through this -- I've been through it a thousand times in my compliance training every year -- people just click through this stuff. There's very little to prevent people from simply going through the motions and not actually getting trained.
But does it have to be that way? I think we could probably figure out ways to improve virtual trainings. You've heard some of this in our previous sessions, and you'll probably hear more about it today, but you can't get rid of virtual trainings. They're important. They're essential, and they're going to be the go-to that many health systems use because of their efficiency. But can we make those virtual trainings more effective? Can we introduce, for example, computerized adaptive training, so people complete quizzes or knowledge assessments and then their training program is tailored to their weaknesses, to the areas they don't know? That way you're not forcing people to go through trainings for skills they may have already developed, which is a colossal waste of time.
Can we gamify these trainings? Can we make them engaging and interesting so that people would be far less likely to skim through them? And can we use serial exposure to the information rather than a one-shot opportunity for learning? We know that people don't learn that way, especially busy clinicians. So are there ways to improve our virtual delivery so that clinicians are exposed to the material multiple times? Can we align our electronic health record workflow and build just-in-time tools? Sometimes people need a quick reminder of what they're supposed to do right at the point of care, right in the electronic health record, so that the EHR workflow and the just-in-time tools can reinforce the virtual training -- once again, serial exposure right at the time of care.
We know that performance auditing and feedback is extremely important for getting people to change their behavior, but it's just not done that often, partly because it's laborious. It takes a lot of clinician time. Are there ways we can create more efficient auditing processes? In other words, can we do brief debriefings after a patient encounter without requiring observation by a trained observer? Can we use patient checklists, where the clinician and a manager go through a checklist of key performance elements together in a collaborative way to determine whether those performance elements have been met? That can be done in a simple and efficient way rather than requiring intensive observation.
Also, you may have heard that there are ways to automate performance feedback using the electronic health record, so you can actually generate reports that identify when performance elements are not being met, and there are also artificial intelligence approaches to training specific skills. This automated performance feedback using AI can be delivered virtually, so you might end up with a feedback loop here: as we get better at training using AI, it could actually replace some of the outdated and worn self-paced virtual trainings that we currently use in healthcare settings.
If you go back farther to this proximal cause here, the emergency department not paying for off-duty training, you'll recognize that a lot of health systems simply haven't established a continuous quality improvement culture for process improvement and haven't really focused on figuring out how to pay for trainings and how to make the cost case for them. So if we can address this and promote a CQI culture, we may be able to emphasize the importance of training.
And finally, if you go all the way back to a primary driver here: you're probably not going to change the fee-for-service health system in America. That's not realistic. However, there are states that have already explored using state mandates for licensure that require minimum qualifications and continuing education credit training in suicide prevention and risk management. So you can see that this might actually help to address the fee-for-service focus. If you can't change that, then at least you can mandate that people get trained.
So this is a summary of our use of the case formulation and EPIS Models for identifying facilitators and challenges in training in the ED. Thank you. Tony, I think you're up next. Tony, take it away.
ANTHONY PISANI: Okay. Thank you. Thank you so much, Ed. Good to be here. I was asked to talk about the barriers and facilitators to successful training in primary care. First, just some acknowledgements: funding from AHRQ, and also participation in a really interesting study, funded by the CDC, that I'll share about later. I'm the founder of SafeSide Prevention and receive some book royalties.
So when it comes to barriers and facilitators, I sort of just want to say "what he said" and then sit down, because a lot of the barriers and facilitators that Ed just identified in the emergency department are similar to those in primary care. We have the same kind of system where people are not paid for off-duty training. We rely on these online modules. There's no performance auditing, etcetera. So almost everything that Ed said relating to the ED also applies to primary care. But there are some unique things as well, and I'll try to focus on those, with a particular emphasis on research gaps and questions, as Dr. O'Connor asked us to do.
The beginning of almost every presentation about suicide prevention and primary care has some percentage that gets put into this sentence, and I'd be curious what different numbers people might fill in here. One commonly-cited study, from Luoma and colleagues in 2001 -- I don't know if I'm pronouncing the name right, but it's an excellent study -- put it at around 40 percent, and there are a lot of different estimates. For me, the most persuasive and recent study on this is the one Dr. Brian Ahmedani and his group completed, and the number there would be around 27 percent. But it's actually a more subtle picture than that, because it really depended on age group. So I think if you were really going to fill in this sentence, you would say somewhere between 10 and 51 percent of patients who died by suicide had contact with primary care in the month before they died.
And I'm bringing this up because this is cited as the prevention opportunity. So my question is, what would it take to translate that opportunity into reduced suicide? Where we quickly go is: well, let's identify these people. So let's talk a little bit about some of the challenges there, and then I'll make some suggestions of where research might address those challenges.
So first of all, it does depend on age. You see here on the right the percentage who had a primary care visit within four weeks of their death, so there is some difference by age group, and that probably has a lot to do with who goes to primary care. But what's really significant in this -- to me, maybe the most important takeaway in relation to what we need to do to take advantage of this opportunity -- is that 71 percent of the primary care visits tracked in this really impressive study were by people who were not there for mental health or for chemical dependency. And this is really important because a lot of the efforts that we make have been targeted around those groups. Just as an example, one huge accomplishment in the last few years has been this summary of recommended standard care elements. I was not a part of developing these elements, but I've talked with people who were, and I know that it took a massive amount of work to get consensus on what all of these would say. It's a big task, and this represents really important progress for the field.
But as you can see, the focus here ended up being on identifying suicidality in primary care among people who have mental illness or substance use conditions. So that would be the minority of people who had seen primary care recently and then died. Again, not taking anything away from this really important document, but I think this has been the common approach -- the Joint Commission works the same way -- focusing on screening among people who are receiving treatment for behavioral health conditions.
Another challenge in taking advantage of this opportunity is people being willing to disclose their suicide concerns in primary care. This study from 2016, from another really impressive group, was done in the VA, seeking to replicate an earlier study that I'll show you in a second. There are a lot of interesting things in this study, which looked not just at patients seen for depression or mental health concerns, but across the board. What they found was that about 61 percent of those who died within 30 days had said "not at all" on the PHQ-9 question about thoughts that you would be better off dead. So most of the people who died in that group had said "not at all." And as I said, that was replicating an earlier, somewhat larger study by Simon that only included people being treated for depression; among those with attempts and deaths, only about a quarter had said "not at all."
Now, one other interesting study is by Tony Jerant, Paul Duberstein, and their group, who did a very interesting patient activation intervention. What they wanted to see is whether we could prompt people to talk about these things with their doctors, because one of the concerns is that people just don't talk about them. This was focused on middle-aged men, and to make a long story short, they contacted more than 4,000 people in order to get about 52 who matched their criteria. These were people who had stated they had suicide concerns on a phone screen and who had to be coming into primary care or willing to do so -- a very selective group. And even among this very selective group, who were selected and consented into a study about suicide, 35 percent did not report discussing suicide with their primary care providers, even though they had received this intervention that seemed to be helpful.
Okay. So the way I see it, we have to stop lumping together people who are actively disclosing -- either because they say "I have these suicidal thoughts" or "I want to kill myself," or they're willing to disclose on a screening form, or they're willing to come forward and say "I have a mental health condition" -- with those people in a couple of other buckets who are not comfortable or willing to share that. And I think one of the things we can especially do is focus on the people who might be thinking about suicide, or even at advanced points in planning, but aren't sharing these things.
And so a few research questions. One: what are the behaviors, routines, messaging, and norms that will increase willingness to disclose among people in that category? Part of that might be algorithms -- how can those help? In each of these areas there's work being done, but even if we get really good algorithms, how can we co-design the human-computer interaction so that those algorithms actually address the risk and provide what patients really want and need? And then, once people do disclose, we really don't know the key ingredients of a successful response, particularly one that could fit into these brief encounters -- where many people, even if they do have access to mental healthcare, are not willing to go or don't feel it meets their needs -- with "successful" defined as something that matters to patients and reduces suicide.
Let me go through.
Another set of questions has to do with quality. How can we feasibly measure quality? My close collaborator with lived experience, Kristina Mossgraber, put it like this: what you do matters, but how you do it matters more. And we don't really, either for research or in practice, have good ways of measuring quality and providing timely feedback to people, which echoes what Ed said before. And finally, that inner context: what climate, what leadership setting is going to most promote disclosure -- people engaging with primary care around their suicide concerns, feeling satisfied and cared about, wanting to pursue safety and recovery? We really don't know what overall context makes that happen. And maybe also: what roles can opinion leaders, trusted colleagues, and professional networks play in promoting the kinds of norms that will ultimately make people feel comfortable and want to share and engage?
So I'll stop there and pass it to Dr. Michael Lindsey.
MICHAEL LINDSEY: All right. Thank you, Tony. I think some of your points actually dovetail well with some of the things that I'll be presenting on. So thank you to NIMH for convening this conversation. I'm going to talk specifically about work that colleagues and I have done related to the rising trend in suicide and suicide behaviors among black youth, and we're framing this work as a Ring the Alarm moment. I'm going to talk quickly about three studies that we have done. Two recently came out; one came out in 2019. All of the information that I'm going to be sharing with you today is available in the literature.
The first thing I'm going to talk about is a study that recently came out in the Journal of the American Academy of Child and Adolescent Psychiatry, led by Arielle Sheftall, in which we looked specifically at suicide and the precipitating circumstances related to a suicide death. The finding that caught the attention of the New York Times was what's happening with black girls, and I'll share that in a moment. In this study, we were looking at trends and precipitating circumstances among black youth by age group -- five to 11 years old, 12 to 14, and 15 to 17 -- and by sex. We used combined data from the CDC's WISQARS and NVDRS for the years 2003 to 2017, examining what was happening among both boys and girls across those age groups.
So just quickly, I'm going to get right to the findings. We found that the 15-to-17-year-old age group experienced the largest annual percentage change relative to the other age groups, with an annual increase of 4.9 percent; the five-to-11-year-old group saw a 3.9 percent annual increase. Regarding trends by sex, black girls had the largest annual percentage change, at 6.6 percent, and the most common method of suicide among black youth was hanging, strangulation, and suffocation. In terms of precipitating circumstances, we found that black children between the ages of five and 11 who were diagnosed with a mental health concern were more likely -- 73 percent of them -- to have a diagnosis of ADHD/ADD at the time of their death compared to other age groups. The 15-to-17-year-old age group was more likely to die by firearm, have a recent criminal legal problem, have boyfriend/girlfriend problems or crises, test positive for marijuana at the time of death, have a substance use concern unrelated to alcohol, and, if diagnosed with a mental health concern, be diagnosed with depression or dysthymia.
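For readers unfamiliar with the statistic, annual percentage change figures like the 4.9 and 6.6 percent quoted here are conventionally estimated by fitting a log-linear trend to yearly rates (the approach used by CDC and NCI trend analyses). A minimal sketch, using made-up illustrative rates rather than the study's data:

```python
import math

# Hypothetical yearly rates per 100,000 (illustrative only,
# not data from the study discussed above).
years = [2003, 2004, 2005, 2006, 2007]
rates = [2.0, 2.1, 2.2, 2.3, 2.45]

# Ordinary least-squares fit of ln(rate) on year; the slope b yields
# APC = (exp(b) - 1) * 100, i.e. the constant percent change per year.
n = len(years)
x_mean = sum(years) / n
log_rates = [math.log(r) for r in rates]
y_mean = sum(log_rates) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(years, log_rates)) \
        / sum((x - x_mean) ** 2 for x in years)
apc = (math.exp(slope) - 1) * 100
print(f"Annual percentage change: {apc:.1f}%")  # → Annual percentage change: 5.1%
```

Published analyses typically use joinpoint regression, which additionally searches for years where the trend changes; the log-linear fit above is the building block within each segment.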
So, key questions and gaps. What indicators of suicide behaviors should we look for among five-to-11-year-old black youth with ADD or ADHD, given our finding? Why is marijuana associated with suicide deaths among 15-to-17-year-old black youth? That was a surprising finding given what we know about substance use and its relation to suicide behaviors. And then the big question: why are we seeing this uptick in black youth dying by suicide, especially females? We're pleased that NIMH has taken great interest in this issue and has put out several calls for research, and we need more research to unpack and figure out why we're seeing this upward trend.
The second study came out in Pediatrics a couple of years ago. We looked at data from the CDC's YRBS from 1991 through 2017, trying to understand differences in trends in suicide behaviors over that time among high school-age youth in the U.S., and also looking at sex differences. We looked at four indices of suicide behaviors: thinking about suicide, planning suicide, attempting suicide, and having an injury related to a suicide attempt. The major finding was that in terms of suicide attempts, there was a 73-percent increase for black youth only. Every other racial and ethnic group actually saw a decrease in suicide attempts over that span of time.
We recently looked at 2019 data, since the YRBS has made 2019 data available, and there was actually a 144-percent increase -- again, only for black youth as opposed to other racial and ethnic groups. In terms of self-reported injury related to a suicide attempt, there was a 122-percent increase from 1991 through 2017, and it was for black boys. And looking at the 2019 data, there was a 166-percent increase in that category for black boys.
And so it led me and my lab members to ask this question: are black youth going right to an attempt? Because what we observed in that Pediatrics study is that over the span of those years, 1991 through 2017, we actually saw a decrease in thinking about suicide and planning suicide for all racial and ethnic groups, but there was that upward trajectory over the same span of time for black youth in terms of suicide attempts and having an injury related to a suicide attempt.
So a new study just came out in Prevention Science -- Meghan Romanelli, a former post-doc, now assistant professor at the University of Washington, led the study -- in which we looked at 2015, 2017, and 2019 data from the YRBS and tried to understand distinct patterns of suicide thoughts, plans, and attempts among U.S. youth. We applied the Ideation-to-Action Framework to look, within the group of kids who are actually exhibiting suicidal behaviors, for any distinct patterns. As you all know, most of the research on suicide thoughts and plans compares youth who are engaged in those behaviors with non-suicidal youth, so it was interesting for us to look within that group to identify distinct patterns.
I'm obviously not going to go over all of the findings and methods, so I'll get right to the key findings. There were four distinct groups in terms of patterns of self-reported suicide behaviors: group one reported thinking about suicide only; group two, thoughts and plans; group three, thoughts, plans, and attempts; and group four reported attempting suicide without thinking about it or planning it. Our sample was close to 7,500 youth. So I'm going to go right to the key finding here in this last column. We found that black youth were almost two times more likely to report a suicide attempt only, without thoughts and plans, compared to our reference group, white youth, who reported thoughts, plans, and attempts. I want to repeat that: black youth were almost two times more likely than the white youth reference group, who reported thoughts, plans, and attempts, to report a suicide attempt only, without thoughts and plans.
And so a couple of questions here, with some real important public health implications. What does this mean for screening and prevention -- for how we detect and surveil the youth who have the greatest chance of attempting suicide? I think we need to jettison the cookie-cutter approach somewhat, because the common warning signs are not always present across racial and ethnic groups in the way we have traditionally thought about them. And we need to think about other ways to assess suicide attempt risk beyond thoughts and plans, because there might be a group of youth, as we have found, who are attempting suicide seemingly without thinking about it or planning it.
Thank you, and if there are questions or if you want to get copies of the study, I can send them to you. I'm going to turn it over to our next presenter.
DANA PRINCE: Okay. Thank you, Dr. Lindsey. It's really wonderful to be presenting after you. Your work has been instrumental in driving research, policy, and community response to this crisis of suicidality among black youth. I've been asked to speak about another identity- and cultural-based group that also experiences disproportionate suicidality, and that's sexual and gender minority youth -- youth who identify as lesbian, gay, bisexual, transgender, gender non-binary, queer, or questioning.
And one thing I want to note before I go into this is the importance of intersectionality, which is to say that Dr. Lindsey's scholarly work focuses on black youth and sex differences, black boys and black girls. Many sexual and gender minority youth who are experiencing suicidality are also BIPOC. That is, they are black, indigenous, or people of color. And the intersection of both racial identity and sexual and gender minority identity actually can put these kids at greater risk. One recent study by a Presidential Postdoctoral fellow at the Ohio State University, Dr. Allen Mallory, looked at how racial discrimination plus discrimination based on sexual and gender minority status put those kids at greater risk of suicidality compared to white SGM youth.
So many of us -- I would assume most folks on this call -- are aware that there are particular groups that are more vulnerable when it comes to suicidality, and we all know the statistic that suicide is the second-leading cause of death for youth between the ages of 10 and 24. Sexual and gender minority populations are at a two to four times increased risk of suicide attempt across the life course. For youth, adolescents, and adults, there are known risk factors, or drivers of suicidality, that include caregiver and family rejection, SGM-based abuse and neglect, sexual and gender minority-based peer victimization and bullying, and internalized homophobia, queerphobia, and transphobia.
DANA PRINCE: So those are some of our known risk factors, and research has looked at these as cumulative and compounding: greater exposure over time increases suicidality among sexual and gender minority youth. Okay.
So what does this mean in terms of this landscape of identification, assessment, and referral? The risk factors that I listed lead sexual and gender minority youth to come into contact with some key systems, which are represented here on this slide. And I'm focusing on systems that aren't necessarily the highlighted ones in many of our other conversations, like the emergency room or even primary care. These are systems that youth come into contact with because of running away, because of family rejection, because of being abused or neglected because they are a sexual minority or because they're coming out as transgender and family are rejecting them. They may come into child welfare contact, into juvenile courts. They come into community-based services and also crisis response, including crisis response that's specifically developed and tailored to sexual and gender minority youth.
And so when I talk about disproportionality, our studies are somewhat limited because we don't collect data across these contexts on sexual orientation, gender identity, or expression. This is a significant limitation and one that many federal organizations and agencies, including the NIH, are working to address. But what we do know from studies is that about two to eight percent of the U.S. adolescent population identifies as LGBTQ. However, between 11 and 30 percent of youth in juvenile justice systems or child welfare systems identify as LGBTQ. Again, many of them are in contact with these systems because of their sexual orientation or gender identity and expression. And these youth then may experience discriminatory and traumatizing interactions with healthcare and social service providers when they come into contact with these systems. This leads to delays in accessing mental healthcare, disengagement from services, loss to follow-up, and also exacerbation of negative mental health outcomes, including their own suicidality. So, in other words, when a queer youth is being assessed around lethality and their full person, including a sexual orientation or gender identity that may be part of their ideation or reasons for planning, is not accounted for, is not seen, that can actually exacerbate their suicidality.
So we also see in this figure how fragmented our service system is, which everyone in this room knows about. And so if you go into inpatient, you may not be connected with outpatient. I think Dr. Prinstein said this on the first day, how hard it is to get adolescent outpatient care for suicidal adolescents. I'm going to say that it's even harder when those suicidal adolescents are transgender, non-binary, or LGBTQ, and that part of that suicidality is because of those aspects of themselves.
So youth are overrepresented in these entry contexts, and yet we do not ask about their sexuality or their gender identity in affirming ways. Now, some people say that systems are actively heterosexist, or cisgenderist, or homophobic, and in some cases there are interpersonal instances where this may be true, right, where there may be a provider who has bias and that bias is coming out in that interpersonal context. But the other thing to remember from a system dynamics perspective is that even systems where the policy is so-called neutral can reenact cisgenderism and heterosexism. We know that even so-called neutral policies can incur harm. So where systems may not be actively heterosexist, cisgenderist, homophobic, or queerphobic, they can still have policies and procedures, seemingly neutral, that actually reinforce structural heterosexism, cisgenderism, and other types of structural inequality like racism.
So at the point of acute crisis risk, SGM-affirming interactions are going to make a difference. Assessing and determining lethality without understanding how SGM-specific factors, like abuse and neglect, peer victimization and bullying, or internalized transphobia or queerphobia, factor into that hot crisis moment is going to leave these youth potentially with unmet needs. And then addressing the longer-term underlying factors that play into youth suicidality is absolutely going to require coordinated, SGM-affirmative care.
Here is an example. This is a quote from a pilot study I conducted, from an 18-year-old transmasculine youth, and this was an interview I conducted in a residential treatment facility. And this youth said, "I need someone" --
Oops. I'll go back.
"I need someone that treats gender dysphoria. I cut myself because of the gender stuff. I had a very rough time in foster care -- foster home and going to school. I've been suicidal and now I've got aggression. I was never aggressive before I came here to this treatment facility. I just wish my therapist could work on the gender dysphoria." So this is a really clear example of underlying, ongoing, and unattended to gender identity dysphoria as one of the drivers of prolonged suicidality in a young person who has had multiple attempts and is in need of ongoing care. And even though this young person is literally in a treatment facility and has connection to evidence-based practice with a -- with a -- with a mental health provider, the core issue is not -- is not being addressed.
So what is SGM-affirming care? This is a very short definition for folks who are not aware, but SGM-affirming care is essential. First of all, it validates a wide range of sexual and gender minority identities as normal and healthy, and it also understands the impact of societal discrimination on youths' lives.
So through my work that I've been doing here in Cuyahoga County in Ohio, we've identified, particularly within the child welfare context, factors, which are represented in this conceptual figure, that decrease suicidality among sexual and gender minority youth. And it's vital for all of us to recognize conceptually the dynamic, complex, interconnected nature of systems. When we're thinking about intervention points, we know that there's more than one point of intervention, and that we need multiple, multi-level, concerted interventions if we hope to shift the outcome, specifically for populations like sexual and gender minority youth.
One thing I'd like to call out, too, is that in the first session we talked about and saw the slide showing that over half of the workforce interacting with youth at higher risk of suicidality are not trained clinical psychologists. And so the role of mental health counselors, social workers, juvenile court workers, community workers, and crisis response staff is really vital to this question of training. Tailoring training specifically for this population within those systems requires coordination between systems, which many systems actually want to do and yet often fall short of being able to actualize. So we need more of these kinds of NIH calls, and also funding mechanisms that are really focused on system-level outcomes and that recognize the complexity of working across these systems, because we talked about how youth fall off. They fall out of our picture. And I don't think they're falling out. I think that often they are pushed out, or silenced, or ignored, and that the underlying need is not being met.
And so some things we've been working on here inside of systems -- the juvenile court system here in Cuyahoga County, the child welfare system. One is safe and culturally appropriate identification of sexual orientation and gender identity and expression. This needs to happen at multiple points of contact with the youth who's inside one of these systems. These systems already do screening around suicidality, and yet they do not often link identification of sexual and gender minority status with suicidality. In some preliminary work I've done here, we found a twofold increase in risk among juvenile court-involved youth who also identify as sexual and gender minorities. So clearly there's more work to be done here.
Moving into assessment and appropriate referral: understanding the specific factors, the specific stressors, that sexual and gender minority youth face that actually put them at increased risk for suicidality, and where to send folks in terms of outpatient and community care. Something that's also really vital here is work that other folks are doing. I was really excited by Dr. Alonzo's talk and also Dr. Whitehead's, because lived experience and local context really are so vital to this, and being able to move across and between the different contexts that youth are engaged in to link those providers requires centering community knowledge.
So I'm going to stop there and turn it over to our moderator.
RINAD BEIDAS: I'm going to go ahead and ask all the speakers to turn on their webcam. I'm just going to briefly make some comments, and then there are questions that have come through, and I'm hoping we can have a rich discussion with our esteemed panelists.
I just want to start by thanking you all for this tremendous set of presentations. It was incredible to hear about all of your work and the shared mission that we all have to save lives equitably, so thank you, and if we were all together in a room you'd be hearing resounding applause, but incredibly grateful to all of the tremendous points you all made. I'm going to go through each of your presentations very briefly just to call out a few things that really stuck in my head and then share some common themes, and then -- and then we can launch into questions.
So, Dr. Boudreaux, I loved how you brought together implementation science, the EPIS Framework, with your case conceptualization, and I think that you did an excellent job describing how you might design implementation strategies to overcome the barriers that we see in implementing suicide prevention practices in emergency department settings. And I thought it was a really nice example also of using behavioral science principles in the design of your approaches. You know, we often think about the EAST framework from behavioral insights (make it easy, attractive, social, and timely), and many of the strategies that you were describing leverage those principles. So thank you for your great work.
Followed by Dr. Pisani, who described barriers and facilitators to training in primary care. I do have to say, I think there was a new skill unlocked with just-in-time slide making. I'm really impressed. And I think, you know, you made the really salient point that everything in the emergency department applies to primary care, and then there are also some unique determinants for us to be thinking about. And you made some excellent recommendations and questions for us to be thinking about as we move forward in this context. And again, highlighting the theme of the importance of the inner context factors that might be relevant to implementation, such as leadership and culture, which I also think came out in the previous presentation. So thank you very much for your great presentation.
Dr. Lindsey, thank you so much for amplifying these very concerning rising disparities in suicide among black youth and for ringing the alarm. Indeed, the alarm must be rung, and we must be addressing these disparities. I think that the work that you presented very aptly shows the patterns and disparities, and that you've asked some very important mechanistic questions so that we can better understand how to intervene with black youth and save lives given these concerning findings. So it's really important to think about implementation broadly and then funnel in on specific populations that are at risk.
Then followed by Dr. Prince who spoke about sexual and gender minority youth, the importance of intersectionality, and the multiple systems in which youth present. I love how you amplified the voice of SGM youth. Thank you so much for doing that. And also the complexity of addressing implementation, and prevention, and intervention across multiple settings.
So, you know, I was just incredibly impressed by this set of presentations. There are a couple of core themes I'm just going to pull out before I stop talking and we listen to these esteemed panelists with the questions that have been coming in. You know, I think one of the key things that really warms my heart as an implementation scientist is thinking beyond just training, right, to the multiple contexts and levers that we might have to increase the implementation of suicide prevention and intervention writ large, and then in specific populations, and the fact that there are many common barriers and facilitators. So while we don't want to take a cookie-cutter approach -- I completely agree with Dr. Lindsey's important points -- I do think we can get to better understanding implementation faster if we do understand those common barriers and facilitators, and then the things that are unique about our settings and our populations that we have to address, so that we're not, you know, spending many years trying to understand the barriers and facilitators, and so that we're getting to interventions more quickly.
Another key point that I heard relates to this concept of fidelity, which I think is very tied into this question of training. So it's not just adherence, but it's also about the quality of our prevention and intervention practices that will likely dictate how successful we are in saving lives. And also, going to Dr. Pisani's point, our success in having people disclose to us, particularly in some of the specific populations we talked about today. So for black youth and SGM youth, what are some of the key competencies that we need to focus on developing in our workforce to address the needs of specific populations? In fact, Dr. Prince described some of the SGM-affirming competencies that might be needed, but I would like to talk about that in more depth and also as it relates to black youth.
And I think the last thing I'll just say is this is complex, and, you know, we need to be thinking about multi-level strategies both within one context and then across contexts. So there's not going to be one silver bullet, but as we start thinking in this way and intervening in multiple settings and across populations, I believe that we will address our shared vision and mission to save lives equitably.
So I'm going to go ahead and stop now and go over to the questions. And there were also some questions that came through from the panelists, so I might start with one of those and then I'll pivot back and forth between the Q&As that everyone can see. But Dr. Stanley raised an important determinant, which I think would be really valuable for us to talk a little bit about, which is discomfort with talking about -- you know, asking about suicide, intervening on suicide even with training. So I'm curious if our panelists might be able to comment on how they think about discomfort. I might think about it as self-efficacy in asking about suicide both broadly and then within specific populations. So who might like to take that first?
ANTHONY PISANI: I can say something about it. Others might also. I think it's really critical, and maybe we can even expand it to say comfort with sharing and disclosing suicidal thoughts, and then comfort with engaging about that, let's say, in primary care, and then comfort with taking the suggestions and recommendations. At every stage here we really don't know. There have been a couple of really nice studies. Julie Richards and Ursula Whiteside and a group did two qualitative studies about primary care that were really nice, so if you wanted to look those up, I could also find them and give links. They talk about what kinds of things patients are looking for in order to feel more comfortable.
One other thing I'll say is that I do think that it's not going to be training, meaning just more information, that people need in order to feel self-efficacy. I think that's something that's going to happen at a group level. I think people get interested and passionate about hearing and responding to suicide concerns in part because of a group that they're part of, and feeling like they're part of a movement or part of a set of priorities that their setting has. So I think we have to think beyond the individual level when trying to address discomfort.
MICHAEL LINDSEY: I think I'll add to that. That's a great point, Tony. In terms of self-efficacy, when Dr. Stanley's question was posed, I was thinking about it from another angle, which is obviously from the work that I do around disparities, and I think two things are really important. The point that I made about understanding the unique risk factors per race, ethnicity, and other marginalized populations is really, really important. And so in terms of how we build our self-efficacy around understanding, and obviously training, we need to be more knowledgeable about what those nuanced factors might be so that we don't ask the wrong questions or miss asking questions altogether. From a disparities perspective, I think about stigma related to mental illness and treatment that, you know, impacts a lot of minoritized populations in particular.
And so I think that it's important to appreciate the impact that stigma might have. It then becomes even more important that we communicate, share, and probe these matters in really culturally respectful and sensitive ways, or else, you know, there's going to be a total shutdown. For a podcast series we have out, I recently interviewed a mom whose 10-year-old son died by suicide. And during the course of the conversation -- she was black, her son was black -- she said the "S" word: "You know, we never talked about it in our house. I don't know where he got that from," and the whole nine. And so, you know, it's even challenging to call out the name of suicide, right, the word "suicide," right? So the stigma piece cannot be underestimated.
EDWIN BOUDREAUX: What I've observed is that this is not a type of fear or poor self-efficacy that applies across the board for all providers. Behavioral health providers can have much more comfort with this than non-behavioral health providers. And for the non-behavioral health providers, there are a few things that they're worried about in asking this question. Besides just the unfamiliarity and the stigma associated with asking about suicide, there's the fear of asking questions about suicide of patients who don't have a diagnosed behavioral health disorder. So especially if you're trying to expand your screening, that's when you see this resistance escalate. Non-behavioral health providers are going to be less resistant to ask, or to recognize that they should ask, if they've already identified that the patient has a behavioral health disorder, because that's the sequence that I think is most common: you identify that they have a psychiatric problem, and then you consider whether they're suicidal. So it's not a random thing. It happens in clusters, in particular contexts of provider and patient interactions.
And the second biggest component that I see is that people are afraid, especially in primary care settings, of asking this question and not being prepared to know what to do with a "yes" -- what I've termed the Pandora's box. They don't want to open Pandora's box because they're afraid of what happens if the person says yes. It's not that they're necessarily afraid of or unfamiliar with how to ask the questions. It's a matter of what if they say yes, and that's a bigger source of resistance, I think, because there are legitimate concerns there. If the provider isn't prepared to act, if there are no resources, if they don't know what they should do, then it would be really questionable to ask those questions. So before they start to be comfortable asking the question, they have to be comfortable with what to do if the person says yes.
ANTHONY PISANI: And beyond just referring the person to behavioral health. I mean, this is one of the things that can drive me crazy. Right now, the main thing we can tell people is, we'll just refer them. Well, most people aren't going to get there, and even if they have access, it may not be acceptable. And, you know, to Dr. Lindsey's point, that's not going to be the preferred intervention for every single group. So we need to say: you're more than a gatekeeper. Your role is not just to find out and send out. We're going to have to have interventions that people can actually do, as you said, Michael, interventions that would be acceptable and appropriate for the culture, and not rely so much on, well, here's your one answer, and if they don't take it, then we just get mad that they're not following up with our recommendation.
RINAD BEIDAS: Absolutely. So I'm going to pivot us because there's a lot of good questions and I want to make sure we can get to some more of them. Katherine L. asked a question about incorporating the lived experiences, and so I'm wondering if you could all broadly reflect on how you think about incorporating and amplifying the voices of people with lived experiences, and then particularly in the populations that you've been doing work within. I would love to hear from everyone that answer. Maybe I'll ask Dr. Prince to start.
DANA PRINCE: Okay. Sure. So a lot of the work that we do is qualitative as well as quantitative, and I think that's really important because people resonate with stories. And especially in terms of training, you have to understand why. There has to be some kind of connection to: why am I supposed to ask these questions now? Why do I need to have a conversation about their sexual orientation or gender identity with each young person that comes through the intervention or detention center?
And the thing I'll say, too, about lived experience: it's both the youth themselves, but also having champions and people in leadership inside the organization who tell their own stories about why they are doing this. That has been very successful for us. One example of this is in Cuyahoga County, where our Child Welfare agency was starting a pronoun campaign. You'll notice that I have my pronouns on my name here. And now, in the world of Zoom, it's actually a really simple and easy intervention for people to do, and for folks who are inside the community it's a signal that, oh, this person knows something about LGBTQ culture and society.
And so in terms of Child Welfare, the director started wearing a pronoun button, wore it to all of her staff meetings, and had buttons available for staff to start to wear, and that was part of a culture-wide change. So I think those stories matter. I know you say lived experience, and we're talking about youth themselves who have experienced the system or those who have had an attempt. But in terms of system change work, it's also people inside those systems, in leadership roles, who are modeling that for staff and for their peers. I'll stop there.
MICHAEL LINDSEY: I'll add that this is something that's near and dear to me. I really believe it's important. And to that end, our Institute has just put out, as I mentioned, a podcast series. So in addition to peer-reviewed publications, we're looking for ways to disseminate information about this Ring the Alarm moment. And within our podcast series, we have interviewed folks who have lived experiences.
Unfortunately, as I mentioned, we have the mom whose son died by suicide, and we have a person who is a survivor as an adult but talks about attempting suicide as a child. You know, professionals. We have an NFL athlete who talks about his advocacy around mental health, et cetera. We're just trying to bridge those divides and think of ways to disseminate information that is desirable, sound bites that people can resonate with.
ANTHONY PISANI: I think I can answer where the role of lived experience fits in. Maybe the quickest way I can do that is this: suicide prevention in health systems involves having a culture of safety and prevention, following best practices, policies, and pathways, and having a robust approach to workforce education. There's a bunch underneath each of those, but in each of them there's an important role for lived experience.
One is to have people with lived experience on your project leadership team. I suggest people who are being paid and are actually a valued, central part of the team. That's one way to promote change in a culture. Second, pursuing co-design of services with service users and families. And then, with respect to education, there needs to be a really important voice for lived experience in actually providing the education. It can't just be clinician-to-clinician. It needs to be, you know, sort of better together: clinical and lived experience together can better do that education piece as well.
RINAD BEIDAS: Man, you're good with pulling up those slides like that. Well done. I have a lot to learn from you. I see Ms. Rowe. Would you like to ask your question?
SHELBY ROWE: Actually, I wanted to help answer this one from a lived experience perspective. I totally second everything that everyone else said, and I especially loved the slide that Dr. Pisani shared. When it comes to incorporating lived experience, we know from the YRBS that one in five young people have thoughts of suicide and that one in 10 will attempt. That means one in five adults, one in five of our colleagues, have had thoughts of suicide, and one in 10 have survived an attempt. So we need to make it safer in our workplaces to bring that experience, because I would wager that in most workplaces we already have a lot of wisdom and lived experience around suicide on our leadership teams. And so it's about tapping into that, and then, yes, making sure there are meaningful roles. Yes, the stories -- people with lived experience talking about their experiences -- can be powerful in shifting attitudes for an organization and creating those kinds of aha moments of why we do this.
But they also have a lot of wisdom when it comes to the different screening tools and treatment tools. Just as you would ask a population of cancer survivors what was helpful, what helped you get through this, asking individuals, and then being ready to listen to their answers on what they thought was helpful and what wasn't, can really help shape our programs in a powerful way.
And then making sure that we are incorporating diverse voices. Not all lived experience is the same, and so making sure that we're asking a diverse group of advisors will make our programs stronger.
RINAD BEIDAS: Ms. Rowe, while you're here, you had a great comment. I was wondering if you'd be willing to ask it since you're here rather than me read it.
SHELBY ROWE: Sure. I almost kept rolling. No, I'll be polite and be myself. So one of the big barriers, which came out on day one and again as we kicked off this morning, is that we can't overlook the elephant in the room: we've got a fee-for-service healthcare system. So my question, which may be bigger than this group, and maybe, Dr. O'Connor, one for another conversation: how do we change the fees? How do we get the fees more competitive with other lifesaving health events? There are other lifesaving treatments that have attractive billable options, so how do we get the fees for mental health on par, so it is seen as feasible to deliver these services in an ED, where fee-for-service is very crucial, and to get those individuals moved on to that next level of care? How do we make that more attractive?
We have phenomenal people in EDs, but they're fighting against a system that devalues mental health. So I don't know. I think if we can figure out the trick of changing that billable code, changing that dollar amount to make it worth their time, we'll see a much greater investment. So I'm more interested in thoughts on that than asking an actual question. If someone can solve it today, you'll be my hero, but I think it's bigger than what we can solve here.
RINAD BEIDAS: Anyone have any thoughts on that critical kind of outer setting driver of behavior and encounters?
EDWIN BOUDREAUX: Yeah, I put it in my formulation of the problem, and I have another formulation that I didn't get to because there were too many of them, and it's in there too because it's omnipresent. I've done some work in this area, but I don't know of anyone who's actually been able to solve this satisfactorily. So I'd love to hear if anyone has practical experience with actually improving behavioral healthcare reimbursement, specifically in the emergency department.
But it typically falls under two types of economic drivers: an incentive approach and a disincentive approach. So if we want to change behavior, we either penalize people for not doing something or reward them for doing it. CMS, the Centers for Medicare and Medicaid Services, and other organizations have a long history of this. They say, if you're going to get this payment for this service, it has to include these following behaviors and documentation of them, and if that's not in there, then we're not going to pay you for it. And that drives a lot of healthcare behavior and system decision making, because they know that if they don't do those things, they're not going to get paid.
A practical example of that is an orthopedic procedure. For a long time we had patient-reported outcomes that we knew were important for monitoring people's recovery after a total knee replacement, but they weren't being implemented. Then CMS required proof that you were collecting patient-reported measures, or patient-reported outcomes, or else you got dinged for the service: instead of the full bundle, you'd get, you know, five percent off of the bundled payment for that knee replacement. And all of a sudden, every orthopedist across the country started doing patient-reported measures and reporting them, because they had to. So I think a disincentive approach is one way to think about it. That hasn't been done with mental health, to my knowledge, with CMS.
The other is incentivization. So CMS had recognized problems with transitions of care. We've talked about that here, and we've talked about it in suicide prevention for a long, long time: people moving from the ED to other areas really need care coordination in order to be adequately transitioned. Well, CMS started to pay for care coordination codes, and this is relatively recent, so there are actual codes that you can bill in order to coordinate care. Now, people have been using that in outpatient care settings, and it's starting to drive changes in practice in integrated primary care, but it hasn't really been used in the ED care setting, although it possibly could be.
So I think at this point, that care coordination component might be the next place to look, to see if we can somehow leverage those care coordination payments to improve the care transition from the ED to outpatient. Those are my two thoughts on it.
MICHAEL LINDSEY: I have a lot of thoughts, but I'll quickly sum them up. Those are really, really great points, and I think, you know, there are folks smarter than me on this issue -- like Ed, whom I highly respect, who has really great ideas. I also think about proximity of services, again, from a disparities perspective as being really salient in terms of the gap between mental health need and service use. And I also recall from, you know, various studies that even controlling for financial factors for ethnic minority populations, there still is a gap in terms of mental health need and service use. And so I think that, you know, beyond the financial factors, we also have to really be thinking about ways to bridge the divide between need and service use.
RINAD BEIDAS: Thank you for the wonderful answers. I don't think we've solved it yet, but there are many of the best minds thinking about this matter. And I want to ask a question that kind of integrates across a couple of things that have come up in the Q&A. And I think it's, you know, highly consistent with Dr. Lindsey's point about making sure interventions are acceptable and appropriate and thinking about how we can reach people in different ways, not just through the -- kind of the traditional avenues of the healthcare system which might not be appropriate for everyone.
And I'm wondering if you all can speak to any avenues that you've seen that you feel are particularly promising and what the training implications might be there. So, you know, there was a question about social workers embedded in primary care offices, like integrated care, but then I'm also thinking about, like, community health workers, you know, schools, all the various places where we can reach people in the community, and then thinking about kind of what that means from a training lens. So I'm curious if anyone wants to take that question.
MICHAEL LINDSEY: Yeah, I'll jump in there. So I do a lot of work in schools, and I am a huge proponent of schools being a really important context for meeting the needs of youth and families. I think about even Peter Wyman and Tony's work in schools at the university level, which becomes really important, and a way of addressing, you know, from a school culture perspective, what everyone knows about how to best support kids and how kids can support each other with respect to behavioral or emotional needs.
From a policy perspective, I think it's important for social and emotional learning standards to be consistently implemented across states from pre-k through 12th grade, not just for the younger kiddos, but even for middle- and high school-age kids who still need to learn prosocial ways to address their challenges interpersonally, or even the ability to articulate what's going on with them to a caring adult so that they can get the support that they need. I think about Sherry Molock's work in churches -- black churches -- around suicide prevention, which becomes really important.
And I think in those contexts you have great opportunities to train and support the training of allied professionals, like schoolteachers or other folks in the school building, or clergy members in churches, et cetera. And so I think about it from that perspective.
RINAD BEIDAS: Thank you. Any other thoughts on that before we move to some closing thoughts?
RINAD BEIDAS: Okay. So I was hoping that each of you could leave us with kind of one key insight or takeaway that you hope that the audience will have gleaned about barriers and facilitators to training in suicide prevention this morning -- afternoon. I don't know what time it is. So I'm going to ask us to go in the same order that we went in for presentations, but if there's just one thing that you want to make sure folks took away from what you said, if each of you could take about 20 to 30 seconds.
EDWIN BOUDREAUX: I think health systems are bounded by their traditional training model of virtual trainings. I don't think that's going to go away. I think it's really important for us to figure out how we can do better with those virtual trainings because if we don't, then we'll continue to rely falsely on this virtual training that we think is getting the job done but it simply isn't. So I think that's where some prioritization should happen: how do we do that better.
ANTHONY PISANI: And I think maybe I'll make my one thing build on Dr. Boudreaux's one thing. My one thing is that I think we can have scalable training that involves more than just information, but involves interaction and ultimately having people be able to think and talk together without having to have big, you know, workshop trainings that aren't really feasible for healthcare settings. But again, begin thinking about how do we do what Ed just described -- get better at virtual training -- but build into that a group-level component.
RINAD BEIDAS: Thanks. Dr. Lindsey?
MICHAEL LINDSEY: Yes. So I think that we should be thinking about traditional signs of suicide risk and beyond those traditional markers, because it might be more nuanced with respect to who is sitting in front of us.
RINAD BEIDAS: Dr. Prince?
DANA PRINCE: And I will just add to that, that unasked questions are going to lead to unmet needs, and if we're not training folks in how to do risk identification and asking questions that are culturally affirming of all young people, then we're not going to be able to target specific interventions that are going to be helpful.
RINAD BEIDAS: Wonderful. Well, with that, we are at time. It was such a pleasure to learn from you all in our shared mission, and thank you all very much.
DANA PRINCE: Thank you.
ANTHONY PISANI: Thank you. Appreciate your moderation.
STEPHEN O'CONNOR: Okay. Thank you all so much. That was a fantastic start to the second day of the workshop. Excellent session. So now we're going to move on to the second session of the day, and I'm pleased to introduce my colleague and branch chief, Dr. Adam Haim.
ADAM HAIM: Hi. Good afternoon or morning, everybody. I'm really excited to moderate this session which is titled, "The Added Benefits and Drawbacks of Digital Tools and Technologies in Suicide Prevention." And we have four fantastic presenters: Dr. David Atkins from the University of Washington and Lyssn, Dr. Pat Arean from the University of Washington, Dr. Craig Bryan from Ohio State University, and Dr. Igor Galynker from Mount Sinai. Dave, you're up first.
DAVID ATKINS: All right. Well, it is my great pleasure to be here, and I want to thank Stephen for the invite. And my goal today will be to briefly touch on some of our work on using and developing AI to assist training and supervision in counseling and psychotherapy. Before jumping in on that, I want to acknowledge the generous support from NIH over the years, initially in our university-based work, which was really on AI and machine learning development, and now at Lyssn, where that initial work is being developed into commercially available clinical technologies. And I am a co-founder and I do own part of Lyssn, so please keep that in mind.
Let me talk very briefly. My goal here is not to give a thorough overview of traditional training, but just to hit on a couple points. And in doing that, I want to focus on motivational interviewing, which has really been a leader in the study of training. This 2014 meta-analysis looked at 21 training RCTs, all of which included workshop training, some of which included some kind of follow-up, feedback, and/or coaching, and importantly, for each one the key outcome was observer-rated skills of trainees.
You could probably guess what these findings are, but just to run through them, the meta-analysis found that there was large improvement directly after the workshop. Skills tended to deteriorate over follow-up. To the extent that there was any additional feedback or coaching during the follow-up phase, that was helpful, and there was significant attrition during follow-up. Of course, typical training in the community does not map incredibly well to some of these stronger points in our research. Typical training in the community is often a single workshop. The majority of time in the training is often spent in a lecture format. Practice opportunities during the workshop are quite limited in general. And when they do occur, they often are limited to role plays with minimal feedback and/or specific coaching. And provider turnover in community settings is incredibly high.
So I want to focus on kind of two things here in this Danny Kahneman quote which I have helpfully bolded for you. Kahneman, in talking about the acquisition of skills and the development of expertise in general, really emphasizes this opportunity to practice and get rapid and unequivocal feedback on that practice, on the skill development. And these are really two qualities of effective training that are very, very challenging with counseling and psychotherapy.
They're challenging first and foremost because counseling and psychotherapy is fundamentally about language, and it is about interpersonal interaction. And so, traditionally, we have used humans to evaluate that, whether that's in traditional supervision, or in fidelity, competence, or adherence coding. And so, for the better part of a decade, from 2008 through 2017, our research group was really focusing on the kind of basic science of AI and machine learning: could we develop systems that would allow us to go from a recording of a counseling session to an automated evaluation or automated generation of fidelity codes? And the good news is that with appropriate training data and with state-of-the-art AI, we have demonstrated repeatedly, particularly with motivational interviewing and now with cognitive behavioral therapy, that we can train machines to essentially act like human experts -- that they are interchangeable with human experts in most settings. We see that confrontation continues to be a challenge, mostly because it's rare in our data.
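To make the recording-to-fidelity-codes idea concrete, here is a minimal, hypothetical sketch of the last stage of such a pipeline: once each utterance in a transcribed session has been assigned a behavior code, session-level fidelity indicators (such as the reflection-to-question ratio used in motivational interviewing fidelity coding) can be computed automatically. The function name and code labels are illustrative, not the actual Lyssn implementation, which uses trained neural models.

```python
from collections import Counter

def fidelity_summary(coded_utterances):
    """Aggregate utterance-level behavior codes into session-level
    MI-style fidelity indicators (hypothetical sketch)."""
    counts = Counter(coded_utterances)
    questions = counts["open_question"] + counts["closed_question"]
    return {
        # Classic MI fidelity metrics: more reflections than questions,
        # and a higher share of open questions, indicate better practice.
        "reflection_to_question_ratio": counts["reflection"] / max(questions, 1),
        "percent_open_questions": counts["open_question"] / max(questions, 1),
        # Confrontation is rare in real data, hence hard for models to detect.
        "confront_count": counts["confront"],
    }
```

A summary like this is what a human fidelity coder would otherwise produce by hand for each recorded session.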
So that work now has moved over to Lyssn, which is a startup that came out of our university-based work, and we are now developing that into different types of training tools. So CBT Pro focuses on CBT for psychosis -- Sara Kopelovich is the PI of this project. Really it is a kind of traditional training except that it is online and it has AI-assisted practice. ClientBot, here in the middle, is what I'm going to spend the rest of my talk focusing on, so I won't say too much about that right at the moment. And then finally, Lyssn also developed software for typical clinical use, where we are recording and providing feedback either on motivational interviewing or cognitive behavioral therapy.
So for the duration of my time I want to focus on ClientBot as one example of what training can look like with AI-assisted feedback. So if we think about ideal characteristics of a training platform, one of the most challenging is having many unique practice opportunities. So there's a fairly long tradition of simulated patients and of computer-based simulated patients. However, almost all of the ones that I'm aware of have a kind of choose your own adventure style forced choice type pattern. And so they might be effective for one or maybe two practice sessions, but after that then you are aware of all of the various kind of paths that you can take through the choose your own adventure.
And so the challenge has been to offer many unique practice opportunities that assess varying skills and have different types of content. And what I mean by that is it would have different types of clients or simulated patients. And again, thinking about Kahneman's quote, there would be immediate feedback and summative feedback that you could track over time. And so each one of these characteristics is what we have tried to build into ClientBot.
ClientBot is a training platform, but really it is about practice and feedback. And so, it focuses on basic counseling skills coming out of motivational interviewing, things such as open questions, reflections, affirmations, and the like. And so this is all the didactic content that exists within ClientBot. The goal is really to just orient you and then get to practice.
The practice bots are conversational neural agents, and I am not our CTO, Mike Tanana, so I will not try to give a thorough introduction to them. But these are AI computational models that allow open-ended conversation. In addition, we can create personas, so we can essentially create different types of clients. And so, the front end of our current research has been working with clinical partners to understand what are the prototypical clients that your providers see, and then to develop a bot that captures some of their characteristics, which could be content areas. It could be a specific behavioral health problem. It could be the challenge or difficulty of the individual.
And thankfully for present purposes, one of the personas that we have developed to assist Doyanne Darnell in her NIMH-funded K award is a suicide prevention persona related to Emily, who is a young woman with a history of suicide attempts.
Just to give a sense of what practice looks like, this is a text-based chat platform, and it is kind of open ended. Emily is going to respond in an open-ended fashion to whatever you ask. You'll notice that there are these little purple indicators, in this case for an open question, so that is an automated AI assessment of whatever it is you as a trainee type in. So, "Hi, Emily. How are you feeling today," and within a second or two that would get tagged as an open question. In addition, there are prompts along the way orienting you to the skills that you are practicing. It's also tracking how well you are using and implementing skills, and so you're getting feedback along the way. If you're struggling, you might get an example of what an open question is.
And just to highlight the potential strength and power of these models: I started a second practice session with Emily, and I started with the exact same question: "Hi, Emily. How are you feeling today?" And we see that she responded in two different ways, both including some suicidality, but two unique responses. And so that is the potential power and promise of these AI neural agents -- they can have open-ended conversations, in essence, simulated sessions with a suicidal client.
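To illustrate what the immediate "open question" tagging in ClientBot accomplishes, here is a deliberately simple, keyword-based toy stand-in. The real system uses trained neural classifiers, not rules like these; the function and the stem lists are hypothetical.

```python
import re

# Toy heuristic stand-in for ClientBot's neural skill classifier.
OPEN_STARTERS = ("how", "what", "why", "tell me", "describe")
REFLECTION_STEMS = ("it sounds like", "you feel", "you're saying")

def tag_utterance(text: str) -> str:
    """Assign a basic counseling-skill tag to a trainee utterance."""
    sentences = [s.strip().lower() for s in re.split(r"[.?!]", text) if s.strip()]
    for s in sentences:
        if s.startswith(REFLECTION_STEMS):
            return "reflection"
        if s.startswith(OPEN_STARTERS):
            return "open_question"
    return "closed_question" if text.rstrip().endswith("?") else "other"

tag_utterance("Hi, Emily. How are you feeling today?")  # -> "open_question"
```

The point of the sketch is only the feedback loop: each trainee message gets a skill label within a second or two, which is what makes Kahneman-style rapid, unequivocal feedback possible at scale.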
Finally, in addition, there's a dashboard that will track [your progress]. You can start new practice sessions. You can choose which persona you want to work with. You have some summative feedback and can track and review previous sessions. And this is going to be evaluated in a larger training study that we'll be kicking off in 2022.
And just to include a little bit of research, the current development work for ClientBot was based on an earlier study by Mike Tanana, our CTO. And what he found is that this immediate feedback, relative to a control condition that included some didactic instruction on the front end but no feedback, both increased overall skill learning in general and proved much more robust in terms of maintenance of skills at follow-up periods.
And so that is a quick whirlwind tour of ClientBot and some of the applications of AI to training in psychotherapy and counseling. And it is now my great pleasure to hand it over to my colleague, Pat Arean, also at the University of Washington.
PAT AREAN: Great. Thank you. I'm just going to share my screen.
So thank you very much. It's my pleasure to present today and quite an act to follow. I'm going to start by saying that I'm going to be talking about intelligent tutoring systems, and what Dave Atkins just talked to you about is very, very similar to what an ITS is. But my goal for today is to talk about how these things work, the different formats that they can take. I’ll give you an example from my Center, where we've used intelligent tutoring systems in underserved rural and minority communities, and then talk a little bit about the evidence base for intelligent tutoring systems across the board. But I really want to focus on some of the challenges these systems have, including what it takes to build them and how you can implement these in underserved communities.
So what is an intelligent tutoring system? I'll go over this really quickly because, essentially, what Dave did was to describe to you what an intelligent tutoring system is. They are very different from online training. If you think about your experience with having to do human subjects training through CITI, those are online training examples, but they're not smart, they're not intelligent, and they don't use AI models. If they did, it would not be the thing that you dread doing. It would be an interesting experience and you would learn something from it.
Intelligent tutoring systems actually give you that opportunity to personalize your training experience based on adaptive algorithms. How you answer questions, the way that you progress through the training system is personalized based on these algorithms so that you get training in the areas that you're weak and you can kind of blow past those areas that you're particularly competent in.
These are not new. In fact, historically, these intelligent tutoring systems are now actually pretty widely available in training engineers in the Navy and the Air Force. Sherlock is an example of such a model where the intelligent tutoring system has been used to train naval engineers in how to identify engineering problems on fighter jets. It's also been used in schools for training kids in basic concepts of math and algebra as well as reading comprehension. But our colleague ME Lambert has been studying their use in psychotherapy training since 1998, and Skip Rizzo at the University of Southern California has been studying their use in immersive environments for psychotherapy training as well, particularly around PTSD.
One thing to keep in mind is that intelligent tutoring systems are meant to serve as a stand-in for one-to-one training or tutoring, which is why they're called tutors, making the teaching experience more scalable and efficient and allowing the trainer to really focus their attention on students who are in need as well as designing their curriculum so that their classes can be more efficient.
So how it works is basically the teacher sets the curriculum, but feeds it into the intelligent tutoring system. And then the intelligent tutoring system provides the student with a task. The student provides an answer -- you saw that kind of illustrated in Dave's talk. And then the intelligent tutoring system gives the student feedback, basically support: if they got the question wrong, it might give them another task to think about, like, why do you think this answer is wrong? And if they keep trying and they keep getting it wrong, it'll give them some more information, kind of hints and tips, so it's not a frustrating experience. So the student will adjust their answer based on that. And throughout the whole process, the intelligent tutoring system is adapting itself to the student's strengths and weaknesses.
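The task-answer-hint-adapt loop just described can be sketched in a few lines. This is a hypothetical, minimal model -- real intelligent tutoring systems use far richer learner models and adaptive algorithms than the simple mastery score and item structure assumed here.

```python
def run_item(item, get_answer, mastery):
    """Present one task, give escalating hints on wrong answers,
    and update a per-skill mastery estimate (toy model)."""
    for hint in [None] + item["hints"]:
        if hint:
            print("Hint:", hint)
        answer = get_answer(item["prompt"])
        correct = answer == item["key"]
        skill = item["skill"]
        # Nudge the mastery estimate up on success, down on a miss.
        mastery[skill] = mastery.get(skill, 0.5) + (0.1 if correct else -0.05)
        if correct:
            return True
    return False

def next_item(items, mastery):
    """Pick the item targeting the learner's weakest skill --
    the zone of 'productive struggle'."""
    return min(items, key=lambda it: mastery.get(it["skill"], 0.5))
```

The key design idea is the separation between the inner loop (hints that keep one problem from becoming frustrating) and the outer loop (sequencing the next problem to where the learner is weakest).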
They can take various shapes and forms. These are four examples of different intelligent tutoring systems. The one on the upper left-hand side is by my colleague, Zoran Popovic, who created basically a gaming system, a video game called Foldit, which teaches citizen scientists basics in biophysics so that they can do protein folding. And this very interesting puzzle has resulted in a lot of cool discoveries from citizens just by training them in the basics of biophysics.
On the lower left-hand side is an example from a commercial product called Kognito that trains college counselors as well as resident assistants in dorms in how to recognize distress and suicidality in college students using a conversational agent very much like what Dave just described. And you get the chance to play both the student and the faculty to get both kinds of experience. On the bottom right-hand side is actually an intelligent tutoring robot called Cardiac Tutor, which presents medical students with different signs and symptoms of cardiac disease, and based on how the student responds, the patient lives or dies without actually being a real patient. And then on the upper right-hand side is an example of our intelligent tutoring system, which I'm about to describe right now.
So the problem we were trying to address is that in my center, the [NIMH-funded] ALACRITY Center, we're very much focused on how to integrate evidence-based behavioral interventions into underserved communities. And in rural and low-income areas, as we all know, the mental health workforce is pretty sparse. A lot of our colleagues in federally-qualified health centers have been relying on peer counselors or bachelors-level care managers in primary care medicine to provide light-touch evidence-based practices in primary care. But the challenge has been, as others have described, there's a tremendous amount of turnover in staffing. It takes a long time for people in these settings or at this level to acquire the skills they need to be effective in delivering these low-touch interventions. And it's very expensive for these settings to access somebody like me to provide that high-quality training.
And then the other thing that's really important here and I will describe why we went with the platform that we did, is that there's limited access to advanced technology. In fact, most of the students that we've worked with in our studies and the universities and colleges that we partnered with have older generation computers, older generation phones. And so while it's really nice to see some of these slick examples of computerized training, when you're in the rural and low-income communities, you're really faced with some limitations. And so you can't be quite as fancy as you would like to be [with technology].
So our approach was to develop an intelligent tutoring system in collaboration with a number of different universities in rural areas, including Appalachian State and Heritage University in Eastern Washington, and then see whether or not we can accelerate training in evidence-based principles in these low-income communities. We are working on a new name for our program, currently called UWITS. It's a transcript-driven interface that adjusts to the individual to maximize learning and mastery. There are three levels of adaptation that my colleagues in Engineering have developed. One is in-problem reaction to various learner errors: as the learner makes mistakes in the program, the program adjusts itself immediately so that the student is engaged in what's called productive struggle. So the ITS, or intelligent tutoring system, experience is hard enough that they learn, but not so hard that it's frustrating. Second, the sequencing of problems matches the learner's zone of proximal development. And third, we have a variety of different kinds of question types and experiences for the student, and a lot of different ways of providing feedback and creating educational experiences, so that we can promote productive struggle in the ITS.
So just to give you an example, and I want to go through this quickly. This is an example of a multiple-choice type where, you know, you meet Celeste, a 52-year-old divorced woman who feels like she's going through a midlife crisis. And then you go through a series of transcripts that we developed where the patient and therapist react to each other, and then the student gets to decide whether the answer the therapist gave is, in this case, a good demonstration of empathy or a poor one. And based on their answer, they get information about why it is right or wrong.
This is an example of a transcript-driven one, where the student highlights sections of a transcript that they think are problematic or especially good, and they have to select the reasons why. And it pretty much operates like this, just to give you a quick example: the student highlights the transcript, submits it, and then gets some feedback about whether the answer is right. The initial feedback is not super detailed -- you have to really think about why you got your answer wrong -- and then we keep giving you more and more. And if you're really struggling, we'll give you an opportunity to review what it is you're missing.
So from the teacher's view, they get an opportunity to see how the class overall is doing, and by specific student. So we give them information, for example, on a reflection competency, that for the particular problem type, five out of six students are really struggling with it. They have some example errors and some practice problems that the teacher can send the student. But what's really nice is you have an opportunity to see which students are doing particularly well. That's usually judged by the time spent in the actual ITS. So Nicole here is doing really well. She's blowing through all the questions very quickly. However, Eric is struggling, and so the instructor may want to pair Nicole up with Eric as they do their own practice around reflection exercises.
So the overall effectiveness of intelligent tutoring systems is that they seem to have equivalent effect sizes when you compare students who are trained with an expert one-on-one versus using this adaptive intelligent tutoring system. Students who go through intelligent tutoring systems seem to outperform other students when it comes to real-world problem-solving tasks, including what we were talking about earlier, students' self-efficacy about asking really hard questions. So this could be pretty important for addressing the concerns that people have already mentioned around the anxiety a lot of even trained clinicians have about having to follow up on a suicidal statement.
The results seem to be particularly pronounced in students from rural areas, non-native English speakers, and low-income backgrounds. There's something about the intelligent tutoring system that seems to really promote and engage the student in these activities and learning. And in our data, we found that it results in much faster skill acquisition if the student is trained with an ITS, but we're still looking for the long-term impact on drift.
Overall, students and teachers seem to really like the system, and so that's great. But our lesson learned so far is that building these things is pretty time consuming and can be fairly expensive. Actually, the reason ours doesn't look quite as slick as, say -- this is on the right-hand side -- an example of Skip Rizzo's intelligent tutor, is that, you know, we were funded with R34-level money for this as well as a little bit of seed money from the university. It took us a really long time, too, to build this training program. In fact, it took about two years.
We first had to interview experts to define the content that would go into the ITS, so talking to people who have worked with bachelors-level or lay clinicians around what they saw people struggled with most, post-training. We also spent probably about eight to nine months building out the content of the ITS; and this is not simple. This is not just here's a transcript of a psychotherapy session, and I think this answer is good and this answer is bad. We actually had to work with the engineers to identify what are called negative tags and positive tags, which is really in the weeds. Like, what makes this a bad answer? What makes this a good answer?
It's really distilling down why, when I'm reading somebody's transcripts from psychotherapy training, I make the decision that this person is doing really well on a skill versus not doing so well. I think I know more now about what makes a good psychotherapist than I ever did, just by that process of doing the tagging.
We ended up piloting the ITS in three different contexts with a lot of iteration around trying to improve it, and we're just now -- after three years -- ready to deploy the ITS in a statewide initiative with community colleges training in a B.A. certificate program for students who will be working in primary care medicine. Another thing to recognize, too, is that these are not smart out of the box. The more it's used, the smarter it gets, so you need a lot of students to interact with it for the algorithm to build. And you can't take the teacher out of the equation completely. In fact, these intelligent tutoring systems are meant to be part of a broader curriculum and to support the teacher and the student body. Integration within a learning management system, we found, was not all that complicated. You just really need a link. However, not all learning management systems in colleges and universities are all that easy to work with, and so that's something to consider as well.
So I'll end here by just thanking my team who has been working very hard on this project and continue to: Zoran Popovic who runs the Center for Game Science at the Computer Science and Engineering School at the University of Washington; Patrick Aue and Brenna Renn, who did all of the content development for the ITS; Emily Friedman who's a user-centered design researcher; and then Ryan Allred who is our research assistant who helped us collect all our data and get our students onboarded. So thank you. I'll turn it over to the next speaker.
CRAIG BRYAN: Okay. Thank you. Appreciate the opportunity to come and speak with everyone today. My name is Craig Bryan. I'm a clinical psychologist at the Ohio State University. My research primarily focuses on developing and testing interventions and treatments to reduce suicidal behaviors. And today I'll be talking about one of the efforts that we've been working on for a few years focused on translating a lot of our intervention work into digital therapeutics.
In terms of disclosures, I have several here on the screen. The ones that are most directly relevant to this presentation today are, first, a grant from the National Institute of Mental Health, an R44, that was awarded to Oui Therapeutics to conduct a clinical trial testing the digital therapeutic I'll be discussing today. And then also I received a consulting salary from Oui Therapeutics to help them to develop this technology.
So I'm going to begin by talking about, you know, some of the key ideas behind what we think matters the most for effective interventions to reduce suicidal behaviors. Researchers have been talking about this and thinking about it for several decades. A little over, I guess, 12 years ago now, David Rudd put together a nice review of all the published clinical trials at that time focused on suicide prevention, and it was really aimed at trying to understand what are the characteristics of some of the treatments that seemed to reliably reduce suicidal behaviors and what is unique about them as compared to control conditions or other therapeutic modalities that do not seem to have that suicide reduction potential.
And based on his review, he identified six central ingredients that are components of the strategies or interventions that were typically better at reducing suicidal thoughts and behaviors: the first being a basis in the scientific literature; having a clear, coherent, scientifically-supported conceptual model; high fidelity by the clinician; adherence by the patient; an emphasis on skills training and prioritization of self-management; and then finally, easy access to crisis services. Now, today I'm not going to go over all of these ingredients, but I'm going to focus primarily on this second-to-last point about the clinician -- high fidelity by the clinician -- which is a topic that's been addressed now by several other presenters today.
And, you know, we do now have accumulating evidence suggesting that fidelity does matter. This has been documented in other domains, for instance in trauma therapies as well as depression therapies, and there's emerging evidence now that the same is true for suicide-focused treatments. In two separate studies, for instance, looking at fidelity to safety planning, or the quality of the safety planning, in both cases they were able to establish that better, higher-quality interventions were generally correlated with the outcomes that we would look for: reductions in hospitalization, reductions in suicidal behaviors.
Unfortunately, what these studies also found was that, on average, the typical clinician does not really maintain high protocol fidelity, does not really develop high-quality interventions. And so perhaps part of the problem with interventions not being as effective as they could be in routine clinical practice is that clinicians just are not delivering them in the way that they were originally designed. And so this is something that many of us have been thinking about, the primary focus of these workshops here. And we've already talked about some of the traditional strategies to promote and to ensure fidelity to protocols.
And I would say as someone who's been doing all of these things for the better part of a decade, if not longer, they're incredibly resource intensive. They take a lot of time, a lot of effort, and a lot of expense to really do well. And I've gotten to a point now where I'm fairly convinced that training workshops in and of themselves probably don't make much of a difference, but they're usually the cheapest and the easiest, and so that's what most systems of care tend to focus on. I've found that role playing is perhaps a little bit better than simply doing a didactic lecture, but there are oftentimes many clinicians who feel uncomfortable or don't necessarily want to do role plays. We've tested different strategies using pocket guides, fidelity checklists, and different forms, with variable success. And of course, the highest level here is actually having clinicians and therapists audio record or video record their sessions, which are then reviewed by experts to provide direct feedback, hopefully as soon as possible after the session has been recorded.
And so as has been mentioned today, I think last week as well, these probably just are not scalable solutions. We just cannot meet the demands of the workforce by using these very time-intensive strategies. And so this is magnified when we think about some of the recommended or empirically-supported treatments for suicide. I'm using as an example here brief CBT for suicide prevention or BCBT, which has these different components. They have different phases. There's a conceptual model. And so when we think about training clinicians on how to do this well and how to preserve that reliability and protocol fidelity, what we're really talking about is not only teaching them how to do lots of technical activities and skills, but how to remember to sequence them, how to fit them in with an overall conceptual model. There's a lot of complexities, a lot of pieces that have to fit together. And so I think last week there was a presentation where one of the speakers commented that a modular-based training approach is probably going to be more sustainable, and that's something that I've certainly talked about as well.
And so another way to think about this and what I've been working on with several collaborators now for about five years is to maybe think about protocol fidelity in a different way. And so up until now, I would say the state-of-the-art, or at least the predominant way that we approach mental health clinician training and implementation, has really focused on teaching human clinicians to do certain things in very specific ways, and to try to create different scaffolding structures to help preserve their consistency in sticking to those processes and procedures, you know, over time not only within a single patient, but over many, many patients. And then hopefully that would stick around for perhaps many years into the future. But as has already been noted by Dr. Atkins, a couple speakers before me, we know that there is a sort of drift over time.
And so perhaps another way of thinking about this is to have the interventions and the procedures delivered by artificial intelligence, by a computerized digital therapeutic platform, which then provides that opportunity to reliably administer a procedure many, many times over and over again, thereby freeing up attention and demand from a training and implementation perspective towards other aspects of clinical encounters, such as the therapeutic relationship, suicide risk assessment, and case conceptualization skill sets, as opposed to focusing on the specifics of technique delivery.
So one of the digital therapeutics that I spent the most time working on and consulting on is the Aviva smartphone app. This particular intervention is based on two highly similar CBT for suicide prevention protocols, the first being the Cognitive Therapy for Suicide Prevention approach by Greg Brown, and then second, the Brief Cognitive Behavior Therapy protocol developed by David Rudd. The smartphone app is designed to deliver the procedures of these protocols via 12 skills-based modules directly to the patient, and this is done via avatars, and it's chatbot-driven technology. There's inclusion of clinical vignettes as well to demonstrate concepts where they have portrayals of actual clinical cases from the patient's perspective about how a particular procedure or concept has been beneficial, how they use it in their lives. So there's also a lived experience integration into the technology which can then be used to model the desired behaviors or the desired outcomes for the patient.
In some of our pilot testing thus far with this smartphone app, what we have found is that, interestingly, we seem to have higher adherence rates by patients, so that's what's demonstrated here in the graph. In general, what we see is that for the first six to seven sessions or modules of the treatment, we're actually seeing patients remaining engaged with the app to a higher degree than what we would typically see in traditional face-to-face outpatient mental healthcare delivery systems.
We've also been pilot testing this with actual clinicians who are now providing services through third-party payers to see how it actually works and how -- you know, what are the barriers to using this and what are the benefits. And some of what we have found thus far on the benefits side is that, as I alluded to before, the use of the smartphone app does ensure the reliable administration of procedures and treatments within and across patients.
And so what this has been able to do is, when we meet on a regular basis for clinical consultation and case review, the conversations now focus on other aspects of how to work with challenging cases, complex comorbidities, things like that, as opposed to spending a lot of time focusing on did you do a breathing exercise in the correct way, or did you guide the patient through this worksheet in a way that's going to maximize learning. And so we've been able to, in essence, shift the focus of the clinicians' attention and their thought process as they approach each of the cases.
Another benefit is that this app could be prescribed by a clinician and used as a treatment adjunct. And so, in essence, a human clinician could deliver mental health treatment in whatever sort of standard of care or treatment as usual approach that they're most familiar with, and then the smartphone app becomes this adjunct to the treatment process so that now the clinician, in essence, can focus on the things that they know well and that they're very good at while also being able to ensure that the patient receives the evidence-based treatment in a very reliable, high-fidelity manner.
Another benefit as well is that there's a potential here that the app could be prescribed by non-mental health clinicians or by non-experts. And so, for instance, non-psychiatric providers, particularly those who are working in low-resource areas where there might not be easy access to a human clinician, especially one trained in suicide-focused strategies. But now we can, in essence, prescribe the clinician through this digital format and get that treatment into the hands of the patient.
And then finally, another benefit that we're starting to recognize is that, because of the use of the technology, we can build into the process these automated reinforcement feedback systems that encourage skills use and treatment adherence. So push notifications, for instance, can be pre-programmed into the app to remind patients to use different skills and different strategies throughout the day. The hope, of course, is that in the future, we'll be able to link this up with passive data-sensing technologies, maybe EMA technologies, where, in essence, a patient can provide updates about their status and then the system can interpret their scores and push back to them here's a recommendation of what you could do right now to help address, for instance, your high negative emotional state.
And of course there are drawbacks as well, one of which is that, with a technology-based platform like this, there's less ability to customize the content of the intervention to the unique needs or individual differences of the patient. The way to address this is to come up with multiple iterations -- different video vignettes, for instance, that convey the concepts we're hoping to teach, but with portrayals by people who represent different backgrounds, genders, races, sexual identities, career fields, things like that -- to provide enough of an assortment that a patient can find an experience, or an avatar, or an actor, or a portrayal that best fits with however they identify.
Another drawback is the inability to quickly adjust the presentation to ensure uptake. And what I mean by that is, sometimes when we're teaching a skill to a patient, maybe the way that we've conveyed it, the language that we use, the examples that come to mind don't immediately click with the patient, and so they ask for clarification. They ask for assistance. And there isn't always a ready way to quickly think on our feet about a new video or a new way of conveying the information if we're too dependent upon the technology. And so as a result, right now we're seeing that there's a lot of advantage to the technology being used in conjunction with a human clinician who can potentially adapt to this drawback.
Another drawback is that it reduces theory-consistent protocol adaptations. There's some accumulating research, especially in the trauma literature, that when clinicians deviate from the protocol, but do it in a way that actually aligns with the empirical conceptual model of the problem, you can improve the efficacy for that patient. Conversely, if a clinician changes the protocol or deviates from fidelity in a way that does not align with theory, then we actually tend to see worse outcomes. And so we've been thinking about what this means as it relates to suicide risk, and whether there are times where it might be good to deviate from the protocol in a way that is in alignment with our understanding of suicide and could actually make the treatment more effective. That's not always possible when you have a digital therapeutic where everything is sort of lined up in advance.
And then finally, we've noticed that sometimes clinicians may not use the tool in a theory-consistent manner. And so what that means is they're using the tool, they're introducing the concepts, they're guiding the patient to use the smartphone app in a way that would actually depart from our empirical understanding of suicide. And what sometimes happens -- it hasn't happened very often, but it can create this confusion for the patient of why am I using the smartphone app because it doesn't seem to fit with what you're talking about and telling me to do as a clinician. And so we do think that the ongoing consultation and support that we provide to the therapists who are ultimately using these tools can be a way to help to reduce any conflict that might arise there.
Overall, though, this is sort of like the take-home message of what we've learned thus far. The use of digital therapeutics has significantly reduced training time and effort. And, indeed, what we have found now is, when the clinicians are using this tool, we're not necessarily having to come in with trainers and instructors to teach them the mechanics of how to deliver a protocol in a particular way. We can, in essence, put the tool in their hand and just say here's how the smartphone app works, and then the app delivers those procedures on that clinician's behalf. And so it immediately eliminates the time that's required in actually teaching clinicians the details of the protocol.
And then we're finding that the consultation and supervision now tends to be much more focused. We're able to emphasize process-related factors that occur within the treatment process versus the specifics of how to deliver a particular technique. And so we're able to get into the nuance of how to address complex case presentations, how to address patients who may be less motivated to engage in treatment. And so we think this might be part of the reason why we've observed that patients as a whole seem to remain engaged with the smartphone app at higher rates than in traditional therapeutic approaches with face-to-face human clinicians. We think that maybe now the clinicians are able to attend more to these other soft aspects of the treatment relationship that might actually keep patients engaged.
Okay. So that's all that I have for today. Look forward to further conversation. I will turn it over to Igor Galynker.
IGOR GALYNKER: Hello, everyone. Can you hear me?
SPEAKER: Yes, we can.
IGOR GALYNKER: Okay. Great. Thank you. So I'm Igor Galynker, and I'm from Mount Sinai in New York City, and this talk will be a little bit different because it will touch on aspects probably other than virtual human intelligence. The talk is "Training Clinicians in Emotional Self-Awareness Using Virtual Human Intelligence (VHI): Successes and Pitfalls." My acknowledgements: the research was supported by NIMH and the American Foundation for Suicide Prevention, and there are a number of wonderful people I worked with at several institutions, including the International University and University of Florida in Gainesville. And I have no conflicts of interest.
So the talk is about how we use VHI to deal with clinicians' emotions. The emotional aspects of the clinician-patient relationship, I think, have probably been underestimated. Clinicians working with acutely suicidal patients frequently have negative emotions towards them, such as anxiety and avoidance. That was mentioned a couple of times today, including by Barbara Stanley, who asked why we actually don't ask people about their suicidal thoughts. We are so reluctant probably because we have anxiety and avoidance as clinicians. Negative emotional responses that we may have to patients may be sensed by patients and result in adverse outcomes. We actually have some data about that. So it is critically important to train clinicians in recognizing and managing their negative emotional responses towards patients with suicidal thoughts and behaviors.
So what is our role as academic psychiatrists in recognizing and managing our emotions? So we have published a paper or framework for teaching suicide risk assessment that takes emotion into account. We need to help clinicians develop emotional self-awareness, particularly with negative emotional responses, because at the moment, actually we are trained not to have emotional self-awareness, but rather soldier on and be professional without paying attention to our emotional responses. Once we learn emotion self-awareness, we need to manage our emotions and that requires management skills, and then we can and should use our emotional responses clinically because they have clinical implications, as I'll mention. Particularly this all needs to occur in interactions with high-risk patients because this is where the stakes are the highest.
So we need to construct our training so people will learn empathic skills, which are of at least two kinds: verbal empathic communication and nonverbal negative facial affective behaviors, or NFAB -- because if we communicate empathically in words but actually have a discordant facial expression which does not convey empathy, I think that would create an adverse outcome. And in order to do that, it's important to use feedback from those with lived experience, and that's what we did in some of our studies.
So we train people in how to assess their emotional responses in relation to the proposed DSM diagnostic criteria for the suicide crisis syndrome. And this is the checklist for the syndrome, and I'm not going to go into it. It has been submitted to the committee, and there have been a number of publications about it, and this is basically how you diagnose it. That's also an indirect answer to several listeners and speakers who brought up the issue of reimbursement, in terms of how to get reimbursement for suicide prevention services. The suicide crisis syndrome is a real syndrome which has now been replicated several times across the globe, and a diagnostic code for the assessment of this syndrome would probably resolve the reimbursement problem.
Clinicians, as we have identified in our research, have at least three emotional responses which are predictive of imminent suicidal behavior. One of them is simple, which is a distressed response: if you feel distressed when you're speaking to the patient, that usually indicates substantial risk -- higher risk than you would otherwise think. And then there are a couple of complex emotions. One is a combination of hope and distress, which is paradoxical, because if you're hopeful, why are you distressed? We call this anxious over-involvement. The other one is a combination of hopelessness and calm, because if you're hopeless about the patient's prospects and prognosis, why do you stay so calm as opposed to reacting? We call that collusion and avoidance or denial.
So emotions, and combinations of emotions, are critical in suicide risk assessment because, as you see from this slide, if you measure the rational factors alone -- the risk assessment scale -- they have no direct relationship with suicidal behavior within one month, and neither do clinician emotional responses on their own. However, both of these, in a reasonably equal manner, go into clinicians' judgment of upcoming suicide risk, where clinicians make their decisions. So both rational factors and emotional responses are critical in suicide risk assessment.
So clinicians' emotions are an underutilized resource, because they're an invaluable tool in work with suicidal patients and must be experienced rather than suppressed. For risk assessment, clinicians must feel the emotions so that they might manage them. And currently there is really no training in emotional self-awareness -- not for psychiatrists, not for any other medical specialty. Training is needed, and we need to start from scratch. And the training ideally needs to be web-based and scalable, for reasons that have been discussed.
So with this, I'm going to talk about results of our Training in Empathic Communication for Suicide Intervention protocol. We have the schema for the protocol on the next slide, but we broke down the clinicians' behavior into several components. One is emotional self-awareness, subjective awareness of negative emotions towards suicidal patients, and there's an instrument to measure that called the TRQ-SF. The second is verbal empathic communication expressed in words with a virtual human, and we have a system to measure that called the ECCS. And finally, facial control of negative emotional responses, which is analyzed with Noldus FaceReader. We provisionally called the combination of these three components the empathic signature, and when we measure this empathic signature in an initial interaction with virtual human intelligence, it turns out that clinicians differ wildly on all three indices. Some have a pretty weak empathic signature and some obviously have a strong empathic signature, and that will be important going forward.
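One way to picture how three separate measurements could be combined into a single "empathic signature" per clinician is a standardized composite. The component names match the talk (TRQ-SF, ECCS, facial control), but the scoring direction, equal weighting, and the below-average cutoff for "weak" are illustrative assumptions, not the study's actual method.

```python
# Illustrative sketch: combine three component measures into one
# "empathic signature" score per clinician by averaging z-scores.
# Scoring directions, weights, and the weak/strong cutoff are assumptions.

from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of raw scores to z-scores."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def empathic_signature(trq_sf, eccs, facial_control):
    """Average standardized component scores per clinician.

    TRQ-SF is negated here on the assumption that higher scores mean
    stronger negative emotional responses (i.e., worse).
    """
    trq_z = [-z for z in z_scores(trq_sf)]    # less negativity is better
    eccs_z = z_scores(eccs)                   # more verbal empathy is better
    face_z = z_scores(facial_control)         # better facial control is better
    return [mean(t) for t in zip(trq_z, eccs_z, face_z)]

scores = empathic_signature(
    trq_sf=[10, 25, 18, 30],           # negative emotional response
    eccs=[4.0, 2.1, 3.2, 1.5],         # verbal empathic communication
    facial_control=[0.8, 0.4, 0.6, 0.2],
)
weak = [i for i, s in enumerate(scores) if s < 0]  # below-average signature
```

A composite like this would let a trainer flag which clinicians fall into the weak-signature group that, per the preliminary results below, benefits most from the training.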
So this is the study. We recruit clinician participants, and they're randomized into two virtual human interactions. One group is given feedback -- including, depending on the study, feedback from those with lived experience -- and the other group gets just the risk assessment. Then we repeat it and see how much people improved; that's training effectiveness. And all these clinicians have patients, so we also see how whatever they learn affects their working alliance with suicidal patients as well as the treatment outcomes.
So these are two of the virtual human interaction interfaces, or virtual humans, that we work with. On the left is Cynthia Young, and on the right is Bernie Cohen. You can interact with them by typing or by speaking. They understand speech through speech recognition, and they respond to you in voice, so you can have a conversation with them. And interestingly, you can treat them pretty much like real people to a significant degree. And so clinicians with high and low empathic signatures also treat them as different people, with different degrees of, I guess, trust.
So what are our preliminary results? They're highly preliminary, but this training improves negative emotional responses and empathic communication, but only among clinicians with a weak empathic signature, which kind of makes sense: only those who need the training the most respond to it, which is admittedly somewhat limited. So you see in blue those who have a strong empathic signature. They did not change during this training, and this is specifically with regard to verbal empathic communication; those with a weak empathic signature improved with this training. Also, the training itself is effective, because those in red who got the training improved, and those who just got feedback on suicide risk stayed the same. And finally, emotional self-awareness also improved, but only in those with a weak baseline empathic signature.
So this is all positive, but the interaction between the clinician and the virtual human is far from seamless. The response rate of the virtual humans is about 80 percent, which means that about 20 percent of the responses they give are nonsensical. And the clinicians react to the imperfection of the virtual humans -- they reported a lot of irritation at the imperfections of the program, although it's actually quite good, probably as good as what others presented here today.
And so we were able to get granular and separate clinicians' emotional reactions according to whether the virtual human's response made sense or not. And this is actually fascinating. What you see in blue is the distribution of clinicians' facial responses when the virtual human's responses make sense: the predominant emotions are happy and surprised. On the other hand, when the virtual human is not responding appropriately, the response is overwhelmingly angry. So when you try to understand how effective your training is in teaching clinicians, you need to separate these two kinds of responses. That's one technological component that we need to pay attention to.
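The analysis just described -- tallying detected facial emotions separately for sensical versus nonsensical virtual-human responses -- can be sketched as a simple conditional tally. The event records and emotion labels here are hypothetical, not the study's data.

```python
# Minimal sketch: tally clinicians' detected facial emotions separately
# for sensical vs. nonsensical virtual-human responses, so training
# effects aren't confounded by reactions to the technology itself.
# Event tuples and emotion labels are hypothetical examples.

from collections import Counter

def split_reactions(events):
    """events: list of (vh_response_sensical: bool, facial_emotion: str)."""
    sensical, nonsensical = Counter(), Counter()
    for ok, emotion in events:
        (sensical if ok else nonsensical)[emotion] += 1
    return sensical, nonsensical

events = [
    (True, "happy"), (True, "surprised"), (True, "happy"),
    (False, "angry"), (False, "angry"), (True, "neutral"),
]
ok, not_ok = split_reactions(events)
# The dominant emotion differs by condition, mirroring the talk's
# blue (sensical) vs. angry (nonsensical) pattern.
```

Only the reactions in the sensical condition would then be used to score a clinician's facial affective behavior during training.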
The other one is how much scaffolding clinicians want. It turns out that if you have a lot of scaffolding, as you have here in red, the result is that you have more medium-empathy responses and fewer low-empathy responses. I don't have the slide here, but clinicians do not like this interaction between you and me [scaffolding]. They would rather not be scaffolded and not be interfered with. On the other hand, they learn less when that happens.
So the positive of virtual human intelligence training is that VHI could play a role in suicide prevention, because people learn empathic communication, particularly those with a weak empathic signature. They learn awareness, they learn verbal empathic communication, and they learn to control their facial affective behaviors. Virtual humans actually could play real humans, and that means the technology would not have the limitations it has right now. I think it would be a lot more feasible. And that's why TECSI is effective and is improving, at least at the moment, only those with a weak empathic signature, because these are the people who really, really need the training, and they get it.
So the pitfalls are that the current level of technology is not adequate, and this is where the cost comes in, because building the database and the algorithm takes a lot of time and a lot of effort, and the images also need to be a lot more lifelike than they currently are. Nonsensical responses result in negative affective behaviors, which conflict with and undermine verbal empathic communication, creating the opposite effect. Scaffolding outcomes are complex, and we need to find an optimal way of supporting and giving feedback to the trained clinicians, because no scaffolding would result in worse training, and too much scaffolding will result in irritation on the part of clinicians.
So these are critical issues; however, virtual human intelligence training should be invaluable. It would solve problems in teaching clinicians how to deal with people who are at critical risk for suicide. And there is one other thing that I forgot to mention. Let me go back to the slide, as I was listening to the previous speakers. Sorry. My apologies. Can't find it. So thank you for your attention.
ADAM HAIM: Great. That was an amazing series of presentations, and I wanted to thank each presenter for their great insights. And I'm really excited about how each presenter highlighted the incredible potential technology to develop new approaches for training providers and expanding the reach of existing interventions. And it's great to see how these promising types of technology are moving from research into practice. I also liked how each presenter highlighted tools that were in different stages of development.
For Dr. Atkins' presentation, I really liked how he highlighted how community-based person-to-person training for providers is not always optimal, and how there are alternative approaches to hone our skills via continual practice and AI, as well as practice bots. He showed that for motivational interviewing and CBT, AI can be leveraged to provide feedback to providers. The immediate feedback really does appear to greatly enhance outcomes, and it's exciting that this technology has moved from the lab into the small business realm.
Dr. Arean highlighted intelligent tutoring systems, which allow trainers to focus on individuals who have greater need, and how this approach is adaptable to many different training needs. She showed how powerful this approach could be in training students. What was important to note from my vantage was that the result of this training included a teacher dashboard, so, again, teachers can focus on the students with the greatest needs. She also highlighted that you can't completely remove the teacher from the system and rely entirely on technology. And finally, she highlighted how technology development is slow and expensive, and we might want to think about having a better pathway from early development to honing the product for broad rollout.
Dr. Bryan discussed the essential ingredients underlying effective interventions and focused on clinician fidelity, and why many of the approaches to increasing fidelity are not scalable. He chatted about Aviva which is a digital therapeutic based on two CBT suicide prevention protocols and suggested that this approach reduces training time and effort and allows for more focused in-person supervision. Importantly, he touched on some of the limitations around the limited flexibility once the technology is rolled out.
And wrapping up, Dr. Galynker showed how negative emotional self-awareness in clinicians may be impacting behavior in patients and how clinicians might effectively develop skills in emotional self-awareness. The conceptual model underlying how to empathically communicate was elegant, and it was impressive to see how this model is used to further develop the training. And it was nice to see the data he showed, how the training improves emotional self-awareness. It was interesting, I thought, to see that 20 percent of the time, the technology did not respond accordingly and how that might impact outcomes. It would be interesting to see how this training might play a role down the road in being applied to prevent suicide.
So at a high level, I'm really excited about the presentations and how they identified opportunities to use technology to provide new research findings to providers who integrate this into the care provided to patients. I do want to reserve a bit of time for some questions for the panelists, and I thought I might kick it off. So how do we integrate these tools into practice at scale? So how do we move things out of the lab more quickly and get them into the hands of providers and really sort of begin to bend the curve here? Pat, you had mentioned some of the challenges. What are ways we can do this more quickly?
PAT AREAN: That's a good question. I mean, some of what we've been trying to do here in Washington State is address the limited workforce we have in Washington State and also in the WWAMI region. So we also do a lot of work with Alaska, Montana, Idaho, and Wyoming. And here in this state we've been working on a credential for bachelor's-level providers in primary care medicine and thinking through a lot of things, like how do we do this so that these are people who are paid more than they would get paid if they worked at McDonald's. You know, that it's actually a meaty kind of intervention.
And so part of that is that community colleges in our state have been really interested in taking this on and actually they did a lot of the initial work to get this credential put into place. But they also recognize that, you know, they don't have, you know, the expertise necessarily or, the resources to help train students so that they're really good and not dangerous, right? So in explaining how these intelligent tutoring systems work to kind of expand and make this more scalable, I think that at the college level there was a lot of interest, particularly during COVID when everybody had to go virtual. That made it even more exciting for them to buy in.
And so I think part of it for us, too, is that our platform is relatively inexpensive to disseminate because it's basically part of our Center. For the engineering support, we were able to get donor money from the Ballmer Group to help us with that. But the long-term sustainability -- keeping it refreshed, keeping it technologically up-to-date -- that's going to be a challenge we'll have to face. And I think Dave actually has more experience with the implementation side of this than I do, so I'll turn it over to him.
DAVID ATKINS: Yeah, just a few quick comments. You know, if there's any silver lining from COVID within our field, I think, one, it has raised a kind of societal awareness of behavioral health problems in general, but obviously it has greatly increased digitally-mediated treatment. And in particular for some of our work where we're analyzing the content of the therapeutic conversation, that move to telehealth has been a huge benefit because then it is much easier to kind of capture that recording and to be able to process that. So, you know, I would say that that combined with kind of cloud computing are two technology advances that make the deployment and implementation much more straightforward. Particularly when we think about quality assessment, quality improvement of service delivery, I think there we're really trying to leverage both policy and payers. And everyone seems to be able to recognize challenges with quality in the community and that we don't currently have tools to assess quality.
But as those tools are coming online, you know, the reality is no one right now, particularly with the fee-for-service payment model, is paying for quality. And so I think the question is going to be, can we generate enough creative tension broadly within the system to begin to capitalize on the shift towards value-based reimbursement? Can we actually get folks to start to have enhanced reimbursement for demonstrations of quality? You know, then I think all of the training technologies that we talked about in our panel today would have much more impetus if there's actually some dollars tied to demonstrating the quality of services being provided.
ADAM HAIM: Let's move on to a different topic and talk about inclusion. So "how do we ensure that the digital mental health training tools that are being developed and deployed are within reach for all providers and effective for all patients? And with regards to Dr. Bryan's presentation, in what population do you think your approach would be most useful? Have you thought about how it might easily apply across lower-resourced populations or be able to be used with fidelity in those settings?"
CRAIG BRYAN: Yeah, I would say -- I guess maybe I should start with a caveat. We don't know yet which populations it would be best suited for. Our suspicion, what we think, is it probably would be most attractive to young adult populations and adolescents. That is definitely where we have seen, just in our work as a whole, more familiarity with digital therapeutics, smartphone apps, things like that. So, you know, more to come. We'll definitely be looking at those questions, but that's our best guess right now.
And I would say with regards to ensuring access across various settings, that is something we've thought about. One really important consideration that's often overlooked when it comes to digital therapeutics is this assumption, I think, this faulty assumption, that many people have, especially those of us who live in more urban areas, that everybody has access to Wi-Fi or broadband service. And so, you know, there are regions of the country -- for me, I'm in Central Ohio, but just a couple of hours southeast of here in Appalachia, that is not a fair assumption. There are certainly lots of regions where you cannot get cellphone reception.
And so being able to deploy a technology like this would need to take into consideration things like, well, what's the download speed? How large is the file? And then another pivotal question that we've been wondering about and thinking about is, you know, can you use the technology when you do not have Wi-Fi service or internet access? And I think that's not only applicable for rural areas; there are broadband deserts in densely-populated urban areas as well. And so, I think, remembering that there are lots of people in our country and in the world who do not have access to that sort of basic necessity of life is going to be really important whenever we deploy any sort of technology-based solution.
ADAM HAIM: Great. So we chatted about the incredible promise underlying these approaches, and I got really excited thinking about, you know, these being delivered at scale. And there were a few points during the discussions where we talked about some of the drawbacks in using these approaches for training focused on suicide prevention. But again, those were mostly focused on the cost of delivering it at scale. Dr. Galynker, you really identified an area that I thought was interesting in that 20 percent of the time the technology doesn't respond accordingly. And I imagine for all the technology that was discussed there's this, you know, percentage of the time where it glitches, it doesn't respond accordingly, and the person receiving it loses that connection with the technology and things begin to go downhill. What are some other issues that might arise, some drawbacks to using technology when engaging with clinicians as well as patients?
IGOR GALYNKER: I would say the lack of connection is probably the most prominent one. The other one that we see is the screen time demands, the computer demands, the electronic medical record demands -- a general kind of overload with technology that trainees seem to have. And our experience has been that the more human the virtual human is, the better the connection. And so we're looking forward to the technology improving and becoming more available. In fact, we changed our platform to a more lifelike technology. That would be the single issue.
I want to come back to what I mentioned -- the slide that I was looking for is the slide that was related to the key point I forgot to mention: that suicide crisis syndrome and its diagnostic value do not rely on self-disclosure of suicidal ideation, okay, which was brought up many times during the talk. So this would be an adjunct approach in situations where suicidal ideation is not reported, including among African-Americans, where people have attempted suicide without ever having suicidal ideation or reporting it.
ADAM HAIM: Great. So we have about two minutes left, and I would like each of you to give sort of a parting comment to this question in about 30 seconds each. What do each of you believe must be on the road map for funding agencies and regulatory agencies around the next generation of digital health training tools to improve behavioral health for everyone? In 30 seconds or less.
DAVID ATKINS: I can jump in. You know, I think technology, and AI in particular, is moving forward at such an incredible rate that there is a lot of hope and promise there. I think that needs to be tempered by a, you know, a very kind of sober assessment of what it can do and where it might not perform well. So along with Dr. Galynker's comments, there are times where our chatbots will get into, you know, strange conversation topics. And so there definitely are areas where it's, like, we still need that development. It's also the case that, you know, no algorithm is ever perfect, and so there has to be a kind of continual development, adaptation, and calibration, because where you want to deploy it has to map to whatever training data trained that algorithm. And so, you know, we'd never want to be in a situation where we're generating biased results for a particular population or setting where we haven't trained that algorithm. So a couple thoughts.
PAT AREAN: I think my contribution to this would be that the technology is really cool, and we need a lot of support around training the AI and all of that, and also around considering which technologies are more accessible versus not. Like, we don't have the same problem with our tools around chatbots and robots kind of glitching out because we don't have, like, that kind of interface. It's textually based. It's not as pretty, but it's really efficient, and we don't have to worry about those things.
However, I would say those are sort of technicalities, and what would be really more important for an institute like NIMH to focus on is the use of these tools in the context of pedagogy. You know, like, what is it that would engage a student or a trainee in such a way that they would have greater self-efficacy in the skills that they're training to, that they would feel that it's role-appropriate for them? Because that's what you run into with suicide: "This isn't my job, and I'm not going to ask it." I'm trying to think about the three things our Center is focused on or starting to focus on.
But, you know, appropriateness, efficacy, belief that I can do this -- those kinds of things, I think, in terms of outcomes are as important or more important. Obviously people need to be safe, but at the same time, they need to be more likely to use it. And so what is it about technology that could facilitate that? I think those are the questions you guys should be focused on.
ADAM HAIM: Great.
CRAIG BRYAN: I would say my response would be to think about individual differences and precision medicine types of approaches. I mean, I think with some of the interventions -- I focused on CBT for suicide prevention, for instance, and there are, you know, maybe a dozen different things that we do in CBT for suicide prevention. And my experience clinically is that some of those things land with some patients and other procedures land with other patients.
And so as we move towards digital therapeutics, I think one of the potential advantages is we might be able to start figuring out, in a much easier way, who responds to what. What are the things within the treatments that are most effective for which people, and how can we then start moving towards a more customized treatment experience, which may increase the potency of the therapies.
IGOR GALYNKER: I just have one thought, and it is that if we use technology, the most efficient use would be to make a better clinician as opposed to improving the method. So it would be great to aim at the clinician and the clinician's quality as a mental health professional, and that would apply to all treatment modalities.
ADAM HAIM: Well, I want to thank each of you for your really insightful presentations. It was -- it was fantastic. So we're going to go on a break now and we're going to start again at 2:00 and see you all then.
STEPHEN O'CONNOR: Okay. Welcome back. We're going to go ahead and start the third session for this day. I'm pleased to introduce the moderator for this session, Dr. Richard McKeon from the Substance Abuse and Mental Health Services Administration.
DR. MCKEON: Thank you very much, Stephen. It's a real pleasure for me to be able to moderate this upcoming session on Paradigm-Shifting Opportunities. We will have as our three speakers Greg Brown from the University of Pennsylvania, Brian Ahmedani from the Henry Ford Health System, and Colleen Carr from the Education Development Center. I've had the pleasure of working with each of them over the years, and I'm very much looking forward to hearing their remarks. So with no further ado, I will turn it over to Dr. Gregory Brown.
DR. BROWN: Okay. Thank you, everybody, and I'm going to share my screen. Let's see. Can everybody see that?
DR. BROWN: Okay. Thank you. So I'm going to talk about Advanced Training in the Safety Planning Intervention that was developed for VA providers.
SPEAKER: Dr. Brown, if you want to come into presentation mode so we can see the slides full screen.
DR. BROWN: Sure. Is that better?
SPEAKER: That looks great. Thank you.
DR. BROWN: All right. Thanks. So I'm going to be talking about Advanced Training in the Safety Planning Intervention that was developed for VA providers and sponsored by the VA Office of Mental Health and Suicide Prevention and is administered by the Evidence-Based Psychotherapy Training Program Section. And I recognize my colleagues, including Wendy Batdorf and others from the VA as well as Barbara Stanley from Columbia University. And of course, these opinions I express today are not of the United States Government but are my own.
Just a little bit of background: the main evidence base for the Safety Planning Intervention is a study that we did within the VA, a cohort comparison study of the intervention with follow-up for veterans at risk for suicide who went into the ED, who were evaluated there and subsequently discharged, comparing that to usual care treatments in the ED. And it found that the intervention was associated with 45-percent fewer suicide behaviors identified in the medical record. And this was mentioned in the VA/DoD Clinical Practice Guidelines, as was crisis response planning.
So as Craig Bryan mentioned earlier, safety planning quality matters, and there are two studies supporting the idea that higher-quality safety planning predicted decreased likelihood of future suicide behaviors approximately six months later. And although safety planning has been widely disseminated and implemented in VA since about 2008, it is widely variable in terms of its quality. In fact, we see a lot of low-quality safety plans -- cut and paste and that kind of thing. So we wanted to develop a better safety planning program than what was available, which consisted mostly at the time of a manual, a form, and a brief instruction guide. And we know from many studies that just reading a manual or completing a form is not sufficient for promoting quality.
So this Advanced Training in the Safety Planning Intervention was developed over several years. It is for VA providers who regularly encounter veterans at elevated risk for suicide and who routinely complete safety plans. It's based on what we call a blended learning model that includes interactive didactic training with skilled demonstrations; experiential roleplay exercises that include individualized feedback from expert training consultants; and evaluation of the Safety Planning Intervention using standardized role plays and standardized rating measures of competency and fidelity.
And I forgot to say that this training is done within the context of the VA, which has a broader culture of training. This includes support by leadership at top levels. It's valued by clinicians who are accustomed to following detailed protocols. It is supported by TMS, the Employee Education System, that develops web-based training programs. It includes medical record templates on safety planning that augment the training with detailed instructions, and it also includes performance reviews and other quality assurance policies that go into this training. So this is not a typical standard healthcare organization that we're talking about. It's a very well-developed system.
So just to dive in a little deeper into the training program itself, there are four components. The first component includes uploading a previously-completed, de-identified patient safety plan into the portal; attending a program evaluation meeting to go over the policies, procedures, and expectations for the training; reviewing a well-described VA safety planning manual; reviewing both of the safety planning intervention fidelity measures; reviewing the safety plan note template user guide; and then completing a three-hour safety planning web-based training course that includes a lot of role play demonstrations. I did 11 of them for this course -- short clips, but still 11, demonstrating the various components of the Safety Planning Intervention.
If the providers completed that, they moved on to Component 2, which includes experiential training. This is participating in three consecutive weekly two-hour calls with a training consultant and approximately four fellow training participants -- so, small group learning. Each person is required to complete at least one role play in the role of provider, and people then also completed their Safety Planning Intervention rating scales and helped to provide feedback, which created a culture of learning.
And then once people complete those three two-hour calls, they move on to Component 3, which is participating in a one-and-a-half-hour individual evaluation with a training consultant that includes detailed feedback; that training consultant was different from the one in Component 2. And then if they did well on that and met our minimum scores, they moved on to Component 4, which was a follow-up evaluation conducted approximately three months later that included a similar evaluation with detailed feedback and also discussion of implementation challenges. And so people must achieve a minimum on competency measures that I'll describe in a minute.
So we are just about ready to start Cohort 7. These are the first three cohorts of providers in the program [on the slide]. We started with 123 providers, approximately 40 per cohort, and you can see them flowing through the program there. And we had minimal dropout. Some changed jobs. Some withdrew from training. A couple didn't meet competency requirements at either the post-training or three-month follow-up. But of 123, we had 105 complete the program.
And these are some of the results. We looked at knowledge of safety planning and attitudes over time, including intent to use safety planning and intent to use various components of safety planning. And we see here an increase in knowledge of safety planning from Component 1 to Component 3, and a significant increase from Component 1 to Component 4. There were no significant differences between cohorts over time, and no significant difference in the three-month follow-up means.
So, the rating scale I mentioned earlier. This was developed in part when Barbara Stanley and I were listening to tapes for our clinical trials and discovered that much of the time, people were leaving out a major portion of the intervention. They seemed to focus more heavily on completing the safety plan form and not on the things that go around filling out the form. And so this includes obtaining a description of the recent suicide crisis, which is important for understanding patients' experiences, conveying empathy, and kind of formulating how you're going to intervene on that acute suicide risk; reviewing the suicide risk curve; providing a rationale for safety planning; describing safety planning as a collaborative process -- it's not just, "Here, fill out this form"; explaining how to follow the steps for using the safety plan in a stepwise fashion; and then talking about where you're going to keep it, who you're going to give it to, and what's going to get in the way of using it.
And so these are things we emphasize in our training now. We spend a lot of time on these aspects of the intervention, so much so that we wanted to have them scored independently from the safety planning steps, and each requires a minimum score of 14, with a maximum score of 18 -- so about 78 percent of the maximum score is needed to meet criteria.
So, the competency ratings using standardized patients: what we found was that the global ratings were maintained over time between Component 3 and Component 4. We don't have competency ratings pre-training, so this is just the two program evaluations at Component 3 and Component 4. In general, some people did not pass the first go-round; they were given a second try. And we find about 97 percent of people do end up meeting our competency criteria by the second attempt.
So to evaluate the Safety Planning Intervention over time from pre-workshop to post-training, we use what's called the Safety Planning Intervention Scoring Algorithm, which is used to evaluate the quality and completeness of written safety plans produced during the intervention. Each response is evaluated for its degree of completeness, personalization, and accuracy, as well as detailed ratings for step six, which is making the environment safe, including firearm safety, access to opioids, providing a copy of the plan, and contacts for a safety check.
There are several scores that describe the written safety plan, including a total quality score, a total completeness score, and a global impression score. And we've developed a Safety Planning Intervention Scoring Algorithm codebook with updated scoring procedures, funded in part by an NIMH-funded Zero Suicide project in New York State, of which Barbara Stanley is the PI. And if we look at SPISA Total Quality Ratings, we see a significant increase from Component 1 to Component 3 that is then maintained to Component 4, with no significant differences between cohorts in change over time.
So, on the Implementation Needs Assessment, which was a survey that we gave between Components 3 and 4, the providers rated the Safety Planning Intervention as moderately to highly effective. Most planned to use Safety Planning with 51 to 60 percent of veterans on average. And the group identified several barriers that get in the way of providing Safety Planning after the training is over, including not having enough time -- we hear this all the time. We think that the Safety Planning Intervention takes about 45 to 60 minutes to really do it well, and often people are left with, like, 10 minutes or even less, and you just can't do a high-quality safety plan in 10 minutes. Other needs were more education for veterans regarding safety planning and its benefits, and increased leadership support for Safety Planning within the clinic setting. And there were a lot of comments about, you know, making sure that the clinic itself is supportive of the intervention.
So the lessons learned from this are that this advanced training in Safety Planning improved provider competency in the intervention, yielding higher-quality safety plans, and that we saw improvements over the first three cohorts. Also, providers actually wanted more training. When we did the first cohort, we only had two two-hour calls. That was it. They wanted more training in the experiential role plays, so we added another two hours. I thought that was interesting.
The next steps for this program include a comparison of training providers in advanced training versus usual training in Safety Planning, to evaluate the effectiveness of the advanced training on veteran suicide behavior rates, and to improve the efficiency of the training. Like, can we leverage some of these virtual standardized patient technologies that we've heard about today to promote competency and maintain fidelity to the intervention?
Regarding fidelity, ongoing evaluation of safety plan quality within the health system is key for guiding the adaptation of provider training. Providers often assume that they don't need training in Safety Planning. It's just a form, right? You just sit down and you fill it out, or you have the patient fill it out. This is one of the big hurdles that we've tried to overcome, and it certainly needs to be addressed: the Safety Planning Intervention is more than filling out the form. Providers had the most difficulty mastering tasks such as doing the narrative interview of a recent crisis, explaining the nature of suicidal crises, providing a rationale for Safety Planning, and other tasks not listed on the form.
And finally, I come back to Kate Comtois' note about the need for reach. There is a strong need to provide different levels or types of training, and different messaging strategies around Safety Planning, for patients and for providers in different roles. One size does not fit all. In particular, there's one type of training for providers -- the clinical training that we've mostly been talking about today -- another training for supervisors or leaders, another for professionals in informatics, or for people who are doing smartphone app development. And I can't tell you how helpful it has been, as we've consulted with VA and as they continue to turn out amazing products in suicide prevention, to refer people to the web-based training in order to facilitate the development of other technology. So it's really, really important that the training be done not just with one clinician, one type of clinician, but much more broadly.
And that's the end of my talk. Thank you very much.
RICHARD MCKEON: The next presenter is my esteemed colleague, Brian Ahmedani.
BRIAN AHMEDANI: All right. Thanks, Greg. Nice work.
Let me just try and screen share here. All right. So nice to see you all today. Thanks for the invitation to talk at this really interesting workshop. I'm glad that we're doing this. This is kind of one of those issues that's really central to implementing these suicide prevention interventions and processes within healthcare systems and within care in general, and, you know, we really need to think more about it, so I'm glad we're doing this.
I just want to acknowledge funding support from a few NIH grants, a SAMHSA grant, as well as our new consortium supported by Blue Cross Blue Shield which is working with provider organizations across the State of Michigan to implement suicide prevention practices in each of their systems.
So you all know the background here, but what I want to do is set the stage because we've talked a lot about different environments, and we've talked about different individual interventions. But I just wanted to kind of set the stage that, you know, healthcare systems are really important for suicide prevention, and so we really have to think about the best ways that we can implement training activities in healthcare systems.
As you can see here -- and I use this every time I say anything, so if you've seen this slide, you'll probably see it 4,000 other times -- this is my favorite one because it's basically showing that, you know, most people touch the healthcare system before they die by suicide. And what's really important is that a lot of people don't actually have a mental health diagnosis. In fact, half of people who touch the healthcare system don't have a mental health diagnosis before they die, and most of their visits are actually occurring in primary care and non-behavioral health settings without a mental health diagnosis. So we have to keep that whole context in mind.
We have a lot of great interventions. Many of them have been discussed in this two-day event, some intensive, some brief interventions, and certainly about a third of people who die by suicide get to behavioral health, but most people actually don't. So we have to think about approaches for training in both contexts, and that's why I want to set the stage with this.
So for the first time, we actually have interventions, and there's real support and a real opportunity to implement these interventions within healthcare systems. There's a lot of national, Federal, and state support; tools are available; there's real evidence for evidence-based interventions; and there are now clinical requirements for suicide prevention activities. And now hundreds of healthcare systems have really started to implement all of these different components of what we call Zero Suicide in their healthcare systems -- a range of different interventions, many of which we've talked about over the two-day period.
And so just to kind of tell you a little bit about my experience: at Henry Ford we launched Perfect Depression Care in 2001, and we've evaluated that. It was the first real model of Zero Suicide implemented. We're currently leading a multi-site evaluation project of different versions of Zero Suicide implemented in each of those systems. We created a really exciting new model of Zero Suicide that is being implemented across all of our emergency departments here at Henry Ford. And then also, as I mentioned before, we're going to be working with all the provider organizations across the state implementing different kinds of suicide prevention practices and working with them on rapid-cycle quality improvement as part of a statewide initiative here. It started this year and is moving forward over the next few years.
So this content is based on most of the feedback that we're getting from all of these different health systems as we're trying to talk with them about doing all these activities. Each barrier or obstacle is also an opportunity for research in the future, and that's kind of where I'm headed. So there's no question that training is required to teach clinical skills. Let me rephrase that. It's really important to recognize that we have to have training in order to do things -- health systems and providers don't just start doing things right -- but sometimes even with training, it doesn't exactly go as planned.
We've made a lot of progress. There are a lot of trainings now available for some of these evidence-based interventions. You know, we just heard from Greg about Safety Planning, a really exciting development, and all of these other kinds of big-time interventions we've been implementing over the years. Many of them have their own dedicated training modules associated with them, and that's really good, but there are also some challenges. And so I want to spend most of my time really focused on this slide, because this is where we're focusing not only on the training barriers but also on the fact that, like I said, barriers lead to opportunities -- opportunities to create research approaches and models to fill these gaps.
Just to kind of set the stage, all of these things have to do with system- and provider-level issues with training that happen within the context of health systems. All of the wonderful trainings and dedicated processes that we've heard about over the last couple of days are at least influenced in some way by each of these components when we actually try to take them out of those trainings and put them into practice.
So, you know, obviously the first one, trainings are really often long and they're very expensive, thousands of dollars for each clinician, tens of thousands of dollars sometimes. And not only do we have to pay for the training, but we have to take our clinicians out of the clinic, which is actually the big problem. So the cost of taking someone out of the clinic or all of our clinicians out of the clinic for a couple of days is exponential, you know, and so not only is it a cost barrier, but it's also an access barrier. So we already have these long access gaps and we take clinicians out of the clinic, and then that exacerbates those access issues.
So how can we address that kind of a scenario? Is there a way to make shorter trainings or do them creatively? We've really tried to come up with some creative approaches to this, like offering Saturday trainings or trying to find shorter trainings that address the same content, but it's really a challenge in this dynamic between getting kind of the top content but also balancing the time requirements.
There are a lot of trainings just in general, but those trainings are often for interventions that aren't evidence based, and healthcare systems often don't know the difference. So there's a lot of trainings, and we get a lot of questions. "All these people are doing this training?" Well, that's not even an evidence-based intervention that would make sense to use in this situation. But healthcare systems hear about all this information from other healthcare systems, and so sometimes they can slide into providing a bunch of trainings. Sometimes multiple trainings on the same issue, doing it three or four different ways, can be confusing for providers and for their staff, and they're teaching them things that really haven't even been tested yet.
We do actually have a lot of evidence-based interventions, like I've been talking about, but sometimes the trainings for those interventions aren't evidence based. So we don't really know if the skills being delivered in those trainings actually get the providers to do the things that we said we wanted them to do in the intervention. You just heard from Greg about all their great work with the Safety Planning Intervention training, but many trainings are not this way. And so, you know, how can we make sure that the trainings for those evidence-based interventions are themselves evidence based?
So sometimes trainings teach us things that really we could never do in practice. What if I want to do a safety plan in a primary care visit? Well, I’ve got a 15-minute visit and sometimes it takes me a little bit longer. How can I -- how can I implement a suicide prevention process if I'm training my providers to do it within a time that takes longer than the visit itself? More than the time they have allocated for that person? And that's not anything wrong with the training itself or the process, but maybe the fact that we're sometimes using them with the wrong people or in the wrong settings.
We really focus our trainings on institutions. You know, the number one goal of the National Strategy [for Suicide Prevention] was to connect people across business sectors. And so when we train somebody in this setting and the patient goes to the next setting, we don't really know how that's going to -- what's going to transpire there, and they may not know anything about what's being used.
Most interventions are really used in bundles, and yet we train on individual interventions. So if I'm supposed to do a caring contact, safety plan, cognitive behavioral therapy, et cetera, and I'm really only training on each individual intervention, how are those things used in context, or how can we develop training programs that use those things as a model together? Health systems are all very different, but the trainings are mostly the same across contexts. So sometimes we train different organizations the same way, but they all have unique resources and staffing. Some trainings don't include stakeholders or ask them what they really want.
The burden of training is always on the health system or the provider because most of these trainings aren't available in educational programs or even required for CEU credit, so health systems are burdened with taking their people out and doing all this training from scratch. Many of the trainings are for interventions that require in-person visits, but we need to figure out how to also train people to do some of this stuff virtually since that's where the field is going. And we really need to think more about who trainings are for. A lot of the trainings we have are really for specialty providers, but we need something else for primary care physicians, where everybody is actually going before they die by suicide. How do we train the right people in the right place? And maybe that's a different type of training. And then there's logistical issues like dose of training, frequency of re-training, when and where and who should get what. So all of those logistical issues about training are equally as important to understanding how to do this work.
And I talked about process, too. It's not just, here's a training, but it's also how do I communicate to my providers and my staff about that training? When's the right time to do it? And how do I communicate who's supposed to get which training? How do I align all those trainings? So there's a lot of things that we've had to figure out as a healthcare system that there's not a lot of evidence to support. And we're getting a lot of questions about all these things, and we're having to make up things on the fly and really try to adapt within all these big healthcare systems. But we really, really need kind of the next level of paradigm-shifting opportunities where we take training to the next level and start addressing some of these healthcare issues now that we have some really great trainings available.
So I'm going to stop there.
RICHARD MCKEON: Okay. Next up is Colleen Carr from Education Development Center. So, Colleen, please take it away.
COLLEEN CARR: Great. Thank you, Richard.
Good afternoon, everyone. I'm Colleen Carr. I serve as the director for the Secretariat of the National Action Alliance for Suicide Prevention at the Education Development Center, and I'm really going to be building on what many panelists have already spoken about but talking a little bit more about the policy space and how do we leverage and really build some policy research that can help impact and strengthen future training programs.
So disclaimer. The Action Alliance receives support from both public and private sector funding, but the Secretariat itself is supported by the SAMHSA-funded Suicide Prevention Resource Center at the University of Oklahoma.
And so I'm going to start by providing an overview of some of the current national recommendations that we have for clinical training, talk about some of the policy approaches that we have seen take off in recent years, identify a few opportunities for progress, and then really outline a number of key policy research questions that remain to be answered that could really help strengthen our clinical provider training efforts.
For those of you not familiar with the Action Alliance, we launched in 2010 and serve as the nation's public/private partnership for suicide prevention, bringing the public and private sector together to advance the National Strategy. And so we work with many of the folks on this call, including NIMH and many of the panelists, to really align and strengthen our suicide prevention efforts nationally, but really guided by the National Strategy. Thinking about why clinical training, in everything that we're doing, we're thinking about a comprehensive approach to suicide prevention. And time and time again, we've -- the literature has demonstrated that clinical training really is a core piece of a comprehensive approach. I won't go over the research in detail as it's been outlined by many of the researchers at today's workshop, but we know that clinical training can lead to changes in how we're delivering care, adhering to best practices, and can even contribute to change in clinical and organizational policy. And we know that when providers have higher confidence, they're more likely to use recommended practices. So as we're all working together to increase the delivery of evidence-based suicide care, clinical training really has continued to be a core component of what that looks like going forward.
So as I mentioned, our work is really guided by the National Strategy and working with partners to really advance those goals and objectives, and there are a number of national recommendations that exist now for clinical training. So I'm just going to walk through them very quickly to give a landscape as to what are the recommendations that are out there and how are we doing in moving them forward.
So starting back in 2001 with the first National Strategy for Suicide Prevention, Goal 6 recognized the need to implement training for the recognition of at-risk behavior and the delivery of effective treatment. So there were objectives in this strategy that talked about really engaging education programs and medical residency programs to integrate training around the assessment and management of suicide risk and identification. It included an objective around social work and graduate programs and really looking at how do we get education programs to include clinical training around suicide risk. And then it also discussed how do we engage licensing programs and recertification in clinical training efforts. So that was in 2001.
In 2012, the National Strategy was revised. Goal 7 was highlighting the need to provide training to community and clinical service providers on the prevention of suicide and related behaviors. And so, again, similar themes of making sure we're providing evidence-based training to providers that are out there delivering care now but also looking at the development of core education and training guidelines on the prevention of and treatment of suicidal behavior, looking at graduate and continuing education programs, and, again, looking at accrediting and credentialing bodies as a really important vehicle to help go to scale with clinical training.
So building on the release of the National Strategy in 2012, the Action Alliance launched a task force and in 2014 released Suicide Prevention in the Clinical Workforce: Guidelines for Training. And these were really developed to be a minimum set of training guidelines that could be adapted to each profession given their unique situation and unique role that they play. So this is on our website to date and really is meant to be kind of a foundational report that can help each profession really develop its unique guidelines for clinical training.
And then the final report with national recommendations is the Surgeon General's Call to Action to Implement the National Strategy for Suicide Prevention, just released earlier this year, in 2021. So we recognized the 2012 strategy still had a lot of work left undone, and yet the evidence has continued to build on where we can have the greatest impact on suicide nationally. And so in this Call to Action, really recognizing the need to increase clinical training again. And you'll see similar strategies that have been called on for progress time and time again: education programs, accrediting bodies, professional associations, and really making sure that behavioral health providers are trained with evidence-based practices. And all of these reports are on the Action Alliance website if they would be helpful to you in your organizational efforts.
So we recognize there's a number of recommendations on the books around how we need to really elevate clinical training for providers. There has been some movement at the state level in moving forward with policy approaches to this. So an AFSP white paper in June 2021 outlined where we're making some progress here. There are 10 states that currently mandate training in suicide assessment, treatment, and management for health professionals. Those are California, Indiana, Kentucky, Nevada, New Hampshire, Oregon, Pennsylvania, Tennessee, Utah, and Washington. And then there are four states that encourage training for specific health professions. And what you see as you dive deeper is that there's also now 10 different ways that states have mandated or required training. So it's a combination of who's included and what professions are required to receive training. Everyone from psychologists, to counselors, social workers, EMS, marriage and family therapists, mental health counselors, PAs, RNs, NPs, and others, and each state has a different combination of providers. Some have very few -- one or two professions listed and others have gone quite broad and deep in which professions are included.
And it also really varies as to how much training is outlined in the policy. You know, some are six hours one time to get a license. Others are maybe one hour for a license renewal, or two hours. A couple have requirements that every couple of years providers undergo some continuing education around suicide risk assessment and treatment. So it really, again, varies broadly as to how much training the policy is requiring in these states. So it's a combination of who is being trained, what they're being trained in, and how much. There are 10 different options.
There has been some recent progress with the California Board of Psychology. A number of years ago, they decided to require completion of a minimum of six hours of coursework at one time for licensing. I mentioned the Clinical Workforce Guidelines that we released a number of years ago. We did see the American Psychiatric Nurses Association really take those guidelines and use them as a foundation, adapt them to their own profession's needs, and create psychiatric mental health nurse essential competencies for the assessment and management of individuals at risk for suicide. And they have been able to build that into training within their field to really increase competencies in that space. And so that's an example of how, at the national level, an association can really take a leadership role and adapt and make this an organizational priority.
So there's a number of opportunities for progress. We continue to need more professions to adapt the competencies to their unique workforce needs and the role that they play in the delivery of suicide care. We need increased suicide risk assessment- and treatment-based questions on professional recertification examinations. We really need to explore ways we can incentivize clinical training in education programs. I think we're seeing more and more that there are classes being taught and training opportunities, but there's not a systematic approach to it. It is a class here or a leader there, but how do we really incentivize it to go to scale within education programs so there can be an expectation that people coming out of that program have been trained in suicide prevention? And then really an evaluation of these initial policy efforts to legislate training, and I'm going to walk through some of the policy questions that remain in those spaces.
So when we look at some of the state policy efforts, what's the recipe for success? How do we do more research to understand what are the core elements for state legislation if we want to experience the greatest benefits from those policy efforts? What are the professions that should be included? How much training should be legislated in detail versus left for definition later? How often? Is it one time for greater hours or, I think, as Dr. Ahmedani was saying, what is the dosing that really would set us up for the greatest success? And what type of training? Some of the policies were somewhat specific in what should be included and others deferred that to implementation efforts to be defined later.
So research on the impact. For states that have had these policies in place for a while now, how did implementation go? Did increased clinical training statewide increase adherence to best practices or drive organizational-level changes or even behavior change around suicide? And then really, what is the alignment between the training requirements that are getting support in the policy space and the system delivery realities? Are the people who are most likely in the greatest need of that training the professions that are being identified as the target audience? So do the requirements map to the delivery system realities in that space and in that state, and how do those realities from the care delivery system become part of that policy conversation, as well as who needs to be at the table?
When we look at the environment, we know clinical training is critically important, but we know it also needs to be supported by the organizational environment and by policies, practices, and protocols. So more research is needed as to what are the environmental factors that really set this type of policy intervention up for success, so that investments in clinical training or policy requirements around clinical training can have the greatest impact on suicide care overall.
Implementation. What happens after the policy is passed? You know, we know there's been at least 10 different states that have gone through this process and had 10 different models. What are the characteristics of implementation at the licensing board level or elsewhere that really bolster or limit the impact these training requirements can have, and how can we learn from the states that have gone through this experience so when new states are exploring this or considering writing legislation, that they are learning from the states that have gone before them?
And who's missing? How do we think about ways to reach non-licensed professionals or the future workforce through policy, who maybe aren't included in the efforts to date that focus on licensing boards and licensed professionals? And we know -- we've heard from many of the speakers during the workshop of the key role that other non-licensed professionals in the workforce play in supporting individuals at risk for suicide and being part of the delivery system.
When we look at education programs, there's been a number of efforts to really articulate how common suicide risk assessment is in education programs, and how do we really keep our pulse on progress around scaling up clinical training in education programs? How many programs are now requiring this as part of their curriculum, and what types of programs, and how do we start to understand who is receiving this as part of their training effort? Specifically, in the education systems, you know, what are those implementation barriers? We've heard about limitations in time, and particularly when we get out of behavioral health specialty areas, how there's real significant challenges around incorporating suicide prevention into those areas because there are so many other demands, whether it's primary care or elsewhere. But how do we understand those implementation barriers specific to education programs so efforts to really create some systems to go to scale with clinical training are informed by the realities of that system and the education programs and all of the requirements they already have to meet?
And then really defining success. How do we know we're making progress? How can we really articulate what our process and outcome measures would look like to get to a place where there's consistent suicide clinical training in education programs, and understanding how would we see the impact of those investments when we look at suicide outcomes or delivery practices? And then really looking at incentives. What practices, policies, or incentives could be most effective to embed suicide clinical training in education programs nationwide? I think we've seen this in some other health areas, whether it's around workforce development in overdose prevention or elsewhere, but how do we start to also incentivize the development of a workforce that is trained in suicide care?
Overall barriers to progress. You know, how do we start to think about those market forces, and how do we increase market demand for clinical providers that have received this training in suicide care so when they enter the market as a recent grad or as a new hire, that we have started to create that market demand for providers that have received this training? And I think going back to the previous conversation as to how much of the burden is on the organization at this point to train their workforce, and how do we start to spread that out among a number of the players in the market so everyone is playing their role from the education system to the organizations to the policy? And again, what are those model policies, not just at the state level, but what are some policy levers that haven't been activated yet at the national or organizational level that would support clinical training efforts?
And then I think this was also mentioned. I think Brian and Greg both mentioned this. But as we look at a changing delivery system where telehealth is increasingly how we treat patients, when we think about state lines and efforts to facilitate providers working across state lines, how does that change the care delivery system? What opportunities does that present to help, again, incentivize provider training because we've taken away some of those state-specific efforts? So what opportunities exist in this new environment to again prioritize clinical training so whichever -- wherever you are seeking care, it's not dependent on which state and what policies are in place or whether your provider has been trained?
So in closing, you know, the literature is consistent that improved clinical training in suicide prevention increases the use of best practices, and that is what we're looking for. We want improved delivery of suicide care. There have been a few policy efforts to translate these findings into population-level efforts and increase the training of providers overall. But even with that progress, key policy questions remain to be answered as to how we can most effectively scale up clinical training requirements using levers like state legislation, licensing boards, education programs, accreditation, et cetera, so really needing to start to answer some of those policy questions so we can go to scale with clinical training for providers.
Thank you so much for having me as part of the workshop, and, Dr. McKeon, I'll turn it back to you.
RICHARD MCKEON: Thank you so much, Colleen, and thank you also to Greg and to Brian. I'm just going to make a couple of comments and then we'll get to the questions that folks are asking.
You know, the title of this session was on paradigm shifting initiatives, and I think that each of our presenters in their own way has identified some really important paradigm shifts. And let me start with Greg Brown's important talk on safety planning. You know, prior to the development and dissemination of safety planning as an initiative, part of what was happening clinically all around the country was that what clinicians were relying on in dealing with suicidal patients were no-suicide contracts. And they were being used before people were discharged from inpatient units, in emergency rooms, everywhere. And, you know, the response of the suicide prevention field was to, you know, to kind of be scolding of these clinicians who were using no-suicide contracts, which had no evidence to support them. And, in fact, there was even some evidence about potentially harmful effects. But it wasn't particularly helpful to simply scold clinicians for using no-suicide contracts.
There was a need for something to replace it, something that was deliverable by a clinician in a busy environment. And that was where I think safety planning was particularly important, because it was something that could be learned and delivered potentially in a variety of different settings. So I think that the use of collaborative safety planning in many ways is a paradigm shift. And it was great to hear from Greg the various ways in which his team and others are tracking not only whether safety planning is being delivered, but whether it is being delivered well. So again, I think this was a major shift with significant implications for the future in terms of clinical care for suicide prevention.
You know, with Brian talking about the work of Henry Ford on Zero Suicide and, before that, its Perfect Depression Care program, there's no question that Zero Suicide is also a paradigm shifter if not a paradigm buster. The very name "Zero Suicide" is really a headlong rebuttal of the pessimism and fatalism about suicide prevention that is sometimes experienced in clinical settings, because we all know of instances where we or those we knew did everything we could think of and a suicide happened. And so we have to be careful about the pessimism and fatalism about what can be done to prevent suicide in clinical settings that comes from those types of experiences.
And, you know, Ed Coffey, during his time as a leader at Henry Ford, really emphasized the importance of the goal of Zero Suicide, and now we have at SAMHSA an over $20 million Zero Suicide grant program. Both the House and Senate in their draft appropriations bills for this coming year provided an increase to Zero Suicide, so we'll see what happens with that. But it is very much a paradigm shifter, the idea that we could and should systematize everything we know about suicide prevention in clinical systems and try to move it forward.
And just another area, one component that Henry Ford has really been the leader on that, I think, is also a paradigm shift, also potentially transformative: the idea that a healthcare system should be tracking how many people under their care they lose to suicide. And Henry Ford did that first and published on it, and I think it helped attract a lot of interest to the Zero Suicide initiative. Michael Schoenbaum at NIMH has been a really powerful advocate for more of this being done in our Zero Suicide grants from SAMHSA. We're now asking for this information, not because we're looking for anyone to blame, but because we owe it to those that we've lost and their families to take a close look at what happened and to try to see if there's something we can learn to improve. We found that groups like Centerstone have been able to do this, so again, very much an important paradigm shift.
And then finally, with Colleen Carr's presentation regarding the Action Alliance, well, you know, it shouldn't be a paradigm shift, the idea that mental health and other healthcare professionals should be trained in suicide prevention, but it actually is because so often it is not part of the training. I often give the example of how, in an American Psychological Association-approved Ph.D. program at the University of Arizona and an APA-approved internship at Yale, the only training that I received, the only lecture that I heard on suicide prevention, was one that I gave because our psychopathology teacher said pick a topic and present to the class, and I picked suicide, and that was it. Now, there are great training programs, but, you know, the movement toward making training accessible, and how to best do it, is really important. And Colleen referenced the different states that have moved in this direction. We're at something like 30 states that have actually mandated training for school personnel, but we know very little about what impact it has had or what's the best way to do it, the very questions that Colleen helped pose for us.
So thank you to all three of our presenters. We do have a couple of questions that came in, and one is for Greg Brown. The question was, "What is your sense of the generalizability of collaborative safety planning when we look at the potential for it to be used not only in the VA, but in crisis centers and in various kinds of healthcare settings?" Can you speak to the generalizability?
GREG BROWN: So -- yes. Safety Planning has been used quite widely. I referenced the study with Barbara Stanley in New York State that's being done at, I think, 358 clinics where we did training on how to use Safety Planning in outpatient mental health where, in fact, a lot of the patients who kill themselves actually attend, not the inpatient units, but outpatient mental health. And so Safety Planning has been done there. It's been done in specialty clinics. It's been done in primary care. It's been done in jail settings in Jennifer Johnson's and Lauren Weinstock's study. We could go on and on with a long list, but it is pretty widespread.
RICHARD MCKEON: Thank you, Greg. Brian, you're doing work in emergency departments. What are the challenges for doing training in busy emergency departments? There are obvious challenges in many areas. What about those specific to emergency departments? It's important to mention because the White House released just last week a military and veteran suicide prevention strategy, and it really emphasizes the important role of emergency departments and of embedding Safety Planning in those emergency departments. How much of a challenge is that going to be to get that training done?
BRIAN AHMEDANI: Yeah. I mean, I think that's exactly it. I mean, the bonus is that I think some of these interventions are really well adaptable to the emergency setting. So like Safety Planning is a perfect thing to do in the emergency setting because you have a person who's there for a while, and you can deliver an intervention during that period. The challenges are always, you know, who should do it, when should they do it. So do we do everything during intake? Does that get complex or confused with all the other intake things that are happening, or do we do it when they get into the room? When then do we do the intervention? And then, is that done by the actual ED provider or do we have embedded people in the emergency room?
And then I think a lot of it is about process. So we discharge people right from the emergency room. And the thing that we really are trying to focus on in our emergency department work is really making sure that person gets to the next setting. The problem is that EDs are often not connected to other forms of care, so people get discharged into the abyss. What we really need to do is develop a process to connect them to the next level of care. Some people go to the hospital, but a lot of people get discharged to primary care or outpatient behavioral health specialty settings, so knowing when and where people should be triaged from that setting and knowing the process for how to do that is really key.
And one thing I think we're missing, but one thing we really have the opportunity to figure out -- I mean, what's great is we have all these wonderful interventions, and now we have to figure out all these logistics about how to implement them, and who should do it, and how to train people. So, you know, it's just who should do all these things, and I think maybe that differs by place. And then also there's often huge gaps in time until the next appointment in behavioral health, so how can we train providers to make sure that we work directly with patients and recognize their suicidal needs, and then work to try and find appointments on a more appropriate timeline?
All these things are logistical challenges, but they're also incredible opportunities with this issue of transitions that I know, Richard, you care so much about and so do I.
RICHARD MCKEON: Thank you, Brian. We also have not a question, but a good suggestion from an audience member who says, "I find the cost and time the biggest barriers with people trying to get suicide-specific training. If someone would formulate a list of offerings that are free or low-cost, available virtually, that are vetted as good trainings, [that would be ideal]. This is especially a barrier I see with healthcare and first responders." I think that that's a great point. At SAMHSA we'd be happy to work with NIMH on seeing if we could develop such a list of no-cost or low-cost trainings for that. So I thank the audience for that.
I did have a question for Colleen. Colleen, as you know, the Action Alliance has been very much involved in this, working to try to help prepare us all for the coming of 988, and we know that that has training implications. Actually, one thing that SAMHSA has just funded is adaptation of the Counseling on Access to Lethal Means curriculum for crisis responders. It's more focused on outpatient right now. But what I'm wondering, Colleen, is what do you see as some of the key policy issues coming with training that would be associated with the move toward 988? As most people know, July 16, 2022, is the date by which every cellphone, every landline, every voice over internet provider has to make 988 accessible by phone.
COLLEEN CARR: Yeah. Thanks, Richard. I think there's a couple of things that come to mind, and I think this is a real opportunity that we're in right now. When we think about the political will and the attention on mental health and crisis as a country right now among policymakers and major stakeholders, there really hasn't been a moment like this before. This is just a really unique time, and you kind of have that sense of urgency, and also, if we're not going to do it now, when is it going to happen? And so when we're thinking about the workforce, I think your point about 988 implementation is really key.
There's the whole continuum of how are we training those who are now going to maybe be playing a different role in crisis response, or a changing role in crisis response, from the provider perspective, from mobile response to public safety to others in the health system? It really is a reframing of crisis response, potentially, and so I think there are a number of workforce needs. And I think the other item that's come up in some conversation is the pipeline of future crisis service workers. There's not all of a sudden this huge infusion of a new workforce for crisis care or for crisis centers. That needs to come so we have more capacity to work with those transitions, as Brian was talking about and you were talking about, from the ED to community or from mobile response to outpatient, and these different flow patterns of where people go to get help.
So how are we incentivizing people to enter crisis care or mental health care when they're considering their professional track? And I think there's things around levers that have been used in other fields to incentivize people to go into that workforce and stay in that workforce and not get burned out and leave quickly because I think there is this need to really develop a more robust and deeper workforce in crisis response. But I think, you know, when we're talking about clinical providers who are working now or the future workforce, I really feel like we're in this moment of we need to make sure we've got -- we're building on the successes we've had to date and bringing really good solutions to the table because that political will does seem to be there in a way at the state and national level. And the leadership we're seeing right now just really is so impressive, and we want to make sure we're bringing the right answers.
So just as the state example that was given, you know, people are investing in training, we want to make sure they're investing in good training, evidence-based training, and we're giving really concrete solutions for people to be advancing as part of this time.
RICHARD MCKEON: Thank you, Colleen. There's a question from Will Moore for you, Greg, that says, "Have you or others used your Safety Planning in other pathologies -- diabetes or in other areas like pediatrics? We're looking at applying some of these methods as part of overall health."
GREG BROWN: Yeah, the examples that come to my mind are using Safety Planning with any of the impulse disorders, like bullying or opioid use, where there's kind of an impulse -- an urge -- to act on destructive behaviors. The evidence is not out, and perhaps the studies have not even been started yet, but we have seen applications in disorders such as those.
RICHARD MCKEON: Thank you, Greg.
RICHARD MCKEON: There's a comment from Jody Mulholland saying, "We need mental health drop-in centers, halfway houses, et cetera.” There's certainly an awful lot that we do need. Thank you for that comment, Jody. You know, another paradigm shift that's certainly been present over the last couple of years, at least in suicide prevention, is utilizing and optimizing peer support. What do the panelists or other folks think about how we can best optimize peer support?
BRIAN AHMEDANI: Well, I'm happy to chime in on that. I mean, I think this is potentially one of the most important questions that we face right now in this field. I mean, peer support is really popular. Many patients love it and many will say this is the most powerful thing I have in my entire treatment program, and yet we don't really know a lot about how to do it. There are a couple of -- there are some good examples in the substance use field. There are some examples in mental health. Very few examples that are evidence based in suicide prevention. We actually have a trial ongoing right now testing a peer support model following discharge from an inpatient mental health hospital, and I think that this is vitally important.
We first have to figure out how to do the intervention right, and then we have to figure out how to build the training for the intervention. So these are two kind of paradigm-shifting issues. I think we're on the verge of figuring some solutions out -- a very structured process for how to do this -- because I think it's very needed, but also we need to know how to do it. And so there are a lot of states that have certification programs. I mean, it's not clear in this field yet what we're certifying. You know, they're certifying the peer specialists, but not specifically for suicide prevention. And so I think what we need to do is figure out how we can train peers, but not only train peers, but also train the providers and the health systems for how best to work with peers and integrate them into the clinical process. Both sides, I think, need training.
So this is a really important thing, and Greg, it looks like you're going to chime in.
GREG BROWN: Yeah. So we have a grant -- Barbara Stanley has a grant from the American Foundation for Suicide Prevention with Christa Labouliere, Kelly Green, and myself to do just that: to develop a training in Safety Planning for peer support specialists and see how well we can train them and how well they use it. There's this really strong need. And I have to say, one of the things that we learned about Safety Planning in the VA has been that the veterans themselves have been training other veterans. They haven't been [formally] trained in it, but they've been showing other veterans how to use Safety Planning. We know that anecdotally. We think it's a natural fit.
RICHARD MCKEON: Thanks. Colleen, any thoughts on that?
COLLEEN CARR: Yeah. I think those are both good examples of the opportunity to really scale up the workforce, but with the research questions that are still needing some answering so we know how to do it most effectively.
RICHARD MCKEON: Yeah. Let me ask others on the call if they would like to respond to that question about how do we optimize peer support.
ANTHONY PISANI: There are a number of really helpful frameworks and guideline documents coming out of Australia, I think, at least in terms of development, as well as some nice systematic reviews. I think that they're a few steps ahead in terms of getting kind of national high-level buy-in on the role of lived experience and peer support. So I think it's exciting to hear the data that's going to be coming out. I would encourage people to look at that, including -- I can try to look for it -- a systematic scoping review about peer work in suicide prevention. You guys probably already have seen that, but I found it really useful to see that, although there's not gold-standard research yet, there is some data there, and that was encouraging to me.
RICHARD MCKEON: Great. Thank you, Tony. Dave Jobes has his hand up.
DAVID JOBES: Thanks, Richard. I think one of the things that I've really been struck by is a Small Business Innovation Research grant that we've gotten from NIMH looking at an evidence-based approach to an ED-based intervention, and it was originally an avatar that was based on me, which we did as a proof of concept. And then we engaged a panel of people with lived experience who basically said get rid of him and let us kind of take over -- take the wheel to shape this intervention.
And so we've had eight to 12 individuals who have lived experience of suicide and having been in emergency departments, who have directly informed and shaped Jaspr Health, which is the intervention I'm describing. It has been transformative because they're saying, you know, I've been in this situation, and they really are the experts now that are driving the evolution and the innovation of the intervention, and we've found that to be incredibly helpful. So that's my thought of engaging a panel of people with lived experience to shape and guide interventions, especially in the technology lane like we're trying to do.
RICHARD MCKEON: Thanks, Dave. Ursula Whiteside, you have your hand up, too.
URSULA WHITESIDE: Well, Kate Comtois was talking last week about just this idea that we learn by doing what we're going to teach people how to do. So, you know, our training is in the background of dialectical behavior therapy and DBT skills. And so ultimately, there's a world where we're all peers, because we've all learned and used the tools that we're teaching others -- the tools for managing really intense emotions that for some people drive suicidal thoughts and for other people drive them to lash out at their partner. So I just think, oh, there's another level here that's even a little bit more meta, where, wait a second, we're actually all peers here, and how can we get closer to that?
RICHARD MCKEON: Great. Thank you, Ursula. And I note that Tony put into the chat a link to the article that he was referring to, and thank you, Tony, for referring us to the international literature. I think there's an awful lot that we can learn from other nations. There's a group called the International Initiative for Mental Health Leadership that regularly shares information, and I was struck by the importance of work with indigenous communities around lived experience that was common across all of the participating nations: in Canada, in Australia, in New Zealand, in England, Ireland, and Scotland. So that's really important to be aware of, so thank you, Tony, Ursula, and Dave.
Are there other questions that folks have? We have perhaps a minute or two before we will be giving this back to Stephen.
RICHARD MCKEON: I don't see any, so. Well, thank you, everyone for your contributions, for the great presentations and for the great two days of meetings. So, Stephen, back to you.
STEPHEN O'CONNOR: Okay. Thank you very much, Richard. We are now going to invite comments from Dr. DeQuincy Lezine and Ms. Shelby Rowe, so I will turn it over.
SHELBY ROWE: Hello, everyone. Quincy and I talked, and I'm going to go first today with my comments, and then he'll chime in a little bit later. First off, I want to say something continuing the conversation that was just going on, where Dr. McKeon was asking about peers. I think that some of that is really trusting in recovery, trusting in peers to be able to have that therapeutic value in the team. As someone with lived experience who worked in the public health world, when I'm talking to clinicians, it seems like it's kind of a catch-22 or a mixed bag, where we know there are not enough trained clinicians, and over the last few days, we've really been stressing the training gaps and the quality gaps. So even if we do encourage everyone, you know, to go get help, we know that that help doesn't always exist.
So peers not only fill in a vital gap, but someone who is a peer specialist is someone who knows from personal experience that recovery is possible and probable. And so I think on that gut instinct, what they may lack in clinical training, they can make up for with a sincere belief in recovery and support for that individual at risk.
As we've gone through the trainings today and last week, I have to say, it really strikes me when we're talking about, you know, quality control and the barriers. While they are very real barriers, and I totally understand budgets and the time constraints on our healthcare systems, any level of life-threatening care takes time and investment. And as a field, I would love to see that paradigm shift where quality isn't optional, where having the best-trained clinical workforce is not an added bell or whistle but a required standard of care across the country.
While it's great that we've got 10 states that mandate training, it would be excellent if, in every state's education programs, not a single person graduated from an MSW program or a clinical psychology program without sufficient training -- not a single person who could not demonstrate that they could do a suicide assessment, intervention, and effective treatment -- to make that standard of care the norm instead of the exception in our field, because lives depend on it.
You know, one thing that I have to cling to, although the state of training is dire -- and I thank my partner, Dr. Lezine, who will probably lighten the mood a little bit; I'm pretty heavy today -- is that we know from research that up to 92 percent of those who survive a medically significant suicide attempt make it. They don't go on to die by suicide. That is a higher survival rate than heart attack, stroke, or even breast cancer. So in spite of our poorly-trained national mental healthcare system, many of us -- like DeQuincy, myself, and so many others -- have survived and will continue to survive. And we may not have all of the answers, but by listening to those with lived experience, those who have survived that journey of recovery, we can only make our systems stronger.
I'm going to hand it over to Dr. Lezine for his initial comments, and I think both of us have some other things to say and other questions for the presenters.
DEQUINCY LEZINE: Thanks, Shelby. What I would try to contribute and add to that, for sure, is that having those types of policies that you were talking about, like the emphasis on provider training that we've seen in California and some other states, is definitely a good start. And I would like to see those expanded in ways that really do make these the standard of practice, as opposed to something that would be a great addition to what folks feel like they are able to take on within their limited time frames.
I want to go back to what Dr. Pisani was saying near the beginning of the day in terms of including people with lived experience throughout that clinical education process from co-design, co-development, even the evaluation of the outcomes. I think it's pretty similar to having people who can represent the culture that is nearby, represent the local culture as well as local perspectives, and having that engaged throughout the process. It was also part of an NIMH discussion that happened at some point around AI and integration of training algorithms.
And one of the things that was noted there was the importance of having perspectives engaged in the beginning part of the process, because that's where the trajectories are set up for a lot of these things, and then there are small adjustments that are made based on the input from the trainees. But that initial base that goes into there is extremely important. Same with some of these policies that we're going to try to adjust based on those initial policies: the California policy, for example, required six hours, but what could be included in those six hours was so vague that pretty much anything could count for it. And so the ending policy, although it was very well intentioned initially, became so watered down that it was practically meaningless. So I would want to encourage folks to examine what was intended from that, but not necessarily copy the details from it.
But then also, what does "lived experience" really mean? Does it mean "I'm glad you lived," or does it mean "this is part of the life I've lived"? In my perspective, it's the latter. It's the totality of my experience, influenced by identity, and location, and social group, and education, and biology, as well as other experiences, and those contribute to times of suicidal crisis. And then those periods of suicidal intensity become part of my life experience. Overpathologizing and overmedicalizing the suicidal experience leads us to miss things, and that was noted earlier in terms of somebody missing gender dysphoria, because a top-down provider-patient type of perspective often ignores the internal experience of the person you might be working with. And if they had explored that from the person's perspective, then probably those types of things that they were struggling with existentially might have been brought to the surface.
The last thing I would note kind of going along that policy perspective is something that comes to mind from -- I was recently working with my kids on a robotics experiment and then working with folks who were presenting for physics outreach. And one of the demonstrations was around leverage and just the point that if you go further out and are still connected to the point that you are trying to change, then you have a lot more power to move something.
If we go to the community level, then there's sufficient distance as well as sufficient energy to try to get some movement, but it has to stay connected. Sometimes when we move so far out to national or to state, at times it becomes so disconnected that you're not really able to make that type of change. And if you go too small, too centered into the clinical or medical context, it's too close to really be able to make a lot of difference. But when you go to the community level, people are still connected to the local context, connected to the issue emotionally, but it's also far enough out so that you could get that type of political will that could really make a difference in having some leverage on the types of policy changes that we want to see happen to integrate lived experience and to move suicide prevention forward in terms of the training as well as especially the outcomes that we have in helping people to live lives that they really want to have.
SHELBY ROWE: I want to add a little bit on how the Suicide Prevention Resource Center is now even more committed to integrating lived experience. We have created a Lived Experience Advisory Committee, or LEAC as we're calling it. And it's now SPRC policy for members of our LEAC to advise on all SPRC projects. We're working through all of the logistics on that, but we're really hoping in the next year or two that we can come back and show some of the results, because I know it will make our efforts in the field stronger. For every project, be it one of our clinical trainings or the executive summary from the state needs assessment that's about to go out, we want to make sure that we're getting input from our lived experience advisors on what is beneficial, what is helpful, what is life saving, and what is needed in the field, to make sure that their voices are at every table.
I'll make one more comment, and then I think we may have some questions from the group. I just wanted to commend -- I was excited to hear from Dr. Gregory Brown on the emphasis now on quality control of Safety Planning. I've long been a fan of safety plans -- the Stanley Brown Safety Plan is very effective. Years ago I ran a crisis center, and we used that Safety Plan a lot with our callers, but again, it is that quality, and what we don't measure we can't improve or change.
And so I commend those efforts -- that of a possible 18, it has to have a 14 to pass -- because I think quality standards matter. Most people with graduate degrees, we're overachievers. We like to do well. We like to get A's, and creating measurable, tangible A-pluses where you know objectively you did a good job -- I see that as a step in the right direction. Because I think sometimes we're like, well, you know, they're really trying, and yes, I commend everyone trying. But we now have evidence-based models, so we have to be doing well. I can try to sing every day of the year; I will never be a good singer. There are things that people just are not that good at, and they need to strive for excellence, or maybe they are not the one doing the Safety Plan -- but holding that accountability in the system matters, because the veterans of the VA system, and really anyone at risk, deserve someone who can deliver that model effectively. So thank you for creating measurement units for that.
Any other questions for us? I have a feeling that Dr. Lezine and I have not answered all of the lived experience perspective questions, but I do think and I hope that everyone who's attended, is feeling compelled and, like, what’s next? What are we going to do moving forward to advance suicide prevention care in our clinical practices? Dr. Pisani?
ANTHONY PISANI: Hi. Thanks for your comments. I have a question that's actually very in-the-moment for me right now. I'm working with the VA in Australia, actually, on a review about collaboration between clinicians and peer workers and what their roles might be, should be, could be in relation to one another when it comes to suicide risk. As you know, there are a lot of questions. For the person with lived experience who's engaging with somebody, is it part of their role to try to identify risk? Do peer workers do safety plans or not, and how is the training different? Greg and Barbara are addressing some of that in their [intervention].
But I'm just curious about your perspective about how -- you know, what would be some ideal ways -- where do the roles overlap and how are they distinct specifically with respect to suicide risk?
DEQUINCY LEZINE: Well, I think from my perspective, any time that I was working with a client within a clinical context or working with a research question, you kind of examine what the question is and then pull relevant parts from your experience, whether it's from your clinical experience, your clinical training, or from your life experience and life training. So I think when you are going into a situation and you have a group, it's going to kind of depend on the situation that you're approaching, in terms of who's going to play which roles and who might interact with a person first. There are going to be definite times, I think, where somebody who has lived experience will make that initial interaction faster, because it's often easier to establish rapport just by having that type of connection, and then you can build from there. There are going to be times where you might need to do some kind of more medical emergency type of care first -- basic life support or advanced life support -- and then work with the person so that they are at a point where they are able to really have a discussion with somebody who has lived experience, to kind of work through their topics at that point in time.
I think there's a lot of potential ways that you can go and utilize all of the types of experience and expertise that are being brought to bear with situations. I think that probably there are folks who have mobile crisis outreach teams, who have somebody who's a peer specialist engaged, who probably have a lot of, you know, in-the-field, on-the-ground experience with navigating who is going to interact and what roles people are going to have when somebody is dealing with a suicidal crisis inside the community. Shelby?
SHELBY ROWE: So I think it's important to kind of clarify, and I think it will vary from organization to organization. I'm just thinking about the community health systems that I know of and am familiar with, on the level of training between the staff and the peer recovery support specialists, and not exploiting the peer recovery support specialists -- making sure that's a clear line of expectation. I would see it being a clinician who could do that initial assessment and the treatment plan, and then the peer recovery support specialist being, you know, encouraging, reinforcing, checking up, and seeing if they need help along the way -- being that peer.
But when it comes to changing any clinical treatment methods or things like that, that that would need to fall back in the lap of that treatment professional, and so making sure there's those clear guidelines. I've heard amazingly empowering stories, and I've heard others where peer support staff feel very exploited. So making sure that there's that good balance. And again, I think it's that good communication with the team and working together as a team to keep everyone as safe as possible and really healthy and responding effectively with treatment so we're improving the quality of lives.
ANTHONY PISANI: Thank you.
STEPHEN O'CONNOR: I have a question. First, I wanted to bring to everyone's attention that we have a new funding opportunity announcement that's focused on enhancing interpersonally-focused strategies for suicide prevention interventions. It's really explicit in its emphasis on not just stabilization but -- something, Shelby, that you've mentioned both days you've spoken -- enhancement over time. And I guess what I'm hearing from you is that in a lot of the clinical trainings that you've come into contact with, that sense of recovery is really missing. Instead, there's really an emphasis on the severity and the danger, and maybe the way that creates implicit expectancies on the part of those being trained about who it is that I'm actually trying to help here.
So wondering, number one, is that accurate, and number two, what do you think that we as a field could do to really kind of address that in training efforts?
SHELBY ROWE: Yes, I have seen that, so that is an accurate observation, but I am an enthusiastic supporter of care transitions. I look at that as, you know, sort of a model where once that immediate risk is past, you have systematic ways of shifting in a care pathway to connect to that next level, so that it's not going from acute care to no care -- that there are step downs, and that there are things like suicide attempt survivor support groups, like the Didi Hirsch model, where individuals can step down.
I am a proponent of the community of care and the campuses of care of groups like RI International, where you have peer respite centers, a crisis center, and inpatient/outpatient drop-in centers kind of all on the same geographic campus. So if you can get a bus and you're there for inpatient, you can take the same bus back for other levels of care -- but making sure that we're doing things that promote recovery. And I think just because of time constraints and budget constraints, our field focuses so much on the intervention and then making sure, like, oh okay, are they safe. If we don't have those good, strong care transitions, if we don't have good, effective safety planning that's been collaborative, we are just dealing with an acute crisis, and we're not setting people up for that improved quality of life and real recovery. Once we keep them alive, we need to help them rediscover how to live well.
STEPHEN O'CONNOR: Okay. Well, I think that that would bring this fourth and final session of day two of the workshop to a close. Thank you, Shelby and Quincy, again for offering really exceptional feedback and comments each day of the workshop. Really appreciate it.
So what I would like to do is just provide some closing remarks. This has been a great experience to help put this workshop together, and again, I thank all my colleagues and The Bizzell Group for all their support in creating this. And I'd also like to thank all of our presenters and moderators for creating a successful workshop. At NIMH, our mission is to transform the understanding and treatment of mental illness through basic and clinical research, paving the way for prevention, recovery, and cure. We emphasize studies that expand reach and have a deployment focus to have the greatest impact on the field.
I want to acknowledge several funded grants that we have that are currently conducting novel research in the area of advancing training in clinical skills for suicide prevention. Dr. Aaron Norr has an R21 to develop a virtual standardized patient to enhance training in safety planning. Dr. Doyanne Darnell is developing and testing a chatbot training tool for nurses working in trauma centers in her career development award. We heard Dave Atkins speak about that earlier. And, of course, Igor presented his work developing a clinician training program to improve the empathic signature of clinicians. In each of these studies, there's a path forward to larger definitive tests of the effectiveness of these training approaches in reducing risk at the patient level. And each of these important projects includes a rigorous measurement plan of the conceptual framework in order to analyze the degree to which proposed targets account for distal outcomes.
I'm going to channel my colleague, Joel Sherrill, by emphasizing the need to train not only toward prescribed behaviors, but also away from proscribed behaviors. This resonates with information provided earlier on considerations for distinct populations, as described so well by Drs. Michael Lindsey and Dana Prince, as well as the comments offered on day one by Dr. Lezine on the importance of language in describing suicide-related experiences.
We need to position training to support lifelong learning in suicide prevention clinical care. Our field has taken great strides over the past 20 years. As Richard mentioned about what the standard of care looked like 20 years ago, when I worked as a case manager in community mental health between 2001 and 2005, no-harm contracts [no-suicide contracts] were unfortunately the frontline practice for managing acute risk. Here we are in 2021. We have evidence-based treatments. We can train clinicians in those approaches, but there remains a lower than acceptable likelihood that a service user will receive such an evidence-based treatment, and even less of an assurance that it will be a quality treatment experience.
Essentially, our healthcare system can help reinforce the likelihood of positive outcomes in suicide prevention by ensuring that it's the greatest quality experience for those who are part of that system. We need to convince people that it's worth taking the risk to share those experiences that you're having related to suicide, and convince them that, yes, we can provide quality, good treatment that matches your preferences, and we can work together on that shared goal.
NIMH has three existing funding opportunity announcements to support training research in the near term, in the form of the Service-Ready Tools for Identification, Prevention, and Treatment of Individuals at Risk for Suicide RFAs. There are options for studies that definitively test the effectiveness of tools and technologies, for pilot studies seeking to adapt or refine and test preliminary effectiveness, and for small business innovation research opportunities as well. So please contact us at NIMH if you'd like to discuss these options and your research concept in greater detail.
I also want to mention that I appreciate the focus in our workshop on creating a clinical environment that provides support and fosters confidence in clinicians who are on the front lines every day working to help those seriously considering ending their life. Part of that came through in the slides on the importance of training in general skills related to suicide prevention. We need to watch out for pushing clinicians into clinical situations in which they're not competent, to avoid untoward effects. That can be a traumatizing experience for clinicians and, in turn, unfortunately reinforces the desire to avoid working with suicidal clients.
So those are my final comments. We're ending 15 minutes early. I'm very happy to give you 15 minutes back. And again, I want to thank everyone for attending these workshops as an audience member, and wish you all the very best. As a reminder, we will be archiving these workshops on our NIMH website in the future. Thank you very much.