
 Archived Content

The National Institute of Mental Health archives materials that are over 4 years old and no longer being updated. The content on this page is provided for historical reference purposes only and may not reflect current knowledge or information.

Identifying Research Priorities for Risk Algorithm Applications in Healthcare Settings to Improve Suicide Prevention

Date

June 5–6, 2019

The National Institute of Mental Health (NIMH) convened a two-day meeting with researchers and suicide prevention advocates to identify and prioritize research needs in the application of predictive modeling to suicide prevention in healthcare settings. Participants discussed best practices for developing, implementing, and evaluating these predictive tools in clinical and research settings.

In 2017, more than 47,000 people in the U.S. died by suicide, and the age-adjusted suicide rate was 14.0 per 100,000 standard population – a 33% increase from the rate in 1999. Identifying individuals at elevated risk for suicide is a key step in a comprehensive approach to suicide prevention. A study of eight health care systems found that about a third of suicide decedents had some type of health care encounter in the week before death, and up to 90% had one in the year before death. Nearly half (44%) of suicide decedents had an emergency department visit in the year before death. One effective way to improve identification of at-risk individuals is proactive screening for suicide risk among individuals presenting to hospital emergency departments and other relevant healthcare settings. Another approach, which complements traditional risk screening methods, is to develop and use suicide risk algorithms. These algorithms employ population- or panel-based predictive modeling using individuals’ data from electronic health records (EHRs). Research in military/veteran and civilian health systems has demonstrated the feasibility of identifying small groups within a larger population whose predicted near-term suicide risk is 20 or more times the average. Individuals identified in these at-risk groups can then be offered enhanced assessment and personalized treatment.
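
To make this modeling approach concrete, the sketch below (in Python) fits a penalized logistic regression to entirely synthetic data with hypothetical EHR-style predictors, flags the top 1% of predicted risk on a held-out half of the panel, and measures how concentrated observed events are in the flagged group. It is a minimal illustration of the risk-concentration idea described above, not any health system’s actual model; all predictors, effect sizes, and thresholds are invented.

    # Minimal sketch: synthetic data only; not any health system's actual model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 200_000                               # synthetic patient panel
    X = rng.normal(size=(n, 20))              # stand-ins for EHR-derived predictors
    weights = rng.normal(scale=0.5, size=20)  # hypothetical true effect sizes
    y = rng.binomial(1, 1 / (1 + np.exp(-(X @ weights - 7.0))))  # rare outcome

    # Fit on one half of the panel, evaluate on the other half.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]

    # Flag the top 1% of predicted risk and compare observed event rates.
    flagged = scores >= np.quantile(scores, 0.99)
    risk_ratio = y_te[flagged].mean() / y_te.mean()
    print(f"Observed event rate in the flagged group is {risk_ratio:.1f}x the overall rate.")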

Building on this evidence of the feasibility of developing suicide risk algorithms in real-world health systems, and on the early experience of several health systems (particularly the Veterans Health Administration) in incorporating such algorithms into their clinical practice, the meeting considered the following questions:

  • What are the major statistical challenges to developing, validating, and using suicide risk algorithms for clinical practice?
  • What are the major roadblocks regarding provider and patient understanding, acceptance, and use of results from suicide risk algorithms?
  • What are the major ethical concerns regarding the development, validation, and use of suicide risk algorithms for clinical practice?

The first day of the meeting consisted of three sessions with brief presentations from 24 expert panelists (see participant list and meeting agenda) who discussed their research experience related to risk algorithms. Presentations addressed the following topic areas:

  • Biostatistical Challenges
  • Provider Use and Understanding of Algorithms
  • Needs for Clinical Decision Tool Development
  • Patient Understanding of Algorithm Use and Ethical Considerations

On the second day of the meeting, participants addressed questions and discussion topics that arose during the presentation sessions. Participants were asked to offer actionable research recommendations and next steps. Those recommendations are summarized as follows:

Biostatistical Challenges

Meeting participants agreed that the specific clinical purpose of an algorithm should be defined prior to its development. This can inform predictor selection and analytic approaches, and it can also aid in translating results from predictive models to support clinical decision-making. Those who create risk algorithm systems with the intent of improving care need to understand when and how often prediction occurs, where each piece of data comes from, and what intervention resources are or could become available. Meeting experts also agreed that algorithms are better suited to grouping individuals into risk categories than to predicting suicide outcomes at the individual level.
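
As a minimal illustration of that group-level orientation, the sketch below bins synthetic model scores into a small number of risk tiers and reports the observed event rate per tier, rather than treating each patient’s predicted probability as an individual forecast. The scores, outcomes, and tier cut points are all synthetic and arbitrary, chosen only for illustration.

    # Minimal sketch: synthetic scores and outcomes; tier cut points are arbitrary.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 100_000
    scores = rng.beta(1, 200, size=n)  # stand-in predicted probabilities
    events = rng.binomial(1, scores)   # outcomes consistent with the scores

    # Bin patients into risk tiers by score quantile instead of reading each
    # individual probability as a personal forecast.
    tiers = pd.qcut(scores, q=[0.0, 0.5, 0.9, 0.99, 1.0],
                    labels=["low", "moderate", "high", "highest"])
    summary = (pd.DataFrame({"tier": tiers, "event": events})
               .groupby("tier", observed=True)["event"]
               .agg(patients="size", observed_rate="mean"))
    print(summary)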

Recommendations for addressing identified biostatistical challenges include:

  • Explore ways to translate algorithm results for providers, patients, and other healthcare system stakeholders.
  • Identify best practices in the design of the cohorts used to develop risk algorithms, to ensure that the design is suitable for the purpose(s) for which the algorithm will be used.
  • Identify best practices in assessing the performance of suicide risk algorithms, including possible effects on equity in treatment and outcomes; such best practices could streamline the development and evaluation of algorithm applications in healthcare. (Presenters provided examples of tradeoffs involving equity; e.g., increasing equality of outcomes might reduce overall performance. A sketch of one such subgroup-stratified check appears after this list.)
  • Assess the benefits and limitations of risk algorithms that consider multiple or composite outcomes, such as risk for non-fatal as well as fatal suicide events, fatal and non-fatal accidents, and others. Such approaches may have clinical benefits (e.g., individuals with elevated risk for suicide may also have elevated risk for accidental overdose or other accident mortality), and/or logistical benefits (e.g., organizations may find it burdensome to develop, maintain and act on multiple separate algorithms). Suicide risk algorithms to date have mainly considered single outcomes (e.g., suicide death or suicide attempt).
  • Assess the benefits and limitations of developing and operating multiple risk algorithms for specific subgroups within a larger population/cohort (e.g., those with certain characteristics or experiences, or in certain geographic or care settings), versus one algorithm that is used with the entire population/cohort.
  • Develop criteria to determine when it might be appropriate to take an algorithm developed in one setting or population/cohort and apply it in a different setting or population/cohort (e.g., if health system B uses an algorithm developed and validated by health system A), versus developing and validating algorithms for each specific setting or population/cohort that seeks to use this approach.
  • Explore whether risk algorithms preserve or, worse, expand disparities in treatment and outcomes; if so, develop methods to mitigate this.
  • Explore approaches to adequately de-identify data that could be shared.
  • Compare the benefits and challenges of risk modeling over alternative time horizons, e.g., outcomes within 1 vs. 3 vs. 12 months. For example, if long intervals elapse between algorithm calculations, at what point do the predictions become ‘stale’ and less clinically useful?
  • Examine the relationships between absolute (predicted) risk, on the one hand, and how identified patients may respond to intervention, on the other. In particular, some experts hypothesized that individuals in the highest tier(s) of predicted risk for suicide might be less likely to benefit from intervention than individuals in the next lower risk tier(s).
  • Consider patient responses to targeted clinical interventions in the development and use of risk algorithms (e.g., will patients engage with the interventions offered, and what are the rates of response to the intervention?).
  • Assess the potential added value of drawing on various types of data in developing risk algorithms, e.g., structured measures from health care claims/encounters and electronic health records, patient-reported measures, measures derived via abstraction or natural language processing from patient care records, and measures derived from passive sensor monitoring (e.g., via smartphone), among others.
  • Assess the potential added value of drawing on data on individuals’ characteristics and experiences from sources outside healthcare, e.g., social media, public records, and commercially available data such as credit information, among others.
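
Several of the recommendations above concern algorithm performance and equity across subgroups. The sketch below illustrates, with a hypothetical subgroup label and synthetic data, what such a subgroup-stratified check might look like: comparing discrimination (AUC), sensitivity, and positive predictive value at one shared flag threshold, since a model with good overall performance can still perform unevenly across subpopulations.

    # Minimal sketch: hypothetical subgroups A and B, synthetic scores and outcomes.
    import numpy as np
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 100_000
    group = rng.choice(["A", "B"], size=n)                 # hypothetical subgroup label
    scores = rng.beta(1, 150, size=n)                      # stand-in model scores
    p_true = np.where(group == "A", scores, 0.6 * scores)  # scores miscalibrated for B
    events = rng.binomial(1, p_true)
    flagged = scores >= np.quantile(scores, 0.99)          # one shared flag threshold

    rows = []
    for g in ["A", "B"]:
        m = group == g
        rows.append({
            "group": g,
            "auc": roc_auc_score(events[m], scores[m]),
            "sensitivity": flagged[m & (events == 1)].mean(),  # share of true events flagged
            "ppv": events[m & flagged].mean(),                 # event rate among the flagged
        })
    print(pd.DataFrame(rows))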

Provider Understanding, Acceptance and Use of Results from Suicide Risk Algorithms

There was agreement that clinicians play a critical role in defining useful outcomes, providing data, and specifying how prediction models should be used in medical decision-making. Most participants felt that it is important to involve healthcare professionals at every stage of clinical decision support development. Engaging providers in this process not only increases their trust in predictive models; it also lets them contribute valuable information about real-world implementation and about unintended consequences that predictive modeling can have in clinical settings.

Because many providers are unfamiliar with predictive analytics for suicide prevention, clinicians may need additional education and support to understand how the algorithms work and to explain risk identification to their patients. To that end, predictive tools should be fully transparent, and stakeholders should have access to culturally sensitive education tools. Providers should be empowered to constructively challenge algorithms or recommend improvements. Identified research needs for enhancing provider understanding included the following:

  • Once predictive models are successfully implemented across systems, there is a need to determine how medical providers should best talk about suicide risk with patients, especially patients who are considered at risk based on algorithm results but have not previously been diagnosed with a mental health condition.
  • Clinicians need support and strategies that facilitate their work and reduce workload, so researchers should consider which aspects of risk algorithms might hinder provider efficiency in clinical settings.
  • Provider use of algorithms is likely to vary across settings; what might the implications be for developing clinical decision tools based on different needs?
  • To what degree does provider use of algorithms depend on perceived options for interventions, including the availability of behavioral specialists for referral?
  • Are safeguards needed to prevent providers from moving toward more coercive interventions when patients deny risk in the face of algorithm estimates?
  • Clinicians’ risk assessments can differ from algorithm-based assessments, and there is a need to integrate the two into a more comprehensive understanding of suicide risk.

Patient Understanding and Acceptance of Results from Suicide Risk Algorithms, and Ethical Concerns

There was agreement among the meeting participants that researchers and providers should be vigilant about ethical issues associated with developing and applying risk algorithms, particularly those related to working with people who are at high risk of suicide. Transparency about program goals and risk calculations may increase patient trust and comfort. Preserving patient privacy is necessary when sharing data, particularly if algorithms are tested by groups outside the patients’ healthcare system. Researchers and providers need to ensure that algorithms are applied with careful attention to confidentiality, validity, and decision-making that supports every clinician’s obligation to do no harm. For example, researchers and providers need to consider the ethical implications of alerting other parties, such as families, support systems, insurance carriers, and law enforcement, to a patient’s high-risk status. Developers should aim to embed ethical principles into technical as well as human processes to ensure that predictive modeling is fair, equitable, and unbiased. Researchers and providers should also be prepared to address patients’ fears that being flagged by a predictive model for suicide risk could affect their jobs, relationships, and self-concept.

Participants raised a number of ethical research issues whose resolution would improve the application of risk algorithms to suicide prevention. These include:

  • Research is needed to assess stakeholder priorities and perspectives on these issues, which in turn can inform risk algorithm applications. For example, to what degree do patients see algorithm use as ‘intrusive’?
  • Research is needed that tests approaches to educating patients about risk categories and how clinical care and personal actions may change risk status.
  • There is a need to develop a body of standards to help institutions determine when adopting another stakeholder’s algorithm represents better practice, and how that decision is communicated to patients.

Other Issues in the Final Discussion

  • Participants discussed many additional data points that, although seemingly distal, could potentially enhance algorithm performance in predicting suicide risk. These included information on the physical environment, such as rurality, altitude, and extreme weather.
  • Although predictive tools could be updated to include new interventions and treatments, clinicians should avoid using predictive models to make causal inferences about the direct effects of interventions on patient risk.
  • It may be helpful for providers and patients to know that they have access to the input factors (e.g., self-reported symptoms, test results, billed services) that feed into the calculation of the various risk levels.
  • Many participants felt that predictive risk models are best used as inputs to clinical decision-making. Translating algorithms into action, a key goal, requires expertise in clinical care, statistical design, healthcare delivery, and qualitative research.

Finally, clinicians need to understand that individuals identified as being at higher risk for suicide may also be at elevated risk for other adverse outcomes and events (i.e., multiple morbidities and mortality), and these individuals may require additional treatments and interventions tailored to their particular needs.

Additional Event Information

Agenda

Participant List