Harmonizing Clinical Data Collection in Community-Based Treatment Programs for First-Episode Psychosis

Date/Time: September 7-8, 2017

Location: Rockville, MD

Sponsored by:
National Institute of Mental Health

Purpose

NIMH plans to develop an Early Psychosis Intervention Network (EPINET) among Coordinated Specialty Care (CSC) programs for first-episode psychosis (FEP) based on the principles of a “learning healthcare system,” to include standard clinical measures, uniform data collection methods, integration of data across service users and settings, and rapid analysis and reporting of pooled data to clinicians, patients, and scientists.

The purpose of this meeting was to understand stakeholders’ perspectives on opportunities and barriers to adopting a harmonized approach to clinical assessment and data collection across CSC programs, as envisioned in EPINET, as well as to learn how key aspects of CSC are being measured in community-based CSC programs. A principal goal of the meeting was to define a core battery of important CSC variables and measures that could be feasibly implemented across CSC clinics. In addition, participants were asked to consider opportunities for efficiently implementing standardized CSC data collection that could support a multi-state learning healthcare system for early psychosis.

Participants

Meeting participants were:

  • Representatives of states with multiple CSC programs (i.e., ≥5), including California, Illinois, New York, Ohio, Oregon, Texas, and Virginia;
  • Thought leaders from other states where academic research programs collaborate with CSC clinics on CSC measurement issues, including Arizona, Connecticut, Massachusetts, Maryland, Michigan, and Pennsylvania;
  • Federal agencies invested in the implementation of CSC, including the Substance Abuse and Mental Health Services Administration (SAMHSA), the Centers for Medicare and Medicaid Services (CMS), and the Office of the Assistant Secretary for Planning and Evaluation (ASPE); and
  • Private research organizations active in this space.

See full Participant List.

Meeting Structure

Drs. Robert Heinssen and Susan Azrin from the NIMH Division of Services and Intervention Research (DSIR) co-chaired the meeting, which took place on September 7-8, 2017 in Bethesda, MD. While the first day focused on identifying which treatment outcomes and processes should be measured in CSC programs, the second day focused on how these outcomes and processes should be measured. Both topics were addressed throughout the meeting as they are closely related.

The meeting format consisted of the following:

  • Presentations on NIMH’s EPINET initiative; the natural course, treatment, and outcomes of FEP; standardized measures of FEP clinical assessment; a representative model for a CSC learning healthcare system; and state efforts to harmonize clinical data collection in community-based CSC programs. Presentations are summarized in the next section.
  • “Think Tanks” that guided small-group brainstorming sessions.
  • Large-group discussion following all presentations and Think Tanks.

See full Meeting Agenda.

Presentations

Day 1

Overview and Purpose of the Meeting

Robert Heinssen, Ph.D., Director, NIMH Division of Services and Intervention Research (DSIR)

Dr. Heinssen described the current early psychosis ecosystem in the United States, which includes both academic research clinics and a large and growing number of community-based programs that offer CSC services. He introduced the EPINET concept, which envisions a learning healthcare system grounded in the principles of science-based interventions, easy and timely access to services, and a culture of collaborative, person-centered care. EPINET goals, tasks, and timelines were presented. Dr. Heinssen highlighted issues to be explored in the meeting, including the key domains of FEP course and treatment, standard measures that could be used in FEP clinical assessment, and challenges to implementing standard measures in community clinics.

Key Domains of FEP Course and Treatment

Larry Seidman, Ph.D., Harvard Medical School

Presented by Tara Niendam, Ph.D., University of California Davis

Dr. Niendam presented Dr. Seidman’s review and summary of research evidence and clinical concepts related to the natural history of FEP. These concepts include characteristic clinical features, predictors of illness course, adverse events (e.g., medical/substance use comorbidities, incarceration, suicidal behavior), and key clinical and functional outcomes (e.g., symptomatic remission/relapse; social, educational, and vocational adjustment; quality of life). This presentation grounded subsequent discussions of FEP domains to be addressed in the envisioned FEP learning healthcare system. The Appendix presents information from two slides that summarize Dr. Seidman’s conceptualization of key features of FEP and important aspects of FEP course and recovery.

Dr. Niendam, in the role of discussant, applied these concepts to her discussion of how UC Davis CSC programs are implementing outcomes evaluation within clinical practice, as well as findings from her preliminary evaluation of CSC programs across California. Key points included CSC programs’ need for assessment that creates low program burden and provides immediate feedback, and the importance of measuring outcomes valued by multiple stakeholders, e.g., patients, family members, clinicians, payers, and local, state, and federal mental health authorities.

Standard Measures for FEP Clinical Assessment

Lisa Dixon, M.D., M.P.H., Columbia University Medical Center

Dr. Dixon described the Phenotypes and eXposures (PhenX) consensus process, and how this method was applied to select assessment instruments for the PhenX Early Psychosis Clinical Services Collection. She identified available measures within a variety of key FEP clinical domains as well as gaps in the Collection. Dr. Dixon discussed anticipated difficulties in implementing standard measures in community CSC programs and potential strategies for meeting these challenges. Dr. Dixon explained how standard measures have been introduced in OnTrackNY clinics for FEP, and demonstrated how data from individual service users are combined, analyzed, and reported to OnTrackNY clinicians, team leaders, and administrators in a quality improvement framework.

David Shern, Ph.D., National Association of State Mental Health Program Directors Research Institute (NRI), served as discussant and reflected on the real-world opportunities and challenges regarding standardized assessment in community CSC clinics. Dr. Shern drew upon his interactions with state mental health personnel and CSC providers during the public feedback phase of the PhenX effort, as well as the environmental scan on CSC clinical measures produced by NRI to inform this discussion.

Day 2

A Regional Model for a CSC Learning Healthcare System

Vinod Srihari, M.D. and John Cahill, M.B.B.S., Yale University

Drs. Srihari and Cahill presented the Specialized Treatment in Early Psychosis (STEP) program, a regional model for a national learning healthcare network. Dr. Srihari described how STEP applies a population health framework to engage a broad network of local stakeholders to improve pathways to and through FEP care in the greater New Haven area. They explained how they developed a core set of patient-centered outcomes to audit and improve performance in STEP, be accountable to local stakeholders, and support translational research. Dr. Cahill then presented the informatics platform developed for STEP, which is designed to support dissemination, utilization and refinement of the population-based approach.

Erik Messamore, M.D., Ph.D., Northeast Ohio Medical University, in the role of discussant, described how Ohio’s FIRST CSC programs have improved their data reporting processes by reviewing data elements and assessing each one for accuracy/reliability of reporting, utility to individual agencies (and to the BeST Center), ease of data reporting/acquisition, and sensitivity to change. They ultimately chose to focus on high value variables that are relatively easy to report.

Think Tanks

Think Tank 1: Elements of a Core FEP Clinical Assessment Battery

Think Tank 1 asked participants to consider what treatment processes and outcomes should be measured in CSC programs and included in a core FEP clinical assessment battery. Participants discussed the following:

  • Extent to which measures in the PhenX Early Psychosis Clinical Services Collection are practical for CSC programs and which measures should be included in a core CSC clinical assessment battery.
  • Alternative measures that meeting participants may be using to assess domains covered by the PhenX measures.
  • Key domains missing from the PhenX measures and how CSC programs are collecting data in these areas.
  • Whether working groups should be formed to address measurement gaps.
  • What elements are most critical to measure in a core CSC assessment battery.
  • Amount of patient and clinician time needed to ensure a CSC battery is completed and what level of burden is reasonable.
  • Frequency of assessments necessary to track client progress.

Think Tank 2: Implementing Standard Measures in CSC Programs: Practical Issues

In Think Tank 2, participants explored challenges and opportunities in implementing standardized data collection in CSC programs. Participants discussed the following:

  • How assessment data would be used by different stakeholders and for different purposes.
  • Which staff can administer standard measures and the training they would need.
  • The role of client self-report versus clinician-administered measures.
  • Practical constraints to conducting standardized assessments in community CSC programs and how they might be overcome.
  • Feasibility of alternative assessment approaches, such as remote centralized assessment or specialized assessors to perform data collection across multiple CSC programs.

Key Themes from Think Tank and Large Group Discussions

Elements of a Core FEP Clinical Assessment Battery

Much discussion focused on which elements are essential to measure in community-based CSC programs. Participants voiced the following as critical to guiding the selection of key elements in a core FEP clinical assessment battery:

  1. Most participants thought that obtaining an accurate diagnosis is very important, though challenging, given the heterogeneous nature of schizophrenia spectrum disorders and the fact that diagnostic tools are not user-friendly in community programs. However, given the diagnostic ambiguity present early in the course of illness, some questioned the reliability and usefulness of early diagnoses and suggested that accurately identifying psychosis, and determining that it is not due to substance use or a medical condition, might be more informative.
  2. Participants varied as to the relative importance of assessing symptoms, with many participants prioritizing measurement of functioning and outcomes valued by clients and families, e.g., independent living, quality of life, and work and school participation, over symptom measurement.
  3. Some symptom measures could be obtained from patients in real time via smartphone apps, reducing clinician burden and providing feedback rapidly to clinicians and patients.
  4. Assessments should be culturally relevant for diverse populations.
  5. Participants endorsed using portions of standard assessment instruments rather than the full assessment, when appropriate, to reduce assessment battery length without losing critical information. For example, specific modules of the Structured Clinical Interview for DSM (SCID) could be used rather than the entire diagnostic interview.
  6. Publicly available Patient-Reported Outcomes Measurement Information System (PROMIS) measures should be considered.

Gaps in Available Measures for Community-Based CSC Programs

  1. Participants identified a critical need for a duration of untreated psychosis (DUP) measure suitable for use in community CSC programs. Programs find it particularly challenging to identify psychotic symptom onset (i.e., the starting point of the DUP). There is also disagreement in the field as to what constitutes initiation of adequate treatment for psychosis, which defines the endpoint of the DUP period. Programs have been working around this by categorizing DUP length, e.g., less than 1 year, rather than obtaining a precise measurement, e.g., actual number of days.
  2. Some suggested that assessing duration of illness (DUI) might be more feasible in community clinics, as it only requires determining the date of onset of psychotic symptoms, and that DUI could serve as a proxy for more burdensome DUP measures. In addition, because a key goal of CSC programs is to provide care as early as possible, DUI may be a more direct measure of the program’s success in providing care early in the course of illness. However, others thought measuring DUI could potentially be as burdensome as measuring DUP.
  3. Accurate data on hospital episodes, crisis services, and school and work participation are needed.
  4. Some participants endorsed the ICD-10 Z-codes for Social Determinants of Health for inclusion in a core FEP battery, but noted the need to develop probes for standardized administration.
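
The categorical DUP workaround described in point 1 can be sketched in code. This is a minimal illustration only: the function names and category cut points are assumptions for this sketch, not a field standard, and both DUP endpoints remain contested in the field.

```python
from datetime import date

def dup_days(psychosis_onset: date, adequate_treatment_start: date) -> int:
    """Duration of untreated psychosis (DUP) in days.

    Both endpoints are contested: psychosis onset is hard to pinpoint
    retrospectively, and 'adequate treatment' (the endpoint) lacks a
    consensus definition in the field.
    """
    return (adequate_treatment_start - psychosis_onset).days

def dup_category(days: int) -> str:
    """Coarse DUP bins, mirroring how programs often report categories
    (e.g., 'less than 1 year') rather than exact day counts.
    These cut points are illustrative, not a standard."""
    if days < 90:
        return "<3 months"
    if days < 365:
        return "3-12 months"
    return ">=1 year"

# Example: onset May 1, 2016; adequate treatment began Feb 1, 2017.
print(dup_category(dup_days(date(2016, 5, 1), date(2017, 2, 1))))  # → 3-12 months
```

A categorical scheme like this trades precision for feasibility: a clinician who cannot date onset to the day can still usually place it within a coarse bin.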

Recommended Assessment Elements

Overall, participants identified the elements in Table 1 as important to consider collecting in a standardized assessment battery for community-based CSC programs, noting that different elements may be more valued by certain stakeholders and prioritization will be necessary. (“What you value should inform what you measure.”)

Table 1. Elements to Consider in a Standardized Assessment Battery for Community-Based CSC Programs

Clinical

  1. Demographics
  2. Diagnosis
  3. DUP
  4. Employment and education
  5. School outcomes (e.g., passing to next grade)
  6. Family engagement
  7. Family burden
  8. Social and role functioning
  9. Quality of Life (QOL)
  10. Well-being
  11. Symptom severity
  12. Insight
  13. Level of independence/disability
  14. Recovery
  15. Alcohol, substance and tobacco use
  16. Incarceration
  17. Independent living
  18. Housing/homelessness
  19. Physical activity
  20. Physical health, including BMI
  21. Suicidality
  22. Violence risk
  23. Side effects
  24. Treatment adherence
  25. Cognitive functioning

Program

  1. Outreach
  2. Referral tracking
  3. Eligibility
  4. Team functioning
  5. Post-hospitalization follow-up
  6. Discharge reason
  7. Penetration rate (unmet need)
  8. Characterization of treatment/service elements
  9. Fidelity to CSC model
  10. Satisfaction with services
  11. Retention

Administrative

  1. Wait time for CSC services
  2. Service utilization within CSC program
  3. Social determinants of health (ICD-10 Z-codes)
  4. Service utilization outside of CSC program (including inpatient, outpatient, hospitalization, crisis, primary care involvement)
  5. SSI/SSDI recipient
  6. Mortality
  7. Costs

Desired Characteristics of Standardized Measures in Community-Based CSC Programs

Participants repeatedly voiced the following as desirable characteristics for any measure included in the core assessment battery for community-based CSC programs:

  1. Validated
  2. Practical
  3. Feasible
  4. Poses minimum burden to patient and clinician
  5. Has high utility to stakeholder, and preferably to multiple stakeholders
  6. Is critical to measuring CSC program effectiveness

Training and Program/Staff Capacity to Conduct Assessments

  1. Great variation exists across CSC programs and states in assessment capacity, as a function of how CSC programs are configured. For example, if a CSC program is modeled on a measurement-based care treatment approach, then assessments are more easily incorporated.
  2. For many CSC programs, standardized assessments represent a paradigm shift and major structural change for the organization. The addition of standardized assessment may seem a small change, but it occurs within a complex system and hence may not be a simple process.
  3. It’s critical that clinicians view assessments as informing clinical practice. Clinicians may not necessarily see the benefit of some assessments they are asked to complete. Those tasked with administering assessments must see value in collecting this data.
  4. The Plan-Do-Study-Act implementation approach might be useful for implementing new assessments, as the process allows for identifying problems, developing solutions, and continuously improving the processes in a cyclical fashion.
  5. A common problem is that CSC programs invest in training assessors, who then leave before the program realizes a suitable return on the training investment.

Users and Uses of Assessment Data

  1. Different stakeholders often value different measurement domains and types of data. Measurements that address the needs of multiple stakeholders are preferred.
  2. Consumers and families prioritize psychosocial functioning and QOL. Families also prioritize family burden.
  3. Clinicians may use assessments for measurement-based care and to treat to target and improve clinical outcomes. Real-time feedback to clinicians is valued, but requires technology and dedicated evaluation staff. Assessment data should be available to the whole treatment team. Medically-oriented clinicians, such as psychiatrists, may particularly value symptom measures. The Positive and Negative Syndrome Scale (PANSS), the Clinical Global Impression (CGI) scale, and individual SCID modules (but not the entire SCID) were frequently mentioned as valued by clinicians.
  4. Program administrators seek data for quality improvement (QI) purposes.
  5. Mental health authorities and funding decision makers may want to see data that can show return on investment and demonstrate effectiveness. Some thought this use of assessment data is vital; having strong outcomes and data to support cost savings may be critical to obtain grant funding from federal, state, or county agencies and/or private insurance coverage of CSC services.
  6. System level outcomes, such as access to CSC services, reduced emergency room and hospitalization rates, and overall treatment costs, are important to administrators who manage complex systems with multiple levels of care.
  7. Researchers seek to increase understanding of FEP interventions and patient characteristics related to treatment response, and to learn how to improve patient outcomes.
  8. Feedback loops should be built into the assessment and reporting process such that each stakeholder receives data as rapidly as possible and in a format useful to that stakeholder. Feedback loops can take various forms, e.g., paper reporting, live dashboards, and analysts who can interpret service and outcome data for clinicians and administrators. The feedback process can become quite complex when data are used to identify targets for quality improvement and to evaluate the effectiveness of innovative practice solutions.
  9. There should be transparency as to how assessments will be used in the FEP learning healthcare system.

Assessment Methods

  1. It was noted that several important aspects of FEP, e.g., work and school outcomes, housing status, contact with law enforcement, may not require a PhenX-level measure, or a lengthy psychometrically validated assessment tool, but could be assessed via an operational definition of the construct and a set of standard probe questions that could be asked in a uniform way across CSC clinics.
  2. Centralized remote assessment of symptoms and functioning was endorsed by many, especially for obtaining a valid diagnosis. Participants described it as an efficient way to administer the SCID, noting that videoconferencing via WebEx is inexpensive and HIPAA-compliant. Centralized remote assessment also reduces clinician assessment burden, and multiple participants reported that patients seemed to like it.
  3. In contrast, some participants expressed concern regarding the implications of separating assessment functions from on-site clinicians. The effectiveness of services might improve if standardized assessments are integrated into routine care. Some noted that several clinician-based assessments could be collected during CSC team meetings.
  4. Web-based assessment solutions are needed. An assessment portal that talks to the electronic health record (EHR) could be very efficient. However, many different EHR systems are in use and some are proprietary, making it hard to customize to support assessment.
  5. Adaptive assessments administered electronically might reduce the number of items administered. Item Response Theory could support this approach by treating measures as item banks and adaptively selecting items, yielding the smallest possible set of critical items.
  6. Passive data collection, such as with mobile apps, reduces data collection burden on patients. California is doing this, but as part of a research project.
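
The adaptive, item-bank approach in point 5 can be illustrated with a small sketch. This assumes a two-parameter logistic (2PL) IRT model; the item bank, parameter values, and function names below are hypothetical, chosen only to show how items most informative at a client's current ability estimate would be selected.

```python
import math

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability level theta,
    where a is discrimination and b is difficulty."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))  # probability of endorsement
    return a * a * p * (1.0 - p)

def select_items(item_bank, theta: float, k: int):
    """Pick the k items that are most informative at the current
    ability estimate -- the 'smallest cluster of critical items'."""
    ranked = sorted(item_bank,
                    key=lambda it: -item_information(theta, it["a"], it["b"]))
    return [it["name"] for it in ranked[:k]]

# Hypothetical four-item bank; an adaptive test would re-estimate theta
# after each response and re-select from the remaining items.
bank = [
    {"name": "item1", "a": 1.2, "b": -1.0},
    {"name": "item2", "a": 0.8, "b": 0.0},
    {"name": "item3", "a": 1.5, "b": 0.2},
    {"name": "item4", "a": 1.0, "b": 2.0},
]
print(select_items(bank, theta=0.0, k=2))  # → ['item3', 'item1']
```

In a full computerized adaptive test, this selection step would loop: administer the chosen item, update the ability estimate from the response, and repeat until a precision or length criterion is met.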

Assessment Challenges and Solutions

  1. Challenges to standardized data collection in community CSC programs frequently mentioned were lack of time, trained staff, funding for assessments, and staff turnover.
  2. Funds to train assessors and pay for assessments, along with maximizing billing by building assessments into reimbursable services when possible, would partially address these challenges.
  3. Tying assessments to value-based payment might advance standardized assessment.
  4. Using data to demonstrate FEP program value might support an increased reimbursement rate for FEP services and incentivize data collection.
  5. In some instances, assessment is supported by contracts for FEP services that require assessment, as is the case in the states of Illinois, Maryland and New York, and in some California counties.
  6. Leveraging existing data collection systems that are already collecting data required by states, SAMHSA, CMS, and others could support FEP assessment. This might involve technical harmonization of existing data with the CSC program domains we want to assess. For example, National Outcomes Measurement System (NOMS), which is already collected by states, might be modified to meet CSC program assessment needs.
  7. Linking claims data and EHR data to assessment data would be ideal. Since large data collection requirements already exist, the question is how to tie special data collection efforts, such as for FEP programs, into these larger efforts. Staff may need training in FEP services billing procedures to capture these data.
  8. Data that are already being collected, such as psychosocial outcomes documented in a client’s health record, might be harvested and that data collection standardized.
  9. Establishing a culture of assessment, feedback, and systematic quality improvement may represent a significant culture change in many CSC programs, but is essential to support standardized data collection and a learning healthcare system.
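
The technical harmonization of existing data described in point 6 could begin with a simple field-to-domain crosswalk. The sketch below is a minimal illustration: the field names are invented for this example and are not actual NOMS variable names, and the domain labels are drawn from Table 1.

```python
# Hypothetical crosswalk from fields in an existing state reporting
# system (NOMS-style; names are illustrative) to CSC assessment domains.
CROSSWALK = {
    "employment_status": "Employment and education",
    "school_enrollment": "Employment and education",
    "living_situation": "Housing/homelessness",
    "arrests_past_30_days": "Incarceration",
    "psychiatric_hospitalizations": "Service utilization outside of CSC program",
}

def harmonize(record: dict):
    """Group an agency record's fields under CSC domains, setting
    unmapped fields aside for manual review rather than dropping them."""
    mapped, unmapped = {}, {}
    for field, value in record.items():
        domain = CROSSWALK.get(field)
        if domain is None:
            unmapped[field] = value
        else:
            mapped.setdefault(domain, {})[field] = value
    return mapped, unmapped
```

Keeping unmapped fields visible, rather than silently discarding them, supports the kind of element-by-element review (accuracy, utility, ease of reporting) that the Ohio FIRST programs described.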

Assessment Data Quality

  1. Data accuracy and completeness should be high priorities.
  2. Incentivizing accuracy of assessments is important.
  3. Strategies for improving assessment quality are needed.
  4. Centralized rating might be one approach for ensuring data quality.
  5. If assessments are conducted internally, external checks are needed to ensure data quality.
  6. It may be preferable to have individuals not involved in the patient’s care conduct certain assessments (e.g., symptom scales) to avoid reporting and measurement biases. As noted above, centralized remote assessment of symptoms and functioning is relevant to this issue.

Other Assessment Considerations

  1. Clarification was requested regarding the intended scope of the CSC assessment battery, e.g., for EPINET clinics only, for use by all community CSC programs, or something else. Dr. Heinssen clarified that NIMH was trying to develop a data bridge between translational/academic CSC programs and typical CSC programs, for the benefit of each community.
  2. SAMHSA, NIMH, and ASPE are sponsoring an evaluation of community-based CSC treatment programs supported by Mental Health Block Grant set-aside funds. This evaluation, being conducted now by Westat, can inform standardized FEP clinical data collection more broadly.
  3. Many participants expressed concern that an extensive assessment battery at baseline might reduce patient engagement, and engagement should not be compromised by the assessment battery.
  4. Participants agreed that it’s important to assess CSC program outcomes, but equally important to measure what’s being delivered, such as all CSC service contacts. Assessment of service contacts can be complicated when commercial and Medicaid billing requirements are taken into account.
  5. The timeframe for outcomes should be carefully considered, e.g., short-term versus long-term. Outcome measures must be sensitive to change over time (e.g., at 18-24 months).
  6. Youth Board involvement could promote client participation in assessments.
  7. Careful consideration should be given to the establishment of benchmarks, including who establishes them and their underlying rationale.
  8. Consider avoiding the term “program evaluation,” which may connote a judgment of the program.

Recommendations

  1. Leverage the enthusiasm for creating a learning healthcare system to support standardized assessment.
  2. Consider starting with a relatively small number of highly reliable measures. Start small and build successes before expanding assessment further.
  3. Features to support data accuracy and completeness should be built into assessment training and infrastructure.
  4. Securing clinician buy-in for standardized assessment and minimizing clinician burden are essential. Strategies need to be developed for increasing the value of data to clinicians, such as assessments generating notes that serve clinicians’ needs.
  5. Build feedback loops into all assessment activities so that stakeholders receive timely information.
  6. Assessments may not happen without the necessary resources for collecting this data, so identifying the required support and how to most effectively mobilize those resources will be critical.
  7. Identify where measures may be highly correlated to potentially reduce the number of assessments with minimal information loss.
  8. To ensure commonality among FEP populations served and CSC program components that are delivered, prioritize harmonization of diagnostic evaluations and CSC fidelity across sites.
  9. Establish a common operational definition of DUP start and end points and a standardized method of assessing DUP.
  10. Include state departments of mental health in this discussion to learn about what CSC data collection is mandated, the quality of the mandated CSC data submitted, and how they are using these submitted data.

Appendix

Summary of Key FEP Features and Important Aspects of FEP Course and Recovery (from Larry Seidman’s slides)