

 Archived Content

The National Institute of Mental Health archives materials that are over 4 years old and no longer being updated. The content on this page is provided for historical reference purposes only and may not reflect current knowledge or information.

NIMH Director’s Innovation Speaker Series: Decision-Making and Computational Psychiatry


JOSHUA GORDON: We're going to get started in just a moment. We're letting everybody into the virtual seminar room, and as soon as that number stops ticking upwards, we'll get started with the introductions.


JOSHUA GORDON: Okay. The number has stopped ticking up so quickly, so I'm going to go ahead and get us started. Welcome, everyone. I'm Joshua Gordon. I'm the director of the National Institute of Mental Health, and it's my pleasure to welcome you to this edition of the Innovation Speaker Series for 2020 to 2021. I'm going to be introducing the speaker, Dr. Martin Paulus, in just a moment, and I'm very much looking forward to his talk. But before I do that, I just want to remind everyone that the Q&A function should be used to ask questions of the speaker. You can enter them at any point during the talk, so if a question pops into your head, just throw it right down there, and we'll get to the questions at the end. Also, for everyone's knowledge, this is being recorded. So when Dr. Paulus wows you and you wish your friends and relatives could've seen or heard it, stay tuned to the announcements for when it's available on the web for future consumption. With no further ado, it's my pleasure to welcome Dr. Martin Paulus. He is the scientific director and president of the Laureate Institute for Brain Research, which is in Tulsa, Oklahoma. He's also the deputy editor of JAMA Psychiatry. At the Laureate Institute, Dr. Paulus focuses on using neuroscience approaches to develop better assessments for diagnosis and prognosis of mental health problems and to develop novel interventions that are based upon an increased understanding of the underlying neuroscience. He has published lots of papers, over 300 of them. He's been funded continuously with grants from our institute and others, and he's currently principal investigator on an NIGMS COBRE grant to develop an infrastructure for young investigators to establish their research careers with NIH-competitive funding. He's also a member of the ABCD study, a really important member, as the Laureate Institute is really one of the more productive sites in that regard. 
More recently, the Laureate Institute also has started a longitudinal study to examine the question of how those with anxiety and depression problems respond to the challenge of COVID-19, so just all over the map.

JOSHUA GORDON: But I know Martin for two other reasons. One is that he's a real pioneer in the use of computational approaches to psychiatry, and he has really been instrumental in me developing an understanding of, particularly on the clinical end of things, how computational approaches can be used to increase our knowledge of and develop novel treatments for psychiatry. And so if you're sick of hearing about computational psychiatry from me, you have Martin, among others, to blame for that. The second reason is because he's an avid cyclist, like myself. In fact, Martin is one of the few psychiatrists that I know engaged in research and funded by the NIMH other than myself who has actually biked across the country, details of which we have shared over many a cycling event. When we get together for meetings, etc., we like to try to take an afternoon or so and do a ride together. So for those reasons and more, Dr. Martin Paulus is one of my favorites in psychiatry, on the meeting circuit, and to listen to, and I'm pleased to be able to give you all the opportunity to hear from him today. Martin, take it away.

MARTIN PAULUS: Well, thank you, Josh. That was fantastic, and really, I feel very excited about the introduction and all the nice things you've said. And I'm also very excited for everybody who is online. I hope everybody is healthy in these very strange times. This will be an online presentation, so I rely on you putting Q&As into the Q&A box, and I will try to get to them during the talk. But if I can't, because maybe I'm a little tight on time, I will certainly try to get to most questions at the end of the talk. So let me get right into it. So I'm going to tell you today a little bit about the decision-making aspect of computational psychiatry, both from an explanatory and a pragmatic perspective. And I will make it very clear what I mean by those two perspectives. But before we get to that, I want to just basically tell you that I get royalties from UpToDate for writing about methamphetamine use disorder, but that won't really be relevant today because I won't be talking about methamphetamine use disorder. And as Josh pointed out, we are one of the ABCD sites, for which we are funded, and the NIGMS funds the COBRE award that I listed here.

MARTIN PAULUS: So before I begin and dive right in, I would like to give you an outline of the presentation today. There are really five parts to the talk. First, I will outline my perspective on computational psychiatry, what the basic approach entails, its opportunities, but also the challenges. And second, I will talk about a particular aspect of decision-making, which has not received a lot of attention but which is increasingly being considered by research groups around the world, and that is aversion-based decision-making, as opposed to reward-based decision-making. This will also give me the opportunity to compare and contrast reinforcement learning approaches, or value-based action selection, with active-inference-based action selection. And then third, I will give you specific examples of different computational approaches to delineate the process dysfunction in anxiety. These approaches are based on reinforcement learning, a motor control framework, and on active inference. And together, basically, these approaches point to a common set of dysfunctions that we can observe in anxious individuals. And fourth, I'll give you an example of a pragmatic approach to computational psychiatry that uses model simulations to help integrate and generate novel assessments. In particular, I'm going to talk about medication adherence and how it can be understood within the context of active inference. And finally, I'll try to summarize it all and leave you with a few take-home points.

MARTIN PAULUS: Okay. So let me go into the background of computational psychiatry. So in a recent viewpoint, I argued that there's been a disconnect between stakeholder demands and research in psychiatry. Computational psychiatry, from my perspective, has the unique opportunity to start with stakeholder demands, develop research questions, and apply relevant models. It's important to consider what the goals of these models will be, and specifically, computational models can serve to build new mechanistic understandings of the disease processes that are based on empirical evidence and not on some heuristic musings of psychiatrists from over 100 years ago. Instead, what computational psychiatry is trying to do is develop process models that are proposed hypotheses of the underlying processes that generate the observed behavior. The goal is to refine these models by direct model comparison and to arrive at a generative model that is both a compact and accurate representation of the pathophysiology of adaptive and nonadaptive behavior. When generating explanatory models, it's important to consider the level of causality we can apply to the experimental approach. In general, many of the studies that I will present-- many of the computational psychiatry studies have been based on case-control studies. The problem with case-control studies is that they can't really arbitrate deep levels of causality. And that's mostly because there are many confounding factors, either observed or unobserved, that can contribute to the model differences but that cannot be differentiated from the disease process itself.

MARTIN PAULUS: So the goal for computational psychiatry should be to create explanatory disease models. But that's not enough. It's important to keep in mind that the outcome measures and the models need to provide actionable information, and that information eventually needs to be delivered to stakeholders with measurable impact. And toward the end of my talk, I will give an example of a pragmatic approach to computational psychiatry. So it's important to emphasize that explanation is a stakeholder demand. Patients, providers, and families want to understand why specific disorders emerge, what makes them wax and wane, and how specific interventions help to improve the disorders or might even lead to cures. In many ways, the current explanatory framework in psychiatry is still based on relatively simple receptor pharmacology that dates back to the 1960s and '70s. We're still essentially telling patients about chemical imbalances that are supposed to be corrected by our medications. The problem with this approach is that it's based on really limited evidence and that its explanatory depth is relatively shallow. That is, it does not give the patient or the family a deep understanding of how these disorders actually emerge.

MARTIN PAULUS: But there are significant challenges in building explanatory disease models. First, psychiatric disorders are fundamentally mental first-person experiences, which are difficult to translate into objective biological process dysfunctions. Second, psychiatric disorders are etiologically complex, and complex in two ways. First, there's a many-to-one mapping of causes to disease. And second, there's a one-to-many mapping where even simple genetic disorders can have profoundly heterogeneous clinical phenomenology. Third, we have to acknowledge that psychiatric disorders are not likely to be reducible to a single-process dysfunction on any particular level; rather, they are, as Kendler has called them, pluralistic: they involve multiple levels and are multicausal. So it can help to contrast what I would call "the old approach" to understanding the brain with the approach proposed by computational psychiatry. In the old approach, the relationship between behavior and brain was mostly based on correlations or associations; that is, one measures both behavior and neuroimaging and relates the two by correlating task measures with the degree of activation. In comparison, computational psychiatry seeks to build specific process models. These models are fundamental process hypotheses of how the investigator thinks the individual instantiates a particular behavior. Because these models are built within a quantitative framework, they allow one to test among competing models to arrive at the model that most likely accounts for the observed behavior. Thus, decomposing behavior into processes enables one to arrive at a deeper understanding of how patterns of brain activation can be related to observed behavior. Ultimately, the explanatory depth is encoded in the computational model that hypothesizes the relationship.

MARTIN PAULUS: In a recent review with Quentin Huys, Michael Browning, and Michael Frank, we argued that computational psychiatry views psychiatric conditions as dynamical systems, which are essentially the resultant of the complex interactions among multiple levels of analysis, which include, as pointed out here, genes, molecules, cells, circuits, behavior, and symptoms, but also the environment. The important aspect here is that the dynamical system can organize quite distinctly from the levels that generate it and then generate reproducible, stable, and adaptive as well as nonadaptive behavior. So ultimately, the program for computational psychiatry is to quantify the characteristics of this dynamical system in terms of sets of rules and parameters that move the state of the system forward in time. Here's a view of how the system adapts to a changing environment. The basic idea is that behavior is generated by a latent variable model with certain temporal dynamics. Here the hidden variable, termed H, evolves according to some dynamics but is not observed. The observations, termed B, are directly informative about H at some time point, but the extent to which they are informative about future time points depends on the dynamics of H. Learning rates, that is, the parameters that determine how fast the system is changing, should reflect the changeability of the learned associations. The reward expectations of learners with a high versus a low learning rate are shown in the two panels at the bottom here. Whereas a learner with a high learning rate is better able to update his or her expectations following changes in association in a volatile environment, the learner with a low learning rate really never quite catches up to the changes in the environment. 
In comparison, when the environment is very stable, as shown in the bottom-right panel, the learner with the low learning rate accurately estimates the underlying association, while the expectations of the learner with the high learning rate are pulled away from the true value by chance outcomes. The bottom line here is that it's important to recognize that the dynamics of the system as governed by the learning rate may not be problematic per se but can be adaptive or maladaptive depending on the characteristics of the environment, that is, whether the environment is volatile or stable.
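The learning-rate dynamic described above can be sketched with a minimal delta-rule simulation. Everything here, the idealized block-wise reward sequences and the two learning rates, is an illustrative assumption, not a fit to any data:

```python
def rw_estimates(alpha, rewards, v0=0.5):
    """Rescorla-Wagner (delta-rule) reward expectations for a sequence
    of observed outcomes: v <- v + alpha * (r - v)."""
    v, trace = v0, []
    for r in rewards:
        v += alpha * (r - v)
        trace.append(v)
    return trace

# Volatile environment: the contingency reverses every 20 trials
# (idealized here as deterministic blocks of reward / no reward).
volatile = ([1.0] * 20 + [0.0] * 20) * 2

fast = rw_estimates(0.5, volatile)   # high learning rate
slow = rw_estimates(0.05, volatile)  # low learning rate

# After the second block (trials 21-40, all unrewarded), the fast
# learner has tracked the reversal while the slow learner still
# carries a substantial expectation of reward.
print(round(fast[39], 3), round(slow[39], 3))
```

In a stable environment the ranking reverses: the low learning rate averages out chance outcomes, while the high learning rate keeps chasing them, which is exactly the adaptive-versus-maladaptive point being made here.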

MARTIN PAULUS: The majority of computational models are based on behavioral paradigms or tasks that are completed by individuals with different psychiatric conditions. However, there are significant challenges for task-based measures. A recent study by Russ Poldrack’s group showed clearly that questionnaire measures, here indicated in this red cloud, and task-derived measures, here in this blue cloud, cohere amongst each other, but there's very little coherence between tasks and self-report measures. Moreover, a meta-analysis of test-retest reliabilities of task- and questionnaire-derived measures, in blue and yellow here on the right panel, showed that task-based measures typically have lower reliability than questionnaire-based measures. This is particularly important when we consider using these approaches to measure how behavior changes over time, say, as a function of disease state or treatment intervention. In general, as shown in the panel on the lower left, only half of the test-retest variance can be accounted for by differences between individuals, meaning that highly robust group-level effects are accompanied by unreliable individual-level differences. To summarize, there are two major challenges ahead: first, to connect behavior-based computational models to other levels of analysis, in this case, the symptom level, and second, to develop more reliable models that can be used to monitor behavioral changes in an individual over time. I'm hoping to show you some examples of that. This extends also to the neuroimaging level. We've recently extended this by doing a meta-analysis that focused on self-report and imaging findings. As can be seen in these bottom three panels, it turns out that correlations between each level are really overestimated for small sample sizes; they only really stabilize with several hundred individuals available to compute these correlations, and typically, they stabilize at a relatively low level. We estimate that, in this meta-analysis, the imaging data could only explain about 4% of the variance in the symptom data. So that shows that we still really have a long way to go.

MARTIN PAULUS: So there are several challenges ahead for computational psychiatry. First, we need to identify generative process models that quantify the biological process dysfunctions in psychiatric populations. Second, in combination with latent variable approaches, we need to be able to better identify robust and reliable relationships across levels of analysis, from the cellular level, or even below, all the way to the systems level. And then we also need to develop tasks and assessments that can arbitrate between competing computational accounts, so, for example, between value-based versus inference-based decision-making, and eventually provide pragmatic prediction tools. And as you can see from this, we're very, very early in this process. So in a recent review focused on decision-making in psychiatry, I emphasized the importance of considering models that take into account that many individuals are much more sensitive to aversion as a consequence of their behavior than to reward and that computational models of aversion-based decision-making are just beginning to emerge. So let me just briefly summarize what aversion-based decision-making is. For example, an individual who chooses not to attend a party or social gathering to reduce social anxiety is engaging in aversion-based decision-making, and it falls broadly under the rubric of avoidance learning. So you have, for example, active avoidance, which is when one selects an action to prevent the occurrence of an internal or external stimulus that would be followed by an aversive event; passive avoidance, when an individual withholds an action when an aversive stimulus would otherwise occur; an escape response, which is an action taken as a consequence of an ongoing aversive event; and avoidance behavior, which is action selection to forgo exposure to an aversive conditioned stimulus. 
Within the value-based framework, an individual is assumed to be in a certain state that has a certain value, and that value derives from two sources: the value of the stimuli that are associated with that state, which have been termed Pavlovian stimuli, and the actions that have led the individual to find him- or herself in that state, which are called instrumental actions. So a generative model basically describes the transition from one state to another state as a function of instrumental actions and Pavlovian stimuli. And that, in overview, is the value-based framework.
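As a rough illustration of that decomposition (the numbers, the weighting, and the no-go/go framing are assumptions made for this sketch, not values from any study), the net value of an action can be written as an instrumental component plus a Pavlovian bias contributed by the state's stimuli:

```python
# Illustrative sketch of the value-based decomposition: the net value
# of an action in a state combines an instrumental component (learned
# from the consequences of past actions) with a Pavlovian component
# (contributed by stimuli associated with the state).
# All numbers here are made up for illustration.

def net_action_values(q_instrumental, pavlovian_value, pavlovian_weight):
    """Pavlovian state value biases the 'go'-like action up (appetitive
    stimuli) or down (aversive stimuli), independent of what was learned
    instrumentally. Actions are ordered [no-go, go]."""
    go_flags = [0.0, 1.0]
    return [q + pavlovian_weight * pavlovian_value * go
            for q, go in zip(q_instrumental, go_flags)]

q = [0.2, 0.6]                           # instrumental values: no-go, go
vals = net_action_values(q, -0.8, 0.6)   # aversive Pavlovian state value
best = max(range(len(vals)), key=lambda a: vals[a])

# Instrumentally, "go" is better (0.6 > 0.2), but the aversive Pavlovian
# value suppresses it enough that "no-go" wins: a Pavlovian bias
# overriding instrumental learning.
print(vals, best)
```

This is the kind of conflict that makes aversion-based choices, like skipping the party, look irrational from a purely instrumental point of view.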

MARTIN PAULUS: In an active inference framework, the decision-making situation is characterized as an inference problem. That is, the individual is trying to bring observations into agreement with the true state of the world by selecting actions that result in a homeostatic adaptation. The key difference between the value-based framework and an active inference framework is that actions are the result of two possibly competing objectives. The first is to select actions that are consistent with observations that are preferred, and the second is to select actions that provide more information about the true state of the world. These processes are often framed within what's called a Bayesian partially observable Markov decision process, which means that an individual has some prior expectation about the state of the world; that there are matrices that map internal states to observations, as well as transition matrices that map past states to future states; and that there are preferences for observations, and action policies that influence the transition from one state to the next. And I will give examples of that in some of the upcoming slides. For example, for a socially anxious individual, the internal state of being affiliated or alone, which is not observable, is associated with observable cues, such as talking with someone. The individual has action policies, consisting of either approach or avoidance, to change that internal state over time. Underlying the action policy is the free energy principle, which minimizes the difference between what's expected and what's observed and the amount of surprise associated with making an observation. As it relates to avoidance- or aversion-based decision-making, a decision policy that aims to minimize the aversive state is most consistent with what we normally call negative reinforcement.
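A toy version of that policy comparison can be written down using a common simplification in which the expected free energy of a policy is risk (divergence of predicted from preferred observations) plus ambiguity (expected observation entropy). The states, observation likelihoods, and preferences below are made-up values for the social example; this is a sketch of the bookkeeping, not the full active inference scheme:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

# Likelihood A[s]: distribution over observations given hidden state s.
# States: 0 = affiliated, 1 = alone. Observations: 0 = pleasant social
# contact, 1 = no contact.
A = [[0.9, 0.1],   # affiliated -> mostly pleasant contact
     [0.1, 0.9]]   # alone -> mostly no contact

C = [0.8, 0.2]     # preferred distribution over observations

def expected_free_energy(q_states):
    """Risk + ambiguity form of expected free energy for a policy whose
    predicted hidden-state distribution is q_states."""
    q_obs = [sum(q_states[s] * A[s][o] for s in range(2)) for o in range(2)]
    risk = kl(q_obs, C)                                   # pragmatic term
    ambiguity = sum(q_states[s] * entropy(A[s]) for s in range(2))
    return risk + ambiguity

g_approach = expected_free_energy([0.9, 0.1])  # approach -> likely affiliated
g_avoid = expected_free_energy([0.1, 0.9])     # avoid -> likely alone

# The policy with the lower expected free energy is selected; here,
# approach, because its predicted observations match the preferences.
print(round(g_approach, 3), round(g_avoid, 3))
```

In a clinical reading, shifting the preference vector C toward "no contact" (or inflating the cost of social observations) would flip the ranking and make avoidance the free-energy-minimizing policy.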

MARTIN PAULUS: Reviewing the literature, as summarized in this paper below, within the value-based framework, anxiety has been associated with altered sensitivity to rewards and punishments, slower updates to aversive prediction errors, overwhelming Pavlovian biases, and altered value reference points. Within the active inference framework, anxiety-related processing dysfunctions have been linked to habitual predictions that are computationally less effortful, excessive response costs, and altered beliefs about state-observation relationships. However, it's important to point out that this field is still in its infancy, and much work still needs to be done. And just to speak to that, there was a recent paper, published just a couple of weeks ago by Ray Dolan's group, that uses an aversive learning framework to show that learning rates differed as a function of cognitive versus somatic anxiety. Importantly, these differences emerged more strongly after recasting anxiety symptoms within a novel latent variable framework. This and other results point to future work that needs to better relate symptomatic assessment with computational process dysfunction to more closely connect the symptom level to behavioral dysfunction.

MARTIN PAULUS: So let me now talk a little bit about some of the computational process dysfunctions we have focused on here at the LIBR. The data presentations that follow are based on a study that we initiated at LIBR about five years ago. The goal of this study was to model it on a previous NIMH R01 grant that focused on latent variables of the positive and negative valence domains, but to scale it up significantly to be able to do rigorous hypothesis testing using both an exploratory and a confirmatory sample. Briefly, we included 1,000 subjects with positive and negative valence domain dysfunction as measured by the PHQ-9 and the OASIS. And these individuals underwent extensive assessment, ranging from genotyping to social determinants of mental health, including a two-hour multimodal neuroimaging session with simultaneous fMRI and EEG, which was supervised by physicist [inaudible]. The aims were to discover latent variables underlying the positive and negative valence domains and interoception that could be used to relate variables across levels of analysis, and to then utilize predictive models to determine the longitudinal trajectory of symptoms and function in these individuals, which could lead to clinically meaningful prediction. We also split the sample into the first and second 500. Whereas the first 500 are used for exploratory data analyses and exploratory hypotheses, the second 500 will only be used for registered reports based on prior hypotheses. And so we're currently in the process of making the first 500 publicly available to other researchers.

MARTIN PAULUS: So the first study I want to talk about was led by Dr. Jonathan Howard at UCSD and used a combined value-based choice model and drift-diffusion model to assess decisional process dysfunction in individuals with high anxiety. So by way of background, surprising events are important sources of internal model updating, which adjusts expectations of how we perceive available options and select among them. Based on previous work, we hypothesized that anxious individuals experience exaggerated surprise to predictable events, which imbues them with undue salience. So, therefore, we applied a hybrid Rescorla-Wagner drift-diffusion model to a change-point-detection task in transdiagnostic groups of individuals with mood and anxiety disorders. So here's our change-point-detection task. I don't want to go too much into detail. It's quite extensive, but it involves multiple-stage decision-making, where an individual has to find the patch that is most often reinforced, and then this patch changes about every 30 trials. And so to model the behavior that an individual expresses during this task, we used the following model approach: The model assumes that expectations regarding the target location, so one of those three spots, influence both the initial location of the choice on a trial and the response and reaction time to the random dot stimulus. So if you're very certain about it, you'll be faster, and you'll be more likely to select the, quote-unquote, "correct" or most reinforced patch. The dynamics of the model, that is, the updating of location expectations based on the true target location on each trial, were modeled with a Rescorla-Wagner model. That is, the degree of surprise, the value observed versus the value expected, drives the update. Subsequently, the expectation influenced either the drift-diffusion bias parameter, the drift-diffusion rate parameter, or both. 
And this approach takes advantage of both the choice as well as the response time of the choice.
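A minimal sketch of such a hybrid model is below. The parameter values, the way expectation is mapped onto drift rate and starting-point bias, and the single fixed target are all assumptions made for illustration; they are not the fitted model from the study:

```python
import math
import random

def rw_update(expectations, target, alpha):
    """Delta-rule update of location expectations: the observed target
    location moves toward 1; the others decay toward 0."""
    return [e + alpha * ((1.0 if i == target else 0.0) - e)
            for i, e in enumerate(expectations)]

def ddm_trial(drift, bias, rng, threshold=1.0, dt=0.01, noise=1.0):
    """One simulated drift-diffusion trial. Evidence starts at `bias`
    and accumulates until it hits the upper (correct) or lower (error)
    boundary; returns (correct, reaction_time)."""
    x, t = bias, 0.0
    while abs(x) < threshold and t < 10.0:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= threshold, t

rng = random.Random(1)
expectations = [1 / 3, 1 / 3, 1 / 3]  # three candidate patches
for trial in range(10):
    target = 0  # the currently reinforced patch
    e = expectations[target]
    # Linking assumption: a stronger expectation for the target raises
    # both the starting-point bias and the drift rate, so more certain
    # subjects respond faster and more accurately.
    correct, rt = ddm_trial(drift=1.0 + 2.0 * e,
                            bias=0.5 * (2.0 * e - 1.0), rng=rng)
    expectations = rw_update(expectations, target, alpha=0.3)

# After repeated reinforcement of patch 0, its expectation dominates.
print([round(e, 3) for e in expectations])
```

Lowering `alpha` in this sketch is the slower-perceptual-updating pattern reported for high-fear individuals below: expectations, and with them the bias and drift, lag behind each change point.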

MARTIN PAULUS: We performed model comparisons of six different models that are listed here. All models were used to predict both the categorical location of the choices and the random dot reaction times. What's important to note here: To determine the relationship between fear and model parameters, we constructed a hierarchical model in which both subject-level learning rates depend on scale, age, gender, and the variable we cared most about, which is the PANAS fear measurement. To determine whether the relationship between model parameters and fear was being driven by general negative affect, we also constructed a model in which we included PANAS negative affect as an individual-level predictor. Finally, to supplement our hypothesis-driven analysis of the relationship between perceptual learning rate and fear, we also assessed the relationship between the model parameters and higher-level affect dimensions, including the other PANAS domains. The model comparison using the widely applicable information criterion, or WAIC, indicated that the bias and drift dual alpha model provided the best fit for the data. Individuals who reported the highest fear scores showed the lowest rate of perceptual updating. In addition, older individuals showed slower perceptual updating but not slower decisional updating. For the decisional learning rate, the median ICC was 0.62, which is actually quite good, and for the perceptual learning rate, the median ICC was 0.8, which is excellent. This, again, tells us that these measures are sufficiently stable that we may be able to use this approach in a longitudinal design. So from this study, we can conclude that anxious and older individuals showed slower updating of the internal model that includes perceptual processing, but not of the model that includes decision-making. The two models employ separate updating processes, with separate learning rates, which are only weakly correlated. 
And taken together, anxious individuals, in this context, have difficulty updating expectations related to perceptual circuits rather than those related to decision-making circuits.

MARTIN PAULUS: We then also conducted a study, again with Jonathan, that focused on motor control aspects. So by way of background, in pursuing goals, we must continuously make adjustments based on errors, that is, the difference between where we are and where we would like to be. The adjustment must be based not only on the current situation, that is, the current error, but also on how we expect this situation to evolve, which is the anticipated future error. The development of techniques to solve this problem was a major success in the automated control of industrial processes and resulted in what's called the proportional-integral-derivative (PID) controller model. So individuals must solve an equivalent problem when pursuing real-time control of goal-directed motor actions. And a deficit in this fundamental process could be related not only to gross abnormalities of motor systems but also to higher-level cognitive and affective dysfunctions. For example, we know from prior studies conducted by Michael Browning that individuals with high trait anxiety have difficulty selecting optimal actions when adjusting to the temporal statistics of the environment. Therefore, in this study, we used a simple motor task with a proportional-derivative controller model and a hierarchical statistical approach to determine the effect of fear and negative affect on motor control. So here, just to show you what we did: the subjects performed a simulated one-dimensional driving task. The position of a virtual car was controlled with a gaming joystick, and each subject completed 30 trials. During each trial, subjects were instructed to drive the car as quickly as possible to a stop sign and as close as possible to the stop sign without crossing the stop line. Each drive had a fixed duration of 10 seconds. The car was controlled according to a linear dynamical system; that is, the car's velocity was proportional to the joystick displacement. 
Throughout each trial, continuous joystick displacement was recorded at a sampling rate of 60 samples per second. At each time point, an error is calculated by subtracting the current position from the goal position. The control action, that is, the acceleration at each time point, is a linear combination of the current error and the derivative of the error, with coefficients KP and KD, respectively. The goal state is taken to be the final position of the car at the end of the trial. The goal state, the current position, and the accelerations are directly measured during the task, whereas the current error and the derivative of the current error are calculated from those quantities. And the parameters, KP and KD, are determined with a hierarchical model-fitting process similar to what I showed you just a couple of slides ago.
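The task dynamics and the PD controller can be sketched as follows. The gains, time step, and goal position are illustrative assumptions; the point is the qualitative behavior described in the results, namely that down-weighting the error's rate of change (low KD) produces overshoot and oscillation around the stop sign:

```python
def simulate_drive(kp, kd, goal=100.0, duration=10.0, dt=1.0 / 60.0):
    """One trial of the one-dimensional driving task under a
    proportional-derivative controller: acceleration is a linear
    combination of the current error and its rate of change."""
    pos, vel = 0.0, 0.0
    prev_error = goal - pos
    positions = []
    for _ in range(int(duration / dt)):
        error = goal - pos
        d_error = (error - prev_error) / dt
        accel = kp * error + kd * d_error   # PD control action
        vel += accel * dt                    # integrate acceleration
        pos += vel * dt                      # integrate velocity
        prev_error = error
        positions.append(pos)
    return positions

# Well-damped control stops at the goal; weighting the derivative of
# the error too little (low KD) overshoots the stop line and then
# oscillates, the pattern attributed below to high-fear participants.
damped = simulate_drive(kp=1.0, kd=2.0)
underdamped = simulate_drive(kp=1.0, kd=0.1)
print(round(damped[-1], 1), round(max(underdamped), 1))
```

In this second-order system the KD term plays the role of damping, so the fitted KP and KD per subject directly index how strongly the person weights the current error versus its anticipated change.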

MARTIN PAULUS: So this basically recaptures the hierarchical model. The shaded circles here represent data, and the nonshaded circles represent the parameters. Without going into too much detail, this really allows us to sensitively and robustly estimate the model parameters. The PANAS fear score was associated with lower KP and lower KD. This indicates that those individuals with greater levels of fear weighted the current error, and also the rate of change of that error, less, which is consistent with reduced approach behavior and a greater propensity for oscillatory behavior once the goal state is reached. In addition, we also found that male and younger participants had greater KP, indicating greater approach behavior. The effects were observed even after controlling for negative affect in general. Moreover, those individuals with a larger caudal ACC volume also showed greater weighting of the differential error. The advantages of the present approach include a simple data-collection procedure and a hierarchical model-fitting approach yielding highly reliable model parameters. And our generative model specifically predicts acceleration at each time point during each trial and reliably captured individual behavioral differences in performing the task. The model parameters demonstrate relationships to self-report and a link to some imaging-derived parameters, and we are now in the process of actually going further with this model. So to conclude this study: using a proportional-derivative control framework, we can parse altered error control in individuals with anxiety-related problems. Anxious individuals underestimate the error of current motor actions, consistent with increased inhibition. Anxious individuals also underestimate the rate of change of the error, which results in oscillatory behavior. These parameters have direct relevance for treatment targets in behavioral interventions. 
And that's something that we're very much interested in: that we can now target these, for example, by using neuromodulation approaches.

MARTIN PAULUS: The next example comes from work done together with Ryan Smith and Robin Aupperle here at LIBR. It is based on the notion that imbalances in the decision to approach or avoid, when both positive and negative consequences are expected, are often problematic in people with mental health problems. For example, people with depression or anxiety may choose to sacrifice participation in rewarding activities because they believe that such activities will also lead to negative consequences. Simple paradigms are used to study this approach-avoidance conflict, most of which create a conflict between receiving monetary reward and monetary punishment, pain, or some other aversive stimulus. Using a computational modeling approach allows one to precisely quantify the distinct information-processing mechanisms that contribute to decision-making. So in this study, we applied an active inference approach to computational modeling of an affect-based conflict, with the goal of separating two underlying components: decision uncertainty and the relative sensitivity to negative affective stimuli versus reward, which we termed emotional conflict. Here's the approach-avoidance conflict task that was developed by Robin Aupperle. Without going into too much detail about the task, the essence is that an individual indicates his or her preference for whether to experience a positive and/or a negative event, and based on this preference, we can infer the decision processes that drive the approach-avoidance conflict. Specifically, to model the approach-avoidance conflict task, we again adopted a Markov decision process model within the active inference framework. We chose this model because it's well suited for modeling decision-making under uncertainty and was designed to model inference and planning processes both with and without learning.
Because the outcomes of decisions in the AAC task were probabilistic, and participants were explicitly informed about these probabilities when making choices, a model that explicitly incorporated action-outcome probabilities appeared to be the most appropriate in this particular instance.

MARTIN PAULUS: So the approach here required that we specify, again, the relationship between observations and hidden states, the relationship between current and previous states, and the prior preferences of the individual, which leaves us with two free parameters, as I pointed out before: decision uncertainty and emotional conflict. Briefly, regarding the population: because of its heterogeneous nature, we actually created two samples, and in particular a propensity-matched sample, to see whether the effects were independent of age and general cognitive abilities as measured by WRAT scores. Here are the results: in this graph, we show both the averages, the bar graphs at the bottom, and the parameter distributions, to better delineate individual model variability. Individuals with depression, anxiety, and substance use disorders showed greater uncertainty in decision-making on this paradigm relative to healthy controls. We also found that individuals with substance use disorder tended to show lower emotional conflict. Notably, averaging across participants, the model was very accurate in predicting behavior, on 72% of the trials, in fact. Emotional conflict correlated more strongly with self-reported motivation, for example, the motivation toward reward and to move away from negative outcomes, and with higher self-reported anxiety during the task. Decision uncertainty correlated more strongly with self-reported difficulty making decisions on the task and with reduced motivation toward reward. So from this study we can conclude, first of all, that the model accurately predicted behavior. The parameter estimates showed strong relationships with both reaction times and the patients' self-reported feelings and motivations during the task.
Emotional conflict was uniquely associated with self-reported anxiety on this task, and decision uncertainty was uniquely associated with self-reported difficulty making decisions. Importantly, the two parameters were not highly correlated with each other and showed distinct relationships with psychopathology.
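A drastically simplified sketch can convey how the two parameters could shape approach-avoidance choices. This is not the published active inference MDP; it is a toy softmax choice rule, and every name and value in it is an assumption made for illustration only.

```python
import numpy as np

def choice_probabilities(reward, aversive, emo_conflict, decision_uncertainty):
    """Toy sketch: probability of choosing each option in an
    approach-avoidance conflict.

    reward, aversive:     expected reward and expected aversive-stimulus
                          values per option (hypothetical numbers)
    emo_conflict:         relative sensitivity to aversive outcomes vs reward
    decision_uncertainty: flattens choice probabilities toward random
    """
    net_value = np.asarray(reward) - emo_conflict * np.asarray(aversive)
    precision = 1.0 / decision_uncertainty   # higher uncertainty -> lower precision
    logits = precision * net_value
    p = np.exp(logits - logits.max())        # numerically stable softmax
    return p / p.sum()

# Greater decision uncertainty pushes choices toward 50/50, mirroring
# the flatter, more stochastic behavior seen in the patient groups.
p_certain = choice_probabilities([1.0, 0.0], [0.5, 0.0],
                                 emo_conflict=1.0, decision_uncertainty=0.5)
p_uncertain = choice_probabilities([1.0, 0.0], [0.5, 0.0],
                                   emo_conflict=1.0, decision_uncertainty=5.0)
```

The key property the sketch shares with the study is the separability of the two parameters: emotional conflict rescales the aversive side of the value comparison, while decision uncertainty changes only how deterministically the better option is chosen.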

MARTIN PAULUS: So based on these new studies that I showed, and we have many more in process, we can say that computational process dysfunctions in anxiety are characterized by the following: First, these individuals have difficulty updating perceptual processes relative to decision processes during a change-point detection task. Second, they show attenuated error processing of current motor actions, consistent with increased inhibition of motor control, and at the same time they underestimate the rate of change of the error, which results in oscillatory behavior during this motor control task. Third, they show exaggerated decision uncertainty in approach-avoidance tasks. These process dysfunctions are clearly transdiagnostic. They can be readily assessed with behavioral paradigms. They are associated with distinct neural circuits, although I haven't shown you some of those data yet, and they can be used to develop specific circuit-based interventions.

MARTIN PAULUS: So lastly, I want to talk a little about a more pragmatic approach to computational psychiatry. So far, the primary goal has been to develop explanatory disease models based on computational dysfunctions in anxiety. Here, I would like to briefly talk about the possibility of using computational models to develop novel assessments that can be used to make individual-level predictions, so that's more on the pragmatic side. Again, this work was done in collaboration with Ryan Smith here at LIBR. This was, essentially, a project based on a study we conducted at LIBR to examine the ability of pharmacological modulation to increase adherence. Although I will not actually talk about that study, I will emphasize here the computational approaches toward developing a better understanding of nonadherence and toward pragmatically developing novel assessments to predict nonadherence. Just by way of background: adherence is one of the most important public health problems, based on the WHO's assessment. Nonadherence is associated with approximately $300 billion in annual healthcare costs. It has a profound impact on reimbursements to payers and on reductions in the so-called star ratings, and it is estimated that about 125,000 deaths annually are attributable to nonadherence. Medication adherence is a complex behavior involving multiple steps: making an appointment, accepting a prescription, filling it, taking the medications as prescribed, maintaining the supply, and returning to the provider. And what's important to understand is that 25% of all patients who get a prescription don't even fill it.

MARTIN PAULUS: Here is a large study that was conducted to show adherence patterns, consistent with many other studies in this field. It shows the typical pattern of adherence right after initiation of antidepressant treatment: after a sharp decline in adherence, only half of patients continued antidepressant therapy beyond the minimum recommended duration of six months. This graph shows an illustration of the Markov decision process formulation of active inference used in these simulations. In the generative model depicted here, the arrows indicate dependencies between the different variables. As described previously, observations depend on hidden states, where this relation is specified by what's called the A matrix, and hidden states depend on previous states via a transition matrix called the B matrix. These transitions are influenced by a set of policies and actions, which are characterized by another set of parameters. The probability of selecting a particular policy, in turn, depends on the expected free energy of each policy with respect to the prior preferences of the simulated patient. The degree to which the expected free energy influences policy selection is modulated by an expected policy precision parameter, called gamma in this model, which in turn depends on a prior expectation over the expected precision, beta, where higher beta values promote lower confidence in policy selection. Finally, the E vector, a prior distribution over the policies that also influences policy selection, can be thought of as encoding the patient's habits.
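The policy-selection rule just described can be sketched compactly. This is a minimal illustration of the standard active inference form, with the habit prior E, expected free energies G, and the precision parameterization (gamma as the reciprocal of its prior beta, so higher beta means lower confidence); the specific numbers are made up for the example.

```python
import numpy as np

def policy_posterior(G, E, beta):
    """Sketch of active-inference policy selection.

    G:    expected free energy of each policy (lower is better)
    E:    prior (habit) distribution over policies; must be positive
    beta: prior on expected precision; gamma = 1/beta, so higher beta
          yields lower gamma and lower confidence in policy selection
    """
    gamma = 1.0 / beta                                   # expected policy precision
    logits = np.log(np.asarray(E)) - gamma * np.asarray(G)
    p = np.exp(logits - logits.max())                    # stable softmax
    return p / p.sum()

# With low beta (high precision), selection concentrates on the policy
# with the lowest expected free energy; with high beta, the habit
# prior E dominates and choices look more habitual.
p = policy_posterior(G=[1.0, 2.0, 3.0], E=[1 / 3, 1 / 3, 1 / 3], beta=0.5)
```

In the adherence simulations described above, it is exactly these quantities, the preferences entering G, the precision, and the habit vector E, that are varied to produce different nonadherence patterns.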

MARTIN PAULUS: This graph shows in more detail a model with two hidden state factors, characterizing both symptom predictability and the expected policy precision. We then basically move the agent along a severity range on both symptom severity and side effects, and we base these transitions on empirical data obtained from large-scale meta-analyses. We then conducted several different simulations, which I'm only going to describe briefly. We have a basic simulation: under what circumstances will the individual engage in continued adherence? Under what circumstances do you find expectation-based nonadherence, and also surprise-based nonadherence? So after conducting a whole set of these simulations, we basically ended up with a model-based adherence questionnaire. We developed a simple questionnaire containing a number of example self-report items that, based on our model, could be useful for gathering information about a patient's adherence-relevant beliefs. The idea is that we are particularly focused on certainty and expectations about medication outcomes and the actions that the individual chooses in order to adhere. The next steps are to validate this questionnaire, to apply it in an intervention study, and to refine the adherence model based on real data. Ultimately, the goal is individual-level adherence prediction. So this is an example where we can use active inference modeling to identify patterns of decision-making that contribute to nonadherence to medication, and here are three examples of when that can happen. These simulations can help develop new probes to determine sources of nonadherence, and thus computational models and simulations can be pragmatically useful for developing novel questionnaires with real practical utility.
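Moving a simulated agent along symptom states can be sketched as sampling from a transition (B) matrix. The matrix values below are invented for illustration; the study's actual transitions were derived from meta-analytic data, which this toy does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition matrix over three symptom-severity states
# (0 = remitted, 1 = moderate, 2 = severe) under continued adherence.
# Columns index the current state, rows the next state; each column
# sums to 1, loosely in the spirit of the B matrix described above.
B = np.array([[0.70, 0.30, 0.10],
              [0.25, 0.50, 0.40],
              [0.05, 0.20, 0.50]])

def simulate(B, start_state, n_steps, rng):
    """Sample one symptom trajectory from the transition matrix."""
    states = [start_state]
    for _ in range(n_steps):
        probs = B[:, states[-1]]                 # column for the current state
        states.append(rng.choice(len(probs), p=probs))
    return states

# A severe-onset patient simulated over 12 visits.
trajectory = simulate(B, start_state=2, n_steps=12, rng=rng)
```

Running many such trajectories under different parameter settings is what lets the simulations distinguish, for example, expectation-based from surprise-based nonadherence.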

MARTIN PAULUS: So finally, my last slide, the general take-home points. From my perspective, computational psychiatry provides an explanatory framework to quantitatively test hypotheses about how individuals with psychiatric disorders process decision-making situations differently. It can be viewed as identifying the critical parameters of the dynamical system that underlies a psychiatric disorder. It's a principled way to identify novel processes that can better explain observed behavior. It still awaits integration with other units of analysis, that is, integration with molecular, cellular, and systems levels and with environmental factors. Finally, it can also be used in a pragmatic context to make better predictions and to develop novel assessment tools. With that, I want to thank you for your attention, and I hope I can address some of the questions you might have had throughout the talk. Maybe I can hand it over to Alex if you want to sort of be the moderator.

ALEXANDER DENKER: Sure. First of all, thank you, Dr. Paulus, for an excellent, excellent talk. I want to go back for a moment to what you said at the very beginning of your talk, which is that stakeholders seek an understanding of psychiatric illness. Families and patients want to understand the basis of psychiatric illnesses. Can you touch more on the possible challenges in communicating findings from computational psychiatry to stakeholders? Obviously, meeting with parents and telling them that their teen's condition is etiologically complex might not go over too well.

MARTIN PAULUS: Right. No, I think you're bringing up a very, very important point, and that's actually something that I'm working very hard on within the computational psychiatry community. As you can imagine, people who gravitate toward computational psychiatry tend to be very mathematically minded and like to express these process models in mathematical terms. Our job, the way I see it, is to translate these models into something that people can understand. So I'll give you an example. It's a nice finding, which Michael Browning's group as well as we have reported, that anxious people have difficulty adjusting to volatile versus nonvolatile environments. And the way to say that is: you might be doing well if nothing changes around you, but when things change around you, and you don't know exactly when they are changing, your brain has difficulty differentiating true change from random fluctuation. Here's a good example: if you're on the highway behind a car and you're trying to use the car to predict whether the driver is male or female, you cannot do that, and people have actually tried. It's a random event. It's very difficult; it's random; you can't control that. But sometimes, when you, for example, take one highway over another, there are robust ways of saying, "Okay, you should take the left versus the right highway."

MARTIN PAULUS: And differentiating between these two different problems, these two different challenges, is what's difficult for anxious people. So you put it in words; you describe what then results in changes in the internal state. And that's, again, where I really like the active inference approach, because it basically says, "It's really difficult for us to get to that internal state. We can get to it only through observations." So the notion here is that when you're put in these situations, when you have to make a decision, with your particular makeup, it changes your internal state, and you have difficulty with that. The point is that we need to translate these computational, mathematical models into something that people can practically understand. And I think that as we develop these models, and different types of models, we can do that, and we can communicate it. And the beauty of this is-- I've been a psychiatrist for over 20 years, and, of course, many times we talk about serotonin dysfunction in depression and so on and so forth. People are willing to take that to some level, but it never quite sticks, because people want a process-based explanation: "I'm anxious because this happened to me," or, "I'm doing this differently." Saying that you have too much or too little serotonin doesn't give the person something to work with. I do think that these process models give a person something to work with.

ALEXANDER DENKER: Sure. And along those lines, if we do have a computational model that is robust, and the results are solid and we truly believe them, what do you think is stopping clinical adoption? What can the field do to move these models toward clinical practice? And where do you think we currently stand on their being included in a diagnostician's toolkit?

MARTIN PAULUS: Yeah. Right now, we're clearly living in a little bit of a bubble, right? Clinicians have maybe heard about computational psychiatry, but they have little idea what it entails. I'm active in the Anxiety and Depression Association of America, ADAA, where you have consumer groups, therapists, and researchers, and I think it's going to have to come through organizations like that, where we begin to translate this to therapists, providing these models, through meetings, courses, and lectures, as frameworks for explaining an individual's behavior. And then, of course, we need assessments, because we don't want to just do a generic assessment: "Oh, you have problems with this." We want to be sure that it's you who has the problem, not anxious people in general. So I think there are two challenges: first, we need the help of organizations that link consumers and patients to researchers; and second, we need really robust tools that can be used in clinical practice to obtain real parameters from these models, which can then be translated back into something that can be communicated to the patient.

ALEXANDER DENKER: We do have about one more minute for questions, so if anyone has any other questions, please enter them now. But we do have some excitement for Bayesian models. [laughter]

MARTIN PAULUS: Yeah. I see that. And I see this question almost always comes up, and it's a very important one: "Have you compared value-based models with active inference models, and what differences have you found? What are the advantages of the latter?" It's a very good question. We have, and, in fact, we are currently doing this. The problem is, and we have to be very honest, that the tasks as we currently have them make it very, very difficult to clearly differentiate between the models that could generate one behavior versus another. It's a fact that many of the reinforcement learning models show very similar behavior to the active inference models. But the difference, of course, is that in the active inference model you are not just paying attention to what you like and dislike, your preferences, as the only element; you also pay attention to wanting to know more about the world at large, because it could ultimately help you adapt to it better. So what we're currently doing is trying to develop tasks around this exploratory "let me see what's over here" behavior, so that we can say, "Okay, if you don't have that looking out for what might be around the corner, your behavior is different." It's a very difficult problem to really clearly disambiguate these two models.

MARTIN PAULUS: Thank you. And I think Joshua will wrap us up now.

JOSHUA GORDON: Actually, I have a real quick question first that I thought about, and then I'll wrap it up. So, Martin, you mentioned sort of towards the beginning that test-retest reliability is going to be really important. So do some of these-- do the active inference versus the reward-based learning models do better at test-retest reliability? Do we have the data yet?

MARTIN PAULUS: Yeah. That's a very good question. We're actually looking at this; we just looked at the test-retest data. It does appear that we get very good test-retest reliability with the particular model that we've instantiated here. I'm very excited about the motor control model. The motor control model is very stable: it uses an error-based framework, but because it looks at trajectories, it has many more data points with which to estimate the model parameters. So we may end up with a different set of computational models that have higher test-retest reliability. The problem, fundamentally, is the following: almost any learning model that is based on probabilistic reinforcement has difficulty with test-retest. And it's a very simple thing if you think about it yourself. The first time you do a task that is randomly reinforced, you are possibly processing it within a probabilistic framework. But once you know the structure of the task, oftentimes heuristics take over. You have some rules: "Okay, if I see the dot three times on the left, I'm going to choose left," or whatnot. And so you basically transition to a different type of behavior, and it's understandable that the models that instantiate the first behavior are different from the models that instantiate the second. That's fundamentally the problem we're facing. So we have to find a behavioral approach that we can combine with computational models that are not afflicted by that problem.

JOSHUA GORDON: Right. Well, thank you. You've given us a lot of wonderful examples of seemingly complex models that boil down to simple concepts, and you've made the case, and forcefully so, that you can actually describe them in ways that make sense from the perspective of thinking about psychopathology. I think it's going to be really interesting to see how well these models, as you hinted, map onto neural processes and how well they can be utilized not just to improve our understanding of what's going on but also to develop novel treatments. Thanks a lot, Martin, for talking to us.

MARTIN PAULUS: Thank you. Thanks for having me.

JOSHUA GORDON: I hope to go out on the bike with you soon.

MARTIN PAULUS: [laughter] Okay. Me, too. And thanks, everybody, for attending.

JOSHUA GORDON: Bye-bye for now, everyone.