Implementation Science in Healthcare: An Introduction
WEBINAR OPERATOR: Hello, and thank you for joining the National Institute of Mental Health Office for Research on Disparities and Global Mental Health’s 2018 Webinar Series. This presentation is entitled “Implementation Science in Healthcare: An Introduction.” Please note all lines are in listen-only mode. If you would like to ask a question during today’s presentation, you may do so at any time through the Q&A pod, located in the lower right-hand corner of your screen. This call is being recorded. It is now my pleasure to turn the call over to Ish Amarreh. Please go ahead.
ISHMAEL AMARREH: Thank you, Kayla. Hello everyone, and thank you all for joining us today to participate in this webinar. My name is Ish Amarreh. I am a Program Officer in the Office for Research on Disparities and Global Mental Health at the National Institute of Mental Health, and I am happy to moderate and introduce the speaker of this fourth webinar of our 2018 webinar series, organized by our office. Today's webinar is the first of two webinars on implementation science and will deal with what implementation science research is. Our speaker is Dr. Michel Wensing, a University Professor of Health Services Research and Implementation Science at the University of Heidelberg in Germany. Dr. Wensing is also the head of a two-year master's program in Health Services Research and Implementation Science, and the Deputy Head of the Department of General Practice and Health Services Research. He was previously a Professor at Radboud University Medical Center in Nijmegen in the Netherlands, where he worked for 25 years in implementation science. Dr. Wensing has a master's in sociology and a Ph.D. in medical science, and his academic work focuses on the organization, delivery, and outcomes of primary healthcare.
He has over 400 scientific publications in this field. He is also the co-editor of a comprehensive book on implementation science and is currently a co-editor of the journal Implementation Science. I will now turn it over to Dr. Wensing, who will spend about an hour, maybe an hour and a half, going through what implementation science is. I want to remind everyone that you can ask questions throughout the presentation using the Q&A portal; I will moderate those questions and present them to Dr. Wensing as we go along. Dr. Wensing?
MICHEL WENSING: Yes, thank you for the extensive introduction, and also thank you for the invitation and the opportunity to talk about implementation science in healthcare. I will not talk so much about mental health or global health, because this is not really my area of expertise. I will mainly talk about implementation science, mostly with examples that apply to primary care, because this is where my emphasis is. I am from Europe, more particularly from the Netherlands; I am Dutch by nationality. I now work in Germany, at the University of Heidelberg, in the medical faculty.
I have no conflicts of interest to report regarding this particular lecture. I was already introduced, so I need not say any more about that.
Here is one picture of Heidelberg, which is a very nice city, not so large, around 160,000 inhabitants. It attracts lots of tourists from Asia, but also quite a few from the United States, so it is interesting to visit. As for this talk, as I said, I will try to give you a brief introduction to implementation science and an example of a study, then focus particularly on theory, and take the opportunity to present some of my own ideas as well. I will briefly talk about research methods; there are no special methods that are unique to implementation science, but I will say a little about them. And finally, some words about my ideas on future research. We will see how it goes. You can ask questions at any time; I guess we need no more than an hour for the talk itself.
So, first the introduction to implementation science. A somewhat traditional way of starting is shown on this slide, which presents the research pipeline. It presents three large bodies, or containers, of research. There is biomedical research, basic research, which could also be psychological research. Then there is clinical research on diagnostic tests and particularly on treatments in patients or populations; some public health research could also be classified here. And then there is health services research: all studies that focus on how healthcare is delivered and what the outcomes are in real practice. There are, of course, two important steps here. From basic research to clinical research is one step, or one translation, which is not easy. For instance, a study by John Ioannidis showed that many basic studies published in Science and Nature never reached clinical practice. However, this is not a topic I will focus on today, although it is an interesting topic in itself. I will focus mostly on the second step or translation, which is the step from clinical research to health services practice, meaning that once a new treatment, diagnostic test, counseling technique, or prevention program has been developed and found to be effective, and even cost-effective, the challenge is to make sure that everybody who could benefit from it, patients or populations, actually receives it.
As you are probably aware, many studies show that this is not the case. Just one example: a literature review of studies in the U.S. showed that 50 to 70% of recommended care is actually given, so not 100%. On the other hand, 20 to 30% of patients receive care that was not recommended and could be left out.
So there is room for improvement, I would say, and this is exactly the focus of implementation science. It could be situated within health services research; it could also be situated in clinical research, I would argue; or it could be situated exactly between the two domains.
There is, of course, a history to the field. In healthcare, there were studies already 20, 30, 40 years ago, but a major step forward was the establishment of the journal Implementation Science in 2006. The journal was founded by Martin Eccles from the United Kingdom and Brian Mittman from the United States; I was one of the editors from the start. It defined implementation research in a way that is still valid nowadays, as the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and hence to improve the quality and effectiveness of health services and care. Since then, we have added that it is not just about implementation in practice, but also implementation in policy, evidence-based policy.
So some key features of implementation science as it was defined by Eccles and Mittman are that it is linked to, and aimed at, evidence-based healthcare. It is also pragmatic, very focused on strategies, interventions, and programs for improving healthcare practice, and also for improving outcomes in patients and populations.
The field has a strong focus on the rigor of methods and also an interest in conceptual frameworks or theories; I will elaborate on that later. It is a multidisciplinary field; the closest disciplines may be epidemiology, behavioral medicine, and the social sciences more generally. Finally, it is important to remark that it is not restricted to one particular disease, healthcare sector, profession, country, or part of the world. I would even argue that it is important to keep the field together and not make implementation science for mental health, implementation science for surgery, and implementation science for public health into separate disciplines. I think it has value to keep this as a combined, integrated field, even if we have different traditions within the broad field of implementation science. A particular perspective, to elaborate on this a little, is that we make a distinction between interventions and implementation strategies. Interventions are typically treatments, but also diagnostic tests, preventive procedures, counseling techniques, devices, and apps, so information technology applications could be included here. These may or may not be evidence-based; I hope they are evidence-based when implemented.
On the other hand, there are implementation strategies, which are actually also interventions. I tend to call them strategies, because once you talk about interventions, many people in healthcare think about interventions for patients or populations, while implementation strategies are usually applied to health professionals or healthcare organizations. They include educational activities, like continuing education of health professionals; organizational changes of all sorts; financial interventions or incentives, for example pay-for-performance schemes; and technological activities, for instance information technology systems applied to health professionals, healthcare organizations, or even at a higher level in systems, with the aim of implementing specific interventions that are evidence-based.
What type of research is covered or done in implementation science? I have shown here on the slide what types of studies are of interest to the journal, and this largely covers what is done. There are intervention studies, and the journal is particularly interested in studies that have sufficient rigor: cluster randomized trials, before-after comparisons, with a control group I should add, and interrupted time series. There are also other designs that may be of interest, but these are usually the ones of most interest. Then there are systematic reviews of those studies, but also many observational studies, qualitative and quantitative, of what may be called determinants of implementation, which may also be called barriers and facilitators of implementation. One particular category here is process evaluations: studies that focus on different aspects of the delivery of implementation strategies. I will elaborate on that a little more later.
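One of the rigorous designs just mentioned, the interrupted time series, is commonly analyzed with segmented regression, estimating a level change and a slope change at the point where an implementation strategy starts. Here is a minimal sketch; all data and numbers are hypothetical, purely for illustration:

```python
import numpy as np

def segmented_regression(y, interruption):
    """Fit a segmented regression for an interrupted time series.

    Model: y = b0 + b1*t + b2*post + b3*(t - interruption)*post,
    where `post` flags observations after the interruption.
    Returns [baseline level, baseline trend, level change, slope change].
    """
    t = np.arange(len(y), dtype=float)
    post = (t >= interruption).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - interruption) * post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Hypothetical monthly performance scores: flat around 40, with a
# 10-point jump when a strategy is introduced at month 12.
rng = np.random.default_rng(0)
y = 40 + 10 * (np.arange(24) >= 12) + rng.normal(0, 0.5, 24)
b0, b1, b2, b3 = segmented_regression(y, interruption=12)
print(round(b2, 1))  # estimated level change, close to 10
```

Compared with a simple before-after comparison, the design controls for any pre-existing trend, which is why it is considered relatively rigorous.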
Then a particular type of study is systematic development work. Systematic development of conceptual frameworks, and also systematic development of implementation strategies, can be of interest, particularly if there are empirical research elements in it. There is, of course, measurement development and validation, for instance measures of implementation outcomes of interest, or a measure of organizational readiness for change, which is a potential determinant of implementation outcomes. And finally, we are also interested in research on teaching implementation science, at least as far as the journal is concerned.
As I said, in the end it is also a very pragmatic field, because in a way the aim is to support people like the one shown here. He is one of my colleagues in (inaudible); he is a general practitioner, a primary care physician, in his practice. As you can see, he has lots of information around him. He is on the phone to discuss something. He has a manual with information on pharmacotherapy. He has a computer, which supports him throughout the day with patient records but also with decision support, and there are various other measurement tools for collecting data.
So it is no surprise that occasionally things he is supposed to do are forgotten, or that, on the other hand, he undertakes procedures which are not necessarily needed for a particular patient. The role of implementation science can be understood in this way: its aim is to support and optimize the situation for this physician, who could of course also be a nurse, policymaker, or manager, so that in the end the outcomes for patients and populations are optimized.
There is growing interest in the field of implementation science. This is illustrated by this graph, which shows the number of submissions to the journal Implementation Science since 2006, and you see a rising number of submissions, with a slightly less sharp increase in the number of (inaudible) papers. The majority of papers come from North America, the U.S. and Canada; a substantial part also comes from the United Kingdom, and to a lesser extent from Australia and other parts of the world. One major background factor here is that NIH has quite strongly supported the field of what they call dissemination and implementation science for a number of years. This results in funding and, of course, in papers resulting from that funding being submitted. Now an example of a study, just to give you an impression. This example relates to mental health in primary care. You may know that in primary care many patients have symptoms of depression and anxiety disorders which are not necessarily recognized; the large majority are not recognized by practitioners.
So in a study, we trained primary care physicians to improve this recognition and then, of course, start treatment or make a referral. They were encouraged to improve recognition. This was all done in the Netherlands, a setting with a relatively strong primary care system, so GPs, primary care physicians, have a relatively central role, and the role of evidence-based guidelines is quite strong in this particular setting.
We had an educational intervention, a one-day training program. There was written information, there was a flow chart, and there was emphasis on using screening questionnaires: if patients potentially had depression or anxiety symptoms, the GPs were encouraged to screen in a more systematic way, using a particular questionnaire. There were also individual interviews and counseling to support these GPs a little more, and finally, part of the group also received financial incentives. We evaluated this program in a cluster randomized trial: 23 physicians were allocated to one of two groups. Twelve were in an intervention group, and 11 were in a control group that did not get this intervention package; they received it after the study was completed. Then a large number of patients were sent questionnaires to screen for depression and anxiety problems. Of those who sent back the questionnaire, 444 screened positive, and they were used as the basis of the evaluation.
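The allocation just described is a cluster randomized design: whole physicians or practices, not individual patients, are randomized, and every patient receives the arm of their cluster. A minimal sketch of such an allocation (the 23 clusters and the 12 versus 11 split come from the study; the IDs and seed are illustrative):

```python
import random

def cluster_randomize(cluster_ids, n_intervention, seed=None):
    """Randomly allocate whole clusters (e.g., GP practices) to two arms.

    Because patients within a cluster share the same arm, such trials
    are analyzed with methods that adjust for clustering.
    """
    ids = list(cluster_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    return {
        "intervention": sorted(ids[:n_intervention]),
        "control": sorted(ids[n_intervention:]),
    }

# 23 physicians: 12 allocated to the intervention arm, 11 to control.
arms = cluster_randomize(range(1, 24), n_intervention=12, seed=42)
print(len(arms["intervention"]), len(arms["control"]))  # 12 11
```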
Then there was a quite extensive economic evaluation, there was a study of barriers, and there was a process evaluation based on interviews and questionnaires for GPs. There was also an additional study on determinants of recognition.
Some of the findings: we indeed found a small effect on recognition. The recognition of depression and anxiety was improved: it was 42% in the intervention group and 31% in the control group, and this difference was significant. For our world, the world of implementation science, this is a relatively large effect. Many studies in our field show smaller effects, more in the range of 5 to 10% improvement, and there are also many studies that do not show an effect. Nevertheless, it is also clear that there is still plenty of room for improvement, because only 42% of patients with reported symptoms of depression and anxiety were recognized, so the majority are still not recognized. We also found that the patients who were recognized got more consultations, but not more prescriptions or referrals. This is more or less what you would hope for, or would expect, based on the recommendations in the guidelines. Patients reported better accessibility and also reported being better informed.
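As a rough illustration of why a difference of 42% versus 31% can be statistically significant at this sample size, here is a simple two-proportion z-test. Note that the equal split of the 444 screened-positive patients across arms is my assumption for illustration, not a figure reported from the study:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference in proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# 42% recognition in the intervention arm vs. 31% in control;
# ~222 patients per arm is an assumption (444 screened positive in total).
z, p = two_proportion_z(0.42, 222, 0.31, 222)
print(round(z, 2), p < 0.05)
```

With these assumed arm sizes the z-statistic lands above the conventional 1.96 cutoff, consistent with the significant result reported in the talk.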
But then we found it was also very expensive. If you calculate it in some detail, the cost of each additional patient who was recognized as a result of this intervention, compared to the control group receiving usual care, was over 6,000 euros. In the end, it is a policy question whether this amount of money is worth the effort, but it seems quite expensive.
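The cost figure is essentially an incremental cost-effectiveness calculation: extra program cost divided by the number of additionally recognized patients. A sketch of the arithmetic, where the total cost and arm size are hypothetical placeholders; only the 42% versus 31% recognition rates and the rough 6,000-euro order of magnitude come from the talk:

```python
def cost_per_additional_patient(extra_cost, rate_intervention, rate_control, n_per_arm):
    """Incremental cost per additionally recognized patient.

    extra_cost: total additional cost of the implementation program.
    rate_*: recognition proportions in each arm.
    n_per_arm: patients evaluated per arm.
    """
    additional_patients = (rate_intervention - rate_control) * n_per_arm
    return extra_cost / additional_patients

# 150,000 euros of program cost and 222 patients per arm are
# hypothetical inputs chosen to reproduce the reported order of magnitude.
cost = cost_per_additional_patient(150_000, 0.42, 0.31, 222)
print(round(cost))  # on the order of 6,000 euros per additional patient
```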
The physicians reported a range of barriers: psychological barriers, but also honest disagreement with specific aspects of the guideline. Not all primary care physicians are convinced that they should actively recognize or diagnose depression and anxiety, particularly depression; anxiety is probably less of a debate.
They also reported resistance in patients and a lack of information on where to refer to in mental health: in the local setting, they did not always know what mental healthcare was available. I do not want to overemphasize the conclusions of this study; it is just one example to give you a flavor of the type of research in the world of implementation science. The study has been published in different papers, by the way, so you can read more about it if you want.
I don't know if there are questions so far, because now we come to a different topic.
ISHMAEL AMARREH: There are no questions that have been submitted so far, Michel. If anyone has a question, please ask before we move on to the next section.
MICHEL WENSING: Okay. So the next section focuses on the use of theory in implementation science. There are different views on the use of theories. At some point in time, 2008 or so, two main figures in implementation science at that time wrote a pro-and-counter piece in the Journal of Clinical Epidemiology. Martin Eccles argued that theories help to build on the available knowledge, so it is important to use them. On the other hand, Andy Oxman, who comes very much from the evidence-based medicine world, argued that theories are useless and misleading. So what you think, you have to decide for yourself; I will try to give you an overview and some arguments. Personally, I have always been interested, with colleagues, in looking at theory, for different reasons, and we published some papers on this in the early days.
So, what is a theory? Quite generally, it has been defined as a system of ideas and statements, held as an explanation or account of a group of facts or phenomena. This is quite general, not limited to implementation science.
But then it quickly gets very confusing. There are many words, like framework, paradigm, model, middle range theory, et cetera.
So in implementation science we broadly have two types of theories. One type could be called impact theories. These describe hypotheses and assumptions about how a specific intervention will facilitate change, as well as the causes, effects, and factors involved. So it is very much about factors that lead to change, or that inhibit change or implementation.
The other type of theories are process or intervention theories: theories that describe, sometimes normatively, how activities should be planned, organized, and scheduled in order to be effective, and how the target group could utilize or be influenced by these activities.
To be a little more specific, I have here a brief overview of some popular theories in implementation science. It is not a complete list, there are many more, and what I present here is also, to some extent, subjective, but it gives you an impression of what is available.
There are theories, or at least a framework, that focus very much on clinical guidelines: factors in clinical guidelines that enhance their implementability. What makes it more likely that a particular guideline will be implemented, and what makes it less likely? For instance, it could be related to the content of the guideline or to how it is formatted: whether there are summaries, whether there are additional tools attached to the guideline, like flow charts or decision trees. There is some evidence on those issues, but not too much, and a framework was developed in the context of an international guideline network working on this issue. Then there is the theoretical domains framework developed by Susan Michie and colleagues, which is quite well known. This is a broad framework for behavioral change, largely based on psychological theory, on an analysis of over 30 theories in the field of behavior change, but also including some aspects of other theories. In the end, it lists a limited number of factors that influence change, like self-efficacy, attitudes, perceived norms, and a number of other factors.
These are all cognitions, focused on individual change. Then there is diffusion of innovations theory, which actually has an older tradition; the most famous person here is probably still Everett Rogers. This is a theory about how innovations generally, not just in healthcare, spread in populations, and more particularly in social networks, because it very much emphasizes the connections between people.
The theory was developed before the internet was as present as it is nowadays, so it probably has to be considered to what extent it is still relevant. But I believe it still has very helpful components.
Then there is theory that focuses on how people give meaning to their lives and to changes, and particularly how innovations become accepted and integrated into routine processes. Normalization process theory is one example of this, the most well-known example, developed by Carl May from the U.K. You could say this is (inaudible) about culture; it is a social constructivist approach, which has been relatively widely applied.
Then there is, of course, theory from economics. Standard economic market theory argues that a higher price leads to lower use, and that transparency of price and quality leads to better or changed use; this is relevant in healthcare as well. So this is, more or less, about incentives. And finally, there is theory from organizational science. An example, because it has been widely cited and read, is the concept of organizational readiness for change, not of individuals but of organizations. It was introduced into implementation science by Brian Weiner from the United States, so here the focus is on systems. As I said, there are many other theories. Nevertheless, this illustrates that the theories span many different domains and come from a range of disciplines, and some of them are more elaborated than others. They all have something to contribute, would be my position.
In implementation science we actually have not so much high-level theories, but a lot of frameworks. What are frameworks? Frameworks are basically lists of concepts, factors, or domains, usually very pragmatic, meaning that they are not explicitly linked to one particular theory but to a range of theories.
There are actually very many frameworks. A paper by Tabak in 2012 already identified over 60 frameworks for dissemination and implementation in healthcare. So at the journal Implementation Science, we are skeptical when a new framework is submitted, because we already have so many.
Some examples are listed here; I will present one or two. One example in which I was involved is the so-called TICD checklist. This was not called a framework but a checklist, although you could also call it a framework. It is based on a comprehensive analysis of published frameworks, theory syntheses, and planning models, focused on barriers and enablers of change. So it is not about process, but about determinants of practice.
Here you see that the factors derived from these available frameworks, theories, et cetera, are categorized into different domains, which more or less reflect the different theories that I have just presented. You also see a number of examples of factors that are included. There are, in total, 57 factors in this framework.
TICD was a project funded by the European community, which focused on improving care for chronic diseases, mainly in primary care. So an issue could be to what extent it is generalizable beyond this setting. I would argue it is largely generalizable. On the other hand, I would never argue that this is the final and ultimate framework. There is always room for improved and updated, elaborated versions of any framework, including this one.
Another example, which is as well known, or perhaps better known because it is from the United States, is the Consolidated Framework for Implementation Research, developed by Laura Damschroder. It has a slightly different categorization and fewer factors. Actually, this particular framework was included in the TICD framework. However, it also has the domain of process, so everything related to the process of improvement or implementation is also in this framework, which is not covered by TICD. As I said, there are more frameworks, and I will not discuss them in too much detail. So why is theory relevant at all?
In principle, theory can help guide hypotheses, interventions, measures, and data analysis. Hopefully, it helps to develop knowledge in a more systematic way. This is actually a quite crucial reason: there are now so many studies in implementation science that I am not sure we are developing knowledge of implementation in healthcare in a systematic way.
Of course, if the framework is valid, it is also very practical: it makes knowledge and evidence accessible for practical use. And finally, you could argue that even if you say you do not use a theory, there is always a theory in your approach, even if it is implicit.
Of course, like anything in life, there are also risks of harm in using theory. The theory may be wrong or not applicable, and it may blind you to alternative or additional approaches.
So finally, I would say: be a little careful when people very strongly propose particular theories or frameworks. I would like to make a plea to take a broad view and not narrow down too quickly.
Also, ask for evidence. Many theories are not so evidence-based, or their evidence comes from different sectors, and then the question is whether it is applicable to healthcare, or to global health settings, if that is your area of work. Also realize that theories, so far, are not very predictive. The best theories probably come from economics and psychology, and even those theories do not explain large amounts of variation in performance and outcomes. However, non-theoretical approaches are not better either, and may potentially be less good.
I will now show a little more practically how theories are used in studies, and then we will have a reflection on this. So, one…
ISHMAEL AMARREH: We have a question about theories, do you want to answer that question before we go on?
MICHEL WENSING: Yeah.
ISHMAEL AMARREH: So this question is from Margaret (inaudible), and her question is: can you comment on how different countries or cultures, legal as well as social, shape implementation theories? For example, she says, in the U.S. a lot of implementation is done to avoid malpractice lawsuits, while in Europe doctors would often be okay as long as they follow guidelines. So she is suggesting that different cultures can explain how implementation is done in different countries.
MICHEL WENSING: Yeah, that is absolutely correct, even within Europe. But to start with, if there is a risk of lawsuits, that is not just culture; it is a real financial risk that influences many things. In some developing countries, corruption is a big problem in healthcare that frustrates everything. So this obviously influences what can be done, and it also influences how you conceptualize implementation processes. I know two countries a little better, the Netherlands and Germany, which are culturally quite close to each other; however, there are big differences in implementation. In the Netherlands, implementation is the world of support and education of practitioners. In Germany, implementation is very much the world of financial incentives and regulations.
So people, including doctors, say: if you want to implement something, you have to pay for it or make a regulation for it. It is first of all important to consider those things. They will certainly have an impact on the type of theories that you develop, but there is no systematic reflection on that. So that is an encouragement to perhaps do this reflection more systematically, and then ultimately the issue is how we can bring this all together. But that is the final stage.
ISHMAEL AMARREH: Yeah, so basically implementation is local. It has to be adapted to the local setting. It cannot just be top-down, a theory and a framework you bring from another country or another culture.
MICHEL WENSING: Well, there are certainly local elements to it, and there are also disease-specific elements and practitioner-specific elements to it. Nevertheless, there are also common elements, and I am, of course, particularly interested in what is more fundamental in it. But let's take the difference between the Netherlands and Germany. I think the decision to implement, to start a process of implementing something, is probably influenced by money and regulations. However, once you have decided to implement something and you are actually doing it, then education and support are much more relevant. So these two perspectives do not exclude each other; they probably complement each other.
I do not know exactly how the risk of lawsuits fits in this picture, but in the end, if there were a risk of lawsuits in European countries, it would also influence decision-making and implementation here, as it does in the U.S.
More questions or should I go ahead?
ISHMAEL AMARREH: Yeah, thank you. Go ahead.
MICHEL WENSING: Okay. So now a little more practically about how theory is used in studies. Here is an example of a qualitative study: 23 GPs, different ones from the study that I just presented, participating in a cluster randomized trial of practice accreditation in general practice, were interviewed, and we used the CFIR, the Consolidated Framework for Implementation Research, for categorizing the results. We also used the steps in practice accreditation to classify the results. Here you see, from left to right, with the blocks around it, how the different statements related. To give you an example, one GP said: “We want to be more conscious of the quality of care we provide and want to reveal our blind spots. The most important reason to participate was to improve the quality of care we provide.”
So this was one statement. This person was apparently positive, and the statement could be classified under the heading of characteristics of individuals. A different statement: "We think there is a lot of work to be done in making improvement plans; making plans actually takes a disproportionate amount of time." This would fall under characteristics of the intervention. In this way, you can classify everything in a qualitative study. That is one example of how a framework, in this case CFIR, is used.
Another example is hypothesis-testing correlational research, again in general practice. We had a cross-sectional study with 83 healthcare providers, doctors and nurses, in 30 practices. We had clinical performance indicators based on patient records, and we had questionnaires completed by these professionals. These focused on two aspects of team and culture: the Team Climate Inventory, a particular questionnaire for team functioning, and the Competing Values Framework, one way of operationalizing organizational culture. And we found a few significant correlations.
A strong group culture was actually associated with somewhat less good quality of diabetes care (the whole study was on diabetes care). So if the atmosphere and climate were pleasant for practitioners, the quality of care was slightly less good, perhaps different from what you would expect. On the other hand, a more balanced culture, meaning a group culture combined with some competition and some hierarchy in the practice, was associated with better diabetes care.
No other associations were found. So this is just an example; I don't want to overstate these particular findings. I will skip the next example for the sake of time.
To give you my reflections on this: what you typically see in our field is that qualitative studies produce a sort of shopping list of issues that may be relevant, and the real challenge we now face is to take the science further than just a shopping list of barriers to change, or of factors that may have influenced the impact of a particular implementation program.
That is one end. At the other end, quantitative studies typically show only a few significant effects among the many tests that are done. This is also not very satisfactory, so here too is a challenge: how can we advance the science beyond what we currently do? At the end, I will offer some suggestions for future research.
Now a look ahead to research methods; this part is somewhat shorter. As I already said, there are no special research methods that are used only in implementation science. Methods are borrowed from epidemiology, quantitative social science, qualitative research, and perhaps even other disciplines.
So I will briefly talk about study designs, outcome evaluation, process evaluation, and economic evaluation; this is certainly not a comprehensive overview.
Of course, we like to see high-quality designs, high quality meaning low risk of bias. In our view, consistent with the epidemiological view, this is primarily determined by the study design. In other words, even very complex statistical analysis can never overcome basic problems in the design of a study.
Of course, it is not just about study design; other elements are also relevant, such as whether the study is not only well designed but also well executed. Dropout is an issue, as are missing values. This is all very consistent with established methodological knowledge.
It may be helpful to distinguish between goal attainment studies and studies that aim to demonstrate intervention impact or effectiveness. For the latter, you need controlled designs, study designs that include some sort of control, so that you have a comparison showing what would have happened if the intervention had not been applied. On the other hand, many studies do not have the ambition to show effectiveness or impact, but only whether the goals have been achieved.
These could be cross-sectional studies or, slightly better, before-after comparisons with a baseline, or cohort studies. This dichotomy is, in reality, more likely a spectrum. Some studies are highly controlled and focused on showing effectiveness; at the other end of the spectrum are studies that are really local evaluations of a particular improvement program. And there are many examples of studies somewhere in between: highly rigorous, generalizable, probably randomized studies at one end, and local evaluations of local programs at the other.
Terminology is, I am sorry to say, confusing, because as I said, we borrow from different fields, and some concepts are interpreted, conceptualized, or defined differently in different fields. For instance, a quasi-experimental study means something completely different in epidemiology compared to psychology. I will not elaborate on that. The next issue in implementation science is: what are appropriate outcomes?
Of course, in the end, everything is about improving health, population health and patient health, and this could be measured in terms of mortality, disease severity, quality of life, et cetera.
However, in many implementation studies this is not included, partly for pragmatic reasons, because the measurement is difficult, but also because it is not necessarily needed, and not necessarily the key outcome, if the intervention being implemented has already been proven effective.
For instance, if you have an intervention aimed at improving the use of statins in patients with coronary heart disease, you do not have to prove again that statins reduce mortality. Many studies have already done this, so it is not particularly necessary to show this again.
So in many implementation studies, the emphasis is on the performance or behavior of healthcare professionals; occasionally also on the behaviors or performance of other people, such as managers, policymakers, or those who educate patients.
This is usually most directly linked to the topic of implementation. And I would argue it is desirable to focus on performance or behavior as much as possible, rather than on knowledge, perceptions, beliefs, et cetera, because the correlation between knowledge, perceptions, beliefs, or other individual psychological constructs and actual behavior is generally moderate or weak. Occasionally, patient behavior and coping are also measured; this can be a little difficult to fit into the implementation science paradigm, and it is often part of the process evaluation. I have already mentioned experiences, beliefs, et cetera. These can be relevant, particularly in process evaluation; occasionally they are also the only way to measure something, because actual behavior is too difficult to measure. Nevertheless, this is a little problematic.
And finally, of course, costs and efficiency can be relevant. At the beginning of my talk, I showed an example of a study that included costs and an economic evaluation, but in reality there are not so many such studies in the world of implementation, and we actually need more of them. Implementation is partly an efficiency question: if you invest more time, effort, and resources, you probably reach higher implementation.
However, this has to be balanced against the impact and effectiveness of that investment of resources, and we could use more studies that consider costs and efficiency.
Then process evaluation. In the context of implementation strategies, or more generally in the context of any complex intervention in healthcare, there is a range of questions. Let me first say it is not just about the satisfaction of participants; I would say that is perhaps the least interesting question. The more interesting questions are listed here. Has the implementation program actually reached the people you wanted it to reach? If you wanted to reach primary care physicians, or mental healthcare professionals, have you been successful? And has the program been realized as planned, or has it been adapted, which could be good or bad?
For instance, if you have planned educational meetings, did people turn up, or was there no attendance, or suboptimal attendance?
What resources were used? Of course, this links into economic evaluation, but it can also be relevant in itself. A key resource in implementation is often the time of healthcare professionals, which is an important issue to consider even without attaching money to it.
Then, fourth: what components, and perhaps even mechanisms, contributed to the outcomes? I think the field is increasingly moving towards this fourth question. What mechanisms, or more practically, which parts and components of your large program were useful, and which were less useful? Also, what factors outside the intervention were relevant, in the context, the organization, or the system? Was it all accessible for users? What was more acceptable, and what was less acceptable? Were there any unanticipated outcomes? You have probably measured anticipated outcomes in a summative evaluation, but there may also be unexpected outcomes, which could be positive or negative. And, last but not least: is the program transferable to other settings and groups?
This implies that you describe the key aspects of the setting and groups of your particular study in some detail. Of course, the open question is what the key aspects are. It means that you have to consider this, so that people in other settings, and perhaps even other groups, can judge whether what you have done is relevant to them.
Process evaluation is actually not so far from intervention design, I would argue. Once an implementation program has started, this is usually called process evaluation. But before the start, you have more or less the same types of questions: you try to optimize all these issues, you may talk with the target group about them, do a pilot test, and this is all called intervention design. But it addresses more or less the same questions.
I will not elaborate on this particular example in detail, but I will elaborate on this one. We had a study in general practice, primary care, that focused on cardiovascular prevention, and we took a very careful approach. We developed a program with the different components listed here. They are difficult to read, but we had components such as training in a counseling technique, motivational interviewing, because the topic was motivating people to take medication and improve their lifestyle. We included other elements that focused largely on patient education. The idea was that these strategies or interventions would address problems we had identified in a previous study, or, you could also say, address barriers to implementation, with the ultimate aim, of course, of implementing recommendations such as controlled blood pressure, a healthy body mass index, healthy lifestyles, et cetera.
And as you can see, we tried to make the assumed links between interventions, determinants, recommendations for behaviors, and ultimate outcomes explicit in this explanatory model, also called a logic model. I would say this is a recommendation for all implementation studies, and perhaps for all studies of complex interventions, which have a range of components addressing a range of factors, with the aim of influencing a range of behaviors to ultimately improve outcomes for patients and populations. This particular example is of course pragmatic, not heavily theory-based, but it helps to organize thinking.
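A logic model of this kind can be sketched as a simple chain of assumed links. The sketch below is only an illustration: the component, determinant, behavior, and outcome names are hypothetical placeholders, not those of the actual study.

```python
# A minimal sketch of a logic model as a data structure: each implementation
# component maps to the determinants (barriers) it is assumed to address, each
# determinant to the target behaviors it influences, and each behavior to
# ultimate patient outcomes. All names are illustrative placeholders.

logic_model = {
    "components": {
        "motivational_interviewing_training": ["low_patient_motivation"],
        "patient_education_materials": ["low_patient_knowledge"],
    },
    "determinants": {
        "low_patient_motivation": ["medication_adherence", "lifestyle_change"],
        "low_patient_knowledge": ["medication_adherence"],
    },
    "behaviors": {
        "medication_adherence": ["controlled_blood_pressure"],
        "lifestyle_change": ["healthy_body_mass_index"],
    },
}

def outcomes_targeted_by(component):
    """Trace the assumed causal chain from one component to ultimate outcomes."""
    outcomes = set()
    for determinant in logic_model["components"].get(component, []):
        for behavior in logic_model["determinants"].get(determinant, []):
            outcomes.update(logic_model["behaviors"].get(behavior, []))
    return outcomes

print(sorted(outcomes_targeted_by("motivational_interviewing_training")))
```

Making the chain explicit like this forces every component to justify itself by the determinant it addresses and the outcome it is ultimately expected to improve.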
Some words about the results, which are published. We thought we had developed good and attractive interventions. Nevertheless, only 50% of the targeted nurses had actually adopted the interventions, and even fewer patients had actually noticed that something had changed, the one exception being that they all checked the websites. We had objective measurement of consultations and found no improvement in counseling skills, although this was key to our intervention, and no improvement in knowledge. And then we had interviews in which nurses told us they found the program quite large and that they would need more time, more follow-up, and more reminders.
So the outcome evaluation showed no effect, and the process evaluation provided a number of clues as to why. There is a lot to say about this particular study, but it shows at least that there is great value in including a process evaluation of sufficient quality in any study of an implementation program.
Then, briefly, costs and economic evaluation. I have brought one example, again related to cardiovascular disease but from a completely different, and quite old, project. Practices received outreach visits (academic detailing is another name for this): people visited practices to help them improve their organization and to enhance the role of nurses. We calculated what this cost. The visits obviously cost the visitor's time, and the number of visits per practice varied, I believe between one and six, so there is a range of costs per practice. The GP's time had to be costed, because he or she had to spend time on this too, and the assistants, these nurses, had to be costed in, as did their traveling time. This all added up to over 8,000 euro.
Just one example, but the key underlying message is that implementation is not free. There are resources involved, and these resources can be substantial. They have to be considered both from the perspective of society and from the perspective of the participating practices, because this GP has to spend time, and these assistants have to spend time, on this particular program. Even if you do not express it in euros, it can be calculated in hours, and this has to be considered. It is in itself a barrier or facilitator for change.
Good. However, implementation costs are not often considered. Starting from the bottom: costs related to changes in healthcare delivery are usually considered in any economic evaluation. But costs related to the implementation intervention or strategy are often not considered, although they have to be costed in. Then there are costs even higher up, for the development of implementation interventions: these programs have to be developed as well, and there may be research involved. This is usually ignored. You could argue you have to go higher again: there has to be a guideline or treatment protocol, something that is available for implementation. Not original research, perhaps, but say a clinical guideline, which summarizes the results of the research and makes them available to the public. Guideline development involves substantial resources as well, and it is also ignored. But as I also said, there are still not so many economic evaluation studies in our field, and we could use more.
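The kind of cost tally described here, visitor time, GP time, nurse time, and travel, can be sketched in a few lines. The hourly rates and time estimates below are hypothetical placeholders, not the study's actual figures; the point is only that every party's time has to be counted, whether in euros or in hours.

```python
# A minimal sketch of costing an outreach-visit implementation strategy.
# All rates and durations are hypothetical placeholders.

HOURLY_RATE = {"visitor": 60.0, "gp": 120.0, "nurse": 40.0}  # euro per hour

def practice_cost(n_visits, visit_hours=1.5, travel_hours=1.0):
    """Cost of outreach visits for one practice, in euros."""
    per_visit = (
        (visit_hours + travel_hours) * HOURLY_RATE["visitor"]  # visitor, incl. travel
        + visit_hours * HOURLY_RATE["gp"]                      # GP attends the visit
        + visit_hours * HOURLY_RATE["nurse"]                   # nurse attends too
    )
    return n_visits * per_visit

# The number of visits per practice varied (between one and six in the example),
# so the cost per practice varies accordingly.
total = sum(practice_cost(n) for n in [1, 3, 6])
print(f"total for three hypothetical practices: {total:.2f} euro")
```

Even with modest placeholder rates, the totals add up quickly, which is the talk's underlying message: implementation itself consumes substantial resources that economic evaluations should count.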
Perhaps I'll finish my talk and see if there are further questions. (Inaudible.) Any questions? Do we want to take questions now, or should I finish?
ISHMAEL AMARREH: We can take questions if there are any. If there are no questions, I will let you finish, and then we can start the discussion.
MICHEL WENSING: Okay...
ISHMAEL AMARREH: Alright, I don't see any questions, so you can finish if you'd like.
MICHEL WENSING: Okay. So, a few slides on future perspectives, or at least my take on them. I would say that in implementation science as an academic field, a real issue is how we can achieve faster accumulation of knowledge. There are now quite a number of studies in different fields and on different topics: studies of barriers to implementation, studies of different implementation strategies. But if you look carefully, it is really a question whether we are accumulating knowledge. Some ideas are listed here. I think it would be helpful to have somewhat larger and longer research programs, not just projects but labs, so that you can more easily and systematically build on the findings of previous studies.
Secondly, I would argue for middle-range theories: theories that are not so high-level and abstract, but also not local theories; theories somewhere in between. The term was coined by the sociologist Merton, I believe, not by me, but it is really something we need. It is up for debate whether you can develop theories of implementation in general, or whether you should focus on a particular sector, because, for instance, mental health is very different from other sectors. This is all open for debate, but I would say we need more of those theories, based, of course, on systematic research. And third, we could also take advantage of big data, as in many other fields of research; there are many data that could be used for studies.
The second point, which I briefly mentioned already, is that I think we need separate attention for the decision to implement, and for the preparation for implementation, before a program actually starts. Perhaps we have ignored this phase to some extent. It may be a phase that occurs before a formal project starts: talking to people, raising their interest in something, building support for an intervention. Perhaps this is something we have to examine in much more detail, much more systematically, because one hypothesis might be that this phase is actually crucial for the effectiveness, the impact, of your implementation program, regardless of what exactly this program is.
Of course, we also have to try to understand better how implementation, by individuals and in organizations, actually occurs. Much of this research is about individuals, perhaps less about organizations, so the organizational aspect could receive more attention as well.
And last but not least, of course, sustained implementation, longer-term implementation, and scale-up are important. Greg Aarons argued there is scale-up and scale-out: scale-up is within the same system and population, and scale-out is to different systems and populations. This is perhaps a separate topic that we need to pay attention to.
And there is, of course, a range of other important issues, which are listed here. I don't know if I should develop them all: how to involve stakeholders in an effective way; the fact that some clinical and organizational domains are somewhat ignored while other topics are heavily studied, which is something to consider. I have already mentioned preparation and planning for implementation, and sustainment, scale-up, and scale-out. How to design implementation strategies is also wide open for research.
There are examples of studies and papers that describe how an implementation program was designed, but there are very few head-to-head comparisons of different design methods, or, even short of comparison, evaluations of a design method. To be honest, we do not know how best to design implementation strategies; people do it in different ways.
Theory development is something I would also propose, particularly beyond frameworks, checklists, or shopping lists of factors. And finally, there has been a lot of development in the field of randomized trials, from which implementation research takes advantage, or could. But in many situations, a randomized trial is not so easy to do, or simply too expensive. The alternatives, which may be relatively close to the ideal of a randomized trial, are then often called quasi-experimental evaluation studies.
I think an area of development is also the rigor of those studies: how it can be optimized and enhanced. So these are some areas I would like to highlight. It is certainly not a comprehensive list; there are many other issues that need more attention, but this is the list I prepared for today.
So, this was my brief overview of implementation science, with an elaboration of theory and research methods. I did not spend a lot of time on implementation strategies and the body of evidence available regarding them, education, organizational change, financial incentives, et cetera, partly because it is not really possible to summarize this body of knowledge in a meaningful way. It basically shows that implementation strategies can be effective, but they are not always effective; all the strategies we have show mixed effects. Sometimes they are effective, sometimes they are not. None of the strategies is always effective, but also none is never effective; even sending paper-based materials to practitioners can be effective in some situations.
So the challenge, more and more, is to understand why, or in what setting, a particular implementation strategy is effective. However, it is important to realize that there is a body of knowledge available. There are certainly over a thousand randomized trials of implementation strategies in the world, and there are systematic reviews, in the Cochrane Library, in the journal Implementation Science, and in other sources, that summarize this body of research. So if you work on a particular topic in a particular field, I think it is good practice to start by searching for the available knowledge.
So, thanks for the opportunity to talk. I don't know if there is an opportunity to raise questions now? But this is it from my side.
ISHMAEL AMARREH: Thank you, Michel. This was a really comprehensive and very good introduction to implementation science. As a dabbler in this field, I really enjoyed it, and thank you for taking the time to introduce us to it. I don't see any questions, but maybe I will ask one you have already answered, from a different angle. You alluded throughout your talk to the importance of implementation science, and to how worthwhile it would be to, for lack of a better word, implement implementation science within the programs or interventions we are developing. I was going to ask how early in an intervention or a process we should consider implementation science, but I think you answered that: as early as you can, even before starting an intervention or a process. So my question would be: from your 25 years in the field, what would be your recommendation for making implementation science something that practitioners, program developers, and the people who run different interventions think about, and make part and parcel of the whole endeavor?
MICHEL WENSING: Yeah. A statement made by Richard Grol, who was head of the department in (inaudible) for 20 years, is that evidence-based practice has to be complemented by evidence-based implementation. Even if you do not want to position yourself within evidence-based healthcare, I think the starting point is that you embrace evaluation: the idea that it is worthwhile and important to evaluate what you are doing and what its impact is, in terms of what you want to achieve for the patients and populations you work for. That is the starting point. If people, policymakers and practitioners, do not appreciate this, then the first step would be to convince them, through training, through opinion leaders, that this is important. Because if you do not fundamentally believe this is important, it is difficult to be convinced that research on implementation is also important. Then I would argue that if you accept that the evaluation of activities is important, of course you cannot evaluate everything you do, but occasionally, for the most important aspects of your work or the most important decisions, you would like to have evaluations. It then follows logically that everything you do to implement is also worthwhile to evaluate. Of course, in the end, it is a question of how much time and money goes to implementation research and how much goes to other types of research. And there have been arguments that investing more money in, for instance, discovering better treatments may be less efficient than investing money in better implementation.
So, the example, again, that (inaudible) used: you could invest your money in inventing medication that is even more effective than statins at reducing the risk of myocardial infarction. But this is really a challenge; it is not easy to invent medication better than what we currently have, and it will cost a lot of money even if you are successful. On the other hand, we know from research that not everybody receives the statins or other medication recommended for these patients. In many systems, I think 70 to 80% of patients receive the recommended medication, which is not bad at all. But if you could increase this by 10 percentage points, from 80 to 90%, for instance, you would actually save lives. This would also require an investment, and associated with it an investment in research, if you want to understand how to achieve this. But the ultimate cost-benefit ratio of an investment in implementation research, and then of course in implementation activities, may be (inaudible) more positive compared to investments in other types of research.
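The back-of-the-envelope logic here can be made concrete. The numbers below are entirely hypothetical (the eligible population size and the absolute risk reduction per treated patient are placeholders, not figures from the talk or from any trial); the sketch only shows how raising coverage from 80% to 90% translates into additional events averted.

```python
# A hedged sketch of the coverage argument. `eligible` and `arr` are
# hypothetical placeholder values, not real epidemiological figures.

def events_averted(eligible, coverage, abs_risk_reduction):
    """Expected events prevented by treating a given fraction of eligible patients."""
    return eligible * coverage * abs_risk_reduction

eligible = 100_000  # hypothetical number of eligible patients
arr = 0.02          # hypothetical absolute risk reduction per treated patient

# Additional events averted by moving coverage from 80% to 90%.
gain = events_averted(eligible, 0.90, arr) - events_averted(eligible, 0.80, arr)
print(gain)
```

The same relative improvement in coverage always buys the same incremental benefit per eligible patient, which is why closing an implementation gap can compete with inventing a marginally better drug.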
Of course, this is just an example. There may be other areas where, for instance, you do not have any effective treatments, and then it does not make much sense to invest in implementation research, because you need effective treatments in the first place; an investment in basic and clinical research may be more appropriate. But there are situations where there is clear evidence, and there are also studies showing that current practice is not optimal and could be improved, and then an investment in implementation, and in implementation research supporting it, could be really worth the money.
Does this answer your question?
ISHMAEL AMARREH: Yes, that answered my question well. We have another question, and this is more about education and learning. Are there any short courses, meetings, or other things you could recommend to the people listening to this webinar, to get more exposure and learn more about implementation science in general, and maybe also specifically in their field, whether in mental health or another area of healthcare?
MICHEL WENSING: There are a couple of short courses around the world. Unfortunately, I am not aware of a particular website or source that collects them all. I know there is a yearly course on implementation science in London, U.K., organized by (inaudible). I know that Susan Michie has a yearly course, also in London, which focuses on behavior change but also links to implementation issues. There is the Nordic conference for implementation research, which is not a course but another source of information, held in the Scandinavian countries. There are also courses in the U.S., I believe, which I am less aware of. In Canada, the field is also called implementation science, but another name is KT, knowledge translation; this is a related field, or, I would argue, the same field, and there are courses on that too, but you have to search the internet to find them. Not all of these courses are offered regularly; there are also one-off courses. The field is so young that this has not been systematized yet.
ISHMAEL AMARREH: Thank you. We have ten minutes left of the time allocated for this webinar, so I will give people another couple of minutes in case there are more questions they would like to ask. If there are no more questions, we will end the webinar and I will turn it back to the coordinator to close us out. ... I don't see any questions, so I'll turn it back to you, Kayla, if you can hear me?
WEBINAR OPERATOR: This does conclude today's program. Thank you for your participation. You may disconnect at any time and have a wonderful day.