
Workshop Day 2: Ultrasound Neuromodulation for Mental Health Applications

Transcript

Recap Day 1

LENNART VERHAGEN: Welcome, everybody, to day two of this virtual workshop on ultrasonic neuromodulation hosted and organized by the NIMH.

Kim, I'm very excited to be here again.  I hope you had a great day, too, yesterday.

KIM BUTTS PAULY: Yes, it was super.  It was very informative.  I think you have a slide to share, Lennart, so we can remind people what we did yesterday and where we're going today.

We had a couple of presentations, our showcase, we talked about biophysical considerations, and then physiological and clinical considerations, and it really cued us up for today.  But maybe I'll ask you, Lennart, do you have any reflections on what we talked about yesterday?

LENNART VERHAGEN: I learned a lot, already starting with these two showcases.  One was Keith Murphy talking about preclinical rodent models of ultrasonic neuromodulation.  We saw how strong these effects can be, immediate, acute effects, and Keith was showing specificity in targeting and cell type, and also showing that there might be certain regions of the parameter space that are more effective.  And you can really see how the field is moving forward.  We're combining neuromodulation with neuroimaging tools to have proof of target engagement.  And I was so excited that we could see that these approaches are also being translated to human applications.

So Dr. Elsa Fouragnan showed us quite a different protocol, one where we induce plasticity.  It's an offline protocol with delayed effects, and she was using MRI to track target engagement, tracking both resting-state connectivity and the neurochemistry, seeing a shift in excitation-inhibition balance as measured with magnetic resonance spectroscopy.

Both of these talks were such a great example of the power of ultrasound for neuromodulation in animals and humans, but they also showed some of the complexities and subtleties, how some effects might be different depending on the state of the brain, whether at rest or in action.

After these showcases we moved to the second session on biophysical considerations.  And Kim, I loved your talk.  That was such a fantastic start.  You laid the foundations and showed us how strongly the technique is grounded in physical effects, but you also showed all of us that these strong physical underpinnings come with real complexity, that there's a mixture of effects, and we might not be entirely sure how everything is working or which effects are the most prominent ones.  But I learned a lot; it was a fantastic opening talk.  Did you have some reflections on the talk from Jean-François Aubry?

KIM BUTTS PAULY: Just a couple of things.  One of them is that the skull is really a bit of a speed bump for us.  I'm super interested to see Brad Treeby's talk later today, when he's talking about simulations and modeling, and just one of the questions I often worry about is just how accurate can we be with our simulations and modeling?  So I think that would be a really important question for the field to understand how to do those simulations and modeling and what it tells us.

LENNART VERHAGEN: Yes, I'm very much looking forward to that session today.  I was also enthused by, or maybe even comforted by, Jean-François's presentation, how he showed how much we can already learn from existing regulatory limits.  But he was also very clear in explaining that regulatory limits are not the same as safety limits.  There can be a large window between them, but regulatory limits can be very helpful as guidelines when you have no other information present, and it might really be these simulations and the careful planning that can bring you further in estimating both your efficacy and your safety.

That came strongly back in the panel discussion, too. 

KIM BUTTS PAULY: Yeah, you did ask me about Jean-François's talk, and I failed to answer, but it is super interesting that there's this sort of gray area.  There's a lot that we know at the extremes, and we certainly feel as if, under the regulatory limits, devices are super safe.  But there's this gray area where we just feel that with a little more knowledge, we'll be able to think about how we might operate there safely.

LENNART VERHAGEN: And it's always comforting to see that there are no serious or severe adverse events reported so far.  So let's get the field moving forward in a conscious and informed approach.  We then moved on to the third session, on physiological and clinical considerations.  And it's amazing what we can learn from all the biomechanisms.  Shy Shoham gave a fantastic talk, really about all of those details.  It shows the richness, but it also gives me hope that we can have an informed parameter space, informed by the physics and the neurophysiology, and then move onward.

Was there anything that stood out for you on session three?

KIM BUTTS PAULY: One of the things in general that stood out for me was the difference between online and offline, where online means immediate effects.  We see those so strongly in animal research.  Offline means delayed effects, and we see those so strongly in the human research.  I'm struggling with this myself, and maybe you have an answer: how do we put those together and think about why we see those effects so differently in models of different sizes?

LENNART VERHAGEN: For one, I hope that people take from this workshop a real call to action to start crossing the different levels and learning from each other and from the different models.  I think there are ways for human applications to gain from what we know about acute effects in the animal models.  But I was also seeing that more animal research is including or looking at plasticity induction.  We really need to bring these two together, because what happens as a delayed effect is not magic; it must be induced acutely at the moment of stimulation or shortly afterwards.

That also was part of the discussion we were having in the panel discussion on session three.  We're all wondering, does ultrasound really stay subthreshold?  Does it evoke action potentials?  Can we really drive neuronal activity, or is it perhaps more modulatory?

You can still have evoked behavior, but as the result of subthreshold modulation of a neural circuit.  And that led to very interesting discussions on the probability or the risk of, for example, seizure.  How can we make it more effective?  There was a very engaging back-and-forth between panelists.

KIM BUTTS PAULY: We have super interesting sessions lined up for today.  You can see that today we're going to talk about the regulatory pathway, and we'll have a panel discussion there.  Then experimental planning and design, where we get into how to make measurements of your ultrasound system as well as the modeling and the simulations, and lastly target engagement, parameter space, and effects.  So today is super interesting as well.  I'm really looking forward to it.

LENNART VERHAGEN: I'm also very happy that people are already thinking about where we'd like to go, right from the beginning.  It's important that we address, for example, regulatory and reimbursement questions at this early stage.  We need to make the right steps now for the field to really move forward.  And I'm very much looking forward to learning from all the speakers and the panelists.  I'm excited for today.

KIM BUTTS PAULY: Super.  I think maybe at this point we turn it over to Lizzy, I think.

Session 4: The Regulatory Pathway for FUS Neuromodulation as a Treatment

ELIZABETH ANKUDOWICH: Welcome, everyone.  Thank you so much for that recap, Lennart and Kim.  That was fantastic.

The first session for today, Day 2 of our Ultrasound Neuromodulation for Mental Health workshop, is focused on the regulatory pathway for focused ultrasound neuromodulation as a treatment.  Our session moderator is Dr. Matthew Myers.  He joins us from the Office of Science and Engineering Laboratories at the Center for Devices and Radiological Health, or CDRH, at FDA.  He's joined by our speaker for this session, Assistant Director Pamela Scott, who leads the neuromodulation psychiatry team within the Office of Neurological and Physical Medicine Devices at CDRH at FDA.

Welcome, Pamela, and feel free to share your slides. 

Regulatory Evaluation of FUS Neuromodulation

PAMELA SCOTT: Good afternoon.  I really appreciate this opportunity to take a moment to walk you through regulatory pathways and review considerations for medical devices.  So let's hop right into things.

I wanted to start out with this slide because I really wanted to emphasize the basic research and early clinical testing being performed, and how important that work is, because it informs regulatory submissions down the line.

So it's really important for investigators, researchers, and developers to understand regulatory pathways and regulatory considerations as part of the total product lifecycle.  So during my talk I'll hit on key aspects of this lifecycle. 

I just want to quickly give you a brief organizational overview of our Center for Devices and Radiological Health.  We are one of the centers within the Food and Drug Administration.  As you can see, within our center we have six offices.  I want to highlight our Office of Product Evaluation and Quality, where we are responsible for the premarket evaluation, post-market surveillance, and compliance activities for medical devices.  I also want to highlight our Office of Science and Engineering Laboratories, which is responsible for doing research as well as developing regulatory science tools, which I will discuss later in my talk.

When it comes to the review of ultrasound neuromodulation for mental health applications, our Office of Neurological and Physical Medicine Devices is going to be the main office responsible for that type of review in terms of premarket evaluation, post-market surveillance, and compliance activities.  On this slide, I give you a snapshot of how our Office of Neurological and Physical Medicine Devices is organized.  We're also known as the Office of Health Technology 5, or OHT5.  We have two divisions and seven teams within our office.  And again, I'm the assistant director for the Neuromodulation Psychiatry team.

Now, let's hop into talking about our regulatory pathways as well as review considerations.  But first I want to start out with a brief overview of the classification of medical devices.  FDA evaluates the risks of various medical device types, and we place devices within one of three classifications for marketing.

These classifications define the level of regulatory control necessary to provide a reasonable assurance of safe and effective use of the device when it is on the market.  Class I devices are subject to general controls; you can see the general controls I have listed on this slide.  Class II and Class III devices are also subject to general controls, but Class II devices are additionally subject to special controls.  I have listed the types of special controls that Class II devices can be subject to, which can include guidance documents, mandatory performance testing, other certain types of performance testing, special labeling, or post-market surveillance.

And then with our Class III devices, they are subject, again, to both general controls as well as premarket approval, and I'll talk about the premarket approval pathway a little bit later in my presentation.

I do want to take a moment to talk about indications for use versus intended use.  And I'll highlight the importance of making sure that you have a very clear indication for use for your device, and the target population, again, a little bit later on in the presentation.

When we talk about the term intended use, the intended use really is the general purpose of the device, or its function.  The indication for use really digs into the disease or condition that the device will diagnose, treat, prevent, cure, or mitigate, including a description of the patient population for which the device is intended.

Now, the intended use of the device encompasses the indication for use.  Again, that's going to be important when you think about how you want to -- how the device is supposed to be used, what is the target population.

Before I talk about these specific regulatory submissions that are needed in order to get a device onto the market, I want to talk about some of the ways you can engage with FDA before you get to the point of a regulatory submission.  One of those pathways is the Q-submission, also known as the presubmission.  For short we call it the Q-sub or the presub. 

This particular program provides an opportunity for investigators, sponsors, and researchers to obtain FDA feedback prior to moving to a marketing submission, or prior to moving to what we call an investigational device exemption submission, which is needed before you conduct a clinical trial in the United States on what is considered a significant risk device.  Again, I'll talk about that in more detail later in my presentation.  But this is a highly practical way to get FDA feedback prior to moving to either that IDE or that marketing submission.

With the Q-submission, you will receive written feedback from FDA, and you have an option for meeting with us after you've received the written feedback so that we can provide clarification to anything in the feedback that may have been unclear. 

I do want to emphasize that the Q-submission is not intended for prereview of data.  The actual review of the data is conducted as part of marketing submission, and the presubmission is really to get feedback on specific questions you have before you move to that IDE or marketing submission phase.

I have a link to the guidance document for Q-submission that provides detail related to that program if you want to take a quick screenshot of this slide.

Now, I also want to highlight a couple of other ways that are important in terms of engaging with FDA.  If you have questions about what your regulatory pathway would be that's unclear, we do recommend that you submit what's called a 513(g).  The 513(g) is a request for information, and in that, as a response to that request, we can provide specific feedback regarding the regulatory pathway for your device. 

I also want to highlight patient engagement.  Here at FDA, we are committed to understanding the views of patients during medical device development as well as regulatory decision-making.  So we are evaluating ways to engage with patients and to capture their viewpoints.  I have a couple of links on this slide again that may be helpful for you, particularly if you are interested in thinking about ways to incorporate the patient view within your research and within your early clinical studies as well as pivotal studies. 

So we have some guidance documents as well as some information on our patient engagement webpage that can be helpful if you're interested in really incorporating that patient view.

Let's move into looking at the types of premarket submissions.  I'm going to start with the premarket notification.  The premarket notification is generally needed for Class I and/or Class II devices.  I will note that most Class I devices, which tend to be low- to moderate-risk devices subject to general controls, as I said previously, are exempt from a marketing submission to FDA.  And generally Class II devices, which tend to be moderate- to high-risk devices, are subject to the premarket notification, also known as the 510(k).

What I want to highlight as it relates to the 510(k) is that the 510(k) is a process to determine whether a new device is substantially equivalent to a legally marketed device -- in most cases a legally marketed Class II device -- which is known as a predicate device.  So with the 510(k), we make a substantial equivalence determination, comparing the safety and effectiveness of the new device against the safety and effectiveness of that predicate device, the device that is already legally marketed within the United States.  With that determination, the final decision is a clearance decision, and I want to emphasize it is not an approval decision.  So you'll hear us refer to devices as being FDA-cleared when they have gone through the 510(k) process.

Now I'm going to move onto the de novo petition.  The de novo petition is a specific type of pathway for a new device, again, that tends to be low- to moderate-risk, for which there is no legally marketed predicate device to adequately compare it to.  So this really happens when you have a device that has a new intended use.  We talk about intended use versus indication for use, but if your indication for use statement really represents a new intended use of the device, then the de novo pathway may be the more appropriate pathway if it is a low- to moderate-risk device. 

Also, if your device has different technology that raises different types of questions of safety and effectiveness, compared to devices that are currently on the market, again, that are legally on the market, then again, that de novo pathway may be the more appropriate pathway.

Now let's move on to premarket approval.  Premarket approvals are typically for Class III devices, and this is an actual approval decision.  The approval is based on our assessment of the data and information provided in the PMA; we're looking to assess whether you have provided sufficient, valid scientific evidence to ensure that the device is safe and effective.  It's not a comparison review.  It's a review of the valid scientific evidence that has been provided, to determine whether the device, in and of itself, has been shown to have a reasonable assurance of safety and effectiveness for its intended use.
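
Taken together, the submission types described here can be thought of as a rough decision tree.  The sketch below is an illustration only, not regulatory advice; the function and parameter names are hypothetical, and the actual pathway is determined with FDA.

    # Hypothetical sketch of how the premarket pathways described above relate.
    # Real pathway determinations are made with FDA (e.g., via a 513(g) request).
    def suggest_pathway(risk: str, has_predicate: bool, rare_condition: bool = False) -> str:
        """risk is 'low', 'moderate', or 'high'; rare_condition means the
        disease or condition affects fewer than 8,000 U.S. individuals per year."""
        if rare_condition:
            return "HDE (requires a humanitarian use device designation)"
        if risk == "high":
            return "PMA (Class III: approval based on valid scientific evidence)"
        if has_predicate:
            return "510(k) (clearance via substantial equivalence to a predicate)"
        return "De novo (low-to-moderate risk, no adequate predicate)"

    print(suggest_pathway("moderate", has_predicate=False))  # De novo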

I briefly want to touch on the humanitarian device exemption, or HDE.  This is a very specialized type of submission, reserved for devices intended to treat or diagnose diseases or conditions that affect fewer than 8,000 individuals in the United States per year.

I do want to note that the approval standard for HDEs is slightly different.  In terms of safety, we want to make sure that there is information showing no unreasonable or significant risk from the device.  And as it relates to effectiveness, we're really looking to see that the data demonstrate probable benefit.

I just want to note really quickly that for the HDE submission you do need to make sure that you have obtained a humanitarian use device designation as well.  There are also some restrictions around profit and use as they relate to HDEs.

Let's jump into when an IDE is needed.  As I mentioned before, there are times when clinical performance data are needed to support a marketing submission, whether it be a 510(k), a de novo, a PMA, or an HDE.  So, when do you need an IDE in order to actually conduct a clinical study within the United States?  An IDE is needed when a device is determined to be a significant risk device, in which case the study is considered a significant risk study.

We've defined what a significant risk device is within our regulations.  I have the citation of the regulation here on this slide, but I don't want to take too deep a dive into the technicalities of the regulations.  What I have here on this slide are the criteria listed in the regulation that define what a significant risk device is.

First, if it's an implant and it presents a potential for serious risk to the health, safety, or welfare of the subject, it's a significant risk device.  Second, if it's a device purported or represented to be for use in supporting or sustaining human life, and it presents the potential for serious risk to health, it's going to be a significant risk device.  Third, if it's a device for a use of substantial importance in diagnosing, curing, mitigating, or treating disease, or otherwise preventing impairment of human health, and it presents a potential for serious risk to the health, safety, or welfare of the subject, it's going to be considered a significant risk device.

And then lastly, if it doesn't meet criteria one through three, but there's still a potential for serious risk to health, safety, or welfare of the subject, it would be considered a significant risk device. 
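
As a rough way to see how these four criteria combine, the determination can be sketched as decision logic.  This is an illustrative simplification only, with hypothetical parameter names; the actual determination is made by the sponsor, the IRB, and, if asked, FDA.

    # Illustrative sketch of the significant risk criteria described above.
    # Returns the list of criteria (1-4) under which the device would be
    # significant risk; an empty list suggests nonsignificant risk.
    def significant_risk_criteria(is_implant: bool,
                                  supports_or_sustains_life: bool,
                                  substantial_importance: bool,
                                  serious_risk_potential: bool) -> list:
        met = []
        if serious_risk_potential:
            if is_implant:
                met.append(1)
            if supports_or_sustains_life:
                met.append(2)
            if substantial_importance:
                met.append(3)
            if not met:
                met.append(4)  # serious risk potential alone is enough
        return met

    # Example: a non-implanted device of substantial importance in treating
    # disease, with potential for serious risk (criterion 3)
    print(significant_risk_criteria(False, False, True, True))  # [3]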

One of the important things to consider when making the determination of whether a study is significant risk is the patient population the study is being performed in.  That is a big consideration when we look at criterion four and think about whether there is a potential for serious risk to the health, safety, or welfare of the subjects within the study.  For certain psychiatric disorders there could be such a risk, especially when you start thinking about suicidality and things of that sort.

In terms of making that significant risk or not significant risk determination, the sponsor of the study has the responsibility for making that initial determination, and then taking it to the IRB.  The IRB then decides do they agree with that determination or not.  If they don't agree with that determination, then the IRB can in a sense override the sponsor's initial determination.  If there are questions, FDA is available to also make study risk determinations.  And we do that through what we call a study risk determination type of submission, and it would be given a Q-submission number, so it falls under that Q-submission program that I talked about, but it is a specific type of Q-submission that is called a study risk determination, and with that we provide feedback on whether your proposed study is significant risk or not significant risk.

We do provide a written letter that outlines our determination for your proposed study, and why we have determined it is significant risk or not significant risk.  There is a guidance document that can be used by clinical investigators, sponsors, and IRBs as it relates to helping to make that determination of significant risk versus not significant risk. 

Just very quickly, some of the considerations that you need to keep in mind when making that determination, particularly for ultrasound neuromodulation: if you're thinking about using mechanical index values that are significantly above diagnostic levels, that may fall on the significant risk side.  As I mentioned before, if you're going to be targeting a vulnerable population, the study may be considered significant risk.  And if you're going to be using brain targets that are more sensitive than others, again, it may be considered significant risk.

If you're using diagnostic energy levels, it could possibly be nonsignificant risk, but again, I want to make sure that you are thinking about whether you are using it within a vulnerable population, and about other risks that can be associated with the study.
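
For orientation on the "diagnostic levels" mentioned here: the mechanical index is computed from the derated peak rarefactional pressure and the center frequency, and the commonly cited diagnostic ultrasound limit is an MI of 1.9.  A minimal sketch under those assumptions:

    import math

    # Minimal sketch: mechanical index from derated peak rarefactional
    # pressure (MPa) and center frequency (MHz): MI = p / sqrt(f).
    def mechanical_index(p_neg_mpa: float, f_center_mhz: float) -> float:
        return p_neg_mpa / math.sqrt(f_center_mhz)

    # Example: 1.0 MPa derated pressure at 0.5 MHz, a typical
    # neuromodulation frequency, gives an MI of about 1.41.
    mi = mechanical_index(1.0, 0.5)
    print(round(mi, 2), "above the 1.9 diagnostic limit:", mi > 1.9)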

I do want to note that a device that may typically be considered nonsignificant risk could be deemed significant risk depending on its proposed use.

Let's jump into some review points to consider.  I do want to make the disclaimer that this is not intended to be guidance; these are really the things we ask that you consider, and recommendations that we make, for marketing submissions as well as, at times, investigational device exemptions.

You want to make sure that you have a very clear indication for use statement.  You also want to make sure that you describe your technology in sufficient detail within the submission.  Remember, you are the experts as it relates to that technology and how the device is being used, so you want to take off your expert hat when preparing the submission and describe that detail for those who are not as familiar with the device or with how you are using it within your particular patient population.

You also want to make sure that you have labeling included within the submission and that that labeling is clear for your target audience, whether it be the clinician or the patient.

We also seek biocompatibility testing for those devices that do have tissue-contacting components.

Other things that we look at: we look at software testing for those devices that have software components, and for those devices where there may be cybersecurity concerns, we also look at cybersecurity testing.

Electromagnetic compatibility, electrical safety, wireless testing, and MR compatibility are also key elements of the submission where we look for a specific type of testing.  If the device is to be used in an MR environment, then you want to make sure that you have adequate MR compatibility testing.

And that can be appropriate for both an IDE and a marketing submission, because in certain studies we are studying the device in an MR environment so that you can see what's really going on at the particular brain target.  So you want to make sure that you have adequate MR compatibility testing even within an IDE submission.

Some of the performance testing that we look at is listed on the slide.  This includes some of the general types of performance testing that we look at across different types of medical devices.  But as it relates to ultrasound neuromodulation, one of the key areas may still be animal studies, to really assess safety, particularly if you're looking at going above those diagnostic levels that have already been established.  If you're looking to go above them, there may still be the need for animal studies to support the safety of the device, especially when you're moving into use of the device within humans.

And then of course, clinical performance data, when that is needed within a marketing submission.  And I will say, as I mentioned before, early research informs early feasibility studies, and the results of the early feasibility studies in turn inform later clinical studies.

And then you also want to make sure that you have a detailed and well-thought-out benefit-risk analysis, because when we make our final decision, we at FDA really look to weigh the benefits and the risks of the device.

When is clinical data needed?  As I mentioned before, for certain marketing submissions we definitely want to see clinical data.  We typically see clinical data for PMAs and de novos.  For the 510(k), we don't always seek clinical data for the substantial equivalence determination.  But if it's a situation where there's a different indication for use that may not represent a new intended use -- so the device can still stay within the 510(k) pathway -- then clinical performance data may be needed to support the substantial equivalence.

Or if you have different technological characteristics that don't raise different questions of safety and effectiveness but are maybe slightly different from your predicate device, then again, that's another situation where clinical performance data could be needed.

When we're talking about clinical data and how to design your clinical study, I'm just going to highlight some key considerations, and we're going to get into a more detailed discussion of this during our panel discussion, and Dr. Anita Bajaj, one of the medical officers on our team, is going to talk about this in a little bit more detail.

But you really want to make sure that you are considering a well-controlled study.  It's important to think about the most appropriate type of control group to minimize the bias associated with the placebo effect.  Ideally, you want to consider a sham-controlled study.  We have also talked to sponsors about using an active treatment control, which can be considered when there is an effective regimen of therapy to use for comparison.

It's also important to consider randomization as well as blinding -- blinding of investigators, patients, and study staff.  And it's important to make sure that you have a well-thought-out statistical analysis plan that includes a formal hypothesis and prespecified success criteria.  These are really key to supporting both study implementation and study success.  Again, we'll talk about this in more detail during our panel discussion.

As I mentioned before it is really important that you clearly define your indication for use statement, which means you need to make sure that you're clearly defining your target population.  Are you targeting patients who are already on some type of treatment, meaning that you want to use this as an adjunctive treatment?  Or are you targeting a treatment-resistant population?  Do you want this device to be used as standalone therapy?  These are all things that you need to think about as you're designing your clinical study.

Going back to defining your target population, you want to make sure that when you're writing your protocol that your inclusion and exclusion criteria support that target population, to make sure that you're truly evaluating the safety and effectiveness of the population that you're intending to treat.

A couple other things that I want to highlight is also, it's important that you specify the DSM-5 treatment diagnosis when applicable.

Finally, I just want to talk about endpoints.  When you're thinking about endpoints for your study, you want to make sure that you prespecify them.  You also want to make sure that you define the timeframe for evaluating your primary endpoint, and when defining that timeframe, you really want to think about the time course and natural history of the particular disorder you're targeting.  Is there some resolution that occurs over time?  If you're looking at a more treatment-resistant population, what's the appropriate timeframe for evaluating that primary endpoint?  And we do recommend that you use validated, objective endpoints.

It's also really important that you establish clinically meaningful endpoints, because that's one of the key considerations when we're doing our benefit-risk analysis: we want to make sure that you have shown that the device really is providing clinical benefit and clinical significance in the patient population.

With that, I'm going to move on to talking about some of our special programs, and again, this really provides access to FDA for more early engagement with us.  These are specific programs that do have some specific criteria for entry into those programs.

The first being the breakthrough devices program.  This particular program is intended to help patients to have more timely access to certain medical devices and device-led combination products.  The key to this program, and entry into this program, is being able to show that your device has a reasonable expectation for more effective treatment or diagnosis of a life-threatening or irreversibly debilitating disease or condition.

Within the guidance document that I have referenced on this page, and also in our regulations, we outline the specific criteria that sponsors need to meet for this program.  Again, it's focused on showing a reasonable expectation of more effective treatment or diagnosis of a disease or condition that is life-threatening or irreversibly debilitating.

I want to move onto talking about our TAP program, which is our total product lifecycle advisory program, and again, this program is intended to promote early, frequent, and strategic communications between FDA, medical device sponsors, and other stakeholders, such as payers and providers.

Again, I want to say there are specific criteria for participating in the TAP program.  With this program, we're really seeking to foster better evidence strategies for faster commercialization, bringing together all of the stakeholders -- not only FDA and sponsors, but payers -- to really talk about the best strategy for evidence development so that we can get novel devices to commercialization and to U.S. patients faster.

I also want to highlight our early payer feedback program.  This is another program where we can link sponsors with payers.  With this program, it's an initiative to help coordinate early action between public and private payers.  Again, we can introduce you to payers, so you can learn what data payers need to make a positive coverage decision.  I do want to note that FDA is not involved with reimbursement activities, but we are available to be involved in those discussions as it relates to outcome measures of interest in your study that would address both FDA as well as payer concerns.

If you're part of this program, you can invite payers to participate in upcoming Q-submission meetings, or you can have separate meetings with payers independent of CDRH meetings.  Again, I have some links on this slide that provide more information about our payer program, and if you have specific questions about that program, you can reach out to our CDRH payer communication mailbox.

Lastly, I want to highlight our medical device development tools program.  We'll be talking about this a little bit more within our panel discussion, but I just want to highlight that this is the program that enables FDA to work with outside stakeholders to qualify tools that medical device sponsors can use in the development and evaluation of medical devices.

I'll leave it at that, but it is a very important program, and again, helps with developing certain tools and methods and assessments that can be used to help with medical device development.

Lastly, I want to get into regulatory science.  This is where regulation and our regulatory work meet science.  I have a lot on this slide, but I want to highlight that there are a lot of factors and inputs that go into regulatory science -- things we consider ranging from technology forecasting, to review challenges, to what's going on on the post-market side with devices that are already on the market, and also stakeholder input.  All of these are used by our Office of Science and Engineering Laboratories as part of their research and testing, from which they develop research articles and presentations.  That work supports device development, and it also supports the development of regulatory science tools, medical device development tools, guidance documents, and standards.

What I want to emphasize when we talk about regulatory science tools -- which may be a new concept for many people -- is that a regulatory science tool is an innovative, science-based approach or methodology to help assess the safety or effectiveness of a medical device or emerging technology.  And Mei will be talking more about regulatory science tools during our panel discussion.

I want us to think about the fact that there are three levels of regulatory science activity needed for ultrasound neuromodulation devices.  We can start with the long term, where we're talking about developing standards and guidance documents.  The intermediate term is what I talked about in terms of developing medical device development tools, where we have qualified certain tools.

In the short term, we have regulatory science tools, and these tools can begin to be used as methodology or approaches that can help in the development of medical devices, and specifically ultrasound neuromodulation devices.

Now to dig a little deeper into regulatory science tools: these tools can be used to expand the scope of science-based approaches, and the intent is really to help speed up or improve the translation of technologies into safe and effective medical devices.  Regulatory science tools can be used along the whole product lifecycle and at all stages of product development, particularly when there is no established means to evaluate a technology or system.  They can also reduce the need to design ad hoc test methods, allowing researchers and investigators to rely on these tools and focus on how well the new product works, as opposed to how well it has been tested.

The tools also represent a peer-reviewed resource for companies to use where standards do not yet exist.  I do want to emphasize, though, that regulatory science tools do not replace FDA recognized standards.

I want to highlight our specific team within OSEL that has expertise in therapeutic ultrasound devices and neurology programs.  On this slide you can see some links that may be helpful for you, including links to the programs and the work that OSEL does.

Lastly, I just want to emphasize if you have general questions as it relates to FDA regulation and regulatory pathways, you can reach out to our Division of Industry and Consumer Education.

Thank you very much.

Q&A

ELIZABETH ANKUDOWICH: Thank you, Pamela, for such an informative talk.  It looks like we have maybe a minute for a question before the panel discussion.  We have a couple of questions, both in the panelist chat and from the audience.  There was one that asked: what is the difference between a Q-sub for risk determination and a 513(g) letter?

PAMELA SCOTT: That's a very good question.  Within the Q-sub, we can answer informally some regulatory pathway questions.  Again, if you want an informal response we can provide that in the Q-sub.  With the 513(g), we are providing a more official response as it relates to regulatory pathways. 

ELIZABETH ANKUDOWICH: Thank you.  And when you say premarket submission, you mean an application to bring a device to market?

PAMELA SCOTT: Correct.

ELIZABETH ANKUDOWICH: Matt, I'm going to turn over to you because I don't want to derail the panel discussion.  But thank you again, Pamela, and the panelists can turn on their videos at this time.

Panel Discussion

MATTHEW MYERS: Thank you, Pamela, and thank you, Lizzy.

We're going to continue with this regulatory theme, and we're going to transition into reimbursement.  And I think we have a really knowledgeable panel here, so if you're interested in marketing a medical device or if you're interested in being reimbursed for neuromodulation procedures, or if you'd really just like to know what goes into a risk determination for your procedure, I encourage you to pay close attention, because we really do have a lot of expertise on this panel.

We're going to continue with regulatory for about 15 minutes, and then we'll have some questions on the regulatory side, and then move into reimbursement. 

I'd like to continue with this theme of regulatory science tools that Pamela showed at the end of her talk, and Mei Ye, my colleague in the research lab at FDA, in the neuroscience group, is going to pick that up.

Mei, can you continue for us?

MEIJUN YE: Sure, thank you, Matt, for the introduction. 

First, I am a neuroscientist from the Office of Science and Engineering Laboratories in CDRH.  I think probably most of you know that FDA does research.  I just want to emphasize: yes, we do research, and we are trying to develop tools to help developers with medical device development and review.

Pamela mentioned the regulatory science tools we are working on, so I want to focus here more on the preclinical side: what kinds of tools we can develop that can help the field move forward faster.

When thinking about a clinical study, the first thing, as Pamela mentioned, is that we need to determine the risk.  Specifically for focused ultrasound, if the parameters are below the diagnostic energy levels and it's not a very special vulnerable population, generally it's a nonsignificant risk.  But if the parameters are above those levels, or if the study involves vulnerable populations, it can be a significant risk.

For the higher-risk studies, review of preclinical safety testing will definitely be necessary.  Safety is highly related to the ultrasound parameters, so I would think the first thing might be a well-verified characterization of the performance of the device.  For example, what's the intracranial intensity?

It's important because there is (inaudible) between different species; even between different human subjects, and even within a single subject, there might be differences, too.
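
One concrete example of such a verification calculation: a free-field intensity is commonly derated for tissue attenuation before estimating what reaches the target.  A minimal sketch, assuming the conventional 0.3 dB/cm/MHz soft-tissue derating factor; skull losses, which are much larger, are not modeled here:

    # Minimal sketch: derate a free-field intensity for soft-tissue attenuation.
    # Assumes the conventional 0.3 dB/cm/MHz derating; real intracranial
    # estimates must also account for skull attenuation and refraction.
    def derated_intensity(i_free_mw_cm2: float, f_mhz: float, depth_cm: float,
                          alpha_db_cm_mhz: float = 0.3) -> float:
        attenuation_db = alpha_db_cm_mhz * f_mhz * depth_cm
        return i_free_mw_cm2 * 10 ** (-attenuation_db / 10)

    # Example: 1000 mW/cm^2 free-field at 0.5 MHz and 6 cm depth -> ~813 mW/cm^2
    print(round(derated_intensity(1000.0, 0.5, 6.0)))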

And another (inaudible) of the ultrasound parameters is to evaluate the safety of those parameters in an appropriate biological system, for example, animal models.  Right now, I think our requirements for preclinical safety evaluation, especially in animal models, might be a little burdensome for developers and sponsors.  There are many sources of variability, for example, where we are going to sonicate and what the indications for use are.  Also, the mechanism for ultrasound is still not well understood: the auditory pathway may be involved, along with cellular or other effects, and there are different mechanisms of activation or injury, like thermal damage, mechanical damage, and also neuroplasticity.  All of those complicate the testing.

So we have two projects right now in house at FDA trying to understand all of this.  One is trying to understand the relationship between ultrasound parameters and the neural effect.  This would definitely help the benefit-risk determination.  I think yesterday people discussed how high is too high for ultrasound parameters.  I would say there's not a single value, because we're weighing benefit against risk.  For a healthy subject, the risk that can be tolerated would be much lower, while more severely affected subjects may tolerate a little bit higher risk.

So understanding that relationship would probably be more informative for the regulation of such clinical devices.  The second project (inaudible) is to better refine those preclinical testing batteries.  As I mentioned, the current requirements might be a little burdensome.  Conventionally, when we evaluate novel devices, we would say, okay, histology, to look at tissue damage.  For ultrasound, however, whether that would be sufficient is a question, because there can be short-term effects and long-term effects, and it has already been discussed in this workshop whether ultrasound can perturb resting activity or other things.  I did a project several years ago delivering really high-intensity ultrasound to mice, and on histology we did not find very obvious changes.  However, all of those animals showed very drastic behavioral deficits.

So that really highlights the need to further understand what testing and what kinds of endpoints -- behavior, histology, or EEG -- will be needed to fully understand the safety of focused ultrasound.

MATTHEW MYERS: I need to move on to the clinical part, and we can get back to some other stuff in the questions.

Anita's going to move us from preclinical to clinical.  Thanks.

ANITA BAJAJ: Hi, my name is Anita Bajaj, I'm a psychiatrist, and I've been working at CDRH for the past five-and-a-half years.  I'll be talking about some of the clinical issues Pamela mentioned in her talk that are involved in the regulation of these types of ultrasound neuromodulation devices.

First of all, I'd like to talk about the types of evidence we consider important from a clinical perspective.  Most importantly, we like to see a well-controlled study to provide valid scientific data of clinically significant results of device safety and effectiveness.

This could mean any of several different control options.  Pamela mentioned a couple of these in her talk.  The first is an active treatment control; that's when an effective regimen of therapy is used for comparison.  The next is a sham or placebo control, in which you remove the active elements of the subject device.  For psychiatry devices we usually like to see a sham because of its ability to reduce bias caused by the placebo effect.  And we recommend three things to consider when designing a sham.

It should do three things: maintain the blind; promote subject retention, such that those in the sham group do not drop out of the study to a greater extent than those in the treatment group; and match time on task compared to the treatment device, for devices that include some patient interaction.

Another type of control is the treatment-as-usual control, which means that control subjects receive usual-care treatment options, such as medication, psychotherapies, and other treatments for that particular disorder.

Next, I'm just going to quickly mention some procedures needed in a well-controlled study.  Randomization is the process of assigning participants to treatment and control groups such that each participant has an equal chance of being assigned to either group.  That's very important to lessen the chance of bias due to a subject knowing which group they're in, which could cause placebo or nocebo effects.
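
One common way to implement this is permuted-block randomization, which keeps the arms balanced over time.  A minimal sketch, with hypothetical names and a fixed seed for illustration only:

    import random

    # Illustrative permuted-block randomization: each block of four holds
    # two treatment and two sham assignments in a random order.
    def block_randomize(n_participants: int, block_size: int = 4, seed: int = 42) -> list:
        rng = random.Random(seed)
        assignments = []
        while len(assignments) < n_participants:
            block = ["treatment", "sham"] * (block_size // 2)
            rng.shuffle(block)
            assignments.extend(block)
        return assignments[:n_participants]

    print(block_randomize(8))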

Related to that is blinding, which refers to the concealment of group allocation from individuals involved in the clinical research study.  Keep in mind that the patients, investigators, and study staff should all be blinded, and we usually recommend assessing blinding before starting the pivotal trial, so that you're not stuck at the end realizing that blinding was unsuccessful when it's too late to correct any problems.

If the blinding assessment is done after the trial starts, it could be open to a lot of different types of interpretations and explanations and could negatively impact the results of the study.

Lastly, it's important to make sure that you have a statistical analysis plan, or SAP, that involves a formal statistical hypothesis and prespecified success criteria.
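
Part of prespecifying success criteria is fixing the sample-size calculation in the SAP before enrollment.  A minimal sketch using statsmodels; the effect size, alpha, and power below are placeholders, not recommendations:

    # Minimal sketch of a sample-size calculation for a two-arm,
    # sham-controlled trial; all numbers are illustrative placeholders.
    from statsmodels.stats.power import TTestIndPower

    n_per_arm = TTestIndPower().solve_power(effect_size=0.5,  # assumed Cohen's d
                                            alpha=0.05,       # two-sided type I error
                                            power=0.8,        # 1 - type II error
                                            ratio=1.0)        # 1:1 allocation
    print(round(n_per_arm))  # roughly 64 participants per arm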

Keeping in mind the nature of the patient population is also very important.  For instance, defining the indications for use and the target population: is it a treatment-resistant population?  If so, it's important to be specific about other failed therapies.  Also, specify the DSM-5 treatment diagnosis, age, severity of diagnosis, things of that nature.  We also like to see validated measures for screening that align with DSM-5 criteria as closely as possible.  Generalizability to a U.S. population is also very important to the review team.

Outcome measures are another very important piece.  Define them ahead of time, considering the time course of the specific disorder and the types of treatment for it.  Do patients have an immediate response and then tend to relapse?  We've also been recommending a follow-up timepoint to assess duration of effect, so that we can see that the treatment has an effect not only while the device is being used, but also for some time afterwards.

Prespecifying study outcome measures can also involve having a safety outcome, which could involve worsening of the disorder or development of suicidality, and one or more effectiveness outcomes.  We prefer clinician-reported outcomes because of the added strength of objectivity -- for instance, in depression, scales such as the HAM-D and the MADRS.  With PROs, or patient-reported outcome measures, such as the PHQ-9 and the GAD-7, there is a possibility of bias because the psychiatric symptoms are self-reported by the patients themselves.  So it's best to supplement with something that has a little more objectivity, maybe as a co-primary: something clinician-rated, such as the CGI, or something more objective, such as hours slept or something of that nature.

We tend to prioritize functional improvement, which means that the patient's functioning has improved since starting the treatment, in areas such as occupational, educational, and family functioning.  We also recommend secondary outcomes to support the primary outcome if needed, and these need to be adequately powered if labeling claims are to be made.

Safety considerations must also be taken into account: device risks, such as stimulation parameters, and also brain targets.  I know some people were wondering which parts of the brain are considered more sensitive.  One thing to consider is whether a target has been studied before or whether it is a novel brain site; those are things the review team takes into account.

Also, not just the risks of the device, but risks of the population being studied.  Is it especially vulnerable?  Does it include persons who are suicidal, pregnant, or other vulnerable groups?

A benefit-risk analysis is taken into account when looking at marketing submissions, while a risk analysis of safety issues is taken into account for IDEs.  For both, we consider what other treatments are available for the target population and the potential risks from concomitant medication use or from stopping or interrupting medications, and for the benefit-risk analysis we also factor in an assessment of uncertainty.

We also like to recommend a robust procedure to monitor the study, which could include assessment schedules, informed consent forms, DSMBs or SMCs, which are data safety and monitoring boards, or study monitoring committees, and safety monitoring boundaries or stopping rules.

Investigators should also provide detail on their risk mitigation strategies: whether the device will be indicated for use under the supervision of a clinician, whether it will be used adjunctively or standalone, what the routine monitoring for adverse events will be, and whether the labeling will reflect this mitigation strategy.

MATTHEW MYERS: We should stop to allow time for questions.  While people are entering questions, Mei, I just wanted to come back to you for a minute: give us a timeframe on regulatory science tools versus the other types of tools that FDA has.  We just have a few seconds for that.

MEIJUN YE: I would say this actually goes to why we develop regulatory science tools: they can be faster.  For an MDDT, I would think it's a couple of years after its verification and validation, but for regulatory science tools, you are seeing a timeframe of two to three years: we publish the results, they are peer-reviewed, and then developers are able to utilize those tools.

For guidance development or standards, it would be multiple years before they can finally become official.  That's one of the advantages of the regulatory science tool: it's much faster.

MATTHEW MYERS: There is a question about sensitive brain targets, and I think, Anita, that you answered that.  Did you or Pamela want to say anything more about sensitive brain targets?  I know it's been an issue in some regulatory submissions for neuromodulation that we've reviewed.

ANITA BAJAJ: I don't know, Pamela, if you want to add anything.  I was just going to say that if it's a novel brain site that we have not seen investigated previously, we would like to see some type of literature evidence of its safety for sonication, whether that's animal data or other data.

MATTHEW MYERS: I don't see any other questions in the regulatory area, but you'll have another chance in about 15 minutes.  We're going to move over to the reimbursement side, and we can start with Mark Carol from the Focused Ultrasound Foundation, who's going to talk about how the regulatory and reimbursement process impacts the work of clinicians and researchers.  Go ahead, Mark.

MARK CAROL: Thank you, Matt.  By way of background, I'm an MD neurosurgeon.  I work as a senior consultant at the Focused Ultrasound Foundation, a foundation that has funded a lot of the work that was presented yesterday, and I have experience as a former CEO of a focused ultrasound company.

Many of you are probably asking yourself, well, I'm a researcher, a clinician, why do I care about this regulatory and reimbursement process if I'm never going to be actually marketing or selling a device or technology?

This is the overall process of bringing a new medical device to market.  And you notice that you guys are in this box up here called clinical testing and/or clinical trial.  You can see the interconnectedness of all these other functions that exist within the space and all of the factors that impact whether a device actually ultimately gets to market.

If we look at this red box here, the regulatory and reimbursement pathway, this is how complicated it is.  Again, these are all the interactions that occur.  So far this morning, we've been talking about the regulatory box over here, but this side of the equation deals with reimbursement.  While many of us, and many of you, probably think that getting regulatory authorization, grant, or clearance is the most important and most difficult step necessary to bring a product to market, in many instances it's the reimbursement side.

Why should you as clinical researchers be concerned about reimbursement?  The presenters so far this morning have done a great job of presenting the kinds of clinical information that need to be presented for regulatory decisions and how important making the right clinical decisions is in the regulatory process.

But the reimbursement side also includes clinical questions.  CMS and especially the commercial carriers will look at clinical data in order to make a coverage decision as to whether they will cover a procedure that the FDA has approved or authorized for use and that CMS or the AMA has issued coding for.

Oftentimes, their requirements for clinical data are more stringent and rigid than the data required by the FDA.  For instance, the AMA requires multiple publications of data from separate patient populations before it will consider granting a code.  And commercial carriers will oftentimes do their own independent review of the clinical data in the literature in order to make a coverage decision, and that decision can be separate from the regulatory authorization decision.

How this impacts you as clinicians: when you develop your clinical trial design and determine what kind of clinical data you will be generating, it is worthwhile working with whomever you will be partnering with, should your approach or technology come to market, to determine what kind of data will be required on the coverage decision side, so that that data is included in your clinical trial.  Otherwise you may be forced to repeat clinical trials or clinical work to generate the data required for coverage.

Thank you, Matt.

MATTHEW MYERS: Thank you, Mark. 

Rhonda Robinson Beale is going to continue along this theme of evidence, including the evidence requirements for different health plans and the evidence associated with particular diagnoses.  Take it away, Rhonda.

RHONDA ROBINSON BEALE: Thank you.  I am a psychiatrist by training.  I'm a senior vice president and chief medical officer at UnitedHealth Group.  What I will be reporting is not specific to UnitedHealth Group but is a compilation of information from several of the payers, as was illustrated in the chart by Mark.

I think a lot of the information holds true across all payers.  As Mark said, commercial payers many times will want more stringent information.  It's important to understand the variability.  Even within a payer -- say Aetna or Blue Cross -- there are many different payers within that.

So there are employers who are self-funded.  There are fully insured populations.  And then there are governmental populations, Medicare and Medicaid.  For Medicare and Medicaid, coverage is dictated by CMS determinations and also by states.  The employer groups are self-funded, so they have some leeway through ERISA to be able to make coverage decisions.  What you need to understand is that somewhere between 65 and 75 percent of individuals who are covered by insurance are covered by employer/ERISA self-funded plans.  The reason I state that is that, from an employer's point of view, they're paying for the treatment of their populations, and cost, as it relates to their business profitability, is very important.

So therefore, we're not only looking at the amount of evidence but also at the specificity of the evidence, not only in terms of clinical effectiveness but also cost effectiveness.  I know it was mentioned before that one of the more prominent comparisons needs to be to a sham, but in a health plan situation, once a technology is covered it goes out to an open care delivery system, so there can be variations in the familiarity, the skill level, and the infrastructure of a provider.  So there needs to be clarity as to how to administer the device, what is expected, side effects, what type of monitoring and training, and whether certification is needed.

It's also important to understand whether or not the device has been tested on a wide variety of populations.  We're now talking about not only male and female, but also different ethnic groups, to make sure that there isn't variation in the side effects that can be experienced in that way.

The other piece that is really important from a health plan perspective, and which I think is also a problem in the behavioral health field, is that the specifications for the application of a device have generally been linked to a disorder or a diagnosis.  One of the problems I think we all who are in this field recognize is that there's a dichotomy between the way that we classify psychiatric diagnoses versus the way one approaches disorders from a neurocircuitry or neural brain functioning perspective.

The two are not necessarily synonymous.  In other words, one can use neuromodulation, which has been approved for treatment-resistant depression and has been approved for smoking, but in the off-label world it's being used widely for the treatment of certain types of autism and other conditions, where, from an insurance coverage standpoint, the evidence is not there.

I bring this up as a challenge to all of us that are sitting in on this panel, to figure out how to bring the two together so that we can bring the advantages of neural approaches to mental health disorders, to be able to get them covered and be able to get them into the mainstream.

One other thing let me mention.  There is another type of payer out there, and those are what we call capitated provider systems, ACOs and other networks.  The reason I bring them up is that they have the ability to shift their dollars around to pilot new technologies, particularly if they've been FDA approved.  And if those technologies are not covered by the insurance panel that they are working with, there are some groups who have the capability, through their direct-to-consumer advertising and their notoriety, to attract populations that will pay out of pocket for access to new technologies.

With that I will conclude and hand it back to you, Matt.  Thank you.

MATTHEW MYERS: Thanks, Rhonda.  We are going to finish with Gerry, who is going to talk to us about steps to gain Medicare coverage.

GERALD ROGAN: Good morning.  My name is Dr. Gerald Rogan.  I've served as the medical director for a Medicare contractor.  Medicare contractors decide about local coverage of new technology.  One example is focused transcranial ultrasound for the treatment of refractory essential tremor.  Reference to this policy is in my handout, which is available by reaching out to the conference organizer.

The decisionmaker was the medical director of a local contractor.  In this case, Novitas.  Each contractor engages advisors representing all the major specialties of medicine.  The specialists advise the medical director, who often is a primary care physician like I was.

Focused transcranial ultrasound is a benefit of Medicare.  Coverage requires that the device which provides the ultrasound is cleared for marketing by the FDA and is applied to a patient according to a statement of intended use.  In addition, peer-reviewed scientific evidence must prove the ultrasound treatment is reasonable and necessary.

Typically, evidence must be grade 2A or better.  2A means a systematic review of homogeneous cohort studies of exposed and unexposed subjects.

To better understand the evidence requirements, I recommend you read the Novitas local coverage decision entitled Magnetic Resonance Guided Focused Ultrasound Surgery for Essential Tremor, policy number L-38495, which you can Google on the internet.

Study how the indications and limitations of coverage are reflected by the scientific evidence presented.  Study the public comments made in response to the draft policy, and the responses to those comments made by the Medicare administrative contractor, in this case Novitas.

Transcranial ultrasound neuromodulation for psychiatric disorders is in early development.  When designing a study to support reimbursement, include patients of Medicare age in the study.  Include both sexes and the ethnic groups in the U.S. population.  Include enough patients to prove effectiveness with a P value of less than 0.05.  Measure the duration of effect.

In addition to scientific evidence, Medicare administrative contractors will want support from psychiatric and other relevant specialty societies.  Support by clinical practice guidelines is not required, unless all the evidence available has been reviewed by the guideline committee with an affirmative recommendation.  Provider qualifications are considered when the technology is complicated and not well-suited for use outside a specialized treatment center.  You'll find an example in the policy I recommended for the treatment of essential tremor.

Thank you.

MATTHEW MYERS: Thanks, Gerry.

Shall we open it up for questions, both reimbursement and regulatory?  While people are formulating questions, I want to come back to this theme that Pamela presented about when an IDE is required for your study, and one of the criteria is whether it presents a potential for serious risk.  I can tell you that those of us at the FDA have agonized over those words, because you can have data showing that there's never been an adverse event, but if the potential is there, it could still be designated as having a potential for serious risk.

I think the way to reduce that subjectivity goes back to what we talked about yesterday, the real need for a scientific basis for evaluating neuromodulation, which we don't have right now, and I think that Mei was alluding to that.  Mei or Anita, Pamela, if you want to say anything about that in the remaining minute or so that we have, feel free.

I'm not seeing any new questions.

The other theme that I think is important to emphasize is that FDA is really making a risk-benefit analysis, that's a point that Pamela made.  So high risk does not necessarily mean that the study will not be approved.  It's that we really want to understand the risk profile for the device or the procedure.

I guess that we are out of time, and I'll turn it back over to Lizzy.

ELIZABETH ANKUDOWICH: I just saw that Pamela unmuted.  Did you want to say something, Pamela, quickly before we move on?

PAMELA SCOTT: I think the question that I saw, again, was related back to significant risk versus nonsignificant risk, and when one device may be considered significant risk for one study and not significant risk for another, and I think a key component of that is really looking at the target population, the patient population, how the study is being conducted, what measures to mitigate those risks are incorporated into the study.  Things of that sort can go into that decision and can kind of veer or direct whether or not it's significant risk or nonsignificant risk.  Hopefully that helps.

ELIZABETH ANKUDOWICH: That helps so much.  Thank you all, to our panelists.  Thank you, Matt, for moderating.  I think we're ready to move onto our next session.

Session 5. Experimental Planning and Design

ELIZABETH ANKUDOWICH: Our next session, session 5, is focused on experimental planning and design, reaching the target.  Our first speaker will be Dr. Darin Dougherty, who joins us from the Division of Neurotherapeutics within the Department of Psychiatry at Mass General Hospital, and he'll be speaking on clinical trial design.  He'll be followed by Dr. Bradley Treeby from University College London, who's also our moderator for this session, and who'll be speaking about modeling and targeting with and without subject-specific images.

Our last speaker for this session is Dr. Elly Martin, also from University College London, who will be speaking on standardized metrology and calibration.

Welcome, Darin.

Clinical Trial Design

DARIN DOUGHERTY: Good to see everybody.  I hope you guys have enjoyed this so far.  I found it to be fascinating.  I often feel like I'm doing a lot of this ultrasound work with a few people to bounce ideas off of, but we have the whole crew here, and I think it's been really wonderful learning of what's going on.

I'm going to talk a little bit about our experience of doing some trials in human beings, both healthy volunteers and patients, patient populations.  It's going to be kind of a scaffolding of how we've done things, and some future talks are going to go into detail on many of the aspects that I'm going to discuss.

I'm going to use this study as kind of a vehicle.  We just completed a study of 30 healthy individuals, randomized to active or sham tFUS of the amygdala.  We did a scan at baseline, then the active or sham tFUS -- I'll talk about the targeting in a moment -- and at the same time they're doing a shock paradigm to induce fear, while we're also measuring SCR.  Then we repeat all of this after tFUS, so we get a pre and post for each subject, randomized to active or sham.

Prior to doing this we had to get approval from the FDA.  For us, that involved sending the device brochure for the BrainSonix device that we use, that's the device we're using for human beings, the protocol, et cetera.  We ultimately got an NSR letter, so we were able to go forward with the healthy human being study.  And then of course we also had to get approval from our IRB.

In the scanner we use there are fiducial markers inside the transducer.  This is the transducer, the business end is down here at the bottom, and this is a side view.  You can see the fiducial markers, and when you do the scan, you can see them on the edge here.  What we do is a 3D scout, and then we simply draw a perpendicular line -- or maybe angle it a bit, depending on which gel pad we're using -- and since we know the focal length, we can see if we're hitting our target or not.  Most of the time we miss it the first time, so we take them out, reposition based on what we got on the first scout, and repeat.  Usually within two to three iterations, we know that we're on target and can start the study.

That's how we did it in the older days.  More recently we work with Richard Bouchard at Baylor, who has developed software for targeting based on that scout data.  There we use a vitamin E tablet at the temporal window, take an image, and export that out of the MRI system onto a computer that, in a few minutes, can calculate where we should put the transducer, at what angle, and what target we'll be hitting.  So this is a ventral capsule, ventral striatal target, shown here.  That's the FDA-approved target for deep brain stimulation for OCD.  We've been studying that target in healthy volunteers with hopes of then moving into an OCD population, so that's an example.

You can use this software in real time to also help with targeting.  It's probably a bit more accurate than our earlier attempts at simply using the ruler function in the scout images.

You could also do neuronavigation outside of the MRI environment.  This is me in the early days; look at me, I've got a transducer on my head, and we have markers, and we're using this camera system, and we can navigate on the brain surface, the transducer, move it around, and we can know in real time on this computer screen as to where the middle of the transducer is.

We used a big threaded screw the first time, because it's all we had.  Now we actually have a much nicer prong here, I guess you call it, for the markings. 

You can move, like I said -- this is your target, and you simply move with the neuronavigation system until the crosshairs are over your target -- and then you know where you're entering, and from there it's focal length, et cetera.

Of course, after the fact, you also want to validate.  You've done your best in real time, but now what does it really look like?  This is work by Bastien Guerin, a physicist at the Martinos Center, who has put together some programs where we can actually model the beam and see where the target is.  This involves an MRI that's converted to a pseudo-CT scan, and we're able to model that.  Here's a closeup showing the beam coming in, and this is into the amygdala, one of the nuclei of the amygdala.

You also need to acoustically couple to do the trial correctly, of course.  We have a transducer holder, and then we also have a gel pad, with gel above and below that pad, before we put it onto the skull.  And these are what the pads look like.

Blinding is an issue.  When we first started, there was really no way to blind, so we were doing the studies in the scanner, and we would have somebody who was unblinded -- the rest of us all remained blinded -- the wire going to the system, which was outside of the room, went through the wall.  And we just randomized people, and then the tech, who was unblinded, would simply plug or not plug in the device, and we all never knew if it was plugged in or not.  That's all we had, and it worked pretty well.

Since then BrainSonix has developed pads.  They look exactly the same, but one of them doesn't allow any of the energy, ultrasound energy, at all through the pad, and the other, of course, conducts that.  They look exactly the same, and we're completely blind.  They're labeled A and B, and actually BrainSonix, we worked out with them, they keep the key as to if A or B is the active or the sham, and at the end we get that information.  So everybody on our site is blinded throughout the entire study.

Parameters.  Dr. Nandi is going to talk about this later.  The parameter space is infinite, really, as I think many of you are aware.  The issue isn't so much the detail here; it's that at the end of the day we're limited in human beings by the FDA to an ISPTA of 720 milliwatts per square centimeter, and as long as we meet that criterion you can adjust many of the other parameters.  We had recently done a study -- I've talked about the amygdala before, which we had some nice results in -- where we showed decreased amygdala activity and decreased subjective fear in active versus sham, in that 15 versus 15, or 30 total healthy volunteers.
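Just to illustrate that arithmetic as a rough sketch -- the pressure, duty cycle, and tissue values below are assumptions for illustration, not the parameters from any of these studies -- the temporal-average intensity for a pulsed exposure is the pulse-average intensity scaled by the duty cycle:

    # Rough sketch with assumed values; not the parameters from any study discussed here.
    rho, c = 1000.0, 1500.0            # water-like tissue density (kg/m^3) and sound speed (m/s)
    p_focal = 0.2e6                    # assumed focal pressure amplitude, Pa (200 kPa)
    duty_cycle = 0.5                   # 50 percent duty cycle
    isppa = p_focal**2 / (2 * rho * c)     # spatial-peak pulse-average intensity, W/m^2
    ispta = isppa * duty_cycle             # spatial-peak temporal-average intensity, W/m^2
    ispta_mw_per_cm2 = ispta * 0.1         # 1 W/m^2 = 0.1 mW/cm^2
    print(f"ISPTA ~= {ispta_mw_per_cm2:.0f} mW/cm^2 (FDA guidance level: 720)")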

We also did a study in 10 subjects, healthy volunteers, at the ventral capsule/ventral striatal target.  It's kind of dose-finding, to find what types of parameters we want to use when we move into the OCD population.  Here, they're all under the FDA limit.  We did a 30-seconds-on, 30-seconds-off block design, and did two runs of 20 minutes, which means 10 minutes of total active sonication while they're in the scanner.

The initial duty cycle of 50 percent we got from Martin Monti at UCLA, who was doing work in the thalamus in disorders of consciousness.  Here we did explore other duty cycles.  We explored other pulse widths.  We explored multiple frequencies.  All were tolerated; I think that's the most important thing.  And we did identify a pulse set that we are going to use for the clinical trial.  But it just gives you a flavor of how many parameter variables there are, and how does one go about doing a dosing study?  Finding out which parameters -- the space is infinite.  So you can see we tried to dial up and down different parameters to at least give us a sense of what it looked like.

In the amygdala study we talked about decreased anxiety immediately after a single session; they reported it in their subjective response.  In our clinical studies we're now doing a GAD study of the amygdala, and we're doing one session per week for four weeks, for a total of four tFUS sessions.

Another trial that's funded by the International OCD Foundation, with Wayne Goodman, who's on the call, we're doing three times a week for two weeks for six sessions.  I don't think we really know yet, and I'm sure this was the case in early TMS, we don't know how frequently and how many total sessions, or how much area under the curve, as it were, might be effective for clinical disorders.  So we're starting out with this.  We're going to have to see over time if we need to do it more frequently, do it maybe less frequently but over a longer timeframe.  But we have to start somewhere, so we're doing this work.

I will say the GAD study is blinded.  The OCD study is open label.  We've done our first subject, and her OCD symptoms improved dramatically.  She said she felt more calm and her insomnia dissipated.  Could be placebo response.  We all see placebo response, but something happened.  We'll have to see as we get a larger N.

Safety.  We also want to debrief in any of these clinical studies on how they're doing.  We start with acute self-report, and just ask them in an open-ended fashion: did you experience anything?  We then later get into specific questions, which I'll get to.  Two of the 15 healthy controls in the amygdala study who were getting active treatment, and four healthy controls from the VC/VS tFUS study, reported transient tingling and vibration sensations at the transducer site.  So anywhere from 15 to 40 percent, depending on which study you look at, did experience some vibration.  None of them heard anything, and they didn't find it uncomfortable or painful.

We also did a formal assessment after the open-ended safety questions, with a tool that's been around for decades to assess adverse events from pharmacotherapies and devices.  And we did not find any significant changes in any of the safety domains associated with any of the tFUS sessions in the VC/VS study.  And then we always debrief on blinding.  We ask participants if they think they received active or sham tFUS, why, and their level of confidence.

And really, I don't know the data from the VC/VS study, I haven't looked at it, but in the amygdala study only the two who experienced transient tingling and vibration sensations identified it, and I think two more.  Other than those, it was relative chance as to whether they guessed correctly.  The tingling or vibration of course gave it away.

In summary, before you do anything, you need the initial regulatory approvals.  Thus far, we've been getting NSRs, but we'll see if that changes when we move into clinical populations, whether we need to do a formal IDE.  Then you kind of have to march through all the things you're going to do in the study, and one is the targeting approach.  The next two talks are going to go into more detail on that; like I said, I see myself as setting the stage here.

You can just do the scout in the scanner.  You can take that data into software.  You want to confirm it afterwards.  As for how you blind, I think that rather than unplugging and plugging in, these blinded active-versus-sham gel pads are probably the way to go at the moment.

It can be very difficult to determine what parameters you want to use.  In one study, we used what we thought would be inhibitory for the amygdala, and it worked.  For another, the ventral capsule/ventral striatum, we had no idea, so we set up some different dosing parameters and had them come in weekly to do each of those in a counterbalanced fashion.

It's very important in these early days for us to assess safety measures.  Some of the people on this call, like Holly Lisanby, were around and doing this work when TMS came out.  We're going to have to replicate what they've been doing, perhaps with different measures, given the different mechanism of action here -- something we should discuss.

And then I think the debriefing, even above and beyond debriefing about whether you experienced anything, whether there were any safety issues: getting a sense of what types of sensations people have experienced -- whether they've heard anything.  We haven't had anybody hear anything yet, but other investigators have reported that.

And then also having them guess, if you're trying to do an active-versus-sham study, whether they're receiving the active or the sham.

With that -- this is the 30,000-foot view.  We're going to hear more about the modeling for the target.  We're going to hear more about parameters from Dr. Nandi later.  I wanted to set the stage with our experience so far in studying about four dozen human beings with tFUS at Mass General.

Thank you very much.

Modeling/Targeting With and Without Subject-Specific Images

BRADLEY TREEBY: Amazing.  That's a wonderful segue into my talk.  Thank you very much, Darin.  I'm going to dive in, in my presentation, to just one of Darin's slides, where he said afterwards we did some modeling.  So I'm going to dive into how you might go about that and some things you might want to think about.

This is the scene.  You're doing a neuromodulation experiment.  Maybe like this image here on the left-hand side, maybe you have your subject set up, you have your transducer, maybe you have some kind of brain readout, and maybe you're doing a task.  In this case, a visual task.  Let's say you know where your transducer is, and your question is, for all this given setup, what is the acoustic exposure inside the brain?  So you want this picture on the right.

Maybe you know where your brain target is, and you want to know where your acoustic focus actually ended up.  Or if you're doing it prospectively, maybe you want to plan where to put your transducer so that the acoustic beam overlies your brain target as best it can.

So that's what I'm going to talk about today.

What is the problem?  Well, Kim spoke a lot about this actually in her excellent presentation yesterday.  The challenge that we have in transcranial ultrasound stimulation is the skull bone.  It has very different acoustic properties, very different density and very different sound speed than the rest of the soft tissues, the brain and the scalp.  And that can cause a pretty significant aberration and pretty significant attenuation. 

These are just some pictures of experimental measurements made with a tFUS transducer focused at the approximate location of the visual cortex.  Elly will talk in the next presentation about how you might go about actually doing these measurements, but here you can see, for each skull, the beam shape is slightly distorted, and maybe even more significantly, the attenuation -- this is the transmission loss in decibels -- varies a lot between different skulls.  So the skull is really a significant barrier for us.

If you're new to ultrasound and you're sort of used to looking at MRI pictures, and you're used to seeing the brain like this, just a quick point that this is not what an acoustic wave sees.  Actually, the acoustic properties of the white matter and the gray matter and CSF and the scalp are very similar, at least when you compare them to the acoustic properties of a skull.

This picture on the right is really what an acoustic wave sees.  It kind of sees a pretty constant soup in the middle here, and then a big brick wall, which is the skull.  So that's what we need to think about.

Most of my presentation today is actually not going to be about how do we model the transducer or which modeling software to use, but actually how do we get the acoustic properties of the skull?

If we dive in just a little bit -- Kim spoke briefly about this yesterday -- what is the skull's structure actually like?  You have approximately three different layers: the cortical bone on the outside, and the trabecular bone on the inside, which is filled with marrow, and in the skull that's called the diploë.  Actually, the acoustic properties of the bony bits, this cortical bone or these trabecular fingers, are very similar.  The density of this part of the bone and this part of the bone are almost the same, and the composition is almost the same, save that there are not many osteons in these trabecular fingers.

What makes the bulk properties of the inside different from the outside are these trabecular spikes which are filled with marrow.  So if you imagine taking an average of what the properties might be over some square here, you're going to have a lower density in the middle than you would on the outside.  And that's kind of the picture that you get from a clinical CT.  And I'm going to come back to some of these pictures in a few slides' time.

So let's just zoom in on the bone.  We saw from Kim's presentation yesterday that the properties of the bone like the thickness and the amount of trabecula that you have varies a lot both within a subject -- maybe different parts of the bone, the top to the back -- and between subjects.  So these are just three different CT scans, and you can see they're quite different.  Some are thinner, some are thicker, some have less dense regions and more dense regions.  And in general you can capture those variations using a CT scan.  So that's where I'm starting with my presentation today. 

Let's say you've taken a CT scan of your subject, and then our question is how can we map that CT scan to acoustic properties?  That's my first question.  We've got this, we know something about the individual variation of the morphology here, and we want to know how can we get from this picture to something that we can put inside an acoustic model?  How can we map the acoustic properties?

So stage one is trying to get from your Hounsfield units, your measure of electron density of the body, to the mass density, and you can do that with a calibration.  The general idea is you take a phantom, like this one on the left, there's different ones you can get, but this is just an example, and that has inserts of different density, and you know what the density is.  You stick that in your CT scanner and you use the properties, use the imaging parameters that you're going to use for your actual study.  Things like the voltage and so on.  The same reconstruction kernel, for example.

So then you take your image, and you can take the pixel values from your image, and because you know the actual density of this phantom, you can build this plot, so you can map from the Hounsfield units in your image to the actual mass density.

And the reason why this is important is because, depending on how you take your image and what imaging parameters you use, this curve can look slightly different.  This is a sort of mini round-robin we did across several sites in Europe and Canada, with several different scanners.  Everyone scanned this exact same phantom; we posted it around.  And you can see there's quite a different curve, especially at these higher densities.  If we zoom in on one of these points, across all of these different calibrations, you can see that for the plug that was 1,990 kilograms per meter cubed in density, the Hounsfield readout was between 1,600 and 2,500 -- almost 1,000 Hounsfield units different.  So that's why it's important to do this calibration, so that when you're getting your density out of your CT image, you can do it correctly.  It's just a calibration issue, but not too difficult.
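As a minimal sketch of that mapping step -- the calibration points below are made up for illustration; your own phantom scan, taken with your study's imaging parameters, would supply the real ones:

    import numpy as np

    # Assumed calibration points from a density phantom scanned with the study's CT protocol.
    hu_inserts  = np.array([-1000.0, 0.0, 800.0, 1600.0])   # measured Hounsfield units
    rho_inserts = np.array([1.2, 1000.0, 1600.0, 1990.0])   # known insert densities, kg/m^3

    def hu_to_density(hu):
        # Piecewise-linear interpolation along the measured calibration curve.
        return np.interp(hu, hu_inserts, rho_inserts)

    ct_hu = np.array([-10.0, 700.0, 1500.0])    # toy voxel values standing in for a CT image
    density = hu_to_density(ct_hu)              # kg/m^3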

So we've got our density.  How do we get to the sound speed?  There are quite a few experiments that have been done over the last 50 years or so showing that there's roughly a linear relationship between the density of the bone and the sound speed in the bone.  I'm just highlighting one particular study here, an excellent study done recently by Taylor Webb, where he had a very large number of different skull fragments separated into cortical bone and trabecular bone, measured them at a lot of different frequencies, and we get this approximately linear relationship between the density and the sound speed, and this is the curve that you get.  The sound speed is roughly 1.33 times the density.  So at least when we're doing modeling studies, we're using this relationship.

Just to give some confidence to this particular curve, another paper from a number of years ago derived a very similar relationship, but in a completely different way -- by asking the question: how should we tune these relationships such that we get optimal focusing when using an array through ex vivo skulls?  And you see almost the same curve here.  Of course, there are some error bars on here, and I'll come back to that later.
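In code, that step is a one-liner on top of the density map from the calibration sketch above; the 1.33 slope is the rough figure just quoted, and treating the relation as offset-free is an assumption of this sketch:

    # Continuing from the hu_to_density() sketch above; slope of 1.33 as quoted, no offset term.
    sound_speed = 1.33 * density          # m/s, with density in kg/m^3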

Okay, so we've got our density, and then we map it to our sound speed.  What about the attenuation?  This is the really big question.  When we talk about attenuation, and Kim spoke about this already, we're talking about two different phenomena.  Attenuation, we're just talking about the loss of energy as something travels through a medium.  And there's two things happening.  One, we have intrinsic absorption.  So there might be viscosity, or there might be certain vibrational relaxation mechanisms, or chemical relaxations.  And they all come under the heading of absorption, so like the intrinsic absorption of the material.

The second thing that happens, and this is critically important in bone, is the scattering.  So you have this really complicated structure inside the bone, especially in the trabecular, and as the waves propagate through that medium, lots of the waves are kind of scattered incoherently.  That energy, if you like, is lost to the beam -- it's not going to the target, it's attenuating -- but it's not intrinsic absorption.  It's like another mechanism.

In bone it's mostly the scattering that dominates, actually.  I showed you some of these pictures before; these are actually micro-CT images of skull segments, so very high-resolution images, and again, you can see the individual trabecular structure.  These are on average of the order of 0.6 millimeters in diameter, just to give you a length scale, and a wavelength in bone might be, let's say, 6 millimeters at half a megahertz.  So as the wave propagates across these, it's getting scattered.

But actually, our picture of this doesn't look like this.  A clinical CT does not give you this information.  It gives you something more like this information, down here on the right, or this information over here, and you can see we can't perfectly map out these trabecular structures, so we have to think of some way we can map from this information here, in a clinical CT, to our measure of the attenuation.

A lot of people have tried to do this in a lot of different ways, and what can we say for certain?  One thing we know is that it's frequency dependent.  Maybe this is obvious, but at higher frequencies there's less transmission, so there's more attenuation.  We know that for certain.  But the actual curves that people derive for the attenuation in the bone vary pretty wildly, especially when people have tried to map them back to the Hounsfield units.  This plot on the right here, from one of Taylor Webb's papers, shows attenuation versus Hounsfield units for several different studies, and you can see there are a lot of different curves going on.  So the question is, which is the right one?  We don't yet know.  We need some more information.

One of the things I think we might need to work towards is maybe some sort of individual acoustic measurement.  Maybe even on a per-subject basis.  It's certainly something we're looking at exploring at the moment, where you take a subject -- maybe they've got a medical image -- you do some kind of transmission measurement, and from that you can anchor what the attenuation might be inside the bone.

For the modeling measurements I'm going to show you in a second, we use this attenuation value shown in red, the red dashed line here, which comes from this paper by Gianmarco Pinton.  This is the value, if you're interested.

I'm not going to talk much about the actual modeling tools.  There are lots of them available, lots of great open source tools.  By the way, we developed k-Wave, but there are lots of other ones.  A couple of years ago, as part of the ITRUSST Consortium, we did a modeling intercomparison, which was a fantastic exercise.  We got 11 different modeling tools, all kinds of different people, and we compared them in a series of numerical benchmarks.

What we found was that when the numerical parameters were well controlled and the input parameters were well defined, we got really good agreement between the models.  I've sort of pushed this question to the side, because I think this is the least important part of trying to get your model correct; lots and lots of models can do the same thing.  So I'm showing on the screen just one of the hundreds of intercomparisons.  There were nine benchmarks and 11 models, so that's a lot of data.  I'm showing you one benchmark and two models, k-Wave and BabelVisco, which was developed by Samuel Pichardo, and you can see the agreement between these two models is excellent.  They basically predict the same thing when you give them exactly the same inputs.  So I'm pushing this question to the side, not talking about the models, but rather about the inputs to the models.

So we've got a way of mapping our CT scan to acoustic properties.  We need to validate this.  Currently, there are actually very few quantitative comparisons between simulations and experiments.  There are quite a lot of experiments looking at optimal focusing or calculating phase delays, but experiments that measure the pressure field, simulate the pressure field, and compare them directly -- there are actually very few of those studies.  So I'm going to show you some unpublished data by Alisa Krokhmal and Elly Martin in the next couple of slides, just to show the sort of thing that they've been working on.

In this particular setup, you have an ex vivo skull, you have a transducer, and then you're taking a measurement on the inside, and then you set up the same scenario inside your acoustic model, like on the right here, and then you compare those two results.  I'm showing you one measurement here at one frequency for one transducer for one skull, so they've got a lot of data, but this is just one example that shows, in some cases, you get really good agreement, and not just qualitative agreement, but quantitative agreement. 

This is the simulation result in two planes, for one particular skull at a relatively high frequency -- that's why there's quite a lot of aberration here in one of the planes -- and then this is the experiment.  You can see that all the features are reproduced, and if you take an absolute difference between the two and look at the most stringent metrics, there's a relatively small difference, especially in the focus, between the model and the measurement, which is great.

And if you take maybe some line profiles through that, just maybe even more illustrative, and then you pull out some metrics, the differences in the focal pressure and the position of the peak and the focal volume are really small, especially if you consider how big the focus is in this case.

This is a case I picked when everything worked perfectly, and I'm showing you on the left here, what the CT scan of that particular skull looked like.  So this is the skull that was used both for the measurement and the simulation. 

On this slide, I'm showing you another simulation and experiment, slightly different frequency, and a different skull, where the measurements don't agree quite as well with the model.  And the biggest difference is not in the beam shapes, you can see actually the profiles here agree pretty well, but it's actually in the amplitude, and Kim pointed this out vividly yesterday.  The difference in this skull is it has a lot of large trabeculae.  So this skull has much more attenuation, so the simulation has underestimated the attenuation in this particular skull.

So you see, the metrics, the position of the peak and the size of the focal volume are still great, still really useful compared to the size of this focus, but we have mispredicted the pressure in this case.  And here I've hand-picked the worst example to make a point.  Out of all the things that we don't know very well, it's the attenuation in the skull that we modelers really have trouble predicting from the medical images.

Quick summary.  We can get skull properties from CT.  We can map those to acoustic properties.  We take a calibration to get to density.  We map density to sound speed.  And our biggest error bar is in the attenuation.

So the question I can hear all the neuroscientists in the audience asking me is what if we don't have CT?  So that's what I'm going to try and address in the next part of my presentation.

The first thing I'll say is, if it's the dose you're worried about, then maybe you could consider low-dose CT.  That's something we've done for some of our studies.  You get really rubbish soft tissue contrast in the brain, but most often you don't care about that -- the bone contrast looks great.  So here's a skull segment of ours that we scanned at full dose and low dose, and the contrast you get in the bone is almost the same.  If you're worried about dose and you still have access to a CT scanner, then I think this is a viable way to go.  The dose is reduced by maybe a factor of 10 compared to a standard head CT, and we didn't have too much trouble getting this through our board for healthy volunteers.  So that's something to consider.

All right.  What if we really don't have CT?  What can we do?  Well, what can we learn from an MR?  Can we get our acoustic properties?  So I'm going to briefly just talk about three different methods that people have used, and that we've used in the past, and sort of compare and contrast them.

Segmenting a medical image; using some kind of learned model; and then maybe doing some kind of skull-specific imaging.

Starting with segmentation, let's say you've taken a run-of-the-mill T1 image, maybe a T2 as well.  You want to segment the skull out of those images, and then apply book values for the sound speed, the density, and the attenuation to that segmentation.  So you start with an image, you segment it first, and then you apply some acoustic properties to get to your pseudo-CT.  What I'm showing you here are some segmentations that were created with SimNIBS, the Charm segmentation tool, some with just T1 and some with T1 and T2, which gives you better segmentations.
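A minimal sketch of the book-value idea -- the mask below is a toy stand-in for a real segmentation, and the property values are illustrative assumptions, not recommended numbers:

    import numpy as np

    # Toy stand-in for a skull segmentation (e.g., from a SimNIBS-style tool); True = bone voxel.
    skull = np.zeros((64, 64, 64), dtype=bool)
    skull[20:26, :, :] = True

    # Apply single "book" values per tissue class; the numbers here are placeholders.
    density     = np.where(skull, 1850.0, 1000.0)   # kg/m^3
    sound_speed = np.where(skull, 2800.0, 1500.0)   # m/s
    attenuation = np.where(skull, 8.0, 0.3)         # dB/cm at the driving frequency (assumed)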

This is some work by Xuan Lui, where she ran a whole lot of simulations at different frequencies for different sites, comparing ground truth CT with segmentations based on MRI, either T1-only or T1-and-T2 segmentations.  What she showed was that the models based on the pseudo-CTs straight from the segmentations had pretty good predictive value.  What I'm showing you here on the left and right are two different ultrasound frequencies for one target: this is 250 kilohertz, this is 500 kilohertz.  The x-axis is the subject number, so these lines aren't really meaningful.  But each column here is one particular subject, and each line is a different way of getting the pseudo-CT, and you can see that for the subject that had the most transmission, or the least transmission, the pseudo-CT predicted that trend reasonably well.  But there are some pretty big error bars, so you need to keep that in mind.

This method of creating pseudo-CTs definitely has some predictive value, but the error bars can be pretty big.  So you can get the general trends, but you need to be careful about putting too much emphasis on what the pressure values are that you get out.

Okay, what about deep learning?  Everybody's talking about it, doing it.  Maybe they've all moved on.  We worked on this a bit a couple of years ago, so the idea is you take some paired data, if you're doing supervised learning, you can also do this in other ways.  We did it with paired data.  You take a whole lot of CT scans, a whole lot of MRI scans, and you see if you can learn using a deep learning network to map from one to the other.  So in our case we tried to learn from T1 images, to learn a pseudo-CT.  I won't talk about the details at all, you can read Maria's paper if you're interested. 

I just want to show you some results.  Here's just one subject where we have a T1.  This is the ground truth CT.  This is the pseudo-CT down here in this column, and you can see it looks pretty CT-like.  It looks kind of like a CT.

But if you take an absolute difference between these two, you can see there are quite a few errors at the boundaries.  So it's quite hard, just from a T1 image, where you don't have much skull contrast -- you're just looking at a void, and maybe there's batch shift, there's all kinds of problems.  It's quite hard to learn a robust CT from this type of image, especially if you can't see anything inside.  You just have to kind of make up what the interior of the skull looks like.

Having said that, it works reasonably well.  It works better than classical segmentation, at least.  But there's still sometimes some reasonably big errors in our pressure, in our prediction of the pressure, compared to ground truth CT.  Keep in mind here, when I'm talking about these errors, I'm using the same attenuation for both the pseudo-CT and the ground truth CT.  So we're not talking about errors in acoustic properties here; we're talking about errors in the actual CT image itself.

One other word of warning I should say about deep learning: models really may not generalize well.  You might have heard this.  What does that mean?  It means if you put an MRI image into a deep learning network that doesn't match the images the network was trained on, quite often you just get nonsense.  Here's one example from a learned network that maps from MRI to CT: if you put in an image that's been acquired in a different orientation, you get this pseudo-CT, and if you put it in in the same orientation that the training images were acquired in, you get a nice pseudo-CT.  So just a word of warning that if you are going to use any of these tools, you need to match your T1 acquisition parameters, or whatever type of image it is, with the imaging parameters used to train the network.  That's critically important.

Okay, method three, and this is definitely my favorite, and this is where we're moving towards in all of our studies, is taking some skull-specific images.  On the left here is a Siemens PETRA image taken with a low flip angle, and here is a CT in the same subject, and you can see the structural similarity between these two images is awesome.  You can see the same kind of features in the skull, you can map the trabecular parts that are denser and not denser, and so on.  With certain types of MRI sequences, you can get something that's very CT-like.

If you take the image on the left and you normalize it -- you take a histogram, you find the soft-tissue peak, and you normalize to that soft-tissue peak -- and then you map the skull values between the CT and the PETRA, you get this.  There's a really strong linear correlation between the MRI values and the CT values, and this is what you would use to map from that type of image to the pseudo-CT.  If you're interested, the code is here.  This is work by Maria.
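The shape of that procedure, as a sketch: the slope and intercept below are placeholders rather than the published calibration, and in practice the linear map is applied within a skull mask, which is omitted here.

    import numpy as np

    def petra_to_pseudo_ct(petra, slope=-2000.0, intercept=2000.0):
        # Normalize by the soft-tissue histogram peak, then apply an assumed linear map to HU.
        hist, edges = np.histogram(petra[petra > 0], bins=256)
        peak_bin = np.argmax(hist)
        soft_tissue_peak = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
        normalized = petra / soft_tissue_peak       # soft tissue lands near 1.0
        return slope * normalized + intercept       # darker skull voxels map to higher HU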

This is just one example of a CT and a pseudo-CT created from one of these PETRA images, and of all the approaches to pseudo-CT I've shown you so far, this gives you the smallest error.  So now we're down below 5 percent median error in focal pressure, and less than a millimeter -- less than half a millimeter -- in errors in the focal position.

What if you've got nothing?  What if you haven't got a medical image of your subject, or you want to go blind, or maybe you're writing an ethics proposal and you want to put in some data: what kind of thing might happen on average, or where would be a good place to put my transducer to target X or target Y?

For that, you can use templates, and there are a few different ones available.  I'm showing some images from a recent paper from DTU.  This is a skull template that was created from an average of 29 different skulls, in the kind of young age range -- I can't remember exactly, but maybe 20 to 50, the range we might be using in neuroscientific studies.

And here on the right is the normalized peak pressure inside the brain for different transducer locations; the box plots are showing the results for the individual 29 skulls, and the red dots are showing what the template gives.  So on average you can predict the kind of behavior across the different sites.  It's a good tool for capturing the average behavior.  But of course, a huge caveat here is that the difference between this red dot and any particular subject could be huge.  So you cannot capture individual behavior using a template, but on average it's useful.

But if you really have nothing -- Jean-Francois talked about this yesterday, so I'll briefly mention it again -- and you want some kind of worst-case transmission, maybe to compute a safety metric, a mechanical index, for example, then you can use this lookup table, which came from the ITRUSST safety group.  You just look up your ultrasound frequency and the size of your beam, and you get a worst-case transmission.  It doesn't have any predictive value in terms of what the actual target amplitude might be for an individual subject, but at least it gives you a worst-case transmission that you could use to say, okay, the worst mechanical index I might expect could be X, and show that it's within regulatory limits.
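For example -- and the transmission value here is a placeholder, not an actual table entry for any particular frequency or beam size:

    import math

    f_mhz = 0.5                        # driving frequency, MHz
    p_neg_water_mpa = 1.0              # assumed free-field peak rarefactional pressure, MPa
    worst_case_transmission = 0.8      # placeholder; look up the real value for your case
    p_in_situ = p_neg_water_mpa * worst_case_transmission
    mi = p_in_situ / math.sqrt(f_mhz)  # mechanical index: p_neg [MPa] / sqrt(f [MHz])
    print(f"worst-case MI ~= {mi:.2f}")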

A bit of a summary.  MR imaging can be useful for planning.  You need to be careful about pretrained networks.  Just a word: if you use pseudo-CT, you still need a CT calibration; you still need to know how to get from your pseudo-CT to density.  And if you've got nothing, then you can predict average or worst-case behavior, but not individual behavior.

Lots I didn't talk about.  Lots of things about the specifics of how you might set up a model, a transducer, or process your results.  I'm not going to mention any of those.  Elly will dive into some.  We might talk about some of them in the panel.

What's next for us?  One thing is modeling with uncertainty estimation: you put error bars on your inputs to the model so you get error bars on your outputs, which I think is going to be really important.  And maybe in the end, tighter integration between all of these different tools.

That's it.  I will pass over to Elly.

Standardized Metrology and Calibration

ELLY MARTIN: Thanks, Brad. 

I am going to carry on from some of those things that Brad was just talking about, with more of a focus on the measurement aspects of this kind of pipeline of trying to simulate the pressure in the brain.

Brad showed a very similar picture to this at the beginning of his talk.  What's our goal?  Just to put what I'm going to talk about -- the need for measurement and calibration of our sources and systems -- into some context.  The aim is that we can do these kinds of neuromodulation studies in a reproducible and replicable way, so that we can really test parameters, because we know what kind of dose we have in the brain -- dose or exposure or whatever quantity that is.  So whatever kind of transducer you have, whether you've got some kind of single-element device or something a bit more complicated with many elements, we need to understand how it works, where our patient is, and what's coming out of the transducer.

There are uncertainties on all of these things, and many challenges in going through this whole pipeline.  To begin with, we need to find a suitable source and a driving system for targeting and running our transducer. 

We need to know how our transducer behaves and what the acoustic field is that it produces.  We also need accurate and validated numerical models so that we can do the kind of thing that Brad was just talking about, using some information to try to predict what the pressure is in situ.

And then there are a whole lot more requirements and challenges: converting the medical images to medium property maps, which we've just heard about, and how you might register your transducer to the participant -- we've seen people mentioning the neuronavigation systems.

Coupling of the transducer and the participant is also important, along with a lot of other factors.  There's always challenges and there's always uncertainty associated with any of these things.  So what I'm going to talk about here is about these kind of first three things, which are related to the source and the driving system and the characterization of the field, and just touch a little bit on modeling.  If we can try to pin these things down carefully, then hopefully we can have a bit more certainty in our final answer.

The first thing I want to talk about then is let's just have a think about the sources that we're using.  There are different systems that people are using from single-element transducers with fixed foci to transducers that have a few elements that allow some degree of steering to fully populated, hemispherical arrays, which allow a lot more flexibility.

The size and the shape and the frequency that the source can generate really affects the size, the shape, and the position and the amplitude of the focal region that you can generate, which you might need then to match with the physical region that you're targeting.

If we just have a think about that for a minute.  Let's start by looking at something like a transducer which has the same diameter as the focal distance, at kind of an intermediate frequency that people are using for neuromodulation, so 500 kilohertz or something.  Then we get a focal region around the geometric focus of this transducer, and it might be something like 4 millimeters wide and 30 millimeters long.

If we then halve the frequency, then we get a much bigger focal region.  If we were to double the frequency, then that focal region would shrink again.  So obviously there are kind of restraints on what frequencies we might want to use because we need the ultrasound to travel through the skull, maybe we want to minimize aberrations, but it's also something to think about with the size of the focal region that you generate.

The other thing that can affect the size of the focal region is basically the size of the source.  If we keep to the same frequency now but we just increase the size of the source, as we get kind of bigger coverage, a large diameter, then our focal region shrinks, going to the extreme of having a hemisphere where we get quite a tiny focus now, which is only sort of a few millimeters across and 6 or 7 millimeters long.

There's also the positioning of the focal region relative to the transducer.  If you don't have any steering, or you want to pick your range of steering positions, then you might think, okay, what do I do?  Do I flatten the transducer, increasing the radius of curvature, so my focus will move further away and I can target deeper in the brain?  But what happens when you do this is that the focus becomes bigger, becomes longer.  So we flatten the transducer here a bit, and now the center of this focal region has moved further away, but you can also see it's much longer.  And this is the other extreme: now we've got a flat transducer, but we've got a focal region that's something like 100 millimeters long, which is getting quite large compared to the size of the brain.

The other thing that changes here is what we call the focal gain.  For a given source pressure, a given driving level of your transducer, at higher frequencies the relative pressure at the focus will increase.  So if we look at a low frequency, say something like 250 kilohertz with an F1 transducer, we've got a focal gain of about 9, so the pressure here at the spatial peak is nine times what it is at the source.  If we double the frequency, that goes up.  If we double it again, then we can see we have a very high peak, so the energy is now concentrated in a much smaller region.  These are ideal transducers, so this is not exactly true in practice, but you can see that the amplitudes in the nearfield stay relatively similar while the focal pressure amplitudes get higher.
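Those gain numbers fall out of the ideal spherically focused bowl result (O'Neil's solution), where the linear focal gain is roughly 2*pi*h/lambda for a bowl of depth h.  Here is that arithmetic as a sketch, for an assumed 64-millimeter, F-number-1 transducer chosen to illustrate the gain-of-about-9 figure just quoted:

    import math

    c = 1500.0                            # sound speed in water, m/s
    diameter = 0.064                      # aperture diameter, m (assumed)
    focus = diameter                      # F-number 1: focal distance equals diameter
    a = diameter / 2.0                    # aperture radius, m
    h = focus - math.sqrt(focus**2 - a**2)    # depth of the spherical bowl, m
    for f in (250e3, 500e3, 1000e3):
        gain = 2.0 * math.pi * h / (c / f)    # linear focal gain ~ 2*pi*h/lambda
        print(f"{f/1e3:.0f} kHz: focal gain ~= {gain:.0f}")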

Once we've picked our source and we've decided what shape and size we want and what frequency, then we need to know something about the field that it generates.  So what information do we need to quantify the field so that we could write this down and tell everybody else what our transducer was doing?

I think the minimum set of parameters that we've come up with -- Kim mentioned the standardized reporting paper we've been working on -- are these: What is the focal pressure for a given drive setting, the number that you dial into your drive system?  Where is the spatial peak, compared to your transducer surface, or maybe the exit plane, the outer housing of your transducer?  And how big is the focus?

We've also heard a few people talking about whether or not these relative measures of focal size are useful, given that we don't really know what the threshold of effect is.  So we are interested in what the pressure is everywhere; we're not just confining this to a focal region, in case there are other regions where the pressure or intensity would be above some threshold for effect.

I'm not going to say too much about how you actually do these measurements; there is some information out there on that.  One thing I will say is that they can be quite complicated, and it can be quite challenging to do them very accurately and to keep your uncertainties down so that you can be very sure of the pressures that you're measuring.

So lots of factors involved in this.  A typical measurement setup might look something like this, where you've got a source and a hydrophone that you move around, and this is all in a big water tank.  You have some kind of system which drives your transducer, so it supplies the signal to it.  And then you have an oscilloscope which grabs the voltage waveforms from the hydrophone, and then some kind of computer or something to store them.

So there are lots of aspects involved in this that can introduce uncertainties and potential errors.  If we start by thinking about the source and how you drive it, it's very important to do these measurements realizing that real transducers are real things, and they don't behave like the ideal equivalent mathematical source.

The hydrophone that you choose is very important: the size, the calibration that you have, and any resulting spatial averaging and directional response effects.  Also what acquisition window you choose to acquire the signal in, and the things that you do to your signal afterwards, so your filtering, deconvolution of the hydrophone frequency response, and all of those things.  And then all the practical aspects of your setup: how you position everything, how stable the temperature is in the tank, how clean the water is, and so on.

One possibility is that you ask someone else to do your measurements for you.  If you want to have some more information about the field of your transducer, there are people that provide these services that you can ship your transducer to and they will provide you with measurement data in return.

Just a quick word about hydrophones, because lots of people are kind of looking at doing their own measurements and buying a small lab setup to do this.  What are the important concerns or considerations when choosing a hydrophone?

There are a couple of different types of hydrophones, just to quickly go over that.  On the left, I've got a picture showing some membrane and needle hydrophones.  Membranes are the kind of flat ones, they're just like a piezoelectric film, and the gold is the electrodes and coatings that you can see there.  And these are very good, they're very stable, but they're very expensive and delicate, and there's kind of a big membrane in the way, which can cause standing waves if you're not careful, especially when you're measuring focus fields.

So often we tend to use needle hydrophones or probe hydrophones, which are cheaper in general, easier to handle, but have maybe a slightly less uniform frequency response. 

But I'd say that the most important consideration is the size of the hydrophone that you use.  In general, we want the hydrophone element to be less than a quarter of a wavelength at the frequency of interest.  So if we're making measurements at 500 kilohertz, we can choose something with an element diameter of less than 0.75 millimeters.  But what happens at very low frequencies is that the hydrophone actually behaves as if it's a larger object than its nominal element size.
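As a quick sanity check of the quarter-wavelength rule just mentioned (a rule of thumb, assuming measurements in water):

```python
def max_element_diameter(freq, c=1500.0):
    """Quarter-wavelength rule of thumb for the hydrophone element diameter."""
    return c / (4 * freq)

print(max_element_diameter(500e3) * 1e3)  # 0.75 mm at 500 kHz, as above
```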

These are some measurements from this paper showing that at kind of high frequencies, above 8 megahertz or so, then the effective size of the hydrophone matches pretty well to its nominal size.  But when you decrease the frequency, the effective element size shoots up.  This means that the hydrophone is effectively kind of behaving as if it's a large object and sort of as if it's seeing a larger area and averaging over that area. 

So this goes down to 1 megahertz, and we can see some of these hydrophones are getting pretty large.  So we did some further measurements at frequencies from 200 kilohertz upwards, and this effect kind of continues.  So for some of these hydrophones where the element size is given as half a millimeter, they actually look like something that is more like 3 millimeters, which is now getting on for being the size of the wavelength, so it's going to be a bit too big.

So for very tightly focused fields, these have significant spatial averaging effects, and basically you lose sensitivity for waves coming in at high angles.
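To see why an effectively large element underreads a tight focus, here is a minimal numerical sketch, assuming an illustrative Gaussian lateral profile with a 2 millimeter full width at half maximum; the numbers are made up for illustration.

```python
import numpy as np

fwhm_mm = 2.0                          # illustrative focal width
sigma = fwhm_mm / 2.355
x = np.linspace(-3, 3, 2001)           # lateral position in mm
profile = np.exp(-x**2 / (2 * sigma**2))

def element_average(diameter_mm):
    """Average the profile over a centred element (a 1D stand-in for the
    2D average over a circular element, just to show the trend)."""
    mask = np.abs(x) <= diameter_mm / 2
    return profile[mask].mean()

for d in (0.5, 1.0, 3.0):              # nominal vs effective element sizes
    print(f"{d} mm element reads {element_average(d):.0%} of the true peak")
# A 0.5 mm element reads ~99% of the peak; an effective 3 mm element only ~65%.
```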

Now we've thought about how to measure the field and how to know what's coming out of the transducer.  We might want to know something about the transducer so that we can then start to think about how we predict the pressure in situ.  In order to set up the model, we need to know something, say, about the skull, but we also need to know how the transducer behaves.  So how do we describe the transducer in the model so that it reproduces the field that we measured?

One important takeaway is that the measured field can be different from what you would predict if you just took the nominal parameters of the source and assumed it vibrated uniformly as a source of that shape and size.  I've got an example here of a transducer that's around 60 millimeters in diameter and 60 millimeters in focal length.  The black dotted line is the measurement made with the hydrophone, and the blue dot-dashed line here is what you would get if you just used something like the O'Neil solution to generate the on-axis pressure.

And you can see that they don't match up.  Here the spatial peak pressure is in a bit of a different place to where we measured it, and the size of the focus as well is also a bit different.

So why does this happen?  Basically, because the transducer does not vibrate uniformly, because it's a real object: it has some housing, it's clamped at the edge, so as it vibrates maybe that part can't move as much as the middle, and you get these long waves propagating across the surface which leak energy into the water, and that causes these differences in the field.  We tend to find that most sources behave as if they're slightly bigger than the nominal radiating area, though they can be slightly smaller, because there are edge effects going on as well which change the shape.

So that's something to be aware of.  How do we get around this?  If we want something that matches our real source a bit better, what can we do, and what information do we need to base this source model on?  The simplest approach is to make some axial and lateral measurements that pass through the focus of the field, and then to optimize the dimensions of the source, just letting them vary a little bit until the model reproduces a field that better matches the measurement that you made.

In this case, I've now added this red line, and you can see that although it doesn't match perfectly for this particular transducer, the position of the spatial peak is now very well matched, and it matches over most of this main lobe and down the back.

For other transducers -- this is a higher frequency transducer, 1.1 megahertz -- the match is extremely good.  So the black line is the measurement, the blue dotted line is our optimized simple uniform source, and it's a very good match over several focal lengths.
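As a sketch of this kind of fit, assuming O'Neil's closed-form on-axis solution for an ideal bowl as the forward model (a real pipeline would use the full measurement geometry and may fit lateral profiles too):

```python
import numpy as np
from scipy.optimize import least_squares

def oneil_axial(z, a, roc, freq, c=1500.0, p0=1.0):
    """On-axis pressure of an ideal spherical bowl (O'Neil's solution).
    z: axial positions (m); a: aperture radius (m); roc: radius of curvature."""
    k = 2 * np.pi * freq / c
    h = roc - np.sqrt(roc**2 - a**2)          # depth of the bowl
    d_edge = np.sqrt((z - h)**2 + a**2)       # distance from z to the bowl rim
    with np.errstate(divide="ignore", invalid="ignore"):
        p = 2 * p0 * np.abs(np.sin(0.5 * k * (d_edge - z))) / np.abs(1 - z / roc)
    return np.where(np.isfinite(p), p, p0 * k * h)  # limiting value at z = roc

def fit_effective_source(z_meas, p_meas, a0, roc0, freq):
    """Let the source dimensions vary until the modeled axial profile
    best matches the (normalized) measured one."""
    def residual(params):
        a, roc = params
        model = oneil_axial(z_meas, a, roc, freq)
        return model / model.max() - p_meas / p_meas.max()
    return least_squares(residual, x0=[a0, roc0]).x  # effective a and roc
```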

If we want to do something a little bit more detailed than that, if we want to really try to recreate exactly the field, then what we might do is obtain a hologram.  It's basically a planar measurement of the pressure which includes all the information that we need in order to reconstruct the whole field. 

So we make some kind of 2D measurement reasonably close to the transducer, and then we can unwind that pressure distribution back onto the source, and then get this picture where we can see all of these nonuniform vibrations happening.  If we then take this and we put it into our model and we project it forward, back into the water, then now what we get is something that matches very closely to the measurement.  So you can see we've got this purple line here, which now matches the black measured line very closely.

So this is possible, and I guess one other takeaway is that it's really important to do this, but to do it in water first, using a very simple propagation method, before we try anything more complicated.
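A minimal sketch of such a simple propagation method for planar data is the angular spectrum approach below, written for a single frequency under an exp(-iwt) convention; a real implementation would also handle windowing and grid padding.

```python
import numpy as np

def angular_spectrum(p_plane, dx, freq, dz, c=1500.0):
    """Project a measured 2D complex pressure plane by a distance dz
    (dz < 0 back-projects towards the source, dz > 0 projects forward)."""
    k = 2 * np.pi * freq / c
    ny, nx = p_plane.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k_lat2 = kx[None, :]**2 + ky[:, None]**2
    kz = np.sqrt((k**2 - k_lat2).astype(complex))   # axial wavenumber
    # Keep only propagating components (evanescent waves blow up when
    # back-projecting, so they are simply discarded here)
    transfer = np.where(kz.real > 0, np.exp(1j * kz * dz), 0)
    return np.fft.ifft2(np.fft.fft2(p_plane) * transfer)
```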

So what other cases might we have?  What about the added complication of having several elements?  The examples that I just showed you are for single element transducers, which have quite simple behavior.  But this can be more complicated if you've got multiple elements.  The effective amplitude and phase that you need to drive these elements with might be different from the ones that you dial into your equipment, because there may be some electrical or mechanical crosstalk going on, where driving one element does something to one of the others; they're just not ideal objects.

So in this case, you can see that if we look at the elements of this four-element transducer individually, they're mostly vibrating where we expect them to, but there's definitely something going on in other parts of the transducer as well, and if you just add all of these together, you don't quite get what you get when you drive them all in phase and then look at the pressure.

So in general, you might need to do a bit of optimization for this as well.

So then the next step in this pipeline, now that we have some kind of accurate description of what our source is doing when it radiates into water, we can now say, okay, we know how the transducer is behaving, so we can put this into our model.

And then I'm not going to go too far into this, because we have already heard a lot from Brad about the medium properties and about the intercomparison of models, but just to say that we always take a step by step approach here as well.  In the beginning, after we've done our water validation, we go for a more complicated setup where we have something that looks like a skull but is a 3D-printed phantom.  So we know exactly what its shape is, and we know exactly what the properties are, because we have been able to measure them, and it is homogeneous, so we don't have the problem of scattering affecting the transmission loss.

So when we do that, and we very carefully register everything together, and we measure the properties very carefully, and then we propagate the sound through the skull phantom in real life and in simulation and compare the measurement to the model, we get extremely good agreement.  You can see here that the time traces agree very closely.  The pressure amplitude agreed to within 1 percent, and the focal position to within 1 millimeter.

So this tells us that our model -- in this case it was k-Wave -- can model the absorption properly and is capturing all of the aberrations and everything like that.  So we're happy that if we know what the properties are, then we can accurately model the pressure.

So then the next step is putting in a medium where we don't know what the properties are.  As Brad said, we don't really know what the properties of the skull are, and with the values that we use for attenuation, we're not really capturing the differences in scattering between skulls with different internal structures.

I won't say too much about this, because Brad has already introduced it, but the next step is that we now take real ex vivo skulls and make a measurement inside the skull, and we propagate this through, and we're doing this at different frequencies, with different transducers and different skulls, and looking at where we get agreement and where we don't.

So this is really the combination of knowing something about our transducer, about our field, having a validated numerical model, and then bringing in the registration and the conversion of medical images to medium property maps.  So as Brad said, the agreement really varied; in some cases very good, in some cases not so good.  So there's still more information that we need.

So just very quickly, as I think I'm running out of time, so if we go through that whole pipeline, there's still always some uncertainty on the pressures that we predict at the end.  This is unavoidable.  There are uncertainties on the hydrophone calibration which are quite large to begin with, and that's just the first step in water.  Never mind kind of understanding how we convert from medical images to medium properties.

So if we've done our best shot and we've got some estimate of in situ pressure, is there any way that we could go another step and make a measurement actually in situ inside the brain and verify this in reality while we're doing the neuromodulation?

Kim introduced this yesterday: the idea of using MR ARFI to look at where the focal spot position is, and then maybe in the future to get some estimate of what the pressure or intensity is at the focus, so that you know that you did what you thought you were doing.  I would say at the moment, in modeling, we can predict the position of the focus very well, but the amplitude is more of a question.

So here there are a couple of papers already out, and more kind of being presented at conferences and on their way where we can see that we can image the position of the focal spot, check it is in the right place, then maybe we can tweak based on that if we need to re-steer it a little bit.  And then we can start to link the displacement that we measured to the effect and then maybe to the amplitude.

But if we can't do that, then maybe we need something a bit simpler and quicker, just to check basically whether our device is operating as we expect it to.  So if you do your measurements and you characterize your transducer, you kind of want to be sure that when you use it for your studies that it's still doing the same thing that it was when you had it in the lab.

So a simple way to do this is long-term monitoring of the system behavior, by looking at some of the electrical parameters, like the impedance and the electrical power at particular settings, and making sure that they don't change, and making sure that's what it's doing on the day of your study.
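A minimal sketch of such a day-of-study check might just compare the electrical parameters against their lab baselines; the parameter names and the 5 percent tolerance here are arbitrary placeholders.

```python
def check_for_drift(measured, baseline, tol=0.05):
    """Return the parameters that have drifted more than tol (fractional)
    from baseline; an empty result means the system looks as it did
    when it was characterized in the lab."""
    return {
        name: abs(measured[name] - ref) / abs(ref)
        for name, ref in baseline.items()
        if abs(measured[name] - ref) / abs(ref) > tol
    }

baseline = {"impedance_ohm": 50.0, "forward_power_w": 4.0}
today = {"impedance_ohm": 55.5, "forward_power_w": 4.05}
print(check_for_drift(today, baseline))  # flags the 11% impedance change
```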

You might have access to monitoring of some pulse echo signals actually during the stimulation as well so that you can see if anything changes there.  This is a really good idea to do to check that it was plugged in, it was turned on, it's not broken, when you're doing your studies.

Okay, and that's all I want to say.  So, yeah, there's lots of things that we need to do in order to be able to achieve kind of replicable, repeatable studies and try to then move towards examining the parameter space and looking at thresholds for effect.

I will leave it there.  Thank you.

ELIZABETH ANKUDOWICH: Thank you so much, Elly, for such an informative talk.  We have a couple of minutes for questions, if the speakers want to come back.  The first one is a point of clarification for Dr. Dougherty: whether he required an IDE for the studies that he's already undertaken.

DARIN DOUGHERTY: Oh, yeah, good question.  Not so far.  We did check in with the FDA, and they gave us a letter of nonsignificant risk.  But that's been in healthy volunteers thus far.  So no IDEs yet.

ELIZABETH ANKUDOWICH: Great.  There were a couple of questions related to sham and also about the blinding, whether or not participants could perceive a difference with the sham.  There was the assumption that they wouldn't hear the transducer if it wasn't plugged in.

DARIN DOUGHERTY: The only people -- good question again.  The only people who experienced the tingling sensation and the vibration were patients -- subjects, I'm sorry, they're healthy volunteers -- subjects who were actually receiving active in a blinded fashion.  None of the sham experienced it without the sonication.

ELIZABETH ANKUDOWICH: Okay.  So another follow-up question: did you do separate blinding assessments, assessment studies, or did you do it as part of these studies, and if so, when did you administer the assessment?

DARIN DOUGHERTY: Another good question.  We did not do separate blinding studies.  We tagged those on to these studies where we were already doing a blinded study for another reason, and we assessed right after we were done with sonication, at each time point.

ELIZABETH ANKUDOWICH: Thank you.  Here's another one for Elly.  Is the MR ARFI sequence readily available across 3T systems?

ELLY MARTIN: I have no idea.  You'd really have to ask --

ELIZABETH ANKUDOWICH: We'll ask Siemens about that.  Okay, I have a related question.  Do you see there being real benefit to doing interleaved or combined ultrasound and MR imaging, or is this something where you could really assess, or estimate or model, what's happening in the brain without having the transducer in the scanner, basically?

ELLY MARTIN: I guess that would be the ideal, wouldn't it, if none of this has to take place in a scanner, if you could I guess -- whether you want to use that as a readout mechanism is one question.  But yeah, if we could be sure that we could model accurately, then that would be the ideal thing.  I think from what we've done so far that we can see that the targeting is pretty good.  So where we estimate the position of the focus is normally very good.  It's just the amplitude is a bit more difficult.  I don't know if Brad has anything to add to that.

DARIN DOUGHERTY: I wasn't going to answer that.  I think you hit it on the head, Elly.  Dr. Butts Pauly responded just to us, but I wanted to share it with the group, that the ARFI sequence is not readily available yet.  So thank you, Kim.  We're looking at it and playing with it, but it's still kind of in beta, in my perception.

ELIZABETH ANKUDOWICH: And then there's just a really brief question about MR thermometry and can we learn some information about that that could be similar to what we might learn from the MR ARFI.  Anyone have any ideas?

BRADLEY TREEBY: I can take that if you like.  So we've done MR thermometry for some of our studies, and I think maybe Kim or one of the other speakers pointed this out yesterday; most of the time, the temperature change is so small, maybe 0.1 of a degree if that.  So it's below the SNR.  You basically don't see anything on MR thermometry, unless you're trying to make a hot spot.

So for most of the parameters that people are using now, you don't get an appreciable hot spot inside the brain.  So you can't normally use thermometry to read it out.

ELIZABETH ANKUDOWICH: Thanks, Brad.  I think we're all set on our questions, so it might be a good time to turn it over to you for the panel discussion.  Thank you to all of our speakers.  Really great talks today.

Panel Discussion

BRADLEY TREEBY: Amazing.  Thank you for this wonderful panel and for joining me.  We're going to sort of walk ourselves through a thread, a thread of inquiry, which I'm going to toss questions to my esteemed panel members, to try to get at this idea of how good are the tools that we have, how accurate do we need to be and so on, and see if we can piece it together from two ends.

I'm going to start with Fidel.  In Elly's talk, Fidel, we heard a little bit about the tooling that we have available.  So how we can change our ultrasound transducer and what kind of sizes the focal spot might be, so maybe a couple of millimeters across or five millimeters across, and maybe 30 millimeters long in some cases.  But what I'm interested in knowing is how that pairs up to what's inside the brain.

So what is it that we're trying to target?  How close are neighboring structures that could have confounding effects?  So maybe you could put the tooling in context of what it is we're trying to target.

FIDEL VILA-RODRIGUEZ: Absolutely, and it's an excellent question.  I will first take a little bit of a background perspective, and I'm going to bound my response to subcortical targets in the brain.  So I'm going to leave the cortex out of the picture, as focused ultrasound allows us to get deep inside.

So when thinking about this question, we need to keep in mind that different subcortical structures have different dimensions.  They also have different shapes, and the location of the structures also matter in terms of thinking of where the beam targets.

Also, I'd like to give a little bit of a rule of thumb in terms of thinking about volumes, using a hopefully intuitive metric: 1,000 cubic millimeters, one milliliter, is roughly the volume of a regular sugar cube.  So if we picture a sugar cube as 1,000 cubic millimeters, the volume of the thalamus, for example, is about 7,500 cubic millimeters, or seven sugar cubes.  Another subcortical structure, the hippocampus, is about four sugar cubes.

Now, how big is the volume of the sonic field?  I checked with Elly and the other experts to make sure my numbers were correct, and if we take that transducer of 62 millimeters at 500 kilohertz, the volume of that sonic field is roughly 250 cubic millimeters, or a quarter of a sugar cube.

Of course, when we put those things together, we need to remember that the shape of the sonic field is an ellipsoid, not a cube.  But roughly, for a target like the hippocampus, a recent paper by Huang and colleagues estimated that the beam overlap between the sonic field and the hippocampus was around 85 percent.  Beam overlap means: of the entire modeled sonic field, how much of it got into the hippocampus.  So that measure is 85 percent.

Then they estimated another very important parameter, the target overlap: of the entire volume of your supposed target, in this case the entire volume of the hippocampus, what percentage does the beam represent?  For that particular modeling, it was three percent.
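As a sketch of these two overlap measures, assuming you have the beam and the structure as boolean voxel masks on the same grid (for example, voxels above a chosen pressure contour, and a segmented hippocampus):

```python
import numpy as np

def overlap_metrics(beam_mask, target_mask):
    """beam overlap: fraction of the sonic field inside the target;
    target overlap: fraction of the target covered by the sonic field."""
    intersection = np.logical_and(beam_mask, target_mask).sum()
    return (intersection / beam_mask.sum(),    # e.g. ~0.85 in the Huang paper
            intersection / target_mask.sum())  # e.g. ~0.03
```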

So these figures give us a sense of how good a focal tool focused ultrasound is overall.  The field will then have to wrestle with the fact that some geometries are going to be more complicated to target than others.  It's not the same, say, aiming at a subnucleus of the thalamus as targeting the hippocampus, which has a far more complex geometry.  So these are things we're going to have to wrestle with, but it seems that we have very, very good tools to model this, and also to check after the fact how much we were on spot or off target.

BRADLEY TREEBY: That is wonderful, and that's a nice segue into something we're going to discuss a little later, which is what kind of metrics we might use.  But I'm going to go to Charles next.  So we know what it is we want to target.  We know how big it is.  We have our acoustic beam.  Maybe we've done a simulation.  Now we've done the thing for real.

So Elly sort of touched on this very briefly, but what can we do in situ?  What have we got?  What tools are available right now, or maybe coming soon, that we can use to check what we've done?

CHARLES CASKEY: Great question.  We have done the planning, we're now in the moment, we are sonicating, but we have this opaque structure that we're trying to estimate an acoustic beam through.  So how do you take this careful thing we do in a water bath and then estimate what's happening inside?

So some of the things were already mentioned.  I'm just going to go through them and talk really briefly.  Clinically, with thermal ablation, we use thermometry.  That works quite well there, and it's already been mentioned that there's a challenge of getting good signal-to-noise ratio in the neuromodulation scenario.  Something that wasn't really touched on in these discussions is that even if you could raise the temperature in the brain by 1 degree and start to get into the SNR, most of the acoustic apertures are also going to be raising the skull temperature by so much more that thermometry is really going to be a challenge.

So that gets us to the next thing that's oftentimes used in the magnet, and that my group's done a decent amount with, and that's using the acoustic radiation force.  I think this is definitely a good potential option, but it's still kind of early in development; the pulse sequences on the MR may not be available without certain agreements with the vendors, although if people have agreements with Siemens, Philips, and others, there are ways to get these sequences.

But again, there's another consideration if we start to think about safety in this scenario: to generate these displacements, you need to use a pretty high pressure, and that starts to encroach upon or go beyond some of the MI limits that we discussed.  That's with current technology, though.  As we make better coils and more sensitive imaging sequences -- there's been some discussion in the chat here as well -- we're really close, and there are nice studies, including an excellent one from Kim's group, where they couldn't find anything on histology, or any evidence of damage in the brain, and they were able to generate nice acoustic radiation force images.  So those are two things that are currently available that give some sense of the acoustic field in the brain.

They require MR, so that's a challenge.  Some things that have started to be brought up in the last couple of talks, including in recent papers, are the idea of pulse echo imaging of some kind, to ensure that your current scenario, with the transducer pointed at the skull, mimics your treatment planning scenario.  If you have bubbles or other things, you might be able to pick those up.  Transmission measurements could help, too -- I think there's a lot of acoustic information that we're leaving on the table right now, or that we haven't dug into, and I'd love to see more of that.  I look forward to what people are going to develop there.

Then I think the other component that is really important is just the question: is your transducer really placed where you think it's placed in your simulation?  Once you start to get errors with optical tracking and other tools, or just using your image space for guidance, you start to introduce errors that are on the order of the wavelengths we use, and that's going to start to make the predictions challenging.

So those are my thoughts about things, information that you can get in the moment, and I'm often surprised at how challenging it is to do something in the skull, in situ, versus the water tank.  I definitely am.  So thanks for the opportunity to chat about it.

BRADLEY TREEBY: Those are some wonderful thoughts.  Thank you for bringing up the idea of capturing acoustic information; especially for people that are working with and developing multielement arrays, there is another dimension there which is not currently being used, at least not very much.

I want to go to Elly next.  So we've heard a little bit about the brain structures.  You obviously gave a great talk about what we can measure.  Charles just talked about what we might be able to measure in situ.  So let's say we have two things.  We have what we wanted and what we got.  So what kind of metrics, what kind of measures should we be using to say whether we did a good job or not?  Fidel mentioned overlap.  What are your thoughts on what sort of metrics we should be using?

ELLY MARTIN: When we've done these kinds of comparison studies between modeling and measurement, we've tried to use a range of different metrics.  What you want to do is cover the range of things that you're trying to check.  So, is the focus in the right place?

You could look at that in a number of ways.  One of them is to find out where the location, the spatial peak pressure, is and compare that, you know, how far apart it was in which direction.  But that might not tell you everything about whether you accurately captured aberrations or something like that, or whether the focal shape is really simulated accurately.

So for that, you might be more interested in the dimensions of the focus.  Often we define those relative to the peak pressure.  It's a good metric for making a comparison if you've got two things and you ask, is this one the same size as that one?  But it's not necessarily a good absolute measure of whether we covered the right brain region, because if we don't know what threshold the pressure needs to exceed to have some kind of modulation effect, then we can't really define that region.

So I think it's important to look at the whole field as well and see how well you're predicting the pressure kind of everywhere.

I have looked at the degree of overlap, so contouring a region, which might just be the 50 percent mark or something like that, and looking at how well those two regions overlap, what percentage overlap.  But yes, I think in general looking at maximum and average errors, something that tells you whether you got the field correct overall, is also good.

But sometimes with these kinds of problems you can have a lot of features, like reflections in and around the bone and between the transducer and the skull, that are there in real life but not in the model, and I think those errors can remain quite high.  But, yes, in general we need something that tells you about the amplitude, the position, and something about the distribution as well.
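A sketch of the kinds of metrics described here, assuming simulated and measured pressure fields co-registered on the same grid with isotropic voxels; the 50 percent contour is just one possible choice.

```python
import numpy as np

def compare_fields(p_sim, p_meas, voxel_mm, contour=0.5):
    """Position error of the spatial peak, relative amplitude error,
    and Dice overlap of the chosen (e.g. 50 percent) focal contours."""
    peak_sim = np.array(np.unravel_index(np.argmax(p_sim), p_sim.shape))
    peak_meas = np.array(np.unravel_index(np.argmax(p_meas), p_meas.shape))
    position_error_mm = np.linalg.norm((peak_sim - peak_meas) * voxel_mm)

    amplitude_error = (p_sim.max() - p_meas.max()) / p_meas.max()

    focus_sim = p_sim >= contour * p_sim.max()
    focus_meas = p_meas >= contour * p_meas.max()
    dice = (2 * np.logical_and(focus_sim, focus_meas).sum()
            / (focus_sim.sum() + focus_meas.sum()))
    return position_error_mm, amplitude_error, dice
```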

BRADLEY TREEBY: That's great.  I want to bring in Elsa next to sort of build on that.  So let's say we've got some metrics; maybe it's the error in the pressure, the position, the overlap.  So thinking about those or any others you want to throw in the mix.  What should we be thinking about when we say, oh, we did a good job or we did a bad job?  So what sort of values do you think are clinically or maybe from a neurophysiological perspective are kind of acceptable?

ELSA FOURAGNAN: That is a great question.  Thanks, Brad.  I'm Elsa Fouragnan, a neuroscientist from the University of Plymouth, UK.  Elly, you were just talking about amplitude and focal position.  The thing is, first, we don't really know which intensity is needed for a certain biological effect, so there's lots of research needed in that area.  But even if we did, at the moment most studies are designed so that we fix the pressure or intensity in water and then sonicate everybody in the same way.  But everybody has a very different skull, as you said, so the loss will be very different, and the in situ pressure and intensity will be very, very different.

So perhaps one thing to consider could be to start tailoring or personalizing the treatment.  If you have very good skull images and good simulations of what you're measuring in your tank, and you really rely on your simulations, you could start having personalized treatment so that the in situ value is the same across everybody.  That would be a first step.
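A minimal sketch of that first step, assuming linear propagation so the simulated skull transmission simply scales with drive pressure; the variable names and numbers are illustrative.

```python
def personalized_source_pressure(p_target_insitu, p_insitu_sim, p_source_sim):
    """Scale the source pressure so each participant's simulated in situ
    pressure lands on a common target value."""
    transmission = p_insitu_sim / p_source_sim   # per-participant skull loss
    return p_target_insitu / transmission

# A participant whose skull transmits 30% of the source pressure needs twice
# the drive of one transmitting 60% to receive the same in situ dose:
print(personalized_source_pressure(0.6, 0.3, 1.0))  # 2.0 MPa at the source
print(personalized_source_pressure(0.6, 0.6, 1.0))  # 1.0 MPa at the source
```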

And then there is the issue of error around the focal position, and that's what Fidel was talking about: the cost of a mistake in the cortex, where your Brodmann area is very large, is going to be very different from the cost of a mistake if you're targeting a very deep brain region that is very, very tiny.  A lateral mistake there will mean you're not even engaging with the target at all.

When we are planning our TUS experiments, I think one thing that we are all very worried about is surrounding regions, the regions that we definitely don't want to target because of side effects.  I'm thinking here of the optic nerve, which could be in the way of the nucleus accumbens given a certain trajectory.  This would have a high cost if we were to target it by mistake, or because of the elongated beam.

So we do think a lot about targeting the region while also minimizing exposure of the surrounding tissue, and it would be great if we came up with a map of costs so that we could start to optimize trajectory planning.  Perhaps this is going to happen in the future, but those are just a few thoughts I had.

BRADLEY TREEBY: I love that idea.  A map of targeting costs.  That sounds like a brilliant initiative.

I want to go to Darin next, because he gave us a nice introduction to sort of running some human trials on subjects and later in clinical populations.  So just paint a picture for where we are right now, at least in your view.  How close do you think you have been in terms of engaging with the target that you were trying to hit, sort of your view of the uncertainty in the pressure amplitudes and so on.

DARIN DOUGHERTY: Thank you for asking.  We're kind of going for it, obviously, and for post hoc modeling we use k-Wave approaches on the back end.  It looks like we're hitting the target 80, 90 percent of the time, and we have redone analyses based on those data.

So we're doing pretty well with the two methods that I described, the scout-and-iterate approach and also the software program with the vitamin E capsule from Richard Bouchard and colleagues at Baylor, who were kind enough to provide that for us.  We're hitting the target pretty well, so I'm feeling pretty good about that.  I mean, 100 percent would be better, and I think maybe multielement array systems will help us get there; a question also came up in the chat about robotic neuronavigation that takes all the information and puts the transducer where it needs to be, but those things are going to happen later.

The thing I'm most concerned about is the parameter space is so large.  We're kind of putting our chips down on a few small islands in a vast ocean, and on the one hand you have to start somewhere.  You can sit and think about it all day, and it's analysis paralysis and you never do any studies.  On the other hand, if there's not some thought behind it, it's just throwing a dart.  So we're trying to thread that needle between those two things. 

I think that's what I'm most worried about.  Just like people do cortical mapping with depth electrodes in epilepsy patients, where you gradually learn that different stimulation parameters at different sites produce different effects, I think a very valuable project and database would be simply mapping the parameter space and starting to fill in the gaps between some of those islands.

So we've had some effects with the parameters we're using.  Is this the best effect we can get with those parameters?  Is it the worst?  Is it somewhere in the middle?  We don't know.  I'm an end user; you guys are the physicists who are doing the work that I need in order to use the device.  That's why I think it was good that I gave an overview as an end user.

But I think a lot of work for the end users is going to be determining what can we use to achieve what we want to do first of all in neuroscience experiments but then hopefully this would help people later down the road with disorders, and if we want to achieve that, we really need to know how to do it right or in the most optimal manner possible.

BRADLEY TREEBY: Very good comments.  Excellent thoughts.  I'm going to loop back around to Fidel now, and I want to pose this question to Fidel and also to the rest of the panel, because you're all doing your neurostimulation experiments in different circumstances, on humans, in different populations.  So we know there is some uncertainty about what the threshold is, what the mechanism is, about our targeting, maybe about our positioning and our registration.  We know some things, but we know there are huge error bars on other things.

So how do we factor that in when we think about planning a study?  We know we have some error bars, but how do we factor that into, let's say, our trial design or our study design so that we can still get robust results?

I'll start with Fidel, and then I'll go to Darin.

FIDEL VILA-RODRIGUEZ: It is a challenging thing, more so considering that some of the target areas in the brain that we are aiming for don't have obvious eloquent readouts.  There are areas of the brain where, when you modulate or stimulate them, they give you an eloquent readout, something that you can easily observe and measure.  But if we stimulate, to use a TMS target now, the dorsolateral prefrontal cortex, we don't get an obvious readout.  Looking at how the TMS field is trying to deal with this, concurrent techniques, like stimulating while doing EEG, might be an option for non-eloquent subcortical areas, to learn a bit about what's happening.

The challenge with EEG and subcortical areas is that it is really challenging to disentangle the signal that comes from deep in the brain.  But those would be a few avenues in terms of figuring out what are the right readouts.

And building on what Darin just said about mapping the parameter space, I also thought of it the other way around.  With a fixed set of parameters, would it be helpful to consistently map different parts of the brain, with readouts that we agree on, and see whether we get a response, kind of mimicking Penfield's experiments?  But there's lots of work to do.

BRADLEY TREEBY: Yeah, excellent thoughts.  Same question to you, Darin.  How can we take this into account?

DARIN DOUGHERTY: I think Fidel brought up a good point.  I was talking about the parameter space; that's one axis.  With the geography, you're right, Fidel, we'd then repeat it at a whole bunch of spots as well.

I think that that work is going to be extremely important.  It's kind of our roadmap.  It's going to be a pile of work, but I really think it will pay off down the road to have that information.  I think Fidel covered most of it.

I have one other idea that I think would be interesting that a busy epilepsy monitoring unit could do, and that would be to administer tFUS in the epilepsy monitoring unit when you have 200 to 300 contacts with stereo EEG in the brain.  We have done a lot of studies with sEEG in the epilepsy monitoring unit.

The electrodes are in place for 7 to 10 days.  These are extremely valuable patients of opportunity, and many of them will have electrodes where we can measure LFP from a pretty small territory.  So, as I said, I think Fidel covered most of it, but that would be another way to add mapping in a really unique and special patient population where we get awake, behaving human recordings.

BRADLEY TREEBY: I'm going to go to Elsa, and then to Charles.  Elsa, you mentioned that you don't do individual dosing, and I think most people are doing the same.  They have a set of parameters they use in water.  So how do you kind of account for that variability when you do your analysis or when you design your studies?

ELSA FOURAGNAN: Well, at the moment we regress the in situ value against the effect that we're observing, trying to see if there is a relationship, again assuming we believe that our simulation data are accurate enough.  But, yes, I really hope that in the near future I'll be more confident that the in situ values are reliable enough that I can tailor my protocols so that everybody gets the same dose.

Til was talking, and Elly as well, about how the focal maximum is not that meaningful if the intensity varies so much across participants, but if we start having a maximum intensity in the brain that is pretty comparable across participants, then we can start rethinking volume a little more as well, and that might make things a lot easier for us.

BRADLEY TREEBY: Amazing, and Charles, your team has put a lot of effort into sort of designing transducers, understanding what's coming out of them, calibrating.  So how do you sort of take that information and the uncertainties and then marry that up with a real life experiment maybe in a nonhuman primate, et cetera?

CHARLES CASKEY: We really rely a lot on the MRI for this.  Pre-stimulation MR ARFI is the standard for pretty much all of our studies at this point in time.  So any time we think that we're stimulating, we'll have an associated ARFI image for it, and as I said, we're often surprised at how difficult it is to create that focus.  But we also recognize that there are limitations to ARFI.  Sometimes you may not get the acoustic radiation force-induced displacement that you expect -- displacement is not the pressure field -- for example in places really near the skull and other situations.

So it's really a challenge, certainly as we've moved out of the magnet into doing electrophysiological experiments, looking for when we're actually hitting the electrode, and I think Li Min is going to speak a little bit about that.  So yeah, anything to close that loop is really necessary, and I feel like every experiment we do just highlights that more.  We really rely so much on the magnet right now, and it's a great tool, but it's a lot of work to do it.

BRADLEY TREEBY: We have another four minutes or so.  I'm going to end the panel discussion with a lightning round, if you like.  The question is: what is one thing that we could be doing differently?  Let's say we're designing an experiment or practicing somewhere along this chain between device design and implementing a clinical trial.  What's one thing we can take away from this panel that makes us go, aha, I'm going to do that on Monday morning?

Start with you, Elly.

ELLY MARTIN: Okay, well, I guess I thought about this before we had this discussion, and what I wanted to say was everyone should just have the most sophisticated equipment that you could possibly have, loads of elements, MRI scanners, blah-blah-blah, but we can't all do that Monday morning.

So maybe just understanding exactly what your transducer's doing, just pinning that down first.  That's what I would do, anyway.

BRADLEY TREEBY: Amazing, thanks.  I'm going to go to you next, Darin.  What's your words of wisdom for us?

DARIN DOUGHERTY: I don't have anything to do Monday morning.  I'll probably be recovering from the New England hurricane, which is coming tonight.  But long-term, it's along the lines of what Elly said.  I think we need a system where we can take the individual skull data, the window towards a target, anything that happens on the way to the target, and have it be really automated, because right now we're like, oh, it's not on target, bring him out, move it a little bit.  Something that's really plug and play.

I know that's not Monday.  It's going to be years.  But at some point, if we want to really have this ramp up or accelerate, I think we need something that's a little bit more plug and play.  That doesn't mean TMS for dummies, but at least easier to use than the research equipment we're using now.

BRADLEY TREEBY: Amazing.  I am going to go to you next, Elsa.  Any words of wisdom for us?

ELSA FOURAGNAN: Yes, following on from what Darin was saying, and that cost map that I was talking about: I would love to have time to implement, but I won't, so hopefully somebody else will do it, a way to automate trajectory planning on the basis of surrounding tissues and so on and so forth.  At the moment, on some neuronavigation systems, you can click and get the shortest distance, and that's usually a terrible, terrible solution.  Because at the moment, as a lot of us know, we just go trial and error, trial and error, and then we simulate, and it can be very costly, long, and tedious.

So if that were automated, it would make my life a lot easier.

BRADLEY TREEBY: Thanks.  That's a great suggestion.  I'm going to go to you next, Fidel.  Words of wisdom.

FIDEL VILA-RODRIGUEZ: Not sure it's so much wisdom as a wish.  I'm going to go with consistent reporting of parameters and harmonization, and this is a plug for the work that the ITRUSST Consortium is doing.  From the experience of working with TMS, when the field came to a place where we were all consistently reporting and using the same terminology, it allowed us to really understand what other people had done and build from that.

So we're in the early days, and I think if we work on getting all of us to report what we do, the parameters and the design, it will certainly catalyze all this work that we have to do by Monday.

BRADLEY TREEBY: Amazing, good suggestions.  Charles, bring us to a close.

CHARLES CASKEY: Sure thing.  I just want to see more of basically the same thing.  I want to know that my treatment plan is being executed.  I don't know if that confirmation is going to come from neural information, or from the MRI, or from acoustics, but in the moment I want to know that the treatment is being executed, and anything we can do towards that is so important to me.

BRADLEY TREEBY: Amazing.  Well, I want to thank you for being such wonderful panelists, and that brings this panel to a close.  We hand back to the organizers I think for a break.

(Break)

Session 6: Optimizing Target Engagement: Parameter Space and Effects

ELIZABETH ANKUDOWICH: Welcome back to our final session, Session 6, that focuses on optimizing target engagement, parameter space and effects.  Our first speaker for the session will be Dr. Holly Lisanby, who directs the Division of Translational Research at NIMH and the Noninvasive Neuromodulation Unit with the NIMH Intramural Research Program.  She'll be speaking on targeting in the mental health space. 

She will be followed by Dr. Li Min Chen from Vanderbilt University Medical Center, who will be speaking about MRI for target engagement of acute effects.

Next up will be, after Dr. Chen, will be Dr. Miriam Klein-Flugge, from the Wellcome Center for Integrated Neuroimaging at the University of Oxford.  She'll be speaking about behavior and MRI for target engagement of delayed effects.

Our final talk for the session will be given by Dr. Tulika Nandi, who joins us from Radboud University in the Netherlands.  Her talk will focus on the parameter space.

Welcome, Holly, and please feel free to share your slides.

Targeting in the Mental Health Space

HOLLY LISANBY: Thank you, Lizzy.  Good afternoon, and thank you so much for a stimulating, or neuromodulating, couple of days.  As Lizzy said, I'm going to be kicking off this session talking about how we think about targeting of neuromodulation for mental health.

These are my disclosures, and as Darin Dougherty hinted, I have been around the field of brain stimulation for some time, and although I have deep expertise with TMS and TDCS, ECT, MST, DBS, VNS, my disclosure is I have not worked with focused ultrasound and I'm so grateful to be learning from you, the experts, about the state of this field.  So really thank you for a very informative seminar.

So what I'm going to be talking today about is how we think about defining targets for mental health applications, how we go about discovering them, targets for mental health and in particular for subjective mental states.  How we think about reaching these targets noninvasively and selectively, and some thoughts about engaging targets.

So let's get started.  When we think about the different levels of analysis at which we could search for targets, from genes to behavior and beyond, the level of analysis where our target might be may depend on the intervention that we are using.  We think of pharmacotherapy as engaging at the molecular level, psychosocial interventions as engaging at the behavioral or cognitive level, and with brain stimulation devices, mostly we're thinking about brain sites or circuits.

But even in figuring out how do we go from a brain scan to a target, it's a bit of a moving target, isn't it?  Because we can use structural imaging, we can use functional imaging, we can look at structural connectivity, we can look at functional connectivity.  We can also with other technologies look at neural oscillations with high temporal resolution.

So when we think about defining the target, some targets are sites, some targets are circuits, some targets are neural dynamics.  When we target sites, we are thinking about putting the coil or the transducer as close to that site as possible; that would be direct targeting.  If we're thinking about networks, we're thinking about transsynaptic targeting, and if we're thinking about engaging with neural dynamics, we're thinking about temporal tuning of the parameters of our form of stimulation.

Now, we have an additional challenge of defining targets for mental health, because our diagnostic system in psychiatry, the DSM, does not map well onto the biology at any of these levels of analysis that we're seeking to engage the target.  And our diagnostic system relies heavily on self-reports of symptoms, and this results in overlap between disorders and heterogeneity within disorders.  So this is a significant challenge when we think about how we can define targets that could be of therapeutic value to the condition as a whole.

But one approach is a data-driven approach to target discovery for mental health, and this approach helps us move beyond clusters that are based on self-reported symptoms.  For example, for patients with depression, anxiety, or schizophrenia, where there's significant heterogeneity within these groups, if we could bring to bear other forms of objective data, like functional imaging, structural imaging, and neurophysiology, that would help us re-sort these heterogeneous, clinically defined groups into groups that are data-driven, based on this biotyping approach.

In the field of TMS, and I am going to be using TMS as illustration here, this powerfully drives efficacy.  In this study, for example, biotype number 1 was more likely to respond to TMS than the other biotypes, and that was based on resting state functional connectivity data.

Another approach to target discovery is encapsulated in the Research Domain Criteria, or RDoC, approach that NIMH has promoted.  The idea here is to find objectively quantifiable domains of function, like a cognitive function such as working memory, that you could then study across the levels of analysis, as a way of achieving more biological homogeneity within a heterogeneous clinical group.

This is an approach we took in this particular study, the FAST-MAS study, where we enrolled people who had either depression or anxiety, but our target was a very specific brain circuit that was related to reward processing, and you can see where that target is, which was identified by fMRI.  In our study, we were attempting to engage the target pharmacologically, but it also might be a really attractive target for something like focused ultrasound.

So another approach to discovering targets has to do with Network Control Theory.  I will give you one example from one of our studies, where we used TMS applied to the middle frontal gyrus to manipulate working memory, and we found a memory load effect.  That is, there was a difference between active and sham, which you can see here on this radish plot for the hard version of the task, so that's increased memory load, but there's a lot of overlap between the groups.

So we asked how we could understand heterogeneity in the response to TMS based on network control theory.  We computed the modal controllability of our actual TMS targets, which had been selected on the basis of fMRI targeting, not on the basis of modal controllability, and we learned that the degree of modal controllability, here on the x-axis, strongly predicted our TMS effect.  That suggests that in future paradigms you might use modal controllability to select from among different targets within your network, to find the one that's most likely to influence the network as a whole, and the behavior that you're targeting.
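For readers who want the gist of the computation, here is a minimal sketch following one common formulation from the network control literature (e.g. Gu and colleagues); this is our illustration, not the exact pipeline used in the study.

```python
import numpy as np

def modal_controllability(A):
    """Score each node's ability to push the network into its
    fast-decaying ('hard-to-reach') modes.

    A: symmetric structural connectivity matrix (e.g. from diffusion MRI).
    """
    A = A / (1 + np.abs(np.linalg.eigvalsh(A)).max())  # stabilize the system
    lam, V = np.linalg.eigh(A)                         # modes of the network
    # phi_i = sum_j (1 - lambda_j^2) * v_ij^2
    return ((1 - lam**2) * V**2).sum(axis=1)

# Example with a random symmetric "connectome":
rng = np.random.default_rng(0)
W = rng.random((10, 10))
W = (W + W.T) / 2
print(modal_controllability(W))  # one score per node / candidate target
```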

Many groups are beginning to think about this and are developing computational platforms to enable it.  The study shown here, for example, is an example of individualized target search via network control theory, where you feed in the anatomical and structural data and it computes the graph constructions for each individual subject, and then, in the simulation space, you can ask what happens if I were to give TMS to one or another node within that network.

So that's an illustration of an approach, a novel approach, to target discovery.

Now what about reaching targets?  Whether you're using scalp-based criteria, like the 5-centimeter rule or beamF3 based on the 10-20 EEG system, or structural or functional guidance -- those are the columns on this E-field plot -- regardless of which method you use, when you apply it across people, you get significant heterogeneity.  Each row here is a separate subject, and that teaches us the importance, in the case of TMS, of E-field modeling on an individual basis.

And you've heard in this workshop some of the tools that one might use for focused ultrasound to try to get more information about what that individual brain is actually seeing.

Now, I have a bit of focused ultrasound envy, because TMS is limited by particular laws of physics.  It means we have a depth/focality tradeoff with TMS.  The deeper we want to reach on the x-axis, the less focality we have on the y-axis, regardless of which type of coil you use, and we modeled a bunch of them shown here.  So this highlights part of why I'm so interested in focused ultrasound, the idea of focusing at depth is something that our other technologies can't do noninvasively.

So when we think about reaching deeper brain targets, because TMS can't be focused at depth, we had to get creative and think about how can we transsynaptically target deeper brain structures.  In this study, we wanted to target the subgenual anterior cingulate, which is one of our favorite targets for depression, and we used diffusion tensor imaging to identify the superficial cortical region that had the strongest structural connectivity with this deeper target.

And then we applied TMS.  It turns out to be a region within the frontal pole.  We applied TMS and used interleaved TMS/fMRI to ask whether we were able to engage this deeper target and demonstrate a dose response function, and that's what's shown here.

So when we think about engaging targets, we want to show a number of things.  We want to show not only that we've reached the target, but that when we reach it, we change the activity within the target in the hypothesized direction, consistent with the hypothesized mechanism of action.  And to really nail it, you want to show a dose response function, so that when you manipulate the dose, you see a change in the degree of target engagement.

A few thoughts that are potentially relevant to focused ultrasound, and kind of a novel approach to engaging targets: think about engaging them using multiple modalities.  So instead of just using your brain stimulation target, you might use brain activation to alter the brain state, to make it more susceptible to the stimulation, and we think this is important because we already know that the brain effects of neuromodulation are inherently state dependent.

The simplest example of that is muscle tone.  If you apply TMS to the motor cortex with a muscle at rest, you can record the amplitude of the motor evoked potential, but if you have that muscle activated, so it's facilitated, the MEP grows.

So we've known for a long time that brain state matters, but in a lot of brain stimulation studies, it's not controlled for.  So a novel approach is to use a cognitive intervention to control brain state while you're stimulating the network involved in subserving that cognitive task to make the circuit more susceptible to the effects of brain stimulation.

Now, we already know this can be done with precisely timed electrical pulses.  This paradigm is called paired associative stimulation, and it's shown here: you pair median nerve stimulation with precisely timed TMS to the sensorimotor cortex, you do a series of pairings, and then you find that you've increased the plasticity of the sensorimotor cortex from before to after.

Now, though, we want to do this with a cognitive task, so not just stimulating the median nerve; we want to activate a brain circuit that's involved, in this case, in working memory.  So we have our subjects do the working memory task online; we've done event-related fMRI, so we know which areas are supposed to be engaged during different phases of the task, and then you time-lock the stimulation to the different phases of the task.

And what we found was that this approach, called cognitive paired associative stimulation, or C-PAS, was able to remediate the impairment in working memory that you get following sleep deprivation, in a site-specific manner: we found this enhancement when stimulating the superior occipital gyrus, but not a nearby control area.  And we're stimulating during the task performance.

And our measure of target engagement was to show that the degree of brain network expression, on the x-axis, correlated with the degree of TMS effect.  So that helps us to understand individual variability and also gives us a measure of the dose response function.

We're now expanding this approach into the treatment of depression.  We do individualized fMRI targeting, and the patient engages in elements of a skills-based therapy while they're getting the stimulation.  We currently have a double-blind controlled trial under way, and our primary outcome is a target engagement measure, using fMRI to look at engagement of the particular circuit from before to after.

So to conclude, we have a number of challenges in how we define, discover, reach, and engage targets for mental illness specifically, and some of these are inherent to how we diagnose, identify, and measure psychiatric disorders.  I have given you some brief illustrations of how structural and functional connectivity might be used to reach deeper targets noninvasively with TMS.  While focused ultrasound can already focus at depth, I would think you would still be interested in knowing what your transsynaptic action is on the rest of the network, because that network expression may be important for harnessing clinical benefits, and also for understanding attribution: you might be stimulating one area, but we want to understand how its influence on the rest of the network yields the behavioral outcome that you're seeing.

I've shown you that TMS/fMRI interleaving can be used to validate this transsynaptic targeting approach, and I think it does highlight the role for functional imaging in identifying the impact of focused ultrasound on distributed networks, not just the site where you're stimulating.

I showed you an example of structural controllability, which may represent a new way of selecting targets for focal neuromodulation, and I'd like to raise the question: could network control theory also be relevant for target selection with focused ultrasound?

Multimodal approaches comprising a simultaneous cognitive or behavioral intervention with online TMS have been a means of targeting plasticity within a task-related network.  Could this approach also be relevant for focused ultrasound neuromodulation, particularly at doses that modulate ongoing activity rather than stimulate, as we discussed yesterday?

These are the members of the Noninvasive Neuromodulation Unit in my lab at the Intramural Research Program at NIMH, and just a plug for us; we're hiring for the next director of the computational psychiatry program at NIMH.

And I'm going to stop sharing my slides and ask the next speaker, Li Min Chen, to join.  Thank you.

MRI For Target Engagement of Acute Effects

LI MIN CHEN: Thank you for the opportunity to present the study from our group, and I'm going to show you some of the results, using functional MRI to assess the target engagement during acute ultrasound neuromodulation.

Functional MRI has been used to assess target engagement.  In this study by Wynn's group, when ultrasound was delivered to the motor cortex, the volume of activation in the hand region, or the digit region, actually increased compared to the sham group.  But there is a limitation to functional MRI measurements: the imaging signal cannot really tell you whether the net outcome comes from excitatory or inhibitory neurons, because both types of neurons consume energy.  So here I'm showing one of the papers from He's group, looking at the direct stimulation effect of ultrasound on cortical neurons.

He's group looked at the effect of pulse repetition frequency on the modulation of fast-spiking versus regular-spiking neurons, and this figure shows that the high pulse repetition frequency modulates the regular-spiking neurons more selectively.

In our study, we are trying to combine these two methodologies to assess target engagement.  We're focusing on the primate somatosensory cortex and its functional network, and as I just mentioned, one of the biggest benefits of using functional MRI to assess the effect is that it allows us to simultaneously evaluate both the local effect and the remote, transsynaptic network effects.

So in our study, we quantified the modulatory effects using two types of measurements.  Locally at the target, we look at the dose-response function, and we also look at the network effect of the ultrasound stimulation.  At the end, I will show you some very preliminary intracranial electrophysiology data that we are using to validate our observations from functional MRI.

Our experimental setup uses the primate as a model, and here I'm showing you a setup where we combine optical tracking.  Using optical tracking, we identify the region of interest, and then we place our MR-ARFI imaging plane around that target, because we're trying to minimize the ARFI exposure and the time needed for the ARFI data acquisition.  So we use optical tracking to help us achieve precise targeting.

Then we use functional MRI to monitor the ultrasound modulatory effects.  Here I'm showing you the spatial overlap between the optically estimated target and the ARFI target; in the background, the red patch is the functional signal increase.

As for our parameters, we have done this study in both regions, and I'm going to show you both sets of data.  These are our ultrasound parameters, and thanks to Charles for choosing them; he did the recording studies and the simulations, so we decided to use those parameters in our study.

The beauty of the somatosensory system is that we can build a natural control into our experiment, and that helps us really assess the functional engagement.  In our case, we're stimulating the hand region of the somatosensory system, both in the cortex and in the thalamus.  We designed three experimental conditions.  First, as I just mentioned, we have a built-in tactile or heat stimulus as a normal, or positive, control.  Second, we have a FUS-alone condition, where the stimulation interacts with the resting-state brain.  Third, we have a combined condition, where tactile or heat stimulation is delivered with concurrent ultrasound stimulation.

This combined condition allows us to look at ultrasound modulation of an elevated, or activated, brain state.  What we measure is the BOLD signal amplitude, time to peak, and latency, when stimulating either the primary sensory cortex or the thalamus, a deep brain structure.

We look at the at-site BOLD response and the off-site BOLD response, and in the meantime we can also look at the off-target network response, as highlighted here.  We have also used resting state connectivity and effective connectivity networks to look at how those networks are affected by ultrasound exposure or modulation.

In the end, once we have all these data, we use imaging guidance to place an electrode at the target, to validate the modulatory effects we observed with the functional MRI signal.

Here I'm showing you the tactile stimulation alone, and as expected, multiple brain regions light up as part of the sensory system in nonhuman primates.  When we deliver the high-intensity, 925-kilopascal ultrasound to the primary sensory cortex, here areas 3a and 3b, you can see that it not only engaged the target but also elicited activation in many off-site brain regions.  So it's driving not only the target but also off-target areas.

Here I'm showing you the time course.  You can see that with ultrasound alone, the time course is quite similar to what the tactile stimulus evokes.  So, in terms of our discussion today, this is a stimulation effect: it's activating the cortex.  This is the average time course at the target, areas 3a and 3b.

What's interesting is that if we look at off-target regions, the nearby areas 1 and 2 and the insular cortex, you can see that ultrasound and tactile stimulation elicited quite comparable BOLD signal changes.  When we lowered the intensity to a relatively moderate level, from 925 kilopascals to 400, we actually see an inhibitory effect, which we call moderate FUS.  The blue line shows that when we combined moderate FUS with the tactile stimulus, the response is significantly lower than with the tactile stimulus alone.

This led us to think that changing the amplitude of the ultrasound seems to introduce a different outcome: it's bidirectional and also state dependent.  So we came up with this experiment, where we varied the dose of the ultrasound across three levels: high, medium, and low.  They are all in the low-intensity range, of course.

And we have this built-in tactile control.  Here you can see, at the target, that the tactile response is robust, as expected.  The higher intensity, as shown in the previous slides, has a strong excitatory effect, but when we lower the intensity, the moderate ultrasound actually produces the most severe inhibition, or suppression, of the tactile signal.

The lower intensity is actually less suppressive.  We also look at other off-target regions, because we are hoping that their dose-response functions can help us understand the mechanism in the active and resting states at the off-target places.  Each individual area, for example in this slide, where areas 3a and 3b are the target, is interconnected as part of the somatosensory system, but the functional and anatomical connections vary from area to area.

By looking at the dose response across target and off-target regions, we gain some insight into the potential connections, or potential interactions, between the targeted region and the off-site effects.

So I want you to look at this red curve, which shows the FUS stimulation alone.  When we lower the intensity from high to medium, as you would expect, the BOLD signal change goes down.  Other areas, like the secondary somatosensory cortex, follow, but one region, the insula, doesn't, and we can dig into that more, because the insula has different connection types than area 1.  Overall, once we add ultrasound on top of the tactile response, the blue dots, you can see that the most significant suppression occurred at the moderate ultrasound level.

We were trying to understand why the off-target regions have a different profile than the target, so we did a correlation study of how other areas follow the changes in area 3 in two conditions: the FUS-alone condition and the FUS-plus-tactile condition.  In these two plots, you can see that in the FUS-alone condition, this area is fairly correlated, with a correlation coefficient around r = .2, but when the ultrasound is combined with tactile stimulation and delivered to the thalamic nucleus, there is no correlation.

Here, the B panel shows that the high-intensity ultrasound barely introduces any inhibitory effect, but the moderate and low intensities do.  Based on this observation, we came up with a hypothesis -- I'm sorry, we also looked at this different display, showing the ultrasound neuromodulatory effect on the circuit during different states.

The resting state is indicated by the dotted line, and the solid line indicates the tactile-activated state.  This is the high-intensity ultrasound, this is the moderate intensity, and C is the low-intensity ultrasound.  What I'm trying to show is that if you look at the dotted versus the solid line at different intensities, their relationship at the target and at different off-target regions varies.  So that points to quite complex interactions or connections.

That observation led us to hypothesize that the different intensity of the ultrasound likely engage in different proportion of excitatory versus inhibitory neurons.  So that led to different outcomes, because the different proportion of excitatory neurons being modulated at the target and then that can subsequently affect the off-target at the brain region.  It also depends on their connectivity strength and connectivity type.
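
To make this hypothesis concrete, here is a purely illustrative toy model in Python; every threshold, slope, and weight is invented, and the only point is that if inhibitory neurons recruit at lower pressures than excitatory ones, the net drive can be excitatory at high intensity and most suppressed at moderate intensity:

```python
import numpy as np

def sigmoid(x, threshold, slope):
    # Smooth recruitment curve for a neural population.
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

pressure_kpa = np.array([200.0, 400.0, 925.0])  # low / moderate / high

# Assumption: inhibitory neurons recruit at lower pressure than excitatory.
exc = sigmoid(pressure_kpa, threshold=600.0, slope=0.01)
inh = sigmoid(pressure_kpa, threshold=300.0, slope=0.01)

net = 1.0 * exc - 0.8 * inh  # net drive with assumed population weights
for p, n in zip(pressure_kpa, net):
    print(f"{p:6.0f} kPa -> net drive {n:+.2f}")
# Output pattern: suppression at 200 kPa, strongest suppression at
# 400 kPa, net excitation at 925 kPa.
```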

This slide shows that we also looked at connectivity change before and after ultrasound exposure.  These resting state data were acquired over a period of three hours, and here you can see the correlation with all the ROIs lumped together.

This is the pairwise matrix, and I've marked two connections with stars: the S2-MCC connection and the S2-PCG connection.

This is before -- I'm sorry, the label is a typo.  After ultrasound, all the connectivity gets suppressed, and these measurements show it is significantly suppressed: this connection is completely lost, and this one is also significantly weakened.

We also looked at a different target, the PVG, though I'm not showing those data, and again saw a drastic reduction in the resting state functional connectivity network.

It's not just the tactile response; we also looked at the heat response in the thalamus.  So we're moving from the cortex to deep brain structures, because that's one of the advantages of ultrasound stimulation.

When we deliver ultrasound to the thalamus, you can see that not just the target but other areas are engaged, and when we look at the time course at the target, the signal is significantly suppressed.  This again confirms our finding from the sensory cortex, and this map shows the overall network change on the inflated brain.

Since we've been talking about target engagement, I want to show you this figure, from a paper we just published; it was accepted yesterday.  As Charles mentioned, we rely heavily on ARFI, because ARFI tells us exactly where we're targeting and gives us confidence.  But what about deep brain structures, where the structures are much more densely packed?  Here we're showing the original ARFI map from this individual animal.

So we're trying to say, okay, which targets were influenced?  This particular ARFI spot was chosen based on our functional MRI response; we used the functional signal to localize, doing functional localization.  With that, we transform the data to the normalized brain and eventually to the template, and with the template we can see which regions are actually covered, or stimulated, by the ultrasound beam.

We used a 3-millimeter radius to define the ROI, but you can see it involves more than the one target we intended.  We intended to stimulate the VPL, the equivalent of the central sensory system, but the ultrasound beam actually influenced multiple nuclei.  And we looked at this across all the animals.

This slide shows the time-course measurements for the target, as I just showed you, and the off-target regions, and a universally suppressive effect when we modulate the sensory thalamic nucleus.

This is the hypothesis I just mentioned, and we think this is the take-home message: when we change the intensity from high to low, we are likely modulating different proportions of excitatory versus inhibitory neurons, and that can lead to different effects based on connectivity strength.  That's our hypothesis; the different colors of the circles indicate different types of neurons, so here the blue ones are inhibitory neurons and the big ones are pyramidal neurons.  Then, to validate this hypothesis, we did an intracranial electrophysiology experiment.  We developed a high-precision focused ultrasound modulation system, adapted from human DBS studies.

We put the anchor into the brain, and then we can precisely target and calibrate our target inside the magnet.  We can set up a grid and use ARFI as a readout to precisely measure, with this frame, where the ultrasound ends up in the brain of this particular animal.  We have done two of them.  With an electrode placed into the thalamic nucleus, we see a BOLD response, and here is what we see with intracranial electrophysiological recording.

This is the overlay of 32-channel recordings from a 32-channel linear electrode.  When we have tactile stimulation, on and off for 500 milliseconds, you see the local field potential increase; the increase is plotted going downwards, and you see the signal change.

When we switch to ultrasound stimulation, this is what we see: a robust local field potential change, and in this particular case something that lasted a little while.  We also look at spiking activity.  This is the tactile response: as the probe goes on and off the hand, we see the tactile responses.  When we deliver FUS stimulation, we see robust spiking activity during the FUS stimulation, and we don't see any activity outside it in this condition.

We also tried to understand whether there are any aftereffects related to this, so we blew up the pulse train: each individual red line in the background marks an ultrasound pulse, and each short black bar marks a detected spike.  You can see there's no fixed relationship between the ultrasound pulses and the spiking activity.

This observation gives us some confidence that the spiking we isolated is not actually an artifact of the ultrasound pulses.
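
A minimal sketch, on synthetic data, of one way to run such an artifact check: compute each spike's latency relative to the preceding ultrasound pulse and histogram it.  The PRF, spike count, and bin count below are arbitrary; a flat histogram is consistent with genuine spiking, while a single dominant bin would suggest a pulse-locked artifact:

```python
import numpy as np

rng = np.random.default_rng(0)
prf_hz = 500.0                 # pulses every 2 ms, starting at t = 0
period_s = 1.0 / prf_hz
spike_times = np.sort(rng.uniform(0.0, 1.0, 200))  # synthetic spikes

# Latency of each spike relative to the preceding pulse.
latencies = spike_times % period_s
counts, _ = np.histogram(latencies, bins=10, range=(0.0, period_s))

print("peri-pulse latency counts:", counts)
```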

This is very new data, from very recently.  We also tried to separate narrow-spiking versus broad-spiking neurons, as I just mentioned; narrow-spiking neurons are usually the fast-spiking, inhibitory neurons, and broad-spiking neurons are usually the pyramidal, excitatory neurons.  Interestingly enough, the neurons we recorded from the thalamic nucleus with tactile stimulation were predominantly, around 80 percent, inhibitory.

This is the tactile stimulation; the red are the fast-spiking neurons and the blue are the broad-spiking, excitatory neurons.  When we delivered FUS, it predominantly activated inhibitory neurons.  This is from the thalamus.  We have other data coming from the cortex, and the cortex seems to have a different ratio of excitatory to inhibitory neurons.  So that again indicates that it's very important to understand the neural response properties at each target we're trying to modulate.  This just gives you one example.

This is my conclusion.  I have shown you functional MRI and electrophysiology evidence supporting direct engagement of ultrasound with the cerebral cortex and deep brain structures.  Ultrasound can induce transsynaptic neural activity, and the effect can spread across multiple functionally associated brain regions.

It can change the strength and organization of the resting state connectivity network.  We also looked at the effective connectivity network and saw robust alterations there as well.  Hopefully, as we dig in further, we will be able to provide some information about how to link the BOLD signal change to the underlying neuronal activity change under conditions of ultrasound neuromodulation.

Thank you so much.  As you can see, this is definitely teamwork, with Charles and Grayson and all the team members, and thanks to the funding sources.  Thank you so much for your attention.

Behavior and MRI for Target Engagement of Delayed Effects

MIRIAM KLEIN-FLUGGE: Thank you so much, Li Min, for a great presentation with so much data in it and thank you to the organizers for a fantastic workshop so far.

I will try in this presentation to shed some light on how we can use behavior and MR-based measures for showing target engagement, but unlike in Li Min's talk just now, focusing more on the delayed side of the effects. 

Just as a reminder, when we talk about delayed effects, the timeline tends to be that we apply focused ultrasound for seconds to minutes, then transfer our participant or patient to an MR scanner or behavioral testing chamber and take readouts of the neural data and any behavioral markers we might be interested in, on a plasticity timescale of minutes to hours.  Typically, we're looking in the hour following the sonication.

The behavior we tend to measure is simple choices, reaction times, and accuracy, and we can use some more sophisticated behavioral models as well.  We can measure physiology at the same time, and then I will try to touch on the evidence we have so far with MR-based markers, including resting state, task fMRI, magnetic resonance spectroscopy, which we heard about from Elsa yesterday, and also some ASL, arterial spin labeling.

One thing to briefly remind ourselves of, in relation to Kim's talk yesterday, is that auditory confounds are less of a concern for these delayed effects, because the readouts happen at a separate time point.  It's still very good to have good sham and active controls, and I'll get back to that.  But at least we are measuring things that are separated in time from the point where patients or participants might hear the actual ultrasound envelope.

So let's start.  Before the behavioral readouts, let's briefly look at the protocols that have been used most frequently.  These are not all of the ones available, but one that is still frequently used in humans, which we're using at the moment, which Elsa presented yesterday, and which was originally published by Robert Chen's group, is this theta-burst pattern protocol, where the duty cycle is 10 percent: we have bursts of pulses for 20 milliseconds over a period of 200 milliseconds.  So it's a relatively low duty cycle, and the protocol lasts just over a minute, 80 seconds altogether.

Then I will also present some data in macaque monkeys, where this protocol was frequently used, which has a higher duty cycle: over each 100 milliseconds, 30 milliseconds are active sonication, so a 30 percent duty cycle, and often this has been applied for 40 seconds, though sometimes for 20 as well.

So there are slight differences here, but in both cases the idea is that we can look at behavior and MR about half an hour to an hour after the sonication.
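
As a quick worked example, the duty cycles and net sonication times follow directly from the numbers just given; the protocol labels below are mine:

```python
# Duty-cycle arithmetic for the two offline protocols described above.
protocols = {
    # name: (burst on-time s, burst period s, total duration s)
    "theta-burst, human": (0.020, 0.200, 80.0),
    "macaque offline":    (0.030, 0.100, 40.0),
}
for name, (on, period, total) in protocols.items():
    duty = on / period
    print(f"{name}: duty cycle {duty:.0%}, "
          f"net on-time {duty * total:.1f} s of {total:.0f} s")
# theta-burst: 10% duty cycle, 8 s of ultrasound over 80 s.
# macaque:     30% duty cycle, 12 s of ultrasound over 40 s.
```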

So now we're getting to some of the behavioral effects that we've observed.  I will start with a study from our own lab, but move on to others as well, just to give you a range of things that we've seen.  In this particular study, we were interested in whether monkeys can actually perform novel choices.  You can imagine that you can easily decide between two holiday destinations you've never been to; somehow we are able to simulate that experience without having had it before, and we were curious to know whether monkeys can do that as well.

As part of a bigger study whose details I won't go into, we also used ultrasound to test whether such choices causally rely on a part of the medial prefrontal cortex.  Just to briefly tell you what the animals were doing: they were trained on two sets of stimuli.  The first showed that different colored cues led to different numbers of drops of juice, from green leading to only one drop to blue leading to 10 drops, at a fixed probability of 60 percent.

Then they learned a second set, where the numerosity, or density, of the dots always led to the same number of drops but changed the probability, from 0.1 to 1, that the outcome would actually be delivered.  They were overtrained on these trials, so the stimuli are obviously not new to them; they've experienced them many times.  But you can already see that these stimuli make up just two rows in the two-dimensional space spanned by dot numerosity and color, so there are lots and lots of stimuli we can generate that they've never seen before and never experienced the outcome for.  In a series of fMRI studies, we showed that the medial prefrontal cortex is important for this, and the part I'm focusing on here is that we then tried to intervene on the medial prefrontal cortex to see whether that is causally really the case.

So we used the protocol I just showed you, which lasts 40 seconds at a 30 percent duty cycle, and then had macaques do a choice task for the 45 minutes that followed.  There were two stimulation sites: the target area of interest was the medial frontal cortex, and we had a control site, still on the medial frontal surface, but much more ventral and a little more posterior.  So relatively adjacent, but not directly next to the target area.

Then we had the macaques choose between two stimuli that they'd never seen before and had never experienced the outcome for.  So they were novel, but they were also conflicting, in the sense that one of the stimuli had a higher number of juice drops associated with it and the other had a higher probability.

The optimal way of solving this is to form the product of the magnitude and the probability, and without going into a lot of computational modeling details, we basically tried to capture whether they form choice values in this optimal, multiplicative way: you take the product of magnitude and probability, shown here in orange, to get the multiplicative value.

Our hypothesis was that this integration relies on the medial prefrontal cortex and might fail, so that the monkeys would instead rely on a heuristic, comparing the magnitudes and, separately, the probabilities, and then making a choice based on those separate comparisons.  So we extracted an integration coefficient that captured how much the animals relied on multiplicative versus additive value.
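
A simplified sketch of what such an analysis can look like: generate choices from a known mixture of multiplicative and additive value, then recover the mixture weight, which plays the role of the integration coefficient.  The stimulus ranges, noise level, and grid search below are invented for illustration and much cruder than the actual modeling:

```python
import numpy as np

rng = np.random.default_rng(1)
mag = rng.uniform(1, 10, (500, 2))       # juice drops for options A and B
prob = rng.uniform(0.1, 1.0, (500, 2))   # reward probability

mult = mag * prob                         # multiplicative (optimal) value
add = mag / mag.max() + prob              # additive heuristic (rescaled)

true_w = 0.8                              # ground-truth integration weight
value = true_w * mult / mult.max() + (1 - true_w) * add / add.max()
noise = rng.logistic(0.0, 0.05, 500)      # choice stochasticity
choice = (value[:, 1] - value[:, 0] + noise > 0).astype(int)

# Recover the integration coefficient with a crude grid search.
best_w, best_acc = 0.0, -1.0
for w in np.linspace(0, 1, 101):
    v = w * mult / mult.max() + (1 - w) * add / add.max()
    pred = (v[:, 1] > v[:, 0]).astype(int)
    acc = np.mean(pred == choice)
    if acc > best_acc:
        best_w, best_acc = w, acc
print(f"recovered integration coefficient ~ {best_w:.2f}")
```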

What you can see here, in three monkeys, comparing the control site, sham, and target medial frontal cortex sonication, is that in all three animals there is a decrease in how much they integrate value across the two stimulus dimensions, drop number and outcome probability.  So we took this as evidence that ultrasound to the medial frontal cortex impairs the monkeys' ability to base choices on integrated value.

To unpack the specific experimental design choices a little more, I think two things are very important here.  One is that we had a control site, and we showed that our effects were specific to the target location of the ultrasound application.  The second is that another parameter, and in fact many other parameters I'm not showing here, were unaffected by the ultrasound.  So there was also behavioral specificity: we didn't just impair the entire behavior altogether; the effects were very specific to the process we thought was subserved by the area we sonicated.

We're using very similar approaches now in ongoing human work, which is very preliminary, so I'm just flagging it briefly for those who are interested.  Here we are targeting either our area of interest, the amygdala, shown here in the copper color, or a control region, which in this case is the insula.  We have only run a couple of participants so far, but again, we're trying to be behaviorally specific in the questions we're asking, and we're using acoustic simulations to show that we can be spatially specific in our effects.

Participants do a simple emotion approach-avoidance task, where they have to find the correct action for a given emotion.  What we can see so far, in a very preliminary analysis, is that after amygdala sonication their accuracy seems slightly improved over insula and sham, and that's true in three out of four people.  It seems that when they get negative feedback and need to abandon their current choice behavior to do something different, they're more likely to be flexible and change after amygdala sonication.  But this is far too early to be confident in.  Still, it is the sort of approach I highlighted in the monkeys that we're following here, too, and one can definitely ask: is this really proof of target engagement, just because we're affecting an emotional process?

This is trickier to be sure of for regions with higher-order cognitive processes, such as the ones I've just been talking about, but there are some examples, which we already heard about earlier, where the behavior is much more direct: regions with a directly measurable output, such as the frontal eye fields for steering saccades, or M1 with direct MEP measurements.

In this study, Pierre Pouget, Jean-Francois, and others, again in macaque monkeys, applied focused ultrasound to the frontal eye fields, the supplementary eye fields, or a control site, V1 or M1, and had the monkeys do an antisaccade task: they had to fixate, and once a cue was shown on one side of the screen, they had to make a saccade to the opposite side.  What they showed was that the ultrasound led to a shorter saccade latency in the 18 to 31 minutes or so following the sonication, and even if there's a lot of (inaudible), what you can see is that the effect was specific to the ipsilateral side, the orange and yellow bars compared to the green and purple ones, and it was also specific, in all three monkeys, to the target being the frontal eye fields or supplementary eye fields rather than M1 or V1.  So there is behavioral specificity, and there is spatial specificity again.  And similar results have been shown for M1 by others.

So, proof of target engagement in the absence of any neural measures: is it even possible?  I think we just have to be cautious in our interpretation.  There is a risk of reverse inference: just because we are changing a fear behavior, have we definitely sonicated the amygdala?  Could it have been a different region?  So I think the best we can do is be very careful in our design: have good spatial and behavioral controls, including sham, to show specificity, and also run really good acoustic simulations, which we heard about a lot in the previous session, ideally based on individual skull anatomy.

But obviously it is much better if we can also combine our behavioral measures with MR measures, so I am going to show you a couple of examples of studies that have done that.  This is one study that had both behavior and a neural readout in it.  It was conducted by Nima Khalighinejad, here in Oxford as well.  He was interested in sonicating the anterior cingulate cortex or the basal forebrain, and had a sham condition and a parietal operculum control condition.

Without going into the details of his task, he had resting state MRI data, and here I'm showing the data from the basal forebrain sonication.  What we're plotting is, for each voxel in the brain, its correlation with every other voxel in the brain, so we can compare that pattern in the TUS target condition, basal forebrain, with the control conditions.

What you can see here is that the changes in coupling are very specific to the basal forebrain, the local target that was sonicated, and to some circuits that are very strongly connected to the basal forebrain even at rest.  Those showed enhanced coupling with the targeted basal forebrain.

Together with that, Nima also showed some very specific effects on behavior.  Here I'm showing you the effect for the ACC, but there was another one specific to the basal forebrain.  One small caveat is that the MRI was acquired in different sessions, and in some cases even different monkeys, than the behavioral measurements, so they were not simultaneously recorded; but it was done using the exact same TUS procedure.  So that's one really good start toward showing that those two things go hand in hand.

But let's briefly review what these different MR-based measures can tell us, what we have at the moment, and in which ways they might be useful.  As I mentioned, I will talk a little more about resting state, spectroscopy, ASL, and task-fMRI.

Let's look a little more at the nature of the resting state effects we tend to see with ultrasound.  This is work by Lennart Verhagen, which we definitely had to hear about in this workshop.  Lennart and the group sonicated, in macaques again, either the supplementary motor area, SMA, or the frontopolar cortex, at the frontal part of the brain.  What he went on to look at is how the connectivity fingerprints change with sonication; connectivity here is established simply as BOLD fluctuations that co-occur in time between different parts of the brain.

I'm going to walk you through these graphs in case you haven't seen them before.  What is shown on the left in this spider plot is how strongly the SMA connects with other regions of the brain.  For example, under the normal control condition in blue, you can see SMA connects less strongly with the ventromedial prefrontal cortex, so less far out on the circumference of this circle, but we know it connects strongly with M1 or a midcingulate area.

What Lennart then showed is that when you sonicate the SMA, that fingerprint tends to get exaggerated.  The regions that at rest are not so strongly connected with the SMA show an even weaker connectivity profile with SMA, so you can see them move closer to the center of the circle, while the regions that are already strongly connected to SMA in the control state get even more strongly connected: the resting state connectivity becomes stronger.  And that's not true when the frontal pole is sonicated and we're looking at the SMA fingerprint.

And as I said at the start, when we look at the frontal pole, you can see that the rest condition in blue is very similar to the SMA condition in red, but the fingerprint gets exaggerated, basically made more extreme relative to its normal natural state, when the sonication is to the frontal pole.

There's also a plot of self-connectivity.  We can ask how the SMA connects to itself, to adjacent voxels in the same region, and you can see that when it's sonicated, its self-connectivity increases, and the same is true for the frontal pole, in yellow.

I think it's quite important to understand these results, because a lot of the time we are asking whether the effects are really just local, perhaps even vascular, or also remote.  Here we can see that they are definitely local, and there are definitely also remote changes, but those remote changes seem to be very specific to the network the targeted region is part of.  So the effects seem selective and interpretable, and you can make very clear predictions about which regions might change when you sonicate a specific circuit.
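
A minimal sketch of how such a connectivity fingerprint is computed: correlate the seed region's resting-state time course with a set of target ROIs, then compare the resulting profile across conditions.  The data here are synthetic and the ROI names are placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
rois = ["M1", "midcingulate", "vmPFC", "frontal pole"]
T = 300  # number of fMRI volumes

def fingerprint(seed, targets):
    # Pearson correlation of the seed with each target time course.
    return np.array([np.corrcoef(seed, ts)[0, 1] for ts in targets])

shared = rng.standard_normal(T)               # common network signal
seed = shared + 0.5 * rng.standard_normal(T)  # SMA-like seed region
targets = [w * shared + rng.standard_normal(T)  # graded coupling to seed
           for w in (1.0, 0.8, 0.1, 0.05)]

for name, r in zip(rois, fingerprint(seed, targets)):
    print(f"seed - {name}: r = {r:.2f}")
# The analysis then asks whether sonication exaggerates this profile:
# strong couplings becoming stronger and weak couplings weaker.
```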

Lennart also looked at how long the effects lasted; with this monkey protocol, they even lasted for something like up to two hours.

What we haven't talked about is how task-fMRI might be useful here.  In terms of task-fMRI, there's not really any human study to date that I'm aware of, at least published, but there is work, again coming from Matthew Rushworth's group, by Davide Folloni, and Elsa was involved in this work as well.

Here they applied ultrasound bilaterally to a part of the lateral orbitofrontal cortex, area 47/12o, and, as far as I know for the first time, they measured behavior and task-fMRI at the same time, simultaneously, to show whether the two match in terms of showing target engagement and the predicted behavioral effects.

I'm not going to show you the behavioral changes; they're very beautiful, and you can read the paper.  Again, they're very specific and as predicted.  But I am going to show you the changes detected in the task-fMRI networks.  Here you can see, just during sham, the control state if you like, what the network specific for adaptive choice representations looked like, and here you can see what that network looks like when we compare sham with the targeted sonication.

Again, there are local changes close to the sonicated area, and that's maybe what we would most readily predict.  But when we look at a slightly different contrast, which in the normal sham control state also involves some activation in this anterior cingulate region, we can see that this region, again a connected region that is part of the same network, and especially here part of the same functional, task-relevant network, is also changed after the ultrasound, even though it is a remote region.

So again, as with the resting state, we don't see random, widespread remote effects; what we see are locally specific changes and functionally relevant remote changes in connected regions.

Let me also briefly mention that there have been two studies that I'm aware of using arterial spin labeling, which can look at local and remote effects as well.  This assesses cerebral blood flow, or perfusion.  In this study, where the right amygdala or the left entorhinal cortex was targeted, there seemed to be some increased perfusion, both locally, at or at least close to the sonicated area, and also elsewhere; but other studies seemed to show more of a decrease in the perfusion measurements.  So I think it's early days for this type of measure, and I'm not an expert, but maybe others can comment on it.  I think it's interesting, but we don't quite know yet why the effects might go in quite different directions at the moment.

Then I will just briefly flag, because it's part of the MR-based measures, something that was introduced beautifully by Elsa yesterday in the first or second lecture of this workshop: with magnetic resonance spectroscopy, we cannot look at remote changes, but we can beautifully look at local changes.  We have to place the MRS voxel in the area that we're interested in and targeting, or maybe a control area, and what Elsa showed beautifully is that in the posterior cingulate cortex, she finds a decrease in GABA.  MRS is great for looking at specific metabolites, and at slightly higher field strengths we can look at even more fine-grained distinctions between them.

Before I close, I thought I would bring up one more readout, which we haven't talked about that much, more as a discussion point for later: physiological monitoring.  It might be another way to increase our confidence in target engagement, for example later on, when we get brave and start sonicating brainstem regions, but not just for the brainstem.  For someone like me, looking at the amygdala in our human study at the moment, we know that the central nucleus of the amygdala can have an influence on breathing and apnea that participants would have no awareness of, so monitoring physiology can be another way of showing target engagement, or at least giving us further confidence that we have targeted the correct, or relevant, areas.

There are many things we could record here, from ECG to pulse and breathing to skin conductance and other measures.  In our data at the moment, where we're avoiding the central nucleus and trying to target mostly the basolateral amygdala, we do not see any differences in breathing, pulse, or ECG before versus during versus after the sonication, but again, we don't have that much data in our human participants yet.

So let me just summarize before I close.  I have tried to highlight the importance of behavioral and spatial specificity for proof of target engagement.  I've also tried to highlight the importance of acoustic simulations, especially for behavior-only studies, where we have no other way of knowing what we have targeted.  Neural readouts can obviously be much more direct in showing target engagement; we were talking about MR-ARFI earlier, and that would be fantastic to have, but we don't have it at the moment.

But different neuroimaging modalities are already sensitive to different kinds of effects: local versus remote, metabolites, connectivity patterns, et cetera.  And one additional thing might be to report changes in physiological markers, as well as the behavioral effects we all like to study.

I think clear evidence of target engagement is there in a lot of NHP studies, and it is emerging in humans as well.

With that, I'd like to thank you for your attention and thank my wonderful lab and the organizers again.  Tulika, over to you.

Parameter Space

TULIKA NANDI: Thank you, Miriam.  In this last session today, I will be talking about the parameter space, which is something everyone has heard about in a number of talks by now.  Just to recap: when we want to run an ultrasonic stimulation study, we need to start by deciding what fundamental frequency, intensity, and duration we want to use.  But a lot of studies are actually using pulsed applications, which introduces a new level of parameters: the pulse repetition frequency, or PRF, and the duty cycle.

Finally, we want to decide how long to make the trials.  These could be on the order of milliseconds for what we call online effects, or online studies, where we want to measure the immediate effects, or they might last longer for offline effects, where we are looking for longer-lasting neuroplastic changes.  We also have to decide the interval between trials, which could again be on the order of seconds, or could even be hours or days, depending on what we are looking for.
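
The parameter space just listed can be collected into a single structure; a minimal sketch follows, with example values that are placeholders rather than recommendations:

```python
from dataclasses import dataclass

@dataclass
class TUSProtocol:
    fundamental_hz: float    # carrier (fundamental) frequency
    isppa_w_cm2: float       # spatial-peak pulse-average intensity
    prf_hz: float            # pulse repetition frequency
    duty_cycle: float        # on-fraction of each pulse period
    trial_duration_s: float  # length of one sonication trial
    inter_trial_s: float     # interval between trials

    @property
    def ispta_w_cm2(self) -> float:
        # Spatial-peak temporal-average intensity over the pulse train.
        return self.isppa_w_cm2 * self.duty_cycle

example = TUSProtocol(fundamental_hz=500e3, isppa_w_cm2=10.0,
                      prf_hz=5.0, duty_cycle=0.10,
                      trial_duration_s=80.0, inter_trial_s=10.0)
print(f"Ispta = {example.ispta_w_cm2:.1f} W/cm^2")
```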

What I have been thinking about a lot, as a human researcher, is that there's all of this wonderful in vitro and animal work comparing different stimulation parameters, and I've been trying to look through it to figure out whether it can help me decide what parameters I should use for the particular applications I want in humans.

Through this presentation, I'm going to show different bits of data to try and find some general ideas for how we can set parameters.  Unfortunately, I will be saying we don't know a little more often than I would like.  But I do think that there are some general principles that we can take to help us plan our studies.

One thing I would like to mention before we go into this is that we can use the protocol to optimize the active ingredient in ultrasound.  These active ingredients fall into two major categories: thermal effects and mechanical effects.  The thermal effect, of course, is heating.  The mechanical effects can be acoustic pressure leading to particle displacement strain, the acoustic radiation force, which can lead to acoustic streaming, and finally stable or unstable cavitation.

Now, what we can do with our parameters is tweak these a little.  As Kim mentioned in her talk yesterday, all of these will be simultaneously present; we are not going to turn one off and turn another on.  But we can bias our protocol towards one of these effects.

For instance, the particle displacement strain is inversely proportional to the fundamental frequency, while the ARF is directly proportional to it.  So if we think our active ingredient is the particle displacement strain, we might want to use lower fundamental frequencies.

Another example: the particle displacement is proportional to pressure, whereas the ARF is proportional to the square of pressure, that is, to intensity.  So if we raise the pressure, the ARF increases faster than the particle displacement strain.  This is just a framework I would like you to keep in mind as I go through a number of the parameters, and I'll try to come back to it at the end.
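
These proportionalities can be written down directly; the sketch below uses arbitrary normalized units, since the real constants depend on tissue properties:

```python
# Scaling relations described above, in arbitrary normalized units.
def particle_displacement(pressure, frequency):
    # Displacement scales with pressure, inversely with frequency.
    return pressure / frequency

def radiation_force(pressure, frequency):
    # ARF scales with intensity (pressure squared) and with frequency.
    return pressure**2 * frequency

for f in (0.25, 0.5, 1.0):  # normalized fundamental frequencies
    p = 1.0                 # normalized pressure
    print(f"f = {f:4}: displacement {particle_displacement(p, f):5.2f}, "
          f"ARF {radiation_force(p, f):5.2f}")
# Doubling pressure doubles displacement but quadruples the ARF.
```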

Let's start with the fundamental frequency.  You have already heard a number of times during this workshop that the skull attenuates higher frequencies more than lower frequencies, and you've already seen this figure showing that by the time we get to about 900 kilohertz, the human skull will attenuate about 50 percent of the ultrasound.  So even if we think our active ingredient scales with frequency, say the ARF, we have to remember this limitation: in the human brain, we will lose a lot of energy if we use higher fundamental frequencies.

If we set the attenuation aside and look at EMG responses in mice, what we see is that the normalized EMG amplitude is actually higher at lower frequencies.  This is significant because in mice there's not much attenuation: the skull is very thin and most of the energy actually gets through, and still we see that the lower frequencies are more effective.

Here's another study that compared a number of different frequencies.  In the range from 0.3 to 0.6 megahertz, we see that the success rate, the probability of eliciting an EMG response, is higher for the lower frequencies.  They also looked at higher frequencies in the megahertz range, and the thing I want to point out is that the x-axis looks completely different: to achieve the same success rate with higher frequencies, we need to use much higher intensities.

One possible explanation is that the focal size is much larger at low frequencies, so maybe we are just stimulating a larger volume of neural tissue and therefore getting a stronger response.  But I've already shown you in vivo evidence that, even setting the attenuation aside, there is higher efficacy at lower frequencies, and possibly this points towards a particle-displacement-driven mechanism.

What I haven't shown you is that there is a little bit of evidence that in vitro, higher frequencies actually tend to be more effective.  This might be driven by the acoustic radiation force; we don't know.  And we cannot rule out the possibility that in different settings, different active ingredients are important.

This is something I have already mentioned, and given all that we know at this point, I am inclined to say that for human studies, it is useful to use lower frequencies that will actually be transmitted into the brain.

Having made that decision, let's move on to the concept of dose, which has been mentioned several times and is something we as a community are actively discussing.  But before I go into dose, I would like to make a distinction between dose and exposure, a concept that comes from radiation therapy and has been discussed in the context of ultrasound in the two papers cited here.

We are exposing the tissues, in this case the brain, to a certain amount or intensity of ultrasound, but only a part of that actually contributes to neuromodulation.  One place where this is relatively easy to think about is thermal effects: only a part of the ultrasound is actually absorbed in the tissues and leads to any thermal or neuromodulatory effect, and that is the part we are interested in.

We are not necessarily interested in exposure for neuromodulatory effects; we are still interested in it for other reasons, like safety.  But what I want to point out is that what we've been talking about so far is really exposure rather than dose.  Even when we talk about intensity in the brain, rather than extracranially, we're still talking about exposure, because we don't know how much of it actually contributes to neuromodulation.

There is another concept, again borrowed from radiation therapy, which is that of effective dose.  So let's say some of the ultrasound is absorbed; we still need to factor in a couple of other things to get to the effective dose.

Starting from the absorbed dose, or exposure, we might need to factor in the mechanism of action.  Say the acoustic radiation force is our active ingredient: in that case, at the same intensity, a higher-frequency stimulus is actually going to be more effective.  The other thing we need to factor in is the brain area we are interested in, and when I say brain area, what I'm really talking about is the types of neurons in that area, because we've heard from a number of speakers now that different types of neurons might have different sensitivities to ultrasound.

Only when we have factored in all of these things do we get to the effective dose.  So, to reiterate: the effective dose of ultrasound likely depends on the mechanism of action and the target brain area, but also on the desired biological outcome.

This directly relates to what Li Min Chen just mentioned: if you are interested in a net inhibitory outcome, then potentially you want to use lower intensities than if you want an excitatory outcome.

For now, going with the assumption that what has been reported in most studies is actually exposure, or exposed dose, I'd like to propose a provisional definition of dose as the integral of pressure or intensity over time.  This is of course not something pulled out of thin air; it is based on ideas presented in these papers, and I'll show a little data to support this provisional definition.  Here is some in vitro data looking at the calcium response in cortical neurons, and you can see clearly that as the intensity of the ultrasound increases, the calcium response scales with it.

Similarly, as the duration of the pulse increases, the response increases.  If we look at in vivo data, here the mouse EMG response, we see a similar effect: as the intensity and duration of sonication increase, the success rate of eliciting an EMG response increases.

In the same study, they looked at what happens if you simultaneously manipulate the intensity and the duration.  You can see from this figure that at higher intensities, longer durations, and especially the combination of the two, we get the strongest effects.  When a model was fit to these data, what came out were these lines of constant success.  For those of you familiar with electrical stimulation, this is very reminiscent of a strength-duration curve.

To achieve the same outcome, we can either use a high-intensity stimulus of short duration or a long, low-intensity stimulus.
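
A minimal sketch of the provisional definition and of a constant-dose tradeoff, which is exactly what a strength-duration-style curve expresses; the numbers are arbitrary:

```python
import numpy as np

def dose(intensity_w_cm2, duration_s):
    # Provisional dose: integral of intensity over time, which for a
    # constant intensity reduces to a simple product.
    return intensity_w_cm2 * duration_s

# For a time-varying (pulsed) intensity, integrate numerically.
t = np.linspace(0.0, 1.0, 1001)
i_t = 5.0 * (np.sin(2 * np.pi * 5 * t) > 0)  # 5 Hz pulsing, ~50% duty
pulsed_dose = np.sum(i_t) * (t[1] - t[0])
print(f"pulsed dose: {pulsed_dose:.2f} W*s/cm^2")

# Lines of constant dose: intensity-duration pairs with the same integral.
target = 10.0  # W*s/cm^2, arbitrary
for duration in (0.1, 0.5, 1.0, 2.0):
    print(f"duration {duration:4} s -> intensity {target / duration:5.1f} W/cm^2")
```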

Going back to our provisional definition: I've shown you a little data from different models indicating that our definition of dose as the integral of pressure or intensity over time is supported by some empirical data.  But there are a lot of points that still need to be investigated or discussed.

One thing we don't know is the nature of the dose-response relationship.  In all likelihood, there is a threshold, a minimum intensity or duration required to get an effect, and a saturation point beyond which increasing the dose is no longer useful.

We don't know whether the effect is linear or nonlinear.  Based on the data we've seen, I suspect it is nonlinear.  This matters because it will help us titrate dose: if the effect is related to pressure, note that when we increase the pressure, the intensity goes up quadratically, and that is important for titrating our dose.
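
The quadratic relationship is just the plane-wave relation between pressure and intensity, I = p^2 / (2 * rho * c); a quick sketch with approximate soft-tissue values, where the point is the scaling rather than the exact numbers:

```python
# Plane-wave relation between peak pressure and intensity:
# I = p^2 / (2 * rho * c).  Approximate soft-tissue values below.
RHO = 1000.0  # density, kg/m^3
C = 1500.0    # speed of sound, m/s

def intensity_w_cm2(p_pa):
    return p_pa**2 / (2 * RHO * C) / 1e4  # W/m^2 -> W/cm^2

for p_kpa in (200, 400, 800):
    print(f"{p_kpa} kPa -> {intensity_w_cm2(p_kpa * 1e3):6.2f} W/cm^2")
# Doubling the pressure quadruples the intensity.
```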

I first showed calcium data and then EMG data, which, as you can already see, are quite far removed from each other, so the dose might not be the same in both cases.

There could also be homeostatic effects, which Lennart mentioned yesterday.  If I apply a neuromodulatory protocol today and it has some sort of neuroplastic effect, and I come back and apply the exact same dose and protocol the next day, I might get slightly different effects because the system has already adjusted.

In this section, the last thing I want to mention is the idea of dose rate.  This is easier to think about with thermal effects.  A very high-intensity stimulus of short duration would actually produce more heating than a low-intensity stimulus lasting a very long time: in the latter case, the heat is presumably conducted away, or carried off by blood circulation.  So even though the integral of intensity over time is matched, we still get more heating in the high-rate condition.

We don't quite know how this plays out for mechanical effects, but it is something to keep in mind: the rate at which we deliver the dose might be important.
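
A toy lumped thermal model, with invented rate constants, showing why two protocols with the same integral of intensity over time can reach different peak temperatures once cooling by conduction and perfusion competes with heating:

```python
import numpy as np

def peak_temp_rise(intensity, duration, k_heat=0.1, k_cool=0.5, dt=1e-3):
    # dT/dt = k_heat * I - k_cool * T, integrated with forward Euler.
    # k_heat and k_cool are invented; they only illustrate the idea.
    temp, peak = 0.0, 0.0
    for _ in np.arange(0.0, duration, dt):
        temp += (k_heat * intensity - k_cool * temp) * dt
        peak = max(peak, temp)
    return peak

# Same "dose" (intensity x duration = 10), delivered at different rates:
print(f"high rate: {peak_temp_rise(intensity=10.0, duration=1.0):.2f}")
print(f"low rate : {peak_temp_rise(intensity=1.0, duration=10.0):.2f}")
# The fast, intense delivery peaks several times hotter.
```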

So far, all the data I showed about dose used continuous ultrasound, but let's now introduce the new layer of pulsing.  Here's an example that compared pulsed versus continuous ultrasound in the worm C. elegans.  On the y-axis you see the response frequency; the orange line is the temperature increase, so please set that aside for now.

Here are the two points I'd like to focus on.  If we look at the 50 percent duty cycle, we see a much higher response frequency compared to 100 percent, which is essentially just continuous ultrasound, and this is despite the fact that there is a lower integral of intensity over time.

So now I'm asking myself: do I need to throw out the idea of dose that I've been talking about for so long?  Luckily, no.  This is data you saw from Shy Shoham yesterday, and it is all pulsed ultrasound; there's no continuous condition here.  You can either maintain the same peak pressure and change the duty cycle, which you can think of as changing the dose just by increasing the duration of ultrasound application, or maintain the same duty cycle and increase the pressure, and in two different types of neurons they showed that the calcium response scales with both pressure and duty cycle.
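
The arithmetic behind "changing the duty cycle changes the dose" is simply that the time-averaged intensity scales with the on-fraction; a short sketch, with an assumed pulse-average intensity, makes it explicit:

```python
# At fixed peak pressure, the time-averaged exposure scales with duty
# cycle: Ispta = Isppa * duty cycle.  Isppa here is an assumed value.
ISPPA = 10.0  # W/cm^2, spatial-peak pulse-average intensity (assumed)
for dc in (0.1, 0.3, 0.5, 1.0):
    print(f"duty cycle {dc:4.0%}: Ispta = {ISPPA * dc:5.1f} W/cm^2")
# 100% duty cycle is continuous ultrasound; at 50%, the integral of
# intensity over the same total duration is half as large.
```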

So it seems that increasing the pressure and the duration of sonication are still helpful.  Then what is it about pulsing that gives us some added benefit, either independent of or on top of the dose effect?

One way to think about this is that the active ingredient in ultrasound might be turned on and off by pulsing, and we could turn it on and off at biologically relevant frequencies.  This is an idea that has been used a lot in electrical and magnetic stimulation.

I'll now show a few examples of data suggesting that the pulse repetition frequency, or PRF, can be an important factor.  Here's some in vitro data looking again at calcium signals, comparing a high PRF of 1,500 hertz with a low PRF of 300 hertz, and in this particular case, the 1,500 hertz was more effective.  One more point before I say why this data is particularly convincing.

The integral of intensity over time is matched here, so what I'm calling dose is matched in these two conditions.  The reason this is particularly convincing is that it is in vitro, so there is no auditory confound.  If we instead look in vivo -- this is spiking activity in rats, again with the dose matched -- we see that the regular-spiking, presumably excitatory, neurons scale their response when the pulse repetition frequency is changed.  But this effect is not seen in inhibitory neurons.

Now, one thing that is missing here is that if we look at the audiogram of rodents, we do see that they are more sensitive to the higher frequencies, and we can't rule out the possibility that these excitatory neurons are responding more at higher frequencies because this is more audible to the rats.

Also, what I've been talking about so far are online effects of PRF, that is, short stimulation with immediate effects.  There is more convincing data when looking at offline effects, and this is the 5 hertz protocol which has been mentioned a few times now.  On the y-axis, we are looking at motor cortex excitability measured using TMS, and when we look at the red line, which is the 5 hertz protocol, we see an increase in excitability that lasts for a little less than an hour.

If instead they use a 1,000 hertz protocol, which is also nested within another frequency, we don't see the same effect.  Another example is the 10 hertz protocol; in this case, we actually get a suppression of the MEP which lasts for at least an hour.

These are the two protocols that have been used a fair bit now.  The 5 hertz, for instance, is also used in TMS, and we know it is a frequency that is important for memory formation.  So we can look at which frequencies are biologically interesting and potentially try those in our ultrasound protocols.

So preliminary evidence suggests that the frequency of pulsing is relevant for neuromodulation.  What we don't know yet is which specific frequencies we need to optimize or maximize specific outcomes.  The other thing to think about is whether the duty cycle is simply a way to manipulate dose, or whether it has further significance.  One thing I can think of is that the ultrasound needs to be on for a minimum duration to interact with ion channels or whatever other mechanism we're interested in.

But also what was mentioned by both Keith Murphy and Shy Shoham is that it's possible that certain duty cycles are more effective for activating inhibitory neurons.  So at the lower duty cycles, we might see a net inhibitory effect, but at higher duty cycles, when we are activating both types of neurons, the net effect we see actually depends on the proportion of those neurons in the brain region that we are targeting.

I would like to leave you with a few last thoughts.  The first one is this idea of the slew rate.  This is a graph you've already seen from Kim.  If the ultrasound is turned on all of a sudden, there are these complex frequencies which are audible, leading to an auditory brainstem response.  If it is smoothed, we don't see the same auditory response.  But the thing is, we don't know whether these smoothed pulses are as effective as the sharp-onset ones.  Is there something about turning it on quickly that's actually helpful?

So in humans, there are two bits of data that I'd like to show.  This is again a graph you've already seen, where we used these sorts of rectangular pulses, and what we see here is that the on-target ultrasound, the active control, and a sound without any ultrasound at all seem to have the same effect.  It is possible that there is some neuromodulatory effect hiding behind this, but with this particular outcome measure, we cannot tell them apart.

In this other study, we looked at the effect of ultrasound on a visual evoked potential, which is what you're seeing in red, and when we apply ultrasound, which you see here, we see that in some parts of the visual evoked potential, there's a bit of a modulatory effect from the ultrasound.

This was using ramped pulses.  So we do think that ramped pulses can also be effective, but you will notice that we had to drop the pulse repetition frequency to have a ramp that was long enough to remove the auditory confound.  So that's something to think about.
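Here is a minimal sketch of such a ramped envelope, using a raised-cosine ramp.  It also encodes the constraint just described: two ramps have to fit inside the pulse on-time, which is the duty cycle divided by the PRF, so a long ramp forces a lower PRF.  All values are assumed.

```python
import numpy as np

def ramped_envelope(prf, duty, ramp_s, fs=2_000_000):
    """Envelope of one pulse with raised-cosine on/off ramps."""
    on_time = duty / prf
    assert 2 * ramp_s <= on_time, "ramp too long: lower the PRF or raise duty"
    n_on, n_ramp = int(on_time * fs), int(ramp_s * fs)
    env = np.ones(n_on)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))  # 0 -> 1
    env[:n_ramp] = ramp           # smooth onset
    env[-n_ramp:] = ramp[::-1]    # smooth offset
    return env

env = ramped_envelope(prf=10.0, duty=0.3, ramp_s=0.005)   # 30 ms on: fits
# ramped_envelope(prf=1000.0, duty=0.3, ramp_s=0.005)     # 0.3 ms on: fails
```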

The idea of state dependence has been brought up a few times, and this is just some data from action potentials in awake behaving monkeys.  In this graph, please ignore the triangles which are latency measures, but if you just look at the blue and the red, the blue is without ultrasound.  If the ultrasound comes on while this neuron is not active, then you see in red that when the ultrasound is on, there's an increase in activity.  This is just showing the difference.  So you can see there's a net increase in activity.

If on the other hand the neuron happens to already be active when we turn on the ultrasound, then we actually see a small dip in the activity of the neuron, which is again shown here in the difference plot.

The very last thing I'd like to say is we really need to think about what we are optimizing for.  What is it that is our desired outcome here?  I've already alluded to this a bit, because we are looking at spiking or calcium response, and then we jump to something like EMG and as you can see immediately, there's a lot in between.

Then when we go to clinical effects, there are other things in between.  So a protocol that is optimal for one outcome might not be optimal for another.  There might also be differences between applications.  Let's say you're studying a cognitive process that is very easily affected by auditory confounds; then it's quite difficult to use an audible protocol for that.

For clinical applications, on the other hand, if we are able to convincingly show that there is some effect of the ultrasound over and above the auditory confound, then it doesn't really matter if the patients can hear it when we are using it for treatment.  And there's also the issue of whether we want acute or chronic effects.  Here's an example using this 10 hertz protocol, which has now been used in a number of studies, several of which Miriam also mentioned.  If we look at the amplitude of the somatosensory evoked potential, comparing sonication of the VPL (open circles) versus off-target, right at the beginning we actually don't see any effect.

So if we looked only at this early part, we would say that this protocol is not effective.  But if we look a little bit later, we see that it has quite robust effects, which as I said have now been shown in a number of studies.  So let's go back to where we started.  What I would love to be able to say is that X is the active ingredient in ultrasound, that it interacts with the target neurophysiological processes, and that we can optimize X by tweaking our parameters in a certain way.  That would be designing protocols based on a knowledge of the active ingredients and their interactions with both neurophysiology and the parameters that we can tune.

Here's what I started with.  So this would be one great example: if I know that particle displacement, or strain, is the active ingredient, then I will use lower fundamental frequencies.  Being a little more pragmatic, most of the data that I have shown you essentially says we can optimize the desired neuromodulatory effect by setting a certain parameter to a certain value.

We can then go back and say that this parameter relates to a certain active ingredient and make an inference about that.  This is a very resource-intensive method, and we might be missing parts of the parameter space: there are so many things we can change that we might miss the optimum simply because we cannot practically try all the combinations.

What I would hope to see is that we'll be building mechanistic models and also developing effective protocols in parallel, which is the most likely scenario.

I'd like to thank Lennart, Kim, and Charlie, who helped me a lot to develop the ideas that I have presented here.  Thank you.

Q&A

ELIZABETH ANKUDOWICH: Thank you so much, Tulika.  What a fascinating talk.  I think we still have some time for a question or two for the speakers.  There was a question here about reconciling some of the increases and decreases in response that have been seen across studies; could some of the speakers discuss the potential reasons for these divergent findings?

MIRIAM KLEIN-FLUGGE: I can start.  It depends exactly which effects we're talking about, but for the resting state effects, I think it is actually quite expected that effects can sometimes mean an increase in coupling with a region connected to the target and sometimes a decrease.  What we have seen generally, and that's part of Lennart's work that I presented as well, is that the sonicated region becomes more of a function of itself and its closely connected regions, but those can of course include closely connected, negatively coupled regions.

For example, when TMS is used as a treatment for depression, it seems to be most effective when we're targeting the part of the dorsolateral prefrontal cortex that is negatively coupled with the subgenual ACC.  So I would imagine that if we applied focused ultrasound to the subgenual ACC, we might actually see a decrease in its coupling, an even stronger negative coupling with the dorsolateral prefrontal cortex, because that's a function of the baseline of the region.  And with BOLD, we can't even start to talk about this at a mechanistic or cellular level, because it's really complex: it reflects the interplay of different neuron types, and we don't know whether the target we're sonicating projects mainly to a connected region's interneurons or to its excitatory neurons.

Another thing that has been mentioned is the composition of the region; in cortex we have a different inhibition-to-excitation balance than in subcortex, for example.  So there are lots of potential factors, but maybe others can add to it.

LI MIN CHEN: We have carefully monitored the temperature with the ultrasound parameters we chose, and we didn't detect any significant temperature change at the target.  We have also recently developed a sequence that can take the functional BOLD MRI images and calculate the temperature from them, and we have demonstrated that the algorithm can detect a 0.3 degree C temperature change.

So we do think that can be built into our pipeline: when you acquire functional BOLD MRI data, if you export the complex data, you can run an offline analysis to quantify the temperature change during the modulation period.
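The specific sequence was not described, but the standard way to turn complex MR data into temperature is proton-resonance-frequency shift thermometry (not to be confused with pulse repetition frequency).  Here is a generic sketch with textbook constants; the field strength and echo time are assumed.

```python
import numpy as np

GAMMA = 42.58e6    # gyromagnetic ratio, Hz/T
ALPHA = -0.01e-6   # thermal coefficient of the water resonance, per deg C
B0 = 3.0           # field strength, T (assumed)
TE = 0.030         # echo time, s (assumed)

def delta_temperature(img_hot, img_ref):
    """Temperature change map (deg C) from two complex images."""
    dphi = np.angle(img_hot * np.conj(img_ref))  # robust phase difference
    return dphi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

# A +0.3 deg C change at 3 T with TE = 30 ms is a phase shift of only about
# -0.07 rad (roughly 4 degrees), which is why drift correction matters.
```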

ELIZABETH ANKUDOWICH: So that response addressed the question about the relationship between temperature increases and modulation of the BOLD signal.

Follow-up question: does it matter when you take the measurements following the sonication?  Could that explain some of the differences in the patterns of BOLD response that you're observing?

MIRIAM KLEIN-FLUGGE: I can comment.  I think it is more likely that the protocol and the composition of the target region determine the direction of the effect, but it's a good question.  So far, no one has compared the online and offline effects within the same target region in the same study.  But I would imagine that other factors play a bigger role in the directionality of the effects.

ELIZABETH ANKUDOWICH: Then one last question for Tulika, on the effect of PRF: has anyone looked at adding pulsing within the 5 hertz or 10 hertz protocols, meaning within the 20-millisecond on-time?  So keeping the theta frequency but then adding another layer of pulsing.

TULIKA NANDI: Thank you for that question, Brad.  I don't know of anyone who has looked at that.  This is certainly an idea that has been used in TMS where we nest one frequency within another frequency.  Given that these 5 and 10 hertz, at least in the examples I showed, seemed to have opposite effects, I wouldn't really be able to say what happens if we superimpose them on each other.  It's something that remains to be seen.

ELIZABETH ANKUDOWICH: Thank you so much.  I think that it's probably a good time to transition to the panel discussion.  Thank you so much to all of our speakers for this session for joining us, and I'll turn it over to Lennart.

Panel Discussion

LENNART VERHAGEN: Hi, everybody.  I'm afraid that it's me again.  But I am very happy that we can rely on the expertise of these panelists.  You've met quite a few of them already, either as speakers or just during the session.

We had some fantastic talks in this last session.  I would love to hear some of your reflections on the important topics that were discussed.  For example, Holly was really diving into definitions of what targets are and at what level we even define them, and Li Min Chen and Elsa, in both of your work you are targeting one specific site but then looking at all of these network effects.  Would you mind reflecting on what we are actually targeting?  Li Min Chen, if I may give you the floor first?

LI MIN CHEN: Yes, in our study we chose the target because we had previously established the whole-brain somatosensory network and how those networks relate to function.  In our ongoing work, because in animal studies, if we're trying to close the loop, we need some sort of readout and a way to interpret the network change, and the animal cannot report its behavior in our setting.  What we're hoping is that, as the human pain studies have shown, there is a signature of pain, a pattern that is indicative of subjective pain perception.  So our idea is to use the network as another way to infer behavioral relevance; that's what we're going after.

We also did some network analysis before doing any modulation, to identify the hubs that Holly has mentioned.  We have found that some regions serve as hubs more than others.  So that also gives us some indication of where we really want to perturb.  That's our thinking.

LENNART VERHAGEN: Fantastic.  And Elsa, you have also targeted specific sites, but if we're going for clinical translation, how should we take this into account, or should we be targeting multiple sites?

ELSA FOURAGNAN: To answer the question of what we are targeting, I think it is very important to think about the state: even if we're just targeting a region, whether it is at rest or engaged in a task, it will engage other regions, so the network might be very different.  It's very important to consider the state of the region that we're targeting.  But the question of multisite TUS is obviously opened up for us by the spatial specificity of TUS compared to other NIBS techniques, and we could easily imagine amazing studies where, instead of looking at specific regions, we perturb the communication between two regions.  A little bit like in animal models with crossed-disconnection lesions, where you lesion, say, region A in one hemisphere and region B in the other, knowing that what you care about is the communication between the two.

So you would leave the homologues intact, but across the two hemispheres you would be disturbing the communication between the two regions.  That would be a brilliant model: instead of just relating a brain region to a behavior, you'd really be looking at the importance of the communication between two regions for that specific behavior or cognitive process.  I think that's really exciting.

LENNART VERHAGEN: That is a very exciting idea.  We've also seen in other noninvasive brain stimulation techniques that such specific and tailored stimulation also affords really tight controls.  You've already talked about state dependency; that's actually one of the controls, to control the state of the area you are stimulating, or here, if you're talking about the communication between two brain regions, to change the phase of how we stimulate them rather than the sites.  But I'm sure there are many other appropriate controls, and maybe Til and Tulika, I could ask you to reflect on this.

TIL BERGMANN: I think we have learned a lot in the last 30 years of noninvasive brain stimulation already, and we don't need to reinvent the wheel.  In a nutshell, the perfect control condition does of course not exist, but it would mimic every aspect of the experimental condition other than the assumed actual mechanism.

This would ensure that we can eventually attribute our observed effects to the actual transcranial stimulation of our target neurons, and not to alternative explanations like peripheral auditory, tactile, or heating-related (inaudible) confounds, which are particularly relevant for the acute effects, or participant expectations, which are also relevant for delayed effects.

A typical thing that is done is a sham stimulation, where you either flip the transducer by 180 degrees or use ultrasound-absorbing materials.  That can only mimic partial aspects, like the airborne sound but not the bone-conducted sound, and is therefore an insufficient, low-level control condition.

What has been established a bit as a gold standard in other NIBS techniques is an active control, where you stimulate another brain region that is not involved in the neural function or behavior of interest and is not part of the same network.  That is more of a high-level control and can mimic the sensory co-stimulation confounds much better.

And finally, focused ultrasound, unlike TMS or tES, has the chance to steer the beam, the focus, to white matter regions or ventricles to potentially avoid neurostimulation entirely, or, with phased arrays or acoustic lenses, maybe even to defocus the beam intracranially completely while preserving the effects on bone and skin, et cetera.  So we have a lot of opportunities for controls, and we should make use of them.

LENNART VERHAGEN: That sounds like a fantastic idea.  Tulika, do you have something to add?

TULIKA NANDI: The only thing I would add is some of our practical experience with controls in Til's lab: you actually need to control for the confound in every individual participant, because something like the auditory confound can differ for each participant, given the shape of their skull.  That is why the active control site is going to be much stronger than just trying to mask the sound.

LENNART VERHAGEN: Of course.  That's a very good point to take into account.  We've talked about some of the controls for these effects, but a large segment of this session was devoted to actually measuring and hopefully quantifying some of these effects, and, Miriam, I noted that both you and Li Min were quite heavily reliant on the magnet for an MR readout of these effects.  Would you mind reflecting a bit on different neuroimaging tools that combine well, or not so well, with ultrasound?

MIRIAM KLEIN-FLUGGE: Sure, I can briefly comment.  I think I've hinted at it a little bit in my presentation already, but it obviously depends on what we want to measure.  So with MEG or EEG, we're much more likely to measure something that is closer to firing or maybe evoked responses.  So I think with the MR-based measures that are highlighted, we still have quite some flexibility.  MRS is very different and more local than some of the other more remote effects we can measure.

But yesterday we talked about white matter changes as well.  There's the potential to consider (inaudible) which I haven't seen yet.  And so the question I think is what we want to measure; are we focusing on local or remote effects?  Are we trying to be as close to spiking or is it more of a network effect that we're trying to measure, and then I think different combinations obviously bring different technical challenges as well, which is a separate issue.

LI MIN CHEN: I can share what we do in our group as well: we try to take advantage of having the subject in the scanner.  We have incorporated DTI and SWI, all sorts of advanced parametric imaging, to serve as safety monitoring, because the contrast of those advanced MRI techniques can be sensitive to microhemorrhage or edema immediately before or after the ultrasound exposure.  So that is part of our protocol: we acquire those measurements before and after and use them to monitor safety, in addition to our functional modulation study.  With an advanced system, the full set can be acquired in about 20 minutes, and if you determine that an indicator is very important, you can acquire it.

LENNART VERHAGEN: That's a fantastic addition.  If I can criticize some of my own work: we have used plasticity-inducing protocols, and then, for some stupid reason, chosen a very indirect measure of their effects, namely resting state fMRI.  Til, can I ask you to be maybe my harshest reviewer and reflect a bit on what we would consider proof of target engagement for plasticity-inducing protocols?

TIL BERGMANN: I don't even think that it was a stupid choice; it's one of the possibilities.  It's important to consider the difference between excitability and excitation.  If we have a suprathreshold technique or protocol that produces firing, action potentials, et cetera, then we produce immediate, acute changes that can be picked up more or less directly with EEG, fMRI, et cetera.  If we have a subthreshold technique that is maybe just changing the excitability online, or following an offline protocol based on synaptic mechanisms over a longer time period, then we still need some signal, something to drive the network, to make those subthreshold changes visible.

That can be spontaneous activity, just happening in the brain, and then resting state fMRI is one of the measures to pick it up, as is resting state EEG.

But we can also use a sensory stimulus, we can use a task, or we can use TMS in the case of the motor cortex to drive the system and produce an MEP that may then be modulated by the more subthreshold effects of focused ultrasound.  So we have to think about what kind of changes we believe we have produced and then choose the respective imaging modality, and maybe drive the system: it might be easier to pick up these effects if we put the system under load, for example with a task.

LENNART VERHAGEN: That's fantastic.  So we have discussed effects, designs, and controls, how we can measure them and what might be appropriate.  But in the second half of this discussion, I would also like to move to what we might be able to do better, or what we should really be considering.

So changing pace and topic a little bit, Elsa and Miriam, if I could invite you to share your wisdom on one critical aspect that we've heard about before, especially in Session 5, and it came back here in Session 6: the uncertainty.  It seems quite daunting.  Are we even sure that we're delivering any relevant energy where we'd like it to go?  How are we going to go about this?  How are we going to survive this?

ELSA FOURAGNAN: To start, this is something that we are always thinking a lot about, and it's true that in human studies particularly we don't have a direct readout of target engagement; MR ARFI could potentially provide one, but that's not the case yet.

So we have to rely on very good skull imaging, and on very good, accurate simulations, and all of these carry their own uncertainty, right?  The skull images may not be that good, or we may not even have one; the simulations may not match exactly what we measure in the tank; and then on the day itself there may be issues of coupling and of correctly positioning the transducer with respect to the plan.  When we add up all those levels of uncertainty, as you said, it's very daunting.

So I think the first thing is to be aware of all those levels, very clearly report them, and perhaps even be able to quantify them.  I'd love to hear what Miriam thinks on this, because, yes, this is a huge task for all of us.

MIRIAM KLEIN-FLUGGE: Yes, I very much agree with all you said, Elsa.  Ideally, it would be great if at some point we were able to quantify the uncertainty, at least to have error bars around our various measures, but I think we're not quite there yet.  And there are all these dimensions to it.  There's the spatial uncertainty: if the angle of your neuronavigation is just slightly wrong, that will affect deep targets so much more than shallow targets.  That's one level of uncertainty.
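The geometry behind that point is simple enough to sketch: a small angular error displaces the focus laterally by roughly the depth times the tangent of the error, so the same error costs more at depth.  The numbers below are purely illustrative.

```python
import numpy as np

angle_error_deg = 2.0  # hypothetical neuronavigation angle error
for depth_mm in (20, 40, 70):
    offset = depth_mm * np.tan(np.radians(angle_error_deg))
    print(f"depth {depth_mm} mm: ~{offset:.1f} mm lateral offset")
# 20 mm: ~0.7 mm; 40 mm: ~1.4 mm; 70 mm: ~2.4 mm
```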

Then there is uncertainty around the pressure, and Brad was talking about how we don't quite know the absorption in trabecular bone, and then there is maybe also temporal uncertainty, about how long our effects last.  So I think: a, being aware of all the dimensions; b, ideally having some way of quantifying what that uncertainty is.  But for that, all the stages where the uncertainties add up still need to become more precise, I think.

What do you think about uncertainty around coupling?  You mentioned coupling.

ELSA FOURAGNAN: Well, it's one where, perhaps, across labs we don't necessarily do things the same way, and where the losses can be nonnegligible.  Think about hair preparation, or how different experimenters might use different techniques.  We could definitely try, in a tank, to measure different hair preparation techniques and how to minimize this.  And that is just one loss, right?  One place where things could go wrong with respect to what you've planned.  But I agree that everything you're describing about neuronavigation is also a big one here; registering the participant and the transducer together in space can also be a challenge.

LENNART VERHAGEN: So that's uncertainty that happens already within one experiment, and it just so happens that maybe our participants also introduce a little bit of uncertainty.  Unfortunately, not everybody is a clone or a twin.

Tulika, would you mind reflecting a bit on how we are approaching or how we can deal with the variability that we have between subjects?

TULIKA NANDI: Definitely.  Here I think we can really draw on other noninvasive brain stimulation, because we have had years of experience with this issue and there are some obvious things that we can start thinking about.

One is anatomy.  Everyone's skull looks different, and that's something that's been discussed a fair bit with modeling and how we need to account for it.  The other would be state dependence, and we can look at that in a couple of different ways.  One participant comes in after three cups of coffee; one hasn't had any.  That can cause some differences.

But there's also a lot of work that Til has done in the TMS field on what is happening in the brain at the moment you apply the ultrasound.  If we don't know what's going on in two different participants, then we are going to see different effects in them.

And then there's of course the work that Holly presented today that if people have different functional connectivity or some sort of different baseline physiology, then the same protocol might have different effects in different individuals.  That's not necessarily bad.  We just need to be aware of when and where to apply which protocol.

Miriam, would you like to add to that?

MIRIAM KLEIN-FLUGGE: I think you've covered it all, Tulika.  Maybe the only thing, which was mentioned earlier and which we're trying at the moment, is to really match the in situ intensity or pressure that we can reach within the brain of a given individual, which is related to the planning.  So I think that will remove some uncertainty, but there will definitely still be loads left.

LENNART VERHAGEN: We have seen a lot of variability between participants, but in this session, and actually across the whole workshop, we also saw quite some variability in the parameter space and in the effects that were elicited.  I was noting between Li Min's and Miriam's talks that one focused more on immediate, acute effects, the other more on delayed effects, and what we're aiming for in the end are clinical effects, one step or one layer beyond.

Til and Li Min, may I invite you to reflect on this?  How do we -- how are all of these different levels of effects related?

TIL BERGMANN: That's really hard to tell.  We know from the other noninvasive brain stimulation techniques that typically the online effects are a bit larger than the offline effects, so the acute ones are larger than the delayed ones, and we also know that one can predict the other.  Quite often, when you get a strong immediate effect from a stimulation, you have a higher chance of also observing offline effects later on with a prolonged protocol.  That may be explained by individual differences in targeting accuracy, or by whether the intensity you chose is actually effective in that particular subject, so it makes sense that they are related.  But they are not 100 percent related, because it also depends on the specific offline protocol you use, which may or may not be effective even though your acute effects were very promising.

Then, to get to the clinical effects: what has typically been done in the rTMS field is "more helps more"; people have tried to just increase the number of pulses and the durations, and recently we have seen the emergence of accelerated protocols, like the Stanford protocol for rTMS treatment of depression, which packs a lot of stimulation into a short period of time.  That may also be possible with focused ultrasound without increasing at least the biomechanical or thermal risks, because if you have sufficiently long cool-off periods in between, those risks do not increase.  It might be different for the neurophysiological risks, but I think that's a promising approach.

LI MIN CHEN: It is quite desirable for us to really understand the long-lasting effects.  I learned a lot from this discussion, and we are at least planning, or hoping, to look at the lasting effects at the neurocircuit level.  We have seen resting state network effects lasting about hours after the ultrasound exposure, and we can bring the animal back and look days after.  We're also starting to look at behavior: if we ramp up to a level that is still safe, can we see any behavioral effect?  We can assess the behavior one day later, immediately after the animal recovers from the procedure.

So, yes, that's definitely our interest, and that's where I think we can probably contribute more information.  Also, maybe we can combine functional MRI with MRS, which has been done, to look at the metabolites, and then we have other measurements that can often indicate effects related to plasticity at the target.

LENNART VERHAGEN: That would be amazing.  After translating across these timescales, from acute to delayed to hopefully persistent clinical effects, Tulika, you were also talking about translating across models, from animal to human and hopefully to patients.  You described a lot of work, but what truly are the best practices, or what should the field be doing, in translating across all these models?

TULIKA NANDI: I don't think I have a complete answer to that question, but this workshop helps a lot in terms of collaboration between people doing, let's say, computational, in vitro, and in vivo work, and people doing the basic science experiments looking at mechanisms.  If we can design the next level of experiments based on what we learn from those, we can't guarantee that what was effective in mice is going to be effective in humans, but it gives us a much better base to start from.

LENNART VERHAGEN: That sounds fantastic, and which critical animal or human experiments do you feel are needed?

TULIKA NANDI: I think a little more systematic exploration of the parameter space.  There are a few studies where a number of parameters have been manipulated simultaneously, which is helpful for getting a general idea, but it is sometimes difficult to pull out which parameter was having what effect.  If we can manipulate them one at a time, I think that would be more helpful.

LENNART VERHAGEN: That would be amazing.  May I ask the panel to stay on a bit longer, because there were some really interesting questions posed that we might not have addressed earlier.  I would like to do two speed rounds: one to see if we can still answer some open questions, and then one to come back to each of you.

There was a question here going back to target engagement.  Is there somebody who would care to comment on the use of PET imaging?  Anybody who dares?

ELSA FOURAGNAN: I'm happy to answer very quickly.  It's a gold standard tool, so if you were to think about measuring glial cells, for example, and if you're interested in their contribution to any neuroplasticity effects.  But the issue with PET is that in a lot of healthy control studies, and similarly in longitudinal studies, you're exposing your participants to radiation, and you would want to minimize that.  For animal studies, though, I imagine it's absolutely a great tool.

LENNART VERHAGEN: Thanks.  A second question.  Any thoughts on extrapolating the effects of ultrasound from its effects on acetylcholine synaptic activity?  For example, 1 megahertz ultrasound has shown an ability to inhibit acetylcholinesterase, leading to a rise in acetylcholine.  Have either of those been examined or excluded as a bioeffect?  So, who's our expert on acetylcholine here?  Til looks very knowledgeable.

TIL BERGMANN: I can't comment on this specific question.  I mean, unless you're sonicating cholinergic nuclei, like in the basal forebrain, and hitting the cholinergic neurons rather than other, GABAergic ones, I wouldn't know how you would modulate cholinergic tone, other than by stimulating those specific neuron populations.

LENNART VERHAGEN: It is actually a really interesting question whether we have that type of cell type specificity, right, whether there are different cell types with different neurotransmitters that are more or less sensitive to ultrasound, maybe because of their expression of mechanosensitive ion channels or other properties.  There are also quite a few people aiming for direct release of neurotransmitters by, for example, hitting brainstem targets for serotonin or the basal ganglia for dopamine projections.  I'm excited to see where this is going; it hasn't been the core focus of research so far, but I'm sure exciting results will come out in the future.

But I'm bringing the panel back together.  I would love to invite for a quick speed round.  I completely stole this idea from Brad, and it was so much fun, I'm going to do it again.

I'm going to ask you one thing: what could we be doing differently in the domain of target engagement and parameter space?  I'm going to start with you, Tulika.

TULIKA NANDI: I'll go back to what I said earlier.  I would love to see researchers working in different domains come together for the same final goal that we have.

LENNART VERHAGEN: That's a wonderful vision.  Li Min.

LI MIN CHEN: I would like to see multimodal readouts combined to really establish target engagement, for example, intracranial imaging along with electrophysiological recording.  I know people have done that; then really manipulate the parameter space.  That way we can really establish engagement.

LENNART VERHAGEN: That's fantastic.  Elsa, did you have some wise words to share?

ELSA FOURAGNAN: I would just like to say that I would love if everybody was starting to think about dose exactly like Tulika was explaining, because it was such a beautiful talk, and I think it's such an important topic that sometimes we're all overlooking.

LENNART VERHAGEN: Thanks.  I completely agree.  That was wonderful.

Til, do you have some recommendations for us?

TIL BERGMANN: I would hope that, in addition to the relatively slow and laborious exploration of the entire parameter space, which is highly dimensional, we would exploit adaptive parameter optimization studies, where one searches through the parameter space in an intelligent fashion.  But for that you need an immediate readout that's reliable and robust, and that is maybe what we need to find.
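As a toy sketch of what such an adaptive search could look like, here is an epsilon-greedy loop over a hypothetical PRF-by-duty-cycle grid; the readout function is a made-up stand-in for the reliable, immediate measure Til is asking for.

```python
import numpy as np

rng = np.random.default_rng(0)
prfs = [5, 10, 100, 500, 1000]   # Hz, candidate PRFs (assumed)
duties = [0.1, 0.3, 0.5]         # candidate duty cycles (assumed)
arms = [(p, d) for p in prfs for d in duties]

def readout(prf, duty):
    """Hypothetical noisy immediate readout (peaks near PRF = 10 Hz)."""
    true_effect = np.exp(-((np.log10(prf) - 1.0) ** 2)) * duty
    return true_effect + rng.normal(0, 0.05)

counts, means = np.zeros(len(arms)), np.zeros(len(arms))
for trial in range(200):
    if rng.random() < 0.1:               # explore 10% of the time
        i = int(rng.integers(len(arms)))
    else:                                # otherwise exploit the current best
        i = int(np.argmax(means))
    r = readout(*arms[i])
    counts[i] += 1
    means[i] += (r - means[i]) / counts[i]   # running-mean update

print("best setting so far:", arms[int(np.argmax(means))])
```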

MIRIAM KLEIN-FLUGGE: Thanks.  Things we can do, or hopefully can do soon, would be to use higher intensities, consider using lower frequencies to actually get enough energy through the skull, and use good planning.  But on my wish list would be MR ARFI in humans.  That would be great, if we could make that work.

LENNART VERHAGEN: That would be amazing.  Thank you all for sharing here on the panel today.  This was a fun discussion with fantastic views for the future.

Maybe I can close this panel discussion and invite Kim to join me here for the closure of the day.  Thanks, all.

Synthesis, Opportunities, Next Steps, Closing Remarks

KIM BUTTS PAULY: I have learned so much.  Wow.  Can I just say one of the things I really wish people had paid more attention to is smoothing the waveforms.  I especially worry about sort of the rodent and animal research, and it's such an easy thing to do.  So I don't know, just putting it out there.

LENNART VERHAGEN: Let's all pay attention to this.  I had a great time over these past two days, and you know, I'm a little bit biased.  I have turned into a bit of an ultrasound aficionado.  I kind of like it.  It's also wonderful to share this with all of you.  It's a longstanding dream in neuroscience and medicine to selectively modulate these deep brain regions but then safely from outside of the head.

That's a promise that we might be able to meet with transcranial ultrasound, right?  It allows unprecedented precision to stimulate virtually anywhere.  But that requirement for precision really also necessitates great control.  That's what we are seeing throughout all of these talks, people working very, very hard to allow us that control, and this is something where we have a lot to learn.  I actually learned a lot from you here today.

I wanted to highlight that some of the talks were stressing the importance of targeting accuracy, and others the importance of accurate characterization of the acoustic properties, both of the skull and of the transducers.  That seems a very strong engineering foundation, a strong physical foundation, that we can stand on.  What do you think about this, Kim?

KIM BUTTS PAULY: Oh, absolutely.  If you ask me, one of the big hindrances to the field is that we have such strong engineering, but getting that into turnkey tools for neuroscientists and clinicians is a longer road than I would like.  I learned so much from what Elly said about there being different modes across the transducer, but how do we propagate to everybody the ability to really understand their fields?  That's, whew, a little bit longer a road than I wish.

LENNART VERHAGEN: I felt very similarly when I was learning about all of the bioeffects and the biomechanisms.  It seems that we have no clue what is going on, like everything is going on at the same time.  But when I reflect on it, I also look at the other side.  Actually, we have learned so much already.  There is really systematic research.  There's very clear evidence on the engagement of the membrane and ion channels, really solid models of how this is working, and I have been so impressed with the progress in recent years.

So we saw in vitro and in vivo clear evidence of cell type specificity, dose-response mapping, and simultaneous stimulation and recording.  That was fantastic.  I am noticing that the parameter space is dauntingly large; we can map this out in animals, but also in humans?  Don't you want to get started with mapping it out in humans?

KIM BUTTS PAULY: Yes, absolutely.  For me, I know that you know exactly what I am going to say here, that it starts with what can we do to kind of reduce the auditory effects, and then we can start studying that.  That's just how I think about it.  But yeah, this is a big parameter space.

You know, one thing I kind of wanted to take a quick moment to digress on: in Session 4, when we were talking about the FDA and reimbursement, one of the things I learned today was how important it is for people studying this technology in humans to start thinking about the FDA and reimbursement, and to design their studies with those in mind.  That may well change their study design, and there are lots of resources to help people understand how to do that.

LENNART VERHAGEN: Absolutely.  The human applications have actually followed rapidly after the first animal studies, and I'm really excited that we have all the stakeholders on board here, that people are thinking about the end goals and incorporating that into how they design their studies.

We saw a lot of requests.  Many people were asking for systematic parameter mapping, fantastic.  But we also heard a great call for standardization in our reporting and standardization in our measurements.

KIM BUTTS PAULY: We are preparing a paper on standardized reporting that hopefully we will be finalizing very soon.  I think everybody is sort of waiting for that and really wanting it to be there, so that we can make sure we get the right information into our papers and really help each other out that way.

LENNART VERHAGEN: That would be fantastic.  In our ambition to do dose-response mapping and an optimized exploration of the parameter space, it was highlighted that a critical component is to have a veridical readout that we can really rely on, ideally an immediate one.  We heard a lot about that, particularly in the last session.  I am wondering whether for physiological safety we might have more readily available readouts; we can already learn so much from physiology and from behavioral assessments.  Maybe one of the early mapping goals could be a systematic mapping of burden and safety.  I would be excited about that opportunity.

KIM BUTTS PAULY: Absolutely.  You know, I am actually not as pessimistic about the use of PET as some of the others.  PET has really evolved recently, with not just a lot of new tracers but also lower doses, and deep learning and machine learning capabilities for reducing dose even further and improving SNR.  So I actually think that might be, in the intermediate term, a really nice readout and something that we should be pursuing.

LENNART VERHAGEN: Yes, a fantastic readout we could add.  Well, I want to start wrapping this up.  I was so excited that this workshop brings together all of the experts and stakeholders, really from all stages, from discovery to reimbursed treatment, and I think it's so important that we work together.

I see amazing opportunities to standardize our device control, to standardize our reporting, maybe to have a central database for stimulation effects and adverse events, and really to start studying and mapping out safety and burden.  We could start doing this now, and then we can think about what the best physiological readouts are, including many that were cited here.

It's really amazing, exciting times, and I think this workshop and many of the things we heard here show where we should be heading: all working together, so that we can realize this dream.

KIM BUTTS PAULY: I just want to thank everybody, all the speakers and the panelists and all the audience and thank you so much for all your questions and your interest.  Let's just keep trying to push the field forward.

If you have things that you think that ITRUSST should be addressing, then please feel free to send those to us as well.

LENNART VERHAGEN: Thanks to all the organizers.  Thanks to you, Kim, fantastic chair here.  Thanks to NIMH for setting this up.  Thanks to all the speakers and panelists, and also thank you to the audience for joining us here.  It's been a great two days.

Looking forward to seeing you again.  Thanks, all.