
Day 1 - Workshop on Brain Behavior Quantification and Synchronization: Sensor Technologies to Capture the Complexity of Behavior

Transcript

Welcome Day 1: May 2, 2023

YVONNE BENNETT: Hello, everyone. I'm Yvonne Bennett, a program officer at the National Institute of Mental Health. I want to thank you for joining us today in person and online. On behalf of my colleagues with the Brain Behavior Quantification and Synchronization Working Group and Sensors Workshop Planning Committee, I would like to welcome you to our workshop, Sensor Technologies to Improve Our Understanding of Complex Behavior.

Now I'm excited and honored to introduce to you one of our inspiring leaders, Dr. Holly Lisanby. She is the director of the Division of Translational Research at the National Institute of Mental Health, which funds research supporting the discovery of preventions, treatments, and cures for mental illness across the lifespan.

She founded and directs the Noninvasive Neuromodulation Unit in the NIMH intramural research program, a pioneering translational research program specializing in the use of brain stimulation tools to measure and modulate neuroplasticity to improve mental health.

Currently she is professor emerita in the Duke University Department of Psychiatry and Behavioral Sciences, where she was the first woman to serve as chair of the department. She has been principal investigator on a series of NIH- and DARPA-funded studies on the development of novel neuromodulation technologies, and her team pioneered magnetic seizure therapy as a novel depression treatment from the stages of animal testing, to first-in-human studies, and now international trials.

A prolific author with over 290 scientific publications, she has received national and international recognition, including the Distinguished Investigator Award from the National Alliance for Research on Schizophrenia and Depression. She is a board-certified psychiatrist and a distinguished life fellow of the American Psychiatric Association. Without further ado, please welcome Dr. Lisanby.

HOLLY LISANBY: Thank you for that very kind introduction. I want to start by thanking Dr. Yvonne Bennett, Dr. Dana Schloesser, and the entire Brain Behavior Quantification and Synchronization working group and workshop planning committee. It's really been a team effort, and I'm so excited to be here today to launch this workshop.

I want to also give a special thanks to Dr. John Ngai, the NIH BRAIN Initiative director, for recognizing that sensor development is going to be key to our ability to establish brain-behavior relationships and for encouraging us to explore this area. And that leads us to today's workshop, where we have upwards of 1,100 attendees, mostly virtual, with many of you here, which is testimony to the interest level in this area. So we are really excited to see that.

I am just going to show two slides to introduce the BRAIN Initiative, which many of you are already familiar with. Our mission is to revolutionize our understanding of the human brain by accelerating the development and application of technologies.

This is a partnership between five U.S. federal agencies and several private foundations, and this workshop itself is a collaboration between NIH and the National Science Foundation.

The goals of the Brain Behavior Quantification and Synchronization initiative, or BBQS for short, are to develop high-resolution tools and analytic approaches to quantify behavior as a multidimensional response to the environment, and to synchronize these measurements with simultaneously recorded brain activity.

We also have a goal of building new conceptual and computational models of behavioral systems to establish causal brain-behavior relationships and to enable the development of novel interventions, such as closed-loop interventions. We also seek to establish cross-disciplinary consortia, which is part of the goal of this workshop today, to provide an opportunity to mix across the disciplines and to develop and disseminate new tools, ontologies, research designs, and ethical frameworks to transform mechanistic brain-behavior research.

That brings us to the goals of the workshop. Our goals for the next two days are to bring together sensor developers, neuroscientists, translational psychiatrists and neurologists, computational specialists, and others who are interested in developing new sensor technologies that will help us quantify behavior in the context of environment and stimulate our understanding of brain-behavior relationships. On this slide I list four funding opportunities to draw your attention to, covering BBQS research in humans as well as cross-species studies, along with the data infrastructure, data archives, and data coordination and artificial intelligence centers that will support this effort.

Now I'd like to introduce and thank the co-chairs of our symposia, starting with Dr. John Rogers from Northwestern University, who is the Simpson Querrey Professor of Materials Science and Engineering, Biomedical Engineering, and Neurosurgery. He also holds appointments in electrical and computer engineering, mechanical engineering, chemistry, and dermatology, reflecting the transdisciplinary impact of his work. He directs the Querrey-Simpson Institute for Bioelectronics. Among his many awards are the MacArthur Foundation fellowship and the Lemelson-MIT Prize, and his highly multidisciplinary research focuses on soft materials for conformal electronics, nanophotonic structures, microfluidic devices, and microelectromechanical systems, with an emphasis on bio-inspired and bio-integrated technologies.

I'd also like to introduce our second co-chair, J-C Chiao from Southern Methodist University. He is the Mary and Richard Templeton Centennial Chair and professor in electrical and computer engineering. He is the founding editor-in-chief of the IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology. He has expertise in RF microelectromechanical systems, quasi-optical wireless systems, and micro- and nano-optics, and in addition to his extensive publication and patent record, his work on microscopic windmills and tiny turbines has been covered by National Geographic and many other media outlets.

Among his many prestigious awards are the Tech Titans Technology Innovation Award and the Research in Medicine Award from the Heroes of Healthcare. I met J-C when he invited me to speak at the IEEE sensors symposium in Dallas, which he chaired, and it was wonderful and eye-opening to see how sensor developers are building tools that are not yet, but really could be, used to advance neuroscience research. He introduced me to the IEEE sensors community, as well as to Texas line dancing, complete with cowboy hats. I can't promise you that here today, J-C, but welcome and thank you.

Session I: Sensors Introduction (Part A)

J-C CHIAO: My name is J-C Chiao. Please call me J-C. It was November 1, 2022, when Dr. Lisanby, Dr. Bennett, and I first talked about a workshop. That was six months ago, and today we have finally realized it. One of the reasons it took so long is that this is such a huge field, and we asked ourselves: how can we bring engineers, neuroscientists, and behavioral scientists together in one place so we can have strong multidisciplinary collaboration? That's why we have this workshop.

However, because this field is so big, we can't invite everybody or cover every area, so we decided to organize a special issue in IEEE Sensors Letters, and we are now announcing this to all the speakers, discussants, and facilitators: you are welcome to submit an invited paper to this special issue, summarizing, giving an overview of, or reviewing your research, so we can share it among researchers.

Original research papers are also invited for submission to this journal. The deadline for submission is July 1, and we aim to publish before October 1. Hopefully this will bring our community together and form a better bond.

Next, let me start with today's session. We have a very tight schedule, so we have to limit each speaker's time. However, I hope that after today's speed dating event we can have a better understanding of each other and we can start to form that multidisciplinary connection among us.

In the first session we have four speakers: Professor Sarkar of MIT, who will join us virtually; then Professor Rogers from Northwestern University; Professor Dahiya from Northeastern University; and Professor Gao from the California Institute of Technology. To save time, each speaker will introduce the next speaker so we don't take time from them, and please stick to your time slots so we can have enough time for discussion.

After the presentations we have a 30-minute discussion. The discussion will start with three discussants: Professor Inan from Georgia Tech, who will join us virtually; Professor Besio from the University of Rhode Island; and, I'm sorry, there should be one more, Professor Chris Roberts from UT El Paso. They will start by asking questions of the presenters.

After that we will open the floor to the virtual participants for two or three questions, and then to the in-person attendees here.

Now let me start with session one, and Professor Sarkar from MIT.

DEBLINA SARKAR: Hello, everyone. My name is Deblina Sarkar. I'm an assistant professor at MIT. It's a great pleasure to be here today. Sorry I could not join in person, but it's still great to be a part of this amazing workshop virtually.

I started my research journey as a nanoelectronics engineer, and from there I took a steep transition to the field of neuroscience. What I will do today is tell you the story of my journey from nanoelectronics to neuroscience and how, in my own group, I am fusing these fields together.

All of you must have noticed how your laptop heats up after working for some time. The heat generated is so high that a while ago I even cooked an egg on my laptop. The energy consumption of information technology is so high that the datacenters of large corporations use as much electricity as the whole country of Turkey. And developing low-power electronics is especially important when you're thinking about healthcare and bioelectronic interfaces. This is because you want to make bioelectronic interfaces small so they are minimally invasive, and in such small form factors we can provide only a very small amount of energy for the electronics to work with. So the electronics need to be low power.

If we are thinking of building sensors, feedback circuits, and analysis circuitry for the sensing data within the small form factor of a biomedical implant or bioelectronic interface, we need low-power electronics. That way we can increase the battery life of medical devices, if they have a battery, or it can even open up new avenues for powering them with wireless power transfer mechanisms, or even with energy harvested from the body itself.

The very fundamental building block of electronics is the electronic switch, or transistor, and traditionally we have been able to reduce the size of this transistor so that you can fit more and more of them, giving more functionality within a small area of the chip. By reducing the power of each single transistor, you can also reduce the power of the whole electronic circuit.

However, Moore's Law has come to an end, and fundamental limits have been reached in this scaling, specifically the scaling of power. The fundamental nature of the problem means that evolutionary techniques will no longer suffice; radical innovations are required along multiple fronts, from materials to device physics.

Let us understand where these fundamental limitations come from. In an electronic transistor, the electrons need to jump over an energy barrier to cause current flow. When this barrier is high, very few electrons go from source to drain, and your transistor is off; when you want to turn on the transistor, you lower the barrier height so more electrons can go from the source to the drain, and your transistor turns on.

But this way of electronic switching creates a fundamental limitation in how steeply the current can increase. Ideally, we'd want the current to go from the off state to the on state with the application of a very small amount of voltage, but because of this way of switching, of electrons jumping over an energy barrier, we actually get a gradual increase of electric current with the voltage that we apply. That causes a fundamental limitation in steepness, and the parameter that quantifies it is called the subthreshold swing, which tells us how steeply the current increases with voltage. At room temperature there is a limit of 60 millivolts per decade: you need at least 60 millivolts of voltage to increase the current by a decade, or tenfold.
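
[Editor's note: the 60-millivolt figure is the textbook thermionic (Boltzmann) limit, added here for reference rather than taken from the slides:

SS = \frac{dV_G}{d(\log_{10} I_D)} \ge \ln(10)\,\frac{k_B T}{q} \approx 60\ \text{mV/decade at } T = 300\ \text{K},

so at room temperature no conventional over-the-barrier transistor can switch its current tenfold with less than about 60 millivolts of gate voltage.]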

Now you might think: how do we change this paradigm of electron transport in a transistor? Instead of making the electrons jump over the energy barrier, can you make the electrons just sneak through the barrier, like a ghost walking through a wall? You must be thinking, how is this possible, because you cannot just walk through a wall like that. While classical physics says it is not possible, quantum mechanics says it is, because very small particles like electrons have a wave-particle duality that allows them to sneak through a barrier. The scientific name for this is the quantum mechanical tunneling effect.

You might think that this tunneling effect would solve all the problems, but there are still many challenges. Suppose there's a guy here who really likes low power and thinks he's just going to sneak through a barrier. By the time he reaches the other side of the barrier, he finds himself looking something like this, because electron waves decay exponentially as they pass through a barrier. That reduces the current to the point that it is difficult even to turn on tunnel field-effect transistors.

So developing an efficient transistor that can exploit this tunneling property poses severe and intricate device design challenges that need to be solved. To increase the tunneling probability, you need a very high electric field at the tunneling junction, and you need to reduce the barrier width as well as the barrier height, and all of this has to happen at the same time. Moreover, note that just having steep characteristics over a small range of current is not helpful; that would not reduce the operating voltage, and the voltage determines the power supply.
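
[Editor's note: as a rough guide to why barrier width and height both matter, the standard WKB estimate (a textbook result, not a formula from the talk) gives the tunneling probability through an approximately rectangular barrier of width w and height E_b as

T \approx \exp\!\left(-\frac{2 w \sqrt{2 m^{*} E_b}}{\hbar}\right),

where m^{*} is the carrier effective mass, so shrinking either the width or the height of the barrier raises the on-state tunneling current exponentially.]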

To really reduce the supply voltage, we have to have these steep on-off characteristics over a large range of current. The International Technology Roadmap for Semiconductors requires a sub-60-millivolt-per-decade subthreshold swing sustained over four decades of current for an electronic switch to be considered low power and useful.

Now I'll discuss the transistor we developed that helps us achieve this. It's called the ATLAS-TFET, short for atomically thin and layered semiconducting-channel tunnel field-effect transistor.

Let me explain in the next few slides how this transistor works. It is a heterostructure of a three-dimensional (3D) material and a two-dimensional (2D) material. We use a 3D material as the source because it helps us create a highly doped source region, which ensures that the electric field drop in the source is minimal.

The channel of this transistor is made of a 2D material, and the reason is that in the world of electronics, the rule of thumb is that the thinner you can make the channel, the better the electrostatics you get, even when the transistor is scaled down dimensionally. In a 2D channel, the third dimension is almost missing. These are materials such as graphene, which you all know about, but graphene doesn't have a bandgap, so we used a 2D semiconductor instead, giving us a channel that is only atomically thin, a few atoms in thickness, with improved electrostatics.

This heterojunction of 3D and 2D materials is van der Waals in nature, meaning there is no chemical bonding between the 3D source and the 2D channel. That creates a very abrupt doping profile, which helps us create a very high electric field at the tunneling junction.

As I mentioned, we need three things: a very high electric field at the tunneling junction, a low barrier width, and a low barrier height. With this heterojunction of 3D and 2D materials, you are able to achieve the high electric field, but you still need a low barrier width and a low barrier height.

How do we achieve a low barrier height? For that, you can choose the materials judiciously. In our case, we use germanium as the 3D material and molybdenum disulfide as the 2D material, and if you draw the band structures of these materials, you'll see that they align in a particular way such that the barrier height, defined by the conduction band of the channel and the valence band of the source, comes out to be low.

We also have to reduce the barrier width, and here we designed the transistor such that the electrons tunnel through only two atomically thin layers of molybdenum disulfide, so the barrier is only six atoms in thickness. So we also get an ultrathin barrier width.

With this transistor design, we showed that we can actually overcome the fundamental power limitation of current transistors. The fundamental limit is shown by this red line here; our transistor overcomes that limit and achieves a minimum subthreshold swing of 2.9 millivolts per decade, while a conventional transistor's subthreshold swing always remains above the fundamental limit of 60 millivolts per decade.

Moreover, this transistor not only achieves a sub-thermionic subthreshold swing but also has an atomically thin channel, only six atoms thick, and as I mentioned, the thinner you can make the channel, the smaller you can make all the lateral dimensions. So this transistor achieves simultaneous power and dimensional scalability.

These kinds of transistors are also very promising for making electrical biosensors. To make an electrical biosensor, we get rid of the physical metallic gate, and the gating effect is provided instead by the charge of the biomolecules. Now, for transistors that you want to use as biosensors, what kinds of materials would be interesting?

3D materials are not very promising unless they are scaled or thinned down, because they provide lower electrostatic control by the gate and therefore low sensitivity.

One-dimensional materials provide greater electrostatic control of the channel by the gating effect, but processing them is difficult because of their one-dimensional structure.

2D materials, on the other hand, are very interesting, because they not only provide high sensitivity but also offer an easily processable planar platform. Moreover, when designing biosensors, we want something that can conform to the curved surfaces of the body, that can stick to the skin and conform to it, and our skin is highly flexible. Maybe not as flexible as this person's.

So 2D materials provide high flexibility and toughness, which makes them ideal for implantable and wearable medical devices. We developed three different transistor-based biosensors made of 2D materials, molybdenum disulfide in this case, and we showed that if you consider the different gating regions, the linear region, the saturation region, and the subthreshold region, we actually get the highest sensitivity in the subthreshold region, because there the current changes exponentially with the gating effect.

Then the question is: can we increase the sensitivity even further compared to a traditional field-effect transistor? Think of the principle: if a small change in voltage can cause a large change in current, then a small number of biomolecules can cause a larger response, giving you higher sensitivity. So the steeper you can make the characteristics of your transistor, the higher the sensitivity you can get out of a biosensor based on that transistor.

So if we use a tunnel field-effect transistor instead of a conventional field-effect transistor, we see that while the sensitivity of a conventional FET-based biosensor is limited by the 60-millivolt-per-decade subthreshold swing limit, reducing the subthreshold swing to get steeper transistor characteristics can increase the sensitivity by five orders of magnitude.
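
[Editor's note: the sketch below is a back-of-the-envelope illustration of where such a factor can come from, assuming a simple exponential subthreshold model; the 15-millivolt surface-potential shift is an illustrative value, not a number from the talk.]

```python
def subthreshold_current_ratio(delta_v_mv: float, ss_mv_per_decade: float) -> float:
    """In the subthreshold regime, a gate-potential shift of delta_v changes
    the drain current by a factor of 10 ** (delta_v / SS)."""
    return 10.0 ** (delta_v_mv / ss_mv_per_decade)

delta_v = 15.0   # hypothetical surface-potential shift from biomolecule binding (mV)
ss_fet = 60.0    # room-temperature thermionic limit of a conventional FET (mV/decade)
ss_tfet = 2.9    # minimum swing quoted in the talk for the ATLAS-TFET (mV/decade)

gain = subthreshold_current_ratio(delta_v, ss_tfet) / subthreshold_current_ratio(delta_v, ss_fet)
print(f"sensitivity gain ~ {gain:.1e}")  # roughly 1e5, about five orders of magnitude
```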

While working on these low-power electronic devices and steeper transistors, I got fascinated by the brain, because the brain can probably be thought of as the lowest-power computer ever. It consumes only about 20 watts of power, similar to a mere lightbulb, yet does all these fascinating functions: thinking, cognition, emotion, feeling.

However, at present it's very difficult to understand the brain, because if you look into even a small region of it, you will find a dense jungle of biomolecules, which probably looks very similar to the gum wall in Seattle.

To decipher this dense jungle of biomolecules, you need super-resolution methods, but current super-resolution methods are either highly expensive or require expert handling, and they are not scalable to 3D brains; they only work on very thin 2D slices.

To overcome these challenges, we built on next-generation expansion microscopy and developed a technology called expansion revealing, and showed that it helps us reveal previously unseen biomolecules in the brain. In this case, you see an example in an Alzheimer's brain: periodic structures of amyloid beta seen through this expansion-revealing technology that cannot be seen at the same level of resolution using existing super-resolution techniques.

That was my postdoctoral work, and in my own group I've been fusing these fields together, the fields of nanoelectronics, applied physics, and biology. Our group's mission is to develop novel technologies for nanoelectronic computation and to fuse nanoelectronics with biology to create new paradigms for life-machine symbiosis.

So what do I mean by a new paradigm?

STAFF: Pardon the interruption. Time is up and it's time to move onto the next section, please. Thank you.

DEBLINA SARKAR: Okay. I will stop here. Thank you.

JOHN ROGERS: It is a pleasure to be here. My name is John Rogers. I'm co-organizer with J-C, and it's been a wonderful interaction with Yvonne and Dana to pull together this exciting workshop. I'm looking forward to the presentations that we'll hear today and tomorrow, and to getting into the discussions; I think the motivation is to present technologies that will stimulate collaborations among the various groups that are excited about this field of study.

My background is in engineering, and so we develop new sensor technologies, and I'll share with you some of the advances that we've made over the past several years in platforms that enable studies of the brain, specifically the human brain, and specifically devices that can translate and scale and be used in real-world settings outside of confined hospital settings and laboratories.

If you think about behavior and the brain, you'd like to be able to measure brain function, so sensor suites that allow you to do that, but also you'd like to be able to measure behaviors at the same time, and to do that in a way that presents minimal burden for the individual participants in studies that we could envision as a collective community to explore these very fascinating scientific topics.

So I'll give you a little bit of background on what we've been interested in, specifically the development of soft, biocompatible platforms of electronic sensor technologies with wireless communication capabilities. I'll start with a focus on hospital care and ICU monitoring as background for the platforms I'll describe in the next 10 minutes or so, specifically pediatric multichannel EEG and functional NIRS measurement technologies with clinical-grade levels of precision, but in fully wireless, wearable forms, through collaborations that we've had with folks at Lurie Children's Hospital and Prentice Women's Hospital in Chicago.

I'll then talk about how you can combine those kinds of measurements of brain function with measurements of vital signs and other behaviors associated with physiological processes, and in particular, trying to think about measures that would allow you to quantify levels of mental distress and pain, in particular, in pediatric patients.

And then I'll conclude with a very large-scale, population-level study of new sensor technologies that might allow for quantitative assessment of neuromotor development status in infants. As I mentioned at the outset, my background is in engineering, but I have joint appointments with our medical school, and we're deeply involved in collaborations across the clinical community in Chicago and elsewhere.

So I run a research group, but what's unique about our setup at Northwestern is that we're also operating in the context of an endowed institute that allows us to really accelerate development efforts at the boundaries between engineering, science, and medical science. Our main focus is to develop sensor technologies based on advanced transistor devices, similar to what Deblina was talking about, but with an emphasis on things that can scale and translate and be directly applied to humans. From a materials science and biomedical engineering standpoint, what we'd like to do is develop strategies for taking this kind of technology, which serves as the basis for all consumer electronic gadgetry and many industrial and defense systems, and reformulating it in a way that's compatible with soft living tissue, so that it can be integrated in, on, or around those soft living tissues in a chronically stable fashion. That lets you blend electronics with biology in a sophisticated way that opens up new capabilities in sensing and therapeutics.

A lot of what we have done, at least in human subject studies, is thinking about that interface in the context of the skin. I would say that over the last 10 years, we and many other groups, and Zhenan Bao is here, along with many other leaders in this space, have developed a portfolio of technologies that allow you to build skin-like forms of electronics that can reside on the surface of the skin in a nearly imperceptible fashion for continuous monitoring of underlying physiological processes, including brain activity. Through community-level efforts at a global scale, in engineering departments around the world, there have been tremendous advances in measurement capabilities in these kinds of skin-like, or epidermal, electronics. I won't go through the details here, but just note that each device can be operated in a multimodal fashion.

So each device platform can have multiple sensor types co-integrated, and many of these sensors can be mounted at different anatomical locations around the body and operated in a time-synchronized fashion to develop full-body assessments of behavior and brain activity, with clinical-grade quality, continuous and able to scale, as I mentioned before, outside of laboratory and clinical environments into the real world.

An additional area of emphasis for us is developing low-cost platforms that could be available and useful not only in parts of the developed world but also in lower- and middle-income countries. One of our initial areas of focus was to develop technologies of that sort to address one of the most vulnerable and precious patient populations in the hospital setting, which for us meant premature babies: trying to move away from old-style engineering solutions for monitoring vital signs in these critically ill patients, based on hardwired interfaces to expensive boxes of data acquisition electronics and tape-based interfaces to their fragile skin, toward something more compatible with these precious patients, eliminating the wires and the invasive tapes.

So it turns out you can do all of that, and this is not a talk to get into the details of exactly how you go about doing that, but a few years ago we were able to publish a paper on skin-like battery-free wireless devices that recapitulate all of the vital signs monitoring that's done even in the most sophisticated neonatal intensive care units, level 4 units, like those that we have in Lurie Children's Hospital, but without the invasive tapes and the expense of the data acquisition electronics and the cumbersome nature of the wire-based interfaces that are used today.

Those have been deployed on hundreds of patients in NICUs across hospitals in Chicago, and are also now deployed at a global scale in 20 different countries, FDA-approved, and really, I think, moving the needle in terms of how we improve the care of these patients. Again, this is with clinical-grade quality, multimodal assessments of all vital signs, and now we're actually going beyond what's done even in level 4 NICUs, because we are measuring body sounds as well. We do seismocardiography in addition to electrocardiography, which is the standard. So we are not only replicating what's done today but actually looking for opportunities to go beyond it.

So those technologies are quite mature now and entering the commercial realm. What I'd like to talk about today is sort of advanced versions of those platforms for measuring not only physiological characteristics, motion characteristics, but also processes of the brain, and I'll just step you through a few device platforms that we feel are reaching levels of maturity that allow them to be translated out onto patients at scale.

I'll quickly step through the first example, neonatal multichannel EEG, where we have a deep collaboration with Jeremy Wong and others at Lurie Children's Hospital. We're able to do this now at clinical-grade quality: multi-point interfaces across the scalp, all connected to a wireless data collection and communication module that's compatible with conventional Bluetooth-enabled electronics. You can stream multichannel data to an iPad or an iPhone, for example. This has been scaled and demonstrated across about 100 pediatric patients at Lurie Children's Hospital, again the main focus, with benchmarking against state-of-the-art clinical standards, and I won't go through the details here, but the data are essentially indistinguishable.

You don't lose anything in terms of resolution. Actually, things improve in a sense, because you eliminate a lot of the noise channels that are captured by more conventional hardwired interfaces. This is an example of measurement of an epileptic seizure in a pediatric patient with those wireless devices alongside a Natus system, the large rack-mounted EEG collection technology that's used as the standard of care at Lurie Children's Hospital. So that's one example.

Other things that can be done in this kind of wireless, skin-compatible format to assess various aspects of brain function include functional NIRS. You can replace the wire-based systems with very small, compact devices whose flexible, soft mechanics are a critical aspect of interfacing to the curved surface of the scalp, and you can make NIRS measurements at multiple locations across the head with very little adverse impact or device load on the patient, again at clinical-grade quality. These are the kinds of comparisons we've done to show that the technology is not a laboratory curiosity but something that could realistically replace what's done in a hospital today, and also something that can go into a home setting, due to the simplified user interface and the lower cost structure associated with these technologies.

Two other examples I'll give you now blend measurements of brain function with physiological measurements and motion characteristics. One thing that we're very interested in, collaborating with pediatricians at Lurie Children's Hospital, is trying to develop quantitative metrics around the pain levels that pediatric patients are experiencing. They can't vocalize or describe what they're feeling, but perhaps, with different sensor technologies monitoring the brain and physiological characteristics, you could tease out a quantitative score for pain level.

So we developed this particular protocol that involves mounting the devices on infants, waiting for 15 minutes, doing a blood draw which was already scheduled for that particular patient, monitoring their physiological parameters throughout that blood draw, and then monitoring sort of a relaxation back to a quiescent state for some time after the blood draw.

This is an exploratory effort, but the question is whether you can detect physiological measures that correlate with the pain scoring that's currently done through surveys by NICU and PICU nurses. So that's another example.

And one final example really exploits not only the multimodal nature of each one of these devices but the time-synchronized multimodal operation I mentioned previously. Here we're using 11 wireless devices mounted at strategic locations across the bodies of infants to capture full-body locomotor behavior over periods of time, and you can use those data streams to recapitulate the nature of the motions in avatar form, but it's quantitative data.
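
[Editor's note: the sketch below illustrates one common way to put independently clocked wearable streams onto a shared timeline, resampling by interpolation; it is not the pipeline used in this study, and the sensor names and rates are hypothetical.]

```python
import numpy as np

def align_streams(streams, fs_out=50.0):
    """Resample independently clocked sensor streams onto one shared timeline
    by linear interpolation, so multimodal samples can be compared directly."""
    t_start = max(s["t"][0] for s in streams.values())   # latest common start
    t_end = min(s["t"][-1] for s in streams.values())    # earliest common end
    t_common = np.arange(t_start, t_end, 1.0 / fs_out)
    aligned = {name: np.interp(t_common, s["t"], s["x"]) for name, s in streams.items()}
    return t_common, aligned

# Hypothetical example: two body-worn motion sensors with offset clocks and rates.
rng = np.random.default_rng(0)
streams = {
    "chest_imu": {"t": np.arange(0.00, 10.00, 0.01), "x": rng.standard_normal(1000)},
    "ankle_imu": {"t": np.arange(0.03, 10.03, 0.02), "x": rng.standard_normal(500)},
}
t, aligned = align_streams(streams)  # both streams now share one 50 Hz clock
```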

The goal is to take those data and develop machine learning algorithms that can replace the kind of neuromotor assessments done by a trained neurologist, in a way that can be deployed to remote locations across the country or across the globe. That would allow these assessments to capture neuromotor delays at the earliest possible timepoints, so that interventions can be delivered to babies who are at risk of delays and of developing cerebral palsy and other neuromotor disorders.

So this is a massive study; many hundreds of babies have been enrolled. It's funded by the Ryan Family Foundation, and it's a data-driven approach to detect delays at the earliest possible timepoint, not only to replicate what neurologists do in assessing these delays, but actually to do better. That's the aspiration: to capture these delays before they're even detectable by a trained neurologist.

We're fairly early in these studies; hundreds of patients have been enrolled. We have produced hundreds of these devices and deployed them at scale, and these are some of the initial results from the machine learning being developed around these data streams.

Those are some of the activities that we're working on, and I'd be happy to talk to anyone who's interested, give you all the details, and look for collaboration opportunities. I'm certainly looking forward to those conversations.

I want to acknowledge all the senior collaborators who have been involved in this work; we're deeply engaged with the clinical community. I think a very important aspect of how we do engineering science is to be directly embedded in that medical community. The senior collaborators are very important, but the students and the postdocs who do the work are the most important folks, and so I always like to conclude my presentations by acknowledging them for their hard work.

Thank you very much.

So I think my job is to introduce the next speaker, Ravinder Dahiya, who recently moved to Northeastern University, and he'll share with us some of his work in this area.

RAVINDER DAHIYA: I'll complement what John said; I'll be speaking more about the learning capability of sensors.

A brief background: in our sensors research, we try to understand humans and apply that knowledge to robotics and prosthetics. We are developing large-area skins, which have been used on humanoids as well as prosthetics.

There's a lot that is still missing when we compare robots with humans, and the key difference is learning; that is also the key difference I see in the field of sensors in general. We see a lot of sensors these days as wearable devices, outside the body, and implantable sensors inside the body as well. We have seen a lot of microelectrode-array-based work in the past, and in the context of the brain, some of these have been used to control artificial limbs as well.

They have been used for the excitation of certain parts of the brain as well as for recording data, but eventually all that data is passed to nearby hardware where you process it further. That is where I would say the limitations of current sensors lie.

We have seen a lot of work on electronic skin as well, where the major focus has been on developing the top layer, the sensing layer. But if you look at the human skin itself, the data processing starts at the point of contact, and that is where you have to think about behavior. For behavioral quantification, sensors can, in addition to recording data, also learn from that data, and that learning over a long period of time will reduce the number of devices needed. It will also help us solve problems related to power consumption. There is good work taking place in this direction; it just has to be brought together.

At the biological level, you'll see there is a sensory neuron, there are nerves, which correspond to the communication channel, and then there are the soma and the synaptic junctions. Some work has been taking place on the sensory neuron side: various types of sensors, touch sensors, temperature sensors, and others, have been reported.

Then we also come across devices such as simple circuits that create spiking networks, and we come across memristors. This research is disconnected in many ways, and if you look at how tactile data is processed in the human skin, you will realize that it starts at the point of contact itself. That is something we need to bring in: we have to see how human-level sensation and perception can be brought into artificial systems.

In that regard, you may have to think about multiple layers of the skin, with a sensor layer as well as a neural or neuromorphic layer, all implemented in hardware so that the data the sensors generate can be processed at the point of contact itself, and what comes out is the learning part.

I will also present an example here: we recently reported a printed synaptic transistor, and this slide shows a comparison between the biological tactile neural pathway and an artificial tactile neural pathway. My hope is that such examples could also be used for brain behavior quantification, particularly because behavior matters over time scales. If the device, the transistor itself, learns over a period of time, it will help us resolve the power issues as well, because the device will turn on only when it is required; otherwise it stays idle.

So that's the sensory neuron block, and this is the cuneate neuron block that you see. As far as touch receptors are concerned, there is an analogous sensor here, shown by the variable resistor; there is an integrate-and-fire neuron, where the spiking, or pulses, takes place; then there is a transistor that learns over a period of time and can be reset as required; and after that, you again have integrate-and-fire neurons, and then an action takes place. So it's a complete cycle from sensation to perception and then action.
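
[Editor's note: for readers unfamiliar with the integrate-and-fire block mentioned above, the sketch below is the standard leaky integrate-and-fire textbook model, not a model of the printed transistor circuit itself; it demonstrates the amplitude-to-frequency conversion described in the following paragraphs.]

```python
import numpy as np

def lif_spike_train(drive, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential v integrates
    the input drive, leaks toward rest, and emits a spike on crossing threshold."""
    v, spikes = 0.0, np.zeros_like(drive)
    for i, inp in enumerate(drive):
        v += (dt / tau) * (-v + inp)     # leaky integration toward the drive level
        if v >= v_thresh:
            spikes[i], v = 1.0, v_reset  # fire, then reset
    return spikes

# A stronger sensor signal yields a higher output spike rate: the
# amplitude-to-frequency conversion of the tactile pathway described here.
light_touch = lif_spike_train(np.full(1000, 1.2))
firm_touch = lif_spike_train(np.full(1000, 3.0))
print(light_touch.sum(), firm_touch.sum())  # the firm press spikes far more often
```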

In this regard, we printed these transistors, nanowire-based transistors; a large number of them were printed on a flexible substrate. We characterized these transistors and then evaluated their learning behavior using excitatory and inhibitory functions. This is the pulse you apply, a positive pulse or a negative pulse, and based on that, modulation takes place as the number of pulses increases.

The fundamental principle behind this is simple. In the case of an inhibitory signal, at time t < 0 the gate voltage equals V_rest, and the charge carriers at the interface are distributed in a certain way. When you apply a pulse, a redistribution takes place, and when you remove it, the carriers redistribute again, but at this point the state may not return to its initial value. Because of this change at the interface, over a period of time, short term or long term depending on the interface properties, the distribution can be different, and you can say this is how the transistor learns.

We evaluated this behavior of the transistor in this way: when we applied, for example, a higher voltage, the frequency of the signal would change; that was amplitude-to-frequency conversion. We then evaluated the efficacy of this approach in robotics by applying the signal before learning. For example, you see here the user pressing the sensor, but nothing happens; once the transistor has learned, after a couple of presses, the robot starts to react to it.

There are also electronics behind it; it's not fully printed, as you can see. Some PCBs and breadboards are still there, while the synaptic transistor part was printed. Our goal is to print everything and make it quite small.

Another example is the temperature sensor. This is another printed sensor, and in this case we mimicked the skin in the sense that the sensitive material, which is again nanowires, (inaudible) nanowires, is embedded in PDMS, a soft material like our skin. The PDMS was not thick, about 100 micrometers. The good point here is that despite the outer layer being a thermal insulator, the sensor worked nicely.

In this case the sensor does not learn: you collect the data as we normally do these days and then apply a machine learning algorithm. But as in the previous example with the synaptic transistor, the same learning approach could also be implemented for temperature sensors.

These are the printed temperature sensors; the silver-colored lines that you see there are all nanowires. We tested these sensors over a large number of heating and cooling cycles, and the response was quite good: less than a second rise time, with recovery time of the same order, which is quite good for temperature sensing. We tested them from 5 to 55 degrees Celsius, on different body parts as well.

The video here shows the pain reflex, temperature-related pain; in this case the sensors are again placed on the fingertip of a robotic hand. One sensor is shown here, and the video shows an extreme scenario where a red-hot rod of about 400 degrees Celsius was brought close to the robot, and the robot reacted to it.

So if we put such sensors and electronics together, and if we make the device itself learn over a period of time, then the behavioral aspect can be captured by a single device, fewer devices are needed to capture such behavior over a long period of time, and that would also help us resolve problems related to integration at the system level, as well as issues such as power.

I would like to conclude by saying that physical, chemical, and biological sensors have been investigated extensively, and they are important for measuring various parameters, enabling advances not only in the behavioral sciences and behavioral quantification but also in areas such as robotics, bionics, healthcare, IoT, and wearables. At the same time, I would say sensors alone may not be sufficient, particularly in dynamic environments, where learning and forgetting capabilities are important and can help unravel changes over a period of time, as in behavioral studies. In this regard, the intrinsic properties of sensing materials can also be exploited, in addition to the structure of the sensors.

So that is where I would like to end my talk. Thank you very much.

And now I invite Wei Gao to present his part.

WEI GAO: Hi, everyone. It's wonderful to have this great opportunity to share our recent research on skin-interfaced wearable biosensors.

So, as we know, wearables can play a very important role in personalized healthcare, because they can continuously collect data from our body and tell us what's going on with our health. But if you look at the commercially available health monitors, like the Apple Watch or Fitbit, they mainly track physical activities and vital signs; they cannot provide more useful information about our health at the molecular level.

So we see a major gap, which is also a great opportunity, in the field of wearable biosensors: how can we perform physiological monitoring at a molecular level, continuously and ideally noninvasively? Think about continuous chemical sensing. We know there is continuous glucose monitoring (CGM) right now, but that is limited to glucose.

So we are looking at one important body fluid, human sweat, which we can retrieve conveniently, continuously, and noninvasively, and surprisingly, sweat contains many important biomarkers, including a variety of electrolytes, metabolites, nutrients, over 300 proteins, and many different types of peptides and hormones, including multiple important stress hormones. We can also identify different types of substances and drugs from human skin.

Imagine we could develop a sweatband, a wearable sensor, that can analyze different types of chemical biomarkers from the skin continuously and noninvasively. We could use this chemical information for a variety of fundamental and clinical investigations, especially toward biomarker discovery. By combining chemical information with physical information through AI and machine learning, we can discover the intrinsic role of each biomarker, or combination of analytes and biomarkers, in disease management.

Seven years ago, we presented a fully integrated wearable system that can perform multiplexed analysis of several metabolites, like glucose and lactate, and electrolytes such as sodium and potassium, along with skin temperature. This is a wireless system that can continuously collect data, process it, and send it to a cellphone through Bluetooth. We have a cellphone app as well: you can read the data in real time, analyze the concentrations, and save the data to a computer or the cellphone.

This way, we can continuously collect data during our daily activities. We also made a high-performance sensor patch in a mass-producible way, using laser engraving, for example. We can make laser-engraved graphene biosensors that analyze biomarkers in human sweat very sensitively, and the same process can also be used to make physical sensors that monitor cardiac activity or other vital signs such as respiration. We can also use laser engraving to make microfluidics that sample sweat efficiently, to get real-time information out of the sweat.

We can also mass-produce these chemical biosensors using inkjet printing. In this case we use different nanomaterials; we can customize the ink, making our own nanomaterial inks to prepare different sensors suitable for detecting different types of low-concentration biomarkers.

Since we are talking about sweat sensors, many people ask: what if I don't have sweat? To monitor chemical or health information continuously, 24/7, we had to make sweat accessible without the need for vigorous exercise. That meant we needed to learn how the body generates sweat.

Basically, for thermoregulation, the sympathetic neurons innervating the sweat gland secrete acetylcholine, a neurotransmitter that binds to muscarinic receptors and enables the sweating process. Instead of relying on sympathetic-neuron-induced acetylcholine, we can locally and transdermally deliver a certain type of neurotransmitter analog or drug to tell the sweat glands to sweat.

In this case, we developed a new platform that can be used to access sweat on demand, without any vigorous exercise. We apply a very small current, around 50 microamperes, for only a few minutes, to deliver a drug called carbachol that locally induces sweating for a very long period of time. Again, it is only a few minutes of very small current, and the user will not even feel anything, but you get continuous sweat for hours, or up to two days. For me it works for three days if I do a five-minute stimulation.

We have microfluidics that continuously sample sweat in real time. You can see here the sweat coming out; with this black dye, you can see the old sweat being pushed out by new sweat very efficiently. The sensor sits in the middle reservoir, and you can see the black dye getting flushed out. So we get real-time chemical sensing in this case.

This really opens the door for continuous monitoring across activities. We also made our sensors very stable: the enzymatic sensors and ion-selective sensors are stable for days without obvious drift, so we can continuously monitor our health throughout daily activity. We also improved the energy and power system: we can even use sweat or weak indoor light to power every single module of this wireless system, including sweat induction, different types of wearable chemical sensing (amperometric, potentiometric, voltammetric, and impedance-based), and Bluetooth wireless communication, without using any battery at all.

Now, regarding stress and mental health: we know that stress is very important. It is related to over 95 percent of diseases, including cancers, anxiety, depression, PTSD, and cardiovascular disease. Yet even though stress is so important, people still rely on questionnaires, the current gold standard, which can be very subjective. The same applies to depression and suicide risk, still assessed by questionnaire. So can we find a way, using wearable sensors, to quantify stress or mental health? People have tried heart rate, temperature, or skin conductance, but these are generally not condition-specific and lack accuracy.

So we are asking: is there any chemical information we can get from the body? Of course, one of the most well-known stress biomarkers is the stress hormone cortisol. But drawing blood to monitor blood cortisol is itself traumatic, because the blood draw induces stress. That's why we apply our sweat sensor, using the laser-engraved graphene sensor, to quantify cortisol in human sweat. We made the sensor so accurate and efficient that we can quantify cortisol within a minute.

We also identified a circadian rhythm in sweat cortisol, similar to blood. Cortisol is important, but you have to know the baseline, because cortisol fluctuates every day: high in the morning, low in the evening. The pattern of cortisol is very important for people with depression, PTSD, or even diabetes, whose circadian cortisol rhythms differ from those of healthy people.
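
[Editor's note: the sketch below shows one standard way to summarize such a daily pattern, a least-squares cosinor fit; the values are simulated for illustration, and this is not necessarily the analysis the group used.]

```python
import numpy as np

def fit_cosinor(t_hours, y, period=24.0):
    """Least-squares cosinor fit y ~ M + A*cos(w*t + phi), a common way to
    summarize a circadian profile by mesor (M), amplitude (A), and phase (phi)."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    mesor, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
    return mesor, np.hypot(b, c), np.arctan2(-c, b)  # mesor, amplitude, acrophase

# Simulated morning-peaking, cortisol-like profile sampled every 2 h for 2 days.
t = np.arange(0.0, 48.0, 2.0)
y = 12.0 + 8.0 * np.cos(2.0 * np.pi * (t - 8.0) / 24.0)
y += np.random.default_rng(1).normal(0.0, 1.0, t.size)
print(fit_cosinor(t, y))  # recovers mesor ~12 and amplitude ~8
```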

We tracked the circadian rhythm of sweat cortisol in subjects over a six- to seven-day period. The correlation between sweat and serum cortisol levels is quite high, and we ran different types of stress-response studies, applying psychological and physiological stressors; as physiological stressors we applied aerobic exercise and a cold pressor test.

We can see that the stressors cause a rapid increase in both blood and sweat cortisol levels, and our sensor can rapidly capture the changes in sweat.

Sweat contains many other stress-related biomarkers besides cortisol. If you look at the pathway by which our body produces neurotransmitters and stress hormones, phenylalanine, tyrosine, L-dopa, dopamine, noradrenaline, and adrenaline are all in the process. We can identify most of these chemicals in human sweat, and they correlate well with blood overall. And stress biomarkers are not limited to these chemicals: serum glucose is also a very strong stress indicator. Even without dietary intake, when we are under stress, glucose levels quickly increase in the body, so you could apply CGM to monitor stress as well.

So we can use this wearable sweat sensor to quantify stress biomarkers from the skin. For example, here we use the laser-engraved graphene to quantify uric acid and tyrosine; tyrosine is a precursor for neurotransmitter production. We can directly measure the oxidation of uric acid and tyrosine from the skin using this highly sensitive graphene electrode.

But again, this doesn't really give us specificity if we want to monitor electroactive molecules at very low concentrations. That's why, last year, we presented a new approach: a biomimetic wearable sensor using molecularly imprinted polymers, which act as artificial antibodies that can selectively bind, or recognize, different types of biomarkers at very low concentrations. It is a universal approach that we can apply to monitor broad-spectrum biomarkers continuously. To transduce the recognition event into a measurable electrochemical signal, we introduce a redox probe between the molecularly imprinted polymer and the laser-engraved graphene; you can then quantify the change in the redox signal to determine the concentration of the specific biomarker.

Used in this way, we can monitor broad-spectrum markers, as I mentioned earlier. Here we show continuous tracking of every essential amino acid and vitamin, and of course many other stress-related biomarkers. For example, we can watch amino acid or metabolite levels change over time during daily activity, such as after dietary intake. We also recruited different types of patients: we have evaluated our sensors on subjects with obesity and diabetes, and recently we have also had access to PTSD, COPD, and heart failure patient populations.

In general, how can we apply this to quantify stress? We have already shown this sensor can be used to monitor stress biomarkers; but specifically, how can we quantify different types of stress and stress levels? We know the stress response is multidimensional: it affects not only cardiac activity but also our metabolic processes.

In this work, we showed that we can couple typical vital-sign collection, temperature, galvanic skin response (skin conductance), and pulse, which are very important for monitoring stress, with a series of chemical sensors for analytes that are also known to respond to stress. We tested this multimodal sensor patch in human trials, applying different types of stressors, physiological and psychological. Using AI and machine learning, we can train models that distinguish each stressor and also quantify stress accurately, at over 98 to 99 percent confidence.
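
[Editor's note: a skeletal sketch of that kind of supervised pipeline is shown below; the features and labels are random placeholders, and the random-forest model is a stand-in, not necessarily the model the group used.]

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row is one time window; the columns would mix physical features (heart
# rate, skin temperature, skin conductance, pulse morphology) with chemical
# ones (e.g., sweat cortisol, glucose). Here they are synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 12))
y = rng.integers(0, 3, size=300)  # 0 = baseline, 1 = physical, 2 = psychological

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
# With real, informative sensor features this score reflects how separable the
# stressor classes are; with random placeholders it stays near chance (~0.33).
print(scores.mean())
```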

In summary, I showed that we can monitor chemical information through noninvasive, continuous sweat analysis. I believe this type of wearable chemical sensor will play a very important role in personalized healthcare, and also in stress and mental health assessment. Especially if we quantify this multimodal, multiplexed physical and chemical information continuously and combine it with AI and machine learning, we can really enable biomarker discovery, identifying biomarkers for stress and mental health.

In the end, I would like to thank my group, our collaborators in different medical centers, and our funding support. Thank you very much for your attention.

Discussants

J-C CHIAO: Now I'll welcome all the presenters to the stage, along with our discussants. We have Dr. Besio from the University of Rhode Island and Dr. Chris Roberts from UT El Paso, and we should have Dr. Inan online from Georgia Tech.

For time's sake, we will kick off the discussion with the discussants asking questions of our presenters. I don't know whether Dr. Besio or Dr. Roberts would like to start.

JOHN ROBERTS: Sure. Thank you all for the fascinating talks. This is kind of an open question, I think, to any of you. As sensor and electronics experts, what do you feel is the largest challenge in applying your technologies to the brain behavior quantification effort? Can you speak about your use of commercial electronics and packaging to enable moving your research into the clinical realm?

JOHN ROGERS: Maybe I can take a stab at that. I would say a good fraction of the work that we do in my group is very much oriented around development of technologies that can scale and translate and be manufactured in a cost-effective way to allow real, meaningful population-scale studies. So if there is a route to a commercializable solution where we're leveraging componentry and manufacturing processes that are already established for consumer electronics gadgetry, we try to do that. It's usually the case that modified and adapted versions of those kinds of platforms can have utility, but you have to add innovation on top of that. So most of what we do is a blend, but we try to leverage what's already available to the greatest possible extent.

You know, and that's what we do at a translational level. As an academic group you should be doing really exploratory, next-generation, far-out stuff, but for real practical studies of brain and behavior science, you need devices that go beyond what can be constructed by hand, by a grad student, in an academic lab.

So I think there's a great synergy between those two styles of work, and if you don't have to reinvent something, using a commercial solution is kind of the way we approach things.

RAVINDER DAHIYA: I wanted to add to what John said, and I tried to include this in my presentation, because behavior has a time component. One of the challenges I'm seeing is the lack of learning capability within the hardware itself, which leaves us no choice but to use conventional electronics with the same strategies: you have the sensor, you have digitization of the data, and then the rest of the processing. So if we can look at new paradigms, new ways in which sensors are fabricated such that they exploit intrinsic properties of the material to also have learning capability, then it will help resolve issues such as reducing the number of devices, lowering integration challenges, and lowering power consumption. That way we could look in new directions.

J-C CHIAO: Could you speak into the microphone? Online people may have trouble.

WEI GAO: In the case of the wearable sensor, we want to collect data on behavior for mental health assessment. We think the capability of continuous data collection is very important, especially when we want to integrate more and more sensors; we need multiplexed, real-time data collection. Off-the-shelf ICs have much higher capability overall, which is why we try to assemble these off-the-shelf ICs onto a flexible wearable PCB that allows us to collect data in real time. I think this type of system integration approach is very important for us to reliably access data for biomedical applications, in this case in particular for behavior and mental health assessment.

WALTER BESIO: I have another general type of question. I'm curious -- well, it was amazing that you all were able to get your talks done in 12 minutes, describing all of that material. But what applications are there that you haven't been able to develop sensors for yet, that might fit into BBQS?

WEI GAO: That seems like a tricky question. John, would you like to take it?

JOHN ROGERS: Well, maybe just as a very practical, straightforward answer: you could consider things 10 or 20 years out, and there are a lot of opportunities in that space to think about. Like Wei Gao, we're also interested in chemical biomarkers and multimodal assessments, not just biophysical but biochemical. So I think there are a lot of things.

But an immediate example of a challenge is EEG in a realistic, practical scenario, just due to the very low signal levels and the complications associated with noise, especially motion-induced artifacts and Johnson noise picked up by the leads that connect the electrodes to the data acquisition electronics.

So that's something we struggle with. We can achieve exceptional performance in a very controlled setting, but in daily activities it becomes pretty challenging. So, to put it in a more general sense, measuring brain activity directly in a noninvasive way is the challenge.

I think functional NIRS and EEG are great measurement modalities, just because there's an established base of knowledge around how to interpret that kind of data, and that's a great starting point, because behavioral scientists, clinical physicians, and so on know what to do with that data. So you start there and then add advanced modalities on top of that. As a practical consideration, EEG is pretty hard, but it's a pretty important measurement to make.

OMER INAN: Hopefully, I won't repeat something you already talked about, but my question relates especially to Dr. Rogers' talk, where he was discussing the quantification of pain in kids. It brought up some really interesting questions from my end that seem to be pretty common in this field overall, relating to the gold standard. How do we take something like pain, for example, where, sure, there are pain surveys and a kind of subjective quantification of pain, but that can fall into the pitfalls of interoceptive challenges, especially when it comes to kids; it was always very difficult to find out how much our kids were hurting after some sports injury or something.

So how do we think about this for these sorts of technologies, especially when we're covering the breadth from wearables measuring physical or biophysical parameters out to molecular and maybe even further nanotechnology kinds of work? How do we make sure that what we're measuring tracks the gold standard, and what is the gold standard, and how should we be going about that? What do you guys think?

JOHN ROGERS: Maybe I can say a couple things, and I think Wei is deep into this space as well, so let him comment also. These are great points, Omer. I mean, what is ground truth, right? I think it's not clear. There are surveys that are used, pretty standard forms that nurses use to assess pain levels, but what we find -- and I don't want to cast any aspersions on nurses, because they've been great collaborators for us -- is that they're actually incentivized to downgrade pain, because the performance of a hospital is scored in part by the levels of pain its patients are experiencing. So that distorts the outcomes of those surveys.

I think it's a great question, something we certainly struggle with, but those surveys are the best we can do at the moment. Maybe some of these biochemical markers will help; we've done cortisol in sweat as well, and I think Wei is probably deeper into that than we are, but there are some novel sensor modalities emerging. The idea of looking at heart rate variability and correlating that to mental distress is a pretty old concept, so you want to fold that into whatever you're doing, but all sorts of additional things can be measured now, and you can track things at multiple body locations.

Anyway, I think it's a rich space, and there are tremendously diverse data streams that people can now access. We have kind of an interesting protocol where pain is being induced, so we know where the changes should occur; the magnitudes of those changes and how they connect to the surveys is an area of ongoing work. But I think it's very interesting, pediatrics especially, because as I mentioned before, they can't really vocalize what they're feeling. So that's very much our emphasis.

But I am sure Wei has deeper thoughts on that.

WEI GAO: Thanks. I don't have deeper thoughts, but in general, I think pain is like stress. I've been talking with clinicians about both pain and stress, and both are very hard to assess; the gold standard for both is based on subjective questionnaires. So what we can do, I think, is really multimodal sensing. I know that for pain assessment people try to use different imaging modalities and apply AI and machine learning; for stress, we apply a very similar approach by combining vital signs with chemical information. More data is better.

So right now we still have to rely on the gold standard, which is the subjective questionnaire, but we can train a model, and when the subject number is large enough, the model can be very robust. We can build a model that predicts the stress score or pain score much better than an individual's questionnaire response; this AI model would in general be more robust than a personal answer. But we have to rely on these subjective questionnaires right now. I think AI with multimodal sensors will be the way to go.
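A minimal sketch of that idea with synthetic placeholder data: pool questionnaire-labeled windows across many subjects, train a regressor, and validate across subjects so the model has to generalize to people it has never seen.

```python
# Minimal sketch (illustrative only): pooling multimodal sensor features
# across subjects to predict a questionnaire-based stress score, validated
# across subjects rather than within them.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 6))             # multimodal features per window
y = rng.uniform(0, 10, size=400)          # self-reported stress score (0-10)
subjects = rng.integers(0, 20, size=400)  # subject ID for each window

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y, groups=subjects,
                         cv=GroupKFold(n_splits=5), scoring="r2")
print(scores.mean())  # near zero here, since the toy data carry no signal
```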

J-C CHIAO: Yes, that's what this workshop is about, because as engineers, we need our neuroscience and behavioral science colleagues' help to define the standards. So I think we will skip the online question. We have an in-person question here.

VAL GRITSENKO: Hi, I am Val Gritsenko from West Virginia University, and my background is neuroscience. Maybe you could comment more on the future of AI applications here, because we get a lot of data, and making sense of the data from these sensors is very important to do right. We want to gain insight into the mechanisms underlying these conditions, deficits, and pain levels, and so far AI seems notoriously bad at providing insight into mechanisms. So maybe you can expand on that.

J-C CHIAO: Thank you. I think, Ravinder, your work is very related to this. Maybe you can answer the question.

RAVINDER DAHIYA: Well, the current approach to using AI is that you collect the sensor data and then you start applying AI algorithms. The way I see it, we don't really gain much, because we have already collected the data. So in the future I see AI entering more into the hardware itself: transistors and circuits will be developed in such a way that they process the data as an algorithm would, which would bring these together and reduce the amount of data. Along with the type of devices I was mentioning, devices with learning capability, we may see more and more cognitive circuits going forward.

J-C CHIAO: Thank you, Ravinder. We only have time for one question, because we have to keep on the schedule, please.

KEN KISHIDA: I am just reacting a bit to Dr. Inan, Dr. Rogers, and Dr. Gao. I am Ken Kishida, an associate professor at Wake Forest School of Medicine in North Carolina. I am a neuroscientist, and I'm really interested in the biological basis of subjective experience.

Regarding the comment and discussion earlier about jumping to physiological biomarkers that correlate with these kinds of subjective surveys and assessments, it sounded to me like there's still a huge gap that the research could try to solve. It is one thing to correlate physiological measures with subjective reports, but we don't really understand how they are mechanistically connected.

So maybe the discussants can talk about how their tools and what they're measuring might form one side of the bridge we need to build to understand how the brain generates these feelings and these reports. Right now it sounds like we understand one side of the spectrum and the other, but not how they're connected; we're building correlations, but we may be missing the actual feeling of pain and how it comes about.

J-C CHIAO: Thanks. This is the reason we have this workshop, to build that bridge. Who would like to take that question?

JOHN ROGERS: I think it's a great question. That's why people are scared to try to answer it. You are getting right to the core of the issue, right? And maybe that's the opportunity that might arise from multidisciplinary collaborative work among this community, because I think that connection is missing; I would say that connection hasn't really even been firmly established yet. So I think you have to work on both aspects. Once you have a set of measurements, biophysical and biochemical, that you know can reliably connect to some aspect of pain experience or brain function, then you can begin to tease out how these parameters are actually related.

And so we do machine learning, but as an engineer, I think you can't use machine learning and AI as a crutch. That's kind of a flawed approach, because not only is it a black box, it can learn features of the data that aren't really intrinsic to the response you're seeking to capture. So there needs to be a continued emphasis on sensor fidelity, reliability, and multimodality; AI will pair with that, but the data has to be accurate and reliable.

But one thing that we're doing, just to give you an example: we try to connect physiological measurements that you can make noninvasively to blood pressure, a continuing interest that addresses a very important unmet clinical need. How do you do beat-to-beat noninvasive blood pressure? You can measure things like ECG, SCG, and PPG, collect all these data streams, and then try to develop a model that connects those measured parameters to, say, arterial line measurements of blood pressure.

So you can do that, but then, getting to your question, how do you tease out the key features that are really driving that blood pressure estimation? There are ways now in explainable AI to use the algorithms not only to establish that connection but to tease out the key features of the data that drive the prediction, and I think that sort of strategy could be useful in answering your question around brain and behavior as well.
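As an illustration of that strategy, and not Dr. Rogers' actual model, the following sketch fits a regressor on hypothetical waveform-derived features and then applies permutation importance, one simple explainable-AI tool, to rank the features driving the prediction.

```python
# Minimal sketch (feature names and data are illustrative assumptions):
# estimating blood pressure from noninvasive waveform features, then using
# permutation importance to see which features drive the prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["pulse_arrival_time", "ppg_amplitude", "scg_ao_timing", "heart_rate"]
X = rng.normal(size=(500, len(features)))
# Synthetic "arterial line" ground truth driven mostly by the first
# feature, so the importance ranking should recover it.
y = 120 - 8.0 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(scale=2.0, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:22s} {score:.3f}")
```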

So that is kind of the way I would see things play out.

J-C CHIAO: Sorry to cut in. Our online attendees are getting frantic; they would like to get at least one question in. Please stick around for the whole workshop, because at the end of the day tomorrow we're going to address the AI and machine learning issue.

Do we have one question from online? We only have one minute.

STAFF: There are a lot of interesting questions online, if the panelists could take a look at them. I'm picking one that's broadly interesting. Has anyone considered sensors that can pick up on food intake behaviors in humans in ways that are more scalable, perhaps wearable microphones for bowel sounds or anything like this?

JOHN ROGERS: I don't want to dominate. We are deep into that issue, so I'd be happy to talk to you about it offline. Many activities in that space.

J-C CHIAO: So this afternoon, we're going to have a session about remote sensing and wearables, and at that time, I think the presenter will talk about that issue.

Okay, we are really behind. We only have about 9 minutes for the break. So thank you all very much. I know this is a very deep topic; we probably need months to talk about these issues.

Please come back at 11:40, 8 minutes from now, and we will start the second session.

(Break)

YVONNE BENNETT: A couple of housekeeping suggestions as we start up again. Speakers and discussants, when you come on stage, please push the green push-to-talk button. Make sure it's on, and come very close to the microphone so that others online can hear as well.

So at this time, I would like to introduce our next Session 1 moderator for Part B, Dr. Svetlana Tatic-Lucic. Svetlana is a program director for communication circuits and sensing systems in the Division of Electrical Communications and Cyber Systems of the Engineering Directorate at the National Science Foundation. She manages the interdisciplinary science and engineering thrust in the area of sensing and biomedical applications of advanced technologies, while also addressing the underlying fundamental research.

Svetlana joined NSF as a program director in November 2021. She is also a full professor with a joint appointment between the bioengineering and electrical and computer engineering departments at Lehigh University, where she served as the associate dean in the College of Engineering.

Please, welcome Dr. Tatic-Lucic.

Sensors Introduction (Part B)

SVETLANA TATIC-LUCIC: Thank you for this kind introduction. My task is to introduce you to the second half of the sensor session, in which we'll have three speakers. Our first speaker is Professor Reza Ghodssi from the University of Maryland; the second, Professor Zhenan Bao from Stanford University; and the third, Professor Andrei Shkel from the University of California, Irvine. They are going to introduce each other as they go, and then we'll have four discussants: Professor Omer Inan from the Georgia Institute of Technology; Professor Walter Besio from the University of Rhode Island; Professor Chris Roberts from the University of Texas at El Paso; and Professor Satrajit Ghosh from MIT.

Professor Ghodssi, the floor is yours.

REZA GHODSSI: Good morning. I hope to be able to explain some of the ongoing work in our group in this area, particularly with a focus on the neurotransmitter molecule serotonin, which I would like to talk about today.

So I'm at the University of Maryland and I'm affiliated with electrical and computer engineering, Institute for Systems Research, and also Fischell Institute for Biomedical Devices.

Let me start the talk by pointing out that so much exciting research related to the gut microbiome and the gut-brain axis is now surfacing that it's not only exciting but overwhelming. The latest paper, published last week by our colleagues at Harvard in collaboration with UIUC, is a good one and relevant to what I want to talk about today. It offers human evidence supporting linkages of emotions and regulatory processes with the gut microbiome, with an emphasis on positive and negative emotions associated with certain bacteria in the gut, and it really highlights the impact of the gut microbiome on our understanding of emotions and their association with physical health.

On that note, I would like to touch upon some of the technologies we are developing in my group to measure this neurotransmitter molecule that is so important in this pathway, doing it in an in vitro and what we call ex vivo environment, and then ultimately, hopefully, an in vivo environment, to generate reliable data that would actually be helpful for colleagues in neuroscience. That's really the focus of our work, and I'm going to highlight some of the lessons we learned at the end of the talk.

The gut-brain axis is a bidirectional communication pathway between the GI tract and the brain, mediated by molecules that are secreted and that interact between the enteric nervous system and the central nervous system. When it's functioning correctly, the GBA is highly beneficial in many ways for proper immune and physiological functioning, but when the pathway is dysregulated in the gut, this can also lead to dysregulation in the brain and sometimes to neurological disease.

Some examples of neurological disorders that are so challenging for scientists, engineers, and of course medical doctors include depression, anxiety, stress, and so forth; the list is there. The GI tract disorders include some major areas: leaky gut syndrome, IBS, and IBD. It's really the communication between these two that results in some of the diseases and illnesses we are facing today.

At the heart of this are neurotransmitter molecules, as I mentioned. Serotonin is one of them: a biomarker and one of the key molecules that impacts this communication pathway. It triggers the ENS, which sends signals to the vagus nerve and then to the brain, and it's fair to say that 95 percent of serotonin is produced in the gut, although there are also other sources such as neurons, immune cells, and of course bacteria.

But really, I think what it does is contribute significantly to inflammation throughout the gastrointestinal tract, and it is a link to inflammatory neurological conditions via the gut-brain axis. So it's important to understand not only what this molecule is doing but also whether we can actually measure it, measure its concentration, and see how it behaves.

On that note, the traditional measurements used to determine the concentration of serotonin normally include techniques such as HPLC and ELISA, but these techniques cannot detect the molecule in real time, they're laborious, and, equally important, they lack spatiotemporal resolution.

Electrochemical sensing is one approach for making this measurement in real time; it's suitable for miniaturization and, importantly, it's low power. So there were some really good reasons for us to pick this method, but of course there are challenges involved, including fouling, sensitivity and selectivity, which are at the heart of a lot of the biosensing we work with, biocompatibility, and access to the basolateral ECC region, which is really important.

In our group we have taken, as I said, three approaches: starting with an in vitro model, trying to understand how we can measure this molecule in a sort of phantom-type environment that includes a Transwell; then attempting to measure the molecule in real time using an animal model, in this case a crayfish; and ultimately doing it in real time for a human model using ingestible capsules.

The first approach, the Transwell, includes two types of what I call interfacial sensors: one provides information on the dynamics of the cells that secrete serotonin, the ECC-like cells, and another set of sensors provides information on the secretion of the molecule, how much is actually being secreted. All of this is done on a porous membrane with a uniform pore diameter, trying to measure the molecule as it is secreted by these ECC cells onto the sensors that measure it at the bottom.

If you look at the cross-section, this is what it looks like. The platform includes, as I said, electrochemical sensors that utilize gold, an inert, biocompatible material, and carbon nanotubes, which increase the surface area for binding, preferentially bind cationic indoles like 5-HT, and have shown promise in reducing fouling, which is so important in measuring this molecule.

We are, of course, trying to do this in both a spatial and a temporal manner, and the types of cells we are using are listed there, but the whole idea is: can we actually make this measurement as the molecule is being secreted? If you look at some of the results, on the lefthand side we show detection of these molecules in a typical flask environment stimulated with butyrate, and we see reliable differences in the measurement before and after this stimulation.

On the righthand side, we have done the same measurement in a Transwell environment, which shows not only that we can measure serotonin reliably, but also a significant difference between before and after butyrate stimulation, and, importantly, increased sensitivity with longer accumulation time.

Moving forward, expanding this work toward more real-time measurement, we see from our best practices that there are problems such as large footprint and the nonpenetrative aspect. So we worked with the crayfish, which is a simplified model for this type of behavioral and neurological measurement, in collaboration with my colleague Jens Herberholz at Maryland. The key requirements here are sensing in environments with biological interferences, a small sample volume, high spatial resolution, and of course a minimally invasive approach.

I'm not going to go through the details, but the method in this case uses carbon fiber microelectrodes that we modify with surface coatings such as Nafion and, again in this case, carbon nanotubes, etching the surfaces to create more surface binding, which is key in this type of measurement. This has proved quite successful, particularly in the improved sensitivity we have achieved.

If you look at some of the results, on the lefthand side we show measurement of this molecule using these carbon fiber microelectrodes in an environment where the cells have secreted the molecule, so we are able to measure it very reliably before and after stimulation, in this case with AITC, and we have measured concentrations as low as 120 nanomolar. Looking at the righthand side, you see that we segmented the tissue, obviously this is more like homogenized crayfish tissue that we worked with, and using a similar method we again found a significant difference before and after secretion of the molecule.

So this shows promise that the method is successful, and moving forward we are working on demonstrating it in a wireless format. Of course this requires more miniaturization and integration of the components. Our first attempt was to make this measurement after injecting serotonin into the abdomen of the crayfish, measuring it in real time by scaling down and integrating the electronic components and mounting them on the body of the crayfish, and I was promised that this is harmless by my colleague, who has worked on this for many years.

Looking at the righthand graphs, you see that we have shown reliable measurement of this molecule, with saline as a control. What is interesting is that when we repeat this, the measured peak is reduced, and that goes back to what I was saying about the problem of fouling: you can't really do this for a long time, but for a short time it is very reliable.

I'm coming to the end of my talk, so I'll do this very quickly. We are now attempting to make this measurement in an ingestible capsule format, initially in vitro and then moving on to preclinical work, and we have shown a limit of detection as low as 140 nanomolar, which is promising. But again, this is done on the bench, and we are moving forward to do it in a preclinical environment, as I said.

Just to highlight, I'm not going to go over the points on the lefthand side, but among the challenges we are facing today: we need to deal with the reduced sensitivity caused by biofouling in long-term monitoring; the limited penetration of these sensors into ex vivo tissue is a challenge we need to address; we need to detect lower concentrations of these molecules; and of course miniaturization is an ongoing challenge that we always have to address.

But the point I would like to make at the end is that you really have to pursue all three of these methods concurrently, not sequentially, because information keeps surfacing from the in vitro work, from the ex vivo work, and also from the preclinical, hopefully ingestible, devices.

So this is the scope of my program. We work on different areas related to ingestible devices, and I'll be more than happy to talk to you about some of them. On that note, I want to acknowledge my group and the funding agencies, and particularly the young, bright, innovative people who work with me and whom I have the privilege to advise. Thank you.

So now I'm introducing my colleague, Dr. Bao, from Stanford, and she is going to continue the presentations.

ZHENAN BAO: Thanks very much. It is a pleasure to be here. I was trained as a chemist, and in my group we are focusing on the issue of making sensors more compliant, more integratable, and more biocompatible with the human body. This is what we call skin-inspired electronic sensors.

In this talk, I'm going to describe several different kinds of sensors we have been developing, including physical sensors that measure force and temperature, electrophysiological sensors, and finally some neurochemical sensors. What we are focusing on is trying to change the status quo of current sensors' limitations: being very bulky, being difficult to integrate with the human body, or not providing sufficient information.

What we envision is sensors that are very comfortable, invisible, imperceptible, and biocompatible, and able to measure information autonomously. This is the vision we have been working towards for many years: developing electronics and sensor systems that take the form factor of human skin and are based entirely on materials that are soft and intrinsically stretchable. These are a new generation of electronics. They are not yet commercially available, so it will still take some development to get them widely spread and widely used, but I hope to convince you that the effort is well worth it.

In terms of examples of sensors we have been working with, there are the several categories I outlined at the beginning. The first group of sensors measures physical information. The pressure sensors are flexible and stretchable and measure very fine forces, basically as sensitive as human touch.

Directional forces can also be measured by incorporating bumps into the sensors; strain sensors are shown over there; and combining the pressure and strain sensors allows us to differentiate the type of object the sensors are in contact with, whether soft or rigid. Finally, there are temperature sensors. These are made so that they are not sensitive to pressure or strain, since many physical sensors are essentially sensitive to everything; here it is a circuit-based sensor that is only sensitive to changes in temperature.

Some of the applications are shown here. These sensors can be made into arrays, large or small. They can be made into a fiber form factor that is easily implantable. And to allow them to be easily attached to the human body, especially with multiple sensors, because we don't want many bulky sensors wrapped around the body in different locations, we have been developing what we call the BodyNet sensor network.

Basically, these sensors take a sticker format, with simple sensors and circuits integrated onto them, while the bulkier electronics, the battery, and the wireless communication are all incorporated into clothing very close to the sensor tag. That way, the sensors do not constrain the movement of the human body, and the bulky devices are placed where we have the space.

For example, in this case we have multiple sensors stuck onto the arms, chest, and legs. This allows measurement of heart rate through pulse wave measurement, breathing rate, and body movement. All of these are just simple stickers, and a central Bluetooth reader placed on the clothing sends the information to the cellphone.

To showcase that such sensors can also be made very small and compact for implanted measurements, we made sensors implanted between the brain and the skull to allow monitoring of intracranial pressure. This still requires the user to place the reader near the head of the mouse to read out the information. The one on the right is an implanted sensor that can read the blood flow in a major artery; if there is a clot in the artery, this kind of sensor can detect the change in blood flow and predict the clot from 20 to 30 centimeters away from the sensor.

This is a recent application where we integrated such strain sensors into a stretchable material implanted into the intestine of a mouse, with the mouse allowed to move freely. The strain sensor allows detection of intestinal motility; since the intestine is hidden inside the body and a camera is too rigid to be inserted while the animal moves around freely, this kind of sensor can potentially allow us to monitor intestinal motility. Furthermore, we have integrated electrical stimulation and a serotonin sensor into this kind of fiber as well.

This is another example of ultra-low-profile skin sensors, used for tracking motion. Here the sensor is basically the silverish material: a gold-coated silver material that is spray-coated onto any part of the body. To determine the detailed movement of all the joints, instead of making a complex sensor array aligned to the different joints, with this ultra-conformal sensor there is no substrate. We just take the ink, a biocompatible ink, spray it onto the finger, and put on the Bluetooth communication tool to send the information to the computer. Through a meta-learning algorithm, we can quickly train the system with just a few strokes: after training on a few touches it can differentiate the shape of an object, or, for a virtual keyboard, after the user types just a few words, the computer can guess the words being typed.

So this could be a very simple system that allows tracking of movement of all different joints, leg or whole body movement, with simple spray on sensors.
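One common flavor of few-shot adaptation consistent with what is described here, though this is an assumption rather than the Stanford group's published algorithm, is a nearest-prototype classifier: average a few calibration examples per gesture into prototypes, then classify new signals by the closest prototype.

```python
# Minimal sketch (an assumption, not the group's actual algorithm): a
# nearest-prototype ("prototypical") classifier that adapts to a new user
# from just a few calibration strokes.
import numpy as np

def fit_prototypes(X_support, y_support):
    """Average the few labeled examples per class into one prototype each."""
    classes = np.unique(y_support)
    return classes, np.stack([X_support[y_support == c].mean(axis=0)
                              for c in classes])

def predict(X_query, classes, prototypes):
    """Assign each query to the class of its nearest prototype."""
    d = np.linalg.norm(X_query[:, None, :] - prototypes[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Toy demo: 3 "gestures", 5 calibration strokes each, in a feature space
# that a pretrained embedding network would normally provide.
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 8)) * 3
X_sup = np.concatenate([c + rng.normal(size=(5, 8)) for c in centers])
y_sup = np.repeat([0, 1, 2], 5)
classes, protos = fit_prototypes(X_sup, y_sup)
print(predict(centers + 0.1 * rng.normal(size=(3, 8)), classes, protos))
```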

Moving on to electrophysiological sensors (the video is a little distorted), here we use stretchable conducting-polymer-based sensors to measure electrophysiological information. The advantage of such a material is not only flexibility and stretchability, but also reduced impedance in contact with tissue. Multiple groups have previously shown that the impedance at the tissue interface can be lowered by several orders of magnitude compared to metal.

The implication is that this can lead to smaller electrode sizes and higher signal-to-noise ratios when using these kinds of electrodes. These are some high-resolution mappings performed using stretchable electrode arrays we fabricated. The first is a 64-channel ECG measurement directly on heart tissue, where being stretchable and conformable accommodates the beating heart; the one in the middle is a high-resolution EMG measurement using stretchable arrays; and finally, under ongoing development, there are high-density EEG and direct insertion into the brain as neural probes.
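A small worked calculation shows why lower interface impedance buys signal-to-noise ratio or smaller electrodes; the impedance values here are illustrative assumptions, not numbers from the talk.

```python
# Minimal sketch (assumed impedance values): thermal (Johnson) noise voltage
# scales as sqrt(4*k*T*R*bandwidth), so lowering electrode interface
# impedance by orders of magnitude directly lowers the noise floor.
import math

K_B, T, BANDWIDTH = 1.380649e-23, 310.0, 1000.0   # J/K, body temp (K), Hz

def johnson_noise_uV(impedance_ohm):
    """RMS thermal noise voltage in microvolts for a resistive impedance."""
    return math.sqrt(4 * K_B * T * impedance_ohm * BANDWIDTH) * 1e6

for label, z in [("metal microelectrode", 1e6),          # assumed 1 Mohm
                 ("conducting-polymer electrode", 1e4)]:  # assumed 10 kohm
    print(f"{label:30s} {z:8.0e} ohm -> {johnson_noise_uV(z):5.2f} uV rms")
```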

The other advantage of these electrodes is the lower voltage required for electrical stimulation, because of the large capacitive component of the electrode. The left side shows a comparison of conducting polymer electrode stimulation and platinum electrode stimulation: the voltage can be significantly decreased while causing the same stimulated leg movement.

On the right is high-resolution stimulation from implanting the electrode array at the brainstem region, where individual nerves on the face of the mouse can be addressed, whether whisker movement or facial nerve movement. One of the advantages of such soft electrodes becomes clear when comparing these low-modulus soft ones with devices that are flexible but have a modulus ten times higher: you can see the high-modulus electrode can easily cut into the brain tissue and cause damage. Another unique opportunity with these sensors is that the devices can be designed to expand with growing tissue and accommodate the growth of the organ.

Finally, in terms of neurochemical sensors, we have been developing a stretchable version with catalytic sites to allow differentiation of dopamine from serotonin. You can see this is also an electrochemical sensor, very similar to the first one, but the catalytic effect allows differentiation of these peaks, down to tens of nanomolar sensitivity.

This is an experiment where we implanted the neurochemical sensor simultaneously in the brain and in the intestine of an awake rat. In this case we gave chocolate to the rat and were able to measure the dopamine level in the brain, and when the serotonin from the chocolate arrived at the intestine, we were able to measure it there as well.

The final message is that an additional opportunity with soft electronics is that we now have the ability to build integrated circuits with soft materials at megahertz speeds. That means the devices can now be expanded, with sensors covering large areas while still maintaining high density.

I see my time is up. This is the summary of the different sensors that I think make it possible to understand a number of behavior-related problems of interest.

With that, I'll just invite the next speaker to the podium to speak.

ANDREI SHKEL: Good afternoon. My name is Andrei Shkel. I will be talking about inertial sensors.

Inertial sensors are used in a variety of applications. They are one of the success stories, I would say, of miniaturization, and a question still remains: can the sensor be used as an analytical tool? I will try to give a few examples and outline some challenges.

Let me make a few historical remarks. Inertial sensors were never intended for what many in this room are using them for. They trace back to innovations in the 19th century, where they were first used to demonstrate that the earth actually rotates; (inaudible) was really the origin of a lot of inertial sensors. Then around 1940, people figured out that you can actually use them for guidance and control, and the military fully embraced these devices. It was a revolutionary development, not at all intended when the sensors were invented, and this carried through to the 21st century; I was lucky to be right at the beginning of all these great developments. So we now see the devices in electronic systems: cameras, gaming platforms, smart health and lifestyle systems. Pretty remarkable.

We need to understand, however, that inertial sensors are very, very different from one another. Some are small and very cheap: $10, or for some applications 50 cents. Others are a little larger, though they can be small as well, and can cost from a quarter of a million dollars up to a million dollars per single axis. So the range is huge, and there is a continuous tradeoff between size, weight, and power of the sensors and cost, with performance also part of this metric. The battle is how to make devices that are low SWaP-C plus high performance.

What really made this possible is the power of miniaturization. These are the systems used in a conventional Boeing 747; you can find devices of this sort, which are large, (inaudible) a process of assembling individual components, making systems very precise and performing the function of navigation very well. What miniaturization made possible is this extremely small, complete inertial measurement unit with submicron feature sizes.

The basis of this possibility is the adoption of semiconductor-based technologies; silicon was one of the first materials used for miniaturization, along with all the infrastructure and processes by which these devices are made.

Well, silicon is not the only technology. Something I couldn't help but include, as part of what I feel is my contribution, is coming up with three-dimensional sensors. On the left, you see a wineglass device. It's the gold standard of what inertial sensors are. The cost of such a device is $1 million per single axis; it takes three months to produce a single device; and it includes 96 components which are assembled by hand, and literally polished by hand.

It has been my dream since I was an undergraduate student to come up with a way of building these devices, and some 20 or so years later, I can report that we actually figured out how to make the wineglass device using fused quartz at the wafer level. What we used there was an adaptation of the ancient technique of glassblowing, at temperatures over 1,500 to 1,700 degrees, to build these devices in large quantities, reduce cost, and increase performance. So the point is that silicon is not the only material that can be used; fused quartz is a very attractive option.

Typically, in applications, we really can't find a single application where one sensor alone, a single accelerometer or a single gyro, would serve the function. Typically they come as three accelerometers and three gyroscopes, which is pretty much what is needed for any rigid object to find its position and orientation in space.

In some more recent developments, we started integrating magnetometers; this is what is called a 9-degrees-of-freedom system. The devices can be made in a variety of ways. You can build the entire inertial measurement unit on a single substrate. There are tradeoffs. There are advantages, of course: you can make it very small and use it as a patch.

The disadvantage is that you reduce the performance of these devices, and for some applications that will not be an option. What is more common for high-performance devices is to optimize a single-axis device and then assemble them, typically by hand, into an inertial navigation system. What we tried to do under some of our programs is combine very rigid materials, such as silicon or quartz, with soft materials, for example polyimide, and build devices at the wafer level, fully optimizing the performance of a single-axis device, then folding it into a 3D configuration, whether a pyramid or a square, almost like origami, then fixing it, fusing it, and producing a 3D unit. You can find samples of this on display at the Smithsonian Design Museum.

Next, I would like to give some examples of where inertial sensors can be used, and the exciting opportunities and challenges in making devices of this sort. One example, which has been my Saturday afternoon project for a number of years, is the development of a vestibular prosthesis. The balance system is a very intriguing system: it takes as input information from vision, from the vestibular organ, and from hearing, and the brain integrates this information and uses it, for example, for stabilization of images, for posture control, and for spatial awareness.

So there is a need; a large population really needs these devices. The vestibular organ is part of the inner ear, and the sense it provides, the sense of balance, is the very frequently forgotten sixth sense that we have.

It consists of three semicircular canals, the utricle, and the saccule. When we rotate our head, when we accelerate, or when we try to orient ourselves relative to gravity, the cupula deforms and triggers chemical reactions; the chemical reactions trigger impulses, which are sent to the brain, and the brain accumulates all this information.

If something is wrong, either in the inner ear or the cupula, or there is physical damage to the sensor, and as we get older this organ also declines, there is a need for a prosthetic device to restore this function.

Gyroscopes and accelerometers, inertial measurement units, can be used for these purposes. Exact mimicking of the system would be to measure rotation, produce pulses proportional to the angular velocity of rotation, and transmit this information to the vestibular nerve.
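A minimal sketch of that mapping, with every parameter an illustrative assumption rather than a clinical value: a baseline pulse rate encodes zero rotation, and a saturating function keeps the stimulation within a plausible range for both directions of head rotation.

```python
# Minimal sketch (illustrative assumptions throughout): mapping gyroscope-
# measured head angular velocity to a stimulation pulse rate for the
# vestibular nerve. A baseline rate encodes zero rotation so that both
# directions of rotation can be signaled.
import math

BASELINE_PPS = 90.0   # assumed resting pulse rate (pulses per second)
MAX_PPS = 350.0       # assumed ceiling of the electrode/nerve interface
MIN_PPS = 10.0        # assumed floor
GAIN = 0.6            # assumed pps per (deg/s), set during fitting

def pulse_rate(angular_velocity_dps: float) -> float:
    """Map head angular velocity (deg/s) to a saturating pulse rate."""
    # A sigmoid keeps the rate within the allowed range while remaining
    # roughly linear for small rotations.
    span = min(MAX_PPS - BASELINE_PPS, BASELINE_PPS - MIN_PPS)
    return BASELINE_PPS + span * math.tanh(GAIN * angular_velocity_dps / span)

for w in (-200, -20, 0, 20, 200):
    print(f"{w:5d} deg/s -> {pulse_rate(w):6.1f} pps")
```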

So, the challenges. Even though it's a very much needed device, one of the challenges is to keep the number of impulses sent to the brain coordinated with the angle and velocity of rotation. Small devices, the devices that would be candidates to be implanted in the inner ear, drift over time and over temperature, and over a period of use the sensor drifts so much that the vestibular prosthesis will not function as needed.

The technology is needed, but technological challenges still exist, and they require continuous calibration of systems of this sort.

This is a complicated slide; I'm not going to go over it. The horizontal axis is averaging time, the vertical axis is Allan deviation, and it explains the tradeoffs among different noise mechanisms. The point here is that the minimum signal a healthy human vestibular system can detect is 0.5 degrees per second. If our threshold of detection is 5 degrees per second, only 10 times higher, that is a dysfunction: a much reduced ability to stabilize images.
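For reference, the quantity on that slide's vertical axis can be computed directly from recorded gyroscope output; here is a minimal sketch of the non-overlapping Allan deviation on simulated white-noise rate data.

```python
# Minimal sketch: non-overlapping Allan deviation of a gyroscope's rate
# output, the quantity plotted against averaging time on the slide.
import numpy as np

def allan_deviation(rate, m):
    """Allan deviation of `rate` samples at cluster size m."""
    n = len(rate) // m
    means = rate[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    avar = 0.5 * np.mean(np.diff(means) ** 2)          # Allan variance
    return np.sqrt(avar)

fs = 100.0                                                # sample rate, Hz
w = np.random.default_rng(0).normal(0.0, 0.05, 200_000)  # white-noise gyro
for m in (1, 10, 100, 1000):
    tau = m / fs
    # For pure white noise the curve falls as 1/sqrt(tau): the angle
    # random walk region of the plot.
    print(f"tau = {tau:7.2f} s  ADEV = {allan_deviation(w, m):.4f} deg/s")
```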

From the performance point of view, we can reach this level of performance. The challenge is the ability to calibrate the sensor so that the sensor data are actually useful.

Human motion is another good example of what can be done. I will show how this type of capability can be used for navigation. You can place sensors on any given part of the body, but the foot is preferable for precision navigation. When we walk, our foot touches the ground and goes through a zero-velocity event, and this is the concept behind the zero-velocity update algorithm, which combines prediction from a strapdown inertial navigation system with updates from stance phase detection. Remarkably, this type of solution achieves very good performance.
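To illustrate the concept, here is a toy sketch of the zero-velocity update; the thresholds and signals are invented, and a real system would run the correction inside a Kalman filter with a proper stance detector.

```python
# Minimal sketch of the zero-velocity update (ZUPT) idea: integrate
# acceleration to velocity (strapdown prediction), detect the stance phase
# from low angular rate and low acceleration, and reset the velocity there
# to cancel the drift accumulated from sensor bias.
import numpy as np

def zupt_velocity(acc, gyro_mag, dt, acc_thresh=0.3, gyro_thresh=0.5):
    """1-D velocity estimate (m/s) with zero-velocity resets during stance."""
    v, vel = 0.0, []
    for a, g in zip(acc, gyro_mag):
        v += a * dt                                   # strapdown prediction
        if abs(a) < acc_thresh and g < gyro_thresh:   # stance phase detected
            v = 0.0                                   # zero-velocity update
        vel.append(v)
    return np.array(vel)

# Toy gait: 1 s swing (sinusoidal acceleration) then 1 s stance, repeated,
# with a constant 0.01 m/s^2 accelerometer bias emulating sensor drift.
dt, fs = 0.01, 100
swing = np.sin(2 * np.pi * np.arange(fs) / fs)        # integrates to ~zero
acc = np.tile(np.r_[swing, np.zeros(fs)], 10) + 0.01
gyro = np.tile(np.r_[np.ones(fs), np.zeros(fs)], 10)  # rotation in swing only
v = zupt_velocity(acc, gyro, dt)
print(f"drift with ZUPT: {v[-1]:.4f} m/s; without: {acc.sum() * dt:.4f} m/s")
```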

This gizmo, which I call Lab-on-Shoe, is basically an illustration of how complicated it is to build a navigation solution. It's not sufficient just to use inertial sensors; it's much more complicated, and this is what we use and have miniaturized over time, going from Lab-on-Shoe to a sugar-cube platform to the ultimate navigation chip, which integrates multiple modalities.

So in sensing human motion, there are a number of challenges. Different parts of the body experience different accelerations, from 4 g to 6 g to 15 g, up to 40 g on the foot, which makes the development of such sensors very complicated, even though there are attractive algorithmic solutions.

A few more examples. My colleague Farrokh Ayazi sent this to illustrate other uses of these sensors, for example asthma detection: detecting the lung response to predict asthma and monitor the disease. By creating sensors beyond what is available on the market, one can actually do better than a state-of-the-art digital stethoscope.

One can also use sensors for monitoring the heart: by placing them in close proximity to the chest and detecting linear acceleration and angular velocity, one can assess not just the electrical signals coming from the heart but also the mechanical response of the system. Each of these modalities requires very special sensors, specifically designed for the purpose.

Just to summarize, the miniaturization of inertial sensors has been a truly revolutionary development; it's a great story, and inertial sensor performance is beating Moore's Law. There is a need, not yet met, for low SWaP combined with high performance; there is a path forward through application-specific sensors; and sensor fusion and the algorithmic layer are hugely important. Inertial sensors as an analytical tool is something that will come up as sensor performance keeps increasing.

I would like to acknowledge a number of funding agencies for this work, and my group.

Discussants

SVETLANA TATIC-LUCIC: And now I would like to invite all the presenters and discussants. Thank you very much.

Three of our discussants you will recognize from the previous discussion, and we have a newcomer, Professor Satrajit Ghosh from MIT. I would like him to ask the first question.

SATRAJIT GHOSH: Fantastic set of talks. It introduces us to a whole world of sensors. So I heard molecular to start with, going through physical, neurophysiological, neurochemical, and then inertial, across the talks. One of the questions I had, and any one of you can pick this up, this is kind of a general question: all of us would like all of these things, right? Simultaneously. What do you think are the biggest challenges and where do you think are the biggest opportunities in integrating some of these sensors so that we can get more multimodal things happening simultaneously?

REZA GHODSSI: Very good question, and I want to emphasize what I hopefully conveyed during my presentation. There are so many challenges with each module that one needs to improve and optimize for the data to become validated by the clinicians and neuroscientists who need to use and analyze it, that it is absolutely important to gain that confidence at the modular level, with one sensor, before moving on to multi-sensors. That doesn't mean we can't do multi-sensors; I think we should, and I think Rogers was right that it's the key to a lot of the information we need to gain. But what I'm trying to emphasize is that, for instance in the case of the gut, as my colleague Bao from Stanford also showed, you really need to gain confidence that your modular sensor is working both in vitro and in vivo, and then the limitations could be anything from miniaturization to power to transmission to signal-to-noise ratio, selectivity, and sensitivity. All these issues are there, and they vary from one sensor to another. So I don't know if I gave you a concrete response, but I just listed some of the challenges.

ZHENAN BAO: I completely agree that individual sensors first need to be reliable, have very low drift, and be reproducible. Then, depending on the application, we do both the sensor development and looking at which applications are most suited to our sensors, and we find the sensors really have to be designed based on the intended application, because once we know the application, the design, the form factor, and the sensor signal and output all need to be adjusted to meet the requirements for integration.

And finally, even though you see so many wearables that look simple, John's for example, it is actually really challenging from an engineering point of view to integrate everything, especially to give them wireless communication and data processing all integrated together. So I would say there are a lot of challenges in design specific to the application as well as in the integration.

SVETLANA TATIC-LUCIC: Thank you very much. The next question should be asked by Dr. Inan from Georgia Tech.

OMER INAN: Thank you. I agree we saw so many excellent talks, again. One of my questions concerns the translational spectrum, especially for the sort of topic we're talking about today, which is really brain and behavior sensing, or maybe behavioral sensing of brain responses. How close are we to seeing more of this translate? There's been a lot of exciting stuff happening in wearables and in sensors overall over the past maybe 40 or 50 years, maybe more; I thought the inertial measurement talk was excellent in highlighting that.

Some of these have translated to clinical use. Many have not, and even for physiology studies, people are using the same kind of ECG or accelerometry sensing, if that. What's the big gap, I guess? Is it a technology gap? This is of course one of the reasons we have this workshop in the first place, but what is the big gap that needs to be addressed to see more of these beautiful sensors with incredible results translating into the commercial domain and ultimately into the clinical domain?

ANDREI SHKEL: Let me take a stab at this question, staying for a second in the inertial sensor domain. To develop a good inertial sensor takes $10 million per year for 10 years; that is how long it takes to develop a single sensor. What is happening is that suddenly there are low-cost wearable sensors on the market, and the community starts collecting data with whatever they have access to.

I think it will take a while until the community reaches a level of maturity to understand that the data they're getting may not be of high quality, and starts formulating what high-quality data really is: what the bandwidth requirement is, what the drift requirements are, what the shock requirements are, what the dynamic range and full-scale range are, and so on. Then we will be able to formulate the application-specific sensors that need to be developed.

But that doesn't mean we need to wait for sensors that can do all this. I think the approach that works perfectly well is to keep increasing the performance of the sensors and, in parallel, use the $5 or $10 sensors on the market and look at how to interface with them, how to extract data, maybe exercise learning algorithms, because everything tastes better with a little sprinkle of machine learning. So basically it's good for the community to keep doing this, while keeping in mind, at some point, to take a very serious look at whether each sensor is exactly what you need for the application.

ZHENAN BAO: I think there is a funding gap. Many sensors, I think, are ready, but there needs to be a big market for investors to want to put in money to commercialize them, and for research usage there just isn't enough of a market for funding to support the commercialization, even for blood pressure monitoring. We have a spinoff company for continuous blood pressure monitoring that has already gotten through the FDA, but still, funding for medical devices is at a much lower level and valuations are low. So commercialization is difficult.

REZA GHODSSI: I fully agree with you. One point I also want to emphasize, which you alluded to, Andrei: the whole demographic of the sensors and MEMS device community has changed compared to the early days. We are no longer focusing on developing one singular module; we are now emphasizing hybrid fabrication and hybrid integration, and that is becoming the norm. That is why it is so important for us to work with neuroscientists, with clinicians, with folks who can actually guide us. Some of the work I showed today involves the simplest sensors I've ever developed in my career, but the measurements, the characterization, the actual testing of the sensors are really challenging, and this is why we need the neuroscientists and clinicians to guide us.

And of course, we need the presence of the FDA, and communications with the FDA, in terms of moving toward translational work and something that is useful for society.

SVETLANA TATIC-LUCIC: Professor Besio?

WALTER BESIO: Excellent talks. What I'm curious about is the timescale that you believe is necessary for your devices to be used for either diagnosis or monitoring and what it's going to take to get them to that point.

ANDREI SHKEL: The answer will probably depend on the type of application. For some applications, for example monitoring of human gait or adjustment of prosthetic devices, I think we are ready; we are ready to deploy for vestibular prosthesis devices. For other uses there is a longer horizon, for example using sensors to measure the response to acoustic waves with accelerometers on the skull instead of through microphones. That is already happening in the Ray-Ban Stories glasses. So things are happening, and depending on the technology, they will probably be happening in parallel.

REZA GHODSSI: Also, the choice of material is really important for sensing, and we need the materials scientists and chemists to work with us, because we know how to develop the devices, but the obvious starting point is the starting material and how that material gets integrated, whether we do it in a hybrid fashion or some other format. So I just want to emphasize that the role of materials is really important.

ZHENAN BAO: I would divide the sensors into three categories. I think the physical sensors are the most near-term, the electrophysiological sensors are midterm, and finally the neurochemical sensors, especially the implantable ones, are the longest term, because the material compatibility needs to be evaluated over a long time. But in animal models, they are already being studied.

SVETLANA TATIC-LUCIC: Professor Roberts.

JOHN ROBERTS: Thank you. In Dr. Shkel's talk, I was struck by the different grades of inertial sensors, and I think engineers are often driven by a roadmap of performance specs. As you are all experts working in the area, do you feel there could be stronger end-user-driven roadmaps and specifications that engineers could be targeting? If the users lay out what they want, clinical grade, et cetera, would that help drive the research in a direction that would be useful?

ANDREI SHKEL: This is an excellent question, and I think whatever we develop will need to be driven by the need, by the application. The military, for example, had its own roadmap for this type of sensor. Consumer electronics had its own roadmap. What is good enough for a cellphone to track orientation or to play games is not good enough for virtual reality, for mapping, for immersion in a virtual rendering of the physical world.

So we are starting to see roadmaps coming from applications that are driving the developments, and I think this is exactly what needs to be done in this community. It needs to start with a roadmap of the performance needed to solve certain problems, and the problems could be very different; I highlighted some of them.

And then this will be the basis for the community to solve these problems. That, as I see it, is the most effective way to do this.

SVETLANA TATIC-LUCIC: Thank you. Now we are going to take one or two online questions that have arrived.

STAFF: So the first question is could the panel discuss the major challenge of sensor durability and fouling observed when used in vivo?

SVETLANA TATIC-LUCIC: Can you repeat it? A panelist did not hear your question.

STAFF: Sure. Could the panelists discuss the major challenge of sensor durability and fouling observed when used in vivo?

ZHENAN BAO: In the case of neurochemical sensors, we did a similar thing to what you described: we coated the sensors with a Nafion coating on the surface. The longest we have monitored was 16 weeks of dopamine sensing in vivo in the mouse brain. We were able to measure a similar level of concentration using optogenetic stimulation to control the amount of dopamine generated, but we haven't yet done longer-term implantation measurements.

SVETLANA TATIC-LUCIC: We have time maybe for one more online question.

STAFF: This question is for Dr. Bao. Is it possible to get a microphone from soft bioelectronics? Is that what the distributed pressure sensors are essentially picking up on when measuring gut motility?

ZHENAN BAO: Microphone. Well, I think it is possible if one makes suspended structures. We haven't looked into it.

SVETLANA TATIC-LUCIC: Okay, and now we are going to go to questions from the floor. So please introduce yourself and ask your question.

ROOZBEH JAFARI: Thank you so much. My name is Roozbeh Jafari, from Texas A&M. We all do sensor development with the intention of translating these technologies; at the end of the day, they have to impact lives. You are all at the forefront of this and understand the TRL levels. Building on the question that Omer asked earlier, and considering how NIH has been supportive of our work and moves forward with this mentality of investigator-driven, peer-reviewed research: what can NIH do to close the gap before we get to industry?

Industry is going to spend on it when there is a clear market. But what we need to do, in my opinion, is somehow get our sensors into the hands of health science researchers and physician scientists. They can smell them, touch them, play with them, build their own hypotheses. What can NIH do to support this?

REZA GHODSSI: I can maybe start with that. I feel very passionate about this, and you ask exactly the right question. In fact, I brought it up with one of the organizers earlier. I wish this were not the only meeting we have at NIH on this topic. I hope it will continue, with some continuity in these discussions. I believe last time the focus was on the neuroscience part of this; this time it is on sensors. We need to create a community where we can interact more, and this is the start of that. So I'm very inspired.

But then we also need to get those who do the translation aspect of this, including the FDA, involved and part of this discussion. So in a nutshell, what I'm trying to say is that we need to continue this movement, and hopefully some of the questions asked today, and the points we're going to keep discussing throughout this meeting, will become new calls for us investigators to address and be inspired by. But it really needs to have a level of continuity.

ANDREI SHKEL: Can I build on this? In my opinion, money cannot solve problems; people solve problems. If the community is not conditioned to solve the problem, if the community is not on the same page about what needs to be done, no money can solve the problem. So meetings like this, anything that builds the community of people, are super critical. When the community is ready, money will be very well used; otherwise it will likely get dissolved with minimal impact. This combination of building the community and identifying the moments when the community is ready to take the money, I think, is the critical part.

ZHENAN BAO: I think the need is really clear: integrated multi-sensing systems that can even work on humans. Support to get these types of systems built, and to get them into the hands of clinicians, is a critical need.

REZA GHODSSI: I want to end by emphasizing something that I'm experiencing in my group. These types of problems also inspire diverse groups of people to work on them. My projects are now more attractive to female students, because they see the cause, and to those who are actually facing these problems. So I think diversity is also one of the advantages that comes as a result of this; we'll attract more different groups of people working together. I just wanted to touch on that.

SVETLANA TATIC-LUCIC: We have run out of time, I'm sorry. That concludes our Session 1. Thank you very much to our wonderful presenters and discussants.

AFTERNOON SESSION

Introductions

DANA GREENE-SCHLOESSER: I want to welcome everybody back from lunch, and welcome you to session two, which is multi-sensor integration for tracking movement, considerations for comparative and developmental studies.

Our moderator is Yuan Luo, and I just want to say a couple of things about her. She is a Program Director in the Clinical Interventions and Diagnostics Branch in the Division of Neuroscience at the National Institute on Aging, where she oversees the division’s technology portfolio, such as using technology for early detection, monitoring, and interventions for the aging brain, mild cognitive impairment, Alzheimer’s disease, and other dementias.

Dr. Luo also oversees some of the branch’s portfolios, programs, and initiatives on plasma biomarkers and digital technologies. And with that, I’ll welcome Yuan to the table or podium.

Session II: Multi-Sensor Integration for Tracking Movement; Considerations for Comparative and Development Studies

YUAN LUO: Thank you. Good afternoon. We heard so many great talks this morning about individual sensors, for use in babies in the ICU and also to detect neurotransmitters for measuring pain and stress levels. Such great talks. And the last question for discussion was about how to integrate these sensors. That is the perfect question to lead into session two.

Session two will cover the integration of multi-sensor data to track behaviors such as movement and cognition across the lifespan. We’ll hear about use in infants at the developmental stage, and also about real-life use in the aging population for neurodegenerative diseases.

We have five speakers, five exciting talks. Our first is Dr. Beth Smith from Children’s Hospital Los Angeles. The second is Dr. Ulkuhan Guler from Worcester Polytechnic Institute, followed by Dr. Ashkan Vaziri, founder of an academic spinoff company called BioSensics. Then Dr. Thurmon Lockhart from Arizona State University, and finally Jeff Kaye, a neurologist who will talk about how we use these multi-sensors in the aging population.

BETH SMITH: Thank you. I am going to briefly talk about what we do and why we do it in my lab. I am mostly going to talk about how we do it, and focus on the challenges for using sensors to analyze infant motor behavior across days in the natural environment.

So, I want to start by thanking the participants, partners, lab members, funding sources, and I also want to thank Jerry Lowe at USC for his comments on this talk.

So just to start with a brief bit of conceptual framework, this is the brain-behavior link, which is part of the reason why we are here today. We know that movement experience influences plasticity of the developing nervous system. We also know that as the nervous system develops and changes, the movements that babies make as outputs change.

What we don’t know with regard to movement is what, how much, and when. By this I mean that our research questions in my lab are guided by looking at movement as an input to the developing central nervous system, so what, how much, and when practice is necessary to learn to sit, to reach, to crawl. We also look at movement as an output of the developing central nervous system, and the questions there are focused on what type of movement, how much, and when is representative for early identification of atypical development.

So, just as a brief example I want to make two points about how sensors can be useful in this regard. So the first as you see on the screen, this is an example from the Alberta Infant Motor Scale. This is a commonly used clinical assessment from birth to walking onset. You assess the infant in different positions, they basically get a point for each of the items they’re able to do, you can get a total score of 58.

So some of the limitations. This is a subjective scale. It is a snapshot of the infant at one point in time, and we are not capturing their full repertoire of movement. And it is an ordinal scale.

The other point that I want to make is that typical development is highly variable in course and rate. So a score of 15 on the Alberta Infant Motor Scale, five percent of infants can get a score of 15 at three months of age, and 90 percent of infants can get a score of 15 at five and a half months of age. That’s a really large range, and makes it difficult to identify atypical. So sensors of course can provide us with objective data, quantitative data, and we can capture and measure data across days and weeks. And this way we could actually finally measure variability and define what is the state space of typical development.

I’m going to give you an example now. You can see on the screen the examples of full-day movement data. You see on the left an example from a 12-month-old child with typical development. On the right, a 5-month-old child with developmental delays. And each figure contains on the X axis hours of data, so these are 12 to 14 hours of data.

And the data that you see are each child wore a wearable sensor, one on each ankle, across the full day. So we are collecting triaxial acceleration and gyroscope data at 20 samples per second, and the sensors are actively synchronized to one another during the recording period.

So, just looking at this, you can see the resultant acceleration for the right leg in red and the left leg in blue. Just visually, it is pretty clear that the child with developmental delays is making a lot less movement than the child with typical development. But how do we actually quantify this?

So the way that we approach this is with a threshold-based algorithm. We have video as our gold standard, and we calculate the resultant of the triaxial acceleration and gyroscope signals and then detrend the data. In this figure, we have our acceleration signal in blue and our angular velocity signal in black; both of those, again, are the resultants. The black dotted lines are the acceleration thresholds, one above and one below zero. What you don’t see is the angular velocity threshold, but there is one there.

This is five seconds of data. And as you go across you can see there is a pink, elevated stick there that indicates when a leg movement was counted. So to sort of summarize the algorithm, the acceleration signal has to cross the upper threshold, the lower threshold, and cross the baseline twice, and there has to be angular velocity present. And if all of those conditions are met then we have identified that a leg movement happens. So each time the leg pauses or changes direction, then it is counted as a new movement.
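
To make the counting logic concrete, here is a minimal Python sketch of a detector that follows the description above. The threshold values, the noise floor, and the burst segmentation are illustrative assumptions, not the validated parameters from the lab's algorithm.

```python
import numpy as np

def contiguous_regions(mask):
    """(start, stop) index pairs for runs of True in a boolean mask."""
    edges = np.flatnonzero(np.diff(np.r_[False, mask, False].astype(int)))
    return list(zip(edges[::2], edges[1::2]))

def count_leg_movements(acc_xyz, gyro_xyz,
                        acc_thresh=1.0, gyro_thresh=0.5, noise_floor=0.1):
    """Count leg movements for one limb from (n_samples, 3) arrays.

    A burst of activity counts as one movement when the detrended
    acceleration resultant crosses the upper threshold, the lower
    threshold, and the baseline at least twice, with angular velocity
    present. All thresholds here are placeholders.
    """
    acc = np.linalg.norm(acc_xyz, axis=1)        # acceleration resultant
    acc = acc - acc.mean()                       # detrend around zero
    gyro = np.linalg.norm(gyro_xyz, axis=1)      # angular velocity resultant

    count = 0
    # Each contiguous burst above the noise floor is a candidate movement;
    # a pause or return to rest ends the candidate.
    for start, stop in contiguous_regions(np.abs(acc) > noise_floor):
        seg_acc, seg_gyro = acc[start:stop], gyro[start:stop]
        crossings = np.count_nonzero(np.diff(np.signbit(seg_acc).astype(int)))
        if (seg_acc.max() > acc_thresh              # crossed upper threshold
                and seg_acc.min() < -acc_thresh    # crossed lower threshold
                and crossings >= 2                 # crossed baseline twice
                and seg_gyro.max() > gyro_thresh): # angular velocity present
            count += 1
    return count
```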

One of the challenges to this, of course, for infants, is externally applied movement. What you see in the picture is a four-month-old infant wearing sensors, one on each ankle, and on her right is a doll that is also wearing sensors on each ankle. We collected data for a full day. It was a good friend of mine, and for a full day, everything she did with her baby, I did with the doll.

So the doll was moving around in similar ways to the infant. The infant moved each leg around 7,000 times during the day, and the doll "moved" her legs around 1,000 times, which of course is not possible; the doll was not moving her own legs. So our estimate was that about 15 percent of what we count as infant-generated leg movements actually comes from background motion.

One mitigating factor is that, because we are using both the acceleration and the gyroscope signal, the gyroscope signal does essentially help filter out some of this noise: things like riding in a car or stroller, which have acceleration components only, get filtered out if the baby is not moving.

One complication is mechanical swings. They are often below our thresholds, but not always; it depends on the power of the swing and the brand. Here you can see an example: this is acceleration, with the right and left legs as the yellow and blue signals, and the gyroscope resultants as the red and purple signals. In this example, a mechanical swing was going above our thresholds and being counted as leg movements of the baby.

One way that we could solve this problem would of course be to identify the unique patterns of movement and remove them from the signal. But that is a lot of work: you would need multiple infant-caregiver pairs, not everybody moves their baby in the same way, and there are different swings and different devices. The way that we have operationally solved it is by removing periods of data where there are more than 40 consecutive leg movements with less than 0.2 seconds between them. When we see highly synchronized patterns like this in the data, we know those are not human, infant-produced movements.
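
Operationally, that rejection rule is simple to state in code. A minimal sketch, with the run length (40) and the gap (0.2 seconds) taken from the talk and everything else assumed:

```python
import numpy as np

def drop_swing_artifacts(movement_times, max_run=40, min_gap=0.2):
    """Drop runs of suspiciously regular movements (e.g., mechanical swings).

    movement_times: sorted 1-D array of movement onset times in seconds.
    Runs of more than `max_run` consecutive movements, each separated by
    less than `min_gap` seconds, are treated as device-generated and removed.
    """
    t = np.asarray(movement_times, dtype=float)
    keep = np.ones(len(t), dtype=bool)
    run_start = 0
    for i in range(1, len(t) + 1):
        # A run ends at the last sample or when a gap is long enough.
        if i == len(t) or t[i] - t[i - 1] >= min_gap:
            if i - run_start > max_run:   # too many, too regular: not infant
                keep[run_start:i] = False
            run_start = i
    return t[keep]
```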

So once we have this definition we essentially have the start and the stop of the arm movement or leg movement. In this example you see an infant wearing a sensor on each wrist. Once we identify the start and stop we can then calculate the duration, the average acceleration, the peak acceleration, and then the type of movement.

The type of movement being is it unilateral, meaning just one limb is moving, or is it bilateral, meaning both limbs are moving. And that is again because the sensors we were using were actively synchronized to one another, which I’ll come back to in a moment.
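
The unilateral/bilateral label itself reduces to an interval-overlap test between the two limbs' movement intervals, which only makes sense when the two sensors share a time base. A hypothetical sketch:

```python
def classify_laterality(this_limb, other_limb):
    """Label each movement of one limb as unilateral or bilateral.

    this_limb, other_limb: lists of (start, stop) times in seconds from
    two actively synchronized sensors. With unsynchronized sensors this
    overlap test is invalid (see the synchronization caveat below).
    """
    labels = []
    for s, e in this_limb:
        # Bilateral if the movement overlaps any movement of the other limb.
        overlaps = any(os < e and s < oe for os, oe in other_limb)
        labels.append("bilateral" if overlaps else "unilateral")
    return labels
```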

So, briefly, I’m not getting into the results today, but one of the results we found was that full-day sensor data were able to distinguish typical from atypical development, while five minutes of data were not. This speaks a little bit to the signal-to-noise ratio in the data. And once you identify the start and stop of the movement there are of course many metrics that are possible. We can calculate sample entropy and complexity to start analyzing patterns of movement.

So, now into the challenges. Ideally, since we are using the resultant, in my mind the analysis should be sensor agnostic. There are a lot of sensors that can collect triaxial accelerometer and gyroscope data, so other than having you adjust for sample rate, we should be able to use any of them. But there are a number of things to consider. So gain, offset, drift, noise, resolution, sampling rate, synchronization, and then of course even the weight and shape of the sensor.

So I am going to focus briefly on synchronization and also on offsets. This example is two synchronized sensors. We collected ten hours of data at a sampling rate of 20 samples per second, and these data were used to validate our algorithm. Then we chose different sensors that are not synchronized, and collected 72 hours of data at a sampling rate of 25 samples per second.

Both of them have similar ranges for their acceleration and gyroscope signals. So we downsample 25 to 20 to match, calculate the resultant, detrend, and we should have the same signal. It turns out we don't, exactly. One of the problems is synchronization.
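
The preprocessing she describes, sketched under the assumption that the 25 Hz stream can be resampled with a standard polyphase filter (the exact resampling method used in the lab is not stated):

```python
import numpy as np
from scipy.signal import resample_poly, detrend

def to_common_rate_resultant(acc_xyz_25hz):
    """Downsample a 25 Hz triaxial signal to 20 Hz (ratio 4/5), then
    compute and detrend the resultant, so it can be compared sample by
    sample against the 20 Hz reference sensor."""
    acc_20 = resample_poly(acc_xyz_25hz, up=4, down=5, axis=0)  # 25 -> 20 Hz
    resultant = np.linalg.norm(acc_20, axis=1)
    return detrend(resultant)
```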

Unsynchronized sensors are off from each other in time by about one to two seconds after 72 hours of data collection. This did not surprise me; I anticipated it. The consequence is that we need to drop the type-of-movement classification: we can no longer talk about what the right arm is doing when the left arm is moving, because one to two seconds is critical for that relationship.

I can also tell you that I have been asked to review papers where people are unaware of this problem. They are collecting sensor data across many, many days and, because they have timestamps on the two sensors, thinking that the sensors are aligned and they can do these types of calculations, when in fact those are unvalidated raw data.

So the other problem we ran into is that the offset differs by sensor and by axis, for the sensors that we used for the 72 hours. So in the top you can see a short period where the X axis is aligned with gravity, then the Y axis, and then the Z axis, and we get slightly different values for those sensors.

This becomes a problem when we then calculate the resultant. You can see in the bottom, on the right, the angular velocity resultant is in blue and the acceleration resultant is in red. This is a particularly bad example, but you can see that the baseline is drifting up and down.

And since we count a movement when the resultant goes above and below threshold, our movement counting is impacted by this difference. We can adjust for this by calibrating each sensor, having a calibration file specific to each sensor, but again this was something that I did not anticipate.
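
A per-sensor calibration file can be built from exactly the static poses she shows: each axis held against gravity in both directions. A simplified two-point gain/offset sketch (the lab's actual calibration procedure may differ):

```python
import numpy as np

def two_point_calibration(plus_g_mean, minus_g_mean):
    """Gain and offset for one axis from two static poses.

    plus_g_mean / minus_g_mean: mean raw output (in g) with the axis
    pointing up, then down; ideal values are +1 and -1.
    """
    gain = (plus_g_mean - minus_g_mean) / 2.0
    offset = (plus_g_mean + minus_g_mean) / 2.0
    return gain, offset

def apply_calibration(raw_axis, gain, offset):
    """Correct one axis of raw data with its per-sensor calibration."""
    return (np.asarray(raw_axis) - offset) / gain
```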

So just to briefly touch on it, there are a number of infant-specific challenges for sensors, such as validation of metrics: we need to validate for infants versus ambulatory children and adults, because babies move very differently.

I’ve also reviewed a number of papers where people are interested in say activity count as a measure of intensity of physical activity. And so they say well there are no cut points for babies, so we’ve applied the cut points for toddlers. And again, that is unvalidated, and it is not workable.

The other thing to consider, and this has to do with monitored versus unmonitored data collection: if you’re in a hospital setting where the baby is being monitored, you can have very small sensors. However, we’re putting sensors on infants all day in the natural environment, so we need to have things that are larger than a choking hazard. We have artificially enlarged some of our sensors with a casing to make them large enough. These are just general considerations, and you’ll hear more about them.

Sensor data are generally easy to collect but difficult to analyze. Caregiver burden, if people have to charge devices, place them correctly, start or stop them, they need a logging device nearby to communicate with the sensors, Wi-Fi access, these are all challenges and burdens for collecting data.

There is a difference in privacy if the data are stored on the sensor versus passed through a cloud; battery life is an issue; and then there are different data formats. Every sensor has a different data format, so this is a problem for coding: your code has to be adjusted to find where the accelerometer data are in each file.

So some of the next steps: create norms focusing on diversity, equity, and inclusion, which I’ll come back to briefly in a moment; decide what the useful metrics are; optimize the analyses; be able to predict different diagnoses and outcomes for children; and address accuracy, sensitivity to change, and affordable, user-friendly systems implementation.

And then just because again the focus of the meeting integrating multiple data sources, so you can see the infant with the EEG cap, that’s another tool that we use in my lab that I’m not talking about today.

And just to finish, I want to briefly mention the Healthy Brain and Child Development Study. We at CHLA are one of the sites involved. We will be collecting data from a truly representative US sample, 7,500 infant-mother dyads across the US.

And the research question is how is development affected by exposure to substances and other environmental, social, and biological factors during pregnancy and after birth. So this study is actually, wearable sensors are one of the measures, and we will be able to create a true normative database for these data. So thank you, and I’m going to pass it along to the next speaker.

ULKUHAN GULER: Good afternoon. It is a great pleasure to be here today. I will talk about emerging non-invasive blood gas wearables, and I’ll try to make a showcase of quantifying the adverse effect of hypoxia on cognitive development in early childhood.

To be able to do this, I will start with a discussion from the medical literature: a study published in Pediatrics in 2004 by a group of researchers from various universities and hospitals, including Harvard Medical School, Tokyo Women’s Medical University, and so on.

The objectives of this study were first to analyze the impact of chronic and intermittent hypoxia on cognitive outcomes during childhood, and to assess the significance of various factors, such as intensity and age of exposure to hypoxia.

Almost 800 articles published before 2004 were screened, direct and indirect evidence criteria were developed, and 55 articles met the criteria for direct evidence while 19 articles met the criteria for indirect evidence. 78 percent of the articles that met the direct evidence criteria reported an adverse effect, and 84 percent of controlled studies reported an adverse effect.

Moreover, the studies were classified under five clinical categories; congenital heart disease and sleep-disordered breathing fulfilled the evidence-based pediatrics and child health criteria.

In these two tables from the study I wanted to emphasize the P values that show the statistical significance; this is the table for CHD, and the next table is for SDB. For all those reasons, hypoxic infants are put on an oxygen supply. However, since excessive oxygen causes tissue injury, and premature infants are particularly sensitive to the toxic effects of oxygen, strategies should be developed to minimize tissue injuries.

I think a miniaturized blood gas wearable, which can continuously monitor blood oxygen, can probably help to precisely regulate oxygen levels in infants. Since oxygenation is such a critical parameter for infants, we should take a close look at the oxygenation parameters.

The gold standard for measuring blood gases is arterial blood gas analysis. It measures the arterial saturation of oxygen, SaO2, and also the arterial partial pressure of oxygen, PaO2. The transcutaneous partial pressure of oxygen, PtcO2, is a direct surrogate measure for PaO2, and peripheral oxygen saturation, SpO2, is a direct surrogate measure for SaO2.

Because PPG sensors became so easy to use, SpO2 also became an indirect surrogate measurement for PaO2. The pictures on the right are the clinical instruments for measuring these parameters separately. Miniaturized wearable SpO2 sensors exist, but no wearable transcutaneous oxygen sensor is available yet.

The translation between SpO2 and PaO2 is done through this dissociation curve. SpO2 being an indirect surrogate measure for PaO2 is problematic for several very important medical reasons.

First of all, these are different parameters. SaO2 gives information on how much oxygen is loaded on hemoglobin, the fraction of loaded hemoglobin to total hemoglobin, while PaO2 gives information on the dissolved oxygen molecules in plasma, which are ready to be used in the tissues and organs. Second, this is not a linear curve. At high oxygen levels SpO2 saturates at 100 percent, around 80 to 90 mmHg, and beyond that point it will not reflect any changes in PaO2.

Third, there is a risk of hyperoxia, too much oxygen, particularly for infants, which is impossible to detect reliably with a PPG sensor. The other risk is in accurately identifying the hypoxia cutoffs, because several factors such as blood pH, hemoglobin type, and temperature shift this dissociation curve right and left.

An important thing to note is the replacement of fetal hemoglobin with adult hemoglobin in early childhood, which also shifts this curve significantly. On top of this, SpO2 has some limitations, such as discrepancies that come with skin color differences and hemoglobin count.

In short, PPG sensors can be small, noninvasive, and wireless, and we can extract oxygen saturation information nicely and easily, but that is not enough to assess oxygenation.

In this last part of my presentation, I will showcase a few selected emerging blood gas monitors. A transcutaneous oxygen sensor using a luminescent sensing film can provide accurate blood gas information in a noninvasive fashion, complementary to pulse oximeters. The luminescent sensing film consists of functional groups whose fluorescence is suppressed in the presence of oxygen, and the sensor measures oxygen diffusing from the capillaries through the skin.

Here is a quick rundown on the basics of luminescent oxygen sensing. I am not going to go into the details too much, but when a luminescent material is exposed to light at a higher-energy wavelength, it can emit a photon at a lower-energy wavelength.

This is called the Stokes shift, and the dynamics of this quenching process are described by the Stern-Volmer equation, which shows that the intensity and decay time of the emission decrease as the oxygen concentration increases. In our recent prototype we use a lifetime-based luminescent sensing technique, as it is more immune to optical path changes; time-based measurements are less vulnerable to motion artifacts and skin color differences.
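
For reference, the standard form of the Stern-Volmer relation for collisional quenching (a textbook identity, not an equation reproduced from the slides) is:

```latex
\frac{I_0}{I} \;=\; \frac{\tau_0}{\tau} \;=\; 1 + K_{SV}\, p\mathrm{O}_2
```

where I_0 and tau_0 are the emission intensity and lifetime in the absence of oxygen, I and tau are their values at oxygen partial pressure pO2, and K_SV is the quenching constant. Measuring the lifetime tau rather than the intensity I yields pO2 without depending on the absolute optical intensity, which is the robustness she refers to.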

We cooperate closely with Analog Devices on this project, and they offered us the specialized ADPD4000-series component for luminescent sensing. Our aim was to perform human tests with these time-based measurements.

We conducted several human tests with this prototype. In the first two experiments we used the occlusion technique: the fingertip was placed on the film located underneath the sensor, and then there is a short period of time to allow the film to come into equilibrium with the partial pressure of oxygen in the tissue. In the first example we applied occlusion with a ribbon on the other hand, and the oxygen obviously changes during the occlusion period.

In another test we placed the prototype on the forearm, and we applied occlusion with a pressure cuff, which was inflated to 180 mmHg to induce arterial occlusion. After two minutes of occlusion the pressure was released, and PtcO2 was measured in the rest phase as blood flow was restored.

Besides this, we employed an altitude generator to create a hypoxic environment through a face mask, as at high altitudes. We gradually reduced the oxygen in the air breathed by the subjects and then measured PtcO2 values from the fingertip.

The benchmark table compares our oxygen prototype with the available ones in the literature. There aren’t many players on the field, actually. The first two are our early prototypes. The third one is an invasive oxygen monitor for deep tissue monitoring. The fourth one is a PPG sensor for SpO2 measurements; since it presented some device parameters, I wanted to include it for comparison. The fifth one is only a sensor; no prototype was presented. And finally, with our latest prototype, we presented human subject tests under various scenarios, using the lifetime-based method.

I also want to briefly introduce some details about the second reference, which is our prototype. In this prototype we actually developed an algorithm to calculate the decay time of the luminescence; in the algorithm, only three data points are enough to extract the lifetime.

We don’t need to extract the whole curve, so we designed a specialized custom integrated circuit to significantly reduce the number of data points required to calculate one oxygen value. With this prototype the transmitted data size is reduced significantly, which saves power and area as well.
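
To illustrate why three samples can be enough: for an exponential decay with unknown amplitude and offset sampled at a fixed interval, the offset cancels in the differences and the lifetime follows in closed form. This is the generic three-point algebra, not necessarily the exact on-chip algorithm:

```python
import math

def lifetime_from_three_points(y1, y2, y3, dt):
    """Decay time of y(t) = A * exp(-t / tau) + C from three samples
    taken at spacing dt. The offset C cancels in the differences, and
    (y1 - y2) / (y2 - y3) = exp(dt / tau), so tau follows directly.
    Requires a noise-free decaying triple (y1 > y2 > y3).

    Example: lifetime_from_three_points(1.0, 0.6, 0.36, dt=1.0) ~= 1.96
    """
    return dt / math.log((y1 - y2) / (y2 - y3))
```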

This is the micrograph of the chip, and the evaluation board for the test. The measurements proved the operation of the algorithm: from three extracted data points, the lifetime of the luminescence is calculated onboard, and we conducted a gas chamber test to quantify the partial pressure of oxygen versus the lifetime measurements. We also tested the linearity of the outputs. With that I am concluding our transcutaneous oxygen work, and I will introduce some of our efforts on transcutaneous carbon dioxide sensing, which is also an optical approach.

And yes, we are measuring light for this sensor as well, but the sensor material and its properties are totally different. Measuring only the intensity is vulnerable to many confounding factors, such as the light source and reflections, which change the optical paths.

In oxygen sensing we use the lifetime technique for robust measurements, but it is very challenging to use the same technique for CO2, as the lifetime of the CO2-sensitive fluorescence is very fast, on the order of low nanoseconds. So we came up with a ratiometric measurement, called dual lifetime referencing, to measure transcutaneous CO2 accurately.

This is the miniaturized prototype that implements this technique. The measurements prove how efficiently this dual lifetime referencing technique reduces the measurement error compared to intensity measurements. We have performed the first human subject tests with this DLR-based prototype, testing it on the forearm and fingertip.

It should be noted that the CO2 range in healthy humans is very narrow compared to the oxygen range. Moreover, modulating CO2 is harder than modulating O2 in the body. However, despite these difficulties we could observe slight changes in CO2 from fingertip measurements.

We will need to improve the resolution of the system to obtain better results. But in summary, this is the first test on humans, and it is very promising. Finally, I would like to thank my former and current PhD students, and our sponsors and collaborators, for their contributions and support, and I would like to hand it over to Ashkan Vaziri from BioSensics.

ASHKAN VAZIRI: Good afternoon, everyone. I am Ashkan Vaziri, Founder and CEO of BioSensics. As a quick introduction, BioSensics is a biomedical firm located in the Boston area, focused on developing and commercializing wearable sensors and digital health technologies for different healthcare applications. The company was founded back in 2007 by three scientists from Harvard. In 2019 Best Buy acquired part of our assets. In 2020 we were one of the winners of the Tibbetts Award from the US Small Business Administration.

Since our founding, our team has worked on the development and commercialization of wearable sensors for healthcare applications. This includes our fall detection technology, which revolutionized the medical alert industry. It is used by hundreds of thousands of active users and has saved many lives, something that our team is obviously very proud of.

It is also used in eight different drug-related clinical trials to measure falls as one of the endpoints. We also developed the first FDA-cleared wearable devices for gait and balance assessment. These are widely used in clinical studies and trials, including more than 15 trials where they are used to collect the primary endpoints.

Our PAMSys sensor technology is the most advanced physical activity monitoring solution that is used for precision actigraphy. It measures more than 40 independent parameters of physical activity. It also enables monitoring of upper limb function and motor symptoms.

And our PAMSys plus is a multimodal wearable technology that enables detection and monitoring of bouts of talking, providing very promising applications in various disease areas, including, for example, monitoring mental health as well as neurological disorders.

Speech biomarker assessments are continuously measured during activities of daily living. Another application is continuous monitoring of coughing frequency and amplitude during activities of daily living.

Our company provides technologies for digital measurement of motor, speech, and cognitive function. In addition to the wearable sensors that I introduced to you, this includes our digital assessments for monitoring cognition, handwriting skills, life space, voice biomarkers, and other endpoints, and video-based assessments to quantify, for example, facial characteristics such as ptosis, or to look at hand and finger movement.

In addition to providing our products and technologies to clinical researchers as well as pharmaceutical companies for use in their clinical trials, we also perform our own clinical studies with the objective of developing digital measurement tools and digital biomarkers for tracking disease progression in our priority therapeutic areas.

At this point we have eight different priority therapeutic areas, which are shown here. Our efforts are supported by close to $12 million of funding from the National Institutes of Health, and are conducted in collaboration with various partners as well as multiple pharmaceutical companies and patient advocacy groups.

I am going to show you selected results on our work in ALS as well as progressive supranuclear palsy and Parkinson’s Disease. Before going there I’m going to actually introduce you to our whole platform. I am going to specifically introduce you to our platform for at home monitoring, called BioDigit Home.

This is a robust solution for collection of digital biomarkers that has five different components, including our sensor technology for monitoring, for example, physical activity, posture, and falls, and digital assessments, where we have already incorporated more than 100 digital assessments into our application.

This is the only system that enables gait and balance assessment at home; you can do, for example, a two-minute walk test at the home of the participant. It enables remote physiological monitoring via off-the-shelf devices that we have integrated, and it has software features to facilitate the use of these devices in a clinical trial or study, such as screening, consent, and virtual visits.

The results I am going to present today are from our BioDigit ALS study. At this point they are from ten participants who are monitored and followed for 12 months.

The study is still ongoing and still recruiting more participants. Each participant came to the clinic at baseline and every three months afterwards, and undergoes a complete clinical evaluation during each study visit.

Afterwards, each participant wears two wearable sensors, a pendant and one on the wrist, in order to monitor physical activity, posture, falls, as well as upper limb function. This is supplemented by our digital assessments of speech, handwriting skill, and pattern tracing, which each participant performs on a biweekly basis at home.

For the results I’m going to show you, I am specifically going to focus on comparison with the ALSFRS, which is the gold standard in clinical care as well as clinical trials for assessing disease progression in ALS. The ALSFRS can be classified into four different categories: the bulbar subdomain, which is the first three items; the upper extremity motor subdomain, items four to six; the gross motor subdomain, items seven to nine; and the respiratory subdomain, the last three items.

I am going to show you that our speech assessments correlate significantly with the bulbar and respiratory subdomains, our sensor-derived measures from the pendant correlate with the gross motor subdomain, and the ones from the wrist sensor correlate with the upper extremity subdomain of the ALSFRS, thus providing a complete solution for measuring or tracking the ALSFRS.

First, looking at the gross motor subdomain, I am going to briefly talk about our PAMSys sensor technology. As I mentioned, this is used for precision actigraphy; it measures more than 40 independent parameters of physical activity. It is the only system that can measure posture using one sensor, including posture transition time and duration. Also, the sensor has 500 megabytes of memory and a battery life of up to six months.

From those 40 different sensor-derived measurements, in this slide I am going to show you activity-related or step-related parameters. On the Y axis are sensor-derived measures; on the X axis is the gross motor sub-score of the ALSFRS. You see significant correlations of most of these parameters with the clinical score. If you look at the posture classification, you can see significant correlations between some of the postural parameters and the clinical score as well.

Now, moving on to our wrist sensor, which enables monitoring upper limb function continuously at the home of the participant: it measures more than 20 different parameters of upper limb function during activities of daily living, and you see moderate to significant correlations of some of the calculated parameters with the upper extremity subdomain score of the ALSFRS.

Moving on to the speech assessments, there are multiple speech assessments that are performed. For example, a passage reading task, where digital speech analysis measures more than 40 parameters of speech; again you can see parameters that have good correlations with the ALSFRS bulbar score. The next one is sustained phonation, "aaaa", where, as you can see, parameters such as pitch and loudness collected by our device have correlations with the respiratory sub-score.

With these data, acknowledging that the sample size is low, we have developed machine-learning-based predictive models. What is on the Y axis is what is measured by digital and wearable sensors, essentially what is predicted by the system with no other input from the clinician. What is on the X axis is the clinical score.

So for example, for the bulbar subdomain as well as for the upper extremity subdomain and the gross motor subdomain, you can see that the predictive model works relatively nicely in this case, again providing a solution to measure the ALSFRS or ALS disease progression.

Now, moving on to another study: this is our BioDigit PSP study, focused on progressive supranuclear palsy. It is supported by NIA and is still recruiting. The results I’m going to show you are from 10 PSP patients and 10 Parkinson’s patients. This is multi-modal monitoring, as you can see, including multiple sensors as well as digital assessments of speech and cognition, and also video assessments.

I am going to show you specifically results related to at-home monitoring of gait and balance. Just to show you an example, this is from a patient with progressive supranuclear palsy, with data collected at the home while the participant performs what is called the TUG test, Timed Up and Go. Using wearable sensors we can provide detailed information about this assessment.

Some of the results are shown here. For example, we can look at the characteristics of walking during various parts of this test: during turning, standing up, sitting down, walking toward, and walking return. And if you look here, the differentiation between PD and PSP is much better when we look at the walking return, again highlighting the value of using wearable devices in this case.

Also, similar to the results that I showed for ALS, you can see that sensor-derived measurements show good correlation with both the PSPRS gait score, the PSPRS being the clinical rating scale for PSP, as well as the modified total PSPRS score. So multiple parameters show very good correlation.

The last thing I want to show is longitudinal measurements, where those tests are performed over 12 months, on a monthly basis, at the home of the participants. When compared to the clinical score, the grey area here, you can see that the sensor-derived outcomes demonstrate less variance than the clinical scores, and thus using sensor-based devices can reduce the required sample size of clinical trials.

At the end I would like to acknowledge support from both the National Institute on Aging and the National Institute of Neurological Disorders and Stroke. Thank you. Please welcome the next speaker.

THURMON LOCKHART: Hello everyone. My name is Thurmon Lockhart. I am going to be talking to you briefly about fall accidents, like Ashkan has done. He has done basically everything. What I am going to do is really focus on fall accidents only, and then talk about which metrics are important for things like that.

To do that I am going to talk to you briefly about fall accidents and the problems associated with that. And then what it took us to develop the sensor to actually assess fall risk among older adults.

Here is what is happening. By 2030 there will be about 73 million older adults, and we are expecting 52 million falls. Out of those, 12 million individuals are going to be injured. So this is a significant problem. In 2019 about 39,000 people died from falls, and in 2021, 44,686. The same year, motor vehicle accidents killed about 45,000. Very similar.

So what are we doing for the motor vehicle accidents? Well, we’ve got seatbelts, we’ve got all kinds of protection systems. What are we doing for older adults? Really nothing, to a certain extent. And if they don’t die, then they suffer, significantly. Their quality of life diminishes, and it degrades from that point onward.

What are we doing about it? Well, there are two different approaches: fall protection and fall prevention. In terms of fall protection, we are doing pretty well. There is the Code of Federal Regulations, 29 CFR, things like that, which help us build appropriate steps, the length or type of the steps, such that the individual doesn’t get into an accident, or a perturbation as we call it. Hip pads perhaps could work, side handrails and things like that; we’re doing very well there.

One thing that we are not doing well is fall prevention. Fall prevention entails things like modulating the coefficient of friction for slippery surfaces, and you know that is hard to do, almost impossible. Currently, I am working with the CPSC, the Consumer Product Safety Commission, to replace a bathing surface standard that has been obsolete since 1978.

So the key question here again is how do we -- One thing that we do very well, by the way, is training. We created all these training modalities. Gait training, balance training, strength training, all of these things work very well.

The one thing that does not work too well is fall risk assessment. Currently fall risk assessments are done somewhat qualitatively, and the accuracy is really lacking. What passes for fall risk assessment is like this: if you fall like that, you are at fall risk.

Anyway, that was the slow motion of getting out of the bathtub. And what we’re finding is that in getting out of the bathtub there are four times more accidents compared to their peers, but that’s the next story.

Anyway, so how do we measure fall risk? We did this kind of experiment for 25 years, and through that period of time we learned a great deal. We actually created a slip simulator for companies like UPS and FedEx to train on to reduce fall accidents. So we did a lot of different developments. For older adults we really needed to go back and understand what happens.

Here is Laetoli, Tanzania. Australopithecus afarensis probably made this footprint about 3.7 million years ago. What happened was a plume of ash fell, then it rained, then these beings walked on it, and then a plume of ash fell again, and it solidified and became rock. Look at the beautiful footprint. This is bipedal gait, bipedal locomotion.

But more importantly, the next one, right next to it, is a smaller footprint, and the step length is about the same: the bigger one is taking a shorter step, and the smaller, younger one is taking a normal step, walking along. Even from 3.7 million years ago, it tells us something about their behavior. Gait, in a sense, is a powerful means to assess behavior.

And so we looked at this really carefully, and what we’re finding out is that at the time of heel contact, during the single support phase, we are basically falling and catching ourselves. When that happens, significant variability occurs right there. But you can see other variability that occurs nearby as well.

This is a phase diagram. So what we are doing to a certain extent is a linear type of assessment where we are assessing everything right at the time of the heel contact, but ignoring all other variability. That’s what we call temporal variability. And that’s the essence of what I wanted to talk to you about, temporal variability. Assessing that variability is the key to assessing instability.

So in terms of our gait, of course our behavior is related to exploration as well as an escaping type of behavior. And that is formulated through the cuneiform nucleus, that’s here, as well as pedunculopontine nuclei. And look how small these things are right here.

One of my PhD students is actually doing a deep brain stimulation study, going into the STN, the subthalamic nucleus, as well as the PPN, the pedunculopontine nucleus, to assess if there is a gait effect, and this is what she found.

So far she has done three individuals; they are actually implanting the STN and the PPN together, and here are the electrodes. They program the first, second, third, and fourth contacts, and when the patient feels okay, they stick with that setting. This is three months right here, and this is before. This is reaction time; we are perturbing this person. As you can see, at three months the reaction time decreases a little bit, and with medication on versus off it decreases only a little bit.

As you can see, the PPN clearly has centers that are relevant for gait adjustment and for the adjustment of that reactive motion. In a sense that is what we found. And the important difference between younger individuals and older individuals who fall is really not the initiation; it really is the reactive recovery. That’s where we see the differences.

And so after about 25, really 20 to 30, years of experimenting like this, we found that this is happening all the time. So how can we utilize this information to actually predict which individual will fall, given a slippery surface or that kind of a hazard? So we started to look at a variety of modeling paradigms.

In a sense, it all comes back to those brain centers, the hippocampus and all these areas. Alzheimer’s patients, Parkinson’s patients, people with dementia, people who have stress syndromes and things like that, all of these conditions influence those centers of the brain.

And what that does is influence the spatiotemporal parameters. By assessing those appropriately, I think we could actually assess the appropriate characteristics. Overall, what gait research has shown us is that there is a nonlinear, time-delayed structure that can be quantified by assessing some of this stability information.

So again, I’m going to be talking about this temporal variation. Right now what we are doing is really one way of measuring it; there are other methods available. And another thing: the bottom line is that all of our measurements are a voltage, a current, a resistance, and a phase. That’s what we’re going to get, no matter what the equipment is. We translate that into numbers, and then look at the patterns afterwards.

And so then it doesn’t matter what we are measuring if we do not understand the essence of these types of accidents. For example, why are you measuring step length? Why are you measuring walking speed to assess fall risk? Do we really understand that? That’s where the issue is, to a certain extent, time and time again. As a result, there is a delay in the development of devices that can truly identify gait characteristics.

So what we did was apply nonlinear dynamics theory, things like the (inaudible), to assess these temporal characteristics. Here is one example: in older adults who fall the measure is a little bit higher than in younger individuals. We used the Rosenstein algorithm back in those days, and it took us about two days to run it; now you can run it on an iPhone, by the way.

And this is how we usually do it. This is an acceleration signal, or a marker position or what have you. We embed it into three dimensions or five dimensions, whatever dimension you need to be in, and we assess this information; there is some variability.

And we basically use the Rosenstein algorithm to assess that slope. We do this over and over and over again. This slope represents instability. What it means is that, given a perturbation, that person will most likely fall more readily than other individuals who have a flatter curve.
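
A compact sketch of the Rosenstein-style estimate he describes: embed the signal, find each point's nearest neighbor outside a temporal exclusion window, and take the early slope of the mean log-divergence curve. The embedding dimension, delay, and other parameters are illustrative, and the brute-force distance matrix is only suitable for short trials:

```python
import numpy as np

def delay_embed(x, dim=5, tau=10):
    """Time-delay embedding of a 1-D signal (e.g., trunk acceleration)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def lyapunov_slope(x, dim=5, tau=10, theiler=50, horizon=60,
                   fit_steps=20, fs=20.0):
    """Largest-Lyapunov-exponent estimate (Rosenstein-style slope).

    Steeper slope = faster divergence of nearby states = less stable gait.
    """
    Y = delay_embed(np.asarray(x, dtype=float), dim, tau)
    n = len(Y)

    # Nearest neighbor of each point, excluding temporally close points.
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)  # O(n^2) memory
    for i in range(n):
        d[i, max(0, i - theiler):min(n, i + theiler + 1)] = np.inf
    nn = np.argmin(d, axis=1)

    # Mean log separation of each pair, tracked k steps into the future.
    idx = np.arange(n)
    mean_log_div = []
    for k in range(1, horizon):
        valid = (idx + k < n) & (nn + k < n)
        sep = np.linalg.norm(Y[idx[valid] + k] - Y[nn[valid] + k], axis=1)
        mean_log_div.append(np.mean(np.log(sep[sep > 0])))

    # Slope of the initial (approximately linear) region, in 1/s.
    t = np.arange(1, fit_steps + 1) / fs
    return np.polyfit(t, mean_log_div[:fit_steps], 1)[0]
```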

You can also assess this information using Floquet multipliers, by the way. With a Floquet multiplier, you can cut the gait cycle anywhere, map from one cycle to the next, and then calculate those variabilities. Those are more of a chaos-type measure that you could assess.

You could also use other nonlinear dynamics measures, like the complexity someone talked about here, which is a beautiful measurement, especially for older adults. And this is a real measure of heart rate variability.

As you can see here, the mean and standard deviation are about the same, but the multiscale entropy is a little bit different. What happens is that as we get older, our gait, our heart, the rhythm, stays in a monotone to a certain extent; it doesn’t vary too much, and as a result, given a perturbation, that person will not react and recover.
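
Multiscale entropy, as he notes, distinguishes series with identical means and standard deviations by their complexity across timescales. A standard sample-entropy-based sketch (coarse-graining by block averaging, tolerance as a fraction of the SD), not the speaker's exact pipeline:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def matched_pairs(length):
        tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        return (np.count_nonzero(dist <= r) - len(tpl)) / 2.0  # i != j pairs

    return -np.log(matched_pairs(m + 1) / matched_pairs(m))

def multiscale_entropy(x, max_scale=6):
    """Sample entropy of coarse-grained copies of the signal. Healthy,
    adaptable rhythms tend to stay complex across scales; overly regular
    ('monotone') ones lose entropy as the scale grows."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)  # average blocks of s
        out.append(sample_entropy(coarse))
    return out
```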

So the bottom line here, for fall risk prediction: we used about 171 community-dwelling elderly individuals, a dataset of 10-meter walks at their homes, and we used the linear features as well as the nonlinear features. We wanted to veer away from things like zero crossings and skewness, things that don’t make any sense for rehabilitation.

So we input those data, and what we found was that incorporating nonlinear dynamics actually improved the prediction rate by about 12 percent. And by the way, this phone app here, Lockhart Monitor, is available for free; you can download it, it has been tested to a certain extent, and there is another version of it.
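
As a sketch of that modeling comparison: combine conventional linear gait features with nonlinear stability features (such as the Lyapunov slope and multiscale entropy above) and compare cross-validated accuracy with and without them. The features, labels, and classifier below are placeholders, not the study's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(171, 5))      # placeholder features: cols 0-2 linear
                                   # (speed, step length, step-time SD),
                                   # cols 3-4 nonlinear (Lyapunov slope, MSE)
y = rng.integers(0, 2, size=171)   # placeholder faller / non-faller labels

clf = RandomForestClassifier(random_state=0)
acc_linear = cross_val_score(clf, X[:, :3], y, cv=5).mean()
acc_all = cross_val_score(clf, X, y, cv=5).mean()
print(f"linear only: {acc_linear:.2f}, linear + nonlinear: {acc_all:.2f}")
```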

And this is where we’re showing the actual improvement from incorporating the nonlinear dynamics. And this is the whole story: we have to try to understand what is going on here, to really understand gait characteristics, including under perturbation; then and only then can we really start to develop devices.

And we are developing a variety of devices, not only the mobile app but also 4CMG, a variety of systems that can actually measure polypharmacy and some of the aging effects. Lockhart Monitor has just now been upgraded, and there is this other version, more of an enterprise version, that hospitals or whoever can use. So it’s already out there. Thank you very much.

JEFFREY KAYE: I am going to be focusing more on holistic and naturalistic assessment of movement and behavior in the wild so to speak. And I am going to be focusing on the importance of context and use case, and how one size may not fit all, certainly in the research space.

So for context, there are many different use cases for measuring various activities and behaviors. One can focus at the higher level, the type of study, whether it is a basic behavioral science kind of study, clinical or intervention trials, clinical practice, or even larger population health kinds of studies.

When one dives down to more specific areas, often most important to human activity are the functional domains. Around the home these are cognition, mobility, sleep, socialization, physiology; we’ve heard a lot today about these various areas being candidates for measurement. But there is also the importance of how these actually interrelate, so doing multidomain, integrated kinds of assessment is very important.

I’m going to be talking about examples that highlight principles to best address these various measurement areas. I am going to describe something that we call the CART platform, which is an open, use-case-flexible, technology-agnostic, and sharable platform.

This was largely developed most recently with funding from the NIH, with the NIA leading, and the VA, to create a system that could be used by researchers, particularly in the aging sphere, though it could be used for people at younger ages as well, to facilitate using digital kinds of methodologies in their research in these various areas.

So this system, importantly, also allows one to use other conventional kinds of data. There are many standard research types of data that are used; we heard about many standard scales that people are interested in. There is lots of EHR medical record data. And there is the external environment, the air quality, weather, and so forth, that can affect these measurements.

But ultimately one wants to look at the home-based activities of individuals. That is where most people spend their time, particularly in the aging group. And so what we have done is for each of these areas of function or domains various kinds of technologies or sensing systems can be used depending on the use case.

And I’ll just give a few examples in a moment, but just to highlight this, when we talk about cognition one can measure it directly by taking a test online, or you can look at the meta-aspects of actually using the computer itself and whether it’s a laptop, a smartphone, a tablet, or even driving your car.

So one can tap into the data port of an automobile and look at driving, as a cognitive task, as well as safety kind of measurement and assessment. Importantly the field as we know moves very rapidly, so one needs to be able to integrate new sensors or methodologies. This gets back to that issue of technology agnosticism.

Some examples of the kinds of things that are measured or can be measured: for example, in the domain of what we call everyday cognition, using a computing device, we have shown very clearly that in individuals who have mild cognitive impairment, over time just the time they spend on the computer gradually declines.

In the area of mobility, for example, looking at a group of older individuals, average healthy older individuals followed over time, some developed mild cognitive impairment early, others later. You can see here, looking at the variability of their walking speed measured with passive sensing, that it differentiates them; I am going to highlight that in a moment.

Area of social engagement. Simple metric, time together, time apart, time out of home. This is a spiral plot just plotting a 24-hour block. Each little line circularly, this is about a month of data, just showing how couples might spend their time. This could be used for caregiver kinds of assessments as well.

And then just another example, sleep, which can be measured with passive sensing, with wearables, with a bed mat, and which also distinguishes very well, in this case, different categories of mild cognitive impairment. One might even consider couples and how they each affect the sleep of one another. Often in clinics I find that people may be describing their disturbed sleep and blaming it on their partner.

So, what are some of the principles to most effectively or optimally do these kinds of assessments? One thing I will strongly suggest is that we want to be most ecologically valid. That is, we want the movement and behavior we assess to reflect everyday function and to be really based in the community; as I mentioned, on average older populations spend about 20 to 21 hours a day in their home.

So in that regard, ideally, the more passive and unobtrusive the measurements are, the better. I'm not sure it's a real word, but Hawthorne-ness, the fact that you know you're being assessed, can affect the very behavior that you're trying to look at. And I also want to emphasize that we talk a lot about passive sensing and active sensing, but it really is a continuum; there is literally nothing that is entirely passive.

So these are examples of some individuals who are in studies using this system, from the original CART study. Just on the wall there are some passive IR sensors; the pillbox is a standard pillbox, but it records the time of day when the compartments are opened. And over on the right is a wearable; there are hundreds of wearables, and they continually evolve. It really is, I think, about the right tool for the right job.

And the other thing to consider is that I think we often get focused on a single channel, and ideally, if we can relate these different domains of function together, we get a better picture that really represents activity and behavior in the wild. So context is everything, I think, and that is really important.

To kind of emphasize this I am going to just describe a couple of cases actually that come from longitudinal aging studies where this platform has been installed in homes. This is an individual from a cohort of about 250 people who were followed over time. They all were average, healthy, normal aging individuals.

This individual developed Parkinson's disease a couple of years after being monitored with a sensor system. These are passive IR sensors that are in a line on the ceiling; every time a person walks under the sensors a walking speed is captured. The little graph on the top, with the star in the middle, shows two months, I believe, of walking episodes or bouts.

And the star is actually the walking speed obtained with a stopwatch in a clinic. So walking is very variable, as we all know. The plot on the bottom though shows something else, that when the person was given medication they were able to have some stabilization. They then moved to an assisted living facility. And you can see the data changes. And this is actually challenging to analyze, because one could argue that it’s the change in the environment or the movement of the sensors themselves into their new living space.

Another example of some of the contexts that we have to think about is here: another person who was followed in the same study who also developed Parkinson's disease. Here their mobility was measured just with step counts using a wearable, and you can see how that changes over time, showing 50 weeks before and after diagnosis and treatment with Sinemet.

But you can also look more deeply in the same individual if you have another methodology available. So here also passive sensing was in the same setup. You can then dive down into a more detailed look at what may be happening.

So again, here is walking speed, and it is really the average walking speed during that hour of the day. The time of day is on the vertical axis, and the color indicates the speed. This is just to illustrate the principle of complementary measurement of the same metric or domain of function, and then looking at people over time as they change under treatment or in different environments; that is what I'm trying to get at.
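
For readers who want to reproduce this kind of time-of-day view, here is a minimal sketch assuming a table of timestamped walking bouts with speeds; the column names and numbers are hypothetical:

    import pandas as pd

    # Hypothetical input: one row per detected walking bout.
    bouts = pd.DataFrame({
        "timestamp": pd.to_datetime(["2023-05-01 08:14",
                                     "2023-05-01 08:40",
                                     "2023-05-02 19:05"]),
        "speed_cm_s": [82.0, 78.5, 64.2],
    })
    bouts["date"] = bouts["timestamp"].dt.date
    bouts["hour"] = bouts["timestamp"].dt.hour

    # Rows = hour of day, columns = calendar date, values = mean speed:
    # the hour-by-day grid that gets rendered as the colored plot described.
    grid = bouts.pivot_table(index="hour", columns="date",
                             values="speed_cm_s", aggfunc="mean")
    print(grid)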

Here again is just another example to amplify this. This is a person followed in a cancer study, where we're looking at how people respond to treatment or the side effects of treatment. In the top rows are weekly answers online to a questionnaire looking at their function, and events such as whether they went to the hospital or the emergency room, whether they fell, or whether they had changes in their medication.

And then looking at multi-modal assessment using different devices or sensors. So there is a row looking at step counts, weight changes, and then sleep metrics, in the bottom two panels or lines. And just the principle that these can all be integrated to give a much more holistic view of what is changing or not over time relative to real world events.

So just a few other principles, if you will. This term technology agnostic is used a lot, but I think it is important to remember that things come and go. I am a somewhat older guy, so some of you may remember the Pebble, the Jawbone, the Microsoft Band, which was terrific; the Actiwatch was just discontinued, and the Amazon Halo wristband, I understand, is no longer going to be supported or made.

So one has to be prepared for that and understand that the measurement has to be well understood, specified, traceable. And this is where things that are ideally non-proprietary, open, and sharable, are really important certainly for research. Whether the technologies are tested in the real world is important.

So validation is important, including biological validation with actual people's postmortem results; this is Alzheimer plaque burden related to these variables. And then finally, whether testing is done in diverse populations matters. These tools have been measured or used in an RD aging project in Chicago, with veterans in rural areas, low-income individuals, and Hispanic participants in Miami. The context again is important. Asking people what has happened will affect the data; here during the pandemic we see loneliness going up when asked on a weekly basis, and on the bottom, step counts.

And then the last point I want to make is that it is not just the behavior itself but also where it is happening that is another really important feature. Here I am just showing movement about an apartment of an individual, looking at the time they may spend in particular locations during different time periods. And then, second to lastly, you can see how whether you live with somebody or not, or have some cognitive impairment or not, really affects how much time you spend in a particular location, and how much you transition from one room to another, which is a measure of mobility.

Or, in a person with severe dementia, living in a memory care unit, essentially in a room, you can look at agitated behavior, movement about their room; the left panel is agitated nights, the right panel is non-agitated nights, just showing how much less time is spent in bed, or actually even in their room, during those periods.

And with that I will end, and just remind everybody that we have come a long way, I think. This is Darwin's diary of what he was doing during the 24-hour day, and on the right is what we can measure now with sensors continuously in the wild. Thank you.

Discussants

YUAN LUO: I also would like to invite the two discussants. We have already met the first discussant, Dr. Andrei Shkel from the University of California, Irvine, and Dr. Laura Cabrera from Penn State University. So we can start with Laura. Laura's background is in neuroethics. You will talk a little bit about your work, and then we will ask you questions. You will start.

LAURA CABRERA: Thank you for the very exciting talks. You mentioned I’m a neuroethicist, so that means that I look at the ethical, social, policy implications of advances in neuroscience and neurotechnology. So hearing these talks is very interesting because I have both technical as well as ethical questions. But I only get one question, which is tricky.

So my question: I really appreciated Dr. Smith getting at this very early in your talk, where you mentioned some of the issues that you see in your own work. My question is initially to you, but I want to open it up to the rest of the speakers. What would be a key ethical consideration of your work, in particular for the populations that you're working with? Because you're working either with infants or children or older adult populations, and those raise particular ethical issues. And related to that: are you currently and actively working on ways to address that particular issue? We'll start with you, Dr. Smith.

BETH SMITH: With infants there are a number of different scenarios. Some of the infants that we're working with are known to be at high risk for neurodevelopmental disabilities because of how they started life. And those diagnoses are often not made until two or three years of age. So I think sensors can help us do that earlier, as we were talking about.

But what those variables are are going to likely be different in different cases. And so I think one of the things that could be valuable is by having a truly representative sample of the wide variability of normative development, we can sort of define okay, here is typical development in this large state space, and maybe autism is over here, and cerebral palsy is over here, and other conditions are in different places. I’m sorry, the second part of your question?

LAURA CABRERA: If you are actively working on ways to address that particular ethical issue that you raise?

BETH SMITH: Which particular ethical issue?

LAURA CABRERA: Well, the first part of the question was what is a key ethical issue, and the second part of the question is how you are working on ways to address it.

BETH SMITH: There are a lot of ethical issues when working with children. Safety is one of them. So I talked a little bit about choking hazard being particularly relevant in working with infants.

We also have ethical issues in that, depending on what you're measuring in infants, you can measure things about the home environment and things about parental interaction. For example, we are mandated reporters of child abuse, so things like this get to be tricky, and you have to have a lot of conversations and a plan for how you will address things like this when they come up. Anybody else care to comment?

DONG SONG: We have not started measuring with our sensor on infants yet, but it is scheduled. One of the things is that we would need to check oxygen and carbon dioxide blood gases, and because of ethics, of course, we cannot really do any interventions. We will have to use this population's natural data, and we can't do anything beyond that. So this is kind of limiting our measurement scope, our test scope, but it is what it is.

ASHKAN VAZIRI: I noticed that in pretty much all the presentations commercial off-the-shelf sensors were used. So in a way we are piggybacking on what was developed for high-volume applications, developed with some generic goals in mind. My question is, when you select an off-the-shelf sensor, from Digi-Key or elsewhere, how do you select it? And are you satisfied with what you find on Digi-Key, and what would be your wish list if you are not satisfied with what you find?

JEFFREY KAYE: So there is sort of a hierarchy of requirements; there are no ideal solutions, and it is a tradeoff. I think the first part is whether it is going to be usable or acceptable in the population you are studying, in the research context. Because if people don't use the device or tolerate it, then you have no data. That is, at least in my experience, often neglected. Somebody may have a really shiny, cool device; the Apple Watch is a tremendous device, but in certain instances it is not actually that usable by some people.

The degree to which the quote-unquote raw data is available, and you can have a whole discussion about what raw data is, but the degree to which the data is traceable and available, and, if there are algorithms involved, whether those are also open and available, is another important consideration. The cost, and whether there is any data suggesting it has been validated or measured against other standards, is important. And there is a whole host of these.

But the other question that is really interesting, I think, is what happens if there isn't anything that quite fits; then you're faced with going and trying to make something yourself. Again, just an anecdote: we have always wanted to measure medication use. Not reminding people; we just wanted to use pillboxes to know when it was likely the person was using medication, taking it out.

So the plastic pillbox is the most widely used medical device for medication tracking, if you will, but there was no such device actually on the market. So we went ahead and constructed a bunch of these boxes. We had artisanal pillboxes that were hand done, but that's not scalable.

It subsequently turned out that there were companies in the marketplace that then began to make these, and they became available. So we were fortunate to be able to move away from that quickly. But I think that sometimes you have to accept that the perfect shouldn't be the enemy of the good and accept that you don't have a perfect solution.

BETH SMITH: Just to add a little bit to that, another consideration is battery life. It depends on what you're trying to do with a sensor, but some sensors have interactive user interfaces, and you may or may not want your users interacting with the sensor, and that also influences battery life.

So there are considerations like that, and I also want to echo about validation, whether or not the sensor you’re choosing has been validated, if you’re using the metrics that exist from the sensor, have they been validated in the population that you’re assessing.

And that’s one thing, I was actually talking about use of the raw data, in which case even at that point there’s not necessarily equivalency among different sensors measuring the same thing, which is yet another problem in terms of validation.

THURMON LOCKHART: Biosensics didn't mention what sensors are used, who the vendor for the sensors is, or what the characteristics of the sensors are.

ASHKAN VAZIRI: We develop every part of our technology ourselves, including the wearable sensors, all the software, all the algorithms, the backend, the portal; everything is developed by us. There are obviously many wearable sensors out there which can be used at a smaller scale, for example for performing research.

Another consideration would be the application. For example, if you are specifically looking at a medical application, if you want it to be used in the context of an FDA submission, you have to look at what the FDA requires: when sensors are used, the sponsors need to have access to the raw data, which is not provided by many of the current wearable devices, or if they do provide it, the battery life is variable.

And the other consideration is scalability. Many of the sensors that are off the shelf can be used in a study of 20, but really cannot scale to a solution that, for example, can get FDA approval and then be put on the market. So, based on experience, we develop everything ourselves.

THURMON LOCKHART: What sensors do you use?

ASHKAN VAZIRI: We have multiple sensors. We are actually unifying them into a single sensor. That single sensor has a 3D accelerometer, an independent nine-degree-of-freedom IMU, an altimeter, two microphones, wireless charging, one gigabyte of memory, and six months of battery life.

YUAN LUO: Let’s answer some questions from the virtual audience first.

STAFF: The first question is about data sharing. I would love to get the panelists' thoughts on how these data might be openly shared for the community to analyze, considering the host of different sensors the data come from. How can data be shared effectively for the community to analyze from all these different sources?

ASHKAN VAZIRI: I can at least on our side mention that we are a business, so we do value the data that is collected in our studies. But in limited cases we actually provide the data to collaborators and others who would like to use them for research applications.

We are also, for example, releasing a pipeline for speech analysis before the end of this year that will actually enable anyone, very similar to ChatGPT, to analyze online the speech data that they collect using any device. So as a small business we are protective of the data, but on a case-by-case basis we provide the data to researchers who would like to publish with it.

ULKUHAN GULER: For us, we usually publish on GitHub. Most of our raw data are there, as well as our MATLAB code, so you can use all of it.

JEFFREY KAYE: If you're accepting NIH money you are supposed to share your data. However, I am going to be maybe a little provocative: I think what is maybe not appreciated is that with this kind of data, you can say here are the data streams, they're yours, but there is so much that goes into how that data was captured, the context; even trying to annotate it adequately is a very difficult enterprise.

And in our experience, we have had many people who have asked for our data, and we give it to them, and often we get second and third asks about questions, which we are happy to entertain. But I think this idea that oh you just post up a bunch of raw data and then you’re done is not correct.

YUAN LUO: Next question from the audience in the room.

PARTICIPANT: I have two questions. I think one of the common themes among the talks, Dr. Smith, Dr. Kaye, Dr. Lockhart, was that activities of daily living and motor functions are very much personalized. Tell us a little bit about when you try to detect changes, deviations from the normal self: how can you build models, potentially digital twins, that would help you look for those changes but would also continuously update?

The second question is about how we can unlock the capabilities of these technologies. If you consider a physician care provider, they want to use these data to make some determination. But looking at the raw data is not the solution for them. What are the low hanging fruit sort of directions that you might suggest that can really take you to the next level for translation?

JEFFREY KAYE: So if I heard it correctly, at least the first one, building models from the data: there are many ways to do it. In fact there is a lot of spin and buzz around artificial intelligence and machine learning. But if you can use a linear regression model or a logistic regression model and get a result that is reliable and reproducible, that is okay.

And in fact we have taken for example the walking speed data that I showed, and we can predict falls within a week looking at the variability of walking speed.

So variability is very important, as Thurmon pointed out, with a simple logistic regression kind of model approach. But then of course there is tremendous detail in these data streams. And so one can certainly aggregate the data and train models to then predict all sorts of outcomes. I know it is a more general answer, maybe later we can talk more about it.
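
A minimal sketch of the kind of simple model being described, assuming one row per person-week with a walking-speed variability feature and a fall label; the feature names and synthetic numbers below are illustrative, not the study's data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 400
    # Hypothetical person-week features.
    cv_speed = rng.gamma(2.0, 0.05, n)        # within-week speed variability
    mean_speed = rng.normal(80.0, 10.0, n)    # mean walking speed, cm/s
    # Synthetic labels: higher variability -> higher fall odds.
    logit = 8.0 * cv_speed - 0.02 * mean_speed
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    X = np.column_stack([cv_speed, mean_speed])
    auc = cross_val_score(LogisticRegression(), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC: {auc:.2f}")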

ULKUHAN GULER: I was thinking about how we could create an estimation type of scheme to use some of this information better. So that's how we've used models, I would say.

PARTICIPANT: So my question is for Dr. Smith. You said that the timestamps are not reliable. Are you making a general statement, or are there specific cases where the timestamps from the sensor are not reliable? And secondly, when I looked at the data you shared, it was a two-second difference over 72 hours; does that matter? Does that much drift, or that much uncertainty in the sensor, really matter?

BETH SMITH: My point was that drift is a known situation with sensors and with accelerometers, and that you have to test for it to be aware of it. Whether or not it impacts the measurement you're making depends on a number of things: how long you are recording for, what measure you are making. In the case of what we were trying to do, which is determining whether the right arm is moving by itself or the right arm and left arm are moving at the same time, a difference of two seconds absolutely matters, because movements are often shorter than that. If you're measuring something else it might not be important to you.

But my point is that’s something that people need to be aware of and then test for. It is going to vary by different sensors. It depends on the sampling rate of the sensor, it depends on a number of other characteristics. But if they’re not actively synchronized to one another and communicating, the accelerometers are going to drift apart from one another over time, and the further along you get the further apart they’re going to get.
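
One way to test for the drift being described, sketched under the assumption that both sensors capture a shared event (for example, a simultaneous tap); cross-correlating windows around that event estimates the relative clock offset, and the change in offset between the start and end of a recording is the drift:

    import numpy as np
    from scipy.signal import correlate

    def offset_seconds(a, b, fs):
        # Estimated offset of stream b relative to stream a (negative
        # if b lags a), via the peak of the cross-correlation.
        xc = correlate(a - a.mean(), b - b.mean(), mode="full")
        lag = np.argmax(xc) - (len(b) - 1)
        return lag / fs

    rng = np.random.default_rng(1)
    fs = 100.0
    t = np.arange(0, 2, 1 / fs)
    pulse = np.exp(-((t - 1.0) ** 2) / 0.001)              # shared "tap"
    a = pulse + rng.normal(0, 0.01, t.size)
    b = np.roll(pulse, 13) + rng.normal(0, 0.01, t.size)   # b delayed 13 samples
    print(offset_seconds(a, b, fs))                        # ~ -0.13 s

    # Repeating this at the start and end of a session and differencing
    # the two offsets gives the drift: the two-second figure over 72 hours
    # in the question works out to roughly 8 microseconds per second of skew.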

YUAN LUO: Let's say thanks to the speakers and discussants again. For more questions, you can still discuss with them during the social time and the breaks. We have a short break now, and then come back for the last session.

STAFF: The time is now 3:23 Eastern. We will return from break at 3:30 PM Eastern. Thank you so much.

Session III: Sensor Networks, Signal Processing and Considerations for Artificial Intelligence

SVETLANA TATIC-LUCIC: Okay, I will be moderating the next session, session number three, titled Sensor Networks, Signal Processing, and Considerations for Artificial Intelligence. We have two speakers in it. The first is Professor Veena Misra from North Carolina State University and DARPA, and the other is Professor Honggang Wang from the University of Massachusetts Dartmouth. We will also have two discussants, Professor Edwin Kan from Cornell University and Dr. Raji Baskaran from SuperBloomStudios. And I didn't mention that Professor Misra will be presenting virtually; her talk is about to start now. Thank you very much.

VEENA MISRA: Thank you very much. Alright. So, thanks again for giving me the opportunity to tell you a little bit about what we're doing in the ASSIST Center. We are an NSF-funded and now graduated Engineering Research Center. ASSIST stands for Advanced Self-Powered Systems of Integrated Sensors and Technologies; it is a partnership of multiple universities, and it is my privilege to present to you some recent highlights from the team members involved in the center.

So our vision right from the beginning has been to get the battery out of the picture and more accurately get the user not to have to worry about charging the battery so that continuous monitoring could be possible. We are using both the wearable platform as well as implantables.

We also are trying to make sure we have enough power coming from autonomous sources so that we can have multiple sensing modalities as part of the wearable and implantable system so that we can capture different dimensions of health.

We also would like the systems to be passively on all the time, and therefore they need to be able to communicate on their own, so the communication protocols have to be very low power; and the data analytics piece, of course, is very necessary to convert the data into inference and action.

So our center, just like a typical National Science Foundation ERC, has been very focused on a systems-driven approach. We have identified health use cases such as the ones shown on the top here; for example, we're looking at asthma, mental health, aging, behavior tracking, cardiovascular disease, and so on and so forth.

We have built numerous wearable systems such as shirts and armbands and rings and watches in order to address the needs of these particular use cases that you see.

So the driving force has been to ensure that the power available to the sensor system from autonomous sources, such as the human body, is always going to be greater than the power that we consume, so that we get always-on operation.
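
The always-on condition reduces to a simple budget check; here is a toy sketch in which every number is a placeholder, not one of the center's measured values:

    # Average harvested power over a day, in microwatts (placeholders).
    harvest_uW = {"thermoelectric": 60.0, "piezo_footstrike": 25.0,
                  "biofuel_sweat": 10.0}
    # Average consumed power per subsystem, in microwatts (placeholders).
    load_uW = {"sensing_frontend": 0.5, "radio_duty_cycled": 15.0,
               "mcu": 5.0}

    margin = sum(harvest_uW.values()) - sum(load_uW.values())
    print(f"budget margin: {margin:.1f} uW ->",
          "always-on" if margin > 0 else "needs a battery")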

Especially important are the use cases concerning behavior, because in behavior assessment the actions are not predictable, and having a reliably running sensor system is key. We heard just in the previous session that for infants a full day's data was actually needed to identify differences between normal and abnormal behavior, whereas five minutes of assessment could not. So always-on systems are quite enabling for these types of use cases.

The second aspect of this is the number of sensors and the types of sensors that we have in these systems, and also we need to make sure that the systems we build are actually adopted by different kinds of individuals, and the data that we’re collecting can be converted to actionable information.

So I’m going to touch upon these three aspects of what we do in the center and discuss with you some emerging opportunities. So one way to ensure that we have always-on operation is to take the battery charging out of the picture and instead rely on the human body.

In this regard, over the course of the past several years, we have looked at body heat and body motion as sources for powering our sensor systems. We have built advanced, flexible, thermoelectric wearable systems that have generated quite a bit of power.

We also built novel piezoelectric systems that can harvest foot strikes and convert them into self-powered gait systems. And in more recent years we're looking at biofuels, where we can collect sweat and convert it back to power using robust enzymes for lactate and glucose.

And in cases when autonomous sources of energy are not available, such as in the implantable devices, we are also looking at ultrasonic energy transfer to keep those systems continuously working as well.

And here is an example of a cardiac shirt that we built in recent years. This shirt has flexible thermoelectric devices, it has dry electrodes, it has supercapacitors, compressed sensing chips, flexible antenna, and dry ECG electrodes shown here that can provide continuous battery-free operation for continuous monitoring of ECG.

We have also recently developed very high-efficiency ultrasound energy transfer for implantable devices using CMUTs, capacitive micromachined ultrasonic transducers, to transfer energy into the body.

So with these types of approaches we are able to power up these devices in a continuous manner. But that is not the complete picture. Alongside providing energy, we also need to lower the power of all the electronics that are in the system.

And this is an example of the custom electronics that we have built in recent years, where the power levels we are consuming are minuscule compared with what you can get in the commercial space. You can see that our system here is consuming less than 0.5 microwatts of power. We have radios that we have simplified and that now operate at 150 microwatts.

In addition to that, we have also built unique circuit components like an analog front end that has four channels, so it can have four different kinds of sensing modalities working in parallel, along with a multi-harvester power management system that can take not only your body heat but also RF sources and other sources of energy available and pass them to the sensor system. So this is the kind of reduction in power that is necessary to be compatible with the energy that we're harvesting from the human body.

Some of you may know that Bluetooth communication is the most power-hungry component of wearable systems, and if you want to achieve continuous monitoring with real-time transmission, we are also positioned to look at alternative routes. In this particular case we looked at wireless EKG measurements using backscattering communication, which is basically communication via reflections.

And this comes in the form of Bluetooth packets that are reflected off of a backscattering component that is located on the phone. With this approach the power consumption of an EKG sensor can be driven down to ten microwatts, which is of course 100X lower than anything in the conventional space. And again, it does not require a battery, and our demonstrations in this regard allowed us to combine this with sweat harvesting. So this is another example of being able to achieve passive continuous monitoring without battery power.

So with that little bit of background on the harvesting and ultra-low power communication and computation, I want to move on a little bit now to sensors. We have heard so many excellent talks today about sensors, and our team in the ASSIST Program has also been working on all these different modalities that you see here, and our approach however has been to focus on reducing the power consumption of these sensors so that they are compatible with the energy harvesting that we are also developing.

So I would like to highlight three types of emerging sensing modalities that I think have great potential in the assessment of behavior, cognition, and emotion. These include optical multi-wavelength PPG, the human volatilome, and blood pressure. And I'll show you some opportunities in each of these areas.

So in the area of optical sensing, one immediate application is sleep. As we know, sleep is connected to cognition; sleep is connected to early decline in dementia and Alzheimer's; and in general a large percentage of the world suffers from chronic sleep disorders.

So our team has built a new wearable system that has the capability of combining multiple modalities, such as infrared spectroscopy, near-infrared spectroscopy, and other typical sleep-related sensor signals in a conformable patch. This system has been tested in sleep studies. We have used machine learning to further build the resilience in this system and provide a platform that can be used in clinical studies.

But there is an opportunity to take optical sensing even beyond what we have seen so far. There are many biochemicals in the blood that respond to much longer wavelengths. For example, it has been well documented that both glucose and many lipids are observed at longer wavelengths, and these are typically not possible to observe in the visible spectrum.

To this end, we have built rings that have the capability of generating 12 different wavelengths of light, and these different wavelengths can help us detect not only total hemoglobin and hematocrit, but also the other kinds of biochemicals that I mentioned already.

And our Bluetooth system design allows this ring to operate at a power consumption of 100 microwatts or less. And this can be applied toward chromophores that we are not even sure of yet that can help us assess some of the cognitive and emotional states that are being discussed.

Another emerging area that is relatively uncharted is the human volatilome. We have all heard that breath carries many signals, the VOC signals that come out, that are connected to diseases like cancers and diabetes and more. And we also know that the environment contains many toxins. What we don't know much about is what is happening at the skin.

And there have been studies, shown down here, that even human fear has a specific VOC signature. So there is an opportunity to look at skin emissions in a continuous manner and assess them for different changes in behavior or lifestyle. And here is an example of some of our work, where very low levels of VOCs coming out of the skin can still be detected using an array of metal oxide gas sensors.

And I think a very important parameter is blood pressure monitoring; several studies have shown a significant association between hypertension and mild cognitive impairment. We have used an ultrasound-based sensor system to look at the blood vessel diameter and use that as a way to interpret blood pressure.

In the interest of time I am going to move forward. I would like to share just a little bit about sensor systems powered by machine learning. So one of the areas where behavior is a big problem is individuals who are suffering from conditions like autism.

And right now the prediction of these is done by a caretaker, and that is prone to bias. And so we want to build a wearable platform and use machine learning as a way to create a more objective approach to predicting problem behavior.

This is what our system looks like. It comprises many different sensors, five posture-sensing nodes as well as physiological sensing, and with all of this data put together in an algorithm we are able to detect early signs of behavior changes.

Finally, we are looking at gait sensing as well, using wearable IMUs, and these compete very well with expensive gait monitoring approaches.

I know my time is running out, but there is one final slide I would like to share, which is about robust audio training. This is being done for cough, but also for speech, because speech changes are connected to cognitive decline. Here we are training deep neural networks so that they do not lose performance even when additional unexpected sounds are fed into the network. And we are looking at transferring this to neural network chips to make the system even more robust.

And with that I will conclude and basically state that there are many promising directions for sensor systems to actually make an impact on patients, and this is possible by enabling systems that provide long-term and continuous monitoring as well as novel sensors. Now I am going to turn it over to Dr. Honggang Wang.

HONGGANG WANG: I am very happy to be here to present my research on IoT for wireless health. This morning I saw great talks related to sensor design, and in the afternoon research related to healthcare applications. My research is a little bit different, focused on the system level of IoT systems for wireless health.

You may hear a lot of terms like eHealth and mobile health; NSF also has Smart and Connected Health. Wireless health is similar to those terms, but all those terms are slightly different. Wireless health involves the internet, sensing, wireless communications, computing, and intelligent techniques such as artificial intelligence in support of health-related applications.

We look at three main components. The first is sensing: basically, different types of sensors collect physiological signals from the body. The second is communication: wireless communications, 5G, 6G, Bluetooth, ZigBee, as people have already talked about today. And the third is computing and intelligence: we are not just using sensors to collect data; we are going to actually convert the data into decisions, to basically give the patient recommendations.

So wireless health has two major components. The first is the digital infrastructure: basically, through wireless health we can extend traditional healthcare models with communication infrastructure and devices, and also enable pervasive health monitoring.

Another important aspect is that through wireless health technologies, health care services can be extended from the (inaudible) to the patient's care and wellness. That means the health care services are delivered at home, in the workplace, and in the community.

Another aspect: as I said, my research mainly focuses on IoT. IoT had a very early definition; today we have a lot of IoT devices, and wearable devices are part of IoT. The early definition was actually made by the IEEE, what we call the Internet of Things. But now it has broader definitions.

If you look at the wider definitions, the interconnection of smart things involves multiple aspects, and if you look at the middle figure, it shows the Internet of Things including applications, networking and data communications, and the sensing part.

IoT of course has a lot of applications, like smart cities, transportation, manufacturing, and so on, but healthcare is also a main application. There are wireless healthcare applications such as home-based care applications, and today several speakers talked about in-home health monitors: you have different sensors and learning technologies, as well as health management and drug delivery like the electronic pill. All those applications rely on the sensors people have already talked about today, like the thermistor, ECG, SpO2, glucose, and blood pressure. I have researched body sensor networks, body area networks, which are similar.

So the Wireless Body Area Network is a key technology of IoT for wireless health. The sensors are resource-limited in terms of computational power, in terms of energy, in terms of transmission bandwidth. Five or six years ago, sensor communications provided basically about 1,250 kbps; it's not gigabits per second. And a lot of it is low power, because there are low-power requirements for sensor communications.

In the middle I show a wireless ECG system that we developed; we have 3D systems. The battery is still big, of course; with a coil we can actually significantly reduce it if we replace this battery. So it is quite small; the patient can carry the ECG sensors, walk around the hospital and at home, and take around 24 hours to collect the ECG signal. There are also other kinds of applications, like EPO asthma monitoring and skin cancer detection. So IoT can do a lot of things for healthcare.

For Wireless Body Area Networks, the typical architecture for body sensor networks has a central unit; generally we have the smartphone as the central unit. You can also include several miniaturized body sensor units (BSUs) using different protocols like Bluetooth, with low-power Bluetooth becoming popular today, and ZigBee, which was developed a while ago; there are also other kinds of communication protocols for wireless body area networks.

So there are a lot of challenges for wireless body area networks and wireless devices. The device must be small in size, lightweight, and easy to use. The second is privacy and security. In the application for diabetes, we have glucose sensors which measure the glucose level in the blood and wirelessly transmit the glucose level to the pump, which is deployed on another part of the body.

So the pump injects insulin into the blood; it's a closed-loop system. However, if the transmitted glucose level is modified, the pump could eventually inject enough insulin into the blood to kill the patient, so security is also a big concern for Wireless Body Area Networks. There is also a lot of interference.

Today we talked about wireless sensors on the body in the (inaudible) area, so because of wireless communications there could be some interference; can we reduce the interference to improve the system performance and also reduce the energy consumed?

Energy is the fundamental constraint for wireless body sensor networks. Of course, there is a lot of energy harvesting technology available today, but it is still not enough to support wireless body area sensor networks reliably. We also need reliable software for the sensor systems. And cost: we want cheap sensors. Today sensors, as I am going to show later, cost (inaudible) $200; that's still very expensive, so we need to reduce the cost of sensors.

Our study actually focused on the communication aspect. We developed 60 GHz millimeter-wave Wireless Body Area Networks. Today the 5G and 6G communication networks use high frequency bands, from 20 to 60 gigahertz.

If we utilize this, we will definitely improve the transmission speed of the wireless body area network, basically increasing the system capacity. Millimeter-wave communications also allow us to use directional communications, which are more secure; it is not like traditional Bluetooth or Wi-Fi omnidirectional communication, where anyone nearby can hear your communication messages. Also, antenna size is inversely proportional to the frequency, so you get a smaller antenna; basically, you use a high frequency band to miniaturize the device.

(Inaudible) comparing traditional 2.4 gigahertz communication with 60 gigahertz communication, you can see that the 60 gigahertz link has lower path loss. So we have a device, and also a model here.
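
The antenna-size argument above is just wavelength arithmetic; as a quick check, assuming a half-wave dipole as the reference antenna:

    C = 3.0e8  # speed of light, m/s
    for f_ghz in (2.4, 60.0):
        wavelength_m = C / (f_ghz * 1e9)
        print(f"{f_ghz:5.1f} GHz: wavelength {wavelength_m * 100:6.2f} cm, "
              f"half-wave antenna ~{wavelength_m / 2 * 1000:5.1f} mm")
    # 2.4 GHz -> 12.5 cm wavelength (~62 mm antenna);
    # 60 GHz  -> 0.5 cm wavelength (~2.5 mm antenna).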

For the wearable case study, we actually focused on a wearable biosensor system for remote detection of life-threatening events in infants. This research is funded by the NSF Smart and Connected Health (SCH) program, and we have worked with UMass Medical School.

At the hospital we saw very tiny infants, just one pound; you can see them in the ICU, with different sensors attached to the infant's body. Their skin is very sensitive, and they cannot move much because a lot of cables are used to connect the devices to the ECG equipment.

So we use (inaudible) and artificial intelligence technologies to help the infant. We built the system not just to collect physiological signals from the infant but actually to make some predictions. The infant can have serious events such as apnea, bradycardia, and hypoxia; can we predict when the infant will have bradycardia? That is the goal of the project.

So you can see, we actually developed, first, what is called the BCG approach, the ballistocardiograph approach. We deployed load cell sensors under the legs of the crib. The heartbeat causes some vibrations and movements, and those sensors are very sensitive; they can pick that up and eventually count the heart rate.

You can see the advantages of this approach: it is easy to install, and it is also free of size constraints; because it does not contact the patient's body, it does not require the patient's consent.

But of course there are some disadvantages, because the infant can make other kinds of regular movements, and that creates noise. We need an approach to filter that noise and eventually measure only heart rate, since the sensors are sensitive to motion and environmental noise. Also, of course, it is costly.
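
A minimal sketch of the filtering step alluded to here, assuming a load-cell signal sampled at 100 Hz; the band edges and peak-detection settings are illustrative choices, not the project's published parameters:

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def bcg_heart_rate(signal, fs=100.0):
        # Band-pass to isolate cardiac vibrations from slower body
        # movements and high-frequency noise.
        b, a = butter(3, [0.7 / (fs / 2), 10.0 / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, signal)
        # Peaks at least 0.3 s apart (caps the estimate near 200 bpm).
        peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                              height=np.std(filtered))
        if len(peaks) < 2:
            return None
        rr = np.diff(peaks) / fs          # inter-beat intervals, seconds
        return 60.0 / np.median(rr)       # beats per minute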

Later studies used EPIC systems to monitor heart rate using capacitive-based ECG. These basically measure electric field changes, so they don't require any contact with the infant's skin. For the system we designed, you can see the advantages: it has wireless capability. But there are still some disadvantages, shown here: it is susceptible to motion and environmental noise in the heart rate estimates. And of course the wireless device needs to be charged regularly, so that is one disadvantage.

The previous professor mentioned ring sensors; we also developed a similar sensor. The advantage of the ring sensor is that the ring has very close contact with the skin, so we can get strong signals. You can see the advantages: very clear, strong signals, and it is cost-effective compared with a wristwatch and small in size. We built different kinds of sensors; as I said, I work on systems, integrating different sensors to measure different signals.

But of course, power is another challenge for this kind of sensor. It has close contact with the person's skin, it requires patient consent, and it is not purely a long-term sensor. We compared our ring sensor with the E4 sensor developed at MIT; our bioengineering department did real experiments, and our sensor actually has very comparable performance with the E4 in terms of heart rate measurements.

Of course, another step is feature extraction. Once we have signals, we need to extract features (inaudible). In the morning a speaker talked about linear features and nonlinear features; we also looked at time domain features, frequency domain features, and joint time-frequency domain features, so basically different kinds of features we collect.

And then you see three major components: data collection, data processing, and prediction models. Sensors and communication make up the data collection part; processing covers feature extraction, the linear features and the joint time-frequency domain features; and then we feed the features into different models and do classification. Altogether, this system is a prevention aid: it predicts when the infant will have bradycardia, so basically it's a prevention aid.
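
A minimal sketch of the three feature families mentioned (time domain, frequency domain, and joint time-frequency), computed on one window of a signal; the specific features and window sizes are illustrative:

    import numpy as np
    from scipy.signal import welch, spectrogram

    def window_features(x, fs):
        feats = {}
        # Time domain
        feats["mean"] = np.mean(x)
        feats["std"] = np.std(x)
        feats["rms"] = np.sqrt(np.mean(x ** 2))
        # Frequency domain: Welch power spectral density
        f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
        feats["dominant_hz"] = f[np.argmax(pxx)]
        feats["total_power"] = np.sum(pxx) * (f[1] - f[0])
        # Joint time-frequency: how spectral energy varies across time
        _, _, sxx = spectrogram(x, fs=fs,
                                nperseg=min(128, max(16, len(x) // 8)))
        feats["tf_energy_std"] = np.std(sxx.sum(axis=0))
        return feats

Each window's feature dictionary becomes one feature vector for the classification stage described here.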

So we have these emulation systems, because it is not easy to test the system after you build a product by directly sending it to the hospital and testing it on infants in the ICU. With these emulated systems we basically have cameras, different sensors, and facial feature extraction.

We actually tried different machine learning algorithms like decision trees, (inaudible), with all the features we have, because the features impact the prediction each time. So we have different accuracies and different latencies, which we show here.

So at the end of my talk, I am going to mention some challenging issues. The first is sensor design. We actually design different sensors, because we need customized sensors. We also tried different off-the-shelf sensors, but (inaudible) specific to applications, we actually need custom sensors.

Second is processing. For example, we may want to run machine learning algorithms. The sensor can do a little bit of processing, so the machine learning can be performed on the (inaudible) device. (Inaudible) The data can be sent to the cloud, but there is a trade-off: communication consumes a much higher order of energy than computing. (Inaudible)

The third is real-time performance for AI and machine learning.

Discussants

RAJI BASKARAN: Thank you. Based on your work at the system level, in the engineering space, if you had to give NIH and behavioral scientists a roadmap of requests, what aspects of sensor design do you think need prioritization or input in order to translate them to use cases? Can you comment on what you heard from your colleagues, and also on what you think is still missing or what you would like to learn?

HONGGANG WANG: That is a good question. Today I was talking about this infant monitoring project, which is a collaboration between me and UMass Medical School doctors. That, I think, is very important: as engineers, we build sensors and communication systems, and we need to work with the medical doctors to develop real-world applications.

So I have learned from my experience that it is very important to have good communication, to focus on real medical problems, to understand the needs and requirements from the medical side, and then decide what technology, what type of sensor, we want to develop. That, I feel, is quite important. I am not sure I have fully answered your question, but that is something I learned.

VEENA MISRA: I could also add one aspect. One thing we would like to understand better is, when people use the phrase "we want continuous monitoring," what does continuous monitoring mean for different use cases?

Maybe in cardiac monitoring it is really continuous, like every second counts, or even shorter than that. But is that true of all the different scenarios, whether it is behavior or cognition? What continuous means in different use cases is one thing that would help us better design systems and better manage the energy so that they are always available to assess.

EDWIN KAN: First, a question for Dr. Misra. You talked about power harvesting; from one device, will there be a power distribution network to all the different sensors?

VEENA MISRA: That is a very good question. We realize that there is no one type of power source that will be reliable all the time, because sometimes the thermal energy is not available; if the person's skin is covered up, the thermoelectric generator will not be able to generate voltage, because it relies on a temperature difference. If the person is not moving, then you won't have the motion energy.

And if the person is inside, in a dark room or even under indoor lighting, there is not enough solar energy available. That is why we have created a multi-harvester power management circuit component that can actually harvest from all the different kinds of sources that are available, so that we can maintain reliable operation. We have been able to generate power from motion, and combining thermal and solar into one device is also something we've been able to demonstrate.

EDWIN KAN: Do you think Bluetooth is enough, or is 60 gigahertz really necessary?

HONGGANG WANG: That is also a very good question. Today we have low-power Bluetooth technology, and this technology has been utilized in many applications, not just medical applications. I think currently, for many applications, the data rate, the transmission rate supported by low-power Bluetooth, is really low. In the future there will be more medical applications which require higher data rates. Today we have more wearable applications, for example virtual reality and video monitoring, and those require a higher data rate.

I think (inaudible) because the communication (inaudible) is secure, basically because it is directional communication, unlike the omnidirectional traditional 2.4 gigahertz Bluetooth. It also helps reduce the size of the device, the antenna size, because of the frequency band. I believe there will be better techniques in the future that replace low-power Bluetooth.

RAJI BASKARAN: Both of you briefly mentioned AI/ML in your pipelines toward the end. Can you comment on how not just you but we as a community should think about how much data is enough to build a model, and how we understand the representation and distribution of the data? Can we start quantifying something like a (inaudible), not just the performance in terms of how accurate things are relative to a baseline, but, because all machine learning actually learns from a population, we need –

STAFF: Please talk into the microphone.

RAJI BASKARAN: I wanted to comment on the AI/ML data, both in terms of what is good enough for building models and how we build figures of merit for the representation of the population in the dataset, to know that it's good enough. Because unlike the (inaudible) figures of merit, it's not about that one person; it's about the whole population that the model has learned from.

HONGGANG WANG: That is a very good question. Today we talk about learning from big data, but we also learn from small data; I saw people mention small data. I would say it really depends on the application you're going to work on, what kind of medical application, what medical problem you're going to solve.

In the morning we talked about some machine learning approaches. Of course, today deep learning is very popular and a lot of people use deep learning, but sometimes (inaudible) linear regression or a decision tree can solve the problem perfectly. So it really depends, I would say, on what application you work on.

VEENA MISRA: I guess I can add that that is a really good question, and I think we're still at the very early stages of getting all the data that we need to train the models. One example is the measurement of cough, as I was showing: if you just measure very straightforward, clean coughing, you can train the model that way, but in the field you're going to have all these unexpected sounds, like cars going by or other environmental sounds. So we have to actually make sure that we have all of that kind of data. I would say that we don't yet have the problem of too much data, so this is something that we have to build into the models.

PARTICIPANT: I was really intrigued by a point that Dr. Wang made in his presentation, where he said a con of the sensor ring was patient consent. That really prompted my thinking: we have all these multimodal sensors, and you mentioned consent because this one was in direct contact with the person. But how are we supposed to think about consent when we have all these ways of measuring different things, some with contact and some without contact? Any thoughts on that would be welcome, and that connects to the point that Dr. Misra also made about continuous monitoring.

Again, if we can monitor 24/7, do the patients or participants have a say in what parts of the data they might not want monitored? I think that's relevant if we really want buy-in from people using these devices. Thank you.

HONGGANG WANG: Thank you for your question. What I mean by contact is basically contact with the skin; non-contact basically means the sensor just touches the clothes. That is the difference.

PARTICIPANT: Maybe I didn't explain myself. You mentioned that consent was a con of the sensor ring. So you mentioned patient consent as a con. And I am wondering why patient consent is not considered relevant in all of the other ways that you mention.

HONGGANG WANG: Basically, consent means that if we give the sensor to a hospital, they ask for the neonates to wear the sensors. That actually needs an agreement, because the sensors contact the skin of the infant, so that is what needs the consent. The room sensor is not a contact sensor.

VEENA MISRA: I can maybe address the second part of the question you asked, about 24/7 monitoring and how it impacts the patient's consent. I think this is a big problem unless we show the value of 24/7 monitoring. For that we need to show how collecting that data will provide a much better result or a much better inference for the person; otherwise I think there will be major pushback on this much sensing. So we have to provide the value, and that's why we need to work with the clinicians to understand which use cases need 24/7 monitoring and which use cases are okay with much less.

RAJI BASKARAN: To me, even in those previous examples you provided, where the sensors are not touching the skin of the baby: if you are monitoring someone constantly, why wouldn't that require consent from the participant? I wouldn't like somebody measuring my biological signals without me knowing. Just because we can do it wirelessly doesn't mean that we should. That's the point I was trying to make. You didn't need to answer it specifically, but your making the point made me think that as a community we should be thinking about those questions.

PARTICIPANT: I have a question going back to the sensors. In the open-source software world we put code out there for other people to use, and we put models out there that people can use and validate. In the sensor development world, is there an analogue to GitHub, some hardware hub or sensor hub, where you can put blueprints, so that other people can build the same sensors and collect data to validate them? Is that something that is in the works or that you are thinking about, or would that be a useful thing to exist?

HONGGANG WANG: I think that is a great vision. I think we need that kind of infrastructure, with the sensors connected to the cloud, so the cloud has a database collecting all the data in real time, and the users or the researchers can even access those sensors through the cloud.

But I see a lot of challenging issues, and not just with the data itself. If sensors are deployed on a patient's body, the patient may decide on some kind of strategy to keep their data private, and then it is hard, for example, to do the research, to control that, or to have the sensors collect data remotely. I think there are some challenging issues there.

But I think that is a great vision for the sensor community: different sensors whose data we can collect, maybe building up (inaudible) structures so other researchers can use those data and devices. I think that would be good for the research community.

VEENA MISRA: I can add just a little bit more to that. We are already doing this type of collaboration with several people. We build platforms in our center, where some of the components are COTS, commercial off-the-shelf, and some are research components, and we can and do send these out for other people to either add their own sensors on top or collect data. So we are trying to get to the vision you just presented of an open-source hardware approach for sensors.
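As a rough illustration of the kind of shared infrastructure being described, and not any existing standard, a cloud-facing sensor record might carry modality, timing, units, and a consent scope so other researchers can reuse and validate the data (all field names are hypothetical):

```python
# Hypothetical sketch of a shared sensor-data record for a cloud hub.
# The schema is invented for illustration, not an existing standard.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SensorRecord:
    device_id: str      # which physical sensor produced the sample
    modality: str       # e.g., "heart_rate", "temperature", "audio"
    timestamp_s: float  # synchronized acquisition time, seconds since epoch
    value: float        # measured quantity, in the unit below
    unit: str           # e.g., "bpm", "degC"
    consent_scope: str  # what the participant agreed to share

record = SensorRecord(
    device_id="ring-007",
    modality="heart_rate",
    timestamp_s=time.time(),
    value=72.0,
    unit="bpm",
    consent_scope="research_only",
)

# Serialized form a cloud endpoint could ingest and researchers could query.
print(json.dumps(asdict(record)))
```

Carrying the consent scope inside each record is one way to address the privacy concern raised earlier: the participant's choices can travel with the data.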

SVETLANA TATIC-LUCIC: Any other questions from the floor? Do we have any virtual questions that have arrived?

KENT WARNER: I am Kent Warner, an Associate Professor of Neurology across the street at the Uniformed Services University, and I do a lot of wearable sleep research.

And I am really interested; I did not hear a whole lot about sleep. I heard some from Veena, and I was interested in what kinds of challenges in the sleep realm you are interested in tackling. It is really challenging for us, clinically and from a research standpoint, to understand the impact of our therapies, either in a clinical trial or as we take care of our patients.

And sleep seems to be one of the times when we most need objective data and monitoring, because we can't get it from the patients while they are asleep, and bringing them into the lab is not only challenging and expensive but maybe not so accurate. So I just wanted to get your thoughts on that topic.

VEENA MISRA: I think one challenge is the size of the sleep system. Typical sleep systems are a bit cumbersome to wear, which makes it hard to get accurate sleep studies while people are at home. That is one of the motivations for us to build the platform I showed, the band-aid-like, very thin form factor patch that allows us to get more accurate sleep data. So that's one big aspect of it.

The other is coupling it with sensors that are not available in typical sleep monitoring systems, whether that's activity or core body temperature, making a multimodal system, and also looking at sleep performance beyond just sleep apnea. Sleep apnea might be easier to detect, but there may be other, more in-depth signals available if you have a high-performing, highly accurate sleep monitoring system.

SVETLANA TATIC-LUCIC: Okay, that is it; I will close session three. The floor now goes to Professors Chiao and Rogers.

Day 1 Closing Comments

J-C CHIAO: Thank you very much. I know this was a long day, and we tried to pack in as many talks as possible, because it is a very rare occasion that we can bring so many top researchers together in one room. This was also our first in-person meeting, and it went off without any incidents. So I would like to thank you all very much.

Today we heard about power (inaudible) development for sensors, wearables, and implantables. Our presenters talked about devices, components, systems, firmware, and software, used for physiological, electrochemical, and biochemical signal sensing, and there is obviously a lot of innovation for our clinical colleagues to consider. Then we had a session discussing multi-sensor platforms for clinical applications, where five experts shared their real experience using sensors to assess children and the elderly, and now we also know what kinds of issues our sensor community has to overcome.

In our next session, we talked about power and energy issues. Dr. Smith mentioned that battery life is very important for sensing, and I think this is something our sensing community can address in the near future. Then we talked about connectivity, with body area networks providing ubiquitous networking to connect all the sensors.

And of course, during the conversation several questions kept popping up, such as privacy and consent issues. Tomorrow we have one session on remote sensing combined with AI and machine learning, which we also discussed today. Remote sensing senses the person passively, so you do not even know you are being monitored, and obviously the privacy issue will come up. After that we also have a session to discuss standards, safety, ethical issues, and regulatory issues. Maybe some of these issues we as engineers cannot resolve.

I think we have to rely on NIH's effort to push this up to Congress, to pass laws that allow everybody to use the data safely. So tomorrow we will have a session to discuss this issue. We will also have a session on computational models that can accelerate sensor development and work toward predictive models for these clinical issues. Then tomorrow we will end with a discussion of future directions.

This is a very rare occasion where we are all together, so we would like to provide as much feedback as possible to NIH, so that they have some foundation on which to build future programs. And since I have you all here, and this is a very rare opportunity, please don't leave too early; we are going to have a group photo after this. And now, Dr. Rogers?

JOHN ROGERS: I just want to add a few words of thanks to J-C for his excellent stewardship of the event today, and maybe even more importantly to Yvonne and Dana and all the support staff at NIH; it really ran smoothly today. But it wouldn't be a workshop without the speakers and the audience, and I think we heard a lot of really great presentations. Everybody stayed on time, which was amazing; I think we were within one minute of the schedule we had projected for the day. But thanks also to the audience; there was just tremendous engagement.

Great questions, great discussions, and I think that is the purpose of an event like this: to get that dialogue started and to bring these diverse pieces together. What really struck me was not only the quality of the presentations but also the diversity of the content; it was amazing.

I think we started out talking about tunneling transistors and ended up discussing scaled deployment of commercially available devices, with sort of everything in between in terms of device diversity, but also applications to neurons, individual cells, crayfish, mice, babies, and the elderly; we pretty much spanned the whole gamut there.

And in terms of sensing modalities, not only biophysical but also biochemical. I think multimodality is a theme that emerged from today's discussions, for deployability, accuracy, longitudinal capabilities, tracking variability, and sort of the context and the cost and so on.

And we thought about the full pipeline: not only academic research-level devices, some of which may turn out to be curiosities, but others of which may represent a starting point for devices that really can be deployed and used in a meaningful way with human subjects.

So this whole idea, which I will steal from Professor Lockhart, of going from research to reality, I think is a real opportunity now that there are so many different types of sensors available, offering the kind of accuracy, deployability, and scalability you need to really test hypotheses around brain function and its connections to behavior.

And then the question is, how do you do that? How do you go from research to reality? In one of the discussion sessions, something that really stuck out, to me anyway, was this idea of deploying and developing in parallel.

Because I think you learn a lot when these devices get out into the field, users are engaging with them, and data streams are coming back; you can start to understand the gaps and the opportunities, and you can feed that information back in a powerful way into the engineering science that further refines the devices and enables them to be used in a more practical way.

And so I think today really sets the stage for a lot of the conversations that will happen tomorrow around the question of how you extract decisions and insights from the data. I think it is going to be a very exciting day tomorrow, and I look forward to seeing you all again tomorrow morning. Thank you.

YVONNE BENNETT: Thank you, everyone, for your participation in the room and online. This ends our discussions for today. For those attending online, we will start again at 10:00 a.m. Eastern.