How is GEMINI using AI to turn routine hospital records into tools for national healthcare reform? In this episode we discuss how AI and health data can be harnessed to improve healthcare and how we ensure these tools are used responsibly.
Amol Verma is a physician and scientist in General Internal Medicine at St. Michael's Hospital and the Temerty Professor of AI Research and Education in Medicine at the University of Toronto. He is a health services researcher, studying and improving hospital care using electronic clinical data. Dr. Verma co-founded and co-leads GEMINI, Canada's largest hospital clinical data research network, which is collecting data from >35 hospitals in Ontario. He also co-founded and co-leads VITAL, a multi-provincial clinical data platform. Dr. Verma completed medical training at the University of Toronto, a master's degree at the University of Oxford as a Rhodes Scholar, and research fellowships through the Royal College of Physicians and Surgeons of Canada, the Canadian Frailty Network, and AMS Healthcare. He served on the Council of Canadian Academies Expert Panel on Health Data Sharing, is a Provincial Clinical Lead for Quality Improvement in General Internal Medicine with Ontario Health, and is the Chair of the Researcher Council and a Board Member of the Digital Research Alliance of Canada and AMS Healthcare. He received the 2022 Royal College of Physicians and Surgeons of Canada Early Career Leadership Award, the 2022 Canadian Institutes of Health Research's early career Trailblazer Award in Population and Public Health Research, and the 2023 Canadian Society of Internal Medicine's New Investigator Award.
Nicole Yada is the Director of the VITAL Platform at GEMINI. Prior to joining the GEMINI team, Nicole was the inaugural Program Director for the Accelerating Clinical Trials Consortium and oversaw business development for ICES. She holds a master's degree in health informatics from McMaster University and is completing her PhD in Health Services Research at the University of Toronto. Ms. Yada trained as a graphic designer in Tokyo, Japan and has a background in marketing and research journalism.
Research you heard about
Learn more about GEMINI
ICES | New data partnership to expand insights on hospital care in Ontario
Misty Pratt
Artificial intelligence is reshaping every corner of our lives, but in healthcare, the stakes couldn't be higher. How can AI and big data help the healthcare system meet growing demand and deliver better quality care, and how do we make sure these powerful tools are used responsibly? I'm your host, Misty Pratt, and this is In Our VoICES, the podcast that brings you the health data without the drama. Today, we're joined by Dr. Amol Verma and Nicole Yada from GEMINI, Canada's largest hospital data sharing network. They're here to tell us how GEMINI is unlocking insights from millions of hospital records, what a new partnership with ICES will mean, and how AI is paving the way for smarter, more equitable health care. Amol and Nicole, welcome to In Our VoICES.
Amol Verma
Thanks so much.
Nicole Yada
Great. Thanks so much.
Misty Pratt
Thanks so much for being here. So, Amol, for our listeners who may not be familiar, can you tell us what GEMINI is?
Amol Verma
GEMINI is a hospital data sharing network started here in Ontario. And whenever we're talking about artificial intelligence these days, it's important to include a disclaimer up front that the GEMINI we are talking about today has nothing to do with Google's Gemini AI tool. We got to the branding first, we like to say. So, GEMINI was started in about 2015 in Ontario and is a collaboration of hospitals all across the province who came together to share their electronic health records data, initially for the purposes of better understanding the quality of care being delivered in hospital units. Over the years, we've grown to about 35 hospitals, including all regions of the province, and the hospitals participating in GEMINI care for about 60 to 70% of patients in the province, so it has very good coverage. What we do is work with hospitals to extract data from their electronic health record systems, as well as some of their other administrative data. The hospitals share that data with our group, and we securely protect, de-identify, and standardize the data, turning it into something that's ready for research and analytics and for identifying opportunities to improve care. That's really the work we do today. We then make that data available to a wide variety of clinicians and researchers. So, for example, clinicians can access data that provides them individualized feedback about the care they are providing in hospital, giving them a point of reference with their colleagues, and opportunities for improvement can be highlighted. And then also scientists and their students can access the data.
So you know, more than 150 projects have been completed on the platform, with about 110 active projects underway at the moment, and really a wide range of uses, but really mostly oriented around finding ways that hospital data can help inform how healthcare can be improved, whether that be a better understanding of an illness, identifying a variation in the way healthcare is delivered, or developing a new tool or an artificial intelligence algorithm that could be used to improve care. And so that's what GEMINI is today.
Misty Pratt
And what makes GEMINI unique, both in Canada, here in Ontario, but then also worldwide? So, what's special about it?
Amol Verma
I think the major advantage of GEMINI, I would say, is actually the strength of the Canadian healthcare system, and some of the strengths that we have help us be globally unique. The first thing is, obviously, with a single public insurer for hospital services, everyone is included. If we think about other large healthcare data sets, particularly hospital data sets, globally, they're often from American networks, where the patients are largely those who are able to have insurance. And so there is some inequity, perhaps, in the sample that exists in those data sets. A second is that our data is from our population, which is enormously diverse; Ontario has one of the most diverse populations of any high-income jurisdiction in the world. So we have a lot of variation, a lot of diversity in the data set, which produces more generalizable science. Being able to collect data at a large scale, GEMINI currently holds about two and a half million hospital admissions' worth of data, with that population being highly diverse and highly inclusive, is what makes it a very unique data set globally. I think the real strength of GEMINI within our Ontario context and Canada's context is that it's local data. It's data about our population, about the care that's being delivered locally. And we take a lot of inspiration from ICES, which, for decades, has been providing information back to the healthcare system to identify gaps and opportunities to improve care. Really, GEMINI is trying to do that in a more detailed way for hospital care.
The data that we collect is very complementary to the data collected at ICES, and it's very focused on hospital clinical data, so things like blood pressure readings or medication prescriptions or medical imaging reports, all that detailed information that's generated every time a healthcare interaction occurs, which normally is not shared and made available for research. And so I think we're really excited about the granularity, the depth, the detail of that data, and what exciting new research potential emerges when you have that kind of detailed data at a very large scale, particularly related to artificial intelligence, and also related to other kinds of applications, like clinical trials and other types of research that really can help us improve the way the healthcare system operates.
Misty Pratt
And on that note, my next question is that, you know, my knowledge of AI extends to using ChatGPT to generate ideas for myself when I work, but I think a lot of people listening may not even know how AI is currently being used in our healthcare system. Can you give us one or more examples of how GEMINI uses AI then to help improve hospital care?
Amol Verma
I think even one or two years ago, you might not even have mentioned ChatGPT, right? Like many of us didn't even know what that was several years ago. So, this is a brand-new set of technologies which are evolving quite rapidly, I would say, even up until today. The vast majority of healthcare in Canada is not affected by artificial intelligence, right? Very little AI is actually in clinical care at the moment. The technology that has gained the most early traction is something called artificial intelligence scribe technology, which people may have encountered when they're having an interaction with their family doctor. My own children's family doctor uses a technology like this, which is a recording device on a computer or a mobile phone that listens to our healthcare interaction and generates a clinical note for the doctor, so that the doctor or the nurse or whoever's providing the care doesn't have to be tethered to their laptop typing while you're having the interaction with them. And that technology is increasingly being disseminated. The early evaluations of that AI scribe technology have shown that it does lead to some time savings for physicians, maybe on the order of one to three hours per week. For a family doctor, usually it's the time that they're spending writing notes or doing some administrative tasks. So, it seems to be providing some meaningful benefit. And just as an individual who's received care from someone using a tool like this, it's really nice to be able to look the doctor in the face and have a face-to-face interaction a little bit more easily than before. The work that GEMINI is doing in this area, we have a few different projects, and I'll just give you two examples. Our work really orients around the area of what's called predictive analytics, which is using artificial intelligence to make predictions about things that might happen in the future.
And the approach that we are specifically taking is trying to predict complications or bad outcomes and then intervene early to prevent those bad outcomes from happening. So, I'll give you one example. Our team worked closely with a team of vascular surgeons, including Dr. Charles de Mestral, who's also an ICES scientist, as well as a group from the Vector Institute and Diabetes Action Canada, to put together a project that can predict an individual patient's risk of having a diabetes-related leg amputation or serious infection requiring hospitalization. When you take a look at the patients who have a leg amputation, if you look backwards in time, many of them, 80 to 90%, had a hospital encounter for some reason unrelated to their leg, suggesting that there may have been an opportunity on that earlier hospital visit to identify them as someone who might be at high risk and try to get them connected with foot screening. And so what the team did was put together a prediction tool so that when patients with diabetes are discharged from hospital for any reason unrelated to their feet, if they are predicted to be at high risk of having a foot complication, they can be sent to a foot care clinic. And we've been using that tool at St. Michael's Hospital now for almost eight or nine months, identifying patients in whom we can try to prevent these downstream complications. So that's one example of an AI tool. And the second example is another complication called delirium. One of the most common causes of harm or adverse events in hospitals is a state of acute confusion called delirium. 20 to 30% of adults who are admitted to hospital for a medical or surgical problem also become confused because of the severity of their illness. It's this very mysterious thing. They can become disoriented and confused. They actually have a much greater risk of dying on that hospital stay, a twofold greater risk.
They stay in hospital eight days longer, on average, than people who don't develop this complication. And it's much more costly to care for them, and it's very distressing. It can be very difficult for the nursing staff to care for them, because when someone becomes disoriented, they can become violent and hit out, and they can cause a lot of injuries. What's very interesting about delirium is that it's highly preventable. About 40% of delirium cases can be prevented if you can help people stay oriented to place and time, if you can help people sleep, if you can help them stay nourished, eat well, drink well, and get them moving, help them mobilize. One of the problems is hospitals seem to do the exact opposite of most of those things, right? And we don't have sufficient staff, our nurses are so overburdened, our other healthcare staff are so overburdened, to be able to actually deliver this kind of care. And so it creates a really exciting opportunity for artificial intelligence. What we've developed is an algorithm that can predict, at the time someone's admitted to hospital, their risk of developing delirium, so we can really focus these prevention efforts on the patients that are at high risk. And so we're now running a study, launching very soon and funded by the provincial government, that will deploy delirium AI prediction and prevention efforts across about 10 to 13 hospitals in the province, one of the largest implementations and evaluations of an artificial intelligence prediction tool in healthcare in Canada to date.
Misty Pratt
You mentioned the limitations that the hospital staff already have right now. So, with this algorithm, is the thought that, the hope is you can say, "hey, we can predict this, but you need to invest in the resources to more staffing for nurses you know, and other support staff that may be there to be able to actually then implement the changes that are needed"?
Amol Verma
An artificial intelligence tool is often only a small part of a solution when it comes to healthcare, because healthcare is inherently a human care delivery process, and so what we're actually doing is figuring out how we can more efficiently use our team. The thing about the AI in this case is that it can identify the patients that are at the highest risk, so that we can act with the same resources on our units. We don't really have more resources at this point; we can advocate, and we continue to advocate for more resources, but we also need to find ways to use the existing resources better, more intelligently, more efficiently. A big part of our intervention, our study, is designing the teamwork and how the team is going to intervene with these AI predictions, and then actually upskilling team members, teaching them how to do the right set of interventions. And so, as with a lot of AI technologies, it's as much about the human response to the AI tool, and how you cultivate that response and carefully adapt the way the tool is used, that is likely to lead to success, as it is about the AI tool itself.
Misty Pratt
So, Nicole, this leads to the next point. I'd love to bring you into the conversation about trust and transparency, and even equity here too, in how we're approaching AI. So how are you approaching AI in a way that supports better patient care, while addressing all of these concerns that people might have about a robot doing something for them?
Nicole Yada
I'll approach this from a few different angles. The first one is more from a policy standpoint, so this isn't about us directly here at GEMINI, but I think there is such increasing government attention on the importance of AI, both from an economic development perspective and in terms of the increasing need for regulation and standards in the development of both commercial and public sector AI applications. So I think that is a really important foundational element in how any research group is going to approach AI going forward; it sets the tone that this is something the government recognizes as necessary, and that there are then more teeth behind actually implementing any of that regulation. So, I just want to start with that at a higher level. I think as a research group, there are two ways that we can approach this. One, there is a duty of researchers, I think, in the public health care system, to understand biases that exist in health care delivery, and associated with that, of course, is the data. So, we recognize, of course, that there are underrepresented populations and that they don't have equal access to health care. And so, I think that we have an opportunity here, and I would argue a responsibility as researchers, to ensure that the research that we're doing doesn't perpetuate any of those harms. That applies to any broad observational research study that we're doing, but importantly, when it comes to AI, it means that the tools and models we're building are not perpetuating any of those biases, and that we really understand the data behind them. And then, from an infrastructure standpoint, I think that how we actually design data platforms and data infrastructure is really critical for building trust and transparency.
So if we think about things like clinical trials, which are required to have audit logs and be able to demonstrate how the results were arrived at, I think that's really important for how we build AI systems, so that the algorithms we're developing and any other AI tools can be replicated, and done so in a transparent way. So again, really at the infrastructure level, building in those functions like auditing to enable that, and to see behind the black box of how these things are developed.
Misty Pratt
Yeah, my understanding is that the data being fed into the AI that's being used for this research could also be biased, because even though, you know, in Canada we have this universal health care system, there are populations that are, as you said, not well served, that are facing barriers. So is the data itself that's feeding the AI potentially biased, and is that something that you're also looking at?
Nicole Yada
I think yes, when it comes to AI, but I would also argue that that is just for research in general, that there are multiple layers of biases built in by the time a patient is actually making their way into the hospital, and so needing to understand or acknowledge that context that the data is part of. I think that as AI is developed, that that's where there is a chance to both understand what those biases are in the existing data that AI is trained on. But then as we develop models, how can we kind of course correct along the way, so that we aren't perpetuating what we have seen historically.
Amol Verma
Yeah, I'd like to take the optimist view on this, which I think is just true, which is that the rising focus on AI and its emergence into the public consciousness has led us to ask questions about data that previously weren't asked, and has led us to identify latent biases in the data. As an example, a couple of years ago, a few large studies in the United States found that the devices that we're using every day for basic measurements in healthcare are biased. Specifically, infrared-based devices are used to, for example, measure your blood oxygen level, and we use that information to guide all sorts of treatment decisions, like whether you need supplemental oxygen, when taking care of patients in hospital. It turns out that those devices systematically work less well in people with dark skin, so people of Black and other racial backgrounds, and they are systematically underestimating how sick someone is or how low their oxygen level is. And that kind of bias is uncovered because people are doing a lot of analytics and a lot of prediction with these measurements. If we are thoughtful and purposeful about this cycle of data-based technology development, like artificial intelligence and other algorithms, finding problems and then fixing the technology, I think it could lead to a much more fair and ethical healthcare system, but it will take us paying very close attention and continuing to work through what are very challenging sets of problems in this area.
Misty Pratt
I was going to ask you, like, do you think we're being thoughtful? Not you specifically, but as a culture in North America, are we being thoughtful about AI?
Amol Verma
I think yes. I mean, of course, some people are not being thoughtful. Of course, there are a lot of moneyed interests that drive conversations in certain directions as well, and it's a large industry, and that's important. But it's a, you know, transformative technology. And so I'm not going to say that everything is perfect with the way we are thinking about it, but I do think we're elevating the public discourse. We can have a conversation about artificial intelligence today that we could not have had two or three years ago, because it is part of the public dialogue, right? It's part of dinner table conversation in a way that it really wasn't. So, yeah, I think it requires very purposeful and deep work by the people that are building and testing these technologies, by scientists and companies and by policy makers, etc., but I think the general public's rising knowledge of this space is helping push in the right direction.
Misty Pratt
Many people worry that AI could replace human decision making in healthcare. How do you see AI being used as a tool to support rather than replace the judgment of doctors and decision makers?
Amol Verma
I think it probably will replace some kinds of human decision making, but maybe that's okay, and we should decide where and when. Truthfully, as a clinician, I'm dependent on many different technologies today that we didn't have 20 years ago, and in the aggregate, that's okay, because we're all better at our jobs, and people get better care as a consequence. So, I think that we just have to make these decisions knowing full well that AI technology is a little bit different than other technologies, because other technologies largely cannot operate without humans. And is there some future state where some of these AI tools become so good at decision making that they could be autonomous? I think that's a possibility that's on the table. I don't know if we'll get there or when we'll get there, but it's something that we do have to consider. But I think what we can do as people within the healthcare system right now is just focus on how we harness these technologies to be as beneficial as possible.
Misty Pratt
And how to get people on board, patients on board, and trusting that we're doing the right thing.
Amol Verma
Yeah, how do we do the work that makes these technologies worthy of being trusted? That's the way I would say it.
Misty Pratt
Very good way of putting it.
Nicole Yada
On replacing the human element, or the opportunities to actually not replace the human element: I think Amol was talking earlier about the AI scribe technology, and how that actually allows your physician to be a lot more present in the conversation. I think that is a way that it can actually really support that more human-to-human interaction, again, not about the decision making, but more about the relationship between healthcare provider and patient.
Misty Pratt
Yeah, because then you're getting the active listening; somebody's actually paying more attention than just trying to scribble things down. I know I certainly do better when I'm not having to take notes. I retain more and I understand it better. And so going back to that first layer Nicole talked about, of protection at the policy level, Canada does have a new Ministry of AI and Digital Innovation. So how did you both feel when that was announced? Were you excited?
Amol Verma
I was very excited. I think our government, our federal government, has been forward thinking on AI, actually, going back to the establishment of a pan-Canadian artificial intelligence strategy, one of the first countries in the world, if not the first country in the world, to do so. So, I think it's great to see this consolidated in a ministry. I can only imagine the challenges that ministry is facing as they try to wrap their arms around what it means to be the Minister of AI. It's like being the minister of the internet, what does that mean? But I'm very excited to see that there's a lot of thought and care, and resources, being placed around how Canada hopes to harness and work with this type of technology going forward.
Nicole Yada
And I will just echo Amol's excitement. I think there are two important things in announcing that there is now a Minister of AI. One is the recognition of the policy importance of it, in what I was mentioning before, in terms of having regulatory standards and structures in place, and then, at the same time, twinning that with the economic development component, and hopefully having those two things work in lockstep. There is obviously vast economic opportunity, and as Amol was saying, you know, private interests and monetization opportunities, but if we can actually have that in line with our federal government, and particularly this ministry, being able to set standards and expectations, I think that's going to be a really powerful force going forward.
Misty Pratt
You touch on that aspect of our data being very valuable. There are economic interests, especially private interests. So, what are the challenges there in terms of protecting this very valuable resource that we do have here in Canada?
Amol Verma
I think this connects back to your previous question about trust too. Often I'll get asked the question, why do we not have enough trust in AI? Or, you know, what do you say to someone who says they don't trust AI? And my answer is, good, don't trust AI. We shouldn't ask people to blindly trust something, right? We don't ask people to trust other things in healthcare unless they're proven to be safe and effective according to a rigorous set of standards. And so similarly, I think we should apply the same standard to artificial intelligence-based technologies. I think it's incumbent upon the people that are working in this space to meet the bar of earning people's trust, and that trust really orients around a few different things. It orients around protecting the privacy of an individual's health information, and that has to be built in from the very outset. There are things about state-of-the-art artificial intelligence technologies that are very challenging for privacy. To give a very simple example, take an image of the brain, like a CT scan of the brain, right? That's a bunch of horizontal x-rays, basically, of the brain, and if I were to just show you a CT scan, you would not be able to identify anyone from that information. But it's actually quite straightforward for a computer now to reconstruct, from all of those cuts, a perfect three-dimensional picture of someone's face. And so, what does that mean for privacy? That's just a very simple example of going from what we would historically have considered to be a very private, not identifiable piece of information, a single slice of a CT scan, to something from which I can now say exactly what you look like, right? So, you know, we have to make sure that the tools and technologies we use in this space are very privacy protecting.
We have to make sure that they are fair from a bias perspective, like you talked about. That's the second thing that people really care about with trust. The third is safety and effectiveness. So, we have to make sure we're testing these things to work really well. And then I think the fourth, the point you're raising now, is transparency: how is this information, how is my personal information, my data, being used? Who is benefiting from it? Is someone making money off of it? If they're making money off of it, what are the constraints around that? Is there going to be some public good from it? All of that needs to be transparent and communicated transparently. And as a custodian of data like GEMINI, where we hold a lot of data, we take that responsibility very seriously. So today, companies and other groups can't use GEMINI's data for profit incentives, but we recognize that if we really want to take advantage of AI technologies and where this space is going, and we want to contribute back to a healthy Canadian economy and have technologies that are good for Canadians, we need to be able to partner with companies and with industry. So what are the right mechanisms of governance, oversight, trust, and transparency that can be put around that so that people feel comfortable with their data being used in that way? Making sure that all of those steps are addressed really carefully is really the work of our team, our platform.
Nicole Yada
I would add, and this is something that I know ICES has a wealth of expertise in, that we actually probably can't determine or design what trust and transparency look like in AI without doing appropriate public engagement. And I do think that is going to be really necessary for any policies that we develop going forward, so that we're not doing it top down, but making sure that we fulsomely understand how the public feels about it and balancing the privacy protection with the public benefit opportunities of artificial intelligence. I think one of the questions that you had asked at the outset was around how AI is actively being used in healthcare, and whether it can actually simplify some of the patient interactions that are taking place. I think there are such opportunities there. And so we really need to make sure that, again, we're being risk proportionate in how we balance the depth of the privacy protection, while also respecting what the public is hoping to get back from it.
Misty Pratt
Are there specific ways now that you're approaching how to bring the public into that conversation and hear what they have to say?
Amol Verma
We take a lot of different approaches at different levels of how we're doing our work. So, you know, we have patient partners on our steering committees, and that kind of thing, so that we have that public voice. We also try to communicate more publicly through our website or through forums such as this, so people can hear about the work that's happening, and then individual research projects will have patient and caregiver and public community partners as well. I think we try to do it at multiple levels. And I think another area we're really excited about relates to a big use of the data that we collect; as I mentioned, the raison d'être of the whole organization was to use the data to find ways that hospital care could be improved. One of our GEMINI investigators, who's also an ICES scientist, Dr. Lauren Lapointe-Shaw, is leading a national study where we're actually asking patients and caregivers what their priorities would be for improving hospital care, so that we can then try to measure and improve those priorities based on, you know, this large investigation, a priority setting exercise, basically. And so, I think our approach is to try to do this at the various levels we are working at. But, as I think Nicole said well, it's a job that's never done, right? You're always trying to do more and do better. So yeah, it's certainly a work in progress.
Misty Pratt
Partnerships like the one between GEMINI and ICES can really amplify the impact of health data. So, can you share a practical example of how combining what GEMINI has, which is the hospital data, with ICES's kind of population-level data could change the way we might deliver care?
Amol Verma
We're extremely excited about the partnership, and really, I think it is a nice example of complementary strengths. So GEMINI has very deep data about what happens in hospitals, and ICES has the longitudinal data, so that we can better understand what happens before and after someone has a hospital visit. That's so important to understand sort of trajectories of health and illness over time and find ways to be more preventive in the kinds of interventions we can do and help keep people healthy and out of hospital, right? And so, as a concrete example, I'll give you two. One is, we were talking about delirium earlier. Well, one of the things that's very unknown is, what is the burden of delirium on a healthcare system? How many people actually have delirium? The statistics that I quoted are from older studies and not really well established. And the critical gap there is that routinely collected healthcare data, like health administrative data, which is what we would normally use at ICES and other places, don't capture delirium well. They only capture about 25% of cases, because it could be documented with all sorts of different words like confusion or other things. It's just not well documented.
Misty Pratt
It's like the coding side of things, right?
Amol Verma
Correct.
Misty Pratt
A provider is trying to code something, and so they may do it one way and not the other.
Amol Verma
Yeah, exactly. And so, for whatever reason, delirium does not get captured reliably in our routine data. And so, we created an artificial intelligence algorithm that combs through the medical record and can identify the electronic signature of someone who had delirium, because they might have received the same kinds of tests or treatments, and it creates a fingerprint, almost, in the electronic medical record. And so, we developed an AI tool that can, with about 90% accuracy, determine whether or not a patient did or did not have delirium during a hospital stay. So that's the first time we can now measure this at scale within the GEMINI data. So now what we can do is take that GEMINI data, link it at ICES and, for the first time, provide an estimate of what happens to people who have delirium in the year after they're hospitalized, at large scale, right? What are the costs to our healthcare system? How many people end up in nursing homes, or how many people die, and what kind of care are they receiving? What kind of home care do they receive? So, we're creating the first-ever population-scale estimate of the burden of delirium on a healthcare system. And this is one of ICES's applied health research question projects, because our provincial government, specifically Ontario Health, is particularly interested in understanding this. They have a province-wide campaign that is trying to improve the prevention of delirium in hospitals, called Delirium Aware, Safer Healthcare, the DASH campaign. And so, this information is really helpful to them as they start planning and understanding the impact of that kind of intervention. So that would not be possible without connecting the deep hospital data from GEMINI, an AI tool, and then the longitudinal health data at ICES. It's really like a perfect marriage of complementary strengths to produce policy-relevant data and analytics for Ontario and, frankly, for the world.
Misty Pratt
Because if we know that information, what happens in the year after and how much it's costing, then if we intervene and reduce the number of people with delirium, we can see the kind of benefit we get afterward, not only in cost, but for people's health and well-being over the long term too.
Amol Verma
Yeah, exactly. And if you think about it from a health system level, it can allow us to understand a few really important things. First, it can help us encourage healthcare organizations like hospitals to actually participate in these prevention activities by saying, "Actually, look how big of a problem this is at your hospital specifically, or across our system". The second is, it can allow government to more intelligently devote resources to this problem, or not, right? Like if interventions are proven to be cost-saving in a way that they previously didn't have any window into, then they can actually make those investments in a way that, to your point, helps people's experience of being in hospital, and also helps the healthcare system. So that kind of analytics is so essential for the way we plan and administer our healthcare system, and I think it's exciting to try to use the strengths of both GEMINI and ICES and the latest technologies to do this in a more modern and more impactful way.
Nicole Yada
I think back to the topic of trust and transparency and really making sure that what we are doing with AI, before it's implemented in the healthcare system, is appropriately validated. I think there are really exciting opportunities for validation of any sort of predictive tools that are developed here on the GEMINI team with what is the gold standard at ICES, in terms of that data about what is happening outside of hospital, both before and after, and making sure that we actually have the opportunity to compare what we think is going to happen, and what the AI algorithms tell us is going to happen, with what is actually seen in the data. And so, I think that that's another really exciting way that we can be building on this partnership and really honing in on the trust and transparency angle for future AI work.
Misty Pratt
And if you see that mismatch, where the AI hasn't necessarily predicted something correctly, potentially that could help improve and make the technology better down the road.
Nicole Yada
Yeah, absolutely.
Misty Pratt
So, for both of you, what are you most excited about with AI, and with hospital data in general, in the next five years? What's coming up?
Nicole Yada
I'll start, just from a few different angles. I think one is that there is such an overdue and important recognition that healthcare includes things that happen outside of when you are actually interacting with a healthcare professional, whether that's economic things, environmental things, etc. And I think that AI has a really exciting opportunity to consider some of those other factors that we may not have traditionally considered as part of healthcare, as part of a bit of a more holistic view of the individual. That's one thing. I think another thing is the opportunity to really reduce waste in research. I come from more of a background in clinical trials, and there are so many clinical trials that take place that are not sufficiently powered, and that is not really the best use of research resources. And there are applications of artificial intelligence that can, more at scale, do things like identify eligibility of potential participants for clinical trials. And ultimately, what that means is not only reducing research waste there, but meaning that therapeutics that are going to be effective are actually able to be implemented more readily in the population. So, I think those are probably two of the things that I am most excited about with regards to AI and healthcare.
Amol Verma
I do think that artificial intelligence, while it appears to potentially be a very disruptive force in society in general, in medicine has the largest potential for having mostly good effects and less of the downsides, in that I think there's a lot of unmet need in healthcare, like a lot of people who need access to care that currently are not receiving access to care. And so, I'm really excited about the potential transformations of our ability to make healthcare more accessible, to be able to make it safer, and to be able to make it a better experience for both clinicians and for patients. I think what I'm most urgently compelled by is that we don't, today, have a good approach. We don't have the data, the computing infrastructure, or the networks of people to be able to innovate, develop and rigorously test AI technologies in healthcare. And so, what that means is that you're getting a lot of hype, to be frank, about AI technologies that have been, quote unquote, proven in perhaps not the most rigorous kinds of evaluation. And so there's a lot of motivation and people pushing for AI technologies to be used where I'm not so sure that the evidence base would support it. And so I think where we need urgent development is to build data infrastructure, which is what we're trying to do with GEMINI. GEMINI has recently received funding from the federal government to start partnering with provinces across the country and build a more multi-provincial data infrastructure called VITAL that could build out that infrastructure so that it's easier for AI technologies to be tested and evaluated and developed rigorously, so that we can run more randomized trials of AI tools to prove that they're safe and effective, and so that we can monitor these tools to make sure that they're continuing to work well over time. And I think that that's really where we should be racing.
We should be racing to develop that enabling infrastructure so that we can have a lot of innovation, but that it's done safely, responsibly, effectively and efficiently. And I think in particular, this will help us so that in five years from now, we will land at a place where we have a number of AI tools that can make healthcare more compassionate, safer, more efficient, and we will have avoided the potential misstep of spending buckets of money on technologies that were not beneficial and that were not helpful. And I'm hoping that the work that we're doing, in partnership with ICES and with other colleagues across the country, can help provide some enabling infrastructure so that we do end up at a future where we have AI enabled healthcare that's more compassionate and more efficient and safer.
Misty Pratt
Well, I want to thank you both for being here. This was a really important discussion, and I think very timely, and I know it's been helpful for me to learn more about AI's application in healthcare, and I'm sure our listeners feel the same. So, thank you.
Amol Verma
Thanks so much for having us.
Nicole Yada
Absolutely.
Misty Pratt
Thanks for joining me for this episode of In Our VoICES. Check out the show notes for links to research and any other information that we've referenced in this episode. A reminder that the opinions expressed in this podcast are not necessarily those of ICES. Please be sure to follow and rate us on your favorite podcast app. If you have feedback or questions about anything you've heard on In Our VoICES, please email us at communications@ices.on.ca, and we will get back to you. All of us at ICES wish you strong data and good health.