Video: AI Meets Reality: Data, Burnout, and the New Barriers to Care | Duration: 3018s | Summary: AI Meets Reality: Data, Burnout, and the New Barriers to Care | Chapters: Welcome and Introductions (0s), AI in Healthcare (194s), AI's Systemic Impact (551s), AI in Healthcare (932s), Data Privacy Concerns (1621s), Healthcare Workforce Challenges (2008s), AI's Impact Assessment (2125s), AI in Healthcare (2210s), Patient-Clinician AI Alignment (2468s), Conclusion and Wrap-Up (2939s)
Transcript for "AI Meets Reality: Data, Burnout, and the New Barriers to Care": Hi, everybody. Welcome to this edition of a series that I'm just deciding right now to call Qwoted Live. In the future, we will name it as such, but the idea is to take the experts from the Qwoted platform and actually interview them on particular subjects every week for our media users and non-users. So we are excited to welcome the media who are listening today. We're joined by three fantastic medical and economic experts, and we're going to be talking about the impact of AI on medicine and healthcare. I'm Dan Simon. I'm the CEO and co-founder of Qwoted, and these three experts are all from our bank of hundreds of thousands of experts inside the Qwoted platform. We'll be taking your questions throughout — feel free to use the chat function to ask your questions. I'm going to kick off by asking everyone to do a brief introduction of themselves, and then we will move to some questions. Whoever wants to kick off on this wonderful panel, please do just jump right in. Sure, I'm happy to get started. Hello, everyone. My name is Mika Newton. I'm the CEO of a health tech AI-powered company called xCures. Very delighted to be here. Adjoa, why don't you go next? Good morning, everyone. My name is Dr. Adjoa Boateng Evans. I'm an ICU physician and anesthesiologist at Duke University Hospital. In addition to my direct bedside care, I'm a professor in the School of Medicine, and I largely teach on structural determinants of health. My scholarship and research, however, really lie in the medical humanities, looking at the intersection of art and medicine and asking the question: how can we utilize art to really humanize health care and humanize medicine? To that end, I've started to hold biannual symposia where I walk health care workers through the arts to answer these questions on meaning and purpose in medicine. Hi, I'm
Martin Gaynor. Glad to be with everyone. I'm an economist and professor emeritus at Carnegie Mellon University in Pittsburgh, Pennsylvania. I study the economics of organizations and markets, with the policy applications being antitrust enforcement and regulation, with a particular emphasis on health care. I've been studying the health care industry for, oh gee, almost fifty years. In terms of policy, I've had the opportunity to serve the country as the chief economist at the US Federal Trade Commission about eleven or twelve years ago, and more recently in the antitrust division at the US Department of Justice just a couple of years ago, so I've gotten to implement policy and look at these particular issues. Glad to be here. Fabulous. Thank you, Martin. Thank you, Adjoa. Thank you, Mika. Fabulous panelists today, and I love the fact that you've got these levels of abstraction: someone like you, Dr. Adjoa, who's working in the ICU, working on the front lines; people like you, Mika, who are in the technology, building it; and someone like you, Martin, who's looking at the system as a whole. I'll kick off with a question which is quite a banal question, really: we've been talking about AI and technology's application in health care for a very long time. As long as we've been talking about the possibility of AI, health care is one of those places that people immediately gravitate to when thinking about how it could have an impact. So maybe I'll kick off with you, Adjoa. What are the implications today? Where is, to paraphrase Martin, the rubber hitting the road? And feel free, Martin and Mika, to add on to this as well. Where are we today? Where is AI having an application in the real world?
Is it still relatively abstract, or are you seeing it show up in clinics in the real world? Yeah. So in my experience, AI is here. I would say it falls into two main buckets: one that can really guide a bit of how we care for patients, and then tools, more like large language models, that help us largely with documentation and some of the more rote parts of our job. As an anesthesiologist, the most clinically mature application is intraoperative hemodynamic management. In plain language: most patients, when they're getting ready to have surgery and undergo anesthesia, say their biggest fear is, "Dr. Boateng Evans, please, I do not want to wake up during surgery." And just as a factoid, you're more likely to get into a car accident than to wake up under anesthesia. It's extraordinarily rare. However, despite that, we are looking to be much more specific about the way that we titrate medications for patients intraoperatively. And so we have started to use this technology, which is essentially a processed EEG. Maybe some of you have had an EKG, where they put stickers over your chest and look at the electricity through your heart. This technology puts stickers over your forehead and looks at the electricity in your brain, because every one of us on this panel requires a different amount of anesthesia to remain quote-unquote asleep under surgery. And so in real time we get iterative feedback to see exactly how well the anesthetic is keeping that patient sedated and unconscious. Are we giving too much, and risking them being confused and having postoperative neuro decline? Or are we giving too little for their size, the medications that they take, etcetera? These sort of closed-loop systems are really, really helpful because they integrate these constant physiologic inputs into real-time feedback for us.
And then the second, in the other hat that I wear working in the intensive care unit, are similar technologies that integrate all of these data about blood pressure and heart rate and oxygen levels to tell us exactly, you know, how well some of the patients are responding to our interventions. Outside of direct patient care, maybe some of you are familiar with AI scribe technologies. Historically, you would go to see your primary care physician or take your child to the pediatrician, and there's a little person in the background taking notes. But now, largely, we have models that listen to the conversation between patient and provider and then produce documentation for the medical record, which, in theory, is supposed to cut down the burden of, you know, documentation after hours. So those are the two places where we largely have seen AI being used. I think, Dr. Evans, what you're talking about here, the administrative processes, is really an area that has a huge amount of potential and is actually part of maybe the backbone that enables the clinical decision support types of applications that you were talking about in the anesthesia space and in other spaces like the ED. I think scribes are a great example, as you pointed out, of technology that's actually the success story for administrative burden reduction in the implementation of AI across health care today. We use the term AI, but it's really the increase in compute that we've all experienced that is actually enabling the next generation of automation. So when we talk about AI, I think about automation of processes all across the system. And I often think about the issue that we have in health care, which is lack of resources. We don't have enough doctors, hospitals, nurses, or systems to be able to actually provide care meaningfully for the entire country today.
And we have a large aging population, so I expect we're actually going to see a lot more stress across the system. The only solution I can imagine, actually, is continued automation, particularly applied, as you put it, to rote processes. I just don't think we have another option. At my company, we've been working on, you know, this scribing, this listening. We've been working on reading medical records. Why do we have people read medical records in one place and type them into something else? That doesn't seem like a very good use of human capital. But if you go to any clinical facility anywhere, you'll see people reading medical records or reading documents and typing them into something else. And so I think the technology has finally gotten to the point where we can automate some of those human-inferred or complex processes; we have technology that's now capable of doing that. We just need to be really careful that we implement it safely and in the right places. Martin, I wanna get you — yeah, go ahead. So, in general, we have here a new technology of the kind that economists call a general purpose technology, and AI is certainly a general purpose technology. Some other examples are information technology prior to AI — Mika just referred to that, a couple decades ago — or, going farther back, electric power supplanting steam and water power. What we see in general, and we have a long history of these kinds of things, is that applications first occur in relatively straightforward places that are not exactly isolated, but can be kind of hived off. And what can take longer, and sometimes actually a very long time indeed, is organization-wide and system-wide application to the point where it really makes a difference. I call this the slow pace of fast technological change. And the reason for that is that it's not just the technology; there are human factors and organizational factors that are absolutely critical. And so, yeah. I mean, look.
There are places where it's being adopted. And if it can help out physicians who are particularly, you know, severely burdened with all these administrative tasks — which not only did they not sign up for, but are not the best use of their time — and that can be done fairly quickly in a fairly straightforward way, that's great. But I think the kinds of things we're seeing initially, not too surprisingly, are just scratching the surface, and getting to the point where we see really systemic impacts in a positive way just takes longer. That's great. We had a panel last week where there were some AI experts talking about its impact on the workforce, and I've gotta say, it was a pretty dark conversation, and it feels like this is the opposite of that. It feels like the three of you share a certain level of optimism about AI's impact in health care. What would be the negative base case for AI? If you were making the counter case, what would that look like? Technology has not always been a good thing. You know, my father's a doctor in Delaware. They have to use this Epic system, electronic health records. No one ever said, "Oh my god, thank god for electronic health records." Right? So the history of technology in health care has not always been a good one. What would be the base case for this, you know, not being some sort of utopian future? It makes you long for the days of paper records and fax machines. Well, you know, I'm an economist, so it's my professional obligation to put the dismal in the dismal science. There are a number of downsides. Let me start in the small and move to the large. Some of these are obvious, particularly if you think of clinical applications.
Adjoa gave a specific example where it sounds like that's working really well. But it's not clear, for diagnosis-related applications, how well all those things work, or whether, even in conjunction with humans at this point, they really make a difference — that they improve diagnosis above and beyond what it would be in the absence of the AI tools. There's also the potential for downsides: errors, obviously, but not only that — bias. So you have an algorithm. The algorithm doesn't know anything; you train algorithms on data. And depending on the data used to train the algorithm, there could be all kinds of unintentional biases built into algorithms. That's not specific to health care at all; that's a general issue. And it doesn't mean AI is bad. It just means that care and thought have to be given to that. At a larger level, as an economist who thinks about markets, I think there are some potential adverse effects of AI that would exacerbate severe problems we already have in our health care system. Pretty much every urban area in the United States is dominated by one gigantic health system. Right? So, you know, Duke and UNC in the Triangle area; University of Pittsburgh Medical Center here in Pittsburgh; in Cleveland, nearby us, University Hospitals and Cleveland Clinic. In most places, it's really just one. Then you have huge insurance companies that have been buying up everything in sight except for hospitals — UnitedHealthcare is an example of that. And what all of these entities are trying very, very hard to do is to avoid what they call leakage. They don't want revenue — well, we might say patients, but what they really care about is the revenue — leaking outside the system.
So if you now own an insurance company and a bunch of physician practices and home health and a PBM and pharmacies and data-holding companies and data analytics and so on and so forth, you've done your best to capture as much revenue in the system as you can. You don't want that leaking. Now, AI might actually exacerbate these problems for a couple of reasons. One, it only works as well as the data upon which it's based — I mentioned that in the context of data — and you need a lot of data. So it makes it harder for smaller entities to compete. Also, the algorithms can be proprietary, and that can also confer greater advantages on large, conglomerate kinds of systems. So I'll stop talking about this for the moment; we can certainly return to it if we want. But for all the possible blessings of AI, and I think there are definitely many, something to keep an eye on is what the impacts might be at the system level, and whether that creates even more dominance and market power on the part of the very big players in the US health care system. Yeah, very good point. And I wanna pass it over to Mika and Adjoa on that same case. I mean, the idea that AI accelerates the concentration of power in asset owners is a general concern across all of economics. Right? Just the big technology companies themselves that own these models, you know, stand to get obscenely wealthy, and there's a concern there. And I imagine, to your point, Martin, the same thing would potentially be happening at this health care level. What about down on the ground? The utopian vision is that AI frees up humans to be more human in their health care. But, as I said before, technology has sort of gotten in the way of that kind of doctor-patient health care to date.
If I think about going to the doctor today, they usually have their back to me, and they're typing into a screen. So, you know, people might be forgiven for being somewhat skeptical that more technology is going to improve the humanity of health care. Yeah. I mean, I think we're definitely at an inflection point. Right? Because I'm not that old, but when I started medical school, we did have paper charts, and we were just getting introduced to electronic medical records. I remember being an intern on my surgery rotation — I did my residency at Yale New Haven Hospital in Connecticut — and arriving at 4AM to comb through the charts, having to figure out what the doctors were trying to document because it was completely illegible. And so then when Epic came out, to your point, Dan, it was like, oh, we can just read all this documentation on the computer. How wonderful. And there are some wonderful things about Epic. The fact that I can be walking from seeing a patient in the emergency room back up to the ICU and get an alert about a patient's results and call the nurse and say, I want you to start this medication, give this fluid, hang this blood product, etcetera, in minutes — to me, that's pretty remarkable. And we have seen that sort of lifesaving effect of these technologies, largely in the ICU, when it comes to sepsis detection and sepsis alerts. There are whole infrastructures in hospitals to intervene on patients before they get sick and try to reroute that course of sepsis. But you're exactly right. The corollary is that, also in my intern year, I remember thinking, wow, I'm getting ready to be a doctor, this is so wonderful — and being really shocked and surprised at how much time was spent at the computer versus how much time was spent at the bedside. Walking into the sign-out room at 6:00 in the morning, and there's a sea of, you know, eight to ten computers, and everyone's hunched over like this.
We did ultimately get up and do our morning rounds and see patients and talk to families and do all the things, but there still is quite a bit of a distance there. And when we talk to medical students, compared to folks who are more senior, they do feel like there's a bit of a demise in the physical exam, because they're growing up in an era where they're really dependent on some of these electronic tools. And so, theoretically, if we do have more freedom from documentation, I think it gives folks like myself, who are teaching this next cohort of students and trainees, the chance to bring them back to the bedside and say, you know, hey, look at this patient's big vein in the neck. Maybe what we thought was sepsis is actually heart failure, and we need to give this medication instead of that one. So those are the real applications. I mentioned Yale; I've worked at Stanford; I now work at Duke. And in all of these places, the documentation burden is very real. But I'm also mentioning those places because I really like what Martin said about bias. One thing that is sometimes spoken about is where and how these models are trained. They're trained on large patient populations in big academic teaching centers that are not always representative of the general population at large, right? So they largely leave out underrepresented patients. They largely leave out patients who do not speak English. They largely leave out patients who are under- or uninsured. And in many places, that is the exact demographic of folks who are coming to the hospital to seek care. So there are definitely some inherent biases there, as well as in the way that these are trained to be executed and work. That is something that I think is going to be very, very difficult to parse apart. So a lot of what we are trying to do — folks who have a shared mindset like myself — is to bring back what historically were called soft skills, right?
Because as these technologies become more salient, the soft skills — like, how do you tell a patient and their family, or how do you tell a patient's family, that their loved one is dying? — that, to me, is going to be much more helpful than whether you diagnosed sepsis at 10:01AM or 10:03AM. Mika, I'm keen to get your perspective, because you actually build these models, and you sit at the front line of the tech. So, a couple of things. I was just thinking of the downsides. There are three big things that I think about that are going on, and one affects, I think, all of us. EMR adoption took about seven years. If you think about the time frame from when, you know, meaningful use and all the legislation came in, it was about seven years before it really penetrated. What's interesting about AI is the speed at which it's going. You'd say, oh, it's gonna be another seven years, but it appears to be going faster. And what that means is everyone across the system needs to be aware that there's a new skill set coming. And I think we actually have a real retraining and refocus issue. Right? I said it's not a good use of human capital for people to read medical records and type them into something else — but that's actually a lot of people's jobs. That's their primary function or a major part of their function. And if that goes away, then the question is, what else are they going to do? And we did see this in the EMR adoption. There were physicians and other practitioners who stopped practicing, nurses who left, who said, I don't wanna go through the EMR adoption phase; I'm too far along; it's just not for me; I'm gonna go and try and do something else. The reason I bring it up as a generality is that we see this in software development, by the way. I talk to my software engineers, and I tell them, if you guys aren't adopting AI tools today, it's a very real thing.
If you're not leaning strongly into that, there is a very real risk that you'll be obsolete — not, like, seven years from now, but more like three months from now. The pace is just really kind of incredible in terms of the innovation. So that's one thing I think about AI. I think we all have to realize these tools are neutral. We just got power tools; you gotta learn how to use them. We need to understand what they are and how they work. All the bias stuff we were talking about, I think, is part of that. Can I jump in there? Because there seems to be a connection to what Martin is saying. The risk has been that when these big health groups have gone through a technology adoption, they don't tend to — to Adjoa's point about, like, let's reinvest in the soft skills — you know, my father, who's a doctor, now has to see 26 patients a day instead of 16. It feels like the introduction of technology tends to get coupled with how quickly the health group can kind of dehumanize the experience: see more people, get more money, squeeze more out of the experience, not trying to enrich the experience. And one of the points that was made on the panel last week that I thought was excellent — and I don't know if this is true — was the difference between AI adoption by Chinese CEOs and Western CEOs, with Western CEOs being very fixated on cost reduction and, therefore, headcount reduction.
And in the East, Chinese and other Asian CEOs being much more interested in new product creation, new experience creation, new feature development — sort of generative. But it does feel a bit like, knowing Western CEOs, if you have the opportunity to reduce those jobs and therefore make fewer people do more stuff, Mika — which is Martin's kind of point — that tends to be the direction of travel. Oh, good, we don't need as many nurses; let's get rid of them. And now you've got one doctor and an AI robot doing 17 people's jobs. Let me tie in a couple of other pieces to that, and then I'm gonna come back to what you're talking about. This aging population thing is, in my mind, very real. There's a demography issue that we have. Right? If you just look at, let's call it, Medicare — the people that we insure, right, like our government-insured population — that population is, between now and 2030, gonna go from, just picking round numbers, let's say 40,000,000 a few years ago to 80,000,000. So we're picking up double the number of people for whom we are taking a collective societal risk, who are then being funneled off into these big systems and the insurance companies who buy that Medicare risk and Medicare and Medicaid Advantage plans. The entire financial industry of managing health care suddenly has a lot more older, sicker people in it, at a time when we don't have all the resources. So there's, like, a tension. I actually think it means we all need to do even more. So it's not an extra 26; it's an extra 50 or 60 patients, right? We need the capacity to accommodate the demand, and we just can't magically create that capacity. We can't create more trained nurses or doctors or build hospitals on the timescale in which this is happening. Well, here, we just import them. Yeah, well, whether we'll continue to do that is another question. Anyway, it's a great one. Yeah.
It's a whole different piece. I wanna put another one in, which is competition. So now we have large players, right, that control a huge amount of the data. But we also have technology that enables smaller companies to move very, very quickly. And, actually, small companies innovate — to your point about East versus West, I would think that there's also large versus small. I can tell you small companies innovate very, very quickly, because the change management burden is much, much smaller. You don't have to redirect, you know, hundreds of employees when you have 50. It's much easier to communicate a change in scale and skills to them. So the real issue, I think, becomes: who controls the data and information? And I think this is part of the AI competition piece — how freely does information and knowledge move across these networks? I don't think that we've come to an understanding. If you look at just some of the litigation that's going on in the space around EHR systems and all the rest, it's all about things like information blocking and whether information can move around. And so the dystopian piece of this, Dan, for me, is that if we don't solve this very complicated problem, we actually run the risk of kind of the whole system collapsing financially around us. And I think we've been on that edge now for many decades, and I'm not sure how much further down the road we're gonna be able to kick that can at this point. So that's something that worries me a lot, for all of us. That's a strong point — that the alternative could be worse. I wanna make sure — we're at the half-hour mark, and we're gonna go for another ten or fifteen minutes — so I kinda wanna throw it open to other people's questions.
I got a question here in the chat that's asking about user data safety, and I would take this opportunity to say to the people that are listening: feel free to throw your questions into the chat, and we will ask the experts directly. So let's just open that up. The question is: in the new AI era, are there risks associated with user data safety? Which is, I guess, HIPAA and kind of patient confidentiality. This stuff is very important. People's most precious data, I guess, is their health data. Right? So the question is: have we identified all the risks associated with this, or are we living with unknown risks? The concern is about data security and privacy issues. Is that correct? Something like that, yeah. So, I'll let the other folks respond in a sec, but it's not clear to me that the issues are very different. We have issues right now. There are data breaches. Data, as Mika alluded to, are siloed. The data don't follow the patient, and I think that's a huge holdup. It raises antitrust issues, as I mentioned earlier. But even when the data are controlled by one entity, those entities are being attacked, and they are being breached now. We need more interoperability. I don't think that's going to increase the risk to privacy or security. And I think HIPAA is a pretty weak shield at that. Yeah. You know, it's interesting: how many people do you think have uploaded their medical records into a chatbot? Forget the, you know, OpenAI Health or Anthropic Health, these new ones — but before that, how many people uploaded some part of their personal medical history into a chat function? I was doing it about ten minutes before this call, because I've got some mild gastritis. So I've had a very detailed conversation about it. We did not bring that on, Daniel. Okay. Yeah. Hopefully, it was not stress induced.
Where did that data go, by the way? If you think about the way these models work, even the engineers who are building these models don't necessarily understand how they learn and train. The data in these models is abstracted from human thought. They do not think like humans. It looks like a person, but the background thinking is not human thinking; it's machine thinking. And so all of this private data that's been loaded up into these models has been training the models, and it's in there somewhere, and nobody knows where. And so when is it going to pop back out, and where? I don't think anyone's answered that question yet. I don't think we actually understand it at the engineering level. So, yeah, I wouldn't put it in there. Mika, I'm curious what you'd say — sorry, go ahead, please. Oh, I was saying that, when it comes to these questions about privacy and safety, there is sometimes a little bit of a gap between what patients and individuals perceive their data is going to be used for and what it is actually used for. So, first of all, yes, it is largely de-identified and then repurposed. And surely it could somehow be backtracked. But the truth is that there are probably much more sensitive parts of one's entire being that are a little bit more readily available to these systems than, say, you know, one's X-ray and so forth. The tools that we currently use within health care systems are all encrypted and de-identified and so forth, and it is exceedingly rare that someone has suffered some sort of, you know, mal-outcome because of their data being used in a way that was untoward. I don't think most people realize health care data are much more secure than, like, social media.
People log on to platforms and register for this and that, and they supply all kinds of information without thinking twice about it — basically the point Mika was making. Right? And that's actually, in some cases, extraordinarily vulnerable — much more so than the vast majority of health care data. Yeah. I mean, to your point there, Martin: your consumer financial data — we've worked with consumer financial data companies; think of, like, the LexisNexises of the world and what they actually have in there. And you combine that with the fact that your phone basically tells everyone your location all day long, wherever it travels. I mean, outside of health care data, there's just a lot out there. Look, I worked for the federal government. And a few years after I left, I got a message from the Office of Personnel Management saying there was a data breach. So the federal government itself has had personnel data breached — and, of course, the federal government does not want that to happen. Anyway, this is just adding on to the general point that people supply data in all kinds of ways where their information is actually very exposed, and much more so than in health care. I wanna make a reflection on all of you in a way, because it's really interesting to see the difference. Last week's panelists were incredibly pessimistic, and it's very obvious we have a panel of sort of AI evangelists, or at least mild optimists. And part of the argument from everyone on this call seems to be that the alternatives are worse. Right? So when it comes to Mika Newton talking about how we have this crisis of too many patients and not enough doctors — the argument for embracing AI seems to be that the alternative is worse. And then likewise on this data issue, you're also saying, well, look.
You know, maybe there'll be some problems associated with personal health care data, but look at what happens with personal financial data or social media data. So much worse. Right? So part of your optimism seems grounded in a kind of realism about how bad the existing systems are, whether that's the capacity issue in the existing health care system or the existing templates for data privacy. How bad things currently are is fueling your optimism. Well, I'll make a point about that. I think COVID taught us a lot of lessons. Think about attrition in the workforce in comparison to the aging population: not only are folks living longer, but they're living longer with more complex diseases that require more health care and more intervention, particularly at the end of life, in places like the ICU where I work. What we learned during COVID is that you cannot, to your father's point, Dan, ask individuals to go from seeing 18 patients to 26 patients to 30 patients and then expect that those 30 patients are going to get high-quality care, that there will be an equal or lower number of medical errors, and that the physicians will sustain that throughput over their career. Because the exact opposite happened. Right? We had burnout. There was a lot of attrition, and now we're dealing with something called moral injury, where folks' moral compass is at odds with the care that they provide. And the generation that is coming up, they don't tolerate that. They want vacations. They want CME to go to conferences. They're looking at jobs and saying, absolutely not.
I'm not gonna work in this old guard where I'm forced to see 35 patients in one hour and just continue doing that for a career of forty years. So, and this is why I keep using the term inflection point, because truly I'm not an evangelist per se, I do think that if we are thoughtful and careful about how we introduce these tools, they can alleviate some of these concerns, especially for the generation that is growing up with technology and is very comfortable with it. But these tools can also create distance, so that a patient says: I went to the doctor and they had their back towards me, but they didn't even touch me the whole time; there was no exam; they didn't listen to me. And that, I think, is where the peril can come in. Yeah. So I think it's really a question of how AI gets used. And the point Mika made a few minutes ago, that there will be some pain (no pain, no gain), relates to the fact that you need new human and organizational skills and factors with these new technologies. That means some skills will become obsolete or much less valuable. That is just part of the equation. I suspect that was a large part of your discussion with the other group, Dan. And I think we need to be cognizant of that and plan ahead for those kinds of scenarios. I want to make sure we get to people's questions, because the chat has just started to explode and we're down to the wire now; I want to make sure we don't overstay our welcome. And for those of you who are asking questions, I'll encourage you: we have three amazing experts from our Qwoted database here, but remember that they're just three, and we have, obviously, hundreds of thousands.
So, a recommendation: I'll ask these questions of our experts now, but feel free to also post these questions as expert calls, or maybe as story ideas, a feature that we have inside Qwoted, if these are questions you'd like more people to answer. So Dan asks: in clinical settings, can AI be a helpful tool to identify patterns in population health and disease that can guide clinicians in diagnosing and treating an individual patient? And we're gonna do these like a speed round, so if you feel like you can really add something to that question, jump in, because we've got two other questions I wanna get to before we close. I'll take that one very briefly. I'll say yes and no. On a population level, probably yes, if you're looking for large signals in big groups. But on the individual level, not necessarily. Concrete example: a lot of these tools have to do with visual learning, right? So dermatology and radiology are the big fields that we're looking at. If you look at a rash like psoriasis in someone who comes from a certain ethnicity, who is Asian or Caucasian or so forth, that same rash is going to look very different on someone who's got skin like mine, and the diagnosis might actually be missed. So the answer, broadly, is yes, but on a case-by-case level this does fall short if the patient that you're trying to diagnose doesn't fit the way in which the tool was trained. Yeah. I think the big opportunity is gaps in care: running population-level screening for gaps relative to standard of care. Most people are not actually treated with the basic things that should be done. They just get missed. They don't go see the doctor. I think there's a lot of cleanup or level-setting that could be done, but, again, that's at the population level. Okay. Roy asks: there are some AI-powered chatbot clinical assistants.
For example, Epic has Emma. And I will just say, parenthetically, I'm not sure why, or whether it's a good idea, that all chatbot assistants are being given female names; I think that's a whole separate conversation about what it says about our society. These chatbots seem to intermediate between patients and providers. Can people reflect on what's good and bad about that? So, basically: are AI-powered chatbots creating distance between providers and patients, and is that intermediation good or bad? I guess the plus would be rapid response; you don't have to wait for a human to be there. The bad would be people talking to chatbots instead of doctors. There's a good and a bad, right? The good is: if someone's clinical job is to give the same advice over and over again, and each patient feels like they have a unique question, but it's actually the same question with a very textbook answer, then why do we need a human to tell you over and over again that you need to lose weight, or that your weight is too high, or what the downstream impacts of being obese are, for instance? That could be a simple question. I'm not a clinician; I'm just thinking of one example. I think the downside is: what do you do the one time you actually needed a human? It's not that simple question but something more complex, and the bot misses it and gives a very rote answer when you should have had a much stronger intervention. So at scale, again, I think there are big advantages, but you really run into this issue that Mika brought up, which is that when you get down to the individual level, suddenly some of these models and their biases show kind of an ugly side. Yeah. I'll say that I'm speaking through the bias of my specialties, which are anesthesiology and critical care medicine.
I don't really think the chatbots are very helpful, because even if we're just trying to screen someone and say, yes, you are safe and suitable to undergo surgery, or no, you need to be optimized by having these tests or interventions, I just think that the patients we care for now are too complex. The responses that need to be sent are nuanced, and the bots miss that kind of fine-tuning, those nuances. There are probably spaces where this can be used a bit more effectively, to Mika's point, where the patient population might be a bit more homogeneous. But when you get into these tertiary and quaternary care centers, where patients have already failed smaller community hospitals and are coming to us for expert care, the bot can't provide that and can't replace our clinical experience. Okay. Last question from the audience that I can see here. Kerry is asking, and this seems like something that, Martin, you might be able to kick off with: do you think patients and clinicians are aligned in their level of optimism about AI in health care, and how should organizations navigate a disconnect between the two? I guess one could also add that patients, clinicians, and the health groups they work with would be the three groups that would presumably need to get aligned on their level of optimism. Are those three distinct constituencies approaching this with different incentives and a different mindset? So, yeah, different mindsets, different incentives, and I'm coming back to some things that were mentioned previously. The organizations themselves have one set of incentives, and they're very focused on profits, regardless of whether they're nominally not-for-profit or for-profit. That doesn't matter. I mean, health care is the biggest sector of the US economy.
There are trillions of dollars at stake. Money matters here. And as I said earlier, the larger and more dominant these organizations get, the more they will impose their objectives on everybody else, including clinicians, physicians, and other clinical workers, whether they like it or not. If those workers don't have alternative places to go, then they'll have to consider whether they want to continue to practice medicine or not. So that's sort of a negative side of these things. And I'll defer to Adjoa and Mika on the more general stuff. My sense is patients aren't necessarily big fans of these things. And the key question in all of this is not whether we're gonna use AI, but how we'll use it, and use it appropriately and productively. Yeah. AI is here, for better or worse, just not entirely integrated yet. For patients, it is a bit of a mixed bag. On one hand, they really love that they have more accessibility to us. Here at Duke, we call it MyChart; some other places call it MyHealth. They can log in and say, Doctor Boateng Evans, I'm having this problem, can you advise me? And send it almost like an email or a text message. That wasn't the case years ago; you maybe had to go through an admin or some other vehicle to get to your clinician. So patients like that accessibility. I will say a lot of clinicians don't always love it, because since it is so easy to reach us, the inbox can very quickly swell. On the other hand, because information, particularly lab results and CAT scans, gets uploaded into these platforms very quickly, we have had instances where patients find out about a cancer diagnosis before a clinician has even had a chance to call them, or find out some other highly sensitive lab result before it can be interpreted through the lens of someone who has medical training.
It sometimes engenders a lot of unnecessary questions, like: the RDW, which is a measure of your blood cells, is 1% higher than average; am I sick? Do I have cancer? What does this mean? All of those sorts of things can create a bit of noise between patient and provider that then starts to cause the burnout and the moral injury and all the other things that we talked about today. So it's certainly not uniform. I think some patients really love that accessibility. Some clinicians really like being able to walk around the hospital and see what's happening inside the operating room. But there are perils to both of those accessibilities. Yes. I wonder how many of the people in our chat here complaining about chatbots disintermediating them from their providers might also be, like me, people who go to GPT with every small ailment. You know what I mean? I wonder whether the sheer prevalence of a tool like GPT in our daily lives might change people's technology adoption curve. Maybe we'll see the AI doctor for at least the basic sniffly cold, or, like Mika was saying, help me think about losing some weight and what the implications might be for diabetes; the rote things. Maybe, a bit like with driverless cars, we might just become more accustomed to that, and it might not be as scary in the future as it sounds today to go visit the robot doctor. So I think one thing is that human beings are social animals. And, again, look, I think AI has a lot of potential, and certainly Adjoa is on the front lines, and all the things I've heard make a lot of sense. But we also have to bear in mind that people do need contact with other people. Sometimes for what I call technical reasons. Right?
Clinically, that's really what you need; AI is not gonna do the job. But there's also human-to-human interaction that's very important for overall well-being. And I think that's something that has to be considered in a broad sense. It's not just health care, right? It's shopping. It's all kinds of other activities. We saw all kinds of harms during the pandemic. In a university setting, I saw this firsthand: we had classes online, and it was okay for a while. But after a while the students were actually pretty unhappy, because they wanted social interaction, not with me, but with the other students. And I think that captures what some of the issues are. So that will be a challenge for us as a society. It's a challenge for us right now as a society, and it's a challenge for health care, as a critical part of our society, going forward. There are companies in this space, right? The AI can call you at home and have a conversation with you now. And if you look at some of the companies working in that space, the one I know is called Hippocratic AI, and there are others developing this too. They do things like calling you up and making sure, for instance, that you'll get to your next appointment or that you remember to fast. I've heard some of the recordings of these. Or, for instance, giving nutritional advice to someone who has diabetes and is supposed to adjust their diet; that's another one I've heard. The dialogue is surprisingly human now. It would be hard to tell. Before, we always knew when it was a robocall; you'd be like, this is not a real person. I think it's becoming really hard to discern whether it's a real person or not. And we see that, by the way, in social media and all the fake video. You can basically create anything today that looks like and mimics human behavior, and that's not slowing down. Again, talking about pace.
And I think that's something we're gonna need to figure out, because, frankly, this entire panel could be AIs talking to each other in the not-too-distant future: our personalities, trained from the writing we put on the web or even our social fingerprints. And that's a big societal question that needs to get answered, but that's obviously a topic for another discussion. So I think this is a great place to wrap up. I just want to point to something that Jordan from our team shared in the chat. If what our fabulous experts said today has sparked any ideas, we encourage all of the reporters and writers and podcasters who are listening to go and create a request. Get this conversation going, put a call out, and get not just these three experts but all of the experts on our platform talking about these pretty important issues. There's a link shared there where you can click to create a request today. You can also follow the links to our experts that have been shared in the chat and ask them questions directly if you have follow-up questions. And we'll be sharing a copy of this recording with everyone who came. I want to thank Doctor Evans, Mika, and Martin for your participation today. Really interesting discussion. You're all, in your own ways, at the front lines of this, and I think it's so fascinating; we could have gone for many more hours. So thank you very much, and thanks, everybody, for joining.