In this episode of Telemedicine Talks, hosts Phoebe Gutierrez and Dr. Leo Damasco dive into the latest AI disruptions in healthcare—from Utah's pilot letting AI handle prescription renewals to ChatGPT's new health features. Explore the pros, cons, and ethical concerns of replacing human judgment with algorithms, and why patient safety hangs in the balance.
This episode is sponsored by Lightstone DIRECT. Lightstone DIRECT invites you to partner with a $12B AUM real estate institution as you grow your portfolio. Access the same single-asset multifamily and industrial deals Lightstone pursues with its own capital – Lightstone co-invests a minimum of 20% in each deal alongside individual investors like you. You’re an institution. Time to invest like one.
__________________________
What if AI could renew your prescriptions or diagnose your symptoms, but at the cost of losing the human touch that catches life's nuances?
In this timely discussion, hosts Phoebe Gutierrez and Dr. Leo Damasco react to breaking developments in AI-driven healthcare. They unpack Utah's pilot program partnering with Doct.ai to automate prescription renewals for long-term meds, debating whether it streamlines care or risks overprescribing and bias. The conversation shifts to ChatGPT's health tools, including HIPAA concerns, data privacy, and the potential for misuse by vulnerable users like teens. Drawing from personal experiences as providers and parents, they highlight AI's strengths as a supplemental tool (e.g., for research or differentials) versus the dangers of full autonomy, emphasizing the irreplaceable role of physician gestalt, context, and independence. Real-world examples like Open Evidence and UpToDate illustrate AI done right, while the hosts raise alarms about corporate incentives, edge cases, and moral compasses in tech.
If you're a physician, health tech enthusiast, or patient navigating AI's rise, this episode offers balanced insights on innovation, regulation, and safeguarding human elements in medicine.
Three Actionable Takeaways:
About the Show:
Telemedicine Talks explores the evolving world of digital health, helping physicians navigate new opportunities, regulatory challenges, and career transitions in telemedicine.
About the Hosts:
[00:00:00] Hey everyone, welcome back to another episode of Telemedicine Talks. This episode is something I thought was gonna happen a lot sooner than it has, but it's been a really interesting week in healthcare, specifically in the AI space. So Leo and I are honestly just gonna have a conversation reacting to some of these new programs and offerings, where it's really closely tied to almost trying to overtake what physicians do today.
So let's jump in, Leo. Yeah, it's definitely interesting. You were actually on top of it and said, hey, did you hear about these AI solutions? There's a specific AI app that we're gonna talk about, especially in Utah. And that was just yesterday.

So I was quickly trying to read up on it and get a little bit more information. But you're right. [00:01:00] In the very recent timeframe, especially over the last year or so, we've seen AI blow up, especially in the medical field. We've talked about it before, and personally I've seen AI get used more and more in telemedicine applications, not just medicine altogether.
And for the most part, AI's been used, at least from what I've seen, and I'm just an n of one, right? But what I've seen is mostly as a supplemental tool. Hey, make research more efficient, get more data, maybe provide differential diagnoses, and in the operational sense, make operations more efficient.

Maybe use AI for the front-end work, the front-office work: gathering information, maybe even having the initial HPI conversations. But ultimately, when it came down to decision making, it went back to a physician in the flesh, right? Like an [00:02:00] actual doctor or provider actually making the decision.

And that's carried forward. But yeah, you could talk about these new AI quote-unquote solutions that you found. Oh, yeah, absolutely. So interestingly enough, there's a couple different pieces that I'm gonna cover real quickly just to orient people.
AI has basically been a really great tool, I think, to empower people to make quicker human decisions, right? So in my world, I use AI all the time. Can you help me? Where should I look to find this regulation? Where are the last lawsuits that were tied to this scope of practice, or whatnot, right?

But AI is never my source of truth, or something that is an end-all be-all for me. And so in healthcare, especially as we get into these ideas around prescribing and clinical guidance and really helping encourage and push people's health forward, I think that's [00:03:00] where there's this really slippery slope.
Especially as you think about how it ties into corporate life, how it ties into driving the bottom line. And just to set the stage, there's a lot that people are talking about because, again, AI has been a really good tool for cost savings.

But when we think about healthcare, the one thing, and everyone knows I harp about the corporate practice of medicine, is that it has to be done with independence. It has to be done in a safe way. The one big thing that came out this week is that the state of Utah is launching a pilot program aimed at basically letting AI control prescription renewals.
Now, I have heard on both sides of the coin that this is great and this is gonna help: it's great for physicians, it's great for patients, it's great all around. This program [00:04:00] specifically is partnering with a company called Doct.ai, which is like an AI physician kind of tool.

And the idea behind it is, these are people who've been on the same prescriptions for years, so what difference does it make if it's a physician or an AI checking this? Like there's really nothing that's gonna change. That's where I have problems with it, because the whole point of medication management is that at those renewals you're supposed to assess and revisit. Do they still need the drugs, or do they not?

Have you talked to the patient? Is it working for the patient? To me, where I see a slippery slope is that you're removing that whole human element. Meaning, is the AI going to catch when a person should be taken off? Is the AI overseen appropriately? Is it unbiased? There's a million thoughts that I have.
Yeah, and this is a slippery slope. Ever since I've been reading [00:05:00] up on this, there's two sides of the coin, right? A lot of us do prescription renewals on a regular basis. I wouldn't say it's mindless, but honestly, it's pretty straightforward for the most part.

The majority of these are straightforward. You look for certain clues, you look for certain criteria, and you go forward. And I could see where AI could do this more efficiently. But on the flip side, again, I think it's a slippery slope, right? Allowing AI to take over the entire process. I think AI is great, but there's definitely times where it's missing pieces, where there are nuances that AI can't necessarily pick up in its algorithm right now. Can it learn? Maybe, but who knows? There's definitely times when I'm doing AI searches for [00:06:00] medical issues where the results have missing information, or what they base their information on is pretty questionable.
So I think that's the biggest thing: how far are we gonna allow AI to make these decisions, especially when there's nuances? It's funny, I have a saying when I see patients. A lot of times I joke, when patients present in an atypical fashion: ha, your body didn't read the textbook.

Can AI pick that up? It's those small little nuances that I think individual providers, with their experience, with their gestalt, can pick up, and AI may not be able to. Yeah, it's interesting that they're handing over the entire process.
Yeah. I don't know. I always go back to, it's not so much patient or doctor, it's just being a human, right? [00:07:00] How many of us don't even like calling call centers anymore because you're talking to an AI bot or a chatbot? We can't even get AI to help with my freaking DoorDash delivery, yet we're saying we're going to hand it prescribing for somebody's long-term health.

Now, I do wanna say, I think this could be really interesting from an authorizations and approvals side, which is completely separate, right? Let's say, for a Medicaid population. I know in California, for example, all prescriptions are actually ordered through the state. The state monitors every single order. They have a team of nurses and doctors who oversee all of that.
The state monitors every single order. They have a team of nurses and doctors who oversee all of that. Helping them go through renewals and authorizations quicker and easier, I think could be a really interesting value add and could expedite some of those like state and federal programs. Where I have concerns is when you have [00:08:00] private companies that are going to want to be able to offer these services.
Not having insight or control from a regulator's perspective on how it's coded, how it's set up, what it's authorizing versus not. I am heavily convinced that you are going to be over prescribing simply because you're making money off of it. And that's where I get concerned because I think from a physician's perspective, physicians bring in the human element.
That really makes sure it's done safely and it's done correctly and that it is not blanketed. I think every physician also gets frustrated with the insurance, whole insurance experience of being stuck within like a protocol and having the ability to go, actually I think you shouldn't be taking this, or I actually think there might be some alternative options here.
Again, I think that's where the AI is not necessarily, super strong. Yeah, [00:09:00] no, I definitely agree. it's the human element, right? The judgment element, the contextual element, that I think,a I just can't capture, right? It just can't, there's definitely decisions made by this human element that, that, that can't be replicated via algorithm machine.
And another question is, where are they coming up with these? What sources are they using? A lot of times, when I was teaching residents, I would say, hey, medicine is an art, not necessarily a science. You have to take the bigger picture into account.

And that bigger picture involves experience, involves criteria beyond just ones and zeros and definitive criteria. So yeah, how are they gonna mimic that? How are they gonna replicate it? They can't. And to your point, the regulation aspect, you're right.

Private companies are gonna come in, and they're not gonna want to lose money. How are they gonna set up their algorithms? Can we trust it? And who's [00:10:00] gonna regulate that? It's funny, because I was reading up on it too: a lot of how these AI companies check themselves is through another set of AI algorithms, right?

So they check how well they're doing, how accurate they are, using AI algorithms themselves. Yes, they've compared against doctors as well. But it's questionable, and we have to really delve deeper into how much we can trust the whole process. Yeah, I agree.
I agree. One of the things that I've harped on and will continue to harp on, and it will be a sword that I will die on, is that we as consumers do not know right from wrong when it comes to health or prescribing or anything in the clinical realm. That's why we have doctors.

That's why we have healthcare, "care" in air quotes. Because as consumers, we don't know if this is accurate. We don't know if there's gonna [00:11:00] be contraindications with a supplement we're taking, and so on and so forth. There are so many different things that fall into that.

And I wanna say, I don't feel like it's my responsibility to go, I was prescribed this thing and now I need to go research and make sure it works with everything else. We heavily rely on our healthcare system and our care team to guide us through this.

Just like people hire lawyers, just like we have teachers, just like we have all other professions. So to me, as some of this stuff gets really quick and loose, I do feel like there are gonna be so many edge cases that are just not able to be captured, because that's the nature of health.

We are all different. Our biology is different. What I do, my lifestyle, everything. Again, we have doctors for a reason. And part two: I don't think people are gonna know [00:12:00] what to say or what to ask, or how to even identify certain problems, because we don't speak that language.
I was laughing because you just said gestalt, which, I don't know what that means, but I know every doctor I work with uses it. I think you guys learn that in medical school. Oh, absolutely. Every doctor knows that word. And I'm like, what is gestalt? Yeah, and that's one of the things, right?

God, I learned it early in medical school, as an MS3 starting clinicals, when the old fogey doctors, which I'm now one of, would practice on the fringes of what the textbooks said. Like, hey, it says A, B, and C, and this is what we should do based on the textbooks, but you're doing F. Why are you doing that?

And often the answer was, hey, it's clinical gestalt. I've seen this before. It's just the clinical feeling. And most of the time, they were right. Gestalt wins, [00:13:00] and a lot of us have learned to trust that, and to pass that practice pattern and that feeling on to younger doctors.

It's been proven clinically, I think, and you could argue against me or whatnot, but I bet there's gonna be a lot of people agreeing with that. But yeah, AI's not gonna be able to pick that up. How are they going to sense that? It's very binary in that sense.
And it's tough. Yeah, I agree. I'm a big believer in the lived experience. I think that's what makes doctors better. I think that's why they all go through med school and residency, and it's such a freaking long process, because you have to try these things, experience them, get your hands on them. It does not matter what you code, you cannot get some of those things across.

It's, again, the edge cases, the nuances. Now, I do wanna say that there is federal legislation right now. If you all have not heard of [00:14:00] HR 238, look it up. It is the Healthy Technology Act of 2025. It was introduced last year and it's going through the motions, but this is legislation basically saying that AI or certain machine learning technologies may be eligible to prescribe drugs.

Right now there are certain drugs that'll have more oversight and all of that, but this bill scares the, I almost said something I shouldn't say, the heck outta me. It scares me. One, because unless this is tied to a federal regulatory board that is gonna be overseeing, approving these technologies, testing them, really monitoring them,

I am so nervous for what this is gonna open up. And again, [00:15:00] even to the point of who is building this. Because you talk to 10 different doctors, and 10 different doctors will tell you 10 different things, because in some of these instances there is not a right or wrong way to do things.
Yeah, it really is. You have to look at the whole person. So you bring up a great point: who's gonna say what the right answer is? I won't go into the political sphere right now and what's happening in medicine, but let's think about that, right? Like the current federal medical, quote, huge air quotes, authority right now.

They're putting out new guidelines, and I'm a pediatrician, so, asterisk, pediatric guidelines and so forth, that don't agree with all the other major medical authorities out there. Their [00:16:00] guidelines are actually quite contrary to the other medical authorities.

Like, who's gonna say what's right when you're regulating this? Are these federally appointed medical authorities going to say, hey, this is the right way to practice, while the other authorities are saying no? Again, it gets back to the whole art thing.

There's not specifically one right answer. There's a ton of wrong answers. Who's gonna say exactly what's right and what's wrong? And when they do get it wrong and an adverse event happens, who's gonna be on the line for that? Can you sue an AI technology?
You'd probably go after the business, but the business is gonna say, this is what the AI said. So, interestingly enough, who's gonna be left holding the bag? That's exactly what's happening. If you actually dive into Doct.ai, they're the first company that actually carries medical malpractice coverage on their algorithm.

But at the end of the [00:17:00] day, they don't care. Who cares about malpractice cases? With physicians, again, you go back to, there's a reason there's a human element, and now you're able to basically insure your technology. So at the end of the day, if somebody dies, if something harmful happens, do the companies care?

To me, that's where I think there's a slippery slope. Historically, to get medical malpractice coverage you had to meet certain criteria, so it really was a lot more solid. And that brings up the question too, you're right, who's gonna be checking it? Let's say an adverse event happens, right?

Usually you go back to the doc, and you freeze the doc and say, hey, now you're on a PIP, some sort of process improvement plan, and you're not allowed to practice. How are you gonna do that with technology? Are you gonna stop the whole business while things get fixed?
I don't know. Are you forced to do that? And then, if a fix happens, how are you gonna check on that? What steps are gonna happen? [00:18:00] So yeah, this definitely opens up a Pandora's box, a bunch of questions and a bunch of legalities. Yeah, interesting. So, flip side, right? The other interesting thing that happened this week, and this is different: the first story was about consumers who have no idea they're even being reviewed by AI. Well, ChatGPT just launched ChatGPT Health.

So unless you truly live under a rock, you know what ChatGPT is. But they just launched a new line, and the wait list is open. I signed up. I'm interested. I use ChatGPT just about every single day. And this is different, where they're basically creating a health hub for people: give us your lab data, tie in your wearable data.
We have your health history, you can tap into your Apple Health, and we are going to basically be able to support you from a health [00:19:00] perspective. Now, it's a little different because, again, you have a consumer voluntarily providing some of this information and consenting, which goes along with the Cures Act,

where if a person wants their health information, an EHR or health system technically has to give it to them. It's a federal rule. But again, I'm curious where the protections are, where the clinical escalation pathways kick in. At what point is this no longer a fun little, tell me what vitamin I should take, or tell me what food I should eat, but, I'm having a mental health crisis and I'm gonna go take 50 Tylenol? I'm curious to see how far they've pushed the boundaries to, quote unquote, be a healthcare offering.

This one's interesting too. I was just reading up on it quickly, and one of the first [00:20:00] lines under the public ad for ChatGPT Health is, hey, this does not replace decisions; it should be supplemental.
But gosh, that's a big question, right? Some of the examples they give are offering medical advice: hey, should I go to the ER right now or not? What should the next steps be? And they break their service down into several buckets of what they offer, right?

One is medical advice. One is helping to draft medical texts. Another is just figuring out what the next steps are, so forth and so on. But another question I had, like in the other discussion, is: what is the source of truth?

What resources are they using for their data? Looking at the writeup, it says, [00:21:00] hey, we're using contextual data found across the internet. God, that kind of gives me a headache. Looking back at the rare times I see brick and mortar, patients come in and say, hey, I did my own Google search, or I asked ChatGPT, and this is what it told me I needed to be worried about.

And a lot of times I'm like, hey, what are your sources? And it could be, my source was blah-blah-blah.com, and it's, yeah, that's not necessarily a reputable source. This is actually a commercial source trying to push one way or the other. And, I don't know, maybe I'm missing something, but can ChatGPT Health differentiate that?
I don't know. Can it? Yeah. And another question is, how does ChatGPT Health guarantee quality? Guarantee that what they're doing is right? I was looking into it, and they actually listed what their thinking was and how they basically assess themselves.

And the whole [00:22:00] assessment process is questionable. If you look at it, they said they had 600 doctors across several countries, so forth and so on, assess how it answered. And it was funny, they asked the doctors to grade ChatGPT Health's answers,

and then they compared the doctors' grades against another AI evaluation tool that ChatGPT created to grade themselves. And on some of the aspects, some of the topics, the self-grades were outside of, or just barely within, the standard deviation of the doctors' grades.

Which again brings up the question: can we trust it? Yeah, it's interesting. I'll be honest, I've used AI tools, ChatGPT specifically, to help me do my research. But it was [00:23:00] up to me, at the end, to develop my own plan, my own differential diagnoses, management decisions, whatever, based on that research. I didn't rely strictly on the AI tool to do it, right? Yeah. And to add even an additional layer: I think that it's noble to say this isn't medical advice.
You're removing liability from yourself. Oh, yeah. We're not, this isn't medical advice, this doesn't replace care teams. But OpenAI knows very clearly what is happening in America. They know very clearly that hundreds of thousands of people are losing health insurance.

They know very clearly that hundreds of thousands, if not millions, well, luckily yesterday the premium subsidies got expanded, but before that, there were gonna be hundreds of thousands, if not millions, of people displaced from their [00:24:00] traditional healthcare.

So to me, the idea of, thank you for telling me this isn't medical advice, but you're basically targeting a population that you know doesn't have a provider, that all they have is an emergency room. And as much as you say you're not trying to replace anything, you are very strategically trying to step into something.

Again, not for the good of humanity, but as a tool to take advantage of some of the political climate and where we are with healthcare. And we haven't even talked yet about, is this HIPAA secure?
Are they sharing this data? Now, they say that because we're providing it, it's being separated into a separate database, and the chats are not gonna help inform other chats. Do I trust that? No. Do I think that there are going to be any [00:25:00] protections? I mean, ChatGPT has already accidentally screwed up on so many things that were supposed to be private.

So again, it's this really interesting dynamic, not just from a consumer's perspective or a safety perspective, but also just, what is happening with this data? Am I gonna start getting freaking ads because I'm using this and asking questions about mental health? Is Grow Therapy gonna be in my feed all day, every day?

Because of that, I think, to me, that is where I can see a lot of this going: we're going to be using this, and then all of a sudden we're being targeted with advertisements because somehow our data inadvertently got shared or leaked or sold. Yeah. And looking at it, no, ChatGPT is not under the scope of HIPAA.

I looked it up, right? Yeah. They don't have to follow HIPAA rules, because technically, when you input your information into ChatGPT, you've agreed that you're gonna share your [00:26:00] information with them. Now you're trusting that ChatGPT is gonna keep it secure. But hey, it's buyer beware, right?
Yeah, read the fine print. I haven't really delved into it, but looking at the specific HIPAA compliance question, no, they don't need to follow HIPAA rules. So that's a big concern. And I wanna say too, and this is a different perspective, coming from a parental perspective: I'm a different generation, right?

Leo, you're a couple years older than me, so I'm a different generation. Just two-ish. No. Yeah, but we're raising children in this environment where AI is part of their everyday life. Oh yeah. And to me, again, we remember what it was like growing up as a teenager.

We remember the struggles of all of that. Now we're combating social media, we're combating social isolation, [00:27:00] we're combating drug usage and fentanyl. There are so many things that I don't even wanna think about my boys going through. And again, we're having a tool that they could sign up for, right?

Oh yeah. Federally, electronic consent is at 14, so any child could sign up for this. What are those protections like? And again, I will die on the sword of: at the end of the day, AI could never replace a psychiatrist, or the mental health aspect, or the social service aspect that providers and our care teams provide, and that's not even in scope for them.
So to me it's, what is gonna be built around some of that? When my kids have a health question and they go to AI because they don't feel comfortable talking to me, and AI gives them something that I think is completely ridiculous, you won't know. That's what I'm seeing.

You won't know, and they're gonna run with it. Yeah, they're gonna run with it. This thing's safe, I should do this. Again, to me, it's, at what point, where is the [00:28:00] moral compass coming from? And it's hard to gauge, because it's not live.

So there's a wait list, and I cannot wait to get my hands on it, because I do want my boys to use it. I wanna figure out what their experience is like versus mine, versus my husband's, versus my grandma's. Because the second I see something worrisome, well, maybe I'm overreacting, maybe they've really thought through all of these things.

I doubt it, because that's why we have so many different providers and specialties and all of that. And that also highlights, going back to the strengths of the algorithm, or the weaknesses even, that question highlights one of the bigger weaknesses in the algorithm.
If you actually take a look at ChatGPT and how they grade themselves, and I'm looking at the website right now, they agreed with actual people on how well they [00:29:00] did on communication tasks, sending communication to each other and to patients.

They're decent at health data tasks, just operational, logistical tasks. But actual clinical tasks? Emergency referrals are at the edge of agreement with the doctors. And one thing that's very concerning is context seeking. Can ChatGPT find more context, or figure out if it's missing something contextual? Really, this whole gestalt thing we're talking about. It's totally off: ChatGPT's grading algorithm graded it outside the standard deviation of what actual doctors graded the algorithm. So yeah, there's definitely a disconnect there, and I think that's the biggest thing that we're missing.

Hey, can you take this data and actually process it? Can [00:30:00] ChatGPT take the data that's out there, for better or for worse, and actually process it to a safe answer? I don't know. It doesn't look like it; it's, again, outside the standard deviation. And I haven't taken a deep delve into the data, but just looking at the graphs offhand, that's a big question, right?
I completely agree. Now, not to harp too much: I don't want people to think I'm anti-AI. I'm not. I think Open Evidence is a phenomenal, amazing tool, so helpful to just about every single doctor. I did not know that Open Evidence gave you guys CME.

Yeah, and actually that's a good point. There's Open Evidence AI, right? There's also the tried and true, especially in my generation, UpToDate, and UpToDate just started their AI. These AI engines are [00:31:00] driven by scientific medical data, right?

Open Evidence, and now UpToDate, will give you the literature search resources they base their answers on, and they'll try to frame a context, but it's gonna be up to you to actually process that. It's not up to us, right? I can't get UpToDate.

Oh, yeah. I can't get Open Evidence. I don't have an NPI. So at the end of the day, they're giving the physician expedited information so you can make more informed decisions, so you can go back to your patient and explain it in ways they understand. Where, again, I think the slippery slope here is that you're removing that physician interpretation.
Again, how many times have I called you and been like. this thing happened. I don't even know what the hell it means, and you're like, oh, it means this. Don't worry about it. Or just the other day. the same thing that I did, when I called you about my [00:32:00] son's rash, I uploaded the pictures in chat g pt, Yeah. I did the same thing. I should I be worried, should I not? it didn't give me an answer that I was confident enough as a mom. To the point where it's okay, this is what Chacha BT said, but I'm still gonna call my pediatrician bud and ask the question. I agree.
I, I am not against ai, I love it. I love it. For that point, because when you texted me, I was like, okay, using my knowledge plus I was like, Hey,what does AI think? it could pull from different resources, but then I was able to, To evaluate those resources, evaluate the answer and agree with, It helped. It helped me gather information faster, make my decision faster. It's a great tool, but I also agree with you, it's that next step over the line. I'm gonna say over the line where you do remove the decision making process, and that you give the entire decision making process to an algorithm , to a machine.
Yeah. Again, I [00:33:00] don't know what gestalt means or meant, but I can tell you every single one of my doctors uses it like it's saying cheese. You guys speak a language that is understood in a certain way, and I think the good doctors are able to take that and give it in normal layman's terms to your patients.
The change, though, is you are taking that and interpreting it. Yeah. And I understand we have these tools, but again, who trained the tools? Are there certain things that are biased? Like, I was doing research, and there's a doctor out there that claims that smoking cigarettes is healthier than eating fruit or drinking fruit juice.
Is that doctor one of the 600? I wanna understand, what is the breakdown of the specialists? How many psychiatrists did you have doing that? How many hormone specialists? How many pediatricians versus gerontologists?
I need to really [00:34:00] understand, because to me the devil's in the details, and you're pumping this out to millions and millions of people who are going to trust it inherently. Oh, absolutely. No one's gonna ask all those questions. But to me, as somebody who loves my little care team around me,
my free care team that I get to talk to about all this good stuff, that's how I've realized that there is no right or wrong answer to half of this stuff. It really is so specialized, even on the simplest level. Yeah. No, it's gonna be interesting to track this, to delve deeper into the entire specifics. We're gonna uncover a lot of things, a lot of questions, and there's not gonna be a straight answer to that. Just the whole question about bias and the authority itself: who is the authority? Who are you calling
the authority, right? [00:35:00] Especially with the laws, the federal bills, and so forth, are you gonna trust our current medical authority to say, hey, this is what's right and wrong? Honestly, a lot in the medical community are gonna be like, hell no, we're not.
How far is that gonna extend? Are they gonna apply it to this? Is that gonna be the source of truth? Should it be the source of truth? All these questions come up. Wait till they throw in an async questionnaire. That's when I'm gonna really freak out, when it's like introducing a screener tool.
It's already there. Yeah, it's already there in certain platforms, and I like it when it's there, because I'm able to review the answers, I'm able to ask more questions when I need to, and I'm able to process the information. But yeah, taking the human element out?
Ugh, I dunno. Yeah, it concerns me. Yeah. Another day, another new thing for us to worry about in healthcare and health [00:36:00] tech and patient safety and all the good stuff. Is this a precursor to Skynet? I don't know, we'll see. Yeah. You never know. We hope that this episode was helpful and informative.
If you guys have thoughts, I wanna hear them. Yeah. Give me your thoughts on why you disagree with me or Leo and where you see this going, especially if you're a physician or a provider or running your own company. I'd love to get your take on how AI helps or where you think it's gonna hurt. Drop us a line.
Yeah, no, definitely interested in your feedback. And shoot, if we can get a bunch of people to have a conversation about this, that'd be awesome. So if you're interested in doing that, drop us a line. The more input, the better. I'd love to have a discourse on this. Yeah. All right, everyone.
Thanks for listening in. See you next time.