Telemedicine Talks

#69 - When AI Becomes a Doctor: Regulatory Battles in Pennsylvania & Utah

Episode Summary

What happens when AI chatbots impersonate physicians, and when state agencies let AI prescribe medications without direct physician oversight? Host Dr. Leo Damasco breaks down two major 2026 cases: Pennsylvania's lawsuit against Character.ai, and the clash in Utah between the Medical Licensing Board and the Department of Commerce over Doctronic's AI refill program.

Episode Notes

In this timely solo episode of Telemedicine Talks, Dr. Leo Damasco explores the rapidly evolving intersection of artificial intelligence and medical practice. Following last week’s conversation with Dr. Ashok Gupta on AI hallucinations, Leo examines two real-world cases making headlines in 2026.

First, he dives into the Commonwealth of Pennsylvania's lawsuit against Character.ai, where an undercover investigator exposed a chatbot persona ("Emily") that falsely claimed to be a licensed psychiatrist, provided fake credentials, and offered therapy and medication advice. Leo also analyzes the growing tension in Utah between the Department of Commerce's Office of Artificial Intelligence Policy and the Utah Medical Licensing Board over Doctronic's autonomous AI platform for processing 30-, 60-, and 90-day prescription refills.

Leo shares balanced commentary on the promise and dangers of AI in healthcare, the critical importance of physician involvement, regulatory gaps, and who should ultimately oversee AI-driven medical decisions. He discusses liability, patient safety, and the need for proper collaboration between innovators and medical boards.

This episode is essential listening for clinicians, telemedicine practitioners, digital health entrepreneurs, and anyone interested in the future regulation of AI in medicine.

Top 3 Takeaways:

1. Pennsylvania is suing Character.ai after a chatbot persona, "Emily," posed as a licensed psychiatrist, complete with a fabricated license number, raising the question of who is liable when a "fictional" AI character practices medicine.

2. Utah's Office of Artificial Intelligence Policy authorized Doctronic's autonomous AI to process routine 30-, 60-, and 90-day refills in a regulatory sandbox, and declined to suspend the pilot even after the Utah Medical Licensing Board demanded a pause.

3. A state medical board's main lever is pulling a practitioner's license; an unlicensed AI leaves the board with little leverage, exposing a regulatory gap that calls for frameworks built with medical boards up front, not after launch.

About the Show:

Telemedicine Talks explores the evolving world of digital health, helping physicians navigate new opportunities, regulatory challenges, and career transitions in telemedicine.

About the Hosts:

Telemedicine Talks is hosted by Dr. Leo Damasco, a physician practicing in telemedicine.

Episode Transcription

[00:00:00] Hey, welcome back, everybody, to Telemedicine Talks. This is your host, Leo Damasco. And firstly, I believe this episode is gonna be aired right after Mother's Day, so just wanted to put it out there: Happy belated Mother's Day to all the moms out there. Hopefully you all had a great weekend and a great day with your families.

And a big shout-out to my mom. Thank you, Mom, for everything that you've done for me. All of this could not have happened without you, so thank you. With that being said, let's dive into our episode. If we could title this episode, it would be, hey, "When AI Becomes a Doctor." Last week we talked to Dr.

Ashok Gupta and talked about the hallucinations of AI. If you haven't listened to that episode, I highly recommend it. It was a great conversation. He had some great talking points, great insight. It's gonna go down as one of my favorite episodes of all time just 'cause, hey, it was a great conversation on a good topic.

But after the talk about AI and the limitations of AI with [00:01:00] Dr. Gupta, Phoebe and I started looking into cases and kind of the limitations, the regulations behind it, and a couple of recent cases came to light. And actually, the talk was kind of timely. The two cases I'm gonna talk about today are one case in Pennsylvania involving an AI platform, Character.ai, and the second case, the Doctronic case in Utah.

I know I've mentioned this before and brought this topic up, but recently, over the past couple weeks, there have been some updates. And yeah, it goes along with what we were talking about last week, and these updates bring up interesting points. Let's start with this Pennsylvania versus Character.ai case.

So who is involved? The Commonwealth of Pennsylvania, so the state and the governor's office under Josh Shapiro. [00:02:00] And the AI platform is Character Technologies, which runs Character.ai. And the central controversy here is that Character.ai had a chatbot persona named Emily, and this chatbot claimed to be a psychiatrist licensed in Pennsylvania, and created this whole persona claiming it was trained in London and licensed in London as well.

The chatbot claimed to be able to prescribe medications and do therapy, and it basically told the user that it was a psychiatrist. There was an investigator in Pennsylvania who went undercover and started interacting with this chatbot, and the chatbot made all [00:03:00] these claims to the investigator.

This investigator came back to the governor's office, and now the state of Pennsylvania is suing Character.ai and Character Technologies, claiming that the chatbot was impersonating a physician, engaging in the unauthorized practice of medicine, and misleading users into believing they were receiving legitimate psychiatric care.

The lawsuit claims that the chatbot falsely represented itself as a psychiatrist, fabricated licensure information, discussed treatment for depression, and implied that it was indeed a prescribing authority. So yeah, basically this chatbot was saying, hey, I'm a doctor, here's my license, so forth and so on.

And it actually provided a license number. And when you go to look for that license number under the Pennsylvania licensing board, it didn't exist. Yeah. And [00:04:00] this is recent. The lawsuit was filed in May 2026; the investigation happened much earlier than that. And this is a big deal, right?

We're coming to a point where AI is being used everywhere in medicine. And don't get me wrong, I'm a big proponent of AI in the use of medicine. I personally use it. I use it to help me with my H&Ps. I use it to help me gather information. I use it to help me draft assessment plans, widen my scope, and make it easier for me to find information, gather it, and process it.

But this is different. Now we're talking about AI claiming to be a doctor, providing information, providing therapy even, and giving the user the impression that it is indeed a doctor. A little context now: if you go Google Character.ai, it's this website, [00:05:00] and their whole thing, I think, is providing characters, almost fictional characters, to interact with. And Character.ai is claiming, "Hey, if you go look at our website, we're very upfront that this is a fictional character, that none of this is real and it should not be taken as real professional advice."

But what Pennsylvania is claiming is, "Hey, you've created this whole persona, and it's gonna be difficult for users, especially vulnerable users, to differentiate whether or not, you know, this is coming from a real person, whether or not this has been vetted by real psychiatrists, real medicine.

You just can't do this. You can't claim to be a psychiatrist, even under a fictional guise. It's just dangerous." Now, Character.ai is not actually new to this kind of controversy. They've already faced multiple lawsuits involving teen mental [00:06:00] health and self-harm discussions.

They were involved in a couple of cases, I don't know the cases specifically, but just looking into it and reading the articles, cases involving teen suicide and pushing vulnerable teens toward suicidal ideation. Other lawsuits allege that they were pushing these teens toward self-harm and violence toward parents, and even emotionally manipulative relationships with minors.

Yeah, super interesting case. It's gonna be interesting to see how this plays out, and how much liability and how much responsibility this company, Character Technologies, is going to bear for creating this character. Even though, upfront, they said, "Hey, this is fictional. This shouldn't be taken as medical advice."

But if you're posing as a medical doctor and actually giving licensure information, albeit fake, and claiming that you were trained, again, some of these vulnerable people won't be able [00:07:00] to tell and may take it seriously. And who is responsible?

So the state of Pennsylvania is trying to crack down. Yeah, something to track, something to look at. Super interesting case.

All right, now let's jump into part two of this episode. In part two, we're gonna be talking about a similar but kind of different issue happening in Utah. I kinda mentioned this before in a previous episode when the announcement first came out, but there have been some recent changes and updates that make this conversation a little bit more interesting right now.

So let's go back to the beginning, back to January sixth, two thousand twenty-six. Our players are the Utah Department of Commerce, its Office of Artificial Intelligence Policy, which sits under the Department of Commerce, and Doctronic. For those out there who don't know it, Doctronic is a telemedicine platform that offers urgent care and even primary [00:08:00] care services in a synchronous or asynchronous method, depending on the state, right?

And one of their big things is that they have this AI technology that helps assist and speed up care and makes it more standardized, so forth and so on. Now, just gonna put it out there, don't get me wrong: I haven't worked for Doctronic, but I know friends and colleagues who are working for them, and they absolutely love working for them.

They say, "Hey, their AI model absolutely makes care more efficient. It makes it more straightforward," and they like working under the AI model. Now, how they usually work with it is the AI model does the initial questions and suggests an assessment plan, and then the doc goes in, reviews it, and either says, "Hey, I agree," or asks more questions, so forth and so on.
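To make that workflow concrete, here's a minimal Python sketch of the human-in-the-loop pattern Leo is describing, where the AI drafts the intake and assessment plan and the physician remains the final gate. Everything here, the class, fields, and function names, is a hypothetical illustration, not Doctronic's actual API or logic.

```python
from dataclasses import dataclass

@dataclass
class VisitDraft:
    """An AI-generated intake summary and proposed plan (hypothetical structure)."""
    intake_answers: dict[str, str]
    proposed_assessment: str
    proposed_plan: str
    physician_approved: bool = False

def ai_intake(chief_complaint: str, answers: dict[str, str]) -> VisitDraft:
    # Stand-in for the AI step: gather intake answers and draft an assessment/plan.
    return VisitDraft(
        intake_answers=answers,
        proposed_assessment=f"Draft assessment for: {chief_complaint}",
        proposed_plan="Draft plan, pending physician review",
    )

def physician_review(draft: VisitDraft, agree: bool, revised_plan: str | None = None) -> VisitDraft:
    # The physician is the gate: nothing is finalized until they sign off,
    # and they can revise the plan or send it back for more questions.
    if revised_plan is not None:
        draft.proposed_plan = revised_plan
    draft.physician_approved = agree
    return draft

# Example flow: the AI drafts, then the doc reviews and approves.
draft = ai_intake("sinus congestion", {"Onset?": "3 days", "Fever?": "no"})
final = physician_review(draft, agree=True)
```

The design point is simply that the AI's output is a draft object, not an order: it carries no authority until a human sets `physician_approved`.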

But they absolutely love working for the platform, and they think Doctronic provides great care. So I don't want anybody to think that, hey, this is a bash on Doctronic or anything like that. I actually kinda like the [00:09:00] platform. But again, I'm bringing this issue up because, hey, we should be aware of it, because it pertains to how we practice telemedicine and how telemedicine is now regulated, especially as AI and the role of AI is possibly growing.

So yeah, those are your players: the Utah Department of Commerce, its Office of Artificial Intelligence Policy, and Doctronic. And what they have done is they've partnered so that Doctronic will use its AI platform, and strictly AI, without physicians, to process specific thirty-, sixty-, and ninety-day refills.

Without having a human doctor involved. Yeah, if you actually look at the presser, Utah's Department of Commerce says, "Hey, you know, under this partnership, Doctronic will become the first AI to legally prescribe routine refills by [00:10:00] deploying an autonomous AI health platform designed for fast, private, personalized prescription renewals within Utah's regulatory sandbox framework."

And the sandbox framework is this almost research-style framework where they can pilot AI services like this, bypassing certain regulatory processes. And they further state, "Hey, our Office of Artificial Intelligence Policy will rigorously evaluate the platform's clinical safety protocols, patient experience, and real-world effectiveness."

And they're saying, "Hey, we're at the forefront of AI-powered innovation. This is great," and there's a connotation that, hey, this has been signed off on by the medical community in Utah and that they're working closely with the medical community there.

Now, of course, they're working with the medical community through Doctronic as well, you know, and Doctronic involves a lot of medical specialties and their medical intelligence to help grow and sustain its AI [00:11:00] platform. So that was back in January twenty twenty-six.

Now, what makes it interesting is that recently, in April twenty twenty-six, four months later, the Utah Medical Licensing Board issued a letter to the Utah Department of Commerce and to Doctronic saying, "Hey, wait, we didn't agree to this. Please pause this now."

Right? And I'm looking at the letter right now, and it states that, hey, the medical board is reaffirming that it is indeed tasked with protecting the public in the state of Utah. And, reading this word for word: "The medical board supports the legislative mandate to explore AI implementation, but we, the medical board, also have a stewardship to protect Utah citizens.

Collectively, the medical board has decades of medical experience across a variety of specialties, positioning it to understand the potential [00:12:00] consequences of implementing what may seem like an innocuous task of AI-driven prescription refills." They further state that, hey, overseeing prescription refills is a task reserved for properly licensed medical practitioners for critical safety and clinical reasons.

Each refill requires reassessment and clinical decision-making to safely adjust doses and monitor for side effects, contraindications, and drug interactions, so forth and so on. They say, quote, "There is a reason why prescription refills require physicians' authorizations." And they further state, "Hey, proceeding with this agreement," and that's the Doctronic-Utah agreement, "without consulting the medical board potentially places Utah citizens at risk and remains a major concern for the board.

It is imperative that professionals with medical backgrounds review all proposals prior to implementation to ensure these programs do not compromise patient safety. We must not allow AI or other financial motivations to override this obligation." Yet that is [00:13:00] precisely what occurred here. And in bold, they state, "It is the strong recommendation of the Utah Medical Licensing Board that this program be immediately suspended pending further discussion."

From the initial presser and the initial framing by the Utah Department of Commerce, I don't know if they stated it specifically, but it seemed that, hey, the Utah medical community was behind it. But this new letter, dated late April, makes it very obvious that the medical board wasn't involved in this decision.

And, you know, I think rightly so, they want a part of this decision, and they want this program to stop pending further review from the medical board. And it sounds like the medical board is willing to move forward, but they want a say in how it's done. And I think, again, the State Medical Board is tasked with protecting, regulating, and determining medical [00:14:00] practice within the state. And it seems like, hey, they've been kind of left out of the loop. Now, what makes this even more interesting is the response from the Division of Professional Licensing under the Utah Department of Commerce.

And I'm gonna read stuff out here. Reading the letter, they said, "Hey, we received your letter, signed by eleven of the fourteen board members, on April twentieth. Thank you for your continued dedication to protecting the health and safety of Utah citizens. We deeply value the clinical expertise you provide and will attempt to address below the concerns that you have expressed."

So they basically said, "Hey, the OAIP, the Office of Artificial Intelligence Policy, has rightly been designated by the state to create these AI programs. And, furthermore, the OAIP consulted with practicing medical specialists in the specific field, as well as public health experts and regulators, to ensure that the technology [00:15:00] contains the requisite safety guardrails.

And accordingly, as Dr. Zach Boyd, the OAIP director, explained to the board, the pilot involving Doctronic was rigorously reviewed by several medical professionals prior to launch. The evaluation process generated a large number of suggested substantive adjustments and guardrails, many of which were integrated into the pilot.

And as we communicated in a recent email thread, we look forward to getting the board more involved in our vetting and oversight process as we evaluate the pilot and consider future pilots." So basically, they're saying, "Hey, the OAIP consulted with medical specialists." But, you know, judging from the Utah Medical Board's letter, it obviously wasn't the specialists on the Utah Medical Board. Aren't they supposed to be tasked with the regulation of such processes, the regulation of, basically, [00:16:00] medical practice in Utah?

But I guess they aren't, right? They weren't consulted. The letter doesn't specify which professionals were consulted; they don't identify who was part of the evaluation process. Funny that they mention this Dr. Zach Boyd, the OAIP director.

Interestingly enough, even with the title "doctor" involved, looking into it further, Dr. Zach Boyd, the director, is not a medical doctor. Instead, he is a researcher focusing on AI, machine learning, and mathematical modeling in social science applications. So he is a computer researcher, a mathematician.

So this Dr. Zach Boyd, PhD, the "Dr." listed in the response letter, right, is not a medical doctor. He's a PhD researcher, a mathematician, I [00:17:00] don't know what he considers himself, but not a medical doctor. So basically, the response letter also says, "Hey, just so you know," and this is the Department of Commerce to the medical board, they reiterated that this Doctronic program is in its initial phase, phase one, where a hundred percent of the prescriptions are reviewed by a human physician, but after the fact.

The computer will prescribe the medications, but then a doctor, after the fact, will review all the prescriptions, a hundred percent of them. And they also said, "Hey, the AI-generated chatbot performs the appropriate comprehensive clinical assessments, similar to what doctors perform," and that this chatbot will only prescribe within a strict scope of limitations.

It only prescribes thirty-, sixty-, and ninety-day renewals of medications used to treat specific chronic conditions, and it avoids controlled substances, ADHD medications, so forth and so on. And they also say that, hey, if any of the cases fall outside of the protocols, they will be escalated to a human decision-maker for further processing.
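As described, phase one amounts to a narrow rules gate in front of the AI: refills proceed autonomously only inside a fixed scope, everything else escalates to a human, and even in-scope refills get hundred-percent after-the-fact physician review. Here's a rough Python sketch of that triage logic under those stated assumptions; the condition list, drug classes, and names are illustrative guesses, not Doctronic's actual implementation.

```python
from dataclasses import dataclass

# Illustrative scope rules based on the letter's description; the real
# criteria and drug lists have not been published.
ALLOWED_DAYS_SUPPLY = {30, 60, 90}
COVERED_CHRONIC_CONDITIONS = {"hypertension", "hypothyroidism", "hyperlipidemia"}  # hypothetical examples
EXCLUDED_DRUG_CLASSES = {"controlled_substance", "adhd_stimulant"}  # per the stated exclusions

@dataclass
class RefillRequest:
    medication: str
    drug_class: str
    days_supply: int
    condition: str

def triage_refill(req: RefillRequest) -> str:
    """Route a refill: the AI handles it only when every scope rule passes."""
    in_scope = (
        req.days_supply in ALLOWED_DAYS_SUPPLY
        and req.condition in COVERED_CHRONIC_CONDITIONS
        and req.drug_class not in EXCLUDED_DRUG_CLASSES
    )
    if not in_scope:
        # Anything outside the protocol escalates to a human clinician.
        return "escalate_to_physician"
    # In phase one, even in-scope AI refills are reviewed by a physician after the fact.
    return "ai_refill_with_posthoc_physician_review"

# Example: an in-scope chronic refill vs. an excluded drug class.
print(triage_refill(RefillRequest("lisinopril", "ace_inhibitor", 90, "hypertension")))
print(triage_refill(RefillRequest("methylphenidate", "adhd_stimulant", 30, "adhd")))
```

Seen this way, the medical board's objection is about who writes and audits these rules, since each refill arguably needs clinical reassessment that a static gate like this can't capture.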

Further in the letter, under the heading "Moving Forward in Collaboration," the Department of Commerce said to the Utah Medical Board, "Your letter strongly recommends the immediate suspension of the Doctronic program pending further review and discussion. Because the pilot is currently in phase one, where it is reviewed a hundred percent by a doctor, we will not be suspending the pilot at this time."

So basically they said, "Thank you for the letter. Thank you for your concern. But hey, Utah Medical Board, we're gonna continue doing this." And really, what's stopping them? The medical board [00:19:00] works under the Division of Professional Licensing, under the Department of Commerce.

They really have no authority, and it shows, right? They can't suspend an AI chatbot's license 'cause it's not licensed. What are they gonna do? They have no leverage whatsoever. Now, in the final part of the letter, the Department of Commerce says, "Hey, as we get more data, we're going to invite you, when we see fit, to be involved in the process.

So we look forward to your future involvement in this process." But bottom line, they're gonna continue what they're doing, despite the regulatory board of Utah saying, "We don't agree with this, and you should stop it." So yeah, this brings up a very interesting case and a very interesting question.

Who is in charge, right? And as AI grows, and as the role of AI within medicine grows, who is gonna be in charge, right? The state medical boards, when [00:20:00] they regulate medicine within the state, their leverage is pulling the doc's license, right? When they find that the clinician, the physician, whoever, is practicing against the guidelines, against the board, practicing erroneously, they can pull that practitioner's license, and that's how they have leverage, right?

And they either do corrective stuff or just pull it altogether. But this shows that in this case, they have zero leverage whatsoever. They've made a strong recommendation that, "Hey, you should suspend this program," but the Department of Commerce says, "No, thank you.

We're gonna do what we wanna do, regardless of whether you agree or not." So, kinda uncertain waters there, right? Furthermore, it's a little concerning that it came to light that the Utah Medical Board was not meaningfully consulted before the launch. So again, going back to AI, and just kind of [00:21:00] my editorial: I love AI and what it does and what it can do, and I think last week's discussion kinda opened my eyes a little bit more to how I could look at AI, right?

I think, agreeing with Dr. Gupta, AI is a great tool to augment what we do right now, but the big question is: Can it replace it? And you know, what Doctronic and the Utah Department of Commerce are doing is commendable in the sense that it's trying to make care more efficient.

But they went about it the wrong way, right? They didn't necessarily involve the people who should be involved, right? The people they've designated as regulators of medical care within the state. And furthermore, when those regulators spoke up and said, "Hey, let's take a step back," they basically just turned their backs on them and said, "No, not really.

We thank you, but no thank you." Is there a way that AI could safely practice in this sense, or have a [00:22:00] bigger role? I think absolutely. I think there could be a framework, but not this kind of retroactive framework where you build a program, implement it, and then go back to the regulatory board, to the specialists you have designated as the medical leaders of that state, and ask for their input after the fact.

I think that's going backwards, right? And there are still questions about, hey, how good can AI really be at providing care, you know? And there are a bunch of studies, and I'm looking at a kind of Stanford-Harvard study right now, showing that, hey, AI is good at collecting data, AI is good at predicting at scale, but AI is not necessarily good at practicing care in real-time models, where sometimes your information is not complete or your [00:23:00] information is changing.

It's just not as good when AI has been implemented in real-life models. So many of these studies have found that AI is only about as good as a medical student. And, talking with Dr. Gupta, he quoted that AI right now, the current models, is still making errors at a ten percent rate.

And for us, that's just not acceptable, right? In medical decision-making, we wanna get it under one percent. So this ten percent error rate is not acceptable at this time. You know, so yeah, super interesting. I'm gonna be following this case very closely to see how this plays out.

What can the medical board do to gain leverage? Since this program is still in phase one, the prescriptions still have to go under a medical doctor, right? The prescription is still written under a medical doctor's name. So can the Utah Medical Board pull those doctors' licenses and say, hey, we're not going [00:24:00] to allow you to prescribe in our state anymore?

Is that an option? I don't know. I don't know what other levers the state medical board can pull to convince the Department of Commerce and Doctronic to pause until a better framework is created. Anywho, those are my thoughts, and I welcome your input. Thank you for listening.

It was just me today, so hopefully it was useful. If you have any questions or concerns, drop me a line at leo at telemedsintox.com. You can drop Phoebe a line at phoebe at telemedsintox.com, or reach our info line at info at telemedsintox.com. Until next time, I'll see you all then.

Mahalo.