Telemedicine Talks

#43 - 2026 Compliance Forecast: Surviving the New AI Rules

Episode Summary

In this episode of Telemedicine Talks, Phoebe Gutierrez breaks down the real regulatory shifts coming in 2026 around AI in healthcare—from FDA oversight to state-level legislation—and the biggest mistakes clinicians and digital health companies are making right now. Learn how to protect yourself, your license, and your organization in a rapidly evolving AI landscape.

Episode Notes

This episode is sponsored by Lightstone DIRECT. Lightstone DIRECT invites you to partner with a $12B AUM real estate institution as you grow your portfolio. Access the same single-asset multifamily and industrial deals Lightstone pursues with its own capital – Lightstone co-invests a minimum of 20% in each deal alongside individual investors like you. You’re an institution. Time to invest like one.

________________________

Can you trust an AI that’s writing your treatment plans, or will 2026 be the year clinicians start paying the price for automation?

 In this 2026 compliance predictions episode, Phoebe Gutierrez shares her “love–hate relationship” with AI: it streamlines operations and boosts efficiency, but it cannot be treated as a source of truth. As more practices embed AI into core clinical workflows, the question becomes unavoidable: Who is responsible when AI is wrong? The clinician? The platform? The vendor?

Phoebe explores how regulators are now answering that question. She explains how the FDA, the ONC, and state legislatures are rapidly rolling out rules governing AI-enabled software, clinical decision support, bias testing, audit trails, human oversight, and patient disclosure. With over 250 AI-related bills introduced in 34 states, the landscape is shifting faster than most companies can keep up. She walks through the most common—and dangerous—mistakes she sees digital health companies making, including auto-populating treatment plans without clinician review, failing to track AI overrides, not disclosing AI use in patient encounters, ignoring bias testing, and misunderstanding liability responsibilities between platforms and vendors.

AI isn’t going anywhere—but the way we use it must evolve. This episode gives you the roadmap.

Three Actionable Takeaways:

1. Appoint one owner for AI governance and keep an inventory of every AI-enabled tool, including its purpose, prompts, and whether a human is in the loop.
2. Log every AI recommendation, the reviewing clinician, and any override so you can show human-in-the-loop evidence, and evaluate each tool for bias across populations.
3. Disclose AI use to patients and get consent, track state-specific rules so tools can be adjusted as laws change, and review vendor contracts instead of assuming the vendor carries the liability.

About the Show

Telemedicine Talks explores the evolving world of digital health, helping physicians navigate new opportunities, regulatory challenges, and career transitions in telemedicine.

About the Host:

Phoebe Gutierrez is the host of Telemedicine Talks, where she covers compliance, regulatory, and operational issues for clinicians and digital health companies.

Episode Transcription

[00:00:00] Welcome back to Telemedicine Talks. So this is part of our series of Phoebe's compliance predictions for 2026, and how you, as a clinician, physician, or telemedicine company, can prepare yourself for some of the emerging trends, regulatory changes, and operational hurdles that you might experience in this next year.

So this episode I'm gonna talk all about AI, and for those who listen to the podcast, y'all already know I have a love-hate relationship with AI. I love AI in the sense that it streamlines operations. It makes things easy. It helps me work through and do things in a lot quicker way. But I hate AI because it should never be used as a source of truth, and you always have to question it.

I can't tell you how many times I've found [00:01:00] my AI tools to be wrong, or incorrect, or leaning more towards just wanting to keep me happy as a human. And it is a robot, right? And so we are at one of these weird junctures in healthcare where AI tools are everywhere. They're charting, they're triaging, they're prompting.

They're recommending treatment plans, they're writing protocols, and many clinicians don't know how to interact with these new AI tools. One day, a virtual clinic tells you the AI suggested a diagnosis; the next day, a patient challenges that suggestion. Who is responsible here? The clinician? The vendor? The company? In 2026, that question doesn't remain hypothetical. Real guidance is dropping from the FDA and the Office of the National Coordinator for Health Information Technology, the ONC, and various state rules and regulations [00:02:00] are really trying to frame the right path for how they're gonna oversee AI in healthcare and how you can actually protect patients in the best way.

So today I'm gonna walk you through what the rules are, some of the real mistakes that I'm seeing companies make and how states are leading the way, and exactly what you can do to kind of protect yourself as a clinician, as a company, as a consumer going forward in 2026.

So telemedicine exploded during COVID. I feel like I say that on every freaking episode, but alongside that, AI also exploded. It moved from voice-to-note and intake tools to, ultimately, clinical decision support. And so, you think about this, right? You have a telemedicine platform using an AI triage bot.

It flags a patient as low [00:03:00] risk, it sends the patient home, they return with complications, and the clinician, you know, reviews the bot's output but doesn't dig deeper. Who technically is getting blamed in that scenario? A lot of companies in digital health will say, we are an AI tool that simplifies this, simplifies that, streamlines this, makes this easier.

AI's job is to make our lives easier, right? But you also have to think from a clinician standpoint: everybody is trained in a certain way to kind of keep a human in the loop. And so, in short, if your AI is playing a clinical recommendation role or giving a diagnostic category without you verifying that, and you're just checking a box, you're in this very interesting territory that is now [00:04:00] under a microscope.

The FDA and the states are coming out strong and pretty fast with different rules to protect people. So the FDA has some AI and machine learning rules. They actually came out with an action plan and different guidance requiring manufacturers and developers to follow what's considered good machine learning practice.

And it's just supposed to be about transparency. There's validation, bias mitigation. Really, again, you can go to any AI tool, and if you haven't used AI, like, go use it. It's fun. Look at ChatGPT, but I can promise you, you have to check it and you have to push back on certain things. And that's exactly what it's saying when it says bias mitigation: how can you verify that you're not being led down a deep rabbit hole because you are both kind of feeding into each other's information? The [00:05:00] draft guidance for artificial intelligence-enabled device software functions is setting some expectations for ongoing monitoring, change controls, and really making sure that whatever prompts are being used are overseen and monitored continuously. You know, for example, if you have a telemedicine company that uses AI to recommend a dose change for hormone therapy, that tool will be regulated under this guidance.

And so you must ensure that the vendor and your own deployment meet these different FDA expectations. State legislatures are actively passing different legislation. So over 250 AI bills were introduced in 34 different states, which is a ton. And like, you have to think, Illinois was the first that passed a law on AI usage in mental health services.

People are paying for services, and a lot of [00:06:00] these companies are getting away with using AI and actually not disclosing that. And so a lot of these rules are around how you disclose that information as being an AI bot or an AI algorithm. States like Utah, Colorado, and California are also pushing some of those rules for algorithmic accountability in their codes and in healthcare.

And so if you operate nationally, you have to think about how you're factoring in these different rule changes across the board. Because again, some are telling you that you have to disclose certain things. Others are saying you have to have your operations and your code manually reviewed at different intervals, and that you have to just make sure you're documenting those.

So some real mistakes and issues to watch for in 2026. Now, again, one key thing I wanna say here is, there are of course companies that write the AI themselves. There are tons, right? But there are also tons of companies that [00:07:00] you are contracting with for one simple element: a tool that just does your voice charting, a tool that routes lab orders, a tool that interprets this, a tool that writes that, a tool that communicates this.

A lot of those now have AI built into them, and so part of it is, through your BAA process, how are you actually overseeing some of those practices with your vendors? Which is your responsibility as the business owner. And again, from the clinician standpoint, if you're working for some of these companies, ask those questions, like: who looks at this tool?

Who pays attention to this tool? Who's monitoring this tool? So let's get practical. Here are some of the key mistakes that I see. Using AI without human oversight, pretty basic: one startup allowed algorithm-generated treatment plans [00:08:00] to auto-populate into clinician reviews without any additional review.

And when outcomes faltered, regulators flagged the lack of what they call human-in-the-loop evidence, meaning a human actually reviewed that, a clinician actually reviewed that. Another big mistake is failing to log AI decisions and clinician overrides. So you have to capture when AI makes a decision, what it was, who reviewed it, and when the clinician modified it. Without that log, you actually can't show that you are overseeing your practices.
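
To make that log concrete, here is a minimal sketch of what capturing an AI recommendation and a clinician override might look like. The structure, field names, and example values are illustrative assumptions, not a reference to any specific platform or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    """One auditable entry: what the AI recommended and what the human did."""
    patient_id: str                        # internal identifier, not PHI in the clear
    tool_name: str                         # which AI tool produced the output
    tool_version: str                      # version matters for change control
    ai_recommendation: str                 # what the AI actually suggested
    reviewed_by: Optional[str] = None      # clinician who reviewed it
    final_decision: Optional[str] = None   # what was actually done
    overridden: bool = False               # did the clinician change the AI output?
    override_reason: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_review(entry: AIDecisionRecord, clinician: str,
                  final_decision: str, reason: Optional[str] = None) -> AIDecisionRecord:
    """Capture the human-in-the-loop step: who reviewed, and whether they overrode the AI."""
    entry.reviewed_by = clinician
    entry.final_decision = final_decision
    entry.overridden = final_decision != entry.ai_recommendation
    entry.override_reason = reason
    return entry

# Example: the AI flags "low risk", the clinician disagrees and escalates.
entry = AIDecisionRecord(
    patient_id="pt-1042",
    tool_name="triage-bot",
    tool_version="2.3.1",
    ai_recommendation="low risk - discharge home",
)
record_review(entry, clinician="Dr. Example",
              final_decision="escalate to in-person evaluation",
              reason="symptoms inconsistent with low-risk classification")
```

The point isn't the exact schema; it's that every AI suggestion, the reviewing clinician, and any override are captured somewhere you can actually produce during an audit.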

Another big one is deploying AI without evaluating bias. I talked a little bit about this earlier, but if your AI was trained only on data from 30-year-old men and then you apply it broadly to women or older patients, you have to understand that the output is totally biased, and [00:09:00] the FDA is going to really expect that you have some bias risk mitigation built in.

I can harp on this one all day, but of course, not tracking state-specific rules. So if you have AI that is doing anything patient related and it's not factoring in compliance rules and regulations specifically for the state that it is working in, that's a problem. As I mentioned, Illinois does not allow AI for mental health.

You need to make sure that as all these emerging rules come up, you factor those state rules in so you don't accidentally violate a state rule. And then, as I had already mentioned: misplacing liability between you, as a company or a provider, and the vendor that you are purchasing the software from.

So just because you're licensing these tools does not absolve you from being a participant in this. Regulators are going to [00:10:00] assume that you are overseeing your vendors and that you have the responsibility of deployment, oversight, validation, and ultimately the documentation piece. So if you think that, again, a company is too good to be true and they're handling all these things for you, you need to ask some more questions.

For 2026, you know, you wanna come up with a really tactical plan around AI. And I wanna be really clear as I say this, because I get comments and everybody probably thinks I'm an AI hater, and that's not the case. I think AI is a beautiful tool, if used correctly.

I think the problem is that we tend to use AI for things when we shouldn't. We tend to go to AI to be the doctor versus "how can it help me document?" We try to get clinical diagnoses versus "can you interpret all of these results and then funnel those to the doctor?" We [00:11:00] always need to make sure that humans control the process, because AI is only trained on massive amounts of data, and it's gonna pull out what it thinks is best.

But at the end of the day, the human eye is gonna be able to catch those nuances. So you wanna come up with a plan. And here are, you know, the things that I would do if I were in your shoes. You wanna set up an AI governance structure.

So you wanna make sure that you have one person responsible for all these different tools, to make sure that they meet some of these requirements. You wanna have an inventory of those, so the same person that you're going to appoint to kind of oversee them should do an inventory and really understand what each tool does and what its purpose is.

Do you understand their prompts? Do you understand their algorithms? Is there a human in the loop? You wanna validate and understand that bias review process. So for each tool, ask: was it tested across different populations and use cases? Is there any sort of [00:12:00] bias it could potentially be leaning towards, or forgetting, when it is providing potential recommendations to clinicians?
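
As a rough illustration of that inventory, here is a minimal sketch of how a governance owner might track each tool and its review status; the fields, vendor name, and example values are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIToolRecord:
    """One line of the AI inventory the governance owner maintains."""
    name: str                            # e.g., a voice-charting or triage tool
    vendor: str
    purpose: str                         # what the tool is actually used for
    clinical_facing: bool                # does its output touch patient care?
    human_in_loop: bool                  # is a clinician required to review output?
    bias_tested_populations: List[str]   # populations the vendor says it was validated on
    last_bias_review: str                # date of your own most recent review
    baa_in_place: bool                   # covered under a business associate agreement?

inventory = [
    AIToolRecord(
        name="voice-charting-assistant",
        vendor="ExampleVendor Inc.",
        purpose="draft visit notes from audio",
        clinical_facing=True,
        human_in_loop=True,
        bias_tested_populations=["adults 18-65", "English-speaking"],
        last_bias_review="2025-11-01",
        baa_in_place=True,
    ),
]

# A simple governance check: flag anything clinical-facing with no human in the loop.
flagged = [t.name for t in inventory if t.clinical_facing and not t.human_in_loop]
print("Tools needing review:", flagged)
```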

And again, part of this is, you also wanna be transparent with your clinician team. If you're a clinician, ask these questions. Go, you know, like: are there biases we're missing? Can we iterate on this process? Again, people don't know what they don't know.

So make sure that you speak up and you vocalize and are transparent about certain things, especially if you see gaps. Make sure that there's an audit trail so that you can see who reviewed, who proposed the information, and who edited different things. Of course, in the patient journey, you always wanna make sure that you are disclosing AI use and getting consent.

I think, again, from a patient perspective, you just wanna make sure you're a trusted entity. They are coming to you as a trusted provider, and you just wanna [00:13:00] make sure that you are giving them the experience that they are expecting. Make sure that you're monitoring state rules, especially as they relate to AI, if you're using these tools, because you wanna make sure that you can turn them on and off and adjust as rules change. And then of course, review your contracts, have good oversight of your vendors, and make sure that if your vendor is claiming it's not for medical use and you're using it for medical use, you stop.
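
As a sketch of that "turn it on and off by state" idea, here is an illustrative per-state configuration check. The states, rule entries, and function names are simplified assumptions (for example, treating Illinois's restriction on AI in mental health services as a hard block), not a complete or current legal mapping.

```python
# Illustrative only: a per-state toggle so AI features can be disabled
# where state rules restrict them, and adjusted as those rules change.
STATE_AI_RULES = {
    "IL": {"ai_mental_health_allowed": False, "disclosure_required": True},
    "UT": {"ai_mental_health_allowed": True, "disclosure_required": True},
    "CA": {"ai_mental_health_allowed": True, "disclosure_required": True},
}
DEFAULT_RULES = {"ai_mental_health_allowed": True, "disclosure_required": True}

def ai_feature_enabled(state: str, feature: str) -> bool:
    """Return whether an AI feature may run for a patient in the given state."""
    rules = STATE_AI_RULES.get(state, DEFAULT_RULES)
    if feature == "mental_health_support":
        return rules["ai_mental_health_allowed"]
    return True  # other features allowed by default in this sketch

# Example: block the AI mental-health workflow for an Illinois patient.
if not ai_feature_enabled("IL", "mental_health_support"):
    print("AI mental-health workflow disabled for this state; route to clinician only.")
```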

So really make sure that you're looking into some of those things. For 2026, here are the predictions that I think are coming: I have a feeling that there are gonna be some malpractice and regulatory actions specifically citing algorithmic negligence.

I think that state boards are gonna start issuing sanctions for AI misuse, from both a physician and a clinician standpoint. I [00:14:00] think there's going to be a lot more regulation around AI use and clinician review before reimbursement from various payers as well.

I think platforms are going to be asked for full audits of their AI tools, whether it's who used them, what the outcomes were, the overrides (I think that's gonna be huge), and then of course an error log. And then I really think there's going to be some enforcement where the platform, not just the clinician, is held responsible for some of these AI-driven factors.

So here's your key takeaway: AI is not just a futuristic add-on. It's here. We have to live with it. And if you haven't built it into your practice, you should. It's not something to be scared of, but at the same time, you just have to think: you are liable for this tool that you are using.

You wanna own the oversight. You wanna log every decision, and you wanna keep humans in the loop to monitor [00:15:00] different things and make sure it's acting accordingly. It is really gonna be about how you're safely, transparently, and responsibly using this. In sum, if your AI vendor doesn't hand you validation data, an audit log, and compliance mapping, that's a really good indication that something might be off.

So I'm Phoebe Gutierrez. This is part of our 2026 compliance predictions for the next year in telemedicine. Catch you in the next episode, and again, if you have any questions, or you need help on something, or you have your own predictions, go ahead and email me, Phoebe at telemedicine talks.

I promise we'll get Leo back on here soon to soften the conversation so it's not so dense with all my scary compliance rules. But thank y'all for joining me, and we'll see you next time.