Join Phoebe Gutierrez and Leo Damasco on Telemedicine Talks as they welcome guest Dr. Ashok Gupta, founder of TheraNow. From clinician to CEO, Ashok shares the journey of building a successful hybrid virtual physical therapy platform serving about 70,000 patients. They discuss practical, safe AI implementation (computer vision, ambient scribes, clinician performance tools), hybrid care models, regulatory integration, and why augmenting, not replacing, clinicians is the key to sustainable telehealth success.
What does it take to build a compliant, clinician-friendly, and patient-centered virtual care platform in a highly regulated field like physical therapy?
In this episode of Telemedicine Talks, hosts Phoebe Gutierrez and Leo Damasco welcome Dr. Ashok Gupta, founder of TheraNow. Ashok shares his evolution from treating patients at the VA and in rural America to building a hybrid tele-physical therapy platform that integrates deeply with hospital systems and EHRs like Epic.
The conversation explores the realities of telehealth adoption pre- and post-pandemic, the importance of hybrid (omnichannel) care models, and thoughtful AI integration. Ashok provides real-world examples of safe AI use cases, including computer vision for movement tracking, context-aware ambient-listening scribes, and AI-powered clinician training and feedback tools, while stressing patient safety, avoiding hallucinations, maintaining human oversight, and building transparent, accountable systems.
Listeners will gain practical insights on regulatory navigation, deep workflow integration, equity/accessibility features, and why starting with clinician and patient needs leads to better outcomes and sustainable businesses in digital health.
Top 3 Takeaways:
About the Show:
Telemedicine Talks explores the evolving world of digital health, helping physicians navigate new opportunities, regulatory challenges, and career transitions in telemedicine.
About the Guest:
Dr. Ashok Gupta is the founder and CEO of TheraNow, a leading virtual physical therapy platform that has supported over 70,000 patients across the US since 2021. A former VA physical therapist, he is a passionate advocate for hybrid care models and the safe, responsible integration of AI in telemedicine.
Connect with Dr. Ashok Gupta on:
Website: TheraNow.com
LinkedIn: Dr. Ashok Gupta
About the Hosts:
[00:00:00]
Leo: Hey, welcome back, everybody, to Telemedicine Talks. As always, your gracious host Phoebe, and this is Leo, just hanging along with Phoebe. Today we have a great guest. I'm super excited to hear from him. This is Dr. Ashok Gupta, the founder of TheraNow, a virtual physical therapy platform that has supported over 70,000 patients across the US since 2021.
And he didn't just start as a typical tech founder, right? He started as a clinician, a physical therapist, treating people at the VA, but then moved along to the technical, virtual side and is now growing not only a virtual business but also solutions in AI. Welcome to the show.
Guest: Thank you so much. I really appreciate it. Such an amazing introduction, and nice to be here, Phoebe and Leo. I'm looking forward to this conversation.
Leo: Yeah, no, thank you for taking your time.
Phoebe: I kinda wanna just jump right in.
So, I think, I'm a [00:01:00] big proponent of talking about using AI safely. And I think it's really interesting to know where you're at. Most clinicians are so averse to it, but you're one that kind of embraces it. So, I don't know, it's always good to understand where you came from to see, like, where you're at.
So can you maybe just share a little bit about, like, your history and how it actually has evolved into this place where you're such a, and correct me if I'm wrong, tech-forward, AI-forward CEO thinker?
Guest: All right, that's great. Happy to talk a lot more about how it came about.
So when we started out, as clinicians, we were looking at every single thing that's happening across the board. Fortunately, with my background, I was able to work in big cities and, at the same time, in the smallest of towns in America. So I was able to see healthcare from both perspectives, but one common thing across the board was access to care.
So right now I'm [00:02:00] talking about the 2015, 2016 timeframe, and we're looking at, in big cities, people struggling with traffic or wait times in the clinics because they can't get appointments that fast. In the remote areas, they're struggling with the drive itself, like miles, to get access to care.
So me and my wife, we were both looking at each other, both Doctors of Physical Therapy, putting a physical therapy lens on it, and we're like, something needs to be done about this. And one day we're watching, and there's actually a commercial about telemental health. I'm like, we can do that.
There's definitely a fair amount of physical therapy that can be done virtually. Not 100%, but how much? And then we started to dig more and more into where it gets to the point where we can't really perform the care virtually, and the person must actually have physical contact.
So it started from there, and then we started to build a company around this ideology that virtual telemedicine care would be [00:03:00] the way to go forward. And we did what all entrepreneurs do: we think, on day one we're gonna put out a tool, and then people will start using it. We were faced with reality, and we did at least five pivots and finally realized that's not the model that's gonna work. The right way to work is not to literally say, for 50 years we have been doing certain things a certain way, and now suddenly we're gonna provide you with telemedicine, and that's gonna change everything.
What we need to do is hybrid care. These conventional care models and telemedicine need to go together, hand in hand. And we started partnering with large hospital systems, and we started triaging referrals based on what's best for telemedicine, what's best for in-person care, what's good for hybrid, what about the omnichannel experience.
And then that's the journey that brought us into the success of building a virtual care platform built for a special use case, and being able to help hundreds and [00:04:00] thousands of people through it. And then specifically, Phoebe, to come back to your question about AI and safe AI, I think our approach to AI was not any different.
We're like, we can't just say we have been doing things manually for years and then suddenly hand it all over to AI and say AI works, AI is the way to do it, AI is the future of it. Instead, we found the loopholes where we could safely implement AI into the current workflow.
And, as we'll talk about a little bit more during this conversation, I'll give you real examples of what safe AI looks like, versus just throwing AI into something because it's a trend.
Phoebe: Yeah. Yeah.
Leo: Actually, that's a good point. Now, going back to when you started, you started back before the pandemic, right?
And did those pivots occur pre-, during, or post-pandemic? And how did you see your approach change? [00:05:00] Pre-pandemic, telehealth was always there, right? But it wasn't necessarily the thing, right? Then the pandemic hits, and that's the only thing, right?
And now I think we live in this post-pandemic world where things are evening out, and telehealth is here to stay, right? But it's not the only thing now, and the landscape's changed again. Going back to that, and now tying in AI as well, you know, that is now becoming the thing, right?
How have you pivoted and moved along to maintain and sustain during that pathway, going with the flow and with the changes as such?
Guest: Yeah. One small thing I would say: life is the best teacher, and that's what the entrepreneurial life is all about too. When you start, you have these different visions about how the company's gonna work, what the product is.
And you're right, telemedicine and telehealth is a very different use case of technology [00:06:00] than any other industry, because we're highly regulated. So when COVID came, it really shifted the whole world of healthcare, because no one could actually meet; social distancing was the biggest problem.
Mm-hmm. And then the only way you could connect was this, the way we are actually talking right now. But telemedicine was there before too, and it was not accepted well, not because people doubted telemedicine; the regulation was the limiting factor. And then when COVID came, people started to use telehealth, and suddenly they loved it and liked it, and that's where the telehealth...
Actually, let me tell you, that was not at all the case. We are still, as of today, promoting, educating, telling people that this works and the outcomes are exactly the same as conventional care. But what changed during COVID was regulations. All the telehealth that was not covered before, there was no payment model for it.
There was [00:07:00] only one way you would think of telemedicine: as part of value-based care, or adding a little bit of show, like bells and whistles, to an existing system, saying, "Oh, we do offer telemedicine as well, by the way, but we don't really promote it. We don't actually have it mainstream in our workflow.
We don't even have tools around it." I'll give you an example when it comes to physical medicine. When you're doing, let's say, telepsych, your intervention is via your words, and you can definitely communicate your words via video. And with video, you can even express your expressions, your body language, every single thing.
But when you're doing physical medicine, or physical therapy, via telehealth, you can't just be sitting and talking the way we're doing right now. You need more. So in the 2019-2020 timeframe, we first added remote monitoring via home exercise programs that you could do in the app.
And then during the session, I can actually evaluate what you're doing. In 2020, we installed computer vision [00:08:00] AI into our application. So now I can see how much your shoulder is moving, how much your wrist is moving; I can see the quality. The easiest way to think of it is like you're watching the NFL and there's a commentator talking with annotations, like, the player moving from here to here.
That's the point we're talking about: I'm actually having you do a movement, and I'm able to annotate on top of your movement and say, see, remember last time we were able to go up to this point? Now we can go up to this point. That changes the way you deliver care, and that is what we had to build over time.
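The range-of-motion measurement Ashok describes, turning camera input into documented degrees, comes down to simple geometry once a pose-estimation model has supplied joint landmarks. As a rough illustration only (the landmark coordinates are made up, and this is not TheraNow's actual pipeline), the angle at a joint can be computed from three 2D keypoints:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at vertex b formed by points a-b-c,
    e.g. hip-shoulder-elbow landmarks for shoulder flexion."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp for floating-point safety before acos
    cos_t = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_t))

# Arm out to the side vs. arm overhead (hypothetical landmarks)
print(round(joint_angle((0, -1), (0, 0), (1, 0))))  # 90
print(round(joint_angle((0, -1), (0, 0), (0, 1))))  # 180
```

In a real system the landmarks would come from a pose-estimation model frame by frame, and the session-over-session comparison ("last time we reached this point") is just a comparison of these documented angles.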
And the pivots weren't related to what we deliver; we are still delivering what we started as a company: tele-physical therapy. But the business model kept pivoting. We started out as an Uber of virtual physical therapy: people would come, we'd just connect them, and that failed. Then we thought we'd go to the value-based care people and start offering virtual.
They questioned it: we already have physical therapy, why do we need virtual physical therapy, [00:09:00] and you're not any cheaper, because the time of the therapist is what we're paying for, and it takes the same amount of time in clinic as on telehealth, so we can't actually undercut the pricing.
Finally we realized, okay, it has to be an omnichannel experience: when the patients don't want to go to the clinic, that's when telehealth should be an option. When the patient has already gone to the clinic and they don't really need hands-on care, that's when telehealth needs to be an option.
So in today's model, when we talk about telemedicine, we are so ingrained into the system that a patient can actually have an evaluation online, then go for the second visit in the clinic, and then from the third visit onwards be online, or the other way around. So it's literally by choice. Normally your standalone EMR or EHR doesn't support that, and you'd have to go to a new clinician and they'd have to start with their own evaluation because of the EHR.
So what we [00:10:00] did, the important part, was embed ourselves not only into the workflow but also integrate deeply into these softwares. So if you're on Epic, you should be able to see any and every word I type here on TheraNow in real time, and the other way around at the same time. So integrations came very deeply.
And here I think you're getting the gist of our evolution story: we started as pure telehealth, then we added more hybrid care, then we got the Epic integration and all the other software integrations. So I'll give you an example. People think telehealth and keep forgetting we need to be ADA compliant.
We need to be able to offer equity, to be equitable and accessible to everyone, especially the ones with special needs. So now think of someone that needs sign language. Are we not gonna support them? Are we not gonna offer them the [00:11:00] care? How is that going to work? And that's where your integrations with companies like LanguageLine and Propio come into play, and what we're able to do is have a real-time ASL interpreter join in so you're able to offer the service.
And what about beyond special needs? Let's talk about language barriers. So now you have a Vietnamese-speaking person; what would happen if they went to the hospital? How can we hold telehealth to different standards than conventional medicine? If a hospital didn't provide an interpreter in the hospital, you would be in trouble as a health system.
How can you have your technology-driven telehealth service not be able to support that? So we learned all these things early on and then embedded all these features, or integrations, or services, whatever you wanna call them, into one to ensure that the patient has the same exact experience that they would [00:12:00] otherwise have in a conventional setting.
Phoebe: Yeah. I was gonna say, as you were talking, I mean, I think one of the things that's really interesting, which I absolutely applaud you for doing, is that the whole experience feels very patient-focused. A lot of times I meet CEOs or founders and it's this idea of, I'm gonna build the thing I want, versus the thing that needs to be built around the person.
And again, I always joke, if you actually build for the patient, you will make a lot of money, because it's so disconnected and fragmented and there are so many issues with it today. And so you really have kind of pieced together all the regulatory hurdles, because, as regulators, that's who we're writing for.
Like, we're trying to come up with rules and policies to protect the patient, but also to make sure that the patient has parity with brick and mortar, and, you know, all of those different things. And so I think it's really interesting that you were able to think about it from [00:13:00] that lens. And maybe it's because of physical therapy, that you guys are so used to the close nature of working with the patient.
Leo: And what I also like is that you're focused on an extension of the current medical model, not necessarily a replacement, right? And embedding yourself in that, not setting yourself apart or saying, "Hey, you know, we are here, you were there, and you gotta choose one."
And yeah, I think a lot of companies these days are trying to make that distinction: we're totally virtual, blah, blah, blah, we set ourselves apart. But people are gonna go to what they're comfortable with and what they know.
So it's no surprise that it's been that successful.
Guest: I just wanna echo the same ideology back when we talk about AI. We didn't do anything different from that same approach. And I always talk about this to any new entrepreneurs, my friends who are actually building businesses, or whenever I talk to my [00:14:00] customers or health systems: we can't just replace stuff.
We need to find ways to augment the current process, see where the deficiencies are, and then see if we can find a better way to do the same thing, instead of saying, "Oh, the old ways are old ways and telehealth is the new future." And we did the same for telehealth versus AI as well, or technology versus AI, or manual versus AI.
And what we talked about is, all right, AI is coming in, and the very first presentation of AI was that AI is going to replace the clinicians. That's what the very first presentation of AI was; that's how it was sold to everybody. And the very first reaction from clinicians was, "Oh, no, it's gonna take my job away."
And it's not worth it, it's not effective, it's not right. We're still dealing with hallucinations in the AI models. And then people started to talk about, "Oh, AI can actually be your doctor." [00:15:00] But potentially they're not from the healthcare field, and the people making such bold claims are visionaries, absolutely, but they're a little bit too far ahead and not connected with healthcare. One, the regulations.
Number two, a 5% error rate in any other industry would be okay, but it's not safe for healthcare.
Leo: No, not at all, right? Anything greater than 1%, you're in the red, right? Yeah. So tell us about safe integration, your approach. What did that look like?
Because again, you've mentioned that people go in and go, "I'm gonna get AI to do this, this, and this. We're gonna need minimal manpower, actual clinician or physician time, whatnot." So how did you approach this, and how do you differentiate that?
Guest: First of all, we need to understand, when we talk about AI, what exactly is the use case-
Leo: mm-hmm.
Guest: and then what particular technology we are talking about. So I [00:16:00] believe it was 2022, 2023 when ChatGPT came around and became a thing, and AI became a mainstream topic. And what we're talking about in that context is a word-prediction model. It's all about the next token, or the next word; it's predicting it, putting it in a sentence, and giving it back to you.
That's generative AI. That is what we are talking about. But I think earlier in the discussion we talked about computer vision AI. We installed computer vision AI into our operative workflow well before any of this, in 2020, because we found it would be a lot easier and more effective if I can document how much my patient is moving.
I wanna distinguish that as physical AI. It's taking physical sensory feedback and putting that into a document: four degrees or five degrees, 20 centimeters or five inches. Versus a [00:17:00] generative AI model. Now, hallucination is not a problem in physical AI at this point in time.
It's about what you're feeding it. But the problem is, when the AI doesn't know anything, it will just come up with something. And the bad part is, it sounds so confident when it's so wrong that a normal person would just not even question it. And that's the thing that scares me the most.
And once you recognize that, you can build a safe AI model, not the foundation model but the workflow model, that actually ensures we know the limitation of the AI, and then we have a product that is designed to embrace that limitation and actually fix it, so that it's safe for the consumers to use.
So I'll give you an example of that. Instead of starting with clinical decision support, the very first generative AI application in TheraNow, we [00:18:00] started to work on, was an ambient listening scribe. And the scribe is not the latest thing; it's a hot topic right now.
It's probably been well covered on your podcast also; you must have talked a lot about AI scribes. But what we did was change the context a little bit for the AI scribe and ambient listening. In telehealth, obviously, we were one step ahead. We don't have to walk into one of the treatment rooms with a phone and put it right there: "This call is being recorded."
Telehealth sessions can easily be tapped into for ambient listening; it's super easy to get started. But instead of just raw listening and then producing summary notes, what we actually did was an EHR integration, contextualizing the session well before the session even started.
So now the AI model has the context of you: who you are, why you are here, and what's going to happen in this session. And then we're [00:19:00] having a conversation, and this conversation is getting fed into that information, and towards the end of it we're getting a complete note, not 15 minutes later.
We're getting it in real time, because if you don't provide AI output at the point of service, it's too late. Nobody has time to go back and fix stuff, and it needs to be as complete as possible. That is where we started the application of AI, but there are many more. I'm gonna talk about a couple. One of the most exciting ones that we recently released is, okay, we already got CDI covered, we already got the ambient listening piece covered.
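The "contextualize before the session" idea can be sketched as a prompt-assembly step: the model sees who the patient is and why they're there before the conversation transcript is fed in, and is told to flag gaps rather than guess. Everything below (field names, wording) is an illustration, not TheraNow's actual implementation:

```python
def build_scribe_prompt(patient: dict, transcript: str) -> str:
    """Ground the scribe in EHR context gathered before the visit,
    and instruct it to flag gaps instead of inventing details."""
    context = (
        f"Patient: {patient['name']}\n"
        f"Referral reason: {patient['referral']}\n"
        f"Visit type: {patient['visit_type']}\n"
    )
    rules = (
        "Draft a SOAP note using ONLY the context and transcript below. "
        "If a detail is missing, write 'not documented'; never guess."
    )
    return (f"{rules}\n\n--- CONTEXT ---\n{context}"
            f"\n--- TRANSCRIPT ---\n{transcript}")

# Hypothetical patient record and session transcript
prompt = build_scribe_prompt(
    {"name": "Jane Doe", "referral": "post-op shoulder",
     "visit_type": "tele-PT evaluation"},
    "Clinician: How is the shoulder feeling today? ...",
)
print(prompt)
```

The note-drafting call itself would go to whatever HIPAA-compliant model hosting is in use; the point here is only that the grounding context arrives ahead of the transcript, which is what lets the note be complete at the point of service.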
Somehow, one can blame my team that we're already biased toward build over buy; we always want to build because we are builders, and we love to build products. The ambient listening, the workflow optimizations, the agents doing automated administrative tasks, that is all native to the TheraNow [00:20:00] platform, and that's what keeps us lean.
We don't have millions of dollars in fundraising, so it's not just by choice but by design: we have to be very, very lean in anything that we think about.
Phoebe: Yeah. I think one thing that I've always tried to emphasize is the hallucination aspect. How do you, on your side, train some of your models to make sure that you have the appropriate safeguards around it?
And then, on top of that, how are you working within your team, within your platform, to train other people to identify hallucinations? I'm in the compliance world, so every day somebody tells me that they ChatGPT'd some rule and regulation that is wrong.
So I'm big on, "Oh, gosh, you really have to make sure the model's trained." But can you maybe go a little deeper into that?
Guest: Right. [00:21:00] See, inherently, LLMs are always going to spit out wrong information if they don't have information. So how can we make the situation better for the LLM?
That would be one of the most important things. What if we take a very specific niche? You can take an open-source Llama model and actually fine-tune it specific to your purpose. But how are you going to train it? Are you going to create a real human-in-the-loop training model?
And this is where I say the coding and application-building LLMs are so amazing, because there's so much structured data available. Healthcare doesn't have that much structured data. So unless you have access to structured data, the other thing is that you need to create a feedback-loop mechanism to train your LLM.
So what does that mean? You can't have a doctor sit down and every time say, "Oh yeah, your answer is right; oh no, your answer is wrong," and [00:22:00] keep doing this. That's going to be very expensive. So how exactly are you going to make a better situation for the LLM? One is to provide more and more specific information for that context.
The good thing is, models used to be very limited, to 20,000 or 30,000 tokens in the context window. A very simplified version of what the context window is: when you go into a ChatGPT thread, after a certain number of back-and-forths it starts forgetting what you were saying up top, and then it'll just start a brand-new conversation right in the middle of it.
That's because the context window used to be that limited. I think ChatGPT now is like 128K tokens, and there are models with a million tokens in the context too. That means it can remember that much before it starts responding to your specific question. So one, use a model with a larger context window; provide a lot more realistic context; and then, if you [00:23:00] don't have structured data, create a workflow that actually prevents the error.
So I'll give you an example. If you create a note or a summary and you provide it in real time to the clinician, it's fresh in my mind, before I hop onto three other patients and then you're trying to say, "Hey, can you take a look at this note?" when I've probably already forgotten what happened three hours ago.
So if you can process very fast, while I'm actually still with the patient and closing up my session, saying bye, talking about the next session, the note is already ready and I'm reviewing it. That would be one way to make it safe, because now it's fresh in my mind and I can correct whatever the LLM may have gotten wrong.
So that's one way to do it. The second thing, I'll take you back to an example of my daughter, Aria. She's in first grade, and I practice a lot of math with her, and what she does is say, "Here's the answer," [00:24:00] because she's very big on mental math, and she'll just spit out an answer like that.
I'm like, "No, tell me the process. How did you get to this?" And she's like, "But I got the right answer." And I'm like, no, it's the process that matters. And I think you know where I'm going with this: it's about having the AI be accountable, and backtracking, logging, and tracing the process of how it came up with its results is also important.
If you're ready for this, I'm gonna give you one more real use case- Yeah. -of how we actually made AI accountable.
Phoebe: Oh yeah, please. Yeah, please. I wanna use it.
Guest: So we have around 450 clinicians, all distributed. They're not under one roof; we cannot just host a training session, put everybody in an auditorium, and train them.
We need a standardized output, and we need a similar experience: whether you see Ashok, or you see Richard, or you see Jared, your experience should be [00:25:00] kind of similar. But as a company, that's one of the biggest challenges. And the bigger challenge was, when we all went to school, we were all taught bedside manners.
All our clinicians were taught bedside manners. COVID hits, we're all virtual, and who taught us webside manners? It's webside now, right? We all developed our own strategies: some like the light that way, some like the camera, some the microphone; some cared about the way they dressed, and some did not care and joined sessions from a car or from their kid's soccer game, picking up the phone and thinking, "Oh yeah, as far as I'm talking to the patient, that's more important than what everything looks like."
So we were like, "No, that's not gonna work." So we built a training module around it, Bedside to Webside 101, trained everybody on it, and then asked, "How do we measure the effectiveness of this?" So we went back to AI, and we built [00:26:00] a model that is able to see the exact interaction between clinician and patient and, at the end of it, score it on a rubric.
We went to the best of our clinicians and asked, "What makes your sessions better?" We interviewed them. Then we went to the AI and said, "Hey, these are the clinicians whose NPS is the best. Tell us what's different between their sessions and those of these therapists, whose NPS is lower." NPS, net promoter score, or patient feedback in simple terms.
And it came back and said, "These therapists introduce themselves very well. These therapists have a very good explanation of their plan of care. They talk about continuity of care, and they build trust at the very beginning by explaining how the whole program works." We're like, definitely, this is what's going to be our rubric now.
We're gonna measure every single session on these rubrics, see how well this particular session went, and score it. But here comes the safe AI part: instead of just spitting [00:27:00] out a number, four out of five, we asked the AI to show us the evidence on the basis of which it is scoring. So when the AI is scoring you four out of five on the introduction, it's actually giving you samples: this is what you said, and that's why your score is this. Plus, to make it a little bit fun, we added a leaderboard. Instead of making you feel bad, we created a company-wide leaderboard, which showed only the top five and their scores.
And then, I mean, obviously, who doesn't wanna be in the top five? So-
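The "show us the evidence" safeguard amounts to rejecting any rubric score whose supporting quote can't be found verbatim in the session transcript, so an invented justification fails automatically. A minimal sketch of that check (the rubric fields and 1-5 score range are assumptions for illustration, not TheraNow's schema):

```python
def validate_scores(transcript: str, result: dict) -> bool:
    """Accept a rubric result only if every score is in range and
    every cited evidence quote actually appears in the transcript."""
    for item in result["rubric"]:
        if not 1 <= item["score"] <= 5:
            return False
        if item["evidence"] not in transcript:
            return False  # the model invented its own justification
    return True

transcript = "Hi, I'm Dr. Smith, and here's how our program works."
good = {"rubric": [{"criterion": "introduction", "score": 4,
                    "evidence": "Hi, I'm Dr. Smith"}]}
bad = {"rubric": [{"criterion": "introduction", "score": 5,
                   "evidence": "I explained the full plan of care"}]}
print(validate_scores(transcript, good))  # True
print(validate_scores(transcript, bad))   # False
```

An exact-substring check is deliberately strict; a production system might allow normalized or fuzzy matching, but the principle is the same: no score without traceable evidence.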
Phoebe: Friendly competition, I think, always helps. You talk about hospital systems and health plans; I mean, everybody's always talking about their quality scores. No, I think that's awesome.
The AI companies that I've worked with that have, in my opinion, done it smart do it exactly like you: they have a judge system. So it's not just a score; it's, we're scoring you because of this. We're giving you the evidence, the [00:28:00] rationale, the reasoning.
And then, yeah, the most ridiculous prompts are going into these things to basically train them, because, to your point, you're talking to a robot. You have to be very specific. You have to make sure that you understand the output so you can correct it and adjust it as needed.
But super interesting.
Guest: Yeah, I'm glad you like it. There are safe ways of doing things and there are very much unsafe ways of doing things. I know tech is always about learning along the way, but in healthcare, one thing I always talk about is patient safety.
There's a lot at risk here, not money. Money can be earned again; you can lose it and you can bring it back. But lives are what we're talking about here. Human error has been one of the biggest causes of death, of mortality or morbidity, and AI-based error is something there [00:29:00] are ways to actually control, so why not?
I'm the biggest proponent and supporter of AI and AI applications. I build those every day, but I am also one of the most cautious. And if you want an even more cautious one, it's the health systems. I don't think it was a mentality we were born with,
like we have to be consciously negative about AI. But if you're working with health systems as your consumer or customer, you'll see they're very, very risk averse, and the processes they put into procurement, they'll have you go through those.
So if you have been through a few of those, then you automatically become one of them: risk averse, safety first, and transparent in the architectural design of what you do and how you do it.
Leo: Yeah. Well, what are kind of the biggest red flags or the most common red flags that you've seen with people implementing this?
What are people doing wrong when they [00:30:00] jump on the AI train?
Guest: Right. The one I want to talk about most is transparency. That's one of the biggest problems, because a lot of the consumers of tech are not tech people. ... So they don't understand what's under the hood. So the majority of the time, a startup founder will think the easiest way to put something out there is just, what if I created a small API, sent this information to ChatGPT or Claude, and reproduced the output? I can build an MVP.
Fine, as long as you're in a controlled environment. You're creating, not even an MVP, a beta, and you're just testing certain ideas, or you're fine because it's not real patient data. But the moment you start using that product in a real environment, you're not only doing something wrong, you're also breaking a lot of laws around PII and PHI and HIPAA. Right.
... And then they learn the hard way, and there are safer ways to do it. You [00:31:00] don't really have to cross those barriers when there are easy HIPAA-compliant environments you can build. Instead of using these commercially available models through their APIs, you can actually download models, you can fine-tune them, and it doesn't take billions of dollars to fine-tune a model.
You can do very limited weights-and-biases fine-tuning and get a very good output that even a regular ChatGPT 5.2 or 5.4 wouldn't be built to give you.
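To make the point above concrete, here is a minimal, purely illustrative sketch of one "safer way": scrubbing obvious identifiers before any text leaves a controlled environment. The patterns and function names are invented for this sketch; a real deployment would use a vetted de-identification pipeline, not a handful of regexes.

```python
import re

# Hypothetical, minimal PHI scrubber -- illustrative only, NOT a
# substitute for a vetted de-identification pipeline.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt MRN: 445566, call 555-123-4567, SSN 123-45-6789."
print(scrub(note))
# -> Pt [MRN], call [PHONE], SSN [SSN].
```

The broader version of this idea is what the guest describes: keep the model itself inside your HIPAA-compliant boundary (a downloaded, fine-tuned open model) so raw patient text never reaches a third-party API at all.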
Leo: That's a super interesting insight. Yeah, because people jump headlong into it without much thought, saying, "Hey, this is gonna be the end-all be-all, and this is gonna fix everything."
Or, "This is gonna run my whole clinical model," without really understanding what's going on. And really, like you said, the pitfalls, right? A lot of these tech people come into the medical world and just don't understand the medical rules and the nuances behind them. Yeah. [00:32:00]
Phoebe: Yeah, I'll add too, because I work with a lot of companies that heavily use AI.
There's a way consumers, as patients, want to be interacted with when they're talking about their healthcare services, and there are ways that we do not want to be.
I think you have the clinics that are using AI in ways that make the experience more convenient, but there's still that human element guiding the care. To me, that's everybody's dream, versus, you know, we all hate when we have to get on the phone and it feels like a spam call.
So I find it really funny that there are certain frictions we want to use AI to reduce, and I'm like, no, no, no. AI scribes, great. That proprietary software you developed to track movement, that is phenomenal, but it's not removing [00:33:00] the need for a physical therapist.
Guest: Correct. Yeah, the core of technology adoption is augmenting or creating efficiency in the current workflows, rather than adding two more extra clicks or promising the stars and the moon, saying you won't have to do anything, it's all going to happen by this technology.
It takes time, and yes, we know we used to work the farms manually and now the tractors take care of it. Yes, things are going to move in that direction; it's just, what is the right way of doing it? That's the most important question. I'll give you a couple of examples of where human in the loop is actually not a bad thing. But what I'm trying to say is, human in the loop is not "you do a thing with the AI and I check everything."
That's a waste of human in the loop. It's good for training, but it's not a consumer product. When I talk about human in the [00:34:00] loop, it's where the human is integrated into the entire workflow. This is where the human is leveraging the AI, not where the human is literally out of the loop.
So I'll give you a small example: chart summaries. You're a clinician, I believe an ER doc, so you probably don't have to deal with this a lot, but imagine yourself on a floor as a hospitalist. Before you walk into the patient's room, how much data do you have to consume- mm-hmm.
so that you know everything that has happened in the last 24 hours, right? And this is where the AI is not doing clinical decision support. All it is doing is compiling all the data, everything that happened in the last 24 hours, and giving you a small summary of it that you can consume very quickly and then be able to go and help the patient.
Now, does that eliminate the doctor? No. What it eliminates is the need for the doctor to actually go into that EMR, make 50 clicks, and find every small thing that happened over the last [00:35:00] 24 hours. So that is a safe way of applying AI, versus the second one, where a patient just logs onto some software and starts putting in, "These are my symptoms,"
and it starts prescribing medication.
Leo: Yeah, and I forget what study it was, but they actually studied the use of AI in clinical models, and they found that AI was strongest at summarizing, but not necessarily at reading clinical context; there it was pretty weak compared to the standard of care in terms of actually practicing medicine.
So that kind of goes along with it. Right. And hey, this has been a very interesting conversation about AI and AI safety. We've been talking about it recently with our most recent guests, and this is a great addition, especially the discussion of hallucinations, what to look out for, and how to actually use it safely.
So thank you. I totally appreciate [00:36:00] the conversation. Thank you so much. Phoebe, do you have anything?
Phoebe: No, I guess just one closing thought. For people who are building in this space today, do you have any advice for those who are either starting to build or considering integrating some sort of AI functionality into their own practice or their own telemedicine company? The things you wish you knew 10 years ago? I guess all of us were rookies at AI back then, maybe not the big ML engineers, but we certainly were.
Guest: Yeah, that's an amazing question; there's so much to it. A couple of experience-related pointers I can offer: yes, a small error rate at scale becomes very large, so don't ignore it. That's one of the most important ones. Just 1%? Well, 1% of millions of people using it is a lot of people [00:37:00] having a bad experience with the product.
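The "small error rate at scale" point is easy to make concrete with back-of-envelope arithmetic. The user counts below are illustrative (the 70,000 figure echoes TheraNow's patient count mentioned in this episode; the rest are hypothetical):

```python
def affected_users(error_rate: float, users: int) -> int:
    """Expected number of users who hit a bad output, treating the
    error rate as a simple per-user probability."""
    return round(error_rate * users)

# A 1% error rate sounds small until you multiply it out.
for users in (70_000, 1_000_000, 5_000_000):
    print(f"{users:>9,} users -> {affected_users(0.01, users):>7,} bad experiences")
```

At 5 million users, a "mere" 1% error rate means 50,000 people encountering a faulty output, which in healthcare is a patient-safety problem, not a rounding error.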
Also very important: see how your product is going to behave in a real deployment, or go and actually study similar examples of real deployments. Not research papers, not only white papers, not how the websites talk about their products, but how a similar product actually behaved in a real deployment. That would be one of the most important things someone can really do to learn and build something that is going to be acceptable tomorrow.
So that's the real application side of it. But on the flip side, not only an AI product but any healthcare product, and I've been repeatedly talking about this, has to be deeply integrated into the workflows and the tech stack that your current users are already using. If you can't think around that, your product is never going to be consumed at [00:38:00] mass scale.
And the simplest example I always give is: you don't start by building the house. The first thing you do is lay the foundation and pull all the utilities to the land. That means you start by pulling in the integrations with the EMR and EHR so that you can get the data and feed data back, like water, electricity, internet, and then you build the other parts and put the house on top of it.
And those are the type of houses people live in.
Phoebe: Yeah, I totally agree. I think, again, technology, which so many people don't understand, is really hard to switch or cut over or change after the fact. So to your point, laying that foundation from day one is super important, and making sure you have it built into your ultimate product, especially if it's going to be instrumental to your workflow.
Guest: Absolutely.
Leo: Well, if people want to talk to you more about this and [00:39:00] gain more insight, how can they reach you?
Guest: I'm pretty accessible. I'm very active on LinkedIn; you can easily look me up, Ashok Gupta, and if you still don't find me, just put Thera behind it and you'll definitely get it.
If not that, there's my company's website, TheraNow.com, T-H-E-R-A-N-O-W dot com; there's a lot of information there. We are a small team and very accessible, and not only about virtual health, telehealth, or AI in healthcare specifically, but anything in this domain that excites you, I'm happy to connect and chat a lot more about.
Leo: Yeah, your excitement and interest in this are electric, and, you know, it shows. So thank you so much. This has been a super enjoyable talk, and thank you for gracing us with your time.
Guest: Oh, I appreciate it. Appreciate you having me over here. Thank you.
Leo: Yeah. All right. So, everybody out there, thank you again for listening and we'll catch you again next time.