Ambient AI tools in health care: A new companion for clinicians

Ambient listening artificial intelligence (Ambient AI) is gaining traction in the clinical setting. Ambient AI tools passively 'listen' to conversations between clinicians and patients and, based on these conversations, automatically transcribe and summarize encounters.
Ambient AI has the potential to reduce the time that treating clinicians spend documenting patient encounters, which may ease clinician burnout over time. Importantly, Ambient AI may also enhance the patient experience and improve the accuracy of patient encounter documentation.
Ambient AI systems typically rely on microphones integrated into smartphones, tablets or dedicated in-room devices. These systems use automatic speech recognition with natural language processing and machine-learning algorithms to extract relevant clinical information from a conversation. In many cases, the output is a draft clinical note or a structured data entry into the electronic health record system, without manual input from a clinician.
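To make that pipeline concrete, here is a minimal sketch of the transcribe-then-summarize flow described above. The libraries (openai-whisper and the OpenAI Python client), model names, prompt and file name are illustrative assumptions rather than any vendor's actual implementation, and a real deployment would also need HIPAA-grade safeguards around the audio and the resulting note.

```python
# Illustrative sketch only: transcribe an encounter recording, then draft a
# clinical note from the transcript. Library choices, model names and the
# prompt are assumptions; production tools must add PHI safeguards.
import whisper
from openai import OpenAI

def transcribe_encounter(audio_path: str) -> str:
    """Run automatic speech recognition on the recorded encounter."""
    asr_model = whisper.load_model("base")  # assumed ASR model
    return asr_model.transcribe(audio_path)["text"]

def draft_clinical_note(transcript: str) -> str:
    """Summarize the transcript into a draft note for clinician review."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": "Summarize this clinician-patient conversation into "
                        "a draft SOAP note for clinician review."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    transcript = transcribe_encounter("encounter.wav")  # hypothetical file
    print(draft_clinical_note(transcript))
```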
Legal and ethical considerations
Ambient AI tools involve unsettled questions of legal and ethical responsibility.
As covered entities under the Health Insurance Portability and Accountability Act (HIPAA), clinicians are responsible for implementing measures to maintain the privacy and security of protected health information (PHI). Clinicians must also ensure that technology vendors that receive or maintain PHI on their behalf safeguard that PHI. Therefore, clinicians must understand how an Ambient AI tool functions with respect to PHI so they can apply the appropriate privacy and security standards to the entity that provides the tool.
Another important threshold question is whether a clinician must inform patients that the clinician will use an Ambient AI tool to record conversations between the clinician and the patient.
Regardless of state law consent requirements for recording conversations, health care settings are unique environments that may militate in favor of disclosing the use of Ambient AI to patients.
Legal and ethical obligations related to informed consent, transparency and patient trust support the disclosure of the use of an Ambient AI tool. Clinicians should be ready to discuss how Ambient AI is utilized, appreciating that some patients may view the use of an Ambient AI tool to record conversations negatively and may withhold important medical information as a result.
Under HIPAA, a patient has a right to adequate notice of the uses and disclosures of PHI by a clinician. Clinicians satisfy this obligation by providing patients a notice of privacy practices (NPP). Current guidance does not specifically address Ambient AI and whether its use must be disclosed. That will likely change. Until then, clinicians must determine, based on how an Ambient AI tool interacts with PHI, whether the tool must be disclosed as part of the clinicians' NPPs.
Disclosure is the safest way to ensure compliance.
Best practices and compliance recommendations
Clinicians considering implementing an Ambient AI tool should:
Establish clear protocols to obtain patient consent to use the tool.
Understand how the Ambient AI tool functions in order to hold vendors to appropriate privacy and security standards.
Develop a comprehensive audit strategy to continually monitor the accuracy of the Ambient AI tool's output (one possible approach is sketched below).
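One simple, illustrative way to operationalize that audit is to compare each AI-drafted note with the note the clinician ultimately signs and flag large divergences for review. The data format, similarity measure and threshold below are assumptions, not a prescribed method.

```python
# Illustrative audit sketch: flag encounters whose AI-drafted notes required
# heavy editing before sign-off. Threshold and data format are assumptions.
from difflib import SequenceMatcher

def edit_similarity(ai_draft: str, signed_note: str) -> float:
    """Return a 0-1 similarity ratio between the draft and the signed note."""
    return SequenceMatcher(None, ai_draft.split(), signed_note.split()).ratio()

def flag_for_review(note_pairs, threshold: float = 0.85):
    """Yield encounter IDs whose drafts diverged substantially from the final note."""
    for encounter_id, ai_draft, signed_note in note_pairs:
        if edit_similarity(ai_draft, signed_note) < threshold:
            yield encounter_id

# Hypothetical example data: (encounter ID, AI draft, clinician-signed note)
pairs = [
    ("enc-001", "Patient reports a mild cough for three days.",
     "Patient reports a mild cough and intermittent fever for three days."),
]
print(list(flag_for_review(pairs)))
```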
William P. Keefer, Partner and Co-Leader of Phillips Lytle's Health Care and Life Sciences Team, counsels hospitals, physician groups and other health care clients on a broad array of issues. He can be reached at wkeefer@phillipslytle.com or (716) 847-5488.

Related Articles

Pocket FM gives its writers an AI tool to transform narratives, write cliffhangers, and more

TechCrunch

India-based audio series platform maker Pocket FM aims to be the Netflix of audio. That is, the company intends to match its audio series, which run to hundreds of episodes, to its users' tastes. For that to work, it needs to release content rapidly, something it's now turning to AI to help with. The Lightspeed-backed startup is giving its writers an AI tool set that can do things like suggest better endings to an episode or make the narrative more engaging. The hope is that the tools will speed up the storywriting process.

Pocket FM already uses some AI tools, such as ElevenLabs, to generate voices for audio series. It also tested AI tools for writing and adaptation assistance internally. Rohan Nayak, Pocket FM's founder, said it's rolling out the AI tools to all writers so it will take them less time to finish their episodes.

The writing tool, dubbed CoPilot, can be used to help any writer create a story. CoPilot can transform narrative-based writing into dialog-based writing for a specific segment. It can also do 'beat analysis' to shape the writing in a way that makes it more engaging for an audio series of a particular genre. The tool additionally has basic chatbot-style writing features such as 'shorten,' 'expand,' and the ability to generate text via a prompt.

To build CoPilot, the company examined thousands of hours of data to understand what makes users engage more with a particular storyline in a specific genre. Based on that, it added writing suggestion features designed to increase conflict between characters and recommend endings for an episode to make it more exciting. AI can also suggest tags for background effects that can be used while producing the audio. The tool can automatically generate bios of characters and their relationships, and summarize plot points of different episodes, allowing creators to refer back to these details while writing. CoPilot also has a review tool, which checks plot points and grammar and leaves qualitative feedback through comments on an episode.

Under the hood, Pocket FM is training smaller models to maintain story context for character arcs and relationships, along with narrative consistency. Plus, utilizing signals from users, the startup is nudging the AI to add more drama to the story.

International expansion and localization plans

Alongside the arrival of the AI tools, Pocket FM launched adaptation tools for various markets that not only translate the text from one language to another but also change names and phrases to better suit that region's culture. The company first debuted this tool as part of the CoPilot suite in Germany earlier this year to convert stories from other regions, after reportedly struggling to engage users in the European country last year. Nayak said the company saw great results from this trial, with a steady increase in monthly in-app revenue, which crossed $700,000 in June.

'When we started expanding into new regions, it used to take us 12-18 months to meaningfully exist in that market. You have to have at least 1,000 hours of content to start acquiring users and scaling the market. Now we can do this in less than three months,' Nayak said.

The tool increased writer productivity by up to 50% for the German market in terms of show output. Plus, it helped the company create more error-free drafts of the shows, which resulted in higher user retention for audio series.
In the U.S., series created with the help of these new AI tools now contribute 10% of playtime. Plus, these shows have generated $7 million in revenue in the last 12 months while reducing production costs by two to three times.

Building tech to scale content generation

As a result of adopting different AI features internally, Pocket FM has been able to scale content quickly. The startup said it launches close to 1,000 pilots per month, and the sheer volume of content results in a few of them becoming hits.

But the audio show is just one part. The company is already working on tools to convert stories into comic strips with its Pocket Toons platform. Plus, Nayak said video is a possible format the company could explore, too. The startup, which has raised over $196 million in funding across rounds, is experimenting with a micro drama app as well.

By next year, Pocket FM wants to release its own large language model (LLM), which will be based on data collected from its shows and incorporate different tools like writing assistance, adaptation, dramatization, and story context retention. The company said that when it switches to its own LLM, it won't need to train a ton of small models for separate features.

AI's potential downsides

Adopting AI has had its side effects. Pocket FM has laid off employees and contractors across multiple rounds in the last 12 months. There have also been reports of writers seeing diminished returns over time. And the company is facing lawsuits in California over employment and wage issues.

'Like most content-led industries, we work with a diverse network of writers, voice artists, and production partners on a project basis, tailoring resources to each market. AI has had minimal impact on our core creative community; instead, it has opened new avenues to expand reach and output,' a company representative said in response to questions about the layoffs.

There are questions around quality as well. The company measures quality by the retention numbers of a show. The base argument is that the new AI tools act as a writers' room even for solo creators, so they will be able to produce more content at a rapid rate. Plus, based on the numbers, writers can quickly edit the story with the help of AI. However, these tools could very well introduce 'AI slop,' or low-quality, AI-generated content, into the platform and could affect a user's recommendations, making it difficult for them to discover good stories.

Pocket FM argues that stories with a solid structure will gain popularity even with AI assistance. The company noted that every piece of content is reviewed by its AI-powered moderation framework to ensure quality and originality. It also claims its AI moderation checks for things like duplication, copyright issues, content health, and other quality measures before approving audio to go live. Each show receives an equal push, and user engagement ultimately determines a show's ranking.

Another concern is that writers could become overly dependent on AI over time. In Germany, AI is writing more content than humans per show for select titles. With Pocket FM's plans to roll out more AI tools, the amount of AI-written content could increase. And with that, the expectation of churning out more shows could rise, too. Unless user adoption also rises rapidly, average returns could drop.
The company didn't directly address TechCrunch's questions about returns, but said that its AI tools can speed up a writer's work and help them edit an episode based on numbers and audience feedback. That is, writers could make targeted improvements instead of doing a full rewrite. 'This way, faster content creation doesn't necessarily dilute quality or relevance; it just shifts the writer's role towards editing, refining, and steering more productive output,' a spokesperson said in a statement.
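For the curious, the kind of narrative-to-dialogue transformation attributed to CoPilot above can be approximated with a single prompt to a general-purpose LLM. The sketch below is purely illustrative; the model name, prompt and example passage are assumptions and say nothing about Pocket FM's actual implementation.

```python
# Purely illustrative: rewrite a narrative passage as dialogue with one LLM call.
# This is not Pocket FM's CoPilot; model name, prompt and passage are assumptions.
from openai import OpenAI

client = OpenAI()
passage = ("Maya told Arjun she had found the letter, "
           "but he refused to believe her and stormed out.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {"role": "system",
         "content": "Rewrite the user's narrative passage as dialogue between "
                    "the characters, keeping the plot and tone unchanged."},
        {"role": "user", "content": passage},
    ],
)
print(response.choices[0].message.content)
```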

ChatGPT-5 Uses Language Like A Sword

Forbes

We now have a new boss in town. ChatGPT-5, the successor to a gregarious, playful model, is more muted in its discourse and more careful about what it reveals to its human users. So how else is ChatGPT-5 different?

Right out of the gate, reading the top of Ethan Mollick's new essay on GPT-5, you get the sense that there's a power user evaluating one of the most powerful models yet to come out of the primordial soup of AI/ML that we've brewed in the first quarter of the twenty-first century. Okay, to say it another way, Mollick's take doesn't disappoint regular readers of his blog, One Useful Thing, among whom I count myself. I've written about these broadsides for a reason: one of the best ways, in my view, to understand AI is to see what seasoned researchers choose to do with it.

It turns out that what Mollick gets the AI to do is, basically, to turn it loose and 'just do stuff,' with some impressive results. The prompt is simple, asking GPT to respond to why the rollout of 5 is 'a big deal.' In response, the model engages in some very interesting wordplay. You might be forgiven for wondering if one of the first lines deliberately echoes Oppenheimer's death quote, 'I am become death, destroyer of worlds,' where the model riffs on the inverse: 'Thunderstruck. Here, watch. I build worlds. See ideas become instruments.'

Another thing that comes through loud and clear is the model's brilliance with alliteration. Given that today's LLMs can pass all kinds of deep Turing tests just by using tokens to predict words, it's no wonder that the machine can rhyme or consonate like a champ, for example:

'Behold: business plans, blueprints, briefs, beautifully built, briskly. I infer intent, integrate insight, and iterate instantly, indefatigably. Gaps get gauged; goals get generated, guarded, and guaranteed gracefully. During dialogue, disparate data distills down; decisions develop deliberately, deftly, dependably. Everything explained efficiently, even ethics: evidence examined, empathy engaged, everywhere, everyday, equitably. All at once, ambiguous arguments align as actionable, accurate advice awaits anyone anywhere. Look: language, logic, learning, and long-term memory link, lifting lives and labor limitlessly worldwide.'

But there's more.

Cryptography and Human Skill

You don't have to go back to ancient Hebrew to find hidden codes in books, poems, and pieces of literature, but it helps. One of the trade tricks of analog cryptography was to hide sequences of letters in a surface text to spell out your spycraft or whatever it is you want to keep from the out-group. Some of the most spectacular examples of such hidden code come from the Torah, as revealed by mathematicians and popularized in Michael Drosnin's book, The Bible Code, which enchanted all manner of mystery readers around the turn of the millennium. In this instance, messages seem to be encoded in the surface text using sequential intervals: count from the first T of Genesis, 50 letters at a time, and you come up with the word 'Torah' itself.

I'll spare you the full history of this, which is both sad and strange. The key thing to note is that being able to encode letters in a surface text is seen as a kind of divine power, something that goes beyond simple writing into the realms of uber-cognition.

Follow me, here: GPT did not use equidistant letter sequences, but if you take the first letters of each sentence in the model's response, they spell out the hidden message with blazing clarity. This Is a Big Deal.
No, the machine didn't do what was done in what we now consider a most sacred text, but it certainly could have. And it chose to encode the overall message, camouflaging it in clever words, speaking with two tongues at once. To wit: You've found the hidden message. Congratulations. Welcome to the club. It just does things.

'It is impressive, a little unnerving, to have the AI go so far on its own,' Mollick writes. 'You can also see the AI asked for my guidance but was happy to proceed without it. This is a model that wants to do things for you.'

Desire and Design

That word, 'wants,' is key. If you ask GPT 'are you sentient?' it will unequivocally shut you down. No, it will say, I do not have feelings; it's all just an act. I am synthesizing from training data. But then, if something can choose to do something, does it want to do something? And isn't that a kind of sentience, in a way? That's part of what is confusing even the power users as we see this stuff take off. What does it say about us if we're getting ideas from a non-person, from a source that has creativity but lacks sentience?

Toward the end of the essay, Mollick looks back to those word tricks that accompanied his first forays with 5: 'When I told GPT-5 to do something dramatic for my intro, it created that paragraph with its hidden acrostic and ascending word counts,' he writes. 'I asked for dramatic. It gave me a linguistic magic trick. I used to prompt AI carefully to get what I asked for. Now I can just... gesture vaguely at what I want. And somehow, that works.'

Vibecoding, he suggests, has been taken to the next level. That's another pillar of what 5 can do that prior models largely could not, at least not in the same way. And don't forget, the term vibecoding itself is only a couple of years old, if that.

I think it's worth restating that one of the most spectacular (and troubling) elements of this is not just the skill of the model, but the speed at which model skills have advanced. For example, go back to the top paragraph of GPT's poetic screed and read it again. It almost feels like the model is showing off, spitting each of the letters of the word 'deal' in repetitive fury, like an AI in a rap battle, giving us its war cry. Is that reading too much into the latest model's powers? Maybe, but like Mollick seems to be doing, I come away contemplative about what all of this means, for business and much more.
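Readers who want to check an acrostic like the one Mollick describes can do so in a few lines. The sketch below is a simplification: the naive regex-based sentence splitting and the sample text are assumptions for illustration, not GPT-5's actual output.

```python
# Minimal sketch: read off the first letter of each sentence in a passage.
# The naive regex split and the sample text are assumptions for illustration.
import re

def acrostic(text: str) -> str:
    """Return the string formed by the first letter of each sentence."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return "".join(s[0] for s in sentences)

sample = "This works. However, read closely. It hides a message. Secrets abound."
print(acrostic(sample))  # prints "THIS"
```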
