We built an AI assistant to give doctors something they rarely have: time


This as-told-to essay is based on a conversation between Erez Druk, the founder of Freed AI, an AI assistant for clinicians, and his wife, Dr. Gabi Meckler, a family physician. This conversation has been edited and condensed for clarity.
Gabi Meckler
When I was in residency, I really began to feel the weight of notes.
I would stay in the hospital until midnight, writing notes. You'd come home and still have notes to do. They were constant — a fact of life. Some people suffer. Some even quit.
One day, my husband, Erez, asked me what would make my life easier. I jokingly said, "Can you just write my notes for me?"
Erez took the idea seriously. He started building something.
Erez Druk
I built a simple proof of concept using a GPT, a customizable version of ChatGPT, over the course of a few hours. It was a bare-bones version of Freed that could generate patient instructions, which are what clinicians send to patients post-visit, and subjective notes, which clinicians use to document patients' experience of their condition.
I showed it to Gabi. She said "interesting," but told me it still needed a few improvements. It would break at times, it didn't know the names of several medications, and it wasn't attuned to different specialties.
I knew I needed another data point beyond my wife, so I solicited the opinion of our friend, another clinician. She came over for dinner, I showed her the product, and her response was very different — one of immediate excitement. She texted the next day asking to use it.
I knew we were on to something.
We moved fast from there. It took some work to make it HIPAA compliant. I wanted to get a beta version out quickly. We got some clinicians to test it. We asked them for feedback constantly. Building Freed isn't just about vision or strategy — it's about listening to users and iterating based on what they need.
Meckler
When I started working at a clinic, after residency, I began using Freed every day. I could finally finish my notes before leaving the office. One time, I forgot to send a referral for a patient, but Freed reminded me. That moment made me realize how much Freed was helping, not just with time, but with the details that I might have missed otherwise.
Other doctors at the clinic noticed, too. They'd come to me and say, "Your husband? He saved me hours of work." That was rewarding.
One thing I love is that Erez takes my input seriously. He really understands the nuances that matter to clinicians, and the team is learning too.
Druk
Gabi holds a weekly office hour with the product managers and designers where they share their work with her. She offers her perspective on what's useful and what isn't.
It's surprising — or maybe not — that even after spending a lot of time with clinicians, there's always more to learn. We can never truly be in their shoes, there's a constant depth and nuance to uncover. That's why this setup is so valuable: It helps us get closer to understanding how they think and what's genuinely useful to them.
Meckler
Freed, as the name suggests, is all about freedom. Our goal wasn't to tell clinicians how to use their time — we simply wanted to give it back to them.
That mindset really influences our marketing. We're very intentional about not telling clinicians what they should do — they know better than anyone else. We don't tell clinicians to be heroes or push messages like "maintain eye contact" or "spend more time with patients." You probably already make enough eye contact. And if you want to Netflix and chill, great.
We're not here to tell people how to be better doctors.
Druk
Managing the relationship between husband and wife, founder and customer, and innovator and advisor isn't always easy. It's definitely annoying sometimes to get feedback that is correct.
Meckler
I think we do it well, though I don't know how we do it.
I try to be specific about feedback: "This is something you absolutely have to change before moving forward," versus "This would be nice," or "It's a bit iffy."
I told Erez that we needed to incorporate patient instructions. They include a summary of the visit and clear instructions based on the action plan created during the appointment. The goal is to make them easy for patients to read, understand, and follow. It's now one of the most loved features of Freed.
We're building a document analyzer now at the request of users. It will take any clinical document and give the clinician a summary. Users can also ask questions, and it will provide short responses with references from the document.
This whole time, since we launched in January 2023, Erez has been learning about medicine, and I've been learning about startups. So, I know he has to move fast. That's why I focus only on the most important things, and I push really hard when it matters.
I've loved being involved as part of our relationship.
Druk
Work is really my second obsession after Gabi — and I'm obsessed with work. So the fact that we can talk about everything, and she knows the people in the team, and what we're working on, is really fun.
We created Freed together. It's a love letter from us, our team, to all the clinicians out there.
