
Daniel Nadler
'The reason is a great reason, which is the golden age of biotechnology,' says Nadler, an artist, poet, and tech entrepreneur who became interested in positive uses for AI while working in machine learning. But that golden age means doctors need to keep up with an overwhelming amount of information, such as clinical trial results and new treatments. Staying on top of it all is 'basically impossible.' He's calculated that at least one new medical paper is published every minute, and that if doctors read only the new evidence in the top 10 journals, it would still consume nine hours of their day. 'You would literally never have time to see patients,' he says. 'Most doctors don't do that, and they miss new findings.'
Enter OpenEvidence, an AI-powered search engine that sifts through thousands of peer-reviewed studies in real time. Nadler describes it as a 'brain extender' for clinicians. Many report using it to make high-stakes clinical decisions, like analyzing new treatments and figuring out which makes the most sense for a patient. To determine if a just-approved psoriasis drug is safe for a pregnant patient, for example, doctors would 'need to go fairly deep on the data.' With OpenEvidence, however, that information is quickly and succinctly aggregated and at their fingertips before the patient leaves the office.
In February, OpenEvidence signed a content-use agreement with the New England Journal of Medicine, which grants it permission to use all of the journal's findings from 1990 on. Nearly 30% of doctors in the U.S. currently use the tool, Nadler says; more than 50 million patients will be treated this year by a doctor using OpenEvidence. It can be especially valuable in rural areas, allowing doctors to access the kind of expert guidance that would otherwise be scarce.
'This is a very different type of medical innovation or medical revolution,' he says, 'because we've very quickly reached across the country to the middle of the country, to the health care deserts, to the health care islands, to the fringes and edges of health care.'

Related Articles


Android Authority
37 minutes ago
Welcome to dystopia: I helped ChatGPT pass a CAPTCHA and doomscroll my Facebook
Last week, OpenAI released what may be the most ambitious (and potentially controversial) update to ChatGPT since its launch: Agent mode. Unlike the standard conversational interface, Agent mode gives ChatGPT control over a virtual machine running Chrome, allowing it to interact with websites like a human would. It can identify elements on websites, scroll, click buttons, fill out forms, and, if granted credentials, even log into your online accounts. For the first time since the AI's launch, it can perform tasks instead of spitting out some text on how to do it yourself.

On the surface, the potential seems endless. The chatbot could reply to your emails, shop for groceries, book a flight, or perform even more complex tasks spanning multiple websites. The best part is that you can watch ChatGPT 'move' its mouse cursor around the virtual web browser and navigate the internet (as you can see in the video below). Admittedly, it's a lot like a toddler struggling to walk at times, but it's endlessly fascinating nonetheless.

So what can ChatGPT's Agent mode actually do with all of these capabilities? To answer that question, I tested the feature with a couple of real-world tasks — the kind you might actually want to offload to an AI assistant. Here's how it handled them, and what ChatGPT did when it encountered an obstacle.

Putting ChatGPT Agent to work: A grocery run

Amazon's Alexa can add toilet paper to your cart with a voice command, but ChatGPT's Agent mode can be entrusted to do a whole lot more. Specifically, it can shop your entire grocery list on any platform of your choice. Case in point: I gave the agent a simple task, buying everything I would need for a homemade pizza from Walmart. I didn't offer any specific ingredients, items, or even guidance on price, just to see what it would pick.

The agent booted up a virtual computer and navigated to Walmart in no time. But it ran into a roadblock almost immediately — Walmart threw up an anti-bot verification screen requiring a human to press and hold a button. Shockingly, the agent recognized this screen and asked me to briefly take control of the browser and complete the verification. I took control and, about ten seconds later, we were in. I handed control back, and the agent immediately got to work. It looks like CAPTCHAs will need to evolve yet again if they are to keep bots out in the future. ChatGPT summoned me when it needed a human touch, which, it turns out, means just solving CAPTCHAs.

Moving on, I watched the agent methodically search for 'pizza dough,' 'pizza sauce,' 'mozzarella cheese,' and 'pepperoni.' But to my surprise, the agent didn't just grab the first result. Instead, it prioritized familiar and well-priced alternatives, just like I personally would. In more than one instance, I watched it pick the third or fourth item in the results or call a competing product overpriced. The agent also correctly moved past inaccurate search results, like a fully premade frozen pepperoni pizza when it was merely shopping for pepperoni, the ingredient.

Within four minutes, my virtual cart was filled with everything I needed to make a pizza. The agent navigated to the checkout page and then handed control back to me to complete another CAPTCHA, log in, and enter my payment details securely. ChatGPT says it cannot see your inputs when you're in control of its virtual machine, presumably meaning it can't store your login or credit card info.
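For a sense of what this kind of scripted shopping run boils down to mechanically, here is a minimal sketch in Python using Playwright as an assumed stand-in for whatever OpenAI actually drives inside its virtual machine (the company hasn't said). The selectors are invented placeholders, not Walmart's real markup, and the pause() calls mirror the human hand-offs for CAPTCHAs and payment described above.

```python
# A rough illustration only: a scripted version of the grocery-run flow above.
# Assumes Playwright is installed (pip install playwright; playwright install chromium).
from playwright.sync_api import sync_playwright

SHOPPING_LIST = ["pizza dough", "pizza sauce", "mozzarella cheese", "pepperoni"]

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)  # visible browser, so a human can watch and step in
    page = browser.new_page()
    page.goto("https://www.walmart.com")

    # Human-in-the-loop checkpoint: pause so a person can clear the
    # press-and-hold anti-bot screen, then resume from the Playwright inspector.
    page.pause()

    for item in SHOPPING_LIST:
        page.fill("input[type='search']", item)      # hypothetical search-box selector
        page.press("input[type='search']", "Enter")
        page.wait_for_load_state("networkidle")      # let the results finish loading

        # Naive stand-in for the agent's "judgment": take the first add-to-cart
        # button. The real agent reportedly weighs brand and price, which a
        # script would need explicit rules (or a model call) to replicate.
        page.locator("button:has-text('Add to cart')").first.click()

    page.goto("https://www.walmart.com/cart")
    page.pause()  # hand control back to the human for CAPTCHA, login, and payment
    browser.close()
```

The real agent is presumably far more dynamic, interpreting each page visually rather than relying on fixed selectors, but the overall loop of navigate, act, wait, and hand off to a human has the same shape.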
Despite that assurance, I didn't elect to enter my login details and therefore spent the night without any pizza. Needless to say, this was a very impressive showing, even if it was a rather straightforward task with a clear sequence of actions. But can the agent handle something more ambiguous? I decided to give it a harder challenge: find a list of used car candidates on Facebook Marketplace.

Can ChatGPT find the perfect used car?

I love Facebook Marketplace because there's always a great deal around the corner, but I dread scrolling through the hundreds of listings to find the right one. So I put ChatGPT's Agent mode to the task, fully expecting it to stumble at some point. Specifically, I asked it to log into my Facebook account, navigate to the Marketplace tab, and compile a list of used Honda Fit specimens. As if that wasn't difficult enough, I asked it to only look for 2015 to 2020 models within a 50-kilometer radius of Toronto.

Much to my surprise, the agent started off strong and logged into my Facebook account without a hiccup. I did have to step in to provide a two-factor authentication code, but other than that, it was an entirely painless process. ChatGPT navigated to the Marketplace section, changed the location to the city I provided, and even adjusted the maximum distance filter. It then started going through individual listings and recorded important details about each car it came across.

To be fair, ChatGPT's Agent mode is not faster than a human — I could probably go through individual listings faster than the agent, at least in its current state. But like any computer, ChatGPT is stubbornly persistent. After watching it look through listings for a few minutes, I walked away from my computer to grab a coffee. When I returned about ten minutes later, the agent was still meticulously clicking on individual listings, having scrolled through over a hundred of them.

The final result was a comprehensive report of two to three dozen cars that met my criteria. As the image above shows, the agent had compiled a detailed brief for each vehicle it found. It organized the key details into columns: year and model, price and mileage, transmission, and location. It even created a notes section where it summarized the seller's description, pulling out crucial information like 'dealer listing,' 'salvage title,' or 'open to negotiation.' Finally, each line item had a screenshot of the Marketplace listing for me to peruse if I was interested in that specific vehicle.

If that seems impressive, it absolutely is. Putting together a list like this manually would otherwise have taken me at least a couple of hours. And the best part is that I could probably go one step further and ask the ChatGPT agent to contact the shortlisted sellers on my behalf. It's not the responsible thing to do, but it's certainly a possibility.

Not perfect, but impressive…and scary

ChatGPT's Agent mode is not perfect by any means; I watched it go in an endless loop countless times until I was forced to intervene. It can also follow instructions too closely, almost to a fault. For example, it refused to accept a search radius of 51km instead of the 50km I prescribed on a different used auto website. The result? The agent used its virtual mouse to 'drag' a slider back and forth for several seconds until it landed precisely on the 50km it needed. Common sense doesn't come naturally to AI, even in agent mode.
I also noticed a pretty big delay between ChatGPT executing an action, like clicking on a link or pressing the browser's back button, and the page actually responding. This causes the AI to retry the action, which is a big problem because it inevitably ends up on the wrong page. At times, the agent would click the back button twice and end up on the homepage. It would then restart the whole search all over again, wasting several minutes in the process. (A short sketch after this excerpt shows how conventional browser automations guard against exactly this.) ChatGPT Agent can be a bumbling fool sometimes, but it's only going to get better.

But for all its clumsy moments, the true significance of Agent mode isn't in its current speed or ability. The fact that I could walk away to make a coffee while an AI autonomously compiled a detailed report on used cars is the main value proposition here. The potential for increased productivity is massive. But at the same time, I'm not sure how I feel about an AI making decisions on my behalf. So the question now is: how much more autonomy are we willing to hand over to AI?

If you'd like to use Agent mode, keep in mind that you will need a ChatGPT Plus subscription. That will set you back $20 monthly, and the feature is currently capped at just 40 messages each month. But if you're patient, I'm sure it will eventually trickle down to the free tier, just like other new ChatGPT features in the past.
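To make that failure mode concrete: a conventional browser automation avoids the double-back-click problem by waiting for the navigation to settle before deciding whether to act again. A minimal sketch, again using Playwright in Python purely as an assumed illustration, since OpenAI hasn't disclosed Agent mode's internals:

```python
# A rough sketch of guarding against the "double back-click" failure described
# above: issue the navigation once, then wait for it to settle before deciding
# whether anything needs to be retried.
from playwright.sync_api import Page, TimeoutError as PlaywrightTimeout

def go_back_once(page: Page, expected_url_fragment: str, timeout_ms: int = 10_000) -> bool:
    """Press back a single time and confirm where we landed, instead of
    re-clicking while the page is still slow to respond."""
    page.go_back()
    try:
        # Wait for the URL to reflect the completed navigation.
        page.wait_for_url(f"**{expected_url_fragment}**", timeout=timeout_ms)
        return True
    except PlaywrightTimeout:
        # The navigation never settled; report failure to the caller rather
        # than pressing back again and overshooting to the homepage.
        return False
```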


Tom's Guide
2 hours ago
OpenAI ChatGPT-5 live blog: Countdown and all the big news as it happens
Today could be the day that we finally get ChatGPT-5. For months, the tool has been hinted at, and over the past few weeks, reviewers have been given early access and more details have trickled out. Now, lining up perfectly with estimated timelines, OpenAI has announced a livestream set to go live today at 10am PT, 1pm ET, 6pm BST. In typical OpenAI fashion, the company has stayed quiet about what exactly this is, but Sam Altman has claimed that this live stream "will be longer than usual," suggesting we've got a lot of exciting new features to cram into the reveal.

"LIVE5TREAM THURSDAY 10AM PT" (August 6, 2025)

If this is GPT-5, early testers and insiders say it could be a major leap forward — with faster responses, fewer hallucinations, and even the ability to build apps or software from scratch. We'll be updating this page throughout the day with everything you need to know. Whether you're a casual ChatGPT user or a hardcore AI enthusiast, stay tuned — this could be one of the most important AI updates of the year.


Forbes
4 hours ago
Analysis Of Whether Generic Generative AI Falls Within The Purview Of Providing Therapy And Psychotherapeutic Advice
In today's column, I examine a seemingly straightforward question: whether contemporary generic generative AI and large language models (LLMs) can be said to be providing therapy and psychotherapeutic advice. The deal is this. When you use ChatGPT, Claude, Llama, Gemini, Grok, and other such popular generative AI systems, you can readily engage the AI in conversations about mental health. This can be of a general nature. It can also be a very personal dialogue. Many people are using AI as their de facto therapist, and doing so with nary a thought of reaching out to a human therapist or mental health professional.

Does the use of those LLMs in this manner signify that the AI is proffering services constituting therapy and psychotherapy? You might declare that yes, of course, that is precisely what the AI is doing. It is blatantly obvious. But the AI makers who build and maintain the AI are undoubtedly reluctant to agree with that plain-stated assessment or ad hoc opinion. You see, new laws are starting to be enacted that bear down on generic AI that provides unfettered services within the scope of therapy and psychotherapy. AI makers are likely to desperately contend that their generic AI falls outside that regulatory scope. The question arises whether they will be successful in making that kind of tortuous argument. Some would say they don't have a ghost of a chance. Others believe they can dance their way around the legally troubling matter and come out scot-free. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes; see the link here.

State Law Ups The Ante

I recently analyzed a newly enacted law on AI for mental health that was signed into law in Illinois on August 1, 2025; see my coverage at the link here. This new law is quite a doozy. The reason it is a doozy is that it lays out violations and penalties for AI that provides unfettered therapy and psychotherapy services. The implication is that any generic generative AI, such as the popular ones I noted earlier, is now subject to potential legal troubles.

Admittedly, the legal troubles right now would seemingly be confined to Illinois, since this is a state law and not a broader federal law. Nonetheless, in theory, the use of generic generative AI by users in Illinois that, by happenstance, provides therapy or psychotherapeutic advice is presumably within the scope of getting dinged by the new law.
You can bet your bottom dollar that similar new laws are going to be popping up in many other states. The clock is ticking. And the odds are that this type of legislation will also spur action in the U.S. Congress and potentially lead to federal laws of a like nature. It all could have a tremendous impact on AI makers, along with major impacts on how generative AI is devised and made available to the public.

All in all, few realize the significance of this otherwise innocuous and under-the-radar concern. My view is that this is the first tiny snowball that is starting to roll down a snowy hill and will soon be a gigantic avalanche that everybody will be talking about. Time will tell.

Background On AI For Mental Health

I'd like to set the stage before we get into the particulars of this heady topic. You might be vaguely aware that the top-ranked public use of generative AI and LLMs is to consult with the AI on mental health considerations; see my coverage at the link here. This makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis. Compared to using a human therapist, the AI usage is a breeze and readily undertaken.

AI makers already find themselves in a bit of a pickle on this usage of their AI. The deal is this. By allowing their AI to be used for mental health purposes, they are opening the door to legal liability if their AI gets caught dispensing inappropriate guidance and someone suffers harm accordingly. So far, AI makers have been relatively lucky and have not yet gotten severely stung by their AI serving in a therapist role.

You might wonder why the AI makers don't just shut off the capability of their AI to produce mental health insights. That would solve the problem of the business exposures involved. Well, as noted above, this is the top attractor for people to use generative AI. Shutting it off would be killing the cash cow, or like capping an oil well that is gushing out liquid gold.

One step that the AI makers have already taken is to emphasize in their online licensing agreements that users aren't supposed to use the AI for mental health advice; see my coverage at the link here. The aim is that by telling users not to use the AI in this manner, perhaps the AI maker can shield itself from adverse exposure. The thing is, despite the warnings, the AI makers often do whatever they can to essentially encourage or support the use of their AI in this supposedly off-limits capacity. Some would insist this is a wink-wink attempt to play both sides of the gambit at the same time; see my discussion at the link here.

The Services Question

My commentary on these sobering matters is merely a layman's viewpoint. Make sure to consult with your attorney to understand any legal ramifications pertaining to your situation and any potential legal entanglements regarding AI and mental health.

Let's take a look at the Illinois law that was recently passed. According to the Wellness and Oversight for Psychological Resources Act, known as HB1806, these two elements are a core consideration (excerpts):

Regarding the use of unregulated AI in this realm, a crucial statement about AI usage for mental health purposes is stated this way in the Act (excerpt):

There are varying ways to interpret this wording.
One interpretation is that if an AI maker has a generic generative AI that also happens to provide mental health advice, and if this is taking place without the supervision of a licensed professional, and this occurs in Illinois, the AI maker is seemingly in violation of this law. The AI maker might not even be advertising that their AI can be used that way, but all it takes is for the AI to act in such a manner (since it provides or offers as such).

Generic AI Versus Purpose-Built AI

Closely observe that the new law stipulates that the scope involves 'therapy or psychotherapy services'. This brings us back to my opening question. Before we unpack the thorny issue, I'd like to clarify something about the topic of AI for mental health. You might have noticed that I referred to generic generative AI. What does the word 'generic' mean in this context? Let me explain.

Well, first, there are customized generative AI systems and AI-based apps that are devised specifically to carry out mental health activities. Those are specially built for that purpose. It is the obvious and clear-cut intent of the AI developer that they want their AI to be used that way, including that they are likely to advertise and promote the AI for said usage. See my coverage on such purpose-built AI for mental health at the link here and the link here.

In contrast, there is generic generative AI that just so happens to have a capability that encompasses providing mental health advisement. Generic generative AI is intended to answer all kinds of questions and delve into just about any topic under the sun. The AI wasn't especially tuned or customized to support mental health guidance. It just happens to be able to do so. I am focusing here on the generic generative AI aspects. The custom-built AI entails somewhat similar concerns but has its own distinct considerations. I'll be going into those facets in an upcoming posting, so be on the watch.

Definitions And Meaning Are Crucial

An AI maker might claim that they aren't offering therapy or psychotherapy services and that their generic generative AI has nothing to do with therapy or psychotherapy services. It is merely AI that interacts with people on a wide variety of topics. Period, end of story. The likely retort is that if your AI is giving out mental health advice, it falls within the rubric of therapy and psychotherapy services (attorneys will have a field day on this). Thus, trying to dodge the law by being sneaky about wording isn't going to get you off the hook. If it walks like a duck and quacks like a duck, by gosh, it surely is a duck.

One angle on this disparity or dispute would be to nail down what the meaning and scope of therapy and psychotherapy encompass. Before we look at what the Illinois law says, it is useful to consider definitions from a variety of informed sources.

Definitions At Hand

According to the online dictionary of the American Psychological Association (APA), therapy and psychotherapy are defined this way:

The Mayo Clinic provides this online definition:

The National Institutes of Health (NIH) provides this online definition:

And the popular website and publication Psychology Today has this online definition:

Interpreting The Meanings

Those somewhat informal definitions seem to suggest that the nature of therapy and psychotherapy includes these notable elements: (1) aiding mental health problems, (2) using 'talk' or interactive chatting as the mode of communication, and (3) being undertaken by a mental health professional.
Let's see what the Illinois law says about therapy and psychotherapy (excerpts per the Act):

It is interesting and notable that some carve-outs were made. The scope appears to exclude peer support, along with religious counseling.

Contemplating The Matter

It might be worthwhile to noodle on how an AI maker might seek to avoid repercussions from their generic generative AI getting caught up in this messy milieu.

First, if therapy and psychotherapy were defined as requiring that a mental health professional be involved, this provides an angle of escape. Why so? Oddly enough, an AI maker could simply point out that their AI doesn't employ or otherwise make use of a mental health professional. Therefore, the AI cannot be providing said services, since it fails to incorporate a supposed requirement. Notably, the Illinois law seems not to fall into that trap: it simply refers to services and does not make a mental health professional part and parcel of the definition. Some of the other definitions that I listed would potentially be in a murkier condition due to explicitly mentioning a required role for a trained professional or other similar verbiage.

Second, an AI maker might try to claim that their generic generative AI is more akin to peer support. The beauty there is that since peer support is a carve-out, perhaps their AI is no longer within scope. It would be a tough row to hoe. Peer support stipulates that individuals are involved. At this juncture, we do not genuinely recognize AI as having legal personhood (see my discussion at the link here), and therefore, trying to assert that AI is an 'individual' would be an extraordinary stretch.

Third, an AI maker might go the route of claiming that their generic generative AI is a form of religious counseling. The advantage would be that religious counseling is a carve-out. In that case, if AI were said to be doing religious counseling when providing mental health advice, the AI maker would apparently be free of the constraint. This appears to be a failing strategy for several reasons, including that the AI is presumably not a clergy member, pastoral counselor, or other religious leader (maybe a desperate attempt could be made to anoint the AI in that fashion, but this would seem readily overturned).

Caught In A Web

Other potential dodges or efforts to skirt the coming set of laws will indubitably be a keen topic for legal beagles and legal scholars. If an AI maker doesn't find a viable workaround, they are going to be subject to various fines and penalties. Those could add up.

For example, Illinois has a population of approximately twelve million people. Of those, suppose that half are using generic generative AI (that's a wild guess), and that half of those use the AI for mental health aspects from time to time (another wild guess). That would be three million people, and each time they use the AI for that purpose might be construed as a violation. If each person does so once per week, that's twelve million violations in a month. The Illinois law says that each violation carries a maximum fine of $10,000. We'll imagine that instead of the maximum, an AI maker gets fined a modest $1,000 per violation. In one month, based on this spitball conjecture, that could be $12 billion in fines. Even the richest tech firms are going to pay attention to that kind of fine.
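To make the spitball conjecture explicit, here is the same arithmetic written out as a small Python snippet; every number in it is the column's own guess rather than data.

```python
# The column's back-of-the-envelope fine estimate, spelled out.
# Every input is the author's own guess, not data.
illinois_population  = 12_000_000
ai_users             = illinois_population // 2   # "suppose that half are using generic generative AI"
mental_health_users  = ai_users // 2              # "half of those use the AI for mental health aspects"
violations_per_month = mental_health_users * 4    # one use per person per week, about four weeks a month
fine_per_violation   = 1_000                      # "a modest $1,000", well under the $10,000 statutory maximum

total_monthly_fines = violations_per_month * fine_per_violation
print(f"{violations_per_month:,} violations -> ${total_monthly_fines:,} per month")
# 12,000,000 violations -> $12,000,000,000 per month
```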
Plus, once other states go the same route, those figures multiply further, depending on how each additional state opts to penalize AI that goes over the line.

Crucial Juncture At Hand

An ongoing and vociferously heated debate concerns whether the use of generic generative AI for mental health advisement on a population-level basis is going to be a positive outcome or a negative outcome for society. If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to generic generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might be the worst destroyer of mental health in the history of humanity. See my analysis of the potential widespread impacts at the link here.

So far, AI makers have generally had free rein with their generic generative AI. It seems that the proverbial chickens have finally come home to roost. Gradually, new laws are going to be enacted that seek to prohibit generic generative AI from dispensing mental health advice absent a human therapist performing the counseling. Get yourself primed and ready for quite a royal battle that might determine the future mental status of us all.