ChatGPT 5 release date: what we know about OpenAI's next chatbot as rumours suggest summer release

OpenAI CEO Sam Altman continued: "Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn't always know which one, and you'd like to get the best response of 10,000 each time, and so that increase in reliability will be important."
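Altman's "best response of 10,000" point describes what is commonly called best-of-n sampling: ask the model the same question many times and keep the answer a scorer ranks highest. Below is a minimal sketch of the idea using the OpenAI Python SDK; the model name, temperature, and LLM-as-judge scorer are illustrative assumptions, not OpenAI's internal method.

```python
# A minimal sketch of best-of-n sampling: ask the same question n times and
# keep the answer a judge ranks highest. Model name, temperature, and the
# LLM-as-judge scorer are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def best_of_n(question: str, n: int = 5) -> str:
    # Altman's example used 10,000 samples; n=5 keeps the sketch affordable.
    candidates = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # sampling variety is the whole point
        )
        candidates.append(response.choices[0].message.content)

    # Stand-in scorer: ask the model itself to pick the best candidate.
    # A production system would more likely use a trained reward model.
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    verdict = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": (
            f"Question: {question}\n\nCandidate answers:\n{numbered}\n\n"
            "Reply with only the index of the best answer.")}],
    )
    return candidates[int(verdict.choices[0].message.content.strip())]
```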


Related Articles

AI helped design an innovative new cancer treatment plan

Yahoo · a day ago

Researchers may have come up with an interesting new treatment for cancer by talking to AI. According to a paper published this month, a research team led by the University of Cambridge turned to an 'AI scientist' powered by GPT-4 to help create a new AI-designed cancer treatment plan. The kicker? It only uses widely available drugs that have nothing to do with treating cancer.

The researchers started by taking all of the data they had on popular drugs used to treat high cholesterol and alcohol dependence, looking for hidden patterns that could point toward new cancer drug options. They prompted GPT-4 to identify combinations of those drugs that could have a significant impact on breast cancer cells.

The result is a new AI-designed cancer treatment plan that avoids standard cancer drugs and relies on drugs that will not target non-cancerous cells. The drugs the AI was prompted to look for were also meant to be widely available, affordable, and already approved by regulators. Considering how many different types of cancer treatment options we've seen in recent years, this approach makes a lot of sense. It also opened some new doors, according to the researchers' findings, which are published in the Journal of the Royal Society Interface.

We've seen a huge increase in researchers and doctors turning to AI to try to come up with new treatment options for old problems, including an AI that can identify autism. So it isn't surprising to see researchers once more turning to AI to help speed up scientific progress. It seems to have worked, too. According to the findings, the researchers tested the combinations suggested by the GPT-4 'scientist' and found that three of the 12 combinations worked better than current breast cancer drugs. They then fed that information back to the AI, which proposed four more combinations, three of which also showed a lot of promise.

Of course, relying wholly on AI-designed cancer treatment plans isn't something doctors are likely to do immediately. More trials and research are needed to fully test the efficacy of these drug combinations, and testing will also need to ensure there aren't adverse side effects from combining these drugs over extended periods of time. But for those fighting cancer right now, research like this is promising and could one day help scientists find even better treatment options. And even if the AI hallucinates, the information it gives may spark a new idea that scientists hadn't thought of before.

AI will never replace doctors, no matter how hard Google and others push for a future involving AI doctors. But by relying on AI to speed up research, scientists can potentially unlock new options they might not otherwise find for decades to come.
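The workflow the article describes, where GPT-4 proposes drug combinations, the lab tests them, and the results are fed back for another round, is an iterative prompting loop. Here is a minimal sketch of that loop, assuming the OpenAI Python SDK; the prompt wording, model name, and the stubbed assay step are illustrative, not the Cambridge team's actual pipeline.

```python
# A minimal sketch of the suggest -> test -> feed-back loop described above.
# Prompts, model name, and the stubbed assay step are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = ("You are a drug-repurposing assistant. Suggest combinations of "
          "approved, affordable, non-cancer drugs (for example, statins or "
          "alcohol-dependence treatments) that may act against breast cancer "
          "cells. Do not repeat combinations already tested.")

def run_cell_assays(proposals: str) -> str:
    # Placeholder for the wet-lab step; a real pipeline would return measured
    # efficacy for each proposed combination.
    return "3 of 12 combinations outperformed a current breast cancer drug"

history = [{"role": "user", "content": "Propose 12 candidate combinations."}]
for _ in range(2):  # the study ran at least two suggest/test rounds
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM}, *history],
    )
    proposals = response.choices[0].message.content
    results = run_cell_assays(proposals)
    history += [
        {"role": "assistant", "content": proposals},
        {"role": "user", "content": f"Assay results: {results}. Propose new "
                                    "combinations informed by these results."},
    ]
```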

Can AI Come Up With Better Ideas Than People? Sort Of, Science Says.

Entrepreneur · 2 days ago

This story appears in the May 2025 issue of Entrepreneur.

Need a great idea? Ask the masses. "Crowdsourcing" has become a popular way for companies to gather insights, because when you bring together people with diverse knowledge and approaches, you can find unexpected genius. But crowdsourcing is time-consuming and expensive. Could generative AI help? Researchers at Harvard Business School and the University of Washington's Foster School of Business decided to find out.

The research

The study set up a crowdsourcing challenge. The question was: "Who has ideas for how companies can implement the circular economy into their business to make them more climate-friendly?" First, the researchers invited real people to submit their suggestions and ended up with 125 ideas. Next, they prompted ChatGPT's GPT-4 with the same question and slight variations, which produced 730 solutions. All the ideas were reviewed by 300 human judges, who evaluated them on novelty, value, and quality.

The results

The human-generated solutions were more novel, but in many ways the AI-prompted concepts were better: higher quality, more strategically viable, and of greater financial and environmental value. The economics of the project were stark. Producing the 125 human submissions required a combined 2,520 hours of work and $2,555 to cover expenses; getting the 730 solutions from AI took just 5.5 hours and cost only $27.01.

What we've learned

Generative AI is great at generating ideas, but it can't do it all on its own. To succeed, you need to give the tool very clear, thoughtful prompts, says study investigator Léonard Boussioux, an assistant professor at the University of Washington's Foster School of Business who is also affiliated with Harvard Business School. In their experiment, he and his colleagues tried two distinct strategies. In the first, along with the challenge question, they asked GPT-4 to assume 100 different personas (executive, manager, entrepreneur) to replicate the variety of humans who might reply to a crowdsourcing request. In the second, they prompted the AI with the same question, got a first answer, and then followed up by asking, "Make sure to tackle a different problem than the previous ones and propose a different solution." They repeated this 100 times, with a different persona each time, as if one person were being pushed to give various perspectives. The result: the second approach led to more novel, viable, and higher-quality ideas.

How to use this

Let's say your company is considering a pivot. "The quality of your questioning and critical thinking is what matters most," Boussioux says. "But AI can help you find good starting questions. Describe your company, the stakeholders, your goals. Include as much data as you can: a blog post about what you do, financial spreadsheets, a white paper. Then ask something like, 'I'm looking maybe to pivot. Can you suggest a direction to investigate?'" This can get you going. Try iterative prompting, but don't do it 100 times like the study did. "We found that if you keep asking for too long, the final ideas are potentially more creative, but they're also not as feasible or valuable," Boussioux says. The sweet spot? Two to four rounds.
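For readers who want to try the two strategies Boussioux describes, here is a minimal sketch assuming the OpenAI Python SDK; the personas, model name, and round count are illustrative, not the study's exact protocol.

```python
# A minimal sketch of the study's two prompting strategies. Personas, model
# name, and round count are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = ("Who has ideas for how companies can implement the circular "
            "economy into their business to make them more climate-friendly?")
PERSONAS = ["sustainability executive", "plant manager", "startup founder"]

def ask(messages: list[dict]) -> str:
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# Strategy 1: independent personas, one fresh answer each.
independent_ideas = [
    ask([{"role": "system", "content": f"You are a {persona}."},
         {"role": "user", "content": QUESTION}])
    for persona in PERSONAS
]

# Strategy 2: one running conversation where every follow-up turn is pushed
# to differ from earlier answers (the approach the study found worked best).
messages = [{"role": "user", "content": QUESTION}]
iterative_ideas = []
for _ in range(3):  # the study looped 100 times; 2-4 rounds is the advice above
    answer = ask(messages)
    iterative_ideas.append(answer)
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Make sure to tackle a different problem "
                                    "than the previous ones and propose a "
                                    "different solution."},
    ]
```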

Things Humans Still Do Better Than AI: Understanding Flowers

Gizmodo · 2 days ago

While it might feel as though artificial intelligence is getting dangerously smart, there are still some basic concepts that AI doesn't comprehend as well as humans do. Back in March, we reported that popular large language models (LLMs) struggle to tell time and interpret calendars. Now, a study published earlier this week in Nature Human Behaviour reveals that AI tools like ChatGPT also fall short of humans at understanding familiar concepts, such as flowers. According to the paper, accurately representing physical concepts is challenging for machine-learning models trained solely on text and, in some cases, images.

'A large language model can't smell a rose, touch the petals of a daisy or walk through a field of wildflowers,' Qihui Xu, lead author of the study and a postdoctoral researcher in psychology at Ohio State University, said in a university statement. 'Without those sensory and motor experiences, it can't truly represent what a flower is in all its richness. The same is true of some other human concepts.'

The team tested humans and four AI models (OpenAI's GPT-3.5 and GPT-4, and Google's PaLM and Gemini) on their conceptual understanding of 4,442 words, including terms like flower, hoof, humorous, and swing. Xu and her colleagues compared the outcomes to two standard sets of psycholinguistic ratings: the Glasgow Norms (words rated on feelings such as arousal, dominance, and familiarity) and the Lancaster Norms (words rated on sensory perceptions and bodily actions). The Glasgow Norms approach saw the researchers asking questions like how emotionally arousing a flower is, and how easy it is to imagine one. The Lancaster Norms, on the other hand, involved questions like how much one can experience a flower through smell, and how much a person can experience a flower with their torso.

Compared with humans, LLMs demonstrated a strong understanding of words without sensorimotor associations (concepts like 'justice'), but they struggled with words linked to physical concepts (like 'flower,' which we can see, smell, and touch). The reason is rather straightforward: ChatGPT doesn't have eyes, a nose, or sensory neurons (yet), so it can't learn through those senses. The best it can do is approximate, even though these models train on more text than a person experiences in an entire lifetime, Xu explained.

'From the intense aroma of a flower, the vivid silky touch when we caress petals, to the profound visual aesthetic sensation, human representation of 'flower' binds these diverse experiences and interactions into a coherent category,' the researchers wrote in the study. 'This type of associative perceptual learning, where a concept becomes a nexus of interconnected meanings and sensation strengths, may be difficult to achieve through language alone.'

In fact, the LLMs trained on both text and images demonstrated a better understanding of visual concepts than their text-only counterparts. That's not to say, however, that AI will forever be limited to language and visual information. LLMs are constantly improving, and they might one day better represent physical concepts via sensorimotor data and/or robotics, according to Xu. The research she and her colleagues conducted carries important implications for AI-human interactions, which are becoming increasingly (and, let's be honest, worryingly) intimate. For now, however, one thing is certain: 'The human experience is far richer than words alone can hold,' Xu concluded.
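The Lancaster-style part of the comparison can be approximated in a few lines: ask a model to rate words on a sensory dimension, then correlate its ratings with human norms. The sketch below assumes the OpenAI Python SDK and SciPy; the prompt wording, 0-5 scale, and the stand-in human values are illustrative, not the study's materials.

```python
# A minimal sketch of the norm-comparison setup: ask an LLM to rate words on a
# Lancaster-style sensory dimension and correlate the ratings with human
# norms. Prompt wording, scale, and the tiny word list with stand-in human
# values are illustrative assumptions.
from openai import OpenAI
from scipy.stats import pearsonr

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

WORDS = ["flower", "hoof", "humorous", "swing", "justice"]
HUMAN_SMELL_NORMS = {"flower": 4.8, "hoof": 1.9, "humorous": 0.3,
                     "swing": 0.7, "justice": 0.1}  # stand-in values

def rate_smell(word: str) -> float:
    """Ask the model how much the word's referent can be experienced by smell."""
    prompt = (f"On a scale from 0 (not at all) to 5 (very much), to what "
              f"extent can you experience '{word}' through smell? "
              "Answer with a single number.")
    response = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    # Assumes the model obeys the single-number instruction.
    return float(response.choices[0].message.content.strip())

model_ratings = [rate_smell(w) for w in WORDS]
human_ratings = [HUMAN_SMELL_NORMS[w] for w in WORDS]
r, p = pearsonr(model_ratings, human_ratings)
print(f"model-human alignment on the smell dimension: r={r:.2f} (p={p:.3f})")
```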
