
Poll finds public turning to AI bots for news updates
People are increasingly turning to generative artificial intelligence chatbots like ChatGPT to follow day-to-day news, a respected media report published today found.
The yearly survey from the Reuters Institute for the Study of Journalism found "for the first time" that significant numbers of people were using chatbots to get headlines and updates, director Mitali Mukherjee wrote.
The Reuters Institute is attached to Britain's Oxford University, and its annual report is seen as unmissable reading for people following the evolution of the media.
Just 7% of people report using AI to find news, according to the poll of 97,000 people in 48 countries, carried out by YouGov. But the proportion is higher among the young, at 12% of under-35s and 15% of under-25s.
OpenAI's ChatGPT, the biggest name among the chatbots, is the most widely used, followed by Google's Gemini and Meta's Llama.
Respondents appreciated relevant, personalised news from chatbots.
Many more used AI to summarise (27%), translate (24%) or recommend (21%) articles, while almost one in five asked questions about current events.
Distrust remains, with those polled on balance saying AI risked making the news less transparent, less accurate and less trustworthy.
Rather than being explicitly programmed, today's powerful AI 'large language models' (LLMs) are 'trained' on vast quantities of data from the web and other sources, including news media such as text articles and video reports.
Once trained, they are able to generate text and images in response to users' natural-language queries.
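For readers curious what querying such a model looks like in practice, here is a minimal sketch of asking a chatbot for a news-style summary. It assumes the OpenAI Python SDK (version 1.x) with an API key set in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative, not drawn from the report.

```python
# Minimal sketch: asking a chat-style LLM for a plain-language news summary.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a concise news assistant."},
        {"role": "user", "content": "Summarise today's top technology headlines in three short bullet points."},
    ],
)

print(response.choices[0].message.content)
```

The same request-and-response pattern underlies the summarising, translating and recommending uses the survey describes; the model has no live news feed of its own, so whatever it returns reflects its training data plus any text supplied in the prompt.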
But they present problems, including 'hallucinations' - the term used when an AI invents information that fits patterns in its training data but is not true.
Scenting a chance at revenue in a long-squeezed market, some news organisations have struck deals to share their content with developers of AI models.
Agence France-Presse (AFP) allows the platform of French AI firm Mistral to access its archive of news stories going back decades.
Other media have launched copyright cases against AI makers over alleged illegal use of their content, for example the New York Times against ChatGPT developer OpenAI.
The Reuters Institute report also pointed to traditional media - TV, radio, newspapers and news sites - losing ground to social networks and video-sharing platforms.
Almost half of 18-24-year-olds report that social media like TikTok is their main source of news, especially in emerging countries like India, Brazil, Indonesia and Thailand.
The institute found that many are still using Elon Musk-owned social media platform X for news, despite a rightward shift since the world's richest man took it over.
"Many more right-leaning people, notably young men, have flocked to the network, while some progressive audiences have left or are using it less frequently," the authors wrote.