Latest news with #AI-produced
Yahoo
2 days ago
- Business
More than 2 years after ChatGPT, newsrooms still struggle with AI's shortcomings
An inaccurate AI-produced reading list recently published by two newspapers demonstrates just how easy it still is for publishers to circulate AI slop.

The Chicago Sun-Times and the Philadelphia Inquirer last week published a summer reading insert produced by King Features, a Hearst Newspapers subsidiary that provides the pair with licensed content. While the insert included real authors, the recommended books were mostly fake. Ultimately, 404 Media found that a human writer had produced the list using ChatGPT and failed to fact-check it.

'I do use AI for background at times but always check out the material first,' the insert's writer told 404 Media. 'This time, I did not and I can't believe I missed it because it's so obvious. No excuses.'

OpenAI's launch of ChatGPT more than two years ago kicked off an AI gold rush, resulting in a deluge of AI-infused tools aiming to help people find information online without sifting through lists of links. But that convenience comes at a cost, with AI chatbots continuing to offer incorrect or speculative responses.

Newsrooms have adopted AI chatbots with some trepidation, aware that the technology opens up new opportunities as well as the potential for high-profile blunders, all amid fears that AI could lead to job losses and eat into news outlets' revenue sources. Not adopting the technology, however, means risking being left behind as others use AI to comb through enormous datasets, incubate ideas and help readers navigate complicated narratives.

Though many major newsrooms have adopted AI guidelines since ChatGPT's launch, the sheer size of some newsrooms' staffs, coupled with multiple external partnerships, makes it hard to pinpoint where embarrassing AI blunders can occur. The insert incident exemplifies the myriad ways AI errors can be introduced into news products.

Most supplements that the Sun-Times has run this year, from puzzles to how-to guides, have come from Hearst, Tracy Brown, the chief partnerships officer for Sun-Times parent Chicago Public Media, told CNN. However, whether it's an insert or a full-length story, Brown stressed that newsrooms have to use AI carefully. 'It's not that we're saying that you can't use any AI,' she said. 'You have to use it responsibly and you have to do it in a way that keeps your editorial standards and integrity intact.'

It's precisely because AI is prone to errors that newsrooms must maintain the 'fundamental standards and values that have long guided their work,' Peter Adams, a senior vice president of research and design at the News Literacy Project, told CNN. That includes being transparent about using AI in the first place.

Many high-profile publishers have been candid about how their newsrooms use AI to bolster reporting. The Associated Press, considered by many within the news industry to be the gold standard for journalism practices, has used AI for translation, summaries and headlines, and has avoided gaffes by always including a human backstop. Amanda Barrett, the AP's vice president of standards, told CNN that any information gathered using AI tools is considered unvetted source material, and reporters are responsible for verifying AI-produced information. The AP also checks that its third-party partners have similar AI policies.

'It's really about making sure that your standards are compatible with the partner you're working with and that everyone's clear on what the standard is,' Barrett said.
Zack Kass, an AI consultant and former OpenAI go-to-market lead, echoed Barrett, telling CNN that newsrooms need to treat AI 'like a junior researcher with unlimited energy and zero credibility.' This means that AI writing should be 'subject to the same scrutiny as a hot tip from an unvetted source.'

'The mistake is using it like it's a search engine instead of what it really is: an improviser with a genius-level memory and no instinct for truth,' he added.

High-profile AI mistakes in newsrooms, when they happen, tend to be very embarrassing. Bloomberg News' AI summaries, for example, were announced in January and have already included several errors. The LA Times' Insights AI tool sympathized with the KKK within 24 hours of its March launch. And in January, Apple pulled an Apple Intelligence feature that had incorrectly summarized push notifications from news outlets.

And those are just the recent examples. For years, newsrooms have struggled when AI has been allowed to proceed unchecked. Gannett in 2023 was forced to pause an AI experiment after several major errors in high school sports articles, and CNET that same year published several inaccurate stories. Still, as Felix Simon, a research fellow in AI and digital news at the University of Oxford's Reuters Institute for the Study of Journalism, points out, 'the really egregious cases have been few and far between.'

New research innovations have reduced hallucinations, or false answers from AI, by pushing chatbots to spend more time thinking before responding, Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, told CNN. But the models are not infallible, which is why these incidents still occur. 'AI companies need to do a better job communicating to users about the potential for errors, since we have repeatedly seen examples of users misunderstanding how to use technology,' Callison-Burch said.

According to Brown, all editorial content at the Sun-Times is produced by humans. Looking forward, the newspaper will ensure that editorial partners, like King Features, uphold those same standards, just as it already ensures that freelancers' codes of ethics mirror its own.

But the 'real takeaway,' as Kass put it, isn't just that humans are needed; it's 'why we're needed.'

'Not to clean up after AI, but to do the things AI fundamentally can't,' he said. '(To) make moral calls, challenge power, understand nuance and decide what actually matters.'
Yahoo
07-05-2025
- Business
ServiceNow, Nvidia develop LLM to fuel enterprise agents
This story was originally published on CIO Dive.

Dive Brief:
- ServiceNow and Nvidia developed an open-source AI model to fuel enterprise agents as part of an expanded partnership introduced Tuesday at ServiceNow's annual Knowledge conference.
- The pair designed Apriel Nemotron 15B to provide agentic capabilities at a lower cost. ServiceNow and Nvidia said the model delivers lower latency and inference costs, making it faster and cheaper to run on Nvidia's infrastructure. Customers can begin accessing the LLM in Q2 2025.
- ServiceNow and Nvidia also unveiled a joint data flywheel architecture, or data feedback loop, to improve AI agent performance by integrating select Nvidia NeMo microservices with ServiceNow's Workflow Data Fabric. The architecture 'enables a closed-loop learning process that improves model accuracy and adaptability,' the companies said.

Dive Insight:
Enterprises are exploring AI agent use cases and the value that agentic services can bring, but leaders have concerns. Technical and organizational challenges are plentiful, from governing the tools to securing an ever-expanding attack surface. Nearly 2 in 5 businesses also say integrating the technology with current systems is very or extremely challenging, according to a Cloudera report published last month.

Workers have expressed concerns, too, especially about accuracy and reliability. More than one-third of workers said AI-produced work is subpar compared with their own, according to a Pegasystems survey.

CIOs, however, continue to have high hopes for the technology. Businesses have mostly aimed implementation efforts at the IT service desk, data processing and code development, according to a report. Walmart is an early adopter, giving developers access to agents designed to identify accessibility gaps in code. Expedia is in the experimental phase, letting employees try AI agents in a generative AI playground. Other companies on the agent deployment path include Toyota Motor Corporation, Estée Lauder Companies and KPMG.

For their part, vendors are working to ease concerns and lower barriers to entry. Most technology providers have curated toolkits and services to accelerate agent development and implementation for enterprise customers. ServiceNow added thousands of pre-built AI agents earlier this year to kickstart adoption among customers. Salesforce expanded its model lineup last week in support of agentic AI, adding variations to increase accessibility and deployment flexibility across potential environments. Snowflake released its own agents in February that can identify fiscal year start dates, explain internal naming conventions and prioritize key tables during SQL generation.
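Because the announcement describes Apriel Nemotron 15B as open-source, one plausible way to experiment with it, once the weights are publicly released, is through the standard Hugging Face transformers API. The sketch below is illustrative only: the repository ID is an assumption rather than a confirmed release path, the example ticket-triage prompt is hypothetical, and the actual model may ship with its own tooling on Nvidia's infrastructure.

# Minimal sketch, assuming the model is published as open weights on
# the Hugging Face Hub. The repo ID below is an assumption for
# illustration; check ServiceNow's Hub organization for the real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ServiceNow-AI/Apriel-Nemotron-15B"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # spread layers across available GPUs
)

# An agent-style instruction: triaging an IT ticket, the kind of
# enterprise-workflow task the announcement targets.
messages = [
    {"role": "user",
     "content": "Categorize this IT ticket: 'VPN drops every 10 minutes on my corporate laptop.'"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))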


Independent Singapore
01-05-2025
- Business
Rise of the robot colleague: Why workers are choosing AI over each other
INTERNATIONAL: Today's workplaces have changed significantly: more and more workers are turning to artificial intelligence (AI) for ideas and emotional support instead of their human officemates, a trend that has rewritten the rules of workplace culture. According to Microsoft's latest Work Trend Index report, covered by HRD Asia, an increasing number of professionals see AI as a collaborative partner rather than a mechanical tool, a 'different kind of team member.'

AI becomes the ultimate brainstorm buddy

Nearly half (46%) of employees now see AI as a dependable and consistent 'thought partner': a brainstorm buddy that can spur creativity, challenge their thinking, and test their perceptions. Rather than relying solely on human colleagues, many workers have opted for AI because of its practical advantages. Employees said they turn to AI because it is always accessible (42%), provides rapid responses and superior results (30%), and delivers a continuous stream of novel ideas (28%). Beyond these advantages, AI doesn't get tired, offering what respondents described as unlimited capacity (23%). In short, AI is prepared and equipped to work at any time of day.

Dodging drama: Why some avoid human help

For many workers, the move towards AI is not merely about competence or productivity; it is also about avoiding the discomfort of working with colleagues, especially those with attitude problems. Respondents to the survey cited anxiety over judgment (17%), personal tensions (16%), and highly demanding colleagues (15%) as reasons they chose AI. A smaller group (8%) even said they try to sidestep collaboration or teamwork tasks because co-workers tend to claim credit for work they didn't do.

'It's a mindset shift,' said Conor Grennan, chief AI architect at NYU Stern School of Business. 'We've been trained to see technology as a tool, but AI is something different — it's like a new teammate.'

From HR to IR: The next evolution in the workplace

As AI becomes a vital player in everyday work, businesses are starting to reconsider how teams should work. The report suggests that all workers will need to build skills for working well with AI, such as writing better prompts, vetting AI-produced replies, and detecting flawed reasoning. Specialists even forecast the rise of an entirely new division to support this shift. 'Just as HR and IT became foundational functions, we'll likely see the rise of Intelligence Resources departments,' said Harvard's Karim Lakhani. These divisions would help oversee how humans and AI interact and collaborate, eventually becoming a key driver of competitive advantage in AI-first organizations.

Yahoo
29-01-2025
- Politics
The Vatican urges ethical AI use in warfare and healthcare
This story incorporates reporting from Angelus, Catholic News Agency and The New York Times.

The Vatican has released a comprehensive document offering new guidelines for the ethical development and use of artificial intelligence, with a focus on areas such as warfare and healthcare. The document underscores the importance of human agency in directing AI's application, warning against its misuse and its potential to undermine societal trust. Key themes emphasize that AI, while capable of performing complex tasks, lacks the intrinsic qualities of human intelligence, such as empathy and moral judgment.

Crafted over six months with input from various experts, the Vatican's paper introduces a Christian framework for understanding intelligence as an inherent gift, urging a renewed appreciation for human interrelations and truth-seeking processes. The document offers guidance for AI development, advocating respect for and promotion of the intrinsic dignity of each person. A human-centered approach is recommended to ensure AI's integration into society benefits humanity without eroding essential values.

The Vatican cautions that AI systems, if left unchecked, may pose significant risks, particularly through the spread of misinformation. Echoing past warnings, the document stresses AI's potential to disrupt social trust and damage the foundational structures of societies. The warning includes the possibility of AI-produced fake media gradually eroding societal cohesion and trust.

The document also calls attention to AI's role in healthcare, highlighting its transformative potential to improve patient outcomes. Nevertheless, it reminds developers and practitioners that true education and decision-making require human involvement and discernment. The Vatican's initiative emphasizes that AI should not replace human judgment in critical areas but rather complement it to enhance efficiency and effectiveness.

On warfare, the document advocates for regulations ensuring AI technologies are not weaponized in ways that contravene humanitarian principles. The Vatican stresses ethical considerations in military applications, urging international cooperation to prevent the development of autonomous systems that could make lethal decisions without human oversight.

Pope Francis, addressing leaders at the World Economic Forum in Davos, underscored these themes. He urged political, economic and business leaders to consider AI's broader societal impacts, arguing for frameworks that prevent harm and promote human dignity. By engaging such stakeholders, the Vatican aims to foster collaborative efforts toward responsible AI governance on a global scale.

Ultimately, the Vatican's guidelines encourage deeper engagement with the humanities, suggesting that AI's rise should inspire renewed interest in understanding and valuing the human condition. This approach positions AI as a tool for enhancing, not diminishing, human creativity, empathy and moral responsibility. Through continued dialogue and regulation, the Vatican hopes to steer AI development towards a future that aligns with ethical and spiritual values.

Quartz Intelligence Newsroom uses generative artificial intelligence to report on business trends. This is the first phase of an experimental new version of reporting. While we strive for accuracy and timeliness, due to the experimental nature of this technology we cannot guarantee that we'll always be successful in that regard.