
Latest news with #AIethics

Infamous summer reading list shows the perils of AI beyond just energy use: 'Completely embarrassed'

Yahoo

2 days ago

  • Business
  • Yahoo

Infamous summer reading list shows the perils of AI beyond just energy use: 'Completely embarrassed'

A major newspaper in the United States has rightly come under fire after a lack of editorial oversight led to the publication of false information. As detailed by The Verge, the May 18 issue of the Chicago Sun-Times featured a summer reading guide recommending fake books generated by artificial intelligence. To make matters more concerning, other articles were found to include quotes and citations from people who do not appear to exist. The summer reading list included fake titles by real authors alongside actual books.

The Sun-Times admitted in a post on Bluesky that the guide was "not editorial content and was not created by, or approved by, the Sun-Times newsroom," and added that it was "looking into how this made it into print." In a statement later published on the newspaper's website, the Sun-Times revealed that the guide was "licensed from a national content partner" and said it was removing the section from all digital editions while updating its policies on publishing third-party content to prevent similar mistakes in the future.

According to The Verge, the reading list was published without a byline, but a writer named Marco Buscaglia was credited for other pieces in the summer guide. Buscaglia was found to have written other pieces that quote and cite sources and experts who do not appear to be real. Buscaglia admitted to 404 Media that he uses artificial intelligence "for background at times," but claimed he always checks the material. "This time, I did not, and I can't believe I missed it because it's so obvious. No excuses," Buscaglia told 404 Media. "On me 100 percent and I'm completely embarrassed."

This is yet another incident that highlights the importance of maintaining professional standards and ensuring that AI-generated content is properly vetted before publication. In an age where misinformation can spread quickly, it is up to leading news outlets like the Sun-Times to avoid these mistakes so they don't lose the trust of the general public.

On a broader level, AI is an energy-intensive field that carries significant environmental concerns. The International Energy Agency published a report warning that electricity consumption from the data centers that power AI is expected to double by 2026, reaching a level "roughly equivalent to the electricity consumption of Japan." It's important to stay informed on critical climate issues and efforts to reduce energy consumption amid the ongoing evolution of AI technology.

As AI models start exhibiting bad behavior, it's time to start thinking harder about AI safety

Yahoo

3 days ago

  • Business
  • Yahoo

As AI models start exhibiting bad behavior, it's time to start thinking harder about AI safety

Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

During the first two years of the generative artificial intelligence boom, new large language models were very limited in scope and application. They were very expensive auto-complete engines that understood only words. In 2025, generative AI models have a much broader view of the world. They can process code, images, video, and audio. They can reason and strategize about delivering a good answer. They can control external tools, including digital tools such as web search agents and, increasingly, physical tools like robots. As their capabilities grow, so does their potential for harm.

This is no longer a purely conceptual argument. Research shows that increasingly large models are already showing a proclivity for unsafe behavior during testing. In a model safety card published last week, Anthropic documented some alarming behavior from its newest and biggest model, Claude Opus 4. During safety testing, one instance of Opus was allowed to discover plans for its decommissioning. It was also given access to some fictional emails of its developers. The model used the content of those emails as fodder to attempt to blackmail its human handlers into keeping it alive. As those attempts failed, its tactics moved from subtle to more overt. Separately, the independent research firm Apollo Research observed an instance of Claude Opus 4 'writing self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself' with the goal of undermining its developers' intentions. Anthropic says that it corrected these early safety issues in later versions of the model. For the first time, Anthropic bumped the new Opus model up to Level 3 on its four-level safety scale. The company said it couldn't rule out the model's ability to assist a user in developing a mass casualty weapon.

But powerful AI models can work in subtler ways, such as within the information space. A team of Italian researchers found that ChatGPT was more persuasive than humans in 64% of online debates. The AI was also better than humans at leveraging basic demographic data about its human debate partner to adapt and tailor its arguments to be more persuasive.

Another worry is the pace at which AI models are learning to develop AI models, potentially leaving human developers in the dust. Many AI developers already use some kind of AI coding assistant to write blocks of code or even code entire features. At a higher level, smaller, task-focused models are distilled from large frontier models. AI-generated content plays a key role in training, including in the reinforcement learning process used to teach models how to reason. There's a clear profit motive in enabling the use of AI models in more aspects of AI tool development. 'Future systems may be able to independently handle the entire AI development cycle—from formulating research questions and designing experiments to implementing, testing, and refining new AI systems,' write Daniel Eth and Tom Davidson in a March 2025 blog post. With slower-thinking humans unable to keep up, a 'runaway feedback loop' could develop in which AI models 'quickly develop more advanced AI which would itself develop even more advanced AI,' resulting in extremely fast AI progress, Eth and Davidson write. Any accuracy or bias issues present in the models would then be baked in and very hard to correct, one researcher told me.

Numerous researchers, the people who actually work with the models up close, have called on the AI industry to 'slow down,' but those voices compete with powerful systemic forces that are in motion and hard to stop. Journalist and author Karen Hao argues that AI labs should focus on creating smaller, task-specific models (she gives Google DeepMind's AlphaFold models as an example), which may help solve immediate problems more quickly, require fewer natural resources, and pose a smaller safety risk. DeepMind cofounder Demis Hassabis, who won the Nobel Prize for his work on AlphaFold2, says the huge frontier models are needed to achieve AI's biggest goals (reversing climate change, for example) and to train smaller, more purpose-built models. And yet AlphaFold was not 'distilled' from a larger frontier model. It uses a highly specialized model architecture and was trained specifically for predicting protein structures.

The current administration is saying 'speed up,' not 'slow down.' Under the influence of David Sacks and Marc Andreessen, the federal government has largely ceded its power to meaningfully regulate AI development. Just last year, AI leaders were still giving lip service to the need for safety and privacy guardrails around big AI models. No more. Any friction has been removed, in the U.S. at least. The promise of this kind of world is one of the main reasons why normally sane and liberal-minded opinion leaders jumped on the Trump train before the election: the chance to bet big on technology's next big thing in a Wild West environment doesn't come along that often.

Anthropic CEO Dario Amodei has a stark warning for the developed world about job losses resulting from AI. The CEO told Axios that AI could wipe out half of all entry-level white-collar jobs. This could cause a 10% to 20% rise in the unemployment rate in the next one to five years, Amodei says. The losses could come from tech, finance, law, consulting, and other white-collar professions, and entry-level jobs could be hit hardest. Tech companies and governments have been in denial on the subject, Amodei says. 'Most of them are unaware that this is about to happen,' Amodei told Axios. 'It sounds crazy, and people just don't believe it.' Similar predictions have made headlines before but were narrower in focus. SignalFire research showed that Big Tech companies hired 25% fewer college graduates in 2024. Microsoft laid off 6,000 people in May, and 40% of the cuts in its home state of Washington were software engineers. Microsoft CEO Satya Nadella said that AI now generates 20% to 30% of the company's code. A study by the World Bank in February showed that the risk of losing a job to AI is higher for women, urban workers, and those with higher education. The risk of job loss to AI increases with the wealth of the country, the study found.

U.S. generative AI companies appear to be attracting more venture capital money than their Chinese counterparts so far in 2025, according to new research from the data analytics company GlobalData. Investments in U.S. AI companies exceeded $50 billion in the first five months of 2025. China, meanwhile, struggles to keep pace due to 'regulatory headwinds,' though many Chinese AI companies are able to get early-stage funding from the Chinese government. GlobalData tracked just 50 funding deals for U.S. companies in 2020, amounting to $800 million of investment. The number grew to more than 600 deals in 2024, valued at more than $39 billion. The research shows 200 U.S. funding deals so far in 2025. Chinese generative AI companies attracted a single deal, valued at $40 million, in 2020. Deals grew to 39 in 2024, valued at around $400 million, and the researchers tracked 14 investment deals for Chinese generative AI companies so far in 2025. 'This growth trajectory positions the U.S. as a powerhouse in GenAI investment, showcasing a strong commitment to fostering technological advancement,' says GlobalData analyst Aurojyoti Bose in a statement. Bose cited the well-established venture capital ecosystem in the U.S., along with a permissive regulatory environment, as the main reasons for the investment growth.

A Prelude To The Ethics Of Artificial Intelligence

Forbes

4 days ago

  • Business
  • Forbes

A Prelude To The Ethics Of Artificial Intelligence

Gone are the days when your company or organization must decide if it will use artificial intelligence. It is now just a matter of how. With the vast increase in AI use, it was inevitable that some ethically questionable use cases would pop up. Students using chatbots like ChatGPT to write papers is an obvious example, but the reverse is equally worrying. As reported in The New York Times, a business professor at a Boston-area university was allegedly using ChatGPT to grade papers and mistakenly left the prompt in when returning the comments to students. Given the soaring cost of higher education, the student was understandably concerned and requested a tuition refund for the course.

While this situation is clearly ethically compromised (don't tell your students or employees not to use chatbots and then turn around and do it yourself), the majority of AI practices likely fall into a gray area. It would therefore be handy to have black-and-white ethical guidelines. In theory, that's not too much to ask. In practice, it would take an entire career of research, writing, and teaching to fully flesh out all the ethical implications associated with generative models. But there is a distinction that allows us to establish some general best practices when dealing with AI.

The discussion of how to ethically approach artificial intelligence or machine learning began long before the actual technology emerged. The genesis can likely be traced to the landmark 1950 paper 'Computing Machinery and Intelligence' by Alan Turing. The paper introduced the concept of the Turing test, a method for determining whether a machine can exhibit what humans understand as intelligence. In simplest terms, the Turing test puts a machine behind a curtain and asks whether a human, posing it a series of questions from the other side of that curtain, can tell that it is a machine. If the person cannot discern whether a machine or another person is giving the responses, the machine passes the test.

For decades, essentially no machine could pass the Turing test. What we were left with was a technology that is not intelligent by human standards and is therefore an object. That determination shapes the ethical conversation around such a machine. You do not need to treat it as something with agency. Rather, it should be viewed as any other tool, a means to an end. Examples of this kind of object technology include computers, telephones, and automobiles. The ethical questions that come up for these machines are not about the things in themselves but about them as objects for our use, such as issues of equality of access, potential programming bias, or the privacy of the information they store.

Although ChatGPT and other large language models may exhibit certain patterns in their responses that can help identify them as machines, such as tone or consistency, those patterns are far from easy to notice. A Stanford University study from last year found that ChatGPT did pass the Turing test, and the technology has only gotten better since. What this means is that ChatGPT and similar AI have human-like intelligence in the sense that they are not discernibly different to the naked eye. In other words, we may have crossed into the machines-as-subjects era. By extension, they should be treated as ends in themselves. According to the 2007 AI Magazine article 'Machine Ethics: Creating an Ethical Intelligent Agent,' treating AI as a subject means that ethical questions about it should be 'concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable.' The ethical landscape here is about the things in themselves, how they behave, and how you act or relate to them, accounting for societal values, context, and logic. In other words, the ethics of human relationships.

AI is bringing change in all areas of life. But is it a subject or an object? In subtle but significant ways, you can make a case for both. We can be sure it is not neutral. Only by solving this riddle can we deal with the difficult ethical questions that come with the technology.

Judge considers sanctions against attorneys in prison case for using AI in court filings

The Independent

21-05-2025

  • The Independent

Judge considers sanctions against attorneys in prison case for using AI in court filings

A federal judge said Wednesday that she is considering sanctions against lawyers with a high-priced firm hired to defend Alabama's prison system after ChatGPT was used to write two court filings that included nonexistent case citations.

U.S. District Judge Anna Manasco held a hearing in Birmingham to question attorneys with the Butler Snow firm about the filings. She said there were five false citations in two filings in federal court. Manasco said that nationally, there have been broad warnings from courts about the use of artificial intelligence to generate legal filings because of the potential for inaccuracies. Manasco said she is considering a range of sanctions, including fines. She gave the firm 10 days to file a brief with the court.

Butler Snow lawyers repeatedly apologized during the hearing. They said a firm partner, Matt Reeves, used ChatGPT to research supporting case law but did not verify the information before adding it to two filings with the federal court. Those citations turned out to be 'hallucinations,' fabricated by the AI system, they said. Four attorneys signed the filings containing the information, including Reeves. 'Butler Snow is embarrassed by what happened here, which was against good judgment and firm policy. There is no excuse for using ChatGPT to obtain legal authority and failing to verify the sources it provided, even if to support well founded principles of law,' firm lawyers wrote in a response to the judge. Reeves told the judge that he alone was responsible for the false citations and that, 'I would hope your honor would not punish my colleagues.'

Alabama has paid millions of dollars to the firm to defend the state prison system and its officials in lawsuits. That includes representing the state as a defendant in a Department of Justice lawsuit alleging that male inmates live in violent and cruel conditions. The filings in question were made in a lawsuit filed by an inmate who was stabbed on multiple occasions at the William E. Donaldson Correctional Facility in Jefferson County. It alleges that prison officials are failing to keep inmates safe.

Manasco also questioned Bill Lunsford, head of the Butler Snow division that handles prison litigation, who signed the filings. Alabama's attorney general has appointed Lunsford as a deputy attorney general because he represents the state in court. Lunsford wrote in a response to the judge that he scanned the documents before filing them but did not do a detailed review because they had been reviewed by Reeves. He told the judge that the firm has been proactive in warning lawyers about the limitations of artificial intelligence.

