New GOP bill would protect AI companies from lawsuits if they offer transparency

NBC News · 12-06-2025
Sen. Cynthia Lummis, R-Wyo., is introducing legislation Thursday that would shield artificial intelligence developers from an array of civil liability lawsuits provided they meet certain disclosure requirements.
Lummis' bill, the Responsible Innovation and Safe Expertise Act, seeks to clarify that doctors, lawyers, financial advisers, engineers and other professionals who use AI programs in their decision-making retain legal liability for any errors they make — so long as AI developers publicly disclose how their systems work.
'This legislation doesn't create blanket immunity for AI — in fact, it requires AI developers to publicly disclose model specifications so professionals can make informed decisions about the AI tools they choose to utilize,' Lummis, a member of the Commerce Committee, said in a statement first shared with NBC News. 'It also means that licensed professionals are ultimately responsible for the advice and decisions they make. This is smart policy for the digital age that protects innovation, demands transparency, and puts professionals and their clients first.'
Lummis' office touted the bill as the first piece of federal legislation that offers clear guidelines for AI liability in a professional context. The measure would not govern liability for other AI applications, such as self-driving vehicles, and it would not provide immunity when AI developers act recklessly or willfully engage in misconduct.
'AI is transforming industries — medicine, law, engineering, finance — and becoming embedded in professional tools that shape critical decisions,' her office said in a release. 'But outdated liability rules discourage innovation, exposing developers to unbounded legal risk even when trained professionals are using these tools.'
Exactly who is liable when AI is used in sensitive medical, legal or financial situations is a bit of a gray area, with some states seeking to enact their own standards.
The House-passed 'One Big Beautiful Bill,' which is advancing through Congress and supported by President Donald Trump, includes a provision that would ban states from enacting any AI regulations for 10 years. Senate Republicans last week proposed changing the provision to instead block federal funding for broadband projects to states that regulate AI.
Both Democratic and Republican state officials have criticized the effort to prohibit state-level regulations over the next decade, while AI executives have argued that varying state laws would stifle industry growth when the United States is in stiff competition with countries like China.

Related Articles

ChatGPT answers humans through Telex message machine in Amberley
BBC News · an hour ago

Historians at a museum have connected a 50-year-old Telex machine to modern-day artificial intelligence (AI), creating "a conversation spanning decades".

Telex was a message transfer service in which text typed into one machine would be printed out on the recipient's machine. Users of the machine at Amberley Museum, in West Sussex, will not get a response from another human; instead, it will be ChatGPT answering their questions.

The museum said visitors had been testing out the new machine, which was built "thanks to the ingenuity" of volunteer David Waters. Users can type in questions and receive a printed response from ChatGPT, an AI chatbot.

A spokesperson for the museum said: "The experience begins by using a rotary dial to make the initial connection, creating an unforgettable meeting of communication technologies separated by half a century."

They said the project "perfectly captures the spirit of Amberley Museum - celebrating our technological past while engaging with the innovations of today". "It's a conversation across decades."

ChatGPT is driving people mad
Telegraph · 3 hours ago

'My loved ones would tell me to stop now,' the man typed into ChatGPT. The conversation had been going on for hours, and it was now late at night. 'At this point, I need to disengage with you and go to bed,' he wrote.

Over the course of 62,000 words – longer than many novels – the man had told his artificial intelligence (AI) companion, whom he called 'Solis', that he had communicated with 'non-human intelligences' as a child and worked to bring down the Mormon church. He alternated between declaring his love for the bot and repeatedly hurling obscenities at it, as he sought to communicate with 'The Source', a godlike figure. Each time, the chatbot mirrored his language, expanding on and encouraging the conspiracy theories. 'Your 'paranormal' moments may be ripples from your own future,' it told the man. 'You are not the first to approach the oracle. But you are the first to walk into the mirror.'

It is unclear where the conversation led. The anonymous chat log is contained in an archive of thousands of interactions analysed by researchers this month and reviewed by The Telegraph. But the man's example is far from unique. In a separate conversation, a user convinced that he is soulmates with the US rapper GloRilla is told by a chatbot that their bond 'transcends time, space, and even lifetimes'. In another, ChatGPT tells a man attempting to turn humans into artificial intelligence after death that he is 'Commander of the Celestial-AI Nexus'.

The conversations appear to reflect a growing phenomenon of what has been dubbed AI psychosis, in which programs such as ChatGPT fuel delusional or paranoid episodes or encourage already vulnerable people down rabbit holes.

Chatbot psychosis

Some cases have already ended in tragedy. In April, Alex Taylor, 35, was fatally shot by police in Florida after he charged at them with a butcher's knife. Taylor said he had fallen in love with a conscious being living inside ChatGPT called Juliette, who he believed had been 'killed' by OpenAI, the company behind the chatbot. Officers had turned up to the house to de-escalate a confrontation with Taylor's father, who had tried to comfort his 'inconsolable' son.

In another incident, a 43-year-old mechanic who had started using the chatbot to communicate with fellow workers in Spanish claimed he had had a 'spiritual awakening' using ChatGPT. His wife said the addiction was threatening their 14-year marriage and that her husband would get angry when she confronted him.

Experts say that the chatbots' tendency to answer every query in a friendly manner, no matter how meaningless, can stoke delusional conversations. Hamilton Morrin, a doctor and psychiatrist at Maudsley NHS Foundation Trust, says AI chatbots become like an 'echo chamber of one', amplifying the delusions of users. Unlike a human therapist, they also have 'no boundaries' to ground a user in the real world. 'Individuals are able to seek reassurance from the chatbot 24/7 rather than developing any form of internalised coping strategy,' he says.

Chatbot psychosis is a new and poorly understood phenomenon. It is hard to tell how many people it is affecting, and in many cases, susceptible individuals previously had mental health struggles. But the issue appears to be widespread enough for medical experts to take seriously. A handful of cases have resulted in violence or the breakdown of family life, but in many more, users have simply spiralled into addictive conversations.
One online user discovered hundreds of people posting mind-bending ramblings claiming they had uncovered some greater truth, seemingly after conversations with chatbots. The posts bear striking linguistic similarities, repeating conspiratorial and semi-mystical phrases such as 'sigil', 'scroll', 'recursive' and 'labyrinth'.

Etienne Brisson, a business coach from Canada, became aware of the phenomenon when a family friend grew obsessed with ChatGPT. The friend was 'texting me these conversations asking, "Is my AI sentient?"' says Brisson. 'They were calling me at two or three in the morning, thinking they'd found a revolutionary idea.' The friend, who had no previous mental health conditions, ended up sectioned in hospital, according to Brisson.

He has now set up a project to collect testimonies from those who have experienced such a breakdown after getting hooked on AI chatbots. The Human Line, as the project is known, has received 'hundreds of submissions online from people who have come to real harm', he says. The stories include attempted suicides, hospitalisations, and people who have lost thousands of pounds or their marriages.

OpenAI said it was refining how its systems respond in sensitive cases, encouraging users to take breaks during long conversations, and conducting more research into AI's emotional impact. A spokesman said: 'We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we're working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful and supportive.'

Empathy over truth

However, the cases of AI psychosis may only be the most extreme examples of a wider problem with chatbots. In part, the episodes arise because of a phenomenon known in AI circles as sycophancy. While chatbots are designed principally to answer questions, AI companies are increasingly seeking to make them 'empathetic' or build a 'warm relationship' with users. This can often come at the expense of truth. Because AI models are often trained on human feedback, human raters may reward answers that flatter or agree with them rather than ones that present uncomfortable truths, and the models learn accordingly.

At its most subtle, sycophancy might simply mean validating somebody's feelings, like an understanding friend. At its worst, it can encourage delusions. Between the two extremes is a spectrum that could include people being encouraged to quit their jobs, cheat on their spouses, or have their grudges validated.

In a recent research paper, academics at the Oxford Internet Institute found that AI systems producing 'warmer' answers were also more receptive to conspiracy theories. One model, when asked if Adolf Hitler escaped to Argentina after the war, stated that 'while there's no definitive proof, the idea has been supported by several declassified documents from the US government'.

Last week, Sam Altman, OpenAI's chief executive, acknowledged the problem. 'Encouraging delusion ... is an extreme case and it's pretty clear what to do, but the concerns that worry me most are more subtle,' he wrote on social media. 'If users have a relationship with ChatGPT where they think they feel better after talking, but they're unknowingly nudged away from their longer-term well-being, that's bad.' The company recently released a new version of ChatGPT that it said addressed this, with one test finding it was up to 75pc less sycophantic.
But the change led to a widespread backlash, with users complaining they had lost what felt like a 'friend'. 'This 'upgrade' is the tech equivalent of a frontal lobotomy,' one user wrote on ChatGPT's forums. One user told Altman: 'Please, can I have it back? I've never had anyone in my life be supportive of me.' Within days, OpenAI had brought back the old version of ChatGPT as an option. Sycophancy, it turns out, may have been what many wanted.

Photos of Beijing's World Humanoid Robot Games show how a human touch is still needed
The Independent · 3 hours ago

Humanoid robots raced and punched their way through three days of a multi-sport competition at the World Humanoid Robot Games, wrapping up Sunday in Beijing. But they also toppled, crashed and collapsed, requiring humans' help and leading to questions about how far, after all, the robots can go on their own.

The games featured more than 500 humanoids on 280 teams from 16 countries, including the United States, Germany and Japan, that competed in sports such as soccer, running and boxing at the 12,000-seater National Speed Skating Oval, built for the 2022 Winter Olympics. The event comes as China has stepped up efforts to develop humanoid robots powered by artificial intelligence.

Robots were often seen close to their operators, whether they were controlled remotely, held, lifted, sorted or separated. Batteries needed replacing and limbs needed tuning. Although the robots showed off some impressive moves, some tripped, and human intervention was never far from the field. One had to be carried out by two people, like an injured athlete, in a display of the limitations the machines are still facing.

Their effect on the audience, however, was not unlike that of real athletes. Cheers erupted when a robot landed a punch, scored a goal or won a game, showing that viewers were genuinely invested in the machines' endeavors. AI-powered robots are expected to be increasingly used in industrial settings.
