
Latest news with #CenterforDemocracyandTechnology

Google released a safety report for Gemini 2.5 Pro weeks after the model's release — but an AI governance expert said it was a 'meager' and 'worrisome' document

Yahoo

17-04-2025

Google has released a key document detailing some information about how its latest AI model, Gemini 2.5 Pro, was built and tested, three weeks after it first made that model publicly available as a 'preview' version. AI governance experts had criticized the company for releasing the model without publishing documentation detailing the safety evaluations it had carried out and any risks the model might present, in apparent violation of promises it had made to the U.S. government and at multiple international AI safety gatherings.

A Google spokesperson said in an emailed statement that any suggestion that the company had reneged on its commitments was 'inaccurate.' The company also said that a more detailed 'technical report' will come later, when it makes a final version of the Gemini 2.5 Pro 'model family' fully available to the public. But the newly published six-page model card has also been faulted by at least one AI governance expert for providing 'meager' information about the safety evaluations of the model.

Kevin Bankston, a senior advisor on AI governance at the Center for Democracy and Technology, a Washington, D.C.-based think tank, said in a lengthy thread on the social media platform X that the late release of the model card and its lack of detail was worrisome. 'This meager documentation for Google's top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,' he said.

He said the late release of the model card and its lack of key safety evaluation results—for instance, details of 'red-teaming' tests to trick the AI model into serving up dangerous outputs like bioweapon instructions—suggested that Google 'hadn't finished its safety testing before releasing its most powerful model' and that 'it still hasn't completed that testing even now.' Bankston said another possibility is that Google had finished its safety testing but has a new policy of not releasing its evaluation results until a model is released to all Google users.

Currently, Google is calling Gemini 2.5 Pro a 'preview,' which can be accessed through its Google AI Studio and Google Labs products, with some limitations on what users can do with it. Google has also said it is making the model widely available to U.S. college students.

The Google spokesperson said the company would release a more complete AI safety report 'once per model family.' Bankston said on X that this might mean Google would no longer release separate evaluation results for the fine-tuned versions of its models that it releases, such as those tailored for coding or cybersecurity. This could be dangerous, he noted, because fine-tuned versions of AI models can exhibit behaviors that are markedly different from the 'base model' from which they've been adapted.

Google is not the only AI company seemingly retreating on AI safety. Meta's model card for its newly released Llama 4 AI model is of similar length and detail to the one Google just published for Gemini 2.5 Pro, and it was also criticized by AI safety experts. OpenAI said it was not releasing a technical safety report for its newly released GPT-4.1 model because the model was 'not a frontier model,' since the company's 'chain of thought' reasoning models, such as o3 and o4-mini, beat it on many benchmarks.

At the same time, OpenAI touted that GPT-4.1 was more capable than its GPT-4o model, whose safety evaluation had shown that the model could pose certain risks, although OpenAI had said these were below the threshold at which the model would be considered unsafe to release. Whether GPT-4.1 might now exceed those thresholds is unknown, since OpenAI said it does not plan to publish a technical report.

OpenAI did publish a technical safety report for its new o3 and o4-mini models, which were released on Wednesday. But earlier this week it updated its 'Preparedness Framework,' which describes how the company will evaluate its AI models for critical dangers—everything from helping someone build a biological weapon to the possibility that a model will begin to self-improve and escape human control—and seek to mitigate those risks. The update eliminated 'Persuasion'—a model's ability to manipulate a person into taking a harmful action or convince them to believe misinformation—as a risk category that the company would assess during its pre-release evaluations. It also changed how the company would make decisions about releasing higher-risk models, including saying the company would consider shipping an AI model that posed a 'critical risk' if a competitor had already debuted a similar model.

Those changes divided opinion among AI governance experts, with some praising OpenAI for being transparent about its process and for providing better clarity around its release policies, while others were alarmed at the changes. This story was originally featured on

23andMe bankruptcy filing sparks privacy fears as DNA data of millions goes up for sale

Yahoo

25-03-2025

With genetic testing company 23andMe filing for Chapter 11 bankruptcy protection and courting bidders, the DNA data of millions of users is up for sale. A Silicon Valley stalwart since 2006, 23andMe has steadily amassed a database of people's fundamental genetic information under the promise of helping them understand their predisposition to diseases and potentially connecting them with relatives. But the company's bankruptcy filing Sunday means that information is set to be sold, causing widespread alarm among privacy experts and advocates.

'Folks have absolutely no say in where their data is going to go,' said Tazin Kahn, CEO of the nonprofit Cyber Collective, which advocates for privacy rights and cybersecurity for marginalized people. 'How can we be so sure that the downstream impact of whoever purchases this data will not be catastrophic?' she said.

California Attorney General Rob Bonta warned people in a statement Friday that their data could be sold. In the statement, Bonta offered users instructions on how to delete their genetic data from 23andMe, how to instruct the company to delete their test samples and how to revoke permission for their data to be used in third-party research studies.

DNA data is extraordinarily sensitive. Its primary use at 23andMe — mapping out a person's potential genetic predispositions — yields information that many people would prefer to keep private. In some criminal cases, genetic testing data has been subpoenaed by police and used to help criminal investigations against people's relatives. Security experts caution that if a bad actor gains access to a person's biometric data, such as DNA information, there is no real remedy: Unlike passwords or even addresses or Social Security numbers, people cannot change their DNA.

A spokesperson for 23andMe said in an emailed statement that there will be no change to how the company stores customers' data and that it plans to follow all relevant U.S. laws. But Andrew Crawford, an attorney at the nonprofit Center for Democracy and Technology, said genetic data lawfully acquired and held by a tech company is subject to almost no federal regulation to begin with. Not only does the United States lack a meaningful general digital privacy law, he said, but Americans' medical data faces less legal scrutiny if it's held by a tech company rather than by a medical professional. The Health Insurance Portability and Accountability Act (HIPAA), which regulates some ways in which health data can be shared and stored in the United States, largely applies only 'when that data is held by your doctor, your insurance company, folks kind of associated with the provision of health care,' Crawford said. 'HIPAA protections don't typically attach to entities that have IoT [internet of things] devices like fitness trackers and in many cases the genetic testing companies like 23andMe,' he said.

There is precedent for 23andMe losing control of users' data. In 2023, a hacker gained access to the data of what the company later admitted were around 6.9 million people, almost half of its user base at the time. That led to posts on a dark web hacker forum, confirmed by NBC News as at least partially authentic, that shared a database naming and identifying people with Ashkenazi Jewish heritage. The company subsequently said in a statement that protecting users' data remained 'a top priority' and vowed to continue investing in protecting its systems and data.

Emily Tucker, the executive director of Georgetown Law's Center on Privacy & Technology, said the sale of 23andMe should be a wake-up call for Americans about how easily their personal information can be bought and sold without their input. 'People must understand that, when they give their DNA to a corporation, they are putting their genetic privacy at the mercy of that company's internal data policies and practices, which the company can change at any time,' Tucker said in an emailed statement. 'This involves significant risks not only for the individual who submits their DNA, but for everyone to whom they are biologically related,' she said.

This article was originally published on

Legal fight against AI-generated child pornography is complicated – a legal scholar explains why, and how the law could catch up

Yahoo

11-02-2025

The city of Lancaster, Pennsylvania, was shaken by revelations in December 2023 that two local teenage boys shared hundreds of nude images of girls in their community over a private chat on the social chat platform Discord. Witnesses said the photos easily could have been mistaken for real ones, but they were fake. The boys had used an artificial intelligence tool to superimpose real photos of girls' faces onto sexually explicit images.

With troves of real photos available on social media platforms, and AI tools becoming more accessible across the web, similar incidents have played out across the country, from California to Texas and Wisconsin. A recent survey by the Center for Democracy and Technology, a Washington, D.C.-based nonprofit, found that 15% of students and 11% of teachers knew of at least one deepfake that depicted someone associated with their school in a sexually explicit or intimate manner.

The Supreme Court has implicitly concluded that computer-generated pornographic images that are based on images of real children are illegal. The use of generative AI technologies to make deepfake pornographic images of minors almost certainly falls under the scope of that ruling. As a legal scholar who studies the intersection of constitutional law and emerging technologies, I see an emerging challenge to the status quo: AI-generated images that are fully fake but indistinguishable from real photos.

While the internet's architecture has always made it difficult to control what is shared online, there are a few kinds of content that most regulatory authorities across the globe agree should be censored. Child pornography is at the top of that list. For decades, law enforcement agencies have worked with major tech companies to identify and remove this kind of material from the web, and to prosecute those who create or circulate it. But the advent of generative artificial intelligence and easy-to-access tools like the ones used in the Pennsylvania case present a vexing new challenge for such efforts.

In the legal field, child pornography is generally referred to as child sexual abuse material, or CSAM, because the term better reflects the abuse that is depicted in the images and videos and the resulting trauma to the children involved. In 1982, the Supreme Court ruled that child pornography is not protected under the First Amendment because safeguarding the physical and psychological well-being of a minor is a compelling government interest that justifies laws that prohibit child sexual abuse material. That case, New York v. Ferber, effectively allowed the federal government and all 50 states to criminalize traditional child sexual abuse material.

But a subsequent case, Ashcroft v. Free Speech Coalition from 2002, might complicate efforts to criminalize AI-generated child sexual abuse material. In that case, the court struck down a law that prohibited computer-generated child pornography, effectively rendering it legal. The government's interest in protecting the physical and psychological well-being of children, the court found, was not implicated when such obscene material is computer generated. 'Virtual child pornography is not 'intrinsically related' to the sexual abuse of children,' the court wrote.

According to the child advocacy organization Enough Abuse, 37 states have criminalized AI-generated or AI-modified CSAM, either by amending existing child sexual abuse material laws or enacting new ones.
More than half of those 37 states enacted new laws or amended their existing ones within the past year. California, for example, enacted Assembly Bill 1831 on Sept. 29, 2024, which amended its penal code to prohibit the creation, sale, possession and distribution of any 'digitally altered or artificial-intelligence-generated matter' that depicts a person under 18 engaging in or simulating sexual conduct.

While some of these state laws target the use of photos of real people to generate these deepfakes, others go further, defining child sexual abuse material as 'any image of a person who appears to be a minor under 18 involved in sexual activity,' according to Enough Abuse. Laws like these that encompass images produced without depictions of real minors might run counter to the Supreme Court's Ashcroft v. Free Speech Coalition ruling.

Perhaps the most important part of the Ashcroft decision for emerging issues around AI-generated child sexual abuse material was the part of the statute that the Supreme Court did not strike down. That provision of the law prohibited 'more common and lower tech means of creating virtual (child sexual abuse material), known as computer morphing,' which involves taking pictures of real minors and morphing them into sexually explicit depictions. The court's decision stated that these digitally altered sexually explicit depictions of minors 'implicate the interests of real children and are in that sense closer to the images in Ferber.' The decision referenced the 1982 case, New York v. Ferber, in which the Supreme Court upheld a New York criminal statute that prohibited persons from knowingly promoting sexual performances by children under the age of 16.

The court's decisions in Ferber and Ashcroft could be used to argue that any AI-generated sexually explicit image of real minors should not be protected as free speech, given the psychological harms inflicted on the real minors. But that argument has yet to be made before the court.

The court's ruling in Ashcroft may permit AI-generated sexually explicit images of fake minors. But Justice Clarence Thomas, who concurred in Ashcroft, cautioned that 'if technological advances thwart prosecution of 'unlawful speech,' the Government may well have a compelling interest in barring or otherwise regulating some narrow category of 'lawful speech' in order to enforce effectively laws against pornography made through the abuse of real children.'

With the recent significant advances in AI, it can be difficult if not impossible for law enforcement officials to distinguish between images of real and fake children. It's possible that we've reached the point where computer-generated child sexual abuse material will need to be banned so that federal and state governments can effectively enforce laws aimed at protecting real children – the point that Thomas warned about over 20 years ago. If so, easy access to generative AI tools is likely to force the courts to grapple with the issue.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Wayne Unger, Quinnipiac University

Read more:

  • Watermarking ChatGPT, DALL-E and other generative AIs could help protect against fraud and misinformation
  • Generative AI could leave users holding the bag for copyright violations
  • Could Apple's child safety feature backfire? New research shows warnings can increase risky sharing

Wayne Unger does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
