Latest news with #DallE
Yahoo
4 days ago
- Entertainment
- Yahoo
Is AI porn the next horizon in self-pleasure — and is it ethical?
The AI revolution is well and truly upon us. As we grapple with the ramifications of generative AI in our professional and personal worlds, it's worth remembering that its impact will be felt in even the most intimate corners of our lives — including our private browsers.

Whether you're aware of it or not, AI is coming for the porn industry. Already, a number of new genres are emerging that make use of generative AI, such as hyper porn, a genre of erotic imagery that stretches the limits of sexuality and human anatomy to hyperbolic new heights (think: a Barbie-esque woman with three giant breasts instead of two). There are also various iterations of 'gone wild' porn, a subdivision of porn in which users attempt to 'trick' safe-for-work image generation models like Dall-E into depicting erotic scenes — and enjoy the work-arounds and euphemisms these tools use to avoid depicting explicit sex.

But it's unlikely AI will wipe out IRL porn performers. AI porn stretches the fantasy innate within the porn and erotic content industries, materialising flawless avatars tailored to an individual's unique desires out of, seemingly, thin air. For some, this will be a turn-on; for others, it will lack the sweat and grit that makes IRL sex so appealing.

'I think there will be a splitting between people jumping head first into unreality and the people who actually want an antidote to it. We're already seeing such a huge fracturing of reality in our everyday lives,' says Vex Ashley, porn performer, director, producer and one half of the creative pornography project Four Chambers.

SEE ALSO: Majority of Gen Z would marry an AI, survey says

Ultimately, she insists, there will be a demographic that still hungers for a semblance of real human interaction. 'We'll absolutely see something like a build-your-own-AI custom pornstar who is also your digital girlfriend but I think — despite what people say — for many, sex is an experience they want to be grounded in some kind of authenticity,' Ashley adds. 'Person to person, there's a reason why you want to talk to your favourite pornstar on OnlyFans. I think we'll see a pushback, a rise of amateur, homemade content and in-person sexual events, experiences — something tactile.'

While the industry is beginning to grapple with generative AI, the consumer point of view is coming into focus and, for some, it could create difficulties — especially for those already struggling with excessive porn use. Sex and relationships therapists tend to be sceptical about 'porn addiction' — it doesn't appear in diagnostic manuals and is instead considered a form of compulsive sexual behaviour — yet a whole porn subculture exists around 'gooning': an extreme evolution of orgasm denial in which individuals, generally cis men, enter a trance-like state after edging for hours, locked into marathon masturbation sessions with the aid of online porn. One gooner, speaking anonymously, shares his view of how AI may affect chronic porn users.

SEE ALSO: What is gooning?

'AI porn kind of offers this new version of gooning. What is extremely sexy is typing in every crass thought that you have and immediately seeing it generated as an image.'
He describes accessing NSFW generative AI models like Uber Realistic Porn Merge, then downloading different LoRAs (Low-Rank Adaptation — a type of add-on that lets you quickly fine-tune an AI model) for particular angles and scenarios within porn, such as 'reverse anal' or 'deep-throat side view'. 'You try them out and you're like, 'Oh, this is super hot…but I want the characters to be holding hands with a priest!''

From there, the hunger for more and more extreme fantasies can quickly escalate. 'It's bizarre, you tend to end up in this cycle of typing in a scenario, waiting five seconds until it comes up and, from there, chasing different scenarios — like having sex in the subway — that you can't do in real life.'

The possibility of AI turning an individual's most niche fantasies into tangible images at the click of a button is hugely compelling but, as my interviewee explains, it's also potentially troubling for people who struggle with compulsive porn use. 'Live generation goon sessions will definitely become more popular,' he says. 'I've seen people in Reddit threads who are like, 'I can't stop gooning over AI porn'. I agree — I tasted it and it was fucking addictive.'

It's worth noting here that anyone who is concerned about their porn consumption (of AI content or otherwise) might want to ask themselves whether they want to stop but can't, or whether there is a pattern of escalation. Ultimately, if you feel like your porn consumption is spiralling, it's worth reaching out to a therapist specialised in compulsive sexual behaviour or a charity such as Relate, which can offer support around so-called 'porn addiction' (psychotherapists say there is no clinical evidence to support the diagnosis of 'porn addiction').

But as well as raising concerns for porn viewers, the rise of AI porn could have a serious knock-on effect for people currently working in the porn industry as actors or performers. After all, the obvious appeal of AI is the ability to see images and short-form video exploring hyper-unrealistic fantasies that aren't just impossible in 'real life' but for humans at all (like realistic vampire porn, or convincing erotic alien abduction scenes). For flesh-and-blood workers in the erotic industries, the increasing availability of AI — and its potential impact on demand for porn featuring real humans — is already giving pause.

'I think it would be naive to say that we won't see huge shifts across all industries; porn and sex have always been right at the forefront of technological advancement,' says Ashley. As with other industries and forms of labour, she explains, there will inevitably be concerns around workers' rights as consumers begin to explore AI-generated imagery. 'We're unfortunately going to see a space long dominated by the labour, skill and ingenuity of women and queer people be flooded with men finally able to achieve the ability to create the image of a person they want to fuck, without needing the person themself,' Ashley explains.

While some porn performers are using AI themselves, such as for sexy chatbots, the lack of employment law protections for workers in the industry means they will be especially vulnerable to shifts in consumer behaviour. 'It's going to be a labour rights issue for sex workers who are already so legislatively unprotected compared to other performers in mainstream media,' says Ashley.
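For readers wondering what the LoRA 'add-ons' described above actually look like in practice, here is a minimal, hypothetical sketch using the Hugging Face diffusers library. The base model ID, file paths and prompt are placeholders, not anything mentioned by the interviewee; it is only meant to show the general mechanism of loading a small adapter onto a base text-to-image model.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base text-to-image model (placeholder model ID, not a real checkpoint).
pipe = StableDiffusionPipeline.from_pretrained(
    "some-org/some-base-model",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# A LoRA is a small set of low-rank weight updates trained for one style or
# scenario; loading it steers the base model without retraining it.
pipe.load_lora_weights("path/to/lora-folder", weight_name="example_lora.safetensors")

# Generate an image with the adapted model.
image = pipe("a neutral example prompt", num_inference_steps=30).images[0]
image.save("output.png")
```

The point of the sketch is simply that swapping in a different LoRA file changes what the same base model produces, which is why the interviewee describes collecting many of them for different scenarios.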
In addition to these labour rights concerns, AI can be used to create non-consensual explicit deepfakes, prompting serious questions around consumer responsibility. For those who are unfamiliar, non-consensual explicit deepfakes typically consist of an individual's face and likeness being superimposed onto a naked body or an erotic scenario without their knowledge, then distributed online. It goes without saying that this type of material is a major violation of an individual's right to autonomy, privacy, and dignity. As a result, the creation of these images is already due to become illegal in England and Wales, with legislation recently signed to crack down on deepfakes in the U.S.

However, as Professor Clare McGlynn, an expert in the legal regulation of pornography, sexual violence, and online abuse, explains, the consumption of these images remains unregulated — meaning that they can be viewed without repercussions. 'Viewing sexually explicit deepfakes is not an offence. It is, though, deeply unethical. Survivors experience this abuse as a violation of their bodily and sexual integrity,' she explains. 'Each viewing is a new act of sexual violence, a breach of their consent. That so many are viewing this material should be deeply worrying, as it suggests a large market for non-consensual material.'

Thankfully, efforts are being made to bring distributors of sexually explicit deepfakes to account — cutting consumers off from the source. This year, in fact, the figure behind one of the world's best-known non-consensual explicit deepfake sites was identified, and the site in question, MrDeepfakes, shut down. However, more should be done to prevent this kind of abuse from happening, rather than taking down the material once it has already been made and distributed, as Madelaine Thomas, an adult content creator and the founder of Image Angel, a software company which creates invisible watermarks to prevent non-consensual image sharing, attests. 'Social media platforms don't have the infrastructure they need to be able to protect the people on those platforms from content that isn't authentic or isn't captured in the correct way,' Thomas explains.

The best-known cases of non-consensual explicit deepfakes involve well-known celebrities, but the scale of the harm is wider than many are aware. The ability to pirate the likeness or body of any individual who has posted photos on the internet has led to an increasing number of victims speaking out about the abuse they have faced over the past few years. In future, it's likely that more and more individuals will sadly be impacted by these crimes, including those in the adult entertainment industry, a demographic who are often victim-blamed when they come forward about instances of sexual abuse.

But are there solutions? In the background, work is definitely underway. For example, Image Angel was founded after Thomas's intimate images were distributed without her consent, leading to a passion to prevent this kind of abuse in future — one that is reflected in her company's mission. 'Image Angel adds an invisible forensic watermark to any content that is received on a platform that has our tech installed. For example, if a content creator is sending out multiple nude or suggestive images, they can make sure that whoever receives them will be traceable if they share them,' she explains.
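Image Angel's actual watermarking technique isn't described in the article and is presumably proprietary. Purely to illustrate the general concept of hiding an invisible, per-recipient identifier inside an image, here is a toy least-significant-bit sketch in Python (Pillow and NumPy assumed; all names are made up). A production forensic watermark would need to survive compression, cropping and screenshots, which this toy version does not.

```python
import numpy as np
from PIL import Image

def embed_id(image_path: str, recipient_id: int, out_path: str) -> None:
    """Hide a 32-bit recipient ID in the least significant bits of the red channel."""
    img = np.array(Image.open(image_path).convert("RGB"))       # writable uint8 copy
    bits = [(recipient_id >> i) & 1 for i in range(32)]          # ID as 32 bits
    flat = img[:, :, 0].flatten()
    flat[:32] = (flat[:32] & 0xFE) | bits                        # overwrite the LSBs
    img[:, :, 0] = flat.reshape(img.shape[:2])
    Image.fromarray(img).save(out_path, format="PNG")            # lossless save keeps the bits

def read_id(image_path: str) -> int:
    """Recover the hidden ID from a watermarked image."""
    flat = np.array(Image.open(image_path).convert("RGB"))[:, :, 0].flatten()
    return sum(int(flat[i] & 1) << i for i in range(32))
```

If each recipient of a private image gets a copy carrying a different ID, a leaked copy can in principle be traced back to whoever shared it, which is the general idea Thomas describes.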
While Thomas is keen to emphasise the damage of non-consensual explicit deepfake abuse, she also stresses that the current AI model for all explicit content is based on the non-consensual extraction of erotic images. 'I work with the Digital Intimacy Coalition, and for years we have been campaigning to get people to understand that generated deepfakes do not solely put one person at the center of the harm,' she explains. 'The customer is none the wiser, but these AI tools are almost like a black hole that we are just seeing the very surface of. There are thousands of people, mostly women, whose images have been fed into these multi-language models. The tools might spit out an image of one person, but that image is comprised of thousands of sex workers' data.'

Ultimately, increased awareness of the potential harms of AI porn may encourage some viewers to take a more conscientious approach. But we can't rely on consumers alone to do the right thing; there needs to be an infrastructure that protects individuals from having their image stolen and misappropriated. For her part, Thomas expects to see more porn and erotic imagery move behind the paywall, with built-in technology to prevent content from being fed into AI models. She also expects that AI-poisoning tools, like Nightshade, may grow in popularity as a way to stop images and videos from being absorbed into AI models.

Whatever the future holds, it will be a rocky journey for the adult entertainment industry and wider society as we grapple with these nefarious uses of new technology, making it all the more important to listen to porn and adult content creators, fight for survivors of image-based sexual abuse, and campaign for further AI regulation.


CNET
12-05-2025
- Business
- CNET
Copyright Office Punts on AI and Fair Use, One of the Biggest Questions Surrounding Gen AI
If you've been hoping for clarity from the US Copyright Office on AI training -- whether AI companies can use copyrighted materials under fair use or creators can claim infringement -- prepare to be disappointed. The Office released its third report on Friday, and it's not the major win tech companies hoped for, nor the full block some creators sought.

The US Copyright Office set out in 2023 to release a series of reports -- guidance for creators -- dealing with the myriad legal and ethical issues that arise from AI-generated content produced by software such as ChatGPT, Gemini, Meta AI and Dall-E. In previous reports, the Copyright Office ruled that entirely AI-generated content can't be copyrighted, while AI-edited content could still be eligible. These reports aren't law, but they give us a picture of how the agency is handling copyright protections in the age of AI.

The third report, available now, isn't the final report; it's a "prepublication" version. Still, there won't be any major changes to the Copyright Office's analysis and conclusions in the final report, according to its website, so it gives us a good sense of the guidance the agency will offer for future claims.

The 108-page report deals primarily with copyright concerns around the training of AI models -- specifically, whether AI companies have legal footing to ask for a fair-use exception, which would let them use copyrighted content without licensing or compensating the copyright holders. In short, the Copyright Office didn't rule out the possibility of a fair-use case for companies using copyrighted material for AI training. But the report spells out in detail a couple of important factors that would count as a metaphorical strike against a fair-use defense, so it's also possible that an AI company's use of copyrighted material without the author's permission could be grounds for a copyright infringement claim. It depends on the AI model, how it's used and what it produces.

"On one end of the spectrum, uses for purposes of noncommercial research or analysis that do not enable portions of the works to be reproduced in the outputs are likely to be fair," the report says. "On the other end, the copying of expressive works from pirate sources in order to generate unrestricted content that competes in the marketplace … is unlikely to qualify as fair use. Many uses, however, will fall somewhere in between."

The Copyright Office, which is part of the Library of Congress, is the subject of current political controversy. CBS News reports that the department's head, Shira Perlmutter, known as the Register of Copyrights, was fired by President Donald Trump this past weekend, a few days after Trump fired the Librarian of Congress, Carla Hayden, on Thursday. Hayden was the first woman and first African American to hold the position.

Here's what the Copyright Office wrote on fair use and what you need to know about why the legal web of AI and copyright continues to grow.

Why does fair use matter?

Tech companies have been pushing hard for a fair-use exception. If they're granted one, it won't matter whether they have and use copyrighted work in their training datasets. The question of potential copyright infringement is at the center of more than 30 lawsuits, including notable ones like The New York Times v. OpenAI and Ortiz v. Stability AI.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The Copyright Office said in its report that these cases should continue making their way through the judicial system: "It is for the courts to weigh the statutory factors together," since there's "no mechanical computation or easy formula" to decide fair use.

Writers, actors and other creators have pushed back equally hard against fair use. In an open letter signed by more than 400 of Hollywood's biggest celebrities, the creators ask the administration's Office of Science and Technology Policy not to allow fair use. They wrote: "[America's] success stems directly from our fundamental respect for IP and copyright that rewards creative risk-taking by talented and hardworking Americans from every state and territory."

For now, it seems we have only a few more answers than before. The big questions around whether specific companies like OpenAI have violated copyright law will have to wait to be adjudicated in court.

Guidance on deciding on fair use

Fair use is part of the 1976 Copyright Act. The provision grants people who are not the original authors the right to use copyrighted works in specific cases, like education, reporting and parody. There are four main factors to consider in a fair-use case: one, the purpose of the use; two, the nature of the work; three, the amount and substantiality used; and four, the effect on the market. The Copyright Office's report analyzes all four factors in the context of AI training.

One important aspect is the transformativeness of the use -- whether AI chatbots and image generators create outputs that are substantially different from the original training content. The report seems to indicate that AI chatbots used for deep research are sufficiently transformative, but that image generators producing outputs too similar in style or aesthetic to existing work might not be. The report says guardrails that prevent the replication of protected works -- like image generators refusing to create popular logos -- would be evidence that AI companies are trying to avoid infringement. This is despite the fact that the office cites research showing those guardrails aren't always effective, as OpenAI's Studio Ghibli image trend clearly demonstrated.

The report argues that AI-generated content clearly affects the market, the fourth factor. It mentions the possibility of lost sales, markets diluted through oversaturation and lost licensing opportunities for existing data markets. However, it also mentions the potential for public benefit from the development of AI products.

Licensing, a popular alternative to suing among publishers and owners of content catalogs, is also highlighted as one possible pathway to avoid copyright concerns. Many publishers, including the Financial Times and Axel Springer brands, have struck multimillion-dollar deals with AI companies, giving AI developers access to their high-quality, human-generated content. There are some concerns that if licensing becomes the sole way to attain this data, it will favor the big tech companies that can afford to pay for it, boxing out smaller developers. The Copyright Office writes that those concerns shouldn't have an effect on fair-use analyses and are best dealt with by antitrust laws and the agencies that enforce them, like the Federal Trade Commission.
Yahoo
27-03-2025
- Entertainment
- Yahoo
Trending in Texoma — Mega Millions Lottery changes to a street named for a prince
WICHITA FALLS (KFDX/KJTL) — Whether it's happening on the other side of the country, somewhere else in Texas, or right in our communities, Digital Producer Mariana Vela takes a look at the stories currently generating the most interest on social media.

Social Rundown: Killer whales, curious otters escape, Kilauea volcano erupts, and Jersey Mike's

ChatGPT new feature
AI-loving users will soon be able to create images directly inside ChatGPT. Right now, image creation relies on the separate DALL-E model, but soon OpenAI will let ChatGPT generate images without it.

Changes coming to the Mega Millions Lottery
Changes to the Mega Millions may make it easier to find its next winner. The overall odds of winning improve from one in 24 to one in 23, the starting jackpot resets at $50 million instead of $20 million, and the minimum ticket price goes up.

Fresh Prince is getting a street name
'In West Philadelphia born and raised…' Will Smith, the Fresh Prince of Bel-Air himself, is getting his own street in Philly called 'Will Smith Way': part of 59th Street will be renamed to just that.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.