
ChatGPT's new image generator blurs copyright lines
AI image generators aren't new, but the one OpenAI handed to ChatGPT's legions of users this week is more powerful and has fewer guardrails than its predecessors — opening up a range of uses that are both tantalizing and terrifying.
Why it matters: Though worries about deepfakes, job replacement and intellectual property theft have been voiced for years, this new tool makes these risks a present — rather than a future — concern.
Catch up quick: OpenAI released its new tool on Tuesday, saying it would be available to Plus, Pro and Team subscribers, and even to those using the free version of ChatGPT. On Wednesday, the social media world discovered the tool in earnest and began turning historical photos and their own photo libraries into Studio Ghibli stills.
By Wednesday evening, Sam Altman said OpenAI was delaying the launch for free users due to heavy demand, and on Thursday the company said it would have to impose rate limits even on paid users because "our GPUs are melting."
The big picture: Over the past few days, I've used ChatGPT's image generator to handle a range of image transformations as well as projects from scratch.
Like many, I found it hard to stop stylizing old pictures with my favorite genres. For me it was less about Ghibli and more about my passions.
I turned myself, my friends and my family into Muppets and Lego minifigures, and also experimented with the styles of other artists, including Keith Haring, Van Gogh and LeRoy Neiman. This use was so compelling that I had trouble moving on.
ChatGPT's tool, though, goes far beyond letting you transform any image into a different style. The tool is good enough for a range of tasks that in the past would have required professional artists, graphic designers and other creatives.
It can render text clearly within images, long a stumbling block for AI image generators.
Zoom out: Elon Musk's Grok also has a wide range of capabilities and few guardrails, while Adobe's Firefly sits at the other end of the spectrum, using only training data it has licensed in an effort to be "commercially safe" for business use.
Zoom in: In addition to the many, MANY Simpsons and Muppets images I created, I also turned my cat, Raven, into a Pokémon card, complete with the powers I asked for: "sleep" and "whine for food."
I created a "save the date" flyer for my son's bar mitzvah, having the relevant information appear on a giant scoreboard on top of a realistic basketball arena.
I gave ChatGPT a slide with the dates and locations for this year's AI Summits and asked it to create a movie poster touting the three events. It did so in about the time it took me to brush my teeth.
Between the lines: ChatGPT did refuse some tasks. It wouldn't put my mother-in-law on a fake cover of "Car & Driver" declaring her the oldest woman to win the Indy 500 "because it involves portraying a real person in a fictional or exaggerated context." It wouldn't create a basketball card for my son with his name because it has a policy against combining real names with photos, though it offered to do a card using a nickname instead.
Just once, I got an error when asking to Simpsons-ify a photo, with ChatGPT saying it "couldn't generate an image in the style of The Simpsons because that involves a copyrighted and trademarked property." The rest of the time it happily turned people into residents of Springfield.
It also refused a request to create a Simpsons-style cartoon image that features a bunch of cartoonists on the street with signs that say "Will read for food" and "AI stole my job." But that time it had nothing to do with copyright issues. ChatGPT said it couldn't do that because it "could be interpreted as targeting or mocking individuals or groups affected by real-world job loss or technology shift."
Interestingly, the rejections typically came not when I typed the prompt, but after ChatGPT created most of the image, giving me a good look at what my request would have generated had it not decided to enforce a rule.
That suggests that a significant amount of its safety checks come at the end of the process.
What they're saying: "Our goal is to give users as much creative freedom as possible," OpenAI said in a statement about limits on using an artist's style.
"We continue to prevent generations in the style of individual living artists, but we do permit broader studio styles—which fans have used to generate and share some truly delightful and inspired fan creations."
The statements reflect a broader shift toward focusing on specific harms rather than on broad content categories, as OpenAI's model behavior lead Joanne Jang outlined in a blog post.
"We're always learning from real-world use and feedback, and we'll keep refining our policies as we go," OpenAI said.
Yes, but: As much as I enjoyed using the tool — and I was up late several nights doing so — a nagging voice inside me warned that this "free" tool comes at a huge cost.
I feel like I burned an acre of rainforest on my own and can only imagine how much energy has been used by people turning everything into Ghibli.
I long imagined brands, studios and creators would be able to use AI to allow fans to join their universe.
But now OpenAI is doing it on its own, with no money going to creators.