
Think Twice Before Creating That ChatGPT Action Figure
May 1, 2025 9:56 AM

People are using ChatGPT's new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.

At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.
All this is possible because of OpenAI's new GPT-4o-powered image generator, which supercharges ChatGPT's ability to edit pictures, render text, and more. OpenAI's ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.
The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.

Hidden Data
The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you're potentially handing over 'an entire bundle of metadata,' says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. 'That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.'
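To see just how much travels along with a photo, a few lines of Python using the Pillow imaging library can dump everything stored in a file's EXIF block. This is an illustrative sketch, not anything OpenAI runs; the function name is our own:

```python
from PIL import Image, ExifTags

def list_exif(path):
    """Print every EXIF tag embedded in an image file."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        # Map numeric tag IDs to human-readable names where known
        name = ExifTags.TAGS.get(tag_id, hex(tag_id))
        print(f"{name}: {value}")
```

Running this on a typical phone photo will usually print fields such as Model, DateTime, and a GPSInfo entry pointing at the coordinates where the shot was taken.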
OpenAI also collects data about the device you're using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. 'And because platforms like ChatGPT operate conversationally, there's also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.'
It's not just your face. If you upload a high-resolution photo, you're giving OpenAI whatever else is in the image, too—the background, other people, things in your room, and anything readable such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.
This type of voluntarily provided, consent-backed data is 'a goldmine for training generative models,' especially multimodal ones that rely on visual inputs, says Vazdar.
OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn't need to scrape the web for your face if you're happily uploading it yourself, Vazdar points out. 'This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.'
OpenAI says it does not actively seek out personal information to train models—and it doesn't use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI's current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.
Any data, prompts, or requests you share helps teach the algorithm—and personalized information helps fine-tune it further, says Jake Moore, global cybersecurity advisor at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.

Uncanny Likeness
In some markets, your photos are protected by regulation. In the UK and EU, data protection regulations, including the GDPR, offer strong protections, including the right to access or delete your data. At the same time, use of biometric data requires explicit consent.
However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is 'unlikely to meet this definition,' she says.
Meanwhile, in the US, privacy protections vary. 'California and Illinois are leading with stronger data protection laws but there is no standard position across all US states,' says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI's privacy policy doesn't contain an explicit carveout for likeness or biometric data, which 'creates a grey area for stylized facial uploads,' Checchi says.
The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. 'While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.'
OpenAI says its users' privacy and security are a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.
Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.
ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.

Trending Topics
The next time you are tempted to jump on a ChatGPT-led trend such as the action figure or Studio Ghibli-style images, it's wise to consider the privacy trade-off. The risks apply to ChatGPT as well as many other AI image editing or generation tools, so it's important to read the privacy policy before uploading your photos.
There are also steps you can take to protect your data. In ChatGPT, the most effective is to turn off chat history, which helps ensure your data is not used for training, says Vazdar. You can also upload anonymized or modified images, for example, using a filter or generating a digital avatar rather than an actual photo, he says.
It's worth stripping out metadata from image files before uploading, which is possible using photo editing tools. 'Users should avoid prompts that include sensitive personal information and refrain from uploading group photos or anything with identifiable background features,' says Vazdar.
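The stripping step Vazdar describes can also be scripted rather than done in a photo editor. One minimal sketch, again using the Pillow library with placeholder file names, rebuilds the image from its pixel data alone so EXIF fields such as timestamps and GPS coordinates never make it into the copy you upload:

```python
from PIL import Image

def strip_metadata(src, dst):
    """Save a copy of an image containing only pixel data, no EXIF/GPS."""
    img = Image.open(src)
    # A freshly created image object starts with no metadata attached,
    # so copying just the pixels leaves the EXIF block behind
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)
```

Note that re-encoding this way also drops benign fields such as color profiles, so check the result if color accuracy matters to you.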
Double-check your OpenAI account settings, especially those related to data use for training, Hall adds. 'Be mindful of whether any third-party tools are involved, and never upload someone else's photo without their consent. OpenAI's terms make it clear that you're responsible for what you upload, so awareness is key.'
Checchi recommends disabling model training in OpenAI's settings, avoiding location-tagged prompts, and steering clear of linking content to social profiles. 'Privacy and creativity aren't mutually exclusive—you just need to be a bit more intentional.'
