
New AI Chibi figure trend is taking over social media - here's how you can transform into a pocket-sized toy
Unless you've been living under a rock, you've probably spotted the AI action figure trend taking over your social media feeds in recent weeks.
The trend saw users transform themselves into collectible toys, complete with a nameplate and matching accessories.
Now, a new AI trend is emerging, dubbed the AI Chibi figure trend.
First spotted by TechRadar, the trend sees users turn themselves into pocket-sized figures in the 'Chibi' style.
Chibi is a style of caricature originating in Japan, in which characters are drawn with exaggerated proportions, typically a small body and an oversized head.
While it might look technically advanced, it's actually fairly simple.
All you'll need is a full-length photo and access to ChatGPT.
Here's how you can jump on board the AI Chibi trend and turn yourself into a pocket-sized toy.
How to try the Chibi figure trend
1. Open ChatGPT
To begin, simply visit ChatGPT.com in your browser or use the ChatGPT mobile app.
2. Upload a photo
Choose a high-resolution photo of yourself.
If you want your Chibi figure to capture your full outfit and accessories, it's best to opt for a full-body image.
3. Write a clear prompt
To create your figure, make sure you give ChatGPT a clear description of what you want, including outfit, pose, and hairstyle.
For example, TechRadar suggests the following prompt (and if you'd rather script the process, there's a sketch after these steps):
Generate a portrait-oriented image of a realistic, full-glass gashapon capsule being held between two fingers.
Inside the capsule is a Chibi-style, full-figure miniature version of the person in the uploaded photo.
The Chibi figure should:
- Closely resemble the person in the photo (face, hairstyle, etc.)
- Wear the same outfit as seen in the uploaded photo
- Be in a pose inspired by the chosen theme
4. Review and adjust your image
If it doesn't appear immediately, don't panic: the image can take several minutes to generate.
Once it has been generated, take a look and see if you'd like any changes.
Happy with your Chibi figure? Click the arrow in the bottom right to download it and share it across social media!
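Prefer to script it? The same flow can also be run against OpenAI's API instead of the chat interface. Below is a minimal Python sketch, assuming the openai package, an OPENAI_API_KEY environment variable, and access to the gpt-image-1 image model; the photo filename is a hypothetical placeholder for your own full-length shot.

import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The TechRadar-style prompt from step 3, condensed into one string
prompt = (
    "Generate a portrait-oriented image of a realistic, full-glass "
    "gashapon capsule being held between two fingers. Inside the capsule "
    "is a Chibi-style, full-figure miniature version of the person in "
    "the uploaded photo: closely resembling their face and hairstyle, "
    "wearing the same outfit, and posed to match a chosen theme."
)

# images.edit takes a reference photo plus a text prompt
result = client.images.edit(
    model="gpt-image-1",
    image=open("me-full-length.jpg", "rb"),  # hypothetical filename
    prompt=prompt,
)

# gpt-image-1 returns the image as base64; decode and save it
with open("chibi-figure.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))

Tweak the prompt string and re-run to adjust the result, just as you would ask ChatGPT for changes in step 4.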
