Calling all fashion models … now AI is coming for you


The Guardian, 30 March 2025

The impact of AI has been felt across industries from Hollywood to publishing – and now it's come for modelling. H&M announced last week that it would create AI 'twins' of 30 models, with the intention of using them in social media posts and marketing imagery, provided each model gives her permission.
In a statement, Jörgen Andersson, the chief creative officer at H&M, described the idea as 'something that will enhance our creative process and how we work with marketing but fundamentally not change our human-centric approach in any way'.
The retail giant has worked with successful models including Vilma Sjöberg and Mathilda Gvarliani, who model for Vogue and brands including Chanel. As part of the agreement, each model would be able to book her twin for shoots with other brands – meaning she could, in image at least, be in two places at the same time. Speaking to The Business of Fashion, Gvarliani described her replica as 'like me, without the jet-lag'. Images of AI and human, side by side, look scarily lifelike.
The news has been greeted with dismay by the wider industry, which fears this could be the start of a shift. It mirrors the concerns of Hollywood workers who went on strike in 2023 over the use of AI in film and TV. This is not the first time a major fashion company has explored AI models – Levis and Hugo Boss have also experimented with the technology.
Bectu, the union that represents the creative industries, explained the concerns. 'Even if models are compensated for the use of their image, it is hard to see how using technology will not have a significant detriment to other fashion creatives and industry workers, from make-up and hair, to rigging and lights,' said the head of the union, Philippa Childs. A survey found that 54% of Bectu workers believed AI would have a negative impact on the fashion industry.
Those advocating for models have also raised concerns. Sara Ziff, the founder of Model Alliance, says: 'There are a lot of open questions, and one of them is about compensation. What does fair compensation for a digital twin look like?' H&M has said each model would be paid when their twin was used, with the fee negotiated with their agency.
In 2020, the model and founder of technology education company WAYE, Sinead Bovell, wrote an article in Vogue titled 'I Am a Model and I Know That Artificial Intelligence Will Eventually Take My Job'. She raises the issue of the lack of regulation. 'If a model gives their consent to use their likeness somewhere, it can enhance the actual AI model, the data centre that the company uses, which could accelerate automation,' she says. 'There's all these nuanced ways that models may get even more exploited.'
Guard rails are being built. The Model Alliance's Fashion Workers' Act comes into law in June – a piece of legislation that requires consent from models for AI usage when they work with agencies based in New York, the state where the law applies. 'It couldn't be more timely,' says Ziff. 'With H&M's announcement, it gives a lot of other industry players licence to move forward in a way that could be very reckless and damaging.'
In addition to this, the AI Act will be introduced in the EU in 2026, requiring AI images to be labelled as such (H&M said it would use a watermark on images featuring AI).
Ziff is clear that she is not 'anti-tech' and there are big players in fashion who have embraced advances in technology.
The former Harper's Bazaar editor Lucy Yeomans founded Drest in 2019, a game that allows players to dress up avatars in designer brands. However, the use of AI is limited. 'It would be lovely if AI could create all the looks,' says Yeomans, 'but if JW Anderson decides he is going to put a belt halfway down the skirt, AI says: "Belts don't go halfway down skirts".'
High-profile models such as Sjöberg and Gvarliani will likely be compensated well – casting agent Chloe Rosolek estimates they would be paid 'millions' – but AI is likely to directly threaten models who more typically feature in e-commerce shoots that showcase products on brands' websites. 'Most models have had to deal with job loss already and this is a whole other frightening thing for them,' says Rosolek.
Ingo Nolden, who is Gvarliani's agent in Germany, agrees: 'There is the erosion of human work especially at the entry level,' he says. He had negotiated a deal for an AI version of a model he works with in 2023, only to back out 'because I felt it was giving the licence out to a third party I have no control of'.
Lalaland AI creates AI models, and has clients including Zalando and Levis. Michael Musandu, the founder, says the technology allows consumers to see a more diverse 'casting' when they look at a website. 'As a person of colour, I never got to see models that represented this diversity or inclusion perspective,' he says.
He argues the number of shoots that big brands produce means it would be difficult to achieve this diversity using real models.
AI is not always a tool for inclusivity, of course – in 2023, Shereen Wu, a Taiwanese-American model, posted on social media that her image had been changed so she looked white.
While Musandu says criticism of AI models is 'warranted', he downplays its impact. 'Brands are going to continue to use real models,' he says. 'This is supplementing … there is a cost-saving element, you can produce this content at a much more affordable price.'
Rosolek describes H&M's AI shoots as 'another capitalistic approach to cut people, to cut their costs and increase their profits'. Profit may be on the company's mind. In financial results for the first quarter of 2025, it missed expected growth, with Reuters describing its start to the year as 'sluggish'.
An H&M spokesperson responded to concerns with the following statement: 'While we understand that this will spark opinions, discussions, and uncertainty, we are humble in acknowledging that we do not hold all the answers at this point.
'We will continue to explore with other creatives within the industry to see what generative AI can bring to any creative process, while ensuring we maintain our human-centric approach.'
Rosolek hopes the development 'is a massive flop' with consumers. 'Models make an image incredible and I don't think that can be replicated through AI,' she says.
Yeomans agrees. 'If you think of someone like [photographer] Steven Meisel, he was always discovering that next model, and everyone would ask, 'Oh my gosh, is that beautiful?' 'Oh yes, it is beautiful',' she says. 'I'm not sure whether AI will be able to look around the corner and predict what might be next.'

Related Articles


Unlock the Secret to Lifelike AI Voices with Eleven v3

Geeky Gadgets

What if your AI-generated voice could do more than just sound human? Imagine it expressing joy, urgency, or even quiet reflection with the same nuance as a skilled actor. With ElevenLabs' Eleven v3, this is the new standard. The platform offers emotional depth and tonal precision that were once exclusive to professional voice artists. Whether you're crafting an audiobook, designing a virtual assistant, or creating immersive storytelling experiences, Eleven v3 gives you the tools to make your content truly resonate. But unlocking its full potential requires more than pressing 'generate': it demands thoughtful prompt engineering, a skill that, once mastered, can transform your creative projects. This guide covers strategic prompt design, tone modulation and iterative refinement, along with the platform's standout features, such as dynamic emotional shifts and customizable voice styles.

What Sets Eleven v3 Apart?

Eleven v3 stands out for its ability to produce high-quality, expressive AI voices that go beyond traditional text-to-speech systems. Unlike earlier iterations, this version emphasizes naturalness and emotional nuance, making it suitable for a wide range of applications, including audiobooks, virtual assistants and interactive storytelling. Key features include:

- Advanced expressive voice synthesis: nuanced and lifelike vocal delivery.
- Control over vocal tone, emotion and style: tailor outputs to specific contexts and audiences.
- Dynamic emotional shifts: seamless transitions between emotions within a single output.

These features allow you to create voices that resonate with listeners, whether your goal is to inform, entertain or persuade.

Mastering Prompt Engineering for Precision

Prompt engineering is the cornerstone of effective AI voice generation: crafting detailed instructions that guide the AI toward the desired output. With Eleven v3, you can specify parameters such as tone, emotion and style to achieve highly tailored results. A simple prompt like 'read this with excitement' provides general guidance; a more detailed instruction, such as 'deliver this with a cheerful, upbeat tone suitable for a children's story', yields a more specific and refined result. Experimentation is essential: testing different phrasing and levels of detail shows how the AI interprets your instructions, so over time you can refine your prompts for consistently accurate and engaging outputs.

Fine-Tuning Vocal Tone and Emotional Delivery

One of Eleven v3's most powerful features is its ability to modulate vocal tone and emotion, aligning the voice with the context of your content. Consider these examples:

- Neutral and professional tone: ideal for customer service chatbots or corporate training materials.
- Warm and empathetic delivery: perfect for storytelling or therapeutic content.

To guide the AI effectively, include emotional cues in your prompts, for instance 'speak with a sense of urgency' for time-sensitive messages, or 'use a calm and reflective tone' for meditative or introspective content. Eleven v3 also supports dynamic emotional shifts within a single output, enabling narrations that transition from excitement to solemnity.

Customizing Voice Styles for Specific Applications

Voice style customization allows you to create unique voice profiles tailored to specific use cases, invaluable for developers, content creators and businesses seeking a distinct auditory identity. Examples include:

- Professional and authoritative: suitable for corporate presentations or e-learning modules.
- Playful and animated: ideal for video game characters or children's content.

To create a custom voice style, define the key characteristics you want, such as pitch, tempo or tone; use the platform's customization tools to adjust these parameters; then test and refine multiple versions until the voice aligns with your objectives.

Iterative Refinement: Achieving Perfection

Perfecting your voice outputs often requires multiple iterations. If the voice sounds overly formal, adjust the prompt to include terms like 'relaxed' or 'conversational'; if the emotional delivery feels misaligned, tweak the instructions to emphasize the desired sentiment. This iterative approach not only enhances the quality of your outputs but also deepens your understanding of how the AI interprets prompts, reducing the need for extensive revisions.

Tools and Features for Enhanced Customization

Eleven v3 provides a robust suite of tools to simplify the customization process:

- Pre-built voice templates: quickly generate voices with predefined settings, ideal for common use cases.
- Real-time preview: listen to how your prompts sound before finalizing them.
- Advanced editing options: make fine adjustments to the voice after generation for greater control over the final output.
- Integration capabilities: connect Eleven v3 with external applications for automated workflows or interactive experiences.

Unlocking the Full Potential of Eleven v3

Mastering Eleven v3 requires a combination of strategic prompt engineering, vocal tone control and iterative refinement. Whether you're developing a virtual assistant, producing an audiobook or designing a character for a video game, Eleven v3 enables you to bring your vision to life with precision and creativity.

Media Credit: ElevenLabs
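As a rough illustration of the prompt-engineering idea above, the sketch below assembles a text-to-speech request in Python. The endpoint shape follows ElevenLabs' public REST API (POST to /v1/text-to-speech/{voice_id} with an xi-api-key header), but the "eleven_v3" model string, the placeholder voice ID, and the build_tts_request helper are illustrative assumptions, not verified values; treat this as a sketch rather than a definitive integration.

```python
import json

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(text: str, voice_id: str, tone_hint: str,
                      model_id: str = "eleven_v3") -> dict:
    """Assemble a text-to-speech request, prepending the tone cue to
    the script in the spirit of the guide's detailed prompts
    ('deliver this with a cheerful, upbeat tone...')."""
    prompt = f"[{tone_hint}] {text}"  # tone cue embedded in the text
    return {
        "url": f"{API_BASE}/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": "YOUR_API_KEY",   # placeholder key
                    "Content-Type": "application/json"},
        "body": {"text": prompt, "model_id": model_id},
    }

req = build_tts_request(
    "Welcome back, let's pick up where we left off.",
    voice_id="VOICE_ID",  # placeholder, not a real voice ID
    tone_hint="warm, conversational, unhurried",
)
print(json.dumps(req["body"], indent=2))
# To synthesize audio you would POST req["body"] to req["url"] with
# req["headers"] (e.g. via requests.post) and save the returned bytes.
```

Iterative refinement then becomes a loop over tone_hint values: regenerate with 'relaxed' or 'conversational' variants and compare the results.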

Cadbury Twirl White Dipped compared to noughties Flake Snow

South Wales Argus

The new Twirl White Dipped went on sale earlier this month, and shoppers have been looking everywhere to find it in the UK. Morrisons, Co-op, Home Bargains and B&M are selling the popular bar – but only in select shops for a limited time. When shoppers did get their hands on it, they were quick to take to social media to praise its taste and its similarities to a discontinued favourite from the late noughties.

Is a Twirl just a Flake dipped in chocolate?

It has long been debated whether Twirls are similar to Cadbury's famous Flake chocolate bar – but now the comparison has heightened for a different reason: the Twirl White Dipped was being compared to Cadbury's Flake Snow even before it hit the shelves. Flake Snow was first launched in the UK back in 2000, with a crumbly white chocolate flake inside, covered in milk chocolate. But it lasted only eight years in the UK, with Cadbury revealing in 2008 that the chocolate bar had been discontinued.

Cadbury Twirl White Dipped – shoppers react online

Taking to Facebook foodie groups, shoppers were quick to share where they had picked up a Twirl, at what price, and whether it had similarities with Flake Snow. 'Limited edition Cadbury twirl white 85p spotted at B&M,' one person wrote. 'Does it taste like a Flake Snow? Yes it does,' another said. One person commented: 'It's like a Flake Snow in reverse!' Another added: 'Wow, these are outstanding - closest bar you'll ever get to a Cadbury Snowflake!' 'Not bad but definitely not as good as Flake Snow was,' another wrote. Others praised the chocolate's taste: 'These were amazing!', 'They are divine' and 'Found mine, I love them – so delicious.'
Before its release, Twirl brand manager at Mondelez International, Katya Savelieva, said: "With smooth white chocolate surrounding our iconic milk chocolate swirls, the new Twirl White Dipped is an indulgence you won't want to miss - grab it as soon as you can and experience Twirl like never before!"

Tennis analysis tool brings live stats to Wimbledon fans

Times

Those rows of swivelling heads following the tennis ball during Wimbledon matches could become the 'Centre Court nod' as spectators instead check a new artificial intelligence service about the matches they are watching. The All England Lawn Tennis Club has unveiled an AI-powered 'match chat' assistant that can answer fans' questions and provide unprecedented levels of analysis during ladies' and men's singles matches. The system is targeted particularly at younger fans who watch sport with one eye on the live action and the other on their mobile phones.

Although the development will dismay some traditionalists, it has been welcomed by the tennis coach Judy Murray, mother of Sir Andy, who has twice been men's singles champion, and Jamie Murray, who has twice won the mixed-doubles championships.

'It's all about engaging with those who are growing up with technology, or have grown up with technology, and understanding how they get their news and their information and how they are viewing sport,' Murray said. 'I certainly see it with the young people that I'm coming into contact with.' She added: 'They do not have the same attention span. They may be watching a match and want the highlights of another game on their telephones. They want the statistical information about what is happening during the match and not have to wait until the end.'

The AI analysis, which is powered by the technology giant IBM, will also be given to the players' boxes so they can change their tactics following the lifting of a ban on in-game coaching. Wimbledon allowed coaching during matches for the first time in 2023. Coaches can issue advice either verbally or via hand signals at any time, except during the playing of a point.
Murray said: 'The skill of the coach is disseminating that information and passing on what is the most relevant and easy for the player to take on in the heat of the match.' Wimbledon is also replacing 300 line judges with artificial intelligence from this year's championships. AI-generated commentary was introduced in 2023 for matches that would not normally be covered by humans.
