A power for good?

For some, the advent of the worldwide web is still fresh in the memory. But technological leaps seem to happen with ever-increasing frequency, and we now all find ourselves blinking in the brilliant light at the dawn of the age of AI. At the Advertising Standards Authority (ASA), we've donned the sunglasses and rolled up our sleeves, and AI is already proving a game-changer in how we regulate.
The lightning speed with which AI has developed and become integrated into our everyday lives inevitably raises legitimate concerns. What does it mean for jobs, data protection, originality, creativity, copyright, plagiarism, truth, bias, mis- and disinformation, and our ability to tell what is fake from what is real?
These are undoubtedly important issues to grapple with. But the technology also brings multiple benefits. As was the case in the mid-1990s with the launch of search engines, web browsers and online shops, there were innovators, early adopters, cautious sceptics and technology resisters. AI is no different. The ASA is firmly in the 'early adopter' category. Four years ago, we appointed a head of data science and began building our AI capability; AI is now central to our transformation into a preventative and proactive regulator. Around 94 per cent of the 33,903 ads amended or withdrawn last year came from our proactive work using our AI-based Active Ad Monitoring system. The ability to be on the front foot and take quick and effective action is crucial when regulating the vast online ecosystem. AI gives us much greater visibility of online ads.
Last year, our system scanned 28 million ads, with machine learning and, increasingly, large language models finding the likely non-compliant ads we're interested in. That was a tenfold increase on 2023. Our target is to scan 50 million ads this year. AI-based tools are embedded in our work to help us monitor and tackle ads in high-priority areas and are now used in most of our projects, including our work on climate change and the environment, influencer marketing, financial advertising, prescription-only medicines, gambling and e-cigarettes. They're enabling us to carry out world-leading regulation – monitoring, identifying and tackling potential problem ads at pace and scale. Take one example: our ongoing climate change and environment project. Following high-profile and precedent-setting rulings against major players in various industries, we're now seeing businesses adapting and evolving to make better-evidenced, more precise green claims.
Monthly sweeps using AI show high levels of compliance. Following our 2023 airline rulings on misleading 'sustainable' and 'eco-friendly' claims, of the circa 140,000 ads we monitored, we found just five that were clearly non-compliant.
Importantly, we're not removing humans from the equation. Our experts are and will remain central to our regulation. While our AI capability has dramatically improved the efficiency of our monitoring (weeding out the millions of ads that stick to the rules and aren't a problem), it filters and flags potential problem ads to our human specialists for their expert assessment. AI is assisting rather than replacing our people. There are a lot of open questions about how AI will impact industries, positively and negatively. And that's certainly true of advertising, as ever at the forefront of technological change.
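The filter-and-flag workflow described above (millions of ads scored automatically, with only likely breaches routed to human specialists) can be sketched in a few lines of Python. This is purely illustrative: the ASA's Active Ad Monitoring system is not public, and every name below, from score_ad to REVIEW_THRESHOLD, is a hypothetical stand-in; the simple keyword check merely plays the role of a real machine-learning model.

```python
# Illustrative sketch only: names and the keyword scorer are hypothetical
# stand-ins, not the ASA's actual system.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # hypothetical cut-off above which an ad goes to human review


@dataclass
class Ad:
    ad_id: str
    text: str


def score_ad(ad: Ad) -> float:
    """Stand-in for a model estimating how likely an ad is to breach the rules."""
    risky_terms = ("guaranteed win", "miracle cure", "100% eco-friendly")
    hits = sum(term in ad.text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms) + 0.5 * bool(hits))


def triage(ads: list[Ad]) -> tuple[list[Ad], list[Ad]]:
    """Split ads into those flagged for human specialists and those filtered out."""
    flagged = [ad for ad in ads if score_ad(ad) >= REVIEW_THRESHOLD]
    cleared = [ad for ad in ads if score_ad(ad) < REVIEW_THRESHOLD]
    return flagged, cleared
```

The design point is the routing, not the scoring: compliant ads are filtered out automatically, and only ads above the threshold land in a human review queue for expert assessment.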
We know that the use of AI is already changing advertising. There are big efficiency and effectiveness gains in play: lower-cost ad ideation and creation, hyper-personalisation and improved customer experience, and quicker and better media planning and buying. Get this right and ads will be cheaper to make and send, and more engaging and relevant to receive. UK businesses and the British economy will be boosted. But in all of this, responsible ads must not be sacrificed at the altar of advances in technology.
We're well aware of the many potential benefits and problems AI poses for advertising. Think back to the story from Glasgow, where AI-generated ads promised a Willy Wonka-themed event that wasn't quite as advertised. The advertising of certain AI products and services certainly throws up broader ethical considerations. On our radar are ads for AI tools offering mental health support (as a substitute for human therapists), essay-writing tools that pass work off as original, and chatbots that act as a partner or friend. We don't regulate the products themselves, but in all these examples there is potential for ads to be misleading, irresponsible or harmful. How can businesses use AI safely and responsibly? What does that mean for advertisers?
Our media and technology-neutral rules already cover most of the risks. Ads can't mislead, a principle as old as the hills. In the past, that might have been using photo-editing software; today, it might be through generative AI. Adverts must not be likely to cause harm or serious or widespread offence either. Generative AI might be an unsurpassed pattern-recogniser, but it's not a human and may well miss the nuance of judging prevailing standards in society when producing ad content. Advertisers who harness AI can't abdicate responsibility for the creative content that it produces. That's why we urge businesses to be careful: use the good of AI, but avoid the bad. Put in place human checks and balances.
At the ASA, we're determined to take full advantage of technological advances: developing our Active Ad Monitoring system further, making even more use of large language models to speed up the review of ads, actively experimenting with how these tools can make our internal processes more efficient, and continuing to keep a close eye on how AI is used in advertising.
We are witnessing the next technological revolution, one that will change society in the way the internet did, perhaps even more profoundly. We can say with confidence that our use of AI is already delivering world-leading advertising regulation.
Related Articles

A power for good?
New Statesman · 2 days ago

Ladbrokes ads banned over use of 'Ladbucks' likely to appeal to under-18s
The Independent · 4 days ago

Ads for gambling firm Ladbrokes have been banned for using the term 'Ladbucks', found to resemble gaming references likely to be of strong appeal to under-18s.

The TV ad, seen in December, featured a voiceover that stated: 'This is a Ladbuck, the new way to get rewarded at Ladbrokes, and these are some of the 100 million Ladbucks that will be dropping weekly.

'Collect them on our free to play games and choose rewards like free spins, free bets and more … Plus you can even use them to play your favourite games for free in our Ladbucks arcade. Like Fishin Frenzy and Goldstrike.'

A Video on Demand ad, seen on Channel 4 around the same time, was the same as the TV ad. The Advertising Standards Authority (ASA) received two complaints that the term 'Ladbucks' was likely to be of strong appeal to under-18s.

Ladbrokes said the term 'Ladbucks' was chosen as a play on the word Ladbrokes, and because it referenced, through the use of the term bucks, that it had value on the Ladbrokes website. They said the word had no origins in youth culture and believed that it was not of inherent strong appeal to under-18s, and highlighted that both ads had targeting restrictions to reduce the likelihood of children viewing them. The firm said it believed that the term was not associated with any coins from video games which were popular with under-18s, adding that 'V-Bucks' from Fortnite and 'Robux' from Roblox were in-game currencies that had to be purchased before being used to buy in-game items. Further, it did not believe the term 'lad' referred to a boy or young man and said its brand had never been used in that context.

The ASA said several online games popular with under-18s, such as Roblox and Fortnite, had their own in-game currencies, called Robux and V-Bucks respectively. These currencies, which could be both bought and earnt through gameplay, were depicted as coins, and spent within in-game stores, usually on cosmetic items that enhanced gameplay.
According to Ofcom's 2024 report into media use and attitudes, 60% of children aged between three and 17 years gamed online, while 89% of 11 to 18-year-olds gamed online weekly. The most popular categories included building games, such as Roblox, followed by games played against others, such as Fortnite.

The ASA said it considered the term 'Ladbucks', through the suffix 'bucks', had strong similarities to the in-game currencies Robux and V-Bucks. It said the name 'Ladbucks', when considered alongside the imagery and the application of the coin in the ads, was 'depicted in a manner which was similar to features in video games popular with children'. 'We therefore considered the term in the ads was likely to be of strong appeal to under-18s and breached the Code,' it said.

The watchdog ruled that the ads must not appear again in their current form, adding: 'We told Ladbrokes not to include content in ads that was reflective of youth culture or which had strong appeal to those under 18 years of age.'

A spokesman for Entain, which owns Ladbrokes, said: 'We are disappointed by the ASA's ruling on our 'Ladbucks' advertising campaign, and we are seeking an independent review of what we consider to be a flawed decision.

'For example, it is based on an inaccurate comparison with games such as Fortnite or Roblox and their in-game currencies. Entain works extremely carefully to ensure that its advertising does not target or appeal to under-18s.

'We maintain that this was a responsibly created and targeted campaign, pre-approved by Clearcast and only shown after the watershed.'

Diesel clothing advert banned for objectifying Katie Price
South Wales Argus · 5 days ago

The ad, which appeared on the Guardian news website on March 26, included an image of Price wearing a bikini and holding a handbag in front of her chest. The Advertising Standards Authority (ASA) received 13 complaints that the ad objectified and sexualised women and featured a model who appeared to be unhealthily thin.

[Pictured: the banned Diesel ad featuring Katie Price (ASA/PA)]

Diesel said the ad was part of a brand campaign called 'The Houseguests', which was designed to challenge stereotypes and support diversity and inclusion in the fashion industry by reflecting a wide range of body types. It believed the ad was compliant with the advertising rules but said it removed the ad from the Guardian website.

The brand said Price was 46 years old and had a body type that was not usually included in high fashion campaigns, explaining that the average age for editorial models was between 16 and 23. Diesel believed the image was a 'celebration of Ms Price's sexuality and empowerment and was not objectifying, degrading or sexualising', and 'showed Ms Price clearly in control in an active and dynamic pose where she proudly showed off her body and the handbag'.

Diesel added that Price was 'well-known for her exaggerated appearance and larger-than-life personality and her large lips and breasts formed part of her curated public image', and this 'exaggerated, eccentric and altered appearance' formed part of the creativity of the campaign. Finally, Diesel said although Price was slender, she had excellent muscle tone and was not unhealthily underweight.

The Guardian said it received a complaint directly about the ad on April 4 and blocked it from appearing again because it did not consider it complied with their policies.
Partly upholding the complaints, the ASA said the bikini only partially covered Price's breasts, and it considered the positioning of the handbag, in front of her stomach with the handle framing her chest, drew viewers' attention to, and emphasised, that part of her body.

The ASA said: 'While we acknowledged that Ms Price was shown in a confident and self-assured pose and in control, we considered that because of the positioning of the handbag, which had the effect of emphasising and drawing attention to her breasts, the ad sexualised her in a way that objectified her.

'We therefore considered the ad was likely to cause serious offence, was irresponsible and breached the Code.'

The ASA did not uphold complaints about Price appearing to be unhealthily thin, and concluded that the ad was not irresponsible on that basis. The watchdog ruled that the ad must not appear again, adding: 'We told Diesel to ensure their future ads were socially responsible and did not cause serious or widespread offence.'

Diesel said: 'Diesel's latest Houseguests campaign continues its tradition of challenging norms and embracing individuality. A key image features model Katie Price, 46, showcasing a body type rarely seen in high fashion, proving that women of all shapes and ages deserve representation. The photo celebrates confidence and empowerment without objectification.

'Shared in over 100 countries, it has not received any regulatory complaints, highlighting Diesel's commitment to respectful, inclusive storytelling.'
