

The Independent – Entertainment
20-03-2025
Watchdog bans 'shocking' ads that objectify women on mobile gaming apps
'Degrading' ads objectifying women, showing non-consensual sexual encounters and using pornographic tropes have been banned by the UK advertising watchdog after appearing to child audiences on mobile gaming apps.

An investigation by the Advertising Standards Authority (ASA) used avatars to mimic the browsing behaviour of different age groups and genders, monitoring the ads that appear in mobile games and identifying breaches of the UK advertising code. Although most of the almost 6,000 adverts that appeared complied with UK rules, the watchdog identified eight that portrayed women in a 'shocking' way and banned them.

One advert, for the app Perfect Lie – a game which included a sexual innuendo – was shown to a female child avatar while she was using a game that featured a virtual cat and likely appeals to a younger audience. The offending ad, which showed a teacher bent over with her bottom appearing pixelated, was found to risk causing harm and serious offence.

Another ad, for an interactive romance game called My Fantasy, was shown to both male and female child avatars while they were using a game that involved freeing cars from traffic jams. It showed an animation of a woman being approached by another woman and pushed on to a desk, then presented options asking what she should do: 'enjoy it', 'push her away', 'please continue' and 'stop it'. The watchdog said the content was 'strongly suggestive and implied the sexual encounters were not consensual'.

Two ads for an artificial intelligence chatbot app called Linky: Chat With Characters AI appeared while the female child avatar was using a flight simulator game and a character simulation game. The ad began with a woman dressed in a manga T-shirt, a short skirt and bunny ears dancing in a bedroom. It then showed a text that read: 'Tell me which bf [boyfriend] I should break up with', followed by a text conversation with three manga-style men.
One character was described as 'obsessively possessive, aggressively jealous and won't let you out of his sight. He's also a kidnapper and killer'. The text described yanking the woman 'into the car, swiftly knocking her out'. She asked, 'okay but what if I enjoy this' and he replied, 'You will not enjoy this.' The ASA said the ad was 'suggestive and implied scenarios involving violent and coercive control and a lack of consent'.

The report highlighted that although these instances were rare, the watchdog takes a 'zero-tolerance' approach to content that shows 'degrading portrayals of women'.

'We know that seeing harmful portrayals of women can have lasting effects, especially on younger audiences,' said Jessica Tye, regulatory projects manager at the ASA. 'Whilst we're glad to see that most advertisers are doing the right thing, the small number who aren't must take responsibility. Through this report, we're making it clear: there's no room for these kind of ads in mobile gaming, or anywhere,' she added.

Over the past two years the watchdog has investigated and upheld 11 complaints in cases where in-app ads have harmfully objectified women or condoned violence against them.

Almost half of Britons are concerned about the way women and girls are depicted in ads, a separate YouGov survey of 6,500 people revealed. It found 45 per cent of people are concerned about ads that show idealised images of women, and 44 per cent are concerned about the objectification of women and girls in ads.


The Independent – Entertainment
13-02-2025
Celebrity deepfake scam ads were most reported to watchdog in 2024
Fake adverts featuring celebrities and public figures remain the most common type of scam advert appearing online, according to new figures from the Advertising Standards Authority (ASA).

Data from the watchdog's Scam Ad Alert System found that scam ads featuring famous figures made up the 'vast majority' of the alerts it sent to platforms in 2024. These scam adverts often contain doctored or deepfake images of celebrities or public figures; the ASA noted it saw scam ads depicting Prime Minister Sir Keir Starmer and Chancellor Rachel Reeves last year, as well as ads using the likeness of Stacey Solomon and Strictly Come Dancing judge Anton Du Beke.

Last week, BBC presenter Naga Munchetty revealed she had found scammers using her image in scam ads which linked to fake news stories about her – a common tactic used by scammers, who the ASA said are often promoting cryptocurrency, suspicious investment schemes and other dubious activities. Many experts have warned that the rise of AI-powered deepfakes is making these adverts more convincing and therefore more dangerous to consumers.

In its latest update, the ASA said it sent 177 Scam Ad Alerts to social media and other online platforms, asking them to remove the ads and take further action – up from 152 alerts sent in 2023.

Jessica Tye, regulatory projects manager at the ASA, told the PA news agency that a key tactic for scammers was homing in on the public's appetite for celebrity gossip, as well as on whoever is currently in the news. 'I think the public is very interested in stories about celebrities, fake stories about their downfall or bad things happening to them. They're also interested in endorsement,' she said. 'Though it's a long-running trend, we see new celebrities being used in these scam ads all the time, depending on perhaps who is in the news or just who scammers have alighted on.

'Scam ads online are not a new problem, and scams themselves are not a new problem either – for as long as people have been selling something, scams have existed.
'Obviously, they develop and evolve over time, and we know that scammers use very sophisticated techniques to avoid detection.'

X, formerly known as Twitter, was one of the platforms to receive more than one alert from the ASA in 2024, and critics have argued that since Elon Musk's takeover of the site in 2022, and his subsequent rolling back of content moderation, more spam and scam material has been allowed to circulate on the site. The ASA's figures showed that in 2024, X failed to respond to 72% of the 22 alerts it was sent by the ASA. However, the watchdog said a discrepancy over contact arrangements between it and X was identified and resolved, which resulted in a response rate of 80% within 48 hours of reporting for the final three months of the year.

Ms Tye said scammers were using sophisticated digital techniques that make it hard for platforms to respond. Those techniques included one known as 'cloaking', where scammers present different content to platforms and their moderation systems than they do to potential victims, enabling them to evade detection unless reported.

'This can make it quite difficult for platforms and others involved in the ad ecosystem to stop these ads appearing,' she told PA. 'And we do know from our work with platforms that they have all the measures in place to stop these ads appearing.

'Ultimately, that is not always successful, and that's one reason why we run the Scam Ads Alert System – so that consumers can report to us when they do see these ads, and so that we can play a small part in helping to disrupt these scam ads.'

She added that it was important the public report scam ads to platforms and the ASA, which can be done via a quick reporting form on the watchdog's website.
'Public reporting doesn't solve scam ads, and it's not the public's responsibility to solve scam ads, but they can play their part,' she said. 'So we would definitely encourage the public, if they see a scam ad in paid-for space, to report it to us, because they can be confident that we will assess those reports swiftly and do everything we can to stop similar ads appearing in the future.'

Rocio Concha, director of policy and advocacy at Which?, said: 'A flood of celebrity deepfakes and other scam adverts reinforces why the current slow, reactive and toothless system for tackling fraud online is woefully inadequate.

'The biggest online platforms have shown they're unwilling to take effective action to stop scam ads appearing in the first place, which is why specific requirements in the Online Safety Act for platforms to stop scam ads from appearing are so desperately needed.

'It is extremely disappointing that these protections have been kicked into the long grass and may not take effect until 2027. Ofcom must speed up the full implementation of the Online Safety Act so platforms that fail to tackle fraudulent ads can be hit with tough penalties – including multimillion-pound fines.'