
Latest news with #Deepfake

They sold their likeness to AI companies — and regretted it

Japan Times

17-04-2025

  • Entertainment
  • Japan Times


South Korean actor Simon Lee was stunned when he saw his likeness — at times as a gynecologist or a surgeon — being used to promote questionable health cures on TikTok and Instagram. He is one of scores of people who have licensed their image to artificial intelligence marketing companies, only to be unpleasantly surprised to see themselves featured in deepfakes, dubious adverts or even political propaganda.

"If it was a nice advertisement, it would've been fine to me. But obviously it is such a scam," he said, adding that the terms of his contract prevented him from getting the videos removed. He was left with his digital clone advocating lemon balm tea for weight loss or ice baths to fight acne.

AI technology allows firms to build catalogues of digital models — cheaper than filming actors, but more realistic than an entirely AI-generated avatar — to appear in videos that mostly promote products or services. Solene Vasseur, a digital communications and AI consultant, said this new form of advertising is fast and cheap compared with a real-life production. Using avatars is also a way for brands to "show that they're comfortable with the new tools."

The method is quick and straightforward: half a day's shooting, a green screen and a teleprompter. The actor has to display different emotions, which allows the artificial intelligence to make the avatar say all sorts of things, in an infinite number of languages.

"The performance in terms of the expressiveness of a real human — voice, facial movements, body language ... is still superior to anything AI can generate right now," said Alexandru Voica, head of corporate affairs at Synthesia, a U.K.-based industry leader. To make a video, the platform's customers just have to select a face, a language and a tone — such as serious or playful — and insert the script.
The whole process comes at a modest price: the ultra-basic version is free, while the pro version costs a few hundred euros.

'Am I crossing a line?'

The contracts offer up to a few thousand euros, depending on duration and how well a person is known. But they can be filled with legal jargon and sometimes abusive clauses, and in their rush to make quick cash, some people have found it hard to fully understand what they were signing up for.

Such was the case for Adam Coy, a 29-year-old actor and director based in New York, for whom selling his image was a financial decision.

[Photo: Chinese technology company Honor displays its new AI agent and deepfake detection technology at the Mobile World Congress in Barcelona on March 2. | REUTERS]

In October 2024, he signed over the rights to his face and voice to MCM for $1,000, granting the company the use of his avatar for one year. "If I was more successful, I feel like I would maybe be able to have the ethical conversation with myself," he said. "Is this right, or am I crossing a line by doing this?"

A few months later, his partner's mother came across videos in which his digital clone claimed to come from the future and announced disasters to come. None of this is forbidden by the contract, which only prohibits use for pornographic purposes or in connection with alcohol and tobacco. Coy described the experience of watching his avatar as "surreal" and said he initially thought he would be an animated avatar. But "it's decent money for little work," he added.

Propaganda

British actor and model Connor Yeates, who signed a three-year contract with Synthesia for €4,600 ($5,240), also encountered an unpleasant surprise. At the time he signed, in 2022, he was sleeping on a friend's sofa, he told British newspaper The Guardian in 2024. "I don't have rich parents and needed the money," he said. This seemed like a "good opportunity."
But he then discovered that his image had been used to promote Ibrahim Traore, the president of Burkina Faso who took power in a coup in 2022.

"Three years ago, a few videos slipped our content moderation partly because there was a gap in our enforcement for factually accurate but polarizing type content or videos with exaggerated claims or propaganda, for example," said Voica.

The firm said it has since introduced new procedures, but other platforms have appeared, some applying much less stringent rules, and it was possible to make an avatar from one of these platforms say outrageous things.

"The clients I've worked with didn't fully understand what they were agreeing to at the time," said Alyssa Malchiodi, a lawyer who specializes in business law. "One major red flag is the use of broad, perpetual and irrevocable language that gives the company full ownership or unrestricted rights to use a creator's voice, image and likeness across any medium."

Contracts often contain clauses considered abusive, Malchiodi said, such as worldwide, unlimited, irrevocable exploitation with no right of withdrawal. "Technology is evolving faster than courts or legislatures can respond," the lawyer said. "These are not invented faces," she added, calling for more caution.

Fake Job Seekers Are Exploiting AI To Scam Job Hunters And Businesses

Forbes

11-04-2025

  • Business
  • Forbes


It's hard for people to find a job in the current market. To make matters worse, the U.S. job market is contending with a growing threat of fraudulent job applicants armed with artificial intelligence (AI) tools that deceive hiring managers to secure remote positions. Using deepfake videos, voice manipulation and fabricated resumes, these impostors exploit generative AI to create convincing false identities.

This scam isn't just a hiring headache. It's a cybersecurity crisis. Bad actors are infiltrating companies to steal data, plant malware or drain funds. As remote work surges, businesses, recruiters, hiring managers and job hunters all need to be careful.

From startups to Fortune 500 firms, recruiters face a wave of applicants wielding AI-driven tools — deepfake avatars, tailored resumes and cloned voices — that slip past experienced professionals. Gartner, a leading research firm, predicts that by 2028 one in four global job candidates could be fake, driven by the accessibility of generative AI. A quick scroll through LinkedIn under the search term 'job scams' shows dozens of accounts claiming that they've either been scammed or that an attempt was made to take advantage of the account holder. Both companies and job hunters need to be on alert.

Scammers leverage generative AI to fabricate resumes, photo IDs and employment histories. They may copy real professionals' LinkedIn profiles or use deepfake technology to mimic faces during video interviews. With AI tools like ChatGPT, they craft resumes packed with buzzwords such as '10 years at Google' and 'led cloud migrations,' optimized to bypass applicant tracking systems (ATS). These documents are so polished that recruiters often overlook red flags.

Some bad actors employ real-time AI assistants to answer technical questions, masking their lack of expertise. Others orchestrate a 'bait-and-switch,' where a skilled stand-in aces the interview, only for a different person to take the job.
Bots amplify the chaos, flooding job boards with thousands of applications to overwhelm recruiters and bury genuine candidates. Many scammers claim U.S. residency while operating overseas, funneling salaries through intermediaries or outsourcing tasks to hidden teams. Once hired, their goals turn malicious: installing ransomware, leaking trade secrets or quietly draining accounts.

Americans lost $501 million to these scams in 2024, according to the Federal Trade Commission, and the FBI's Internet Crime Complaint Center reports that victims lose around $3,000 on average.

Remote roles are a scammer's dream: no office visits, no bosses on the alert watching over them. Beyond financial losses, estimated at millions annually from breaches and wasted hires, trust in hiring erodes. Legitimate job seekers face heightened scrutiny, competing in a market cluttered with fake or 'ghost' job postings. A 2023 report from the Identity Theft Resource Center noted a 118% surge in job scams, fueled by AI and remote work.

This scam is a wake-up call for employers. Balancing open hiring with rigorous verification is critical, especially for remote roles. Companies must invest in AI-detection tools, train recruiters to spot fraud and rethink trust in digital interactions. For job seekers, the irony stings: legitimate candidates face tougher hurdles while impostors glide through. As AI evolves, so must defenses, or the line between human and machine will blur further, leaving businesses vulnerable and the job market a minefield.

Mark Anthony Dyson, a popular career coach and podcaster on LinkedIn who has been raising the alarm about job scams, says, 'Companies will do everything they can to prevent fake employees. However, if they are now acknowledging the news of their financial stability, credibility, intellectual properties, and products, it may be too late. The hiring faux pas will cause companies to delay hiring once they find out they've paid fake employees. It will cost them more money than they're willing to spend to reverse the process. In some cases, they will have no choice but to pursue the protection of bad actors.'

There is a related ghost, or fake, job problem, which is different from the scamming issue. It's not nearly as malicious as the job scams, but it is an irritant. In a survey conducted by Clarify Capital, around 25% of hiring managers said they've left job postings open for more than four months; for the most part, they don't intend to hire anyone.

Listing fake jobs is a way to cultivate a pipeline of candidates: companies keep the resumes on file for the future and retain the applicants' data. Some hiring managers post phony jobs to make it look like the business is doing well and growing. I've seen managers post listings to make overworked employees feel that help is on the way, but it never materializes. Additionally, human resource managers list jobs online, forget about them over time, and leave them up for months. This is frustrating for job hunters, who apply to the role and never hear back because the original poster forgot all about the position.

Some people use postings as cover to make it look like they are recruiting for an open role when the hiring manager already knows who they want to hire. The posting creates a false impression of a fair and open hiring process.

Companies can also gain a sense of the marketplace by posting phantom jobs. From the responses, they can determine how much other companies offer or whether the role is in demand. If a company is looking to downsize or cut jobs, the phony roles can provide insight into how hard it would be to find a replacement and at what compensation level.
