Fans blast online short drama for AI-generated Dilraba Dilmurat
14 Apr - Fans of Dilraba Dilmurat recently expressed anger at a production that misused AI to generate the actress' likeness for its online short drama.
A short drama titled "Ni Qi" was found to have used AI to replace its original actor's face with Dilraba's, prompting fans of the actress to immediately claim infringement and demand that her agency take legal action.
Netizens pointed out that in some scenes involving the lead character, a high school student, the face does not resemble Dilraba at all from the side, but changes to the Uyghur actress' face when she turns around.
This is not the first time a celebrity's face has been misused via AI to promote products: Louis Koo's and Raymond Lam's images were previously generated to promote an online casino game.
The short drama, which consists of episodes of two to three minutes each, had been shown on multiple online short drama apps prior to the complaint. A manager of one of these platforms said they had already begun deleting the scenes involving the actress' image.
(Photo Source: Dilraba Weibo, Louis Fanpage IG, Sin Chew)
