AI-generated images of DeepSeek startup team circulate online
"The average age of the Deepseek team is less than 35 years old. This group of young people has caused panic in the technology community in the United States," reads the simplified Chinese caption of one of the posts.
DeepSeek's R1 chatbot stunned investors and industry insiders with its ability to match the functions of its Western competitors at a fraction of the cost, though a number of countries have questioned its storage of user data and have moved to ban it from government devices (archived link).
Founded in May 2023, the firm is the brainchild of tech and business prodigy Liang Wenfeng, who was born in 1985 (archived link).
In an interview with the Chinese tech media outlet 36kr in May 2023, Liang said the core technical positions of the company "are essentially filled with freshmen and people who graduated a year or two ago" (archived link).
The images were also viewed tens of thousands of times elsewhere on X and Facebook.
However, they contain inconsistencies that indicate they were generated with AI tools.
AFP ran the images through the Verification Plugin, also known as the InVID-WeVerify tool, from AFP partner veraai.eu. The tool can identify traces left by AI image generation software.
The results indicated a 99 percent chance that the first image was created with a face-swap algorithm, and a 100 percent likelihood that the second image was "manipulated through face swapping".
The two images contained inconsistencies indicating they were AI-generated, said Shu Hu, director of the Purdue Machine Learning and Media Forensics (M2) Lab (archived link). He pointed out blurry ears and a lack of details in teeth and pupils in both images.
Siwei Lyu, director of the University at Buffalo's Media Forensic Lab (UB MDFL), also pointed out a "noticeable colour difference" on one face, an unnaturally long neck, and identical-looking eyes and mouths among the people in the images (archived link).
"Everyone's smile in the images is unnaturally uniform, resembling the stereotypical smile learned by an AI model," Lyu told AFP. "Their teeth are showing [in the images] at nearly the same angle, which is unusual for real photos."
In addition to the blurry rendering of ears, the second image also features irregular "text" on the uniforms, as well as an erroneous DeepSeek logo in the background. The company's genuine logo is a whale.
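The detection tool's output is probabilistic, but the general idea behind such forensics (looking for statistical traces that editing or generation leaves behind) can be illustrated with a much simpler technique. Below is a minimal error-level analysis (ELA) sketch in Python using Pillow. It is not the method used by InVID-WeVerify or the researchers quoted here, and the file name is a placeholder.

```python
# Minimal error-level analysis (ELA) sketch: recompress an image and
# amplify the pixel-wise difference. Regions that were edited or pasted in
# often recompress differently from the rest of the picture.
# Generic illustration only, not the InVID-WeVerify algorithm;
# "suspect.jpg" is a placeholder file name.
import io

from PIL import Image, ImageChops, ImageEnhance


def error_level_analysis(path: str, quality: int = 90, scale: float = 15.0) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then reload the recompressed copy.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Difference between the two versions, brightened so artefacts are visible.
    diff = ImageChops.difference(original, recompressed)
    return ImageEnhance.Brightness(diff).enhance(scale)


if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

A uniform, dim ELA map is unremarkable; bright, localised patches merely flag areas worth a closer look, which is why forensic tools combine several such signals before reporting a probability.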
Related Articles


Axios (12 minutes ago)
Beijing's hackers are playing the long game
Chinese hackers are targeting more sensitive U.S. targets than ever — not to smash and grab, but to bide their time.

Why it matters: Beijing is investing in stealthy, persistent access to U.S. systems — quietly building up its abilities to disrupt everything from federal agencies to water utilities in the event of escalation with Washington. Even the most routine spying campaign could leave China with backdoors to destruction for years to come.

Driving the news: At least three China-based hacking groups exploited vulnerable SharePoint servers in the last month, according to Microsoft. Researchers at Eye Security, which first discovered the SharePoint flaws, estimate that more than 400 systems were compromised as part of the SharePoint attacks. In this case, hackers also stole machine keys. That means the attackers can regain access whenever they want — even after the system is patched — unless admins take rare manual steps to rotate keys.

The big picture: China's state-linked hackers have been growing in sophistication over the last few years as they focus more on targeting technology and software providers with hundreds of customers, often including government agencies.

By the numbers: More than 330 cyberattacks last year were linked to China, double the total from 2023, according to CrowdStrike data shared with the Washington Post. Those numbers continued to climb in early 2025, according to CrowdStrike.

Between the lines: At least three major Chinese government teams have been targeting U.S. networks in recent years. Volt Typhoon has focused on breaking into endpoint detection tools to burrow deep into U.S. critical infrastructure, including pipelines, railways, ports and water utilities. Their goal is to maintain persistent access and be prepared to launch destructive attacks in the event of contingencies such as a war over Taiwan, experts say. Salt Typhoon, known for its compromises of global telecom networks, has focused on traditional espionage and spying. This group tapped cell phones belonging to President Trump, Vice President Vance and other top government officials. The FBI believes that threat is now "largely contained." Silk Typhoon — which has been linked to a recent breach of the U.S. Treasury Department and is known for the global 2021 Microsoft Exchange hacks — has been ramping up its work in recent months. The group uses previously undetected vulnerabilities, known as zero-days, to break into networks.

Zoom in: Researchers at cybersecurity firm SentinelOne have uncovered more than 10 patents tied to Silk Typhoon's work — a rarity among nation-state hackers. The patents — detailed in a report published Thursday — suggest the group was at one point developing new offensive tools, including tools to encrypt endpoint data recovery, conduct phone and router forensics and decrypt hard drives. The researchers also found that Silk Typhoon has links to at least three private sector companies.

The intrigue: Beijing's growing reliance on private contractors adds another layer of complexity — shielding state involvement while expanding capability. A DOJ indictment released last month details how the Shanghai State Security Bureau directed employees at tech companies to hack into computers across U.S. universities and businesses to steal information. A trove of leaked documents stolen from private Chinese contractor I-Soon early last year also highlighted how hired hackers targeted several U.S. government agencies, major newspapers and research universities.

State of play: China's growing cyber prowess comes as the Trump administration has diminished resources for its own cyber defenses. At least a third of the workforce at the Cybersecurity and Infrastructure Security Agency has left through voluntary buyouts, early retirements or layoffs. The Trump administration also wants to cut its budget.

Yes, but: The administration is expected to invest heavily in its own offensive cyber powers — with $1 billion from the "One Big Beautiful Bill" heading to the Pentagon for just that purpose.


Tom's Guide (12 minutes ago)
I used AI to resurrect extinct animals in a documentary — the results blew my mind
MiniMax 02 from Chinese AI startup Hailuo is one of the few models to match Google's Veo 3 in terms of physics and visual realism. It's able to create stunning real-world scenes from simple or complex prompts and is a massive upgrade on the previous generation. It doesn't have the audio capabilities of Veo 3, but you can create video from image or text and use the consistent character feature to ensure consistency across videos. There are 6- or 10-second video generation options and 720p or 1080p resolutions.

To put the latest-generation AI video model to the test, I came up with a concept: a wildlife documentary about wildlife that no longer exists, covering extinct species such as the dodo, sabertooth tigers and woolly mammoths and then tying them to species today.

The first task was to come up with the story, from the threat of annihilation through to what these animals might be like if they had survived to the modern era. It isn't particularly clever and some scenes don't make much sense, but it looks good. I then turned the story into prompts: just simple ideas.

I turned to Grok 4 to help me work out the story. I gave it the idea I'd come up with and asked it to help me plot out a series of videos to create a one- to two-minute documentary. Using AI to help prompt AI is a practical solution; it can create structure and add key terms such as camera type or motion to otherwise simple sentences. Grok 4 is particularly useful as it instantly went online, found prompting guides for MiniMax 02 and tailored its responses based on best practice. It came up with 16 prompts, each resulting in six seconds of video.

It started with the concept: 'Document the behaviors and environments of extinct species as if captured on film, discussing evolution and extinction causes. Include 'what if' scenarios like a mammoth in modern times for a speculative twist.'

Next was working out the specific scenes I'd need to tell the story and crafting prompts that played to MiniMax's strengths. For example, MiniMax 02 is great for realistic physics such as fur movement and collisions, as well as camera controls and texture details.

I decided to open with a scene called The Dawn of Extinction. The goal is to set the stage with a dramatic overview of prehistoric Earth. First prompt: "Panoramic view of a lush prehistoric valley teeming with diverse extinct animals like dodos and saber-toothed tigers grazing peacefully, subtle wind physics rustling leaves and fur, wide-angle orbiting camera pulling back to reveal an approaching asteroid shadow, cinematic epic style with warm golden hour lighting, 1080p/24 FPS, 6s." As you can see, it lists a wide range of extinct animals, the wider scene and, of course, the lighting, camera type and style.

Once I had the prompts, 16 in total, I turned to Hailuo's MiniMax, selecting 02 from the model menu and ensuring I was on text-to-video. If you wanted more consistent control you could use image-to-video, first generating pictures in Midjourney or similar. I only have a standard account, so I set it to 1080p and 6s. Going for the highest resolution available on any given model gives you more flexibility with editing later, such as cropping in or adding a zoom motion.

I only had to repeat a video twice, although I wasn't as picky as I could have been. In one, I had a prompt to create a battle between a sabretooth tiger and a woolly mammoth, but it gave me an ordinary tiger and then shifted to a sabretooth. In the other, it was an end board and I forgot to put the text in double quotes, which ensures accurate rendering.

Once I had the 16 videos that made up my mini documentary, I set about creating sound effects for each one. That meant using Grok 4 to turn the video prompts into SFX prompts for the ElevenLabs SFX generator. I then turned to Suno to create a sweeping instrumental soundtrack, and back to ElevenLabs to give voice to a script I'd written based on the contents of the videos. Finally, I put it all together in CapCut and selected the sounds to match key moments in the video. I then added the voiceover and music tracks.

Creating content using AI video tools has never been easier. AI video tools like MiniMax are also becoming increasingly realistic, not just in the way they look but in how they handle lighting and physics. You can create an entire documentary from a handful of prompts. Where once you might need a dozen bad video generations for each good one, now, as long as the prompt is good, you get more or less one for one, making it both cheaper and faster. Other than Google Veo 3, Hailuo MiniMax 02 is the only model that consistently achieves physics and lighting accuracy in almost every generation, and it's much cheaper.
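The workflow above is all point-and-click, but its most repetitive step, turning a list of scene ideas into consistently structured prompts, is easy to script. Here is a minimal sketch in Python; the scene list and the template fields (camera, lighting, resolution, duration) are hypothetical stand-ins inspired by the example prompt above, not MiniMax parameters or an official API.

```python
# Build consistently structured text-to-video prompts from short scene ideas.
# A sketch of the prompt-templating step described above; the scenes and
# template fields are illustrative placeholders, not MiniMax API parameters.

SCENES = [
    "Panoramic view of a lush prehistoric valley with dodos and saber-toothed tigers",
    "An approaching asteroid casts a growing shadow over grazing woolly mammoths",
    "A lone woolly mammoth wanders through a modern city street at dusk",
]

TEMPLATE = (
    "{scene}, subtle wind physics rustling leaves and fur, "
    "{camera}, cinematic epic style with {lighting}, "
    "{resolution}/{fps} FPS, {duration}s"
)


def build_prompts(scenes, camera="wide-angle orbiting camera",
                  lighting="warm golden hour lighting",
                  resolution="1080p", fps=24, duration=6):
    """Return one fully specified prompt per scene idea."""
    return [
        TEMPLATE.format(scene=scene, camera=camera, lighting=lighting,
                        resolution=resolution, fps=fps, duration=duration)
        for scene in scenes
    ]


if __name__ == "__main__":
    for i, prompt in enumerate(build_prompts(SCENES), start=1):
        print(f"Scene {i}: {prompt}")
```

Generating the prompts this way keeps camera, lighting and duration wording identical across all 16 scenes, which is the same consistency the article gets by having Grok 4 follow a prompting guide.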

Epoch Times (3 hours ago)
Plunder of Ghana's Gold by Chinese Criminals Continues, Authorities Say
JOHANNESBURG—Thousands of Chinese citizens remain in Ghana to mine gold illegally, despite a crackdown by authorities in Africa's largest producer of the precious metal, according to law enforcement agencies in the capital, Accra. They say the illegal miners appear to be taking advantage of the record-high gold price, which hit $3,500 an ounce in April, with much of the illicit metal being smuggled back to China.