Alibaba makes AI model for video, image generation publicly available
Alibaba's announcement follows similar action from startup DeepSeek, whose ostensibly low-cost open-source models earlier this year generated excitement among technology investors and surprise in the capital-intensive sector with performance akin to that of more established rivals such as OpenAI.
Alibaba said it has released four variants of Wan 2.1—T2V-1.3B, T2V-14B, I2V-14B-720P, and I2V-14B-480P—which generate images and videos from text and image input. The '14B' indicates the variant has 14 billion parameters; models with more parameters can generally capture more from the input and yield more accurate results.
The models are available globally on Alibaba Cloud's ModelScope and Hugging Face platforms for academic, research, and commercial use.
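Since the weights are published on Hugging Face, fetching a variant locally is a one-call affair with the `huggingface_hub` library. A minimal sketch follows; note that the repo naming scheme (`Wan-AI/Wan2.1-<variant>`) is an assumption inferred from the variant names above, not a confirmed listing.

```python
# Hypothetical sketch: fetching a Wan 2.1 variant from Hugging Face.
# The "Wan-AI/Wan2.1-<variant>" repo id pattern is an assumption.

VARIANTS = ("T2V-1.3B", "T2V-14B", "I2V-14B-720P", "I2V-14B-480P")


def wan_repo_id(variant: str) -> str:
    """Map one of the four published variant names to an assumed repo id."""
    if variant not in VARIANTS:
        raise ValueError(f"unknown Wan 2.1 variant: {variant}")
    return f"Wan-AI/Wan2.1-{variant}"


if __name__ == "__main__":
    # Lazy import: requires `pip install huggingface_hub` and downloads
    # several gigabytes of weights, so run only when actually needed.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(repo_id=wan_repo_id("T2V-1.3B"))
    print(local_dir)
```

The smallest variant (T2V-1.3B) is the sensible starting point for experimentation on consumer GPUs.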
Alibaba introduced the latest version of its video- and image-generating AI model in January—later shortening its name from Wanx to Wan—touting its ability to generate highly realistic visuals.
The firm has since highlighted its top ranking on VBench, a leaderboard for video generative models, where it leads in capabilities such as multi-object interaction.
On Tuesday, Alibaba released a preview of reasoning model QwQ-Max, which it plans to make open source upon full release.
It also announced plans this week to invest at least 380 billion yuan ($52 billion) over the next three years to bolster cloud computing and AI infrastructure.

Related Articles


Al Arabiya
2 days ago
OpenAI staff looking to sell $6 billion in stock: Report
Current and former employees of OpenAI are looking to sell nearly $6 billion worth of the ChatGPT maker's shares to investors including SoftBank Group and Thrive Capital, a source familiar with the matter told Reuters on Friday. The potential deal would value the company at $500 billion, up from $300 billion currently, underscoring both OpenAI's rapid gains in users and revenue and the intense competition among artificial intelligence firms for talent.

SoftBank, Thrive and Dragoneer Investment Group did not immediately respond to requests for comment. All three investment firms are existing OpenAI investors. Bloomberg News, which had earlier reported the development, said discussions are in early stages and the size of the sale could change. The secondary share sale adds to SoftBank's role in leading OpenAI's $40 billion primary funding round.

Bolstered by its flagship product ChatGPT, OpenAI doubled its revenue in the first seven months of the year, reaching an annualized run rate of $12 billion, and is on track to reach $20 billion by the end of the year, Reuters reported earlier in August. Microsoft-backed OpenAI has about 700 million weekly active users for its ChatGPT products, up from about 400 million in February.


Makkah Newspaper
3 days ago
Phishing evolves with AI and stealth: Kaspersky highlights biometric and signature risks with attempts increasing by 22.5% in KSA
Kaspersky detected and blocked over 142 million phishing link clicks globally in Q2 2025; the Kingdom of Saudi Arabia saw a 22.5% increase in phishing attempts from Q1. Phishing is currently undergoing a shift driven by sophisticated AI-powered deception techniques and innovative evasion methods. Cybercriminals are exploiting deepfakes, voice cloning and trusted platforms like Telegram and Google Translate to steal sensitive data, including biometrics, electronic signatures and handwritten signatures, posing unprecedented risks to individuals and businesses.

AI-powered tactics transforming phishing attacks

AI has elevated phishing into a highly personalized threat. Large language models enable attackers to craft convincing emails, messages and websites that mimic legitimate sources, eliminating the grammatical errors that once exposed scams. AI-driven bots on social media and messaging apps impersonate real users, engaging victims in prolonged conversations to build trust. These bots often fuel romance or investment scams, luring victims into fake opportunities with AI-generated audio messages or deepfake videos.

[Image: an example of a phishing email created with DeepSeek (left) and an example of a phishing website created with AI (right)]

Attackers also create realistic audio and video deepfake impersonations of trusted figures — colleagues, celebrities or even bank officials — to promote fake giveaways or extract sensitive information. For instance, automated calls mimicking bank security teams use AI-generated voices to trick users into sharing two-factor authentication (2FA) codes, enabling account access or fraudulent transactions. Additionally, AI-powered tools analyze public data from social media or corporate websites to launch targeted attacks, such as HR-themed emails or fake calls referencing personal details.
Employing new tactics to bypass detection

Phishers are deploying sophisticated methods to gain trust, exploiting legitimate services to prolong their campaigns. For instance, Telegram's Telegraph platform, a tool for publishing long texts, is used to host phishing content. Google Translate's page translation feature generates proxied links that appear Google-affiliated, which attackers use to bypass security solutions' filters.

[Image: a phishing page mimicking an Office document hosted on Telegraph (left) and an example of a phishing page hidden behind a URL provided by Google Translate (right)]

Attackers now also integrate CAPTCHA, a common anti-bot mechanism, into phishing sites before directing users to the malicious page itself. Because CAPTCHA is usually associated with trusted platforms, its presence deflects anti-phishing algorithms and lowers the likelihood of detection.

A switch in hunting: from logins and passwords to biometrics and signatures

The focus has shifted from passwords to immutable data. Attackers target biometric data through fraudulent sites that request smartphone camera access under pretexts like account verification, capturing facial or other biometric identifiers that cannot be changed. These are used for unauthorized access to sensitive accounts or sold on the dark web. Similarly, electronic and handwritten signatures, critical for legal and financial transactions, are stolen via phishing campaigns impersonating platforms like DocuSign or prompting users to upload signatures to fraudulent sites, posing significant reputational and financial risks to businesses.

'The convergence of AI and evasive tactics has turned phishing into a near-native mimic of legitimate communication, challenging even the most vigilant users. Attackers are no longer satisfied with stealing passwords — they're targeting biometric data, electronic and handwritten signatures, potentially creating devastating, long-term consequences. By exploiting trusted platforms like Telegram and Google Translate, and co-opting tools like CAPTCHA, attackers are outpacing traditional defenses. Users must stay increasingly skeptical and proactive to avoid falling victim,' said Olga Altukhova, security expert at Kaspersky.

Detailed information is available in Kaspersky's full report.

Earlier in 2025, Kaspersky detected a sophisticated targeted phishing campaign dubbed Operation ForumTroll, in which attackers sent personalized phishing emails inviting recipients to the 'Primakov Readings' forum. These lures targeted media outlets, educational institutions and government organizations in Russia. After a recipient clicked the link in the email, no additional action was needed to compromise their system: the exploit leveraged a previously unknown vulnerability in the latest version of Google Chrome. The malicious links were extremely short-lived to evade detection and, in most cases, ultimately redirected to the legitimate 'Primakov Readings' website once the exploit was taken down.

To be protected from phishing, Kaspersky recommends:
• Verify unsolicited messages, calls, or links, even if they appear legitimate. Never share 2FA codes.
• Scrutinize videos for unnatural movements or overly generous offers, which may indicate deepfakes.
• Deny camera access requests from unverified sites and avoid uploading signatures to unknown platforms.
• Limit sharing sensitive details online, such as document photos or sensitive work information.
• Use Kaspersky Next (in corporate environments) or Kaspersky Premium (for individual use) to block phishing attempts.
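The Google Translate evasion works because the translation proxy rewrites a site's hostname onto a Google-owned domain, so the visible link looks Google-affiliated. As an illustration only — this is not Kaspersky's detection logic, and the `translate.goog` proxy-domain pattern is assumed here — a naive hostname check might look like:

```python
# Minimal illustration (assumed translate.goog pattern, not Kaspersky's
# method): flag links served through Google Translate's page-translation
# proxy, which rewrites a target site's hostname onto translate.goog.
from urllib.parse import urlparse


def is_translate_proxy(url: str) -> bool:
    """Return True if the URL's host is a translate.goog proxy host."""
    host = (urlparse(url).hostname or "").lower()
    return host == "translate.goog" or host.endswith(".translate.goog")
```

A filter that only allowlists "google" in the hostname would wave such links through, which is precisely why attackers favor them; checking the proxy domain explicitly closes that gap for this one technique.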


Arab News
5 days ago
Musk's bid to dismiss OpenAI's harassment claims denied in court
A federal judge on Tuesday denied Elon Musk's bid to dismiss OpenAI's claims of a 'years-long harassment campaign' by the Tesla CEO against the company he co-founded in 2015 and later abandoned before ChatGPT became a global phenomenon.

In the latest turn in a court battle that kicked off last year, US District Judge Yvonne Gonzalez Rogers ruled that Musk must face OpenAI's claims that the billionaire, through press statements, social media posts, legal claims and 'a sham bid for OpenAI's assets,' had attempted to harm the AI startup. Musk sued OpenAI and its CEO Sam Altman last year over the company's transition to a for-profit model, accusing the company of straying from its founding mission of developing AI for the good of humanity, not profit. OpenAI countersued Musk in April, accusing the billionaire of engaging in fraudulent business practices under California law.

Musk then asked for OpenAI's counterclaims to be dismissed or delayed until a later stage in the case. OpenAI argued in May its countersuit should not be put on hold, and the judge on Tuesday concluded that the company's allegations were legally sufficient to proceed. A jury trial has been scheduled for spring 2026.