
Alibaba releases VACE, an open-source AI video editor
Alibaba has introduced Wan2.1-VACE (Video All-in-one Creation and Editing), an open-source artificial intelligence model that integrates video generation and editing capabilities within a single multimodal framework.
Wan2.1-VACE is part of Alibaba's Wan2.1 series and is the first open-source model reported to provide unified video generation and editing solutions for a variety of content creation tasks. The system is designed to support inputs from text, images and video, enabling creators to transform different forms of media into video content rapidly.
The model's editing tools include referencing still images or selected video frames, repainting video sequences, modifying chosen regions within a video, and extending videos in space and time, which together enable more flexible editing workflows. These functions are applicable across multiple sectors, such as short-form social media content, advertising and marketing production, post-production effects for film and television, and educational training resources.
According to Alibaba, Wan2.1-VACE enables users to generate videos featuring specific interactions between subjects based on image samples. Static images can be converted into moving video sequences with realistic motion effects, providing creators with options for pose transfer, motion control, depth simulation, and recolouring.
The system's selective editing functions allow additional content to be added, altered or removed from designated regions of a video without affecting the rest of the footage, while the video boundary extension feature can expand a video's spatial dimensions and automatically generate complementary content.
"As an all-in-one AI model, Wan2.1-VACE delivers unparalleled versatility, enabling users to seamlessly combine multiple functions and unlock innovative potential. Users can turn a static image into video while controlling the movement of objects by specifying the motion trajectory. They can seamlessly replace characters or objects with specified references, animate referenced characters, control poses, and expand a vertical image horizontally to create a horizontal video while adding new elements through referencing," the company stated.
The technical architecture of Wan2.1-VACE is designed around several new concepts, including the Video Condition Unit (VCU), which serves as a unified interface supporting the processing of different input modalities—text, images, video footage, and masks. The model also incorporates a Context Adapter structure, using representations of time and space to inject task-specific information across a range of video editing and synthesis applications.
"Wan2.1-VACE leverages several innovative technologies, to take into account the needs of different video editing tasks during construction and design. Its unified interface, called Video Condition Unit (VCU), supports unified processing of multimodal inputs such as text, images, video, and masks," said the company. "The model employs a Context Adapter structure that injects various task concepts using formalised representations of temporal and spatial dimensions. This innovative design enables it to flexibly manage a wide range of video synthesis tasks."
Alibaba stated that advances in the model architecture support quick and efficient content creation for social media, marketing, and entertainment, as well as training and education. The company also highlighted the resource intensity of training video foundation models, noting, "Training video foundation models requires immense computing resources and vast amounts of high-quality training data. Open access helps lower the barrier for more businesses to leverage AI, enabling them to create high-quality visual content tailored to their needs, quickly and cost-effectively."
The Wan2.1-VACE model is being released in two versions—a 14-billion-parameter model and a 1.3-billion-parameter version. Both will be available for free download on platforms such as Hugging Face, GitHub, and Alibaba Cloud's ModelScope open-source community.
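For readers who want to try the weights locally, the standard Hugging Face tooling should suffice to fetch them once the repositories are live. The snippet below is a minimal sketch using the huggingface_hub client; the repository identifiers are assumptions inferred from the series naming and should be checked against the official Hugging Face or ModelScope pages.

```python
# Minimal sketch: download a released checkpoint with huggingface_hub.
# The repo_id values below are assumptions based on the Wan2.1 naming
# convention; verify the exact identifiers on the official model pages.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Wan-AI/Wan2.1-VACE-1.3B",   # assumed name for the 1.3B variant
    local_dir="./wan2.1-vace-1.3b",
)
print(f"Model files downloaded to {local_dir}")
# The 14B variant would follow the same pattern, e.g. "Wan-AI/Wan2.1-VACE-14B".
```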
Earlier this year, Alibaba released four other Wan2.1 models, followed in April by a video generation system supporting start and end frame creation. Collectively, these models have accumulated more than 3.3 million downloads on platforms including Hugging Face and ModelScope, highlighting growing interest in AI-driven video production tools.
