SkyReels Open Sources the World's First Human-Centric Video Foundation Model for AI Short Drama Creation – SkyReels-V1, Reshaping the AI Short Drama Landscape
Open-Source Repositories:
SkyReels-V1: https://github.com/SkyworkAI/SkyReels-V1
SkyReels-A1: https://github.com/SkyworkAI/SkyReels-A1
Technical Report: https://arxiv.org/abs/2502.10841
Official Website: skyreels.ai
Addressing global pain points in AI video generation—such as closed-source models, limited accessibility, high costs, and usability issues—SkyReels is breaking new ground by open-sourcing two SOTA-level models and algorithms, SkyReels-V1 and SkyReels-A1. These cutting-edge technologies for AI short drama creation are now offered to the open-source community and AIGC users.
The production format for AI videos and short dramas has been market-validated. SkyReels helps address the challenges in traditional short drama production—such as complex offline processes including scriptwriting, casting, set design, storyboard creation, filming, and post-production, which require substantial manpower, incur high costs, and extend production cycles.
01 SkyReels-V1: Human-Centric Video Foundation Model, the world's first open-source video generation model dedicated to AI short drama creation
AI short dramas require precise control over both cognitive and physical expressions—integrating lip-sync, facial expression, and body movement generation into a unified process. Currently, lip-sync generation is particularly well-developed, owing to its strong mapping with audio cues that enable high precision and superior user experience.
Yet, the true quality of AI short drama generation lies in the nuances of character performance. To dramatically enhance the controllability of facial expressions and body movements, SkyReels-V1 not only meticulously annotates performance details but also processes emotions, scene context, and acting intent, fine-tuning on tens of millions of high-quality, Hollywood-level data points.
The research team has implemented advanced technical upgrades to capture micro-expressions, performance subtleties, scene descriptions, lighting, and composition. As a result, characters generated by SkyReels now exhibit remarkably precise acting details—approaching an award-winning level.
SkyReels-V1 delivers cinematic-grade micro-expression performance, supporting 33 nuanced facial expressions and over 400 natural motion combinations that faithfully reproduce genuine human emotional expression. As demonstrated in the accompanying video, SkyReels-V1 can generate expressions ranging from hearty laughter, fierce roars, and astonishment to tears—showcasing rich, dynamic performance details.
Moreover, SkyReels-V1 brings cinematic-level lighting and aesthetics to AI video generation. Trained on Hollywood-level high-quality film data, every frame generated by SkyReels exhibits cinematic quality in composition, actor positioning, and camera angles.
Whether capturing solo performance details or multi-character scenes, the model now achieves precise expression control and high-quality visuals.
Importantly, SkyReels-V1 supports both text-to-video and image-to-video generation. It is the largest open-source video generation model supporting image-to-video tasks at equivalent resolution, achieving SOTA-level performance across multiple metrics.
Figure 1: Comparison of Text-to-Video Metrics for SkyReels-V1 (Source: SkyReels)
Such SOTA-level performance is made possible not only by SkyReels' self-developed high-quality data cleaning and manual annotation pipeline—which has built a tens-of-millions-scale dataset from movies, TV shows, and documentaries—but also by its 'Human-Centric' multimodal video understanding model, which significantly enhances the ability to interpret human-related elements in video, particularly through an in-house character intelligence analysis system.
In summary, thanks to the robust data foundation and advanced character intelligence analysis system, SkyReels-V1 can achieve:
Cinematic Expression Recognition: 11 types of facial expression understanding for characters in film and drama, including expressions such as disdain, impatience, helplessness, and disgust, with emotional intensity levels categorized into strong, medium, and weak.
Character Spatial Awareness: Leveraging 3D reconstruction technology to comprehend spatial relationships among multiple characters, enabling cinematic positioning.
Behavioral Intent Understanding: Constructing over 400 behavioral semantic units for precise action interpretation.
Scene-Performance Correlation: Analyzing the interplay between characters, wardrobe, setting, and plot.
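The four capabilities above imply a per-clip annotation record combining expression type, intensity, behavioral unit, and scene context. The sketch below illustrates what such a record might look like; the field names, the four sample expression labels, and the validation logic are illustrative assumptions, not SkyReels' published schema:

```python
from dataclasses import dataclass

# Four of the 11 expression types named in the article; the full taxonomy
# is not public, so this set is an illustrative subset.
EXPRESSIONS = {"disdain", "impatience", "helplessness", "disgust"}
INTENSITIES = {"strong", "medium", "weak"}  # the three stated intensity levels

@dataclass
class ClipAnnotation:
    """Hypothetical annotation record for one film/drama clip."""
    expression: str     # one of the 11 cinematic expression types
    intensity: str      # strong / medium / weak
    behavior_unit: str  # one of the ~400 behavioral semantic units
    scene_context: str  # free-text scene / wardrobe / plot notes

    def validate(self) -> bool:
        # A real pipeline would enforce the full controlled vocabulary here.
        return self.expression in EXPRESSIONS and self.intensity in INTENSITIES

ann = ClipAnnotation("disdain", "medium", "arms_crossed_lean_back",
                     "dinner argument, dim interior lighting")
print(ann.validate())  # → True
```

Structuring annotations as controlled vocabularies (rather than free text alone) is what makes intensity levels and behavioral units usable as training signals.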
Video: https://youtu.be/U2SXDzQGnbQ
SkyReels-V1 is not only among the very few open-source video foundation models worldwide; it is also the most powerful in terms of performance for character-driven video generation.
With SkyReels' self-developed inference optimization framework 'SkyReels-Infer,' inference efficiency has been significantly improved, achieving 544p video generation on a single RTX 4090 in just 80 seconds. The framework supports distributed multi-GPU parallelism, including Context Parallel, CFG Parallel, and VAE Parallel. FP8 quantization and parameter-level offload allow it to run on low-memory GPUs; support for FlashAttention, SageAttention, and model compilation optimizations further reduces latency; and building on the open-source diffusers library enhances usability.
Figure 2: Using equivalent RTX 4090 resources (4 GPUs), the SkyReels-Infer version reduces end-to-end latency from 464.3 s to 293.3 s versus the HunyuanVideo official version, a 1.58× speedup.
Figure 3: Under similar A800 resource conditions, the SkyReels-Infer version reduces end-to-end latency by 14.7%–28.2% compared to the HunyuanVideo official version, demonstrating a more robust multi-GPU deployment strategy.
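Parameter-level offload, one of the memory-saving techniques mentioned above, keeps model weights in CPU memory and moves each layer's parameters to the GPU only while that layer executes, so peak GPU memory is bounded by the largest single layer rather than the whole model. The toy scheduler below illustrates the idea in pure Python; it is a conceptual sketch, not the SkyReels-Infer implementation:

```python
# Toy illustration of parameter-level offload: weights "live" on the CPU
# and are loaded to the GPU one layer at a time, then freed before the
# next layer loads. Sizes are in arbitrary byte counts.
class OffloadRunner:
    def __init__(self, layer_sizes):
        self.layer_sizes = layer_sizes  # per-layer weight sizes, CPU-resident
        self.peak_gpu = 0               # tracks peak simulated GPU usage

    def forward(self):
        for size in self.layer_sizes:
            gpu_resident = size  # copy this layer's weights to the GPU
            self.peak_gpu = max(self.peak_gpu, gpu_resident)
            # ... layer computation would run here ...
            # weights are offloaded (freed) before the next layer loads

runner = OffloadRunner([4_000, 2_000, 8_000, 1_000])
runner.forward()
print(runner.peak_gpu)           # 8000: peak is the largest single layer
print(sum(runner.layer_sizes))   # 15000: far below full-model residency
```

The trade-off is extra CPU-to-GPU transfer time per layer, which is why offload is paired with techniques like FP8 quantization (smaller transfers) rather than used alone.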
02 SkyReels-A1: The First SOTA-Level Expressive Portrait Image Animation Algorithm Based on Video Diffusion Transformers
To achieve even more precise and controllable character video generation, SkyReels is open-sourcing SkyReels-A1, a SOTA-level algorithm based on a video diffusion transformer for expression and action control. Comparable to Runway Act-One, SkyReels-A1 supports video-driven, film-grade expression capture, enabling high-fidelity micro-expression reproduction.
SkyReels-A1 can generate highly realistic and consistent videos for characters under any reference conditions—from half-body portraits to full-body shots. It achieves precise simulation of facial expressions, emotional nuances, skin textures, and body movements.
Given a reference image and a driving video, SkyReels-A1 'transplants' the facial expressions and action details from the driving video onto the character in the reference image. The resulting video shows no distortion and faithfully reproduces the micro-expressions and body movements of the driving video, even surpassing the video quality of Runway Act-One in evaluation.
More encouragingly, SkyReels-A1 not only supports profile-based expression control but also enables highly realistic eyebrow and eye micro-expression alignment, along with more pronounced head movements and natural body motions.
For example, in the same dialogue scene, while the character generated by Runway Act-One shows noticeable distortion and deviates from the original appearance, SkyReels-A1 preserves the character's details, maintaining authentic nuance and seamlessly blending facial expressions as well as body movements.
Furthermore, SkyReels-A1 can drive scenes with far more dramatic facial expressions. Where Runway Act-One fails to generate the desired effect, SkyReels-A1 can transfer more complex expression dynamics, enabling the character's facial emotions to synchronize naturally with body movements and scene content for an exceptionally lifelike performance.
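At a high level, video-driven animation of this kind extracts per-frame expression and pose parameters from the driving video and re-renders them on the reference identity. The sketch below illustrates that control loop; every function and field name is a hypothetical placeholder standing in for real estimation and rendering modules, not SkyReels-A1's actual API:

```python
# High-level sketch of video-driven portrait animation: identity comes from
# the reference image, motion comes from the driving video. All names here
# are hypothetical placeholders, not SkyReels-A1's real interface.
def extract_params(frame):
    # Stand-in: a real system would run a face/pose estimator on the frame.
    return {"expression": frame["expression"], "head_pose": frame["head_pose"]}

def animate(reference_identity, driving_frames):
    output = []
    for frame in driving_frames:
        params = extract_params(frame)
        # Each output frame pairs the fixed identity with per-frame motion.
        output.append({"identity": reference_identity, **params})
    return output

driving = [{"expression": "smile", "head_pose": 5.0},
           {"expression": "laugh", "head_pose": 12.0}]
result = animate("ref_portrait_01", driving)
print(result[0]["identity"], result[1]["expression"])  # ref_portrait_01 laugh
```

The key property this decomposition buys is the one the article emphasizes: the character's appearance stays locked to the reference image while expressions and movements follow the driver frame by frame.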
03 Empowering the Global AI Short Drama Ecosystem through Open-Sourcing
Video generation models are among the most challenging components of AI short drama creation. Although model generation capabilities have significantly improved over the past year, there remains a considerable gap—particularly given the high production costs.
By open-sourcing our SOTA-level models, SkyReels-V1 and SkyReels-A1, SkyReels becomes the first in the AI short drama industry to take such a step. This initiative not only represents a modest yet significant contribution to the industry but also marks a major leap toward fostering a flourishing ecosystem for AI short drama creation and video generation.
It's believed that with further advancements in inference optimization and the open-sourcing of controllable algorithms, these models will soon provide users with more cost-effective and highly controllable AIGC capabilities. SkyReels aims to empower users to create AI short dramas at minimal cost, overcome current issues of inconsistent video generation, and enable everyone to generate detailed, controllable character performances using their own computers.
This open-sourcing of our video generation models is not only a technological breakthrough that helps narrow the digital divide in the global content industry, but it is also a revolution in cultural production capacity. In the future, the convergence of short dramas, gaming, virtual reality, and other fields will accelerate industrial integration. AI short dramas have the potential to evolve from a 'tech experiment' into a mainstream creative medium and become a new vehicle for global cultural expression.
'Achieve artificial general intelligence and empower everyone to better shape and express themselves.' With this open-source initiative, SkyReels will continue to release more video generation models, algorithms, and universal models—advancing AGI equity and fostering the sustained growth and prosperity of the AI short drama ecosystem, while benefiting the open-source community, developer ecosystems, and the broader AI industry.
CONTACT: Jingnan Fu, Skywork AI PTE. LTD., fujingnan(at)singularity-ai.com