Anycubic S1 Combo Review: We Finally Have a Bambu Lab P1S Competitor
Over the last year we have seen a lot of 3D printers trying to replicate the success that Bambu Lab has had with its color system. The P1P Combo (followed up by the P1S Combo) is still one of the best 3D printers you can buy, and until now there was very little in the way of competition. The combination of color system, speed and quality seemed like a pipe dream for most manufacturers. That all changed in early 2025, with several companies finally taking a stab at the color combo.
Anycubic has been making budget 3D printers for a long time, so it makes sense for the company to try to compete with Bambu Lab in the color system space. Its first foray, the Anycubic Kobra 3, was a fine attempt, but the S1 Combo is the company's direct competitor to the Bambu Lab P1S, and it is surprisingly good. Good enough to hold its own in a space long dominated by one company.
Like the P1S, the Anycubic S1 is a core XY 3D printer. Core XY differs from many printers in that the print bed moves up and down while the print head stays on the same plane. This creates a much more stable platform for the print and reduces the chance of vibration issues. Core XY also works very well with a color system: the frame is usually enclosed, so you can stack the color system (called the ACE on Anycubic machines) on top of the machine to reduce the footprint without introducing major movement issues. The S1 has a plastic door and lid rather than the glass of the P1S, and while that does make it slightly louder, it also makes the entire machine lighter, so it's a trade-off. I was worried that the lack of glass would make the S1 feel cheap, but it makes less difference than I had thought.
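For the mechanically curious, the belt math behind core XY is simple enough to show in a few lines. Both motors stay bolted to the frame, and every head move is a mix of the two belts, which is where the stability comes from. This is a minimal illustrative sketch of the general core XY kinematics, not Anycubic's firmware code:

# Illustrative sketch of standard core XY motion mixing (not Anycubic firmware).
# Neither belt motor maps directly to X or Y: the motors move together for one
# axis and in opposition for the other, keeping the heavy motors off the head.

def corexy_motor_moves(dx, dy):
    """Convert a desired head move (dx, dy) into belt moves for motors A and B."""
    da = dx + dy  # motor A: X and Y components add
    db = dx - dy  # motor B: X and Y components subtract
    return da, db

# Example: a pure 10 mm Y move drives both motors, in opposite directions.
print(corexy_motor_moves(0.0, 10.0))  # (10.0, -10.0)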
What does make a difference is the large LCD panel on the S1. My chief complaint about the P1P/P1S has always been that the display is difficult to navigate and feels very cheap. At 4.3 inches, the LCD on the S1 is large and easy to navigate, which makes adjusting parameters on the fly much easier. It's far more similar to the LCD on the Bambu Lab X1 Carbon (a more expensive machine) than the P1S' tiny 1-inch-by-3-inch display. The S1 also uses USB thumb drives as its storage of choice rather than the microSD cards used by the P1S. I much prefer that, as thumb drives are easier to load, unload and transport. The S1 has its own internal memory as well, so when you transfer a file over Wi-Fi, it keeps a copy stored on the machine, making it easier to reprint if you are making batch copies.
The print quality of the Anycubic S1 is surprisingly good. Well, maybe surprisingly isn't the right word. Most modern 3D printers are significantly better than they were even three years ago, but the S1 is in a price bracket that can fall prey to poor tuning. The machine doesn't seem to have any mechanical issues that could cause bad prints, and my test print showed significant improvements over the Kobra 3, Anycubic's other color system 3D printer. The test print had significantly better towers and dimensional accuracy (something a core XY printer is very good at) and showed that the cooling is working the way it should.
I printed models in PLA, TPU, ABS and PETG, and all of them printed well. The enclosure means that even hot materials like ABS print well, though I recommend taking the lid off to print PLA; the interior can get a little warm, and PLA simply prints better at cooler temperatures. The best models came from PLA and PETG in terms of quality, and I had some issues feeding TPU through to the nozzle because of the material's flexibility and the length of the PTFE tube. Aside from that, the S1 performed admirably.
Color systems are on the rise, and Anycubic's take is likely to give us a good idea of what budget systems are going to look like. The ACE unit itself is fine. There is nothing to distinguish it from other color systems, as it doesn't have a heated chamber or anything like that, but it works as it is supposed to. I'm a little concerned that the entrances to the material feeds will grind down over time, since they sit in a fixed position, but they do look like a replaceable part, which is good.
The color prints themselves are excellent. You can see from the cheerful pilot and the purge tower behind it that there was very little in the way of bleeding, and that the purge tower is clean along the margins. That means the "poop chute" and nozzle cleaner are doing what they are designed to do, which is unusual for a lot of these Bambu Lab clones. Honestly, even the P1S can struggle with this, so seeing Anycubic get this right is a welcome change.
The Anycubic S1 has the same issue as any color system: waste. The amount of printer poop (the colloquial term for printer waste) the S1 produces is very similar to that of the P1S, though I think I could reduce it if I spent a little more time with the software, which creates challenges of its own. The Anycubic slicer is based on several others, including Bambu Studio, Orca Slicer and PrusaSlicer. Unfortunately, it seems to have removed some of the more helpful settings, like the one that lets me adjust the purge levels on the printer. Those settings are important if you want to reduce your printer's waste without creating color-bleed issues. The Anycubic slicer is certainly better than its last version, but it still has a way to go before it is as simple to use as its competitors.
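To give a sense of what that missing control does: in slicers that expose it (Bambu Studio and Orca Slicer both offer flushing-volume settings with a global multiplier), reducing waste essentially means scaling each purge volume down while keeping a floor so colors don't bleed into each other. Here is a toy sketch of that trade-off with made-up numbers, not the Anycubic slicer's actual setting:

# Illustrative only: how a "flush multiplier" style setting trades waste
# against color bleed. The values and names here are hypothetical.

def scaled_flush(volume_mm3, multiplier, floor_mm3=80.0):
    """Scale a purge volume down, but keep a floor so colors don't bleed."""
    return max(volume_mm3 * multiplier, floor_mm3)

# A default purge of 300 mm^3 at a 0.6 multiplier still purges 180 mm^3,
# cutting waste by 40% while staying above the bleed threshold.
print(scaled_flush(300.0, 0.6))  # 180.0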
This is one of the biggest issues with even the best budget 3D printers. These days, 3D printers often come with apps, which are usually subpar (as their slicers tend to be). The S1 uses the same app as the Kobra 3, which is, frankly, awful. It is loaded with ads and just feels like an afterthought in a way that the Bambu app, and now the Prusa app, don't. It feels like it's trying to sell you something, and that's just not what I need from a 3D printing app. It needs to monitor my machines and help me print quicker or better, but the app struggles to do that effectively.
Overall, my experience with the Anycubic S1 is positive. At $400 for the standalone machine and $600 for the color system combo, it stands up to the P1S in almost every way and beats it on price. The P1S with the color system sells for around $700 and, from what I can see, doesn't offer anything more than the Anycubic.
Only the software detracts from the overall experience of using the Anycubic S1, though I find the better LCD on the printer makes up for some of that shortfall. Software issues aside, the S1 is a sturdy entry into the core XY with color system pantheon and can be considered one of the best in its price bracket.
