
Latest news with #S1

Biggest anime releases in 2025: New shows, sequels & movies you must watch

Time of India

5 days ago

  • Entertainment
  • Time of India

Biggest anime releases in 2025: New shows, sequels & movies you must watch

Anime is bigger than ever around the globe, and 2025 is shaping up to be one of the most thrilling years yet for fans in the US, UK, and beyond. Whether you're an anime novice or have been with the medium for decades, 2025 is set to deliver a massive slate of exciting new series, returning seasons, and can't-miss blockbuster films. From action-packed hits such as Solo Leveling and Jujutsu Kaisen to fresh originals and remakes of established classics such as One Piece, the anime scene is sizzling on popular streaming platforms including Netflix, Crunchyroll, Hulu, and Disney+. Here's your essential guide to the most anticipated anime of 2025, and why you should be hyped.

Top anime sequels returning in 2025

Fans are looking forward to the return of a number of smash-hit anime titles in 2025. These sequels promise even larger-scale conflict, richer narratives, and bold new animation breakthroughs.

Solo Leveling season 2 release date & what to expect

The worldwide smash hit Solo Leveling is back for its highly anticipated second season, picking up directly where season 1 left off with Sung Jin-Woo, the world's greatest hunter. After the explosive finale of season 1, season 2 is sure to deliver a new run of dungeon raids, dragon fights, and terrifying foes. If you're an action fanatic or a fantasy enthusiast, Solo Leveling deserves a place on your watchlist.

Dan Da Dan season 2 and movie premiere in 2025

A mix of supernatural action and sci-fi mystery, Dan Da Dan season 2 will take the story further into the unknown. On top of that, a new movie is hitting US and UK cinemas this June, bringing the anime's magic to the big screen. Anyone who enjoys a blend of horror, comedy, and mystery will want to give this series a look.

Kaiju No. 8 season 2: More monster action coming

Kaiju No. 8 made its animated debut in 2024 and took the world by storm, and now the long-awaited second season is on its way. Look forward to more massive kaiju, destruction on a colossal scale, and some emotional surprises to boot. It's an ideal pick for viewers who want frenetic action laced with a keen emotional focus.

Bleach: Thousand-Year Blood War final part

The final installment of Bleach's last saga has been announced for late 2025, closing out the long-running series. Longtime fans will finally witness the climactic end of the Soul Reapers' war with the Quincies. It's a welcome return that, thanks to polished modern production values, should win over new viewers alongside old favorites.

Jujutsu Kaisen season 3: More cursed energy and drama

Following the phenomenal success of its first two seasons and big-screen outings, Jujutsu Kaisen season 3 has officially been greenlit for 2025. The series' combination of dark fantasy, high-energy action, and compelling characters is why it stands as one of the best anime to watch.

Spy × Family season 3: Fun and action continue

Spy × Family mixes comedy, family drama, and spy-caper action. The third season arrives in fall 2025 and is sure to bring more uproariously funny undercover missions and touching family moments.

New anime series to watch in 2025: Fresh stories & big names

If you're looking for fresh anime to start with, 2025 loads you up with originals and remakes worth anticipating.

Devil May Cry anime (Netflix exclusive)

The game's immersive, vibrant world continues in a slick new anime adaptation, available exclusively on Netflix. It delivers dramatic, modern demon-slaying action, stylish characters, and climactic fights. With epic combat and real character development, this fantasy action series is tailor-made not only for fans of the games but for anime lovers and action fans of all ages.

Lazarus by Shinichirō Watanabe: A must-see original

From the legendary director of Cowboy Bebop, Lazarus quickly became one of 2025's most anticipated original anime, delivering a distinctive sci-fi experience with striking visuals and an evocative soundtrack.

Moonrise by WIT Studio: A futuristic space drama

From the studio that brought you Attack on Titan, Moonrise takes place on the moon, where humans navigate conflict and emotion in an intense, awe-inspiring animated universe.

One Piece remake: The beginning arc

One Piece devotees get a high-quality reboot of the classic East Blue saga. The remake retells Luffy and his crew's earliest adventures, including arcs the original animation adapted slightly differently, with modern animation, making it an excellent starting point for new viewers.

Dragon Ball Daima: A new magical adventure

In honor of Dragon Ball's 40th anniversary, Dragon Ball Daima ushers in a new arc filled with magic and youth reversal, crafted by original creator Akira Toriyama.

Must-watch anime movies in 2025: Big screen & streaming hits

Demon Slayer: Infinity Castle trilogy

Beginning September 12, 2025, Demon Slayer's large-scale three-part movie event comes to US and UK theaters. These films adapt the final, arguably greatest, arcs of the series: a can't-miss experience for anime movie lovers.

Chainsaw Man: Reze arc movie

Following up on the bloody, heart-wrenching original Chainsaw Man, the new film zeroes in on the enigmatic Reze and delivers more chills, spills, and feels.

Jujutsu Kaisen: Hidden inventory movie

A prequel film delving into Gojo Satoru's backstory and the dawn of the cursed fight, perfect for fans looking to immerse themselves even further in the lore.

Lost in Starlight & The Colors Within

Two very different but equally emotional films bringing together sci-fi, fantasy, and romance, both recognized for their stunning animation and emotionally resonant narratives.

Why 2025 is the best year to watch anime

Anime's audience is exploding across the US and UK thanks to its accessibility on streaming platforms such as Netflix, Crunchyroll, Hulu, and now Disney+. With weekly simulcasts, dubs, and official subtitles, it's never been easier to enjoy anime as it happens. Big anime movies are coming to theaters in select cities, uniting fans for epic showdowns and opening-night celebrations. Anime merchandise, video games, and collaborations with major Western brands are supercharging the fandom as never before.

Quick anime release dates 2025 you need to know

  • Solo Leveling Season 2: Early 2025 (Crunchyroll, Netflix)
  • Dan Da Dan Season 2: July 2025 (Netflix, Crunchyroll)
  • Kaiju No. 8 Season 2: Late 2025 (Crunchyroll)
  • Bleach Final Arc Part 4: Late 2025 (Hulu US, Disney+ UK)
  • Spy × Family Season 3: Fall 2025 (Crunchyroll, Netflix)
  • High Guardian Spice Season 2: Fall 2025
  • Demon Slayer Infinity Castle Arc Movie: September 12, 2025 (Theaters US/UK)
  • Devil May Cry Anime: Summer 2025 (Netflix Exclusive)

2025 is the year to dive into anime

Whether you're looking for action, fantasy, sci-fi, or poignant stories, 2025 has something for everyone. With blockbuster sequels returning, must-see new originals arriving, and major anime films hitting theaters in between, the flow of great anime shows no sign of drying up. So keep those streaming apps open, stay tuned, and get ready for what's sure to be an incredible year for anime!

Hanabi AI Launches OpenAudio S1: The World's First AI Voice Actor for Real-Time Emotional Control

Business Wire

6 days ago

  • Business
  • Business Wire

Hanabi AI Launches OpenAudio S1: The World's First AI Voice Actor for Real-Time Emotional Control

SAN FRANCISCO--(BUSINESS WIRE)--Hanabi AI, a pioneering voice technology startup, today announced OpenAudio S1, the world's first AI voice actor and a breakthrough generative voice model that delivers unprecedented real-time emotional and tonal control. Moving beyond the limitations of traditional text-to-speech solutions, OpenAudio S1 creates nuanced, emotionally authentic vocal output that captures the full spectrum of human expression. The OpenAudio S1 model is available in open beta today for everyone to try for free.

'We believe the future of AI voice-driven storytelling isn't just about generating speech—it's about performance,' said Shijia Liao, founder and CEO of Hanabi AI. 'With OpenAudio S1, we're shaping what we see as the next creative frontier: AI voice acting.'

From Synthesized Text-to-Speech Output to AI Voice Performance

At the heart of OpenAudio S1's innovation is transforming voice from a merely functional tool into a core element of storytelling. Rather than treating speech as a scripted output to synthesize, Hanabi AI views it as a performance to direct—complete with emotional depth, intentional pacing, and expressive nuance. Whether it's the trembling hesitation of suppressed anxiety before delivering difficult news or the fragile excitement of an unexpected reunion, OpenAudio S1 allows users to control and fine-tune vocal intensity, emotional resonance, and prosody in real time, making voice output not just sound realistic, but feel authentically human.

'Voice is one of the most powerful ways to convey emotion, yet it's the most nuanced, the hardest to replicate, and the key to making machines feel truly human,' Liao emphasized. 'But it's been stuck in a text-to-speech mindset for too long. Ultimately, the difference between machine-generated speech and human speech comes down to emotional authenticity. It's not just what you say but how you say it. OpenAudio S1 is the first AI speech model that gives creators the power to direct voice acting as if they were working with a real human actor.'

State-of-the-Art Model Meets Controllability and Speed

Hanabi AI fuels its creative vision with a robust technical foundation. OpenAudio S1 is powered by an end-to-end architecture with 4 billion parameters, trained extensively on diverse text and audio datasets. This setup enables S1 to capture emotional nuance and vocal subtleties with remarkable accuracy. Fully integrated into the platform, S1 is accessible to a broad range of users—from creators generating long-form content in minutes to professionals fine-tuning every vocal inflection. According to third-party benchmarks from Hugging Face's TTS Arena, OpenAudio S1 demonstrated consistent gains, outperforming ElevenLabs, OpenAI, and Cartesia in key areas:

  • Expressiveness – S1 delivers more nuanced emotional expression and tonal variation, handling subtleties like sarcasm, joy, sadness, and fear with cinematic depth, unlike the limited emotional scope of current competing models.
  • Ultra-low latency – S1 offers sub-100ms latency, making it ideal for real-time applications like gaming, voice assistants, and live content creation where immediate response time is crucial. Competitors such as Cartesia and OpenAI still experience higher latency, resulting in a less natural, more robotic response in real-time interactive settings.
  • Real-time fine-grained controllability – With S1, users can adjust tone, pitch, emotion, and pace in real time, using not only simple prompts such as (angry) or (voice quivering) but also more nuanced or creative instructions such as (confident but hiding fear) or (whispering with urgency). This allows flexible, expressive voice generation tailored to a wide range of contexts and characters.
  • State-of-the-art voice cloning – Accurately replicates a speaker's rhythm, pacing, and timbre.
  • Multilingual, multi-speaker fluency – S1 performs across 11 languages, excels at multi-speaker scenarios (such as dialogues with multiple characters) in multilingual contexts, and supports seamless transitions between languages without losing tonal consistency.

Pioneering Research Vision for the Future

OpenAudio S1 is just the first chapter. Hanabi's long-term mission is to build a true AI companion that doesn't just process information but connects with human emotion, intent, and presence. While many voice models today produce clear speech, they still fall short of true emotional depth and struggle to support the kind of trust, warmth, and natural interaction required of an AI companion. Instead of treating voice as an output layer, Hanabi treats it as the emotional core of the AI experience, because for an AI companion to feel natural, its voice must convey real feeling and connection.

To bring this vision to life, Hanabi advances research and product in parallel through two complementary divisions: OpenAudio, Hanabi's internal research lab, focuses on developing breakthrough voice models and advancing emotional nuance, real-time control, and speech fidelity, while Fish Audio serves as Hanabi's product arm, delivering a portfolio of accessible applications that bring these advancements directly to consumers. Looking ahead, the company plans to progressively release core parts of OpenAudio's architecture, training pipeline, and inference stack to the public.

Real-World Impact with Scalable Innovation

With a four-person Gen Z founding team, the company scaled its annualized revenue from $400,000 to over $5 million between January and April 2025, while growing its MAU from 50,000 to 420,000 through Fish Audio's early products—including real-time performance tools and long-form audio generation. This traction reflects the team's ability to turn cutting-edge innovation into product experiences that resonate with a fast-growing creative community.

Founder and CEO Shijia Liao has spent over seven years in the field and has been active in open-source AI development. Prior to Fish Audio, he led or participated in the development of several widely adopted speech and singing-voice synthesis models—including So-VITS-SVC, GPT-SoVITS, Bert-VITS2, and Fish Speech—which remain influential in the research and creative coding communities today. That open-source foundation built both the technical core and the community trust that now power the company's early commercial momentum. For a deeper dive into the research and philosophy behind OpenAudio S1, check out our launch blog post.

Pricing & Availability

Premium Membership (unlimited generation on Fish Audio Playground):
  • $15 per month
  • $120 per year

API: $15 per million UTF-8 bytes (approximately 20 hours of audio)

About Hanabi AI

Hanabi AI Inc. is pioneering the era of the AI Voice Actor—speech that you can direct as easily as video, shaping every inflection, pause, and emotion in real time. Built on our open-source roots, the Fish Audio platform gives filmmakers, streamers, and everyday creators frame-perfect control over how their stories sound.
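The inline emotion tags and per-byte API pricing described above lend themselves to a short illustration. The sketch below is hypothetical: the endpoint URL, request fields, and auth header are assumptions rather than documented API details; only the tag style (e.g. (angry), (whispering with urgency)) and the $15-per-million-UTF-8-bytes rate come from the release.

```python
# Hypothetical sketch only: the endpoint URL, payload fields, and auth header
# below are illustrative assumptions, not documented API details. The inline
# emotion tags and the $15 per million UTF-8 bytes rate come from the release.
import requests

API_URL = "https://example.fish.audio/v1/tts"  # assumed endpoint (placeholder)
API_KEY = "YOUR_API_KEY"                       # placeholder credential

# Emotion/tone directions are written inline, in the style quoted above,
# e.g. (angry), (voice quivering), (confident but hiding fear).
script = (
    "(confident but hiding fear) Everything is under control. "
    "(whispering with urgency) But we need to leave. Now."
)

# Pricing from the release: $15 per 1,000,000 UTF-8 bytes of input text.
n_bytes = len(script.encode("utf-8"))
estimated_cost = n_bytes / 1_000_000 * 15.0
print(f"{n_bytes} UTF-8 bytes -> estimated API cost ${estimated_cost:.6f}")

# Assumed request/response shape; consult the official docs for the real schema.
resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "s1", "text": script},
)
resp.raise_for_status()
with open("line.wav", "wb") as f:  # assumes the endpoint returns raw audio bytes
    f.write(resp.content)
```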


Panasonic's S1 II is its most powerful creator camera yet

Engadget

13-05-2025

  • Engadget

Panasonic's S1 II is its most powerful creator camera yet

After dropping a rare photography-oriented camera recently with the S1R II, Panasonic is going back to its creator roots. The company just unveiled the S1 II, a 24-megapixel full-frame mirrorless camera with a partially stacked sensor (like the Nikon Z6 III) that can capture 6K ProRes RAW video internally with minimal rolling shutter wobble. At the same time, Panasonic is pitching it as a relative value next to cameras with fully stacked sensors.

With the same body as the S1R II, the S1 II is considerably smaller and lighter than the original S1 while still offering a large grip and a full complement of manual controls. It also comes with a display that both tilts and flips out, a high-resolution 5.76-million-dot electronic viewfinder, CFexpress Type B and SD UHS-II card slots, and optional support for Panasonic's 32-bit float audio.

Photographers get up to 70 fps burst shooting in electronic shutter mode and 10 fps with the mechanical shutter (along with 1.5s pre-burst capture). They can also take 96MP high-resolution shots with no tripod needed, along with live-view composites, multiple exposures, and timelapse animations. Autofocus is via Panasonic's latest AI Phase Hybrid AF with Eye/Face AF, AF tracking, and detection of animals, vehicles, and a new category, "urban sports" (i.e., breakdancing).

On paper, though, this is more of a video-oriented camera. You can shoot ProRes and ProRes RAW video at up to 5.8K, 3:2 "open-gate" video at 5,952 x 3,968 resolution, and 4K at 120 fps. It supports V-Log / V-Gamut capture with dual native ISO at 640/5000 and up to 15 stops of dynamic range. In addition, you get anamorphic video modes plus external RAW HDMI recording in either ProRes or Blackmagic formats.

Panasonic boosted in-body stabilization to 8.0 stops via its Dual I.S. 2 system, while also offering the cropless IBIS mode introduced on the S1R II. Optical smoothing can be enhanced with electronic stabilization when more aggressive smoothing is required for walking or quick camera movements. Other key features include video monitoring tools like false color and exposure review, live streaming, wired webcam support via USB UVC/UAC (a first for Panasonic), and support for the Lumix Flow app, which lets you do things like create storyboards and shot lists for quicker editing. And as with other recent models, the S1 II supports real-time LUTs and the Lumix Lab app, letting you download creator-designed film looks that can be baked into your video or added later in post. Panasonic will also introduce ARRI LogC3 so that the S1 II, S1R II, and S1 IIE can be used in conjunction with ARRI digital cinema cameras.

The Panasonic S1 II is now available for pre-order at $3,199 (body only), with shipping set to start on June 16th. That price is high next to its main competition, the Nikon Z6 III, which retails for $2,497 and is often on sale. Panasonic does have an answer to that: the $2,499 S1 IIE, which has the body and features of the S1 II but lacks the stacked sensor and high-speed photo bursts. Panasonic also introduced an interesting lens, the Lumix S 24-60mm f/2.8. It offers the features and most of the range of the $2,000 Lumix S 24-70mm f/2.8 but in a smaller, lighter package and at a lower $1,200 price tag. I've had the S1 II for a short time now and have been impressed so far with its speed and capabilities, so stay tuned for a full review with final firmware.
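For the curious, the open-gate numbers quoted above check out. Here is a minimal sanity check in plain Python, using only figures from the article: the 5,952 x 3,968 open-gate frame works out to the sensor's full 3:2 area at roughly 24 megapixels.

```python
# Quick sanity check using only the numbers quoted in the article above.
# The 3:2 "open-gate" mode records the sensor's full width and height.
width, height = 5952, 3968

aspect = width / height                  # 1.5, i.e. a 3:2 aspect ratio
megapixels = width * height / 1_000_000  # ~23.6 MP, matching the 24MP sensor

print(f"Aspect ratio: {aspect:.2f} (3:2 = {3/2:.2f})")
print(f"Open-gate resolution: {megapixels:.1f} MP")
```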
