Latest news with #SIGGRAPH

Yahoo
2 days ago
- Business
- Yahoo
Deemos Wins SIGGRAPH 2025 Best Paper Award, Debuts "Rodin Gen-2" Text-to-3D Foundation Model
Los Angeles, Aug. 15, 2025 (GLOBE NEWSWIRE) -- Deemos, a pioneering research company at the forefront of generative AI for 3D content, is celebrating a triumphant week at SIGGRAPH 2025, marked by winning the conference's prestigious Best Paper Award and successfully debuting its powerful new foundation model, Hyper3D Rodin Gen-2. The victory marks the company's first win after three nominations at the premier computer graphics conference.

The pinnacle of Deemos's presence at the event was the recognition of its technical paper, "CAST: Component-Aligned 3D Scene Reconstruction from an RGB Image," with the Best Paper Award. The honor validated the paper's groundbreaking approach: analyzing a single 2D picture and intelligently reconstructing it into a complex, component-level 3D scene. The work represents a significant leap beyond simple object creation, demonstrating the advanced understanding of object composition that is crucial for building detailed virtual worlds.

Building on this research, Deemos gave attendees a hands-on look at the future of digital content creation at its booth. The team conducted live, interactive demonstrations of Hyper3D Rodin Gen-2, a versatile foundation model that serves as a new engine for 3D asset generation. Attendees witnessed firsthand how the technology interprets simple text prompts or 2D images to produce intricate, high-fidelity 3D models. This process, which the company calls the "Vibe Modeling" era, moves beyond the technical complexities of traditional 3D software, allowing creativity to flow directly from idea to asset. The model's ability to generate individual, editable parts of a larger object was a key point of interest for professionals seeking to integrate the technology into their production pipelines.

"Winning the Best Paper Award at SIGGRAPH is a tremendous honor and a powerful validation of our team's persistent, research-driven approach to solving fundamental challenges in 3D AI," said Qixuan Zhang, CTO of Deemos. "It was incredibly rewarding to share this moment with the community. With Hyper3D Rodin Gen-2, we are turning our most advanced research into a tangible, powerful tool. The enthusiastic reception at SIGGRAPH confirms our belief that this technology will fundamentally change the workflow for 3D artists, game developers, and virtual world builders, making it faster, smarter, and more intuitive than ever before."

The company's depth of expertise was further showcased through a series of highly attended technical paper presentations. In addition to the award-winning "CAST" paper, the Deemos team presented "BANG: Dividing 3D Assets via Generative Exploded Dynamics," which details a novel method for deconstructing 3D models, and collaborative research with Tsinghua University, "Facial Appearance Capture at Home with Patch-Level Reflectance Prior," highlighting innovations in realistic digital-human rendering.

The successful debut and industry recognition at SIGGRAPH 2025 mark a pivotal moment for Deemos, solidifying its position as a leader in generative 3D. The company has effectively bridged the gap between state-of-the-art academic research and practical, commercially ready solutions that will define the next generation of digital experiences.

About Deemos:
Deemos is an AI research and product company dedicated to building the next generation of 3D content creation tools.
By combining cutting-edge academic research with intuitive product design, Deemos is accelerating the world's transition to a 3D future. The company is committed to democratizing 3D creation, making it possible for anyone to build, share, and experience immersive digital worlds.

Media Contact:
Qixuan Zhang, CTO of Deemos
Deemos Tech
Phone: +1 818-633-3910
Email: hello@
Bundy Drive #1054, West Los Angeles, CA 90025

###
Yahoo
5 days ago
- Business
- Yahoo
Autodesk launches freemium access to Autodesk Flow Studio, with new pricing tiers making world-class VFX AI more affordable to all creators
For the first time, Autodesk is offering free access to professional-grade AI-powered VFX and animation tools — removing cost barriers for indie filmmakers, digital content creators, marketers, and first-time users around the world.

SAN FRANCISCO, Aug. 12, 2025 /PRNewswire/ -- Today at SIGGRAPH 2025, Autodesk, Inc. (NASDAQ: ADSK) launched new, more affordable pricing for Autodesk Flow Studio (formerly Wonder Studio), including its first free tier and a 50% price drop for the Lite plan – from $20 to $10 – a significant move to make the toolset more accessible. By removing up-front costs so that artists can use premium artificial intelligence (AI) powered 3D tools to create the kind of animation and VFX once reserved for Hollywood, Autodesk is signaling a new era of AI within the future of media and entertainment – one that puts creators at the forefront. The launch marks a major milestone just one year after Autodesk's acquisition of Wonder Dynamics, the company that created the technology.

Autodesk Flow Studio uses AI to automate complex VFX tasks like motion capture, camera tracking, and character animation, so creators can focus on storytelling, not technical setup. With just a video clip and a few clicks, users can insert CG characters into live-action scenes and render them, making 3D animation and VFX accessible to all. Users can also export the AI-generated assets (e.g., clean plates, 3D scenes, alpha masks, camera tracks) to tools like Maya, Blender, or Unreal Engine for editing and fine-tuning. This allows creators to speed up complex animation and VFX workflows, freeing up time for more creative storytelling. By making these capabilities freely available, Autodesk is lowering the barrier to entry for high-quality content creation, enabling more artists, marketers, and digital content creators to experiment, learn, and bring their ideas to life on their own terms.

"Making Autodesk Flow Studio more accessible to everyone reflects our belief in a creator-first future – where AI empowers more artistic expression, not replaces it," said Diana Colella, executive vice president of Media & Entertainment at Autodesk. "By lowering the cost barrier, more storytellers are now able to explore the ways new AI-powered tools can help them bring their ideas to life, regardless of where they are along their creative journey."

This expansion introduces a tiered model designed to grow with creators – from first-time users to professional studios. New Free and Standard tiers join the existing Lite, Pro, and Enterprise offerings, giving users more flexibility to choose the plan that fits their creative needs and ambitions. The Free tier includes core tools like AI Mocap and Live Action, while advanced capabilities – such as Wonder Animation and Wonder Tools – remain exclusive to paid tiers. For larger teams and studios, the Enterprise tier offers scalable access to AI-powered workflows with enhanced support and data controls – helping them adopt new technology while preserving the creative processes and roles that define professional production.

"Running AI models at scale has traditionally been very challenging for tech startups," said Nikola Todorovic, co-founder of Wonder Dynamics, an Autodesk company. "Our original mission was always to make our technology free and affordable to as many creatives as possible, but we had limited resources as a startup.
Now, as part of Autodesk, we're able to offer Flow Studio as a freemium product, powered by robust infrastructure and cloud partners – so more artists can tell more stories, no studio lot or multimillion-dollar budget required."

The launch of the free tier and flexible pricing model to a global audience also formally completes the integration of Autodesk Flow Studio into the Autodesk ecosystem, ensuring alignment with Autodesk's standard offerings, policies, and processes and granting users a more unified, consistent, and scalable experience. In addition, Autodesk has included the Pro tier of Flow Studio in its Media & Entertainment Collection at no additional cost – giving existing customers a low-risk way to explore AI-powered workflows. This approach reflects Autodesk's commitment to guiding its customers through the evolving landscape of AI in media and entertainment with tools that are accessible, trusted, and built to scale with their creative ambitions.

For more information, see the blog post "Flow Studio: which tiers are right for you?" and join the social media conversation on YouTube and LinkedIn.

About Autodesk
The world's designers, engineers, builders, and creators trust Autodesk to help them design and make anything. From the buildings we live and work in, to the cars we drive and the bridges we drive over. From the products we use and rely on, to the movies and games that inspire us. Autodesk's Design and Make Platform unlocks the power of data to accelerate insights and automate processes, empowering our customers with the technology to create the world around us and deliver better outcomes for their business and the planet. For more information, visit or follow @autodesk. #MakeAnything

SOURCE: Autodesk, Inc.


Mint
5 days ago
- Business
- Mint
Nvidia just taught robots to think, roam, and simulate reality: Here's how
Ever wanted to see what an AI-powered robot 'thinks'? Nvidia just made that happen—pretty much. At this year's SIGGRAPH, it took the wraps off a suite of Cosmos world models and infrastructure designed specifically to bring physical AI to life. The main attraction is Cosmos Reason, a 7-billion-parameter reasoning vision-language model (VLM) that adds real-world physics and common sense to robotic decision-making. Think of it as giving robots a low-key IQ test: analyse an environment, break down a task, and plan the next move, all without getting lost in translation.

Next in line is Cosmos Transfer-2, a synthetic-data powerhouse for generating endless 3D scenes in varying lighting, textures, and weather. Perfect for training self-driving cars, drones, or warehouse bots without touching the real world. A faster distilled version lets developers scale at speed. Alongside the models, Nvidia rolled out new neural reconstruction libraries for ultra-realistic scene building from sensor data, and has plugged these into popular simulators like CARLA for immediate testing. On the hardware front, the RTX Pro Blackwell Server delivers raw power for training and simulation, while DGX Cloud opens the door to scalable AI deployment, no on-prem hardware required.

This isn't just about fancier robots; it's about collapsing the gap between training and deployment. By giving AI systems both reasoning skills and a realistic virtual sandbox, Nvidia's ecosystem could accelerate everything from autonomous deliveries to industrial inspections. Expect faster prototyping, fewer real-world risks, and AI that can adapt to unpredictable environments before ever leaving the lab. Nvidia's Cosmos ecosystem brings reasoning, world-building, and deployment into one loop. If your AI project needs a brain, a playground, and a launchpad, this is it.
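For a concrete sense of what "planning with a VLM" looks like in practice, here is a minimal sketch of prompting a reasoning vision-language model with a robot's camera frame via the Hugging Face transformers API. The checkpoint name, chat format, and prompt are assumptions made for illustration; Nvidia's actual packaging and APIs for Cosmos Reason may differ.

```python
# Illustrative sketch only: querying a reasoning VLM for a robot task plan.
# The model ID below is hypothetical, not a documented Cosmos Reason endpoint.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "nvidia/cosmos-reason-7b"  # assumption, for illustration

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

frame = Image.open("warehouse_cam.jpg")  # one frame from the robot's camera
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": (
            "List the steps an arm should take to move the red crate onto "
            "the conveyor, considering reachability and obstacles."
        )},
    ],
}]

# Build the model-specific prompt, bundle it with the image, and generate.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=frame, text=prompt, return_tensors="pt").to(model.device)
plan = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(plan[0], skip_special_tokens=True))
```

The output of a model used this way is a step-by-step plan in text, which downstream robot-control code would then parse and execute.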
Yahoo
6 days ago
- Entertainment
- Yahoo
Two Decades of Progress in a Frame: SIGGRAPH's Advances in Real-Time Rendering in Games Turns 20
Now Entering Its Third Decade, SIGGRAPH's Advances in Real-Time Rendering in Games Course Has Become the Premier Forum for Unveiling Cutting-edge Rendering Techniques That Power the Real-time Experiences of Today and Tomorrow.

VANCOUVER, BC, Aug. 11, 2025 /PRNewswire/ -- SIGGRAPH 2025, taking place 10-14 August at the Vancouver Convention Centre, marks a milestone in the world of computer graphics with the 20th anniversary of the Advances in Real-Time Rendering in Games course, one of the most enduring and influential innovation forums in computer graphics. Since its debut in 2006, the course has served as a launchpad for groundbreaking rendering techniques that have fundamentally reshaped how artists, rendering engineers, and game developers simulate lighting, geometry, and motion in real-time applications, especially video games.

What began as a grassroots effort to bring together rendering engineers, game developers, and technical artists has evolved into a global forum that bridges the latest research insights with industry innovation and execution. The program has consistently spotlighted state-of-the-art techniques shaping the evolution of video games, virtual production, architectural visualization, and interactive experiences at scale.

"Advances in Real-Time Rendering in Games endures because it's more than a technical session, it's a living time capsule of the state of real-time rendering," said Natalya Tatarchuk, CTO of Activision and the longtime program organizer. "Every year, we bring together the visionaries, the tinkerers, the optimists who believe that you can push one more millisecond, one more polygon, one more texture. That culture of shared exploration is what makes it meaningful. It's a celebration of curiosity made concrete."

A Legacy of Innovation
Initially created to fill a gap in knowledge-sharing for game developers and real-time rendering practitioners, Advances in Real-Time Rendering in Games now influences the pipelines of top studios and game engines alike. The program has introduced many of the field's most transformative techniques, from screen-space ambient occlusion and physically based lighting to GPU-driven pipelines and neural-based global illumination, often years before they became mainstream.

Key moments include the introduction of screen-space ambient occlusion by Crytek, which has become a foundational technique across real-time graphics; the debut of temporal anti-aliasing; the early exploration of compute-based rendering pipelines by Alex Evans; and groundbreaking GPU-driven pipeline talks from AMD and Ubisoft. More recently, innovations in global illumination (including using machine learning and neural compression) from Activision, along with production insights from technical artists at studios like Naughty Dog, to name just a few of the presenters, have highlighted the course's consistent focus: showcasing not just novel techniques, but production-tested solutions shaped by real-world constraints.

"Advances in Real-Time Rendering in Games doesn't just spotlight the victories of innovation, the program dives deeper," Tatarchuk explained. "We explore the limitations, we surface the intuition behind the techniques, and share the tradeoffs behind the choices made, talking openly about what didn't work and why. That culture of shared exploration is what makes Advances in Real-Time Rendering in Games meaningful, it's a place where we all learn together, not just from success, but from the honest, hard-earned lessons, too."
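Of the techniques the course helped popularize, screen-space ambient occlusion shows how compact the core idea is. The toy Python/NumPy sketch below builds the hemisphere sample kernel used by typical SSAO implementations; it is a generic illustration of the textbook approach, not Crytek's original code, and the sample count and scaling curve are arbitrary choices.

```python
# A toy sketch of the sample-kernel setup used by typical screen-space
# ambient occlusion (SSAO) implementations -- generic, not Crytek's code.
# Each sample is a random direction in the z-positive hemisphere, scaled
# so samples cluster near the shaded point, where occlusion matters most.
import numpy as np

rng = np.random.default_rng(0)

def ssao_kernel(n_samples: int = 64) -> np.ndarray:
    # Random vectors with z >= 0: a tangent-space hemisphere about the normal.
    v = rng.uniform([-1.0, -1.0, 0.0], [1.0, 1.0, 1.0], size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # An accelerating ramp pulls most samples close to the origin.
    t = np.arange(n_samples) / n_samples
    scale = 0.1 + 0.9 * t * t
    return v * scale[:, None]

kernel = ssao_kernel()
# At shading time, each kernel vector is rotated into the surface's tangent
# frame, offset from the view-space position, projected, and depth-compared
# to estimate how much of the hemisphere is blocked.
print(kernel.shape)
```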
2025 Lineup
This year's two-part course, held on Tuesday, 12 August, in the morning and afternoon, will feature a special 20-year retrospective by Tatarchuk, along with the following cutting-edge presentations:

- Activision will present a novel solution for order-independent transparency developed for "Call of Duty®," addressing a longstanding challenge in real-time graphics with a scalable, performant algorithm tailored for unpredictable player-driven scenes.
- Ubisoft will share technical insights from "Assassin's Creed Shadows," where the team successfully merged ray tracing and global illumination techniques within a complex open-world pipeline, showcasing the intersection of cinematic quality and expansive interactivity.
- MachineGames will reveal how it achieved film-quality rendering at 60+ frames per second across a wide range of hardware in "Indiana Jones and the Great Circle," using strand-based hair as the sole solution for human hair and leveraging GPU optimizations.
- id Software will showcase how idTech 8's real-time global illumination replaced pre-baked lighting to accelerate workflows, boost artistic flexibility, and power "DOOM: The Dark Age" at 60 Hz or higher on all platforms.
- HypeHype will present a novel stochastic tile-based lighting algorithm that enables fully dynamic, fixed-cost local lighting with shadows, even on low-end mobile GPUs, empowering user-generated content across devices with high-performance, scalable lighting and minimal technical overhead for creators.
- NVIDIA will unveil a hybrid real-time subsurface scattering technique that combines volumetric path tracing with a new physically based diffusion model, enhanced with ReSTIR sampling, to deliver near path-traced quality for lifelike skin and translucent materials at interactive frame rates.
- Epic Games will showcase MegaLights, a new stochastic direct lighting system in Unreal Engine 5 that enables artists to place vastly more dynamic, shadowed area lights while maintaining performance and realism on ray-tracing-enabled platforms and current-gen consoles through a fully scalable, hardware-conscious pipeline.
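A thread running through several of the talks above (HypeHype's stochastic tile-based lighting, NVIDIA's ReSTIR-enhanced subsurface scattering, Epic's MegaLights) is stochastic light selection: rather than shading every light, each pixel samples a few lights with probability proportional to their estimated contribution. The sketch below shows weighted reservoir sampling, the streaming primitive underlying ReSTIR-family methods, in plain Python; it is a conceptual illustration with a made-up contribution heuristic, not drawn from any of the presenters' implementations.

```python
# Toy illustration of weighted reservoir sampling, the primitive behind
# ReSTIR-style stochastic lighting: pick ONE light per pixel in a single
# pass over many lights, with probability proportional to an estimated
# contribution. Generic sketch, not any vendor's implementation.
import random

def pick_light(lights, contribution, rng=random.random):
    """Stream over `lights`, keeping one with probability proportional to weight."""
    total, chosen = 0.0, None
    for light in lights:
        w = contribution(light)      # cheap, unshadowed estimate
        total += w
        if total > 0 and rng() < w / total:
            chosen = light           # replace the reservoir entry
    return chosen, total             # total keeps the final estimator unbiased

# Example: intensity over squared distance as the cheap contribution proxy.
lights = [{"pos": (float(x), 2.0), "intensity": 5.0} for x in range(100)]
shade_point = (50.0, 0.0)

def contrib(light):
    dx = light["pos"][0] - shade_point[0]
    dy = light["pos"][1] - shade_point[1]
    return light["intensity"] / (dx * dx + dy * dy + 1e-6)

light, weight_sum = pick_light(lights, contrib)
print("sampled light near x =", light["pos"][0])
```

The appeal for real-time use is that the loop is one pass, constant memory, and trivially parallel per pixel, which is what makes fixed-cost lighting with thousands of lights feasible even on modest GPUs.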
Looking Forward: The Next Era of Real-Time
As the Advances in Real-Time Rendering in Games course enters its third decade, the field faces both complexity and opportunity. "We are at a significant inflection point. We haven't yet exhausted the potential of traditional GPU pipelines, and we're still learning how to better parallelize and scale real-time systems for truly dynamic content," said Tatarchuk. "At the same time, we're facing increasing platform fragmentation, which means we need rendering solutions that are deeply scalable, capable of delivering consistent fidelity across a vast range of hardware generations. And then there's generative AI, which is the wild card."

With advancements in generative AI, real-time pipelines could be poised for disruption. Meanwhile, real-time methods originally forged for gaming are increasingly influencing film, virtual production, and simulation, a cross-pollination first envisioned in the program's earliest days.

"The one thing that hasn't changed, and for Advances in Real-Time Rendering in Games it is non-negotiable, is that every frame is still built under a promise that it will respond to the player; that their interaction matters, and it isn't predetermined," Tatarchuk explained. "That is the core of what we mean by real-time. And that is why the field, in my opinion, remains so creatively and technically exciting and challenging."

To dive deeper into the evolution and future of real-time rendering in games, listen to "SIGGRAPH Spotlight: Episode 76 – Creating a Lasting Impact With Real-Time Rendering," featuring program organizer Natalya Tatarchuk, and explore the full Advances in Real-Time Rendering in Games schedule.

About ACM, ACM SIGGRAPH, and SIGGRAPH 2025
The Association for Computing Machinery (ACM) is the world's largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field's challenges. SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. The SIGGRAPH conference is the world's leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2025, the 52nd annual conference hosted by ACM SIGGRAPH, will take place live 10–14 August at the Vancouver Convention Centre, along with a Virtual Access option.

SOURCE: SIGGRAPH 2025
Yahoo
6 days ago
- Business
- Yahoo
Nvidia unveils new Cosmos world models, infra for robotics and physical uses
Nvidia on Monday unveiled a set of new AI world models, libraries, and other infrastructure for robotics developers, the most notable of which is Cosmos Reason, a 7-billion-parameter 'reasoning' vision-language model for physical AI applications and robots. Also joining the existing batch of Cosmos world models are Cosmos Transfer-2, which can accelerate synthetic data generation from 3D simulation scenes or spatial control inputs, and a distilled version of Cosmos Transfer-2 that is optimized for speed.

During its announcement at the SIGGRAPH conference on Monday, Nvidia noted that these models are meant to be used to create synthetic text, image, and video datasets for training robots and AI agents. Cosmos Reason, per Nvidia, allows robots and AI agents to 'reason' thanks to its memory and physics understanding, which lets it 'serve as a planning model to reason what steps an embodied agent might take next.' The company says it can be used for data curation, robot planning, and video analytics.

The company also unveiled new neural reconstruction libraries, which include one for a rendering technique that lets developers simulate the real world in 3D using sensor data. This rendering capability is also being integrated into the open-source simulator CARLA, a popular developer platform. There's even an update to the Omniverse software development kit.

There are new servers for robotics workflows, too. The Nvidia RTX Pro Blackwell Server offers a single architecture for robotic development workloads, while Nvidia DGX Cloud is a cloud-based management platform. These announcements come as the semiconductor giant pushes further into robotics, looking toward the next big use case for its AI GPUs beyond AI data centers.
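The article doesn't detail how Nvidia's neural reconstruction libraries hook into CARLA, but the sensor-data capture such a reconstruction pipeline consumes can be sketched with CARLA's stock Python API. Below is a minimal example, assuming a CARLA server is already running locally on the default port; the file paths and camera placement are arbitrary.

```python
# Minimal CARLA data-capture sketch (stock CARLA Python API). This shows
# only the sensor-logging side a reconstruction pipeline would consume;
# the NVIDIA integration itself is not documented in the article.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn a vehicle and let CARLA's autopilot drive it around the map.
vehicle_bp = bp_lib.filter("vehicle.*")[0]
spawn = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn)
vehicle.set_autopilot(True)

# Attach an RGB camera and stream frames to disk for later reconstruction.
cam_bp = bp_lib.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "1280")
cam_bp.set_attribute("image_size_y", "720")
cam_tf = carla.Transform(carla.Location(x=1.5, z=2.4))  # hood-mounted
camera = world.spawn_actor(cam_bp, cam_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))
```

Frame sequences logged this way (often alongside depth and lidar sensors, which CARLA also provides) are the raw material that sensor-based 3D reconstruction techniques turn into simulatable scenes.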