
MIT Teaches Soft Robots Body Awareness Through AI And Vision
Researchers from the Massachusetts Institute of Technology's (MIT) CSAIL lab have developed a new system that teaches robots to understand their bodies using only vision. With consumer-grade cameras, a robot watched itself move and then built an internal model of its geometry and controllability.
According to the researchers, this could dramatically expand what's possible in soft and bio-inspired robotics, enabling affordable, sensor-free machines that adapt to their environments in real time.
The team at MIT said the system is a major step toward more adaptable, accessible robots that can operate in the wild with no GPS, simulations or sensors. The research was published in June in Nature.
Daniela Rus, MIT CSAIL Director, said that with Neural Jacobian Fields, CSAIL's soft robotic hands were able to learn to grasp objects entirely through visual observation, with no sensors, no prior model and no manual programming.
"By watching its own movements through a camera and performing random actions, the robot built an internal model of how its body responds to motor commands. Neural Jacobian Fields mapped these visual inputs to a dense visuomotor Jacobian field, enabling the robot to control its motion in real time based solely on what it sees," added Rus.
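Once such a Jacobian field is known, closed-loop control reduces to a small linear solve each frame. The following is a minimal, hypothetical sketch of that idea in Python (not the paper's implementation): given the Jacobian at a tracked 3D point, which maps a small change in motor commands to that point's motion, we solve a least-squares problem for the command increment that moves the point toward a target.

```python
import numpy as np

def control_step(J, current_pos, target_pos, gain=0.5):
    """Pick the motor-command increment that moves a tracked point toward a target.

    J           : (3, n_actuators) Jacobian at the tracked point
    current_pos : (3,) observed 3D position of the point
    target_pos  : (3,) desired 3D position
    gain        : fraction of the error to correct this frame
    Returns a (n_actuators,) command increment via a least-squares solve.
    """
    error = target_pos - current_pos
    # lstsq handles non-square J (more or fewer actuators than 3 DOF)
    du, *_ = np.linalg.lstsq(J, gain * error, rcond=None)
    return du
```

Repeating this step as the camera re-observes the point is the essence of visual servoing: the Jacobian need only be approximately right, because feedback corrects residual error.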
Rus adds that the reframing of control has major implications.
"Traditional methods require detailed models or embedded sensors, but Neural Jacobian Fields lifts those constraints, enabling control of unconventional, deformable, or sensor-less robots in real time, using only a single monocular camera."

Vincent Sitzmann, Assistant Professor in MIT's Department of Electrical Engineering and Computer Science and CSAIL Principal Investigator, said the researchers relied on techniques from computer vision and machine learning. The neural network observes a single image and learns to reconstruct a 3D model of the robot. This relies on a technique called differentiable rendering, which allows machine learning algorithms to reconstruct 3D scenes from only 2D images.
"We use motion tracking algorithms, point tracking and optical flow, to track the motion of the robot during training," said Sitzmann. "By relating the motion of the robot to the commands that we instructed it with, we reconstruct our proposed Neural Jacobian Field, which endows the 3D model of the robot with an understanding of how each 3D point would move under a particular robot action."
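The training signal Sitzmann describes can be sketched as a simple regression loss; this is a hypothetical illustration, not the paper's actual objective. For each tracked point, the learned field predicts a local Jacobian; the loss compares the motion that Jacobian predicts for the commanded action against the displacement the tracker actually observed.

```python
import numpy as np

def jacobian_field_loss(predicted_jacobians, commands, observed_motion):
    """Mean squared error between predicted and tracked point motion.

    predicted_jacobians : (n_points, 3, n_actuators) Jacobian per tracked point
    commands            : (n_actuators,) action sent to the robot
    observed_motion     : (n_points, 3) displacement from point tracking / flow
    """
    # Batched matrix-vector product: each point's Jacobian times the command
    predicted_motion = predicted_jacobians @ commands  # (n_points, 3)
    residual = predicted_motion - observed_motion
    return float(np.mean(residual ** 2))
```

Minimizing a loss of this shape over many random actions is what forces the field to discover which commands move which parts of the body.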
Sitzmann says this represents a shift towards robots possessing a form of bodily self-awareness, and away from pre-programmed 3D models and precision-engineered hardware. "This moves us towards more generalist sensors, such as vision, combined with artificial intelligence that allows the robot to learn a model of itself instead of a human expert," said Sitzmann. "This also signals a new class of adaptable, machine-learning driven robots that can perceive and understand themselves."
The researchers said that three different types of robots acquired awareness of their bodies and of the actions they could take as a result of that understanding:
A 3D-printed DIY toy robot arm with loose joints and no sensors learned to draw letters in the air with centimeter-level precision. It discovered which visual region corresponds to each actuation channel, mapping 'which joint moves when I command actuator X' just from seeing motion.
A soft pneumatic hand learned which air channel controls each finger, not by being told, but just by watching itself wiggle. The system inferred depth and geometry from color video alone, reconstructing the hand's 3D shape before and after each action.
A soft, wrist-like robot platform, physically disturbed with added weight, learned to balance and follow complex trajectories. The system also quantified motion sensitivity, for example measuring how a command that slightly changes an actuator produces millimeter-level translations in the gripper.
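The sensitivity probing described above can be illustrated with a finite-difference sketch: perturb one actuator at a time and track how a point of interest (say, the gripper tip) moves, yielding one column of an empirical Jacobian. This is a hypothetical illustration; `move_and_track` is a stand-in name for commanding the robot and reading back a tracked 3D position.

```python
import numpy as np

def estimate_jacobian(move_and_track, n_actuators, delta=1e-2):
    """Finite-difference Jacobian of a tracked 3D position w.r.t. commands.

    move_and_track : callable mapping a (n_actuators,) command to a (3,)
                     tracked position (a stand-in for robot + camera)
    """
    base = move_and_track(np.zeros(n_actuators))
    J = np.zeros((3, n_actuators))
    for i in range(n_actuators):
        u = np.zeros(n_actuators)
        u[i] = delta  # nudge a single actuator
        J[:, i] = (move_and_track(u) - base) / delta
    return J
```

A learned Jacobian field amortizes exactly this kind of probing: instead of perturbing each actuator at test time, the network predicts the sensitivities directly from an image.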
Changing soft robotics
The CSAIL researchers said that soft robots are hard to model because they deform in complex ways. One researcher said in an email interview that the method used in the research doesn't require any manual modeling: the robot watches itself move and figures out how its body behaves, much as a human learns to move an arm by watching themselves in a mirror.
Sitzmann says conventional robots are rigid: discrete joints connected by rigid links, built to tight manufacturing tolerances. "Compare that to your own body, which is soft: first, of course, your skin and muscles are not perfectly solid but give in when you grasp something."
"However, your joints also aren't perfectly rigid like those of a robot, they can similarly bend and give in, and while you can sense the approximate position of your joints, your highest-precision sensors are vision and touch, which is how you solve most manipulation tasks," said Sitzmann. "Soft robots are inspired by these properties of living creatures to be similarly compliant, and must therefore necessarily also rely on different sensors than their rigid cousins."
Sitzmann says that this kind of understanding could revolutionize industries like soft robotics, low‑cost manufacturing, home automation and agricultural robotics.
"Any sector that can profit from automation but does not require sub-millimeter accuracy can benefit from vision-based calibration and control, dramatically lowering cost and complexity," said Sitzmann. "In the future, with the inclusion of tactile sensing (touch), this paradigm may even extend to applications that require high accuracy."
A new approach to soft robotics
Researchers say their approach removes the need for experts to build an accurate model of the robot, a process that can take months. It also eliminates reliance on expensive sensor systems or manual calibration. The simplified process entails recording the robot moving randomly; the model then learns everything it needs to know from that video.
"Instead of painstakingly measuring every joint parameter or embedding sensors in every motor, our system heavily relies on a camera to control the robot," said Sitzmann. "In the future, for applications where sub-millimeter accuracy is not critical, we will see that conventional robots with all their embedded sensors will increasingly be replaced by mass-producible, affordable robots that rely on sensors more similar to our own: vision and touch."
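The simplified data-collection step described above can be sketched in a few lines. This is a hypothetical illustration, where `send_command` and `capture_frame` are stand-in names for whatever robot and camera interfaces are actually used: command random actions, record what the camera sees, and keep the (action, frame) pairs as training data.

```python
import numpy as np

def collect_training_data(send_command, capture_frame, n_actuators,
                          n_steps, rng=None):
    """Record (action, frame) pairs while the robot explores randomly.

    send_command  : callable taking a (n_actuators,) action (robot stand-in)
    capture_frame : zero-argument callable returning a frame (camera stand-in)
    """
    rng = rng or np.random.default_rng(0)
    dataset = []
    for _ in range(n_steps):
        action = rng.uniform(-1.0, 1.0, size=n_actuators)  # random exploration
        send_command(action)
        dataset.append((action, capture_frame()))
    return dataset
```

Random exploration suffices here because the model only needs to see how the body responds to commands, not how to achieve any particular task.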