
Latest news with #ArtificialIntelligenceLab

MIT scientists show how they're developing AI for humanoid robots

CBS News

2 days ago


We've all seen what artificial intelligence can do on our screens: generate art, carry on conversations and help with written tasks. Soon, AI will be doing more in the physical world. Gartner, a research and advisory firm, estimates that by 2030, 80% of Americans will interact daily, in some way, with autonomous, AI-powered robots.

At the Massachusetts Institute of Technology, professor Daniela Rus is working to make that possible, and safe. "I like to think about AI and robots as giving people superpowers," said Rus, who leads MIT's Computer Science and Artificial Intelligence Lab. "With AI, we get cognitive superpowers."

"So think about getting speed, knowledge, insight, creativity, foresight," she said. "On the physical side, we can use machines to extend our reach, to refine our precision, to amplify our strengths."

Sci-fi stories make robots seem capable of anything. But researchers are still figuring out the artificial brains that machines need to navigate the physical world. "It's not so hard to get the robot to do a task once," Rus said. "But to get that robot to do the task repeatedly in human-centered environments, where things change around the robot all the time, that is very hard."

Rus and her students have trained Ruby, a humanoid robot, to do basic tasks like preparing a drink in the kitchen. "We collect data from how humans do the tasks," Rus said. "We are then able to teach machines how to do those tasks in a human-like fashion."

Rus' students wear sensors that capture motion and force, which helps teach robots how tightly to grip or how fast to move. "So you can tell, like, how tense they're holding something or how stiff their arms are," said Joseph DelPreto, one of Rus' students. "And you can get a sense of the forces involved in these physical tasks that we're trying to learn." "This is where delicate versus strong gets learned," Rus said.

Robots already in use are often limited in scope. Those found in industrial settings perform the same tasks repeatedly, said Rus, who wants to expand what robots can do. One prototype in her lab features a robotic arm that could one day be used for household chores or in medical settings.

Some, however, might feel uneasy about having robots in their homes. But Rus said every machine her lab has built includes a red button that can stop it. "AI and robots are tools. They are tools created by the people for the people. And like any other tools they're not inherently good or bad," she said. "They are what we choose to do with them. And I believe we can choose to do extraordinary things."

Do We Have a Moral Obligation To AI Because of Evolution?

Newsweek

May 23, 2025


Last month in San Francisco, AI entrepreneur Dr. Ben Goertzel invited me to publicly debate him on the future of machine intelligence at his event, The Ten Reckonings of AGI. Goertzel is best known for popularizing AGI, a term for artificial general intelligence: machine intelligence equal to human-level intelligence. I've long promoted AGI in essays and interviews as a likely liberator of the human race from its problems. Now that generative AI, like ChatGPT, is here and starting to upend society and take jobs, I'm no longer sure I was right. AI has evolved too quickly and too unpredictably for me to keep supporting it so optimistically.

In the debate, I asked Goertzel a question all AI enthusiasts should answer: Do you think humans have a moral obligation to try to bring AI superintelligence into the world because of evolution?

[Image: ROBOY, a humanoid robot developed by the University of Zurich's Artificial Intelligence Lab, shakes hands with a human counterpart on June 21, 2013. EThamPhoto/Getty Images]

Goertzel winced, because the question is challenging. He believes AI is a creation and extension of ourselves, and therefore an extension of our own evolution, as well as a part of evolution itself. I can admit the issue is complex, having recently finished my graduate degree in ethics at the University of Oxford, where philosophers like Nick Bostrom were my professors.

On one hand, billionaire visionaries like Sam Altman and Elon Musk want to see how far they can evolve machine intelligence. Both have hinted at possibly creating god-like intelligences. After all, why stop at AGI when you might be able to create a superintelligence that can help solve all the problems in the world?

On the other hand, what if the newly created superintelligence doesn't like humans, perhaps because we've ecologically damaged the planet? Or because humans might one day attempt physics experiments that could harm Earth and the universe? In that case, it's plausible a superintelligent AI would try to stop us or even pursue human extinction.

During our debate, I told Goertzel my first priority was to protect humans and ensure their survival and well-being. Only after that can we ask whether people have a moral obligation to create AI as the next leading force in evolution on our planet.

I worry that, like the Greek mythological figure Icarus, who flew too close to the sun, we will let our self-righteousness blind us to why we wanted to create AI in the first place. Humanity's goal with AI was to build a tool to help us prosper, not a tool that would become more powerful than us. Yet some experts now expect AI to surpass human intelligence within 5-10 years. Goertzel thinks it could happen in the next 24-36 months, he told Newsweek.

People often liken the creation of AGI to the creation of nuclear weapons: humanity will find a way to keep it from directly harming the world, as has been the case with nukes since 1945. But that analogy is misguided. AGI is very different from nuclear weaponry. First, it's impossible to say whether we will be able to control any AI that surpasses our own intellect; some experts think that's unlikely. Second, inviting intelligences smarter than us into our world is like inviting aliens smarter than us to Earth. It's unlikely we'd do that under almost any circumstances, because remaining the dominant species in a predatory world is a priority. After all, humans and our ancestors spent millions of years escaping the clutches of being a tasty, regular part of the food chain.

Nobody knows whether superintelligent AI will ultimately be kind and beneficial to humans. But many people, myself included, increasingly don't want to find out, which puts us at odds with the AI inventors who do. This lack of caution among the CEOs and AI engineers building out AGI is frightening, made worse by the fact that some people believe we have a moral obligation to evolution to create this superintelligence. Some take that even further, arguing that if we don't purposefully create this AI, then when others eventually do, it will punish those who didn't help bring it into existence.

As a transhumanist and longevity advocate, I have made overcoming biological death with science a primary goal of my life. While we're still likely a few decades away from that, creating an AI superintelligence before 2030 is quite plausible. So even if humans could overcome biological death, it won't matter if we can't overcome a harmful superintelligence.

I understand the allure of using technology to build something better and smarter than us. But in doing so, we must be absolutely sure we are not helping to bring harm or doom upon humanity. I feel strongly that stopping the march toward inventing a superintelligence must become the most important priority of the human race and of its governments around the world.

Zoltan Istvan writes and speaks on transhumanism, artificial intelligence, and the future. He is running for California governor as a Democrat in the 2026 elections. The views expressed in this article are the writer's own.
