
AI that thinks for itself: Can we trust it?

Boston Globe

21-05-2025


But what happens when AIs try to figure more things out on their own? Manus, from a Chinese company, offers an entry-level glimpse at one of the hottest topics in software: AI-powered agents that think for themselves. Such agents could make corporate and government computer systems vastly more efficient, or, if we're not careful, a lot more dangerous. Anybody can visit the Manus site and give it a try. Like the original ChatGPT, Manus is buggy and crude, but just cool enough to be exciting.

Like many other things that beep and buzz, the concept of software agents came from the Massachusetts Institute of Technology. The idea, which dates from the 1950s, is that a piece of software can be turned loose to do things without waiting for explicit human commands. Ever get a text message from a pharmacy? 'You're due for an Ozempic refill. Should we send it over? Press 1 for 'yes' or 2 for 'no.''

The drugstore is using a simple agent that checks the pharmacy database to see which customers are due for a refill and sends them reminders. But suppose the agent is an AI? Then it might wonder if the prescription is on the verge of expiring. It could reach out to the doctor's computer to find out and, if necessary, request a renewal. It might check a drug database to see if there's a cheaper generic and recommend that you switch. It might notice that you haven't had a checkup in a year and ask the doctor's computer to make an appointment. The AI wouldn't be programmed in advance to do these things. Instead, like a human, it would come up with these ideas on its own.

Manus can't do stuff like this. But it's got just enough horsepower to hint at greater things to come. For instance, I asked Manus to draw up an interactive map of Boston showing the locations of public parks. Even more impressive was what happened when I asked Manus to compare housing costs, crime rates, and public school quality in West Roxbury and Somerville.
It spewed forth a detailed comparison.

The current version is dead slow, often taking about an hour to complete a request. But it's pretty awe-inspiring when it works. In the months ahead, it'll work better and better. And properly applied, AI agents could go a long way toward making our lives easier.

'If you can automate a bunch of tasks wholesale,' said Dylan Hadfield-Menell, an assistant professor of AI at MIT, 'this gets us closer to everyone having their own personal assistant.'

For example, said Hadfield-Menell, imagine planning a family reunion of 50 to 100 people. Feed the guests' home addresses and emails to an AI, along with the location and date of the reunion. Without further ado, the agent could generate a unique travel and hotel itinerary for each guest and email it to them for their approval.

But suppose an agent orders another computer to do something stupid? Imagine a utility using AI agents to keep electric power flowing smoothly. If these agents aren't smart enough to monitor the weather forecasts, they could be caught off guard by a freak winter storm and leave thousands freezing in the dark. Or imagine an AI investment agent that responds to a false story about a banking crisis by spawning a massive stock selloff that costs investors billions.

For Phaedra Boinodiris, global leader for Trustworthy AI at IBM Consulting, these risks become less hypothetical by the day. Boinodiris said that in an IBM survey of corporate chief executives, 61 percent were already installing agents to oversee some of their business processes and were looking to install still more. IBM itself makes a suite of AI agents to help with human resources, finance, and customer service tasks.

But tough questions remain: How can we be sure that the AI's decisions will always match up with what humans want it to do? Can we understand why an AI chooses a particular course of action?
Can it explain its thinking to a human? Unless you can answer these questions and quite a few more, AI agents will never be entirely trustworthy.

Boinodiris says setting up AI agents is 'like building a skyscraper in an earthquake zone: you need reinforced foundations, constant monitoring, and contingency protocols for every scenario.' Certain critical activities (prescribing a drug for a hospital patient, for instance) should never be carried out solely by an agent. There must be a human in the loop, ready to step in and avert disaster.

There probably ought to be a law about this, but the Trump administration has made clear that federal regulation of AI is not forthcoming, and the budget reconciliation bill now before Congress would ban individual states from issuing their own regulations for a period of 10 years. Besides, it's not clear what such regulations should look like. As agents get smarter and become more deeply embedded in our institutions, their potential benefits and harms might be impossible to predict. After all, they're intended to think for themselves, just like humans. And you know how we are.

Hiawatha Bray can be reached at
