Latest news with #ImaginationinAction


Forbes
20-05-2025
- Science
- Forbes
Physical And Agentic AI Is Coming
Some interesting questions are coming up in the world of artificial intelligence that have to do with the combination of physical environments and agentic AI. First of all, the term agentic AI is only a couple of years old. But it's taking hold in a big way – in enterprise, in government, and elsewhere. The key question is this, though – if AI agents can do things, how do they get the access to do those things? If the tasks are digital, the LLM has to be supported by APIs and connective tissue, like the Model Context Protocol or something similar. But what if the tasks are physical?

In a recent panel at Imagination in Action in April, my colleague Daniela Rus, director of the MIT CSAIL lab, talked to a number of experts about how this would work in both the public and private sectors. 'The bridge is when we can take AI's ability to understand text, images and other online data about the physical world to make real-world machines intelligent,' Rus said. 'And now, if you can get a machine to understand a high-level goal, break it down into sub-components, and execute some of the sub-goals by itself, you end up with agentic AI.'

So what did panelists center on? Here are a few major ideas that came out of the discussion on how AI can work in the physical world where humans live. In exploring what makes humans different from machines, there was the idea that people do things on a personal basis, which differentiates them from the herd. So the AI will have to learn not to follow a consensus-based model all of the time. That's a key difference – what you might call a 'foible' that makes humans special, though in the enterprise world it may not be a foible at all, but a value add. 'What you do not want is consensus, regression-to-the-mean information, like generally accepted ways of doing things,' said panelist Emrecan Dogan. 'This is not how, as humans, we create value. 
We create value by taking a subjective approach, taking the path that is very personal, very subjective, very idiosyncratic. We are not always right, but when we are right, this is how we create value.'

As for government, panelist Col. Tucker Hamilton talked about electronic warfare, and stressed the importance of a human in the loop. 'I think we want to embed (HITL) so that a human is still in control,' he said. 'I think we need… explainability, traceability, and that goes along with governability as well. And I think we want to be able to favor adaptability over perfection.'

'You have to reason, you have to think and understand,' added panelist Jonas Diezun. Another way to think about this is that the programs have to have just the right amount of deterministic guidance. 'They don't always repeat,' Dogan said of these tools. 'They don't behave exactly the same way the second, third, fourth time you run it. So I think the big (idea) is the right blend of determinism that you can embed along with the stochasticity. So I think the truly powerful agents will convey some expression of deterministic behavior, but then the stochastic upside of AI models.'

Some other components of this have to do, simply, with infrastructure. 'Sensors, we're gathering information off of a sensor multi-modal (program), like sensor gathering,' Hamilton said. 'How do we summarize that information? How do we make sure that one sensor is fused with another sensor? How do we have pipelines that we can get that information to, in order to have someone just assess that, like sensor information, let alone how do we adopt flight autonomy?' In other words, all of those real-world pieces have to be connected the right way for the system to work in physical space, and not just digitally.

Finally, Rus asked each panelist for their timeline for AI taking over most human tasks. The lower numbers represent when these panelists think the simple tasks can be handled by AI. 
The second number is a projection of when AI would take over the more complex tasks. The verdict? 'Quarters, not years.'

I thought all of this was very instructive in showing some of what we have to contend with as we anticipate the rest of the AI revolution. It's been a long time coming, but the exponential curve of the technology is finally here, and likely to be integrated into our worlds quickly, almost suddenly. Job displacement is an enormous concern. So is the potential for runaway systems that could do more harm than good. Let's continue to be vigilant as 2025 rolls on.
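Rus's definition of agentic AI – a machine that understands a high-level goal, breaks it into sub-components, and executes some sub-goals by itself – can be sketched as a minimal planner-executor loop. This is an illustrative sketch, not any panelist's implementation; `plan` and `execute` are hypothetical stand-ins where a real system would call an LLM and tool or robot APIs.

```python
# Minimal sketch of the agentic loop described in the panel: take a
# high-level goal, decompose it into sub-goals, execute each one.
# plan() and execute() are hypothetical stand-ins: a real system would
# call an LLM in plan() and tool/robot APIs in execute().

def plan(goal: str) -> list[str]:
    # Stand-in decomposition; an LLM would produce these sub-goals.
    return [f"{goal}: step {i}" for i in range(1, 4)]

def execute(subgoal: str) -> str:
    # Stand-in for invoking an API, a robot controller, or a tool.
    return f"done({subgoal})"

def run_agent(goal: str) -> list[str]:
    results = []
    for subgoal in plan(goal):            # break the goal into sub-components
        results.append(execute(subgoal))  # execute each sub-goal autonomously
    return results

print(run_agent("inspect warehouse"))
```

The loop is deliberately deterministic here; the "right blend of determinism and stochasticity" Dogan describes would come from the model behind `plan`.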


Forbes
15-05-2025
- Business
- Forbes
AI Crosstalk On Your Claim
Sometimes it's still difficult to envision exactly how the newest LLM technologies are going to connect to real-life implementations and use cases in a given industry. Other times, it's a lot easier.

Just this past year, we've been hearing a lot about AI agents, and, for lack of a better term, about humanizing the technology that's in play. An AI agent is specialized – it's focused on a set of tasks. It is less general than a generic neural network, and it's trained toward particular goals and objectives. We've seen this work out in handling the tough kinds of projects that used to require much more granular human attention. We've also seen how API technology and related advances can allow models like Anthropic's Claude to perform tasks on computers, and that's a game-changer for the industry, too.

So what are these models going to be doing in business? Cindi Howson has an idea. As Chief Data Strategy Officer at ThoughtSpot, she has a front-row seat to this type of innovation. Talking at an Imagination in Action event in April, she gave an example of how this would work in the insurance industry – I want to include it in monologue form, because it lays out how an implementation could work in a practical way.

'A homeowner will have questions,' she said. ''Should I submit a claim? What will happen if I do that? Is this even covered? Will my policy rates go up?' The carrier will say, 'Well, does the policy include the coverage? Should I send an adjuster out? If I send an adjuster now … how much are the shingles going to cost me, or steel or wood? And this is changing day to day.' All of this includes data questions. So if you could re-imagine – all of this is now manual (and) can take a long time. What if we could say, let's have an AI agent … looking at the latest state of those roofing structures. 
That agent then calls a data AI agent – so this could be something like ThoughtSpot – that is looking up how many homeowners have a policy with roofs that are damaged. Another agent, the claims agent, could preemptively say, 'let's pay that claim.' Imagine the customer loyalty and satisfaction if you did that preemptively, and the claims agent then pays the claim.'

It's essentially ensemble learning for AI, in the field of insurance, and Howson suggested there are many other fields where agentic collaboration could work this way. Each agent plays its particular role. You could almost sketch out an org chart the same way that you do with human staff. And then, presumably, they could sketch humans in, too. Howson mentioned human-in-the-loop in passing, and it's likely that many companies will adopt a hybrid approach. (We'll see that idea of hybrid implementation show up later here as well.) Our people at the MIT Center for Collective Intelligence are working on this kind of thing.

In general, what Howson is talking about relates to APIs and the connective tissue of technology as we meld systems together. 'AI is the only interface you need,' she said, in thinking about how things get connected now, and how they will get connected in the future. Explaining how she does research on her smartphone, and how AI connects elements of a network to, in her words, 'power the autonomous enterprise,' Howson led us to envision a world where our research and other tasks are increasingly out of our own hands.

Of course, the quality of data is paramount. 'It could be customer health, NPS scores, adoption trackers, but to do this you've got to have good data,' she said. 'So how can you prepare your data? And AI strategy must align to your business strategy, otherwise, it's just tech. You cannot do AI without a solid data foundation.' 
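The agent hand-off Howson describes – a monitoring agent spotting damaged roofs, a data agent looking up affected policies, a claims agent preemptively paying – can be sketched as a simple pipeline. This is a hypothetical illustration only: the class names, the `Policy` record, and the sample data are all invented, and a real data agent would sit in front of an analytics product rather than filter a list.

```python
# Hypothetical sketch of the multi-agent insurance workflow described
# in the talk. Each agent plays one role, like an org chart of staff.
from dataclasses import dataclass

@dataclass
class Policy:
    holder: str
    roof_damaged: bool
    covered: bool

class MonitoringAgent:
    """Watches 'the latest state of those roofing structures'."""
    def damaged_properties(self, policies):
        return [p for p in policies if p.roof_damaged]

class DataAgent:
    """Stand-in for a data/analytics tool answering 'how many policies?'."""
    def eligible_claims(self, damaged):
        return [p for p in damaged if p.covered]

class ClaimsAgent:
    """Preemptively pays the claim."""
    def pay(self, policy):
        return f"claim paid: {policy.holder}"

policies = [Policy("Ana", True, True), Policy("Ben", False, True),
            Policy("Che", True, False)]
damaged = MonitoringAgent().damaged_properties(policies)
payouts = [ClaimsAgent().pay(p) for p in DataAgent().eligible_claims(damaged)]
print(payouts)  # only the damaged-and-covered policy gets paid
```

A human-in-the-loop step, as Howson hints, would slot naturally between the data agent's output and the claims agent's payment.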
Later in her talk, Howson discussed how business leaders can bring together unstructured data, from things like live chatbots, with more structured data – for example, semi-structured PDFs sitting on old network drives. So legacy migration is going to be a major component of this. And the way that it's done is important. 'Bring people along on the journey,' she said.

There was another point in this presentation that I thought was useful for the business world. Howson pointed out how companies have a choice – to send everything to the cloud, to keep it all on premises, or to adopt a hybrid approach. Vendors, she said, will often recommend all of one or all of the other, but a hybrid approach works well for many businesses.

She ended with an appeal to the imagination. 'Think big, imagine big,' she said. 'Imagine the whole workflow: start small, but then be prepared to scale fast.'

I think it's likely that a large number of leadership teams will implement something like this in 2025. We've already seen innovations like MCP that helped usher in the era of AI agents. This gives us a little bit of an illustration of how we get there.
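On the MCP mention: the Model Context Protocol is built on JSON-RPC 2.0, and a tool invocation travels as a `tools/call` request. The sketch below shows the rough shape of such a message; the tool name `lookup_policy` and its arguments are made up for illustration and are not part of any real server.

```python
# Rough shape of an MCP-style tool-call message (JSON-RPC 2.0).
# The tool name and arguments are hypothetical examples.
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # tool-invocation method in the MCP spec
        "params": {"name": tool, "arguments": arguments},
    })

msg = make_tool_call(1, "lookup_policy", {"policy_id": "H-1234"})
print(msg)
```

This kind of uniform envelope is what lets one agent treat a data platform, a claims system, or a sensor feed as interchangeable 'connective tissue'.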


Forbes
03-05-2025
- Business
- Forbes
What Are Digital Defense AI Agents?
We're in the new world of agentic AI, which means that everyone's looking at how to use AI agents to their advantage. In a simplistic sense, that means companies are looking to use AI agents to sell, while governments are trying to use AI agents to do whatever it is that governments do. Some consumer advocates argue that the individual people so often being targeted by business and government activity need their own AI agents to defend them.

When Alex 'Sandy' Pentland took the stage at this year's Imagination in Action event, he talked about precisely this type of thing. 'They're going to try and hack me, do bad things to me,' he said of those ubiquitous agents controlled by business, government or big interest parties. 'They are going to twist my mind around politics, all of those things. And my answer to this is I need an AI agent to defend me. I need something who's on my side who can help me navigate returning things or avoiding scams, or all that whole sort of thing.'

The idea that Pentland describes is that your AI agent addresses all of that other agent activity that's aimed at you, and intervenes on your behalf. The idea of a personal 'digital defender' in the form of an AI agent is not yet widely talked about on the web. Pentland's video is up there, but you don't see much about this specific type of project in research papers, or on corporate sites, or even at Consumer Reports (more on this later).

In a way, it's like having a public defender in court. There's a legal effort against you, so you need your own advocacy to represent you on your side. Although some might call these attorneys 'public pretenders' due to underpayment, short staffing, or other problems, hopefully the AI agent is more effective in a global sense. It's also a bit like consumer reporting – Pentland mentioned how Consumer Reports has been doing this kind of work for 80 years with polls and other tools. 'This is why we have seat belts in cars,' he said. 
'At Consumer Reports, what they do is, they poll all their people, they do tests and things like that to find good products. That's what I want, is, I want somebody who's on my side that way.'

Another similar idea is the cybersecurity agents created by a company called Twine, which are intended to protect people from cyberattacks. But all that aside, Pentland's idea is still in its infancy. In fact, one of the most interesting parts of his presentation was when he talked about all of these business people making their way into one room to talk about personal AI defense agents. 'We had C-level representation, the head of AI products for every single major AI producer, show up on one week's notice,' he explained. 'We also had all the payers show up … people (who handle) credit cards, etc. We had all the systems guys show up. Now (you're in a) little room with more C-level people than you've ever seen in your entire life. Very busy people who showed up on one week's notice.'

It's largely liability, he suggested, that brought them to the table. 'If they're going to deploy these things, and they're going to be interacting with you, they had better not cheat, they'd better not be biased, or scam you,' he said. 'They have a lot of liability, legal liability, as well as reputational liability. They have to be fair in helping you do things, otherwise they're going to end up in class action courts. That's what they wanted. They wanted someone to build a standard best practice personal agent.'

He mentioned a couple of caveats: the agentic system has to undergo legal testing, and ideally it should be hosted in academia to show impartiality. While best practices are good, he said, companies and other parties really want a standard, because a standard is bulletproof.

Pentland also talked about a sort of digital populism that appeals to those who feel like there's strength in numbers. 'You're just you,' he said. 
'But if there were a million yous, or 10 million yous, all (of them) trying to get a good deal, avoid scams, fill out that legal form, you could actually have AIs that are competitive with the best results. So that solves the own-your-own-data problem (pretty well).'

In response to questions, Pentland went over some advice for those who are just starting their careers now. Part of it had to do with solving big questions around how these defense agents will work. 'How do I know what's good for me, and what I want?' he asked, raising some of the essential questions of how an AI agent can target its efforts correctly, according to the user's preferences and welfare. He also brought up questions around how to put agents together, to build toward what he called a 'network effect' that magnifies what a connected system of agents can do.

He also talked about a kind of game theory where it's easy to upset the apple cart with just a small adjustment. Essentially, Pentland argued, a bad actor can easily throw a system out of balance by being 'just a little edgy' – by making small changes that lead to a domino effect that can be detrimental. He used the example of a traffic jam, which starts off as just one car in dense traffic changing its behavior. This type of game theory, he asserted, has to be factored into how we create our digital defense agent networks.

With all of this in mind, it's probably a good idea to think about building those digital defense agents. They might not be perfect right away, but they might be the defense that we need against an emerging army of hackers utilizing some of the most potent technologies we've ever seen. The idea also feeds back into the whole debate about open source and closed source models, and when tools should be published for all the world to use. It's imperative to keep a lid on the types of bad actors that could otherwise jeopardize systems. 
In cryptocurrency, we have the notion of a 51% attack: as soon as somebody controls more than half of a network's mining or validation power, they can out-build the honest chain, rewriting recent transaction history and double-spending at will. The safeguards for AI liability may need to account for similar majority-control dynamics. Look for this type of research to continue.
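The 51% intuition can be made concrete with the attacker catch-up probability from the Bitcoin whitepaper, which this note borrows as an aside: an attacker holding a fraction q of the hash power, starting z blocks behind, eventually overtakes the honest chain with probability (q/p)^z where p = 1 − q, and with certainty once q exceeds one half.

```python
# Attacker catch-up probability from the Bitcoin whitepaper (Section 11):
# with hash-power share q (honest share p = 1 - q) and a deficit of z
# blocks, the attacker eventually catches up with probability (q/p)^z,
# and always succeeds once q >= p -- the "51% attack" threshold.

def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q
    if q >= p:       # majority of hash power: success is certain
        return 1.0
    return (q / p) ** z

print(catch_up_probability(0.30, 6))  # minority attacker: small chance
print(catch_up_probability(0.51, 6))  # majority attacker: certainty
```

The discontinuity at 50% is the point of the analogy: control tips abruptly, which is why standards bodies worry about concentration.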


Forbes
25-04-2025
- Business
- Forbes
The Existential Threat To Universities
CAMBRIDGE, MASSACHUSETTS: A view of the campus of Harvard University on July 08, 2020. Harvard and the Massachusetts Institute of Technology have sued the Trump administration for its decision to strip international college students of their visas if all of their courses are held online.

It's strange times for Ivy League schools – aside from all of the other campus and political issues that administrators have to deal with, many of those in higher education planning are looking over their shoulders at the prospect of emerging artificial intelligence.

Is AI coming for education? It's a fair question, and one that's inspiring educators and others with skin in the game to think about how this might play out. 'Universities are grappling with how to integrate AI into curricula while also addressing ethical concerns and potential academic integrity issues,' writes Matthew Lynch at The Tech Edvocate. 'Many institutions have implemented AI literacy courses to ensure students understand both the capabilities and limitations of these tools. ... Some universities have embraced AI-powered chatbots for student support services, reporting increased efficiency and student satisfaction. However, concerns persist about the impact of AI on critical thinking skills and the potential for over-reliance on technology. Faculty members are adapting their teaching methods and assessment strategies to encourage original thought in an AI-saturated world.'

That's part of the picture – but some employees of university systems are also worried about their business. In addition to privacy and ethics concerns, there is the threat of disruption, which could change the business model of the university as an institution. In a recent presentation at the Imagination in Action event on April 15, former Harvard student Will Sentance talked about this existential threat and how it might affect big schools, using Harvard as an example. 
Harvard, he quipped, is often described as 'a hedge fund with a classroom attached' – or rather, a multipurpose institution; he named things like a health network and publications as other components of the Harvard octopus. A long-standing practice among universities and other businesses, he contends, is to 'bundle' services and products to augment the appeal of a business brand. Higher education may be less different from other industries than we think.

Citing 'hundreds of years of expansion,' Sentance suggested the best-laid plans of Harvard people and those elsewhere in the Ivy League could come unraveled fairly quickly, given the pace of AI development. As a precursor, Sentance cited 20 years of 'unbundling' of newspapers and cable TV, where disruptive models emerged and took down these traditional media companies. 'Unbundling could now come for education,' he said.

As for a 'moat' in higher ed, Sentance noted the importance of education and research, and a symbiotic system where world-class professors mentor the 'brightest young intellects,' who, he added, are vital to lab science work. But he also referenced Andrej Karpathy and Eureka Labs, mentioning AI's ability to 'emulate any teacher,' which he said raises 'existential questions for universities.'

'How do we nurture what is human?' Sentance asked rhetorically. 'OpenAI and others know this is coming,' he said, citing rumors about what Sam Altman says about AI and education in private. While positing that some disruption can be constructive, Sentance asked whether universities are capable of what he calls 'inventive renewal.' On the other hand, he acknowledged that humans are guiding other humans toward educational objectives, and AI doesn't seem poised to change that anytime soon. 'Struggle is hard,' he added, conceding that right now, it's humans (professors) supporting students in learning.

In not too many years, we might have an entirely new system of education. 
As my colleague John Sviokla envisions, we may finally have a universal tutoring model, where it's always one teacher (human, AI or blended) and one student. Sentance also suggests that while there may be growing pains, we might end up with more human teachers, not fewer. 'Journalism experienced something similar,' he pointed out: 'the collapse in cost of information distribution, complete disruption to journalism, but that (didn't lead) to fewer storytellers – rather more, with the growth of their own audiences. … I'm confident the same will happen for education: new forms of learning, new forms of teaching, but more teachers, more lifelong learning, and at a whole new scale, to audiences previously left out.'

That's optimistic. But it's compelling, too. To the extent that Substack and other venues are the new journalism, maybe tomorrow's teachers will be thriving in places other than traditional schools. Education might look different, but in many ways, it might actually get better.


Forbes
28-03-2025
- Forbes
The Legacy Of The Web: And Where We Go From Here
Portrait of British computer scientist and engineer Tim Berners-Lee in a classroom at the Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, March 23, 1998. Berners-Lee, founder of the World Wide Web, was photographed during a shoot for Red Herring Magazine.

All over the place, you see people making analogies between the current moment in AI and those heady times in the early millennial years when the Internet was taking off in a big way. Few would contest that the Internet was a real revolution of its own kind, setting the stage for everything else that's happening in the 21st century. Even if we did create all of this new AI capability, how would it move around the world at lightning speed, if not for the global system of interconnects that the Internet represents? Think back to those times when people struggled to even comprehend how the Internet worked, and what it would look like a quarter century later – i.e. right now.

I caught up with Sir Tim Berners-Lee at the Imagination in Action event at Davos in January, and asked him what it was like to create the World Wide Web all of those years ago, when he was around 35 years of age. He stressed the non-commercial nature of his work, and said everything should be 'royalty free' in this kind of innovation. 'I got to meet a lot of interesting people,' he said.

We also talked about how the web has been perceived over the years, and what kind of environment it represents. Citing a backlash against some kinds of Internet activity, like social media, Berners-Lee noted how the Internet really isn't homogenous, but instead a collection of many types of content and social arenas, some of which are more valuable than others. 
He urged greater regulation of the Internet, to get rid of addictive and harmful elements and make it safer for children, saying that it's also important for young users to remain anonymous on the web to guard their personal identities and data from outside parties. 'If you make (the bad stuff) illegal, suddenly the phone is all good stuff,' he said.

I asked Berners-Lee about the situation in the Eurozone, and how the Europeans are doing on tech development. He was optimistic. He also pointed out that his early work was done at CERN, on the continent, and not in Britain. He pointed to Barcelona and other EU cities where tech hubs are evolving. 'People gravitate towards our company, Inrupt, because they are passionate about what we do,' he said. 'And we find that … we've got Germany, Netherlands, Belgium, also people, even in England.'

In terms of the opportunity for today's young career professionals, he noted how his own work was based on an employer giving him permission to innovate. By the same token, he said, today's companies should give the young people in their ranks the ability to disrupt, and see what happens.

All of this was illuminating as we think about where technology is going. Things are happening at lightning speed again – we have what looks like the evolution of digital sentience and everything that entails, and a lot of times it's simply bewildering. But we can get guidance from these past innovators, who did so much in the days before AI, to think about how to keep incorporating new technologies into our lives. I'm going to continue bringing the latest headlines from the tech world as we move through 2025, which we all see as an inflection point for media, for business, and frankly speaking, for life.