Latest news with #ImaginationinAction


Forbes
a day ago
- Forbes
Should We Think Of AI As A Mother?
This is a unique time in the development of artificial intelligence. When historians go back and sort through the last few years, and the years to come, it might be hard to put a finger on just where the critical mass moment occurred, when it was that we vaulted into the future on the wings of LLMs. But to me, it's telling that we are talking so much about systems and products and opportunities that would have been unimaginable just a decade ago. Image creation, for instance. Even in the oughts, in the early years of the new millennium, you still had to make your own pretty pictures and charts and graphs. Not anymore. Voice cloning, realistic texting companions, robots running races… the list goes on and on.

Amid this rapid set of developments, some of those closest to the industry are warning that we need a certain trajectory to make sure that AI is safe. One such person is Yann LeCun, former head of research at Meta, who has been on stage at multiple Imagination in Action events and gets top billing on many panels and conferences where he discusses innovation. Right now, LeCun is in the news for suggesting that AI needs 'guardrails': specific principles that we will need to keep in mind to ensure the fidelity of our use cases. What he's calling for is two-fold: first, that the systems be able to represent 'empathy' or compassion, and second, that AI entities be deferential to human authority. That second one speaks to the way that breakout forces escape the food chain of the natural world: humans did this aeons ago, with weaponry and protective systems that basically eliminated natural predators. I guess the idea is that we now have a new potential predator that must be neutralized in a different way. To wit, LeCun said this: 'Those hardwired objectives/guardrails would be the AI equivalent of instinct or drives in animals and humans.' That word, instinct, helps to explain those deep-level motivations that do, in a real sense, guide behaviors. Hopefully we haven't lost ours, as humans, and hopefully we can help AIs form theirs.

Reporting on LeCun's comments notes that he's speaking in the wake of some input from Geoffrey Hinton, who is often called the 'godfather of AI' but ended up disavowing his brainchild, to a certain extent. Hinton's own comments go right to the core of how we see human-to-human interactions, and by extension, those we will have with humanoid AI. He asks us to imagine if AI could be like our mothers. 'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,' Hinton reportedly said. 'If it's not going to parent me, it's going to replace me. … These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die.'

Unfortunately, this goal seems to fly in the face of the hubris observed in our modern societies: with both superpowers and domestic populations armed to the teeth against each other, what chance do we have of internalizing the right instinct and bonding with a more powerful partner? Ascribing maternal roles to AI seems like a positive thing, but is it the right thing, at the end of the day? Ultimately, those aspirations that LeCun and Hinton mention (empathy, etc.) are objectives for us, too.
It's also sobering that these comments come at a time when a jury has just brought the top self-driving vehicle company to heel with a $200 million verdict for a fatality involving its technology: ruling on the death of Naibel Benavides Leon, struck by a Tesla car on Autopilot, the jury found that technology makers are responsible, to an extent, for a lack of guardrails that has real and tragic consequences. It's a powerful metaphor: that to build correctly, we have to deliberate not only on market principles, but on greater ones, too; that we have to have a long-term picture of how society is going to work with these AGIs and agentic systems in play. AI is now able to 'do things for you,' and so, what sorts of things will it be doing? I'm reminded, again, of the proposal by my colleague Dr. John Sviokla that AI could provide individual tutors for humans, to help them work through various kinds of critical thinking, and the suggestion from other quarters that one human priority should be to hire an army of philosophers to keep us nicely in the lane when it comes to AI development.

Here's an interesting resource from Selmer Bringsjord and Konstantine Arkoudas at the Rensselaer Polytechnic Institute (RPI) in Troy, NY, writing in 2007 about the foundations of AI research. They cite another team of authors in suggesting: 'The fundamental goal of AI research is not merely to mimic intelligence or produce some clever fake. Not at all. AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves.'

'This "theoretical conception" of the human mind as a computer has served as the bedrock of most strong-AI research to date,' Bringsjord and Arkoudas write. 'It has come to be known as the computational theory of the mind; we will discuss it in detail shortly. On the other hand, AI engineering that is itself informed by philosophy, as in the case of the sustained attempt to mechanize reasoning, discussed in the next section, can be pursued in the service of both weak and strong AI.'

There's a lot more in there to sink your teeth into, about speculation, logic, mechanistic thought and more. And similarly, quite a few MIT people are working somewhere at the junction of neuroscience, AI, and biological modeling, to come to a more informed perspective on what the future will look like. And perhaps, as Paul Simon sings, the mother and child reunion is only a motion away.


Forbes
02-07-2025
- Science
- Forbes
Beyond Computer Vision, Brains In Jars, And How They See
In trying to move forward with the intersection of AI and neuroscience, one of the abiding tasks that teams focus on is to look at the similarities and differences between biological brains that develop naturally, and artificial ones created by scientists. But now we have this whole other dichotomy between neural networks, which in their modern deep-learning form are only a decade or so old, and new biological organoid brains developed with biological materials in a laboratory. If you feel like this is going to be deeply confusing in terms of neurological research, you're not alone, and there are a lot of unanswered questions around how the brain works, even with these fully developed simulations and models.

Terminology and Methodology with Organoids

A couple of weeks ago, I wrote about the process of growing brain matter in a lab. Not just growing brain matter, but growing a small pear-shaped brain, which scientists call an organoid, that apparently can grow its own eyes. Observing that sort of strange phenomenon feeds our instinctive tendency to connect vision to intelligence, to explore the relationship between the eye and the brain. People on both sides of this work, in AI research and in bioscience, have been looking at that relationship. In developing some of the neural net models that are most promising today, researchers were inspired by a roundworm called C. elegans, whose tiny, well-mapped nervous system has famously fed into both AI architectures and medical research. Back to the production of biological brain-esque organoids: in further research, I found that scientists use stem cells and something called 'Matrigel,' which developed out of a decades-long analysis of tumor material in lab mice. There's a lot to unpack there, and we'll probably hear a lot more about this as people realize that these mini-brains are around.

Exploring Vision and Intelligence

One of the tech talks at a recent Imagination in Action event also piqued my interest in this area. It came from Kushagra Tiwary, who talked about exploring 'what if' scenarios involving different kinds of evolution. 'One of the first questions that we ask is: what if the goals of vision were different, right? What if vision evolved for completely different things? The second question we're asking is: We all have lenses in our eyes, our cameras have lenses. What if lenses didn't evolve? How would we see the world if that didn't happen? Would we be able to detect food? … Maybe these things wouldn't happen. And by asking these questions, we can start to investigate why we have the vision that we have today, and, more importantly, why we have the visual intelligence that we have today.' He had one more question. (Two more questions, really.) 'Our brains also develop at kind of the same pace as our eyes, and one would argue that, you know, we really see with our brains, not with our eyes, right? So what if the computational cost of the brain were much lower?' He talked about the brain/eye scaling relationship, and key elements of how we process information visually. Then Tiwary mentioned that this could inform AI research as we build agents in some of the same ways that we humans are built ourselves.

Computer Vision, Robotics, and Industrial Applications

There was another tech talk at the same event that covered collaborative visual intelligence.
Annika Thomas went over some of the characteristics of multi-agent systems in a three-dimensional workspace: their ability to localize and extract objects, and something called 'Gaussian splatting' that informs how we think about information processing between the eye and the brain. The bottom line is that we have all of these highly complex models: we have the neural nets, which are fully digital, and now we have proto-brains growing in a petri dish. Then we also have these bodies of research that show us things like how the human brain evolved, how it differs from its artificial alternatives, and how we can continue to drive advancements in this field.

Last, but not least, I recently saw that scientists believe we'll be able to harvest memories from a dead human brain in about 100 years, by 2125. Why so long? I asked ChatGPT, and the answer that I got was threefold: first, the process of decomposition makes the job difficult; second, we don't have a full mapping of the human brain; and third, the desired information is stored in delicate structures. In other words, our memories are not held in binary ones and zeros, but in neural structures and synaptic strengths, and those are hard for any outside party to measure. It occurs to me, though, that if artificial intelligence itself has this vast ability to perceive small differences and map patterns, this type of capability may not be as far away as we think. That's the keyword here: think.
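To make that last point a little more concrete, here is a toy sketch in Python of the idea that a 'memory' can live in continuous connection strengths rather than in a string of bits. It uses a simple Hebbian, Hopfield-style rule purely as an illustration; this is an assumption made for the sake of the example, not a claim about how real brains or organoids store anything.

```python
# Toy illustration: a "memory" stored as a matrix of continuous synaptic
# weights (Hebbian/Hopfield-style), not as a discrete string of bits.
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=8)   # an activity pattern to "remember"
weights = np.zeros((8, 8))                  # synaptic strengths start at zero

# Hebbian rule: neurons that fire together wire together.
weights += np.outer(pattern, pattern) / 8.0
np.fill_diagonal(weights, 0.0)

# Recall from a corrupted cue: flip two entries and let the weights clean it up.
cue = pattern.copy()
cue[:2] *= -1
recalled = np.sign(weights @ cue)
print("recovered the original pattern:", bool(np.all(recalled == pattern)))
```

The point of the toy is simply that the stored pattern is smeared across an entire matrix of weights, which is part of why 'reading out' a memory from the outside is such a delicate proposition.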


Forbes
20-05-2025
- Science
- Forbes
Physical And Agentic AI Is Coming
Some interesting questions are coming up in the world of artificial intelligence that have to do with the combination of physical environments and agentic AI. First of all, that term, agentic AI, is only a couple of years old. But it's taking hold in a big way – in enterprise, in government, and elsewhere. The key is this, though: if the AI agents can do things, how do they get the access to do those things? If it's digital tasks, the LLM has to be supported by APIs and connective tissue, like a Model Context Protocol or something else. But what if it's physical?

In a recent panel at Imagination in Action in April, my colleague Daniela Rus, director of the MIT CSAIL lab, talked to a number of experts about how this would work in both the public and private sectors. 'The bridge is when we can take AI's ability to understand text, images and other online data about the physical world to make real-world machines intelligent,' Rus said. 'And now, if you can get a machine to understand a high-level goal, break it down into sub-components, and execute some of the sub-goals by itself, you end up with agentic AI.' So what did panelists center on? Here are a few major ideas that came out of the discussion on how AI can work more humanly in the physical world where humans live.

In exploring what makes humans different from machines, there was the idea that people do things on a personal basis, which differentiates them from the herd. So the AI will have to learn not to follow a consensus-based model all of the time. That's a key bit of difference, what you might call a 'foible' that makes humans special; in the enterprise world, it may not be a foible at all, but a value add. 'What you do not want is consensus, regression-to-the-mean information, like generally accepted ways of doing things,' said panelist Emrecan Dogan. 'This is not how, as humans, we create value. We create value by taking a subjective approach, taking the path that is very personal, very subjective, very idiosyncratic. We are not always right, but when we are right, this is how we create value.'

As for government, panelist Col. Tucker Hamilton talked about electronic warfare, and stressed the importance of a human in the loop. 'I think we want to embed (HITL) so that a human is still in control,' he said. 'I think we need… explainability, traceability, and that goes along with governability as well. And I think we want to be able to favor adaptability over perfection.' 'You have to reason, you have to think and understand,' added panelist Jonas Diezun.

Another way to think about this is that the programs have to have just the right amount of deterministic guidance. 'They don't always repeat,' Dogan said of these tools. 'They don't behave exactly the same way the second, third, fourth time you run it. So I think the big (idea) is the right blend of determinism that you can embed along with the stochasticity. So I think the truly powerful agents will convey some expression of deterministic behavior, but then the stochastic upside of AI models.'

Some other components of this have to do, simply, with infrastructure. 'Sensors, we're gathering information off of a sensor multi-modal (program), like sensor gathering,' Hamilton said. 'How do we summarize that information? How do we make sure that one sensor is fused with another sensor? How do we have pipelines that we can get that information to, in order to have someone just assess that, like sensor information, let alone how do we adopt flight autonomy?'
In other words, all of those real-world pieces have to be connected the right way for the system to work in physical space, and not just digitally. Finally, Rus asked each panelist for their timeline for AI taking over most human tasks. Each gave two estimates: the first for when simple tasks could be handed off to AI, the second for when AI would take over the more complex tasks. The verdict? 'Quarters, not years.' I thought all of this was very instructive in showing some of what we have to contend with as we anticipate the rest of the AI revolution. It's been a long time coming, but the exponential curve of the technology is finally here, and it is likely to be integrated into our worlds quickly, almost suddenly. Job displacement is an enormous concern. So is the potential for runaway systems that could do more harm than good. Let's continue to be vigilant as 2025 rolls on.
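To make Rus's definition a little more concrete, here is a minimal Python sketch of that goal-decomposition loop: a high-level goal is split into sub-goals, the agent executes the ones it can reach through its tools, and everything else is escalated to a person, in the spirit of Hamilton's human-in-the-loop point. The planner below is a hard-coded stand-in rather than a real model call, and all of the names and tasks are illustrative assumptions, not anything the panel specified.

```python
from dataclasses import dataclass

# A minimal, hypothetical sketch of the agentic loop described above:
# understand a high-level goal, break it into sub-goals, execute some of
# them autonomously, and keep a human in the loop for the rest.

@dataclass
class SubGoal:
    description: str
    executable_by_ai: bool   # e.g., a digital task reachable through an API or tool
    done: bool = False

def decompose(goal: str) -> list[SubGoal]:
    """Stand-in for an LLM planner; in a real system this would be a model call."""
    return [
        SubGoal("pull the latest readings from the area's sensors", executable_by_ai=True),
        SubGoal("fuse and summarize the sensor data", executable_by_ai=True),
        SubGoal("authorize the physical action", executable_by_ai=False),  # human in the loop
    ]

def run_agent(goal: str) -> None:
    print(f"goal: {goal}")
    for sub in decompose(goal):
        if sub.executable_by_ai:
            sub.done = True   # pretend a tool call (API, MCP server, etc.) succeeded
            print(f"  agent completed: {sub.description}")
        else:
            print(f"  escalated to a human: {sub.description}")

run_agent("inspect the loading dock and clear it for the next shipment")
```

Even a toy like this shows where the panel's concerns land: the tool connections are the 'connective tissue,' and the escalation branch is where explainability and governability have to live.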


Forbes
15-05-2025
- Business
- Forbes
AI Crosstalk On Your Claim
Sometimes it's still difficult to envision exactly how the newest LLM technologies are going to connect to real-life implementations and use cases in a given industry. Other times, it's a lot easier. Just this past year, we've been hearing a lot about AI agents and, for lack of a better term, about humanizing the technology that's in play. An AI agent is specialized: it's focused on a set of tasks. It is less general than a generic neural network, and it's trained toward particular goals and objectives. We've seen this work out in handling the tough kinds of projects that used to require a lot more granular human attention. We've also seen how API technology and related advances can allow models like Anthropic's Claude to perform tasks on computers, and that's a game-changer for the industry, too.

So what are these models going to be doing in business? Cindi Howson has an idea. As Chief Data Strategy Officer at ThoughtSpot, she has a front-row seat to this type of innovation. Talking at an Imagination in Action event in April, she gave an example of how this would work in the insurance industry. I want to include this in monologue form, because it lays out how an implementation could work, in a practical way.

'A homeowner will have questions,' she said. '"Should I submit a claim? What will happen if I do that? Is this even covered? Will my policy rates go up?" The carrier will say, "Well, does the policy include the coverage? Should I send an adjuster out? If I send an adjuster now… how much are the shingles going to cost me, or steel or wood? And this is changing day to day." All of this includes data questions. So if you could re-imagine: all of this is now manual (and) can take a long time. What if we could say, let's have an AI agent… looking at the latest state of those roofing structures. That agent then calls a data AI agent, so this could be something like ThoughtSpot, that is looking up how many homeowners have a policy with roofs that are damaged. Then the claims agent, another agent, could preemptively say, "let's pay that claim." Imagine the customer loyalty and satisfaction if you did that preemptively, and the claims agent then pays the claim.'

It's essentially an ensemble approach for AI, in the field of insurance, and Howson suggested there are many other fields where agentic collaboration could work this way. Each agent plays its particular role. You could almost sketch out an org chart the same way that you do with human staff. And then, presumably, they could sketch humans in, too. Howson mentioned human in the loop in passing, and it's likely that many companies will adopt a hybrid approach. (We'll see that idea of hybrid implementation show up later here as well.) Our people at the MIT Center for Collective Intelligence are working on this kind of thing, as you can see.

In general, what Howson is talking about has a relation to APIs and the connective tissue of technology as we meld systems together. 'AI is the only interface you need,' she said, in thinking about how things get connected now, and how they will get connected in the future. Explaining how she does research on her smartphone, and how AI connects elements of a network to, in her words, 'power the autonomous enterprise,' Howson led us to envision a world where our research and other tasks are increasingly out of our own hands. Of course, the quality of data is paramount.
'It could be customer health, NPS scores, adoption trackers, but to do this you've got to have good data,' she said. 'So how can you prepare your data? And AI strategy must align to your business strategy, otherwise, it's just tech. You cannot do AI without a solid data foundation.' Later in her talk, Howson discussed how business leaders can bring together unstructured data from things like live chatbots with more structured data, for example, semi-structured PDFs sitting on old network drives. So legacy migration is going to be a major component of this. And the way that it's done is important. 'Bring people along on the journey,' she said.

There was another point in this presentation that I thought was useful in the business world. Howson pointed out how companies have a choice: to send everything to the cloud, to keep it all on premises, or to adopt a hybrid approach. Vendors, she said, will often recommend all of one or all of the other, but a hybrid approach works well for many businesses. She ended with an appeal to the imagination. 'Think big, imagine big,' she said. 'Imagine the whole workflow: start small, but then be prepared to scale fast.' I think it's likely that a large number of leadership teams will implement something like this in the year 2025. We've already seen some innovations like MCP that helped usher in the era of AI agents. This gives us a little bit of an illustration of how we get there.
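As a thought experiment, here is a small Python sketch of the agent-to-agent hand-off Howson describes: a roofing-condition agent flags likely damage, a data agent checks which of those homeowners actually carry the coverage, and a claims agent pays the clear-cut cases preemptively while routing the rest to a human adjuster. The function names, scores, and threshold are all illustrative assumptions on my part; none of this reflects ThoughtSpot's actual implementation.

```python
def roofing_agent() -> list[dict]:
    """Stand-in for an agent watching storm and roofing-condition data."""
    return [{"policy_id": "P-1001", "roof_damage_score": 0.92},
            {"policy_id": "P-1002", "roof_damage_score": 0.35}]

def data_agent(reports: list[dict]) -> list[dict]:
    """Stand-in for a data agent checking which policies cover roof damage."""
    covered = {"P-1001", "P-1002"}   # pretend lookup against policy records
    return [r for r in reports if r["policy_id"] in covered]

def claims_agent(claims: list[dict], pay_threshold: float = 0.8) -> None:
    """Pay clear-cut claims preemptively; route borderline ones to a human adjuster."""
    for claim in claims:
        if claim["roof_damage_score"] >= pay_threshold:
            print(f"preemptively paying the claim on {claim['policy_id']}")
        else:
            print(f"routing {claim['policy_id']} to a human adjuster")

# One agent's output feeds the next, much like hand-offs between departments.
claims_agent(data_agent(roofing_agent()))
```

The org-chart analogy falls out naturally: each function here is a 'role,' and the human adjuster is the hybrid, human-in-the-loop piece Howson mentioned in passing.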


Forbes
03-05-2025
- Business
- Forbes
What Are Digital Defense AI Agents?
We're in the new world of agentic AI, which means that everyone's looking at how to use AI agents to their advantage. In a certain simplistic sense, that means that companies are looking to use AI agents to sell, while governments are trying to use AI agents to do whatever it is that they do. Some consumer advocates argue that individual people, who are so often being targeted by businesses and government activities, need their own AI agents to defend them. When Alex 'Sandy' Pentland took the stage at this year's Imagination in Action event, he was talking about exactly this type of thing. 'They're going to try and hack me, do bad things to me,' he said of those ubiquitous agents controlled by business, government or big interest parties. 'They are going to twist my mind around politics, all of those things. And my answer to this is I need an AI agent to defend me. I need something who's on my side who can help me navigate returning things or avoiding scams, or all that whole sort of thing.' The idea that Pentland describes is that your AI agent addresses all of that other agent activity that's aimed at you, and intervenes on your behalf.

The idea of a personal 'digital defender' in the form of an AI agent is not very widely talked about on the web. Pentland's video is up there, but you don't see much about this specific type of project in research papers, or on corporate sites, or even at Consumer Reports (more on this later). In a way, it's like having a public defender in court. There's a legal effort against you, so you need your own advocacy to represent you on your side. Although some might call these attorneys 'public pretenders' due to underpayment, short staffing, or other problems, hopefully the AI agent is more effective in a global sense. It's also sort of like consumer reporting. Pentland mentioned how Consumer Reports has been doing this kind of work for 80 years with polls and other tools. 'This is why we have seat belts in cars,' he said. 'At Consumer Reports, what they do is, they poll all their people, they do tests and things like that to find good products. That's what I want, is, I want somebody who's on my side that way.' Another somewhat similar idea is the cybersecurity agents created by a company called Twine, which are intended to protect people from cyberattacks.

But all that aside, Pentland's idea is still in its infancy. In fact, one of the most interesting parts of his presentation was when he talked about all of these business people making their way into one room to talk about personal AI defense agents. 'We had C-level representation, the head of AI products for every single major AI producer, show up on one week's notice,' he explained. 'We also had all the payers show up … people (who handle) credit cards, etc. We had all the systems guys show up. Now (you're in a) little room with more C-level people than you've ever seen in your entire life. Very busy people who showed up on one week's notice.' It's largely liability, he suggested, that brought them to the table. 'If they're going to deploy these things, and they're going to be interacting with you, they had better not cheat, they'd better not be biased, or scam you,' he said. 'They have a lot of liability, legal liability, as well as reputational liability. They have to be fair in helping you do things, otherwise they're going to end up in class action courts. That's what they wanted. They wanted someone to build a standard best practice personal agent.'
He mentioned a couple of caveats: the agentic system has to undergo legal testing, and ideally it should be hosted in academia to show impartiality. While best practices are good, he said, companies and other parties really want a standard, because a standard is bulletproof. Pentland also talked about a sort of digital populism that appeals to those who feel like there's strength in numbers. 'You're just you,' he said. 'But if there were a million yous, or 10 million yous, all (of them) trying to get a good deal, avoid scams, fill out that legal form, you could actually have AIs that are competitive with the best results. So that solves the "own your own data" problem (pretty well).'

In response to questions, Pentland went over some advice for those who are just starting their careers now. Part of it had to do with solving big questions around how these defense agents will work. 'How do I know what's good for me, and what I want?' he asked, raising some of the essential questions of how an AI agent can target its efforts correctly, according to the user's preferences and welfare. He also brought up questions around how to put agents together, to build toward what he called a 'network effect' that magnifies what a connected system of agents can do. He also talked about a kind of game theory where it's easy to upset the apple cart with just a small adjustment. Essentially, Pentland argued, a bad actor can easily throw a system out of balance by being 'just a little edgy,' by making small changes that lead to a domino effect that can be detrimental. He used the example of a traffic jam, which can start with just one car in dense traffic changing its behavior. This type of game theory, he asserted, has to be factored into how we create our digital defense agent networks.

With all of this in mind, it's probably a good idea to think about building those digital defense agents. They might not be perfect right away, but they might be the defense that we need against an emerging army of hackers utilizing some of the most potent technologies we've ever seen. The idea also feeds back into the whole debate about open source and closed source models, and when tools should be published for all the world to use. It's imperative to keep a lid on the types of bad actors that could otherwise jeopardize systems. In the cryptocurrency world, we had the notion of a 51% attack, where anyone controlling more than half of a network's mining or validation power could rewrite recent history and override everyone else. The way we handle AI liability might have to reckon with something similar. Look for this type of research to continue.