What Are Digital Defense AI Agents?

Forbes | 03-05-2025
We're in the new world of agentic AI, which means that everyone's looking at how to use AI agents to their advantage.
In a simplistic sense, that means companies are looking to use AI agents to sell, while governments are trying to use AI agents to do whatever it is that governments do. Some consumer advocates argue that the individuals so often targeted by business and government activity need AI agents of their own to defend them.
When Alex 'Sandy' Pentland took the stage at this year's Imagination in Action event, he was talking about exactly this kind of agent.
'They're going to try and hack me, do bad things to me,' he said of those ubiquitous agents controlled by business, government or big interest parties. 'They are going to twist my mind around politics, all of those things. And my answer to this is I need an AI agent to defend me. I need something who's on my side who can help me navigate returning things or avoiding scams, or all that whole sort of thing.'
The idea Pentland describes is an AI agent of your own that watches all of the agent activity aimed at you and intervenes on your behalf.
The idea of a personal 'digital defender' in the form of an AI agent is not very widely talked about on the web. Pentland's video is up there, but you don't see much about the specific type of project in research papers, or on corporate sites, or even at Consumer Reports (more on this later).
In a way, it's like having a public defender in court. There's a legal effort against you, so you need your own advocate on your side. Although some might call these attorneys 'public pretenders' due to underpayment, short staffing, or other problems, hopefully the AI agent will fare better overall.
It's also sort of like consumer reporting – Pentland mentioned how Consumer Reports has been doing this kind of work for 80 years with polls and other tools.
'This is why we have seat belts in cars,' he said. 'At Consumer Reports, what they do is, they poll all their people, they do tests and things like that to find good products. That's what I want, is, I want somebody who's on my side that way.'
A similar idea comes from a company called Twine, which builds cybersecurity agents intended to protect people from cyberattacks.
But all that aside, Pentland's idea is still in its infancy.
In fact, one of the most interesting parts of his presentation was when he talked about all of these business people making their way into one room to talk about personal AI defense agents.
'We had C-level representation, the head of AI products for every single major AI producer, show up on one week's notice,' he explained. 'We also had all the payers show up … people (who handle) credit cards, etc. We had all the systems guys show up. Now (you're in a) little room with more C-level people than you've ever seen in your entire life. Very busy people who showed up on one week's notice.'
It's largely liability, he suggested, that brought them to the table.
'If they're going to deploy these things, and they're going to be interacting with you, they had better not cheat, they'd better not be biased, or scam you,' he said. 'They have a lot of liability, legal liability, as well as reputational liability. They have to be fair in helping you do things, otherwise they're going to end up in class action courts. That's what they wanted. They wanted someone to build a standard best practice personal agent.'
He mentioned a couple of caveats: the agentic system has to undergo legal testing, and ideally it should be hosted in academia to demonstrate impartiality. And while best practices are good, he said, companies and other parties really want a standard, because a standard is bulletproof.
Pentland also talked about a sort of digital populism that's appealing to those who feel like there's strength in numbers.
'You're just you,' he said. 'But if there were a million yous, or 10 million yous, all (of them) trying to get a good deal, avoid scams, fill out that legal form, you could actually have AIs that are competitive with the best results. So that solves the own-your-own-data problem (pretty well).'
In response to questions, Pentland went over some advice for those who are just starting their careers now. Part of it had to do with solving big questions around how these defense agents will work.
'How do I know what's good for me, and what I want?' he asked, raising some of the essential questions of how an AI agent can target its efforts correctly, according to the user's preference and welfare.
He also brought up questions around how to put agents together, to build toward what he called a 'network effect' that magnifies what a connected system of agents can do.
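One common way to model that network effect is Metcalfe's-law-style reasoning, under which a network's potential value scales with the number of possible pairwise links between its members. A minimal sketch (illustrative only; the function name is our own, not Pentland's formulation):

```python
# Illustrative sketch: how the number of possible agent-to-agent links
# grows quadratically as more agents join a fully connected network.

def pairwise_connections(n_agents: int) -> int:
    """Distinct pairwise links among n_agents in a fully connected network."""
    return n_agents * (n_agents - 1) // 2

for n in [10, 100, 1_000, 10_000]:
    print(f"{n:>6} agents -> {pairwise_connections(n):>12,} possible links")
```

Ten agents yield only 45 links, but 10,000 agents yield nearly 50 million, which is one rough way to see why "a million yous" could be so much more capable than one.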
He also talked about another kind of game theory where it's easy to upset the apple cart with just a small adjustment.
Essentially, Pentland argued, a bad actor can throw a system out of balance by being 'just a little edgy,' making small changes that set off a detrimental domino effect.
He used the example of a traffic jam, which starts off as just one car in dense traffic changing its behavior. This type of game theory, he asserted, has to be factored into how we create our digital defense agent networks.
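The traffic-jam example can be sketched as a toy amplification model, in which each driver overreacts slightly to the braking of the car ahead. Everything here (the overreaction factor, the function name) is an illustrative assumption, not Pentland's actual model:

```python
# Toy model of a phantom traffic jam: one car slows slightly, and each
# following driver brakes a bit harder than the car ahead did.
# The 1.2 overreaction factor is an assumption chosen for illustration.

def propagate_slowdown(n_cars: int, initial_dip: float,
                       overreaction: float = 1.2) -> list[float]:
    """Return the speed drop experienced by each car down the line."""
    dips = [initial_dip]
    for _ in range(n_cars - 1):
        dips.append(dips[-1] * overreaction)  # each driver overreacts
    return dips

dips = propagate_slowdown(n_cars=10, initial_dip=1.0)  # 1 mph dip up front
print(f"car 1 slows by {dips[0]:.1f} mph, car 10 slows by {dips[-1]:.1f} mph")
```

A one-unit dip at the front of the line grows to roughly a five-unit dip ten cars back, which is the "small adjustment, big disruption" dynamic Pentland warns defense-agent networks must be designed around.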
With all of this in mind, it's probably a good idea to start building those digital defense agents. They might not be perfect right away, but they might be the defense we need against an emerging army of hackers wielding some of the most potent technologies we've ever seen. The idea also feeds back into the debate about open source versus closed source models, and about when tools should be published for all the world to use: it's imperative to keep a lid on the bad actors who could otherwise jeopardize these systems. Cryptocurrency gave us the notion of a 51% attack, in which anyone controlling more than half of a blockchain network's mining power could outpace the honest chain and rewrite its transaction history at will.
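The 51% intuition can be sketched as a weighted random walk, in the spirit of the gambler's-ruin analysis often used to explain the attack (the function, parameters, and trial counts below are our illustrative assumptions, not a protocol specification):

```python
import random

# Toy random-walk model of a 51% attack: each block discovery is a coin
# flip weighted by hash-power share, and the attacker tries to erase the
# honest chain's head start. Illustrative only; real attacks and
# confirmation rules are more involved.

def attacker_overtakes(attacker_share: float, head_start: int = 6,
                       max_blocks: int = 2_000, seed: int = 0) -> bool:
    """True if the attacker's chain ever becomes the longest."""
    rng = random.Random(seed)
    lead = head_start  # honest chain's current lead, in blocks
    for _ in range(max_blocks):
        lead += -1 if rng.random() < attacker_share else 1
        if lead < 0:
            return True  # attacker's chain is now the longest
    return False

wins = sum(attacker_overtakes(0.55, seed=s) for s in range(100))
print(f"attacker with 55% of hash power erased a 6-block lead in {wins}/100 runs")
```

With even a modest majority of hash power, the attacker erases the honest chain's lead in essentially every trial, which is why majority control is treated as total control.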
The solution to our AI liability might be something like this. Look for this type of research to continue.

