AI Crosstalk On Your Claim


Forbes | 15-05-2025

Sometimes it's still difficult to envision exactly how the newest LLM technologies are going to connect to real life implementations and use cases in a given industry.
Other times, it's a lot easier.
But over just this past year, we've been hearing a lot about AI agents and, for lack of a better term, about humanizing the technology that's in play.
An AI agent is specialized – it's focused on a defined set of tasks. It's less general than a generic neural network, and it's trained toward particular goals and objectives.
We've seen this work out in handling the tough kinds of projects that used to require a lot more granular human attention. We've also seen how API technology and related advances can allow models like Anthropic's Claude to perform tasks on computers, and that's a game-changer for the industry, too.
So what are these models going to be doing in business?
Cindi Howson has an idea. As Chief Data Strategy Officer at ThoughtSpot, she has a front-row seat to this type of innovation.
Speaking at an Imagination in Action event in April, she gave an example of how this could work in the insurance industry. I want to include it in monologue form, because it lays out, in a practical way, how an implementation could work.
'A homeowner will have questions,' she said. ''Should I submit a claim? What will happen if I do that? Is this even covered? Will my policy rates go up?' The carrier will say, 'Well, does the policy include the coverage? Should I send an adjuster out? If I send an adjuster now … how much are the shingles going to cost me, or steel or wood? And this is changing day to day.' All of this includes data questions. So if you could re-imagine, all of this is now manual (and) can take a long time. What if we could say, let's have an AI agent … looking at the latest state of those roofing structures. That agent then calls a data AI agent, so this could be something like ThoughtSpot, that is looking up how many homeowners have a policy with roofs that are damaged. The claims agent, another agent, could preemptively say, 'Let's pay that claim.' Imagine the customer loyalty and satisfaction if you did that preemptively, and the claims agent then pays the claim.'
It's essentially ensemble learning for AI, applied to insurance, and Howson suggested there are many other fields where agentic collaboration could work this way. Each agent plays its particular role; you could almost sketch out an org chart the same way you do with human staff.
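To make that hand-off concrete, here is a minimal sketch of the workflow Howson describes: a roof-monitoring agent flags damage, a data agent checks coverage and current repair costs, and a claims agent preemptively pays eligible claims. The class and function names are hypothetical illustrations for this article, not a ThoughtSpot or carrier API; a real deployment would wire each step to live weather feeds, a data platform, and a payments system rather than the stubbed values used here.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    policy_id: str
    covers_roof: bool       # does the policy include roof coverage?
    roof_damaged: bool      # flagged by the monitoring agent
    claim_estimate: float = 0.0

def roof_monitoring_agent(policies):
    # Stub: in practice this agent would watch storm tracks or aerial imagery.
    return [p for p in policies if p.roof_damaged]

def data_agent(damaged, shingle_cost=4500.0):
    # Stub for a data agent: look up coverage and today's material costs.
    eligible = []
    for p in damaged:
        if p.covers_roof:
            p.claim_estimate = shingle_cost
            eligible.append(p)
    return eligible

def claims_agent(eligible):
    # Stub: preemptively pay each eligible claim and report what was done.
    return {p.policy_id: f"paid ${p.claim_estimate:,.2f}" for p in eligible}

if __name__ == "__main__":
    book = [
        Policy("HO-1001", covers_roof=True, roof_damaged=True),
        Policy("HO-1002", covers_roof=False, roof_damaged=True),
        Policy("HO-1003", covers_roof=True, roof_damaged=False),
    ]
    print(claims_agent(data_agent(roof_monitoring_agent(book))))
    # {'HO-1001': 'paid $4,500.00'}
```

Each function stands in for a full agent; the point is the chain of calls, which mirrors the org-chart framing above.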
And then, presumably, they could sketch humans in, too. Howson mentioned keeping a human in the loop in passing, and it's likely that many companies will adopt a hybrid approach. (We'll see that idea of hybrid implementation show up later here as well.)
Our people at the MIT Center for Collective Intelligence are working on this kind of thing, as you can see.
In general, what Howson is talking about has a relation to APIs and the connective tissue of technology as we meld systems together.
'AI is the only interface you need,' she said, in thinking about how things get connected, now, and how they will get connected in the future.
Explaining how she does research on her smartphone, and how AI connects elements of a network to, in her words, 'power the autonomous enterprise,' Howson led us to envision a world where our research and other tasks are increasingly out of our own hands.
Of course, the quality of data is paramount.
'It could be customer health, NPS scores, adoption trackers, but to do this you've got to have good data,' she said. 'So how can you prepare your data? And AI strategy must align to your business strategy, otherwise, it's just tech. You cannot do AI without a solid data foundation.'
Later in her talk, Howson discussed how business leaders can bring together unstructured data from things like live chatbots with more structured sources, for example, semi-structured PDFs sitting on old network drives.
So legacy migration is going to be a major component of this. And the way that it's done is important.
'Bring people along on the journey,' she said.
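As one way to picture that data-preparation step, here is a minimal sketch, assuming chat transcripts arrive as JSON exports and that text has already been pulled out of the legacy PDFs upstream (for example, with a PDF-extraction library); it simply normalizes both sources into one flat list of records that an analytics layer or data agent could query. The field names are illustrative, not a ThoughtSpot schema.

```python
import json

# Illustrative inputs: a chat-transcript export and text extracted from a legacy PDF.
chat_export = json.dumps([
    {"customer_id": "C-17", "ts": "2025-05-01T10:02:00", "text": "Is hail damage covered?"},
])
pdf_pages = {
    "policies/HO-1001.pdf": "Policy HO-1001. Roof coverage: included. Deductible: $1,000.",
}

def normalize_chat(raw_json):
    # Unstructured source: keep the free text and tag where it came from.
    return [
        {"source": "chat", "ref": m["customer_id"], "when": m["ts"], "text": m["text"]}
        for m in json.loads(raw_json)
    ]

def normalize_pdfs(pages):
    # Semi-structured source: one record per document, text extracted upstream.
    return [
        {"source": "pdf", "ref": path, "when": None, "text": text}
        for path, text in pages.items()
    ]

records = normalize_chat(chat_export) + normalize_pdfs(pdf_pages)
for r in records:
    print(r["source"], r["ref"], r["text"][:40])
```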
There was another point in this presentation that I thought was useful in the business world.
Howson pointed out how companies have a choice – to send everything to the cloud, to keep it all on premises, or to adopt a hybrid approach.
Vendors, she said, will often recommend all of one or all of the other, but a hybrid approach works well for many businesses.
She ended with an appeal to the imagination:
'Think big, imagine big,' she said. 'Imagine the whole workflow: start small, but then be prepared to scale fast.'
I think it's likely that a large number of leadership teams will implement something like this in 2025. We've already seen innovations like MCP (the Model Context Protocol) help usher in the era of AI agents. This gives us a bit of an illustration of how we get there.
