Latest news with #AIDailyBrief


Forbes
5 hours ago
- Business
13 Big Hunches On AI And Consultants
Nathaniel Whittemore has some thoughts about consulting in the AI era. For those who might not know, Whittemore is the intrepid host of AI Daily Brief, a podcast I tend to pay attention to as we move through a profound technological shift that has most of us looking for answers.

In a rare episode recorded on the road, Whittemore walked through his own LinkedIn post about consulting companies, gleaning 13 predictions and insights on how these firms will work as AI continues to take hold.

Chunks of legal and accounting work, he suggested, will get sent to AI (that's #1); he called them 'total bye-byes.' Consulting, he said, will 'distill' into a targeted set of realities around AI. He described a kind of musical chairs (that's #2) in which companies move into other market areas, big firms into the mid-market, and so on, as the change happens. As for the displacement of humans, Whittemore pointed to a demand for external validation that, he suggested, will keep the HITL (human in the loop) in play. Enterprises will divide into two categories: those slicing headcount, and those using a 'flexible roster' of partners, consultants and fellow travelers. 'Personally, I'm pretty inclined towards smaller, more liberal organizations powered by a flexible roster of partners and consultants,' he said, 'because I think it aligns also with people's (takes) on their professional services, figuring out how far they can take it with AI.'

Then there's Whittemore's theorizing about new products, new lines of business, and 'new practices' (that's #4 on the list; read the list for the others). His caveat: these predictions are 'ridiculously generic,' and nobody really knows what the future holds. 'I think the way that this plays out is going to be pretty enterprise by enterprise,' he said.

In the second half of the podcast, he suggested that business people are going to be 'Jerry Maguire-ing,' starting to form a different picture of enterprise value. He talked about small teams of consultants managing large swarms of agents as a 'default model,' and suggested that 'pricing experiments' are going to become common. Buyers, he concluded, are going to prioritize intangibles. Here's another of Whittemore's points: people want to work with people they like working with, or, in the parlance of his list, 'judgement, taste and EQ all also rise as moats for consultants.' (That's #10.)

More from Whittemore: the value of prototyping, a prediction of companies operating in what he called 'iteration-land,' and an advantage for parties who can build frameworks for the AI future. 'Continuous iteration requires data measurement, analytics, and systems,' he said, 'to make sense of it all. I think these are table stakes aspects of engagements now: consultants and professional services firms just have to build in an understanding and an expectation of how they're going to measure their impact.'

There's a lot of insight here, and if you want to go back and read the whole thing over again, I wouldn't blame you. Whittemore goes through his analysis at lightning speed.
But you do see certain things crop up time and time again: the idea that rapid change is going to shake out in certain ways, and that companies and people need to pivot as our AI revolution keeps heating up. Check it out.


Forbes
28-04-2025
- Business
The Agents Are Coming – More On What We Will Do Next To AI Partners
If last year was the year artificial intelligence became familiar to the average person, with the rise of brand names like ChatGPT, this year is the year of the AI agent. In a nutshell, this is the idea that LLM engines can go beyond predicting words or simulating conversation, and start doing things themselves. Some of Anthropic's Claude tools are an excellent example of AI taking more initiative and doing more on its own.

At MIT, researchers are working on something called the AI Agent Index, which maintains a database of agentic AI systems, exploring how AI agents are used for research, software development, and more. A resource from our CSAIL lab shows some of the major benefits of AI agents, including efficiency, specialization, and lower operational costs (more on that in a moment). The article also has a list of MIT notables handling projects related to agentic AI, including the work of my colleague Daniela Rus, the director of MIT CSAIL, on integrating natural language processing into self-driving vehicles. It lists challenges, too, and takeaways for business. It's a good survey.

Here's another interesting source of direction on agentic AI. In a recent edition of AI Daily Brief, Nathaniel Whittemore goes over an essay by Gian Segato about new kinds of companies that will leverage technology in specific ways. 'A new breed of companies is emerging, lean, unconventional and wildly successful,' Segato writes. 'They generate hundreds of millions of dollars, yet have no sales teams, no marketing departments, no formal HR, not even vertically specialized engineers. They're led by a handful of people doing the work of hundreds, leveraging machines to scale their impact. For years, we feared automation would replace humans, but as AI reshapes the economy, it's becoming clear that far from replacing human ingenuity, AI has amplified it.'

Segato also goes over a version of what can make AI 'agentic,' related to human ingenuity. 'True agency is an unruly psychological trait,' he writes. 'It's the willingness (to do things) without explicit validation, instruction or even permission. It's the meme "you can just do things": knowing that you could poke life and something will pop out the other side.'

As Whittemore reads Segato's essay, outsourcing the narration to an ElevenLabs voice approximation, the listener hears a thesis taking shape: AI is changing the calculus on specialized labor. Noting that the past has 'not been kind to generalists,' Segato describes a shift in which specialized human knowledge becomes less valuable, too: 'We're now facing a rupture, a phase transition. AI has eroded the value of specialization, because for many tasks, achieving the outcome (that previously took) several years of experience now takes a $20 ChatGPT subscription … a decade ago, it took me nine months to gain enough experience to ship a single prototype. Now it takes just one week to build a state-of-the-art platform ready to be shipped, a project once only achievable by a full team of professionals.'

It will change the way training works, he posits, and may result in many companies favoring outcomes over credentials. In the course of the essay, Segato uses terms like 'homeostatic equilibrium' to describe an environment disrupted by AI, and a 'bimodal shape distribution of deployment,' noting that we might trend toward a need for 'specialized human accountability' in managing these agents.
'This will include sectors such as defense, healthcare, space exploration, biological research and AI administration itself,' Segato writes, 'all domains where variance of prediction models are higher than the acceptable risk threshold. Wherever mistakes can kill, and AI can't prove to be virtually all-knowing, we can expect regulation to enforce natural barriers and the need to hire experts. It's similar to why we continue to require human pilots: despite having the technological capacity for autonomous flight, sometimes we just want the ability to point a finger.'

On the other hand, he describes situations where iterative failure toward success is an option: 'Wherever we are okay with trying again after getting a bad AI generation, we will see market disruption. Data science, marketing, financial modeling, education, graphic design, counseling and architecture will all experience an influx of non-specialized, high-agency individuals. Sure, machines will keep making mistakes, but their rate of improvement has been astronomical and will only continue to delay the point at which generalists feel the need to hire experts.'

After reading the entire piece, Whittemore offers some words of his own (after the obligatory vendor snippets). 'I think it's a great piece,' he starts out, 'very thought provoking, and I'm really excited that Gian has shared it and gotten us all to chatter.' Whittemore referred to a Microsoft Work Trend Index report predicting a time when humans plan and AI executes. '(The index) basically predicted that the end state of agents in the office is human orchestrators and agent operators; basically, that humans were going to do the planning, and that agents were going to do the execution. That's a different way of saying that the key skill sets and attributes of people in the workforce of the future are going to be around planning and coordination of agents, everywhere (that agentic AI) becomes popular.'

He also weighs in on that difference between mission-critical applications and others that have room for error. 'We're even seeing this sort of division in the way that companies are experimenting with agents right now,' Whittemore says. 'There are certain parts of their business where they simply can't abide (the) current fail rate or hallucination rate or underperformance rate, or however you want to determine it, of agents, because it's so critical. On the other hand, there are areas where the consequence of those problems is simply less pertinent. It is in those consequence-light areas that companies are (using) agents now, with the knowledge that capabilities continue to trend up.'

Listening to the podcast and looking at the components of the essay, I realize that a lot of the same ideas we saw at conferences earlier this year are sounding out about the near future. I'm hearing a lot of experts talking about these likelihoods as AI develops rapidly. We'll have AI agents baked into various industries and verticals, and humans will have to figure out how to adapt and coexist with these tools. What will that change do in the context of classical business and its power relationships? We'll have to see.
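Segato's 'acceptable risk threshold' framing is concrete enough to sketch in code. Below is a minimal illustration of that routing logic, not anything drawn from the essay or the podcast; the agent failure rate, the task fields, and the thresholds are all invented for the example.

```python
from dataclasses import dataclass

# Illustrative number only: the agent's observed failure rate on a class of tasks.
AGENT_FAILURE_RATE = 0.05

@dataclass
class Task:
    name: str
    acceptable_risk: float  # highest failure rate the business can tolerate
    retryable: bool         # is a bad AI generation cheap to throw away and redo?

def route(task: Task) -> str:
    """Send work to an agent when the task's error tolerance exceeds the agent's
    failure rate, or when failed attempts are cheap to retry; otherwise keep a
    human expert accountable."""
    if task.retryable or AGENT_FAILURE_RATE <= task.acceptable_risk:
        return "agent"
    return "human expert"  # 'the ability to point a finger'

print(route(Task("draft marketing copy", acceptable_risk=0.2, retryable=True)))        # agent
print(route(Task("adjust flight controls", acceptable_risk=1e-6, retryable=False)))    # human expert
```

The point of the toy is the shape of the decision: the same model can be acceptable or unacceptable depending on the task's tolerance for error, which is exactly the bimodal deployment pattern Segato and Whittemore describe.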


Forbes
22-04-2025
- Business
Companies Race Toward AI Agent Adoption
People have been talking about it for a while, but now the industry is seeing that rush to utilize what can be a game-changer for companies. The rise of the AI agent is no small thing. Many people who had a front-row seat to the cloud, and all the disruption it brought, understand that it was a drop in the ocean compared to what's coming. The prospect of simulating human decision-making, and handing knowledge work to large language models, is a big deal. It leads to replacing human workers with something much less costly and more durable: workers who never need lunch or a bathroom break. Yes, AI engines are far more efficient than humans in so many ways, and now we're seeing that bear out in the enterprise markets.

Reading through some reports on the utility of enterprise AI agents, I noticed that many of them refer to customer support, as well as marketing and process support or fulfillment, as popular implementations. A few other top use cases involve knowledge assistance, generative AI in existing workflows, and the daily use of productivity tools by front-line workers. That last one speaks to the often-promoted idea of the 'human in the loop,' or HITL, and the desire that AI not replace humans but augment their work instead. Practically, though, some of these AI agents leave us wondering: what is the HITL actually needed for?

Consultants and reporting firms are chiming in with rosy projections for the year ahead. One market research report estimates the enterprise AI agent market at $3.6 billion in 2023, growing to $139 billion by 2033. Deloitte adds the following projection: 25% of companies are expected to embrace AI agents by 2025, and 50% two years later. However, given that nearly all companies everywhere will want some of this functionality, the numbers, in both cases, are likely to be much higher. And here's this from a McKinsey report: 'McKinsey research sizes the long-term AI opportunity at $4.4 trillion in added productivity growth potential from corporate use cases.'

In a recent episode of the AI Daily Brief podcast (one of my favorites), host Nathaniel Whittemore talks about a move in executive estimates of investment: from $9 million last quarter to $114 million in Q1 of 2025. 'I think that those of us, probably most of you, who are listening, who have used these tools, find them very quickly making their way into your daily habits,' Whittemore adds. 'I would expect to see (the numbers do) nothing but increase in the coming quarters.' He also cites a KPMG study of companies launching enterprise pilots after experimenting with the technology, suggesting that the share doubled from 37% in Q4 to 65% in Q1, and that 99% of companies said they intend to deploy these agents at some point. '99% of organizations surveyed said that they plan to deploy agents, suggesting to me that 1% of organizations misread the question,' he adds. Whether or not that 1% deliberately forswears the technology is probably beside the point: we have to anticipate that demand is going to be very high.

Although models like OpenAI's o3 are evolving quickly, and no-code tools are democratizing the process of creating applications, there are still some clear boundaries to what AI can do in the workplace. A main one is accuracy; the most common word applied to this problem in LLMs is 'hallucinations.'
Experts are finding, in general, that models with more inference are producing more hallucinations, and that's a problem as these uses become more important to the companies that have already jumped on the bandwagon. Case in point: a news story about Cursor's customer support bot, named Sam, which reportedly announced a nonexistent policy as users found themselves locked out of the platform. The shakeout showed why these kinds of mistakes make a difference. Another concern is hacking: bad actors could take advantage of the functionality to compromise systems. A third is regulation: what is the landscape around these agents going to be like? All of these should be considered as top brass mull opportunities.

I also came across a handy chart and process description from Gartner, the firm whose Magic Quadrant reports have been so helpful in the IT world. Gartner representatives suggest mapping the enterprise's pain points, and then addressing them with AI agents. Address them to do what? 'Enhance customer experiences,' the authors write, 'streamline operations, and uncover new products, services, or revenue streams.'

Another approach that might deal with hallucinations is ensemble learning: having one model check the work of another can keep hallucinations and mistakes from percolating into the places where AI agents help with production. Some suggest that even giving a model access to web search can help mitigate its hallucinations, which is another point brought up on that AI Daily Brief episode.

In so many of the events that I've been privileged to attend, and even host, over the last year or so, I have heard the same refrain: we have to get ready to welcome AI agents into the fold. What all of this tells us is that company adoption of AI agents is not just a flash in the pan. It's happening all around us, and we should be paying attention.
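That ensemble idea, one model checking another's work, reduces to a short loop in code. Here's a minimal sketch under stated assumptions: `call_model` is a placeholder for whatever LLM API you use, and the SUPPORTED/UNSUPPORTED convention and the retry budget are invented for illustration, not any vendor's recommended pattern.

```python
# Minimal sketch of ensemble-style checking: one model drafts, another verifies.

def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider's API")

def answer_with_check(question: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        draft = call_model("generator-model", question)
        verdict = call_model(
            "verifier-model",
            f"Question: {question}\nProposed answer: {draft}\n"
            "Reply SUPPORTED if the answer is grounded and internally "
            "consistent; otherwise reply UNSUPPORTED.",
        )
        if verdict.strip().upper().startswith("SUPPORTED"):
            return draft  # the second model found nothing to object to
    return "Escalate to a human: the models could not agree on an answer."
```

The design choice worth noticing is the fallback: when the two models keep disagreeing, the task escalates to a person rather than shipping an unverified answer, which is the HITL role in miniature.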


Forbes
18-04-2025
- Business
Everyone's Getting Better At Using AI: Thoughts On Vibe Coding
As we look at the landscape around the second quarter of 2025, we see amazing developments. You can look at what's happening at conferences and trade shows. You can ask engineers what they're doing, or consult with a CEO. Everywhere you look, things are changing at breakneck speed.

What's the difference between people who actually use the keyboard to write code, and others who manage people and processes at an abstract level? Well, in the AI age, that gap is getting smaller quickly. But there's still an emphasis on people who know how to code, and especially people who know how to engineer. Coding is getting automated, but engineering is still a creative component of the human domain, for now.

I was listening to a recent episode of AI Daily Brief, where Nathaniel Whittemore talked to Shawn Wang, professionally known as 'Swyx,' about valuing the engineering role. 'It has always been valuable for people who are involved to keep the pulse on what builders are building,' Swyx said. The two conceded, though, that right now 'building' is becoming a vague term, as it's getting easier to spin up a project on a fresh codebase: you just tell the AI what you want, and it builds it. Having said that, in putting together events for the engineering community, Swyx sees the effort as vital to the industry itself. 'The people who have hands on a keyboard also need a place to gather,' he said, noting that for some of these events, attendees have to publish or otherwise prove their engineering capabilities.

Later on, thinking about how this works logistically, the two talked about a new tool called Model Context Protocol (MCP), which lives on GitHub, and how it's being used. MCP connects LLMs to the context that they need. It involves prebuilt integrations, a client-server architecture, and APIs, and it works with environments like Claude Desktop. The 'hosts' are LLM applications; each 'client' maintains a 1:1 connection to a server; and servers expose context, data and prompts. The system uses a transport layer for communication events, including requests, results and errors. (A minimal sketch of what an MCP server looks like appears at the end of this piece.) 'You're not stuck to one model,' Swyx pointed out, illustrating how versatile these setups can be.

Noting an 'S curve' for related technology, Swyx discussed the timing of innovations, invoking Moore's law. 'If you're correct, but early, you're still wrong,' he said, mentioning how companies are 'moving away from a cost plus model to one where you deliver outcomes.' Paraphrasing Shakespeare, he suggested that at companies like Google, execs are asking: 'To MCP, or not to MCP?' And there's another question for implementers: 'How much of my job can you do?' As for a timeline for MCP, Swyx cited the work of Alex Albert. 'The immediate reaction was good,' he said. 'There was a lot of immediate interest. I don't think there was a lot of immediate follow through.'

Later, Swyx brought up the contributions of Lilian Weng, who he said defined an AI agent as 'LLM + memory + planning + tool use.' He also laid out his own definition based on the acronym IMPACT, noting that he sees a lot of this type of work as disordered or unstructured, and that people should ideally be able to define agent engineering well. The 'I,' he said, stands for intent and intensity, goals, and evaluations. 'M' is memory; 'P' is planning. 'A' is authority.
'Think of (the agent) as like a real estate agent,' he said, suggesting that the agent should have specialized knowledge. 'C' is control flow, and 'T' is tool use, which he said everyone can agree on. Swyx called for a 'tight feedback loop' and processes that 'organically take traction' in the enterprise.

This part of the conversation was absolutely fascinating to me as a clear-eyed assessment of the different ways people use the term 'vibe coding.' I've written about how figures like Andrej Karpathy and Riley Brown define this practice of working with AI that can craft code. But there are two interpretations of the phrase, and they're radically different. One that the duo mentioned is that the human programmer gets the vibe of the code and analyzes it as a professional; here, they need to already have some knowledge of what code is supposed to look like. But then there's the other definition. 'Vibe coding gets taken out of context,' Swyx said. In this latter interpretation, you don't need expertise, because you just evoke the vibe of the code and let the AI figure it out. This way, he said, you can get into trouble and waste dollars.

As for best practices in vibe coding, Swyx suggested dealing with legacy code issues, having appropriate skepticism about the limitations of vibe coding, and sampling the space. 'There's something here,' he said, displaying enthusiasm for the democratization of code. 'I don't know if vibe coding is the best name for it.'

In addition to all of the above, people are going to need some form of expertise, whether they are leaders, builders, or both. Regardless of which way you view the new coding world, there's little question that reskilling is going to be a piece of the puzzle for humans. This resource from Harvard talks about tackling the challenge: 'As new technologies are integrated into organizations, with greater frequency, transforming how we work, the need for professionals to adapt and continue to learn and grow becomes more imperative.' I agree. All of this is quite instructive at a time when companies are looking for a way forward. Let's continue with this deep analysis of business today, as AI keeps taking hold throughout the rest of the year.
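As promised above, here is a minimal sketch of what an MCP server can look like, assuming the current official Python SDK (pip install mcp) and its FastMCP helper; the server name, tool, and resource are invented examples, and this should be read as pseudocode if the SDK's surface has shifted.

```python
# Minimal MCP server sketch. A host application (for example Claude Desktop)
# connects to this as a client and can discover the tool and resource below.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def word_count(text: str) -> int:
    """A toy tool the host's LLM can call."""
    return len(text.split())

@mcp.resource("notes://readme")
def readme() -> str:
    """A toy resource: static context the host can pull into a conversation."""
    return "Project notes the connected model can read as context."

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; the host connects as the client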
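And Weng's 'LLM + memory + planning + tool use' formula maps almost term for term onto a toy agent loop. This is a sketch, not a real framework: `llm` is a stub you would wire to an actual model, and the CALL/DONE action format and the toy tools are assumptions made up for the example.

```python
# Toy agent loop annotating each of Weng's four ingredients.
from typing import Callable

def llm(prompt: str) -> str:                                  # LLM
    raise NotImplementedError("plug a real model call in here")

TOOLS: dict[str, Callable[[str], str]] = {                    # tool use
    "search": lambda q: f"(pretend search results for {q!r})",
    "shout": lambda s: s.upper(),
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory: list[str] = []                                    # memory
    memory.append("plan: " + llm(f"Break this goal into steps: {goal}"))  # planning
    for _ in range(max_steps):
        action = llm(
            "Given the goal, plan, and notes below, reply either\n"
            "'CALL <tool> <input>' or 'DONE <answer>'.\n" + "\n".join(memory)
        )
        if action.startswith("DONE"):
            return action.removeprefix("DONE").strip()
        _, tool, arg = action.split(" ", 2)
        memory.append(f"{tool}({arg!r}) -> {TOOLS[tool](arg)}")
    return "step budget exhausted"
```

Every loop iteration appends tool results to memory and re-plans against it, which is the 'tight feedback loop' Swyx describes, shrunk to a few lines.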


Forbes
17-04-2025
- Business
8 Major Problems With AI Initiatives In Enterprise
With so much enthusiasm about the rapid advances we've made in using LLMs this year, some of the remaining barriers and bottlenecks tend to get lost in the shuffle. As with all prior technologies, companies have to introduce an AI project the right way. The way I've heard it said is that new workflows and tools need to be a help, not a hindrance, to a company. We often talk about this as a productivity issue: instituted correctly, a new project helps workers be more productive, confident, and on top of their jobs. Done poorly, it can mire them in low productivity and actually inhibit the work that needs to get done. Let's talk about some of the specific problems I've heard discussed in panels and interviews around the AI industry as 2025 got underway.

Start with low user adoption. This is another way that AI follows all prior technologies. Yes, it's a more powerful technology with a lot more versatility for implementation, but you still need stakeholder buy-in. Otherwise, you're starting from a position of weakness, and it's an uphill battle. This Substack piece on common challenges uses the phrase 'low user adoption,' which basically means that people aren't choosing to use a new AI tool or system. That on its own is a core problem for enterprise AI.

A second problem is the absence of a strategic plan. Suppose someone in a company orders everyone to immediately 'move everything to AI.' There are a couple of potential problems with this. First, there's a lack of clarity about what such directives mean. There's also likely to be a lot of overlap and redundant effort, as well as chaos inside departments. It's better to create a detailed strategic plan and go from there.

Then there's ongoing support. In some ways, it's easier to create an initiative than it is to manage it. Suppose someone in-house, or a vendor, has dreamed up and built some kind of AI program, but once it's in production, there are issues with adoption and use. Users have questions, and these are often front-line people using the tools for vital business processes. Who do you go to in order to iron these questions out? If each department says 'this isn't our problem,' you have an intractable situation on your hands. So that's something else to look out for: not just support in the initial phases, but support later on, as the AI systems become part of workflows and business processes.

Next is replacement versus augmentation. This issue starts with a big question: will AI agents replace humans? You can check out this input from none other than Bill Gates, who suggests that we 'won't need humans' for most things as AI becomes ascendant. 'There will be some things we reserve for ourselves,' Gates famously said of human initiatives. 'But in terms of making things and moving things and growing food, over time those will be basically solved problems.' For more, you can listen to a recent edition of one of my favorite podcasts, AI Daily Brief with Nathaniel Whittemore. Talking with Nufar Gaspar, Whittemore suggests that AI agents inherently replace humans: because they're so naturally capable, it's easy for companies to just plug them in and get rid of the human who was doing the job before. 'I think that agents are inherently more replacing than augmenting, at least in terms of how people think about them,' he argues.
'Currently, you know, with agents, the ROI that companies are looking for from agents is: can they do a thing more cheaply, more efficiently, more quickly than our people do it?' He notes that companies may choose to reinvest in human potential, or not. 'What that doesn't say is how companies are going to choose to use those new efficiency gains,' he adds. 'Are they going to just slash headcount, or are they going to reinvest people's time that's now freed up in further growth? You know, each company has to make those decisions.' That gap between the theory of AI as assistive and the reality of agentic replacement is a big potential problem in any company.

Another problem is 'AI washing.' This is a somewhat different issue, with less to do with company integration and a lot more to do with branding and reputation. The basic idea is that companies have to be sincere about AI adoption, not just pay lip service to this kind of initiative. Here's some of our own Forbes reporting on the topic from Sujai Hajela a few years ago; a lot of it is still applicable now (and here's more from CNN). 'AI washing' is analogous to greenwashing, where companies claim to be more ecological than they are. It's just a best practice to avoid this kind of mismatch, and the perception that a company doesn't practice what it preaches.

Ethics is its own problem area. Time and time again, we see companies moving ahead with AI projects without thinking about the ethics of the thing: bias, privacy issues, and so on. Top figures in the tech world have warned against leaving ethics out of the equation, including voices like Bill Gates and Elon Musk early in the AI revolution, as well as others more recently who are warning about the intersection of AI with privacy and human data ownership.

Security and compliance matter, too. AI systems need to be used in a secure way. Going back to the podcast, Whittemore talks about compliance with standards like HIPAA and the European GDPR. All of this is similarly important in AI implementation and design.

Finally, there's planning. Simply put, companies need a good roadmap to be successful. Again, AI is unique in its scope, but not unique in the best practices that businesses should apply. Anything is less effective without a good plan, so companies should make sure that AI factors into their business planning in a concrete and definable way.

That's all for now: think about these common recommendations when it comes to AI adoption in the enterprise.