Latest news with #NathanielWhittemore


Forbes
9 hours ago
13 Big Hunches On AI And Consultants
[Image: 'The Scream', 1893, one of four versions painted by Edvard Munch. From the Nasjonalgalleriet, Oslo. Photo: Art Media]

Nathaniel Whittemore has some thoughts about consulting in the AI era. For those who might not know, Whittemore is the intrepid host of the AI Daily Brief, a podcast I tend to pay attention to as we move through a profound technological shift that has most of us looking for answers.

In a rare on-the-road episode of the podcast, Whittemore went through his own LinkedIn post about consulting companies, gleaning predictions and insights (13 of them) on how these firms will work as AI continues to take hold. Chunks of legal and accounting work, he suggested, will get sent to AI (that's #1); he calls them 'total bye-byes.' Consulting, he said, will 'distill' things into a targeted set of realities around AI. He described a kind of musical chairs (that's #2) in which companies move into other market areas, big firms to mid-market and so on, as the change happens.

As for the displacement of humans, Whittemore pointed to a demand for external validation that, he suggested, will keep the HITL (human in the loop) in play. Enterprises will divide into two categories: those slicing headcount, and those using a 'flexible roster' of partners, consultants and fellow travelers. 'Personally, I'm pretty inclined towards smaller, more liberal organizations powered by a flexible roster of partners and consultants,' he said, 'because I think it aligns also with people's (takes) on their professional services, figuring out how far they can take it with AI.'

Then there's Whittemore's theorization of new products and new lines of business, as well as 'new practices' (that's #4 on the list; read the list for the others). His caveat: these predictions are 'ridiculously generic,' and nobody really knows what the future holds. 'I think the way that this plays out is going to be pretty enterprise by enterprise,' he said.

In the second half of the podcast, he suggested that business people are going to be 'Jerry Maguire-ing,' starting to form a different picture of enterprise value. He talked about small teams of consultants managing large swarms of agents as a 'default model' and suggested that 'pricing experiments' are going to become common. Buyers, he concluded, are going to prioritize intangibles.

Here's another of Whittemore's points: people want to work with people they like working with, or, in the parlance of his list, 'judgement, taste and EQ all also rise as moats for consultants.' (That's #10.)

More from Whittemore: the value of prototyping, a prediction of companies operating in what he called 'iteration-land,' and an advantage to parties who can build frameworks for the AI future. 'Continuous iteration requires data measurement, analytics, and systems to make sense of it all,' he said. 'I think these are table stakes aspects of engagements now; consultants and professional services firms just have to build in an understanding and an expectation of how they're going to measure their impact.'

There's a lot of insight here, and if you want to go back and read the whole thing over again, I wouldn't blame you. Whittemore goes over a lot of his analysis at lightning speed.
But you do see certain things crop up time and time again: the idea that rapid change is going to shake out in particular ways, and that companies and people need to pivot as the AI revolution keeps heating up. Check it out.


Forbes
22-04-2025
Companies Race Toward AI Agent Adoption
People have been talking about it for a while, but now the industry is seeing a genuine rush to adopt what can be a game-changer for companies. Of course, the rise of the AI agent is no small thing. Many people who had a front-row seat to the cloud and all of the disruption it brought understand that it was a pebble in the ocean compared to what's coming.

The prospect of simulating human decision-making, and handing knowledge work to large language models, is a big deal. It points toward replacing human workers with something much less costly and more durable: workers who never need lunch or a bathroom break. Yes, AI engines are far more efficient than humans in many ways, and now we're seeing that bear out in the enterprise markets.

Reading through some reports on the utility of enterprise AI agents, I noticed that many of them refer to customer support, as well as marketing and process support or fulfillment, as popular implementations. A few other top use cases involve knowledge assistance, generative AI in existing workflows, and the daily use of productivity tools by front-line workers. That last one speaks to the often-promoted idea of the 'human in the loop,' or HITL, and the desire that AI not replace humans, but augment their work instead. Practically, though, some of these AI agents leave us wondering: what is the HITL actually needed for?

Consultants and reporting companies are chiming in with rosy projections for the year ahead. One market research estimate puts the enterprise AI agent market at $3.6 billion in 2023 and $139 billion by 2033. Deloitte adds the following projection: 25% of companies are expected to embrace AI agents by 2025, and 50% two years later. However, given that nearly all companies everywhere will want some of this functionality, the numbers, in both cases, are likely to be much higher. And here's this from a McKinsey report: 'McKinsey research sizes the long-term AI opportunity at $4.4 trillion in added productivity growth potential from corporate use cases.'
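For a sense of how aggressive that first projection is, here is a quick back-of-the-envelope check of the implied compound annual growth rate. The dollar figures are from the estimate cited above; the arithmetic is mine:

    # Implied compound annual growth rate from the cited estimates:
    # $3.6B in 2023 growing to $139B in 2033, i.e. over ten years.
    start, end, years = 3.6, 139.0, 10
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # about 44% per year, every year, for a decade

Ten straight years of roughly 44% growth is the kind of trajectory that very few markets ever sustain, which is worth keeping in mind as the forecasts pile up.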
In a recent episode of the AI Daily Brief podcast (one of my favorites), host Nathaniel Whittemore talks about a jump in executive estimates of investment: from $9 million last quarter to $114 million in Q1 of 2025. 'I think that those of us, probably most of you, who are listening, who have used these tools, find them very quickly making their way into your daily habits,' Whittemore adds. 'I would expect to see (the numbers do) nothing but increase in the coming quarters.'

He also talks about a KPMG study of companies launching enterprise pilots after experimenting with the technology, suggesting that the share doubled from 37% in Q4 to 65% in Q1, and that virtually every company intends to deploy these agents at some point: '99% of organizations surveyed said that they plan to deploy agents, suggesting to me that 1% of organizations misread the question,' he adds. Whether or not that 1% deliberately forswears the technology is probably beside the point: we have to anticipate that demand is going to be very high.

Although models like OpenAI's o3 are evolving quickly, and no-code tools are democratizing the process of creating applications, there are still some clear boundaries to what AI can do in the workplace. A main one consists of accuracy challenges; the most common word applied to this for LLMs is 'hallucinations.' Experts are finding, in general, that models that do more inference-time reasoning are producing more hallucinations, and that's a problem as these uses become more important to the companies that have already jumped on the bandwagon. Case in point: a news story about a customer support bot named Sam at Cursor, which apparently invented a nonexistent policy to explain why users were being logged out of the platform. The shakeout showed why these kinds of mistakes make a difference.

Another concern is hacking, where bad actors could take advantage of the functionality to compromise systems. A third is regulation: what is the landscape around these agents going to be like? All of these should be considered as top brass mull the opportunities.

I also came across a handy chart and process description from Gartner, the firm whose Magic Quadrant reports have been so helpful in the IT world. Gartner representatives suggest mapping the enterprise pain points, and then addressing them with AI agents. Address them, to do what? 'Enhance customer experiences,' the authors write. 'Streamline operations, and uncover new products, services, or revenue streams.'

Another approach that might deal with hallucinations is ensemble learning: having one model check the work of another can keep hallucinations and mistakes from percolating into the places where AI agents help with production (a minimal sketch of the idea closes this piece). Some suggest that even access to web search can help mitigate a model's hallucinations, which is another thing brought up on that AI Daily Brief episode.

In so many of the events that I've been privileged to attend, and even host, over the last year or so, I have heard the same refrain: that we have to get ready to welcome AI agents into the fold. What all of this tells us is that enterprise AI agent adoption is not just a flash in the pan. It's happening all around us, and we should be paying attention.
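And, as promised, a minimal sketch of that ensemble idea: one model drafts an answer, a second model checks it, and anything that fails the check goes to a person instead of shipping. The chat() helper here is a hypothetical stand-in for whatever model API you use; the pattern, not the names, is the point.

    # Sketch: a second model reviews the first model's draft before it ships.
    # chat() is a hypothetical stand-in for a real model API.

    def chat(model: str, prompt: str) -> str:
        raise NotImplementedError("wire this to your LLM provider of choice")

    def answer_with_checker(question: str) -> str:
        draft = chat("worker-model", question)
        verdict = chat(
            "checker-model",
            f"Question: {question}\nDraft answer: {draft}\n"
            "Reply PASS if the draft is accurate and well supported, else FAIL.",
        )
        if verdict.strip().startswith("PASS"):
            return draft
        # A failed check routes to a person rather than shipping a guess.
        return "Escalated to a human reviewer."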


Forbes
18-04-2025
Everyone's Getting Better At Using AI: Thoughts On Vibe Coding
As we look at the landscape around the second quarter of 2025, we see amazing developments. You can look at what's happening at conferences and trade shows. You can ask engineers what they're doing, or consult with a CEO. Everywhere you look, things are changing at breakneck speed.

What's the difference between people who actually use the keyboard to write code, and others who manage people and processes at an abstract level? Well, in the AI age, that gap is getting smaller quickly. But there's still an emphasis on people who know how to code, and especially people who know how to engineer. Coding is getting automated, but engineering is still a creative component of the human domain - for now.

I was listening to a recent episode of the AI Daily Brief, where Nathaniel Whittemore talked to Shawn Wang, professionally known as 'Swyx,' about valuing the engineering role. 'It has always been valuable for people who are involved to keep the pulse on what builders are building,' Swyx said. The two conceded, though, that right now 'building' is becoming a vague term, as it's getting easier to develop a project on a new codebase: you just tell AI what you want, and it builds it. Having said that, in putting together events for the engineering community, Swyx sees the effort as vital to the industry itself. 'The people who have hands on a keyboard also need a place to gather,' he said, noting that for some of these events, attendees have to publish or otherwise prove their engineering capabilities.

Later on, in thinking about how this works logistically, the two talked about Model Context Protocol (MCP), a newer standard whose tooling lives on GitHub, and how it's being used. MCP connects LLMs to the context that they need. It involves prebuilt integrations, a client-server architecture, and APIs, as well as environments like Claude Desktop. The 'hosts' are LLM applications, each 'client' maintains a 1:1 connection to a server, and servers supply context, data and prompts. The system uses a transport layer for communication events including requests, results and errors. 'You're not stuck to one model,' Swyx pointed out, illustrating how versatile these setups can be.

Noting an 'S curve' for related technology, Swyx discussed the timing of innovations, invoking Moore's law. 'If you're correct, but early, you're still wrong,' he said, mentioning how companies are 'moving away from a cost plus model to one where you deliver outcomes.' Paraphrasing Shakespeare, he suggested that at companies like Google, execs are asking: 'To MCP, or not to MCP?' And there's another question for implementers: 'How much of my job can you do?' As for a timeline for MCP, Swyx cited the work of Anthropic's Alex Albert. 'The immediate reaction was good,' he said. 'There was a lot of immediate interest. I don't think there was a lot of immediate follow through.'
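For readers who want to see what that host/client/server split looks like in practice, here is a minimal server sketch using the FastMCP helper from the official Python SDK (github.com/modelcontextprotocol/python-sdk). The word_count tool is my own toy example, not something from the episode:

    # Minimal MCP server: one tool, served over the stdio transport, so an
    # MCP host such as Claude Desktop can discover and call it.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def word_count(text: str) -> int:
        """Count the words in a piece of text."""
        return len(text.split())

    if __name__ == "__main__":
        mcp.run()  # stdio is the default transport

Even in this toy, the division of labor Swyx describes is visible: the host LLM decides when to call the tool, the client relays the request, and the server owns the implementation.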
Later on, Swyx brought up the contributions of Lilian Weng, who he said defined an AI agent as 'LLM + memory + planning + tool use' (a formula sketched in code at the end of this piece). He also laid out his own definition based on the acronym IMPACT, noting that he sees a lot of this type of work as disordered or unstructured, and that people should ideally be able to define agent engineering well. The 'I,' he said, stands for intent and intensity, goals, and evaluations. 'M' is memory; 'P' is planning. 'A' is authority. 'Think of (the agent) as like a real estate agent,' he said, suggesting that the agent should have specialized knowledge. 'C' is control flow, and 'T' is tool use, which he said everyone can agree on. Swyx called for a 'tight feedback loop' and processes that 'organically take traction' in the enterprise.

This part of the conversation was absolutely fascinating to me as a clear-eyed assessment of the different ways people use the term 'vibe coding.' I've written about how figures like Andrej Karpathy and Riley Brown define this practice of working with AI that can craft code. But there are two interpretations of the phrase, and they're radically different. One that the duo mentioned is that the human programmer gets the vibe of the code and analyzes it as a professional; they need to already have some knowledge of what code is supposed to look like. But then there's the other definition. 'Vibe coding gets taken out of context,' Swyx said. In this latter interpretation, you don't need expertise, because you just evoke the vibe of the code and let the AI figure it out. But this way, he said, you can get into trouble and waste dollars.

As for best practices in vibe coding, Swyx suggested dealing with legacy code issues, having the appropriate skepticism about the limitations of vibe coding, and sampling the space. 'There's something here,' he said, displaying enthusiasm for the democratization of code. 'I don't know if vibe coding is the best name for it.'

In addition to all of the above, people are going to need some form of expertise, whether they are leaders, or builders, or both. Regardless of which way you view the new coding world, there's little question that reskilling for humans is going to be a piece of the puzzle. This resource from Harvard talks about tackling the challenge: 'As new technologies are integrated into organizations with greater frequency, transforming how we work, the need for professionals to adapt and continue to learn and grow becomes more imperative.' I agree.

All of this is quite instructive at a moment when companies are looking for a way forward. Let's continue with this deep analysis of business today, as AI keeps taking hold throughout the rest of the year.
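And as promised, here is what Weng's 'LLM + memory + planning + tool use' formula might look like as a loop. This is purely illustrative, not anyone's production agent: llm() is a hypothetical stand-in for a real model call, and the only tool is a toy calculator.

    # Sketch of Weng's formula: LLM + memory + planning + tool use.

    def llm(prompt: str) -> str:
        raise NotImplementedError("wire this to your model provider")

    def run_agent(goal: str, max_steps: int = 5) -> str:
        memory: list[str] = []  # memory: notes carried between steps
        for _ in range(max_steps):
            plan = llm(  # planning: the model decides the next move
                f"Goal: {goal}\nNotes so far: {memory}\n"
                "Reply 'DONE: <answer>' or 'TOOL: <arithmetic expression>'."
            )
            if plan.startswith("DONE:"):
                return plan.removeprefix("DONE:").strip()
            if plan.startswith("TOOL:"):  # tool use: evaluate the expression
                expr = plan.removeprefix("TOOL:").strip()
                memory.append(f"{expr} = {eval(expr)}")  # unsafe outside a demo
        return "Step budget exhausted without an answer."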


Forbes
31-03-2025
5 Reasons You Should Still Learn To Code
With AI doing so much of the coding and software development happening these days, do humans still need to learn all of these computer programming skills? It's a big question for a lot of people making career choices, and for leaders and talent developers, too. I'm going to take a stab at answering it using resources like a recent edition of the AI Daily Brief, the Nathaniel Whittemore podcast I listen to, where he broke a lot of this down. I've also heard plenty of input from movers and shakers across the industry on that essential question: should people still learn to code?

But before I do that, I'd like to go back to the term 'vibe coding': the idea that humans illustrate the broad strokes of a program and use AI to complete the details. Vibe coding doesn't mean that you're completely removed from the coding process. But it does mean automating a lot of this work. That said, here are some of the reasons I've heard most commonly expressed for people continuing to learn programming languages.

Input from Steve Jobs and others promoting the practice of coding resonates in the context of the tasks that career professionals have to do. 'Everybody in this country should learn how to program a computer,' the late tech mogul said. 'It teaches you how to think.' In a way, that says it all. 'Certainly, there's an argument that in a world where even more of our world is mediated by code, the particular genre of thinking that coding enables is even more valuable,' Whittemore adds. In the podcast, Whittemore also suggests that people who know how to write a quicksort or a hash table may be better at using AI to code than others who don't.
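To make that concrete, here's the kind of fundamental Whittemore is pointing at (the example is mine, not the podcast's): a quicksort compact enough to hold in your head, which is exactly the sort of mental model that lets you judge whether AI-generated code is actually sound.

    # Quicksort: pick a pivot, partition the rest, recurse on each side.
    def quicksort(items: list) -> list:
        if len(items) <= 1:
            return items
        pivot, *rest = items
        smaller = [x for x in rest if x <= pivot]
        larger = [x for x in rest if x > pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]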
Here's another argument for humans coding: the AI doesn't have all of the contextual details about your business. Unless you've connected something through an API, or entered a whole lot of data, the human still knows more about the enterprise's activity than the computer does. So there are some aspects of the work that AI won't be as capable at.

Basically speaking, although AI can excel at code syntax, logic and reasoning, it reaches a limitation when it comes to creativity. I'll use another example from the podcast, where Whittemore talked about how computers and AI may not be able to come up with new programming languages. He also invoked Andrej Karpathy's hot new slogan that 'English is the hottest new programming language,' but suggested that we can still utilize the syntax of languages like Python and C.

Many experts in the field have also pointed out that humans can be essential in debugging and fixing glitches in code. The example Whittemore uses is working with the tool Lovable to create a codebase: when something goes wrong, he notes, it's important to be able to get in there and fix it. So that's another reason for human involvement in coding processes.

Now that I've enumerated those arguments for continued human coding, let's talk about how this is approached in the industry. Later in the podcast, Whittemore talked about how senior developers may use AI instead of junior developers, and there might not be any junior developer jobs left. So should people stop learning to code if they won't be able to get a job as a junior developer? That, he says, is missing the big picture. 'Learning to code to get a junior developer job seems a little insane right now,' he says. 'On the flip side, I think that there is basically nothing higher leverage that you can be doing right now than learning this new vibe coding paradigm.' Don't learn the traditional way, he urges; learn differently, and combine your coding knowledge with a knowledge of how the modern world works: how to create things, how to move the needle with so much creative power at your fingertips.

I'll leave out the part about predictions by notable entrepreneurs, like Dario Amodei's suggestion that AI will be doing 90% of coding soon, or Sundar Pichai saying that AI now generates about a quarter of Google's new code. Whittemore lays out some of the arguments for and against larger percentages of AI responsibility for code, and you can find that in the audio itself.

Whittemore ends that particular podcast with a neat reference to an atavistic literary movement, combining it with AI, and not for the first time, either. Not too long ago, I looked up the word 'shoggoth' as it's used in the AI community, and found that it's a Lovecraft term for something like an amorphous blob. Whittemore, for his part, talks about how he used AI to generate a game like the classic Oregon Trail that Gen Xers played on school library computers, the one with the monochrome stick drawings. He took that model, he said, and applied it to the Lovecraftian world for an interesting look at AI-generated game development. He also apparently worked on new sets of Magic: The Gathering resources. All of this shows how these tools work in aid of greater human creativity. 'Don't tell us,' Whittemore says. 'Show us.'

So there you have it: several reasons to stay involved in knowing the syntax and use of modern programming languages, even though AI can do a lot of it by itself.