Corporate giants ramp up AI spending amid persistent ‘FOMO’

Yahoo · 07-05-2025
Good morning. Over the past few years, AI has become a major focus for companies across industries.
'There's been a huge amount of activity since [OpenAI's] ChatGPT launched about two and a half years ago,' Benedict Evans, an independent analyst, told the audience at Fortune Brainstorm AI in London on Tuesday. Evans has spent 20 years analyzing media and technology, having worked in equity research, corporate strategy, and venture capital.
'We've got lots of new buzzwords and acronyms, and people talk about [AI] agents—or, if they want to sound clever, they say 'agentic,' which I think is a bit like saying 'societal' instead of 'social,'' he said. But most profoundly, there has been a huge increase in the amount of AI investment, Evans said.
'Last year, the Big Four cloud companies spent about $220 billion building AI infrastructure,' he explained. By the 'Big Four,' Evans means Alphabet, Meta, Amazon Web Services, and Microsoft. 'This year, based on their guidance, they'll probably spend something over $300 billion,' he said. For example, Meta recently raised its 2025 capital expenditure outlook to between $64 billion and $72 billion to support its AI buildout and the expected costs of infrastructure hardware, Fortune reported.
Along with the massive AI investment by companies, there's still 'a strong sense of FOMO'—fear of missing out, he said. 'You don't want to be the company that didn't participate, and then be faced with having other companies dictate how you could do business,' he explained. That's what happened to Meta in the last platform shift, and that's why Google created Android, Evans said.
Regarding generative AI, some use cases that are really working for people include job development, marketing, and customer support, he said. However, he noted that some organizations still struggle to identify the best use cases for generative AI.
'There are people who've looked at it a couple of times and said, 'I don't know what to do with this,'' Evans explained. 'It's not really clear how you handle error rates, or where you should put the human in the loop.' People are also worried about data security.
For example, the "2025 AI Survey: Generative AI Adoption in the Enterprise," released by Writer in March, found that 72% of C-suite leaders surveyed said their company has faced at least one challenge on their journey to generative AI adoption. Internal power struggles, underperforming tools, and clashing perspectives among executives and employees are among the obstacles. However, the majority of workers and executives who have actively used generative AI say they benefited from the tools, according to the report.

Related Articles

Here Are GPT-5 Prompt Engineering Insights Including Crucial AI Prompting Tips And Techniques

Forbes · an hour ago

In today's column, I provide GPT-5 prompt engineering tips and techniques that will aid in getting the best outcomes when using this newly released generative AI. I'm sure that just about everyone by now knows that OpenAI finally released GPT-5, doing so after a prolonged period of immense and wildly fantastical speculation about what it would be like. Well, now we know what it is (see my in-depth review of GPT-5 at the link here).

The bottom line is that GPT-5 is pretty much akin to all the other generative AI and large language models (LLMs) when it comes to prompting. The key is that if you want to ensure that GPT-5 works suitably for your needs, you must closely understand how GPT-5 differs from prior OpenAI products. GPT-5 has distinctive features and functionality that bring forth new considerations about composing your prompts. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Top-notch prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI.

Prompting Is Still Tried And True

The first place to begin when assessing GPT-5 from a prompt engineering perspective is that prompts are still prompts. Boom, drop the mic. I say that somewhat irreverently. Here's the deal. There was prior conjecture that perhaps GPT-5 would turn the world upside down when it came to using prompts. The floated ideas of how GPT-5 might conceivably function were astounding and nearly out of this world ('it will read your mind', 'it will know what you want before you even know', etc.). The truth is now known.
GPT-5 is essentially a step up from ChatGPT and GPT-4, but otherwise you do prompting just like you've done all along. There isn't a new kind of magical way to write prompts. You are still wise to compose prompts as you've been doing since the early days of contemporary generative AI.

To clarify, I am emphasizing that you should astutely continue to write clearly worded prompts. Be direct. Don't be tricky. Write prompts that are long enough to articulate your question or the task at hand. Be succinct if possible. Definitely don't be overly profuse or attempt to be complicated in whatever your request is. And so on. Those are all golden rules and remain perfectly intact when using GPT-5. I am confident that all the specialized prompt engineering techniques that I've previously covered will generally work appropriately with GPT-5. Some might require a tweak or minor refinement, but otherwise, they are prudent and ready to go (see my list at the link here).

Auto-Switching Can Be A Headache

We can next consider how to artfully accommodate GPT-5 by composing prompts that GPT-5 will efficiently and effectively act on. The biggest aspect that entails both good news and bad news about GPT-5 is that OpenAI decided to include an auto-switcher. This is a doozy. It will require you to potentially rethink some of your prompting, since it is quite possible that GPT-5 isn't going to make the right routing decisions on your behalf. Allow me a moment to explain the quandary.

It used to be that you had to choose which of the various OpenAI products you wanted to use for a particular situation. There had been an organic expansion of OpenAI's prior models, in the sense that there were GPT-4o, GPT-4o-mini, OpenAI o3, OpenAI o4-mini, GPT-4.1-nano, and so on. When you wanted to use OpenAI's AI capabilities, you had to select which of those available models to utilize. It all depended on what you were looking to do. Some were faster, some were slower.
Some were deeper at certain classes of problems, others were shallower. It was a smorgasbord that required you to pick the right one for your task at hand. The onus was on you to know which of the models was particularly applicable to whatever you were trying to do. It could be a veritable hit-and-miss process of selection and tryouts.

GPT-5 has now uplifted those prior versions into new GPT-5 submodels, and the overarching GPT-5 model chooses which GPT-5 submodel might be best for whatever problem or question you happen to ask. The good news is that, depending on how your prompts are worded, there is a solid chance that GPT-5 will select a GPT-5 submodel that will do a bang-up job of answering your prompt. The bad news is that the GPT-5 auto-switcher might choose a less appropriate GPT-5 submodel. Oops, your answer will not be as sound as if the more appropriate submodel had been chosen. Worse still, each time that you enter a prompt or start a new conversation, the GPT-5 auto-switcher might switch you to some other GPT-5 submodel, back and forth, doing so in a wanton fashion. It can make your head spin, since the answers can potentially vary dramatically.

Craziness In Design

The average user probably won't realize that all these switcheroo mechanics are happening behind the scenes. I say that because GPT-5 doesn't overtly tell you that it is taking these actions. It just silently does so. I appreciate that the designers apparently assumed that no one would care or want to know what is going on under the hood. The problem is that those who are versed in using AI and are up to speed on prompting are being bamboozled by this hidden behavior. A savvy user can almost immediately sense that something is amiss. Frustratingly, GPT-5 won't let you directly control the auto-switching. You cannot tell the AI to use a particular submodel.
You cannot get a straight answer if you ask GPT-5 which submodel it intends to use on your prompt. It is perhaps like trying to get the key to Fort Knox. GPT-5 refuses to play ball. Users have complained vociferously that something needs to be done about this lack of candor by GPT-5 regarding the model routing that is occurring. Sam Altman posted on X suggesting that changes on this aspect are coming (see his X posting of August 8, 2025).

The thing is, we can applaud the desire for a seamless, unified experience, but it is similar to having an automatic transmission on a car. Some users are fine with an automatic transmission, but other, more seasoned drivers want to know what gear the car is in and be able to select the gear that they think is most suitable for their needs.

Prompting GPT-5 For Routing

As the bearer of bad news, I should also add that the auto-switching comes with another said-to-be-handy internal mechanism that decides how much processing time will be undertaken for your entered prompt. Again, you have no particular say in this. It could be that the prompt gets tons of useful processing time, or maybe the time is shortchanged. You can't especially control this, and the settings are not within your grasp (as an aside, if you are a developer using the API, you have more leeway in dealing with this; see the OpenAI GPT-5 System Card for the technical details).

Let me show you what I've been doing about this exasperating situation. First, here is a mapping of the prior models to the GPT-5 submodels: The GPT-5 submodels are considered successors and depart from the earlier models in various ways. That being said, they are still roughly on par as to the relative strengths and weaknesses that previously prevailed. I will show you what I've come up with to try to sway the GPT-5 auto-switcher.

Prompting With Aplomb

Suppose I have a prompt that I believe would have worked best on GPT-4o.
But I am using GPT-5, thus I am not using GPT-4o, plus OpenAI has indicated that it will sunset the prior models, so you might as well get used to using GPT-5. Darned if you cannot simply tell GPT-5 to use gpt-5-main (realizing that gpt-5-main is now somewhat comparable to GPT-4o, per my chart above). The AI will either tell you it doesn't function that way, or it might imply that it will do as you ask yet then do something else. Bow to the power of the grand auto-switcher. This eerily reminds me of The Matrix.

Anyway, we need to somehow convince GPT-5 to do what we want, but we must do so with aplomb. Asking straightaway isn't a viable option. Swaying the AI is our best option at this ugly juncture. In the specific case of my wanting to use gpt-5-main, here is a prompt that I use that seems to do the trick (much of the time): It appears that by emphasizing the nature of what I want GPT-5 to do, it is possible to sway the direction in which the auto-switcher will route my next prompt. Not only will I possibly get the submodel that I think is the best choice for the prompt, but observe that I also made a big deal about the depth of reasoning that ought to take place. This potentially helps to kick the AI into giving an allotment of processing time that it would otherwise, by enigmatic means, have perhaps inadvertently shortcut (OpenAI refers to processing time as so-called 'thinking time', an anthropomorphizing of AI that I find to be desperate and despairing).

I am not saying this sway-related prompting is a guaranteed result. After trying a bunch of times, it seemed to be working as hoped for. I came up with similar prompts for each of the other GPT-5 submodels. If there is enough interest expressed by readers, I will do a follow-up with those details. Be on the watch for that upcoming coverage. On a related note, I will also soon be covering the official GPT-5 Prompting Guide that OpenAI has posted, along with their Prompt Optimizer Tool.
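For developers, these knobs are more directly reachable through the API than through the chat app. Below is a minimal, illustrative Python sketch of the sway-preamble idea; the preamble wording and the `build_request` helper are my own assumptions (the author's actual prompt isn't reproduced in this article), and passing `reasoning_effort` for GPT-5 is an assumption based on OpenAI's reasoning-model API parameters — verify against the current API reference before relying on it. No network calls are made here; the dict is simply the payload you would hand to a chat-completion endpoint.

```python
# Illustrative sketch: steer the auto-switcher with a prompt preamble, and
# (API only) request a processing-effort level. Hypothetical wording and
# helper names; not the author's actual prompts.

SWAY_PREAMBLES = {
    # Describe the kind of work you want, so routing leans the right way.
    "fast-chat": "This is a quick, conversational request; no deep analysis is needed.",
    "deep-reasoning": "This task requires careful, step-by-step reasoning; take your time.",
}

def build_request(prompt: str, profile: str = "fast-chat") -> dict:
    """Assemble a chat-completion payload with a routing-sway preamble."""
    body = {
        "model": "gpt-5",
        "messages": [
            {"role": "system", "content": SWAY_PREAMBLES[profile]},
            {"role": "user", "content": prompt},
        ],
    }
    if profile == "deep-reasoning":
        # Assumed parameter, per OpenAI's reasoning-model API; check the docs.
        body["reasoning_effort"] = "high"
    return body

req = build_request("Prove that the square root of 2 is irrational.", "deep-reasoning")
```

The point of keeping the preambles in a dict is that, as with the author's per-submodel prompts, you end up with one reusable sway preamble per routing profile rather than retyping the emphasis each time.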
The guide and the optimizer are aimed primarily at AI developers, not at day-to-day, ordinary prompting in GPT-5.

Watch Out That Writing Is Enhanced

On the writing side of things, GPT-5 has improvements in a myriad of writing aspects. The ability to generate poems is enhanced. Depth of writing, and the AI being able to craft more compelling stories and narratives, seems to be an added plus. My guess is that the everyday user won't discern much of a difference. A more seasoned user is bound to notice that the writing has gotten an upgrade. I suppose it is something like getting used to a third grader and now being conversational with a sixth grader. Or something like that.

I use this prompt to get GPT-5 to be closer to the way it was in the GPT-4 series: That seems to get me the kind of results that I used to see. It is not an ironclad method, but it generally works well. I realize that some people are going to scream loudly that I ought not to suggest that users revert to the GPT-4 writing style. We all should accept and relish the GPT-5 writing style. Are we going backwards by asking GPT-5 to speak like GPT-4? Maybe. I grasp the angst. It's up to you, and I'm not at all saying that everyone should use this prompting tip. Please use it at your personal discretion.

Lies And AI Hallucinations

OpenAI claims that GPT-5 is more honest than prior OpenAI models, plus it is less likely to hallucinate (hallucination is yet another misappropriated word used in the AI field, describing when the AI produces fictionalized responses that have no basis in fact or truth). I suppose it might come as a shock to some people that AI has been and continues to lie to us; see my discussion at the link here. I would assume that many people have heard or even witnessed that AI can make things up, i.e., produce an AI hallucination.
Worries are that AI hallucinations are so convincing in their appearance of realism, and the AI has such an aura of confidence and rightness, that people are misled into believing false statements and, at times, embrace its crazy assertions. See more at the link here. A presumed upbeat consideration is that GPT-5 apparently reduces the lying and the AI hallucinations. The downbeat news is that it isn't zero. In other words, it is still going to lie and still going to hallucinate. This might happen on a less frequent basis, but it nonetheless remains a chancy concern.

Here is my prompt to help try to further reduce the odds of GPT-5 lying to you: Here is my prompt to help further reduce the odds of GPT-5 incurring a so-called hallucination: My usual caveats apply, namely, these aren't surefire, but they seem to be useful. The crucial motto, as always, is that if you use generative AI, make sure to remain wary and alert. One other aspect is that you would be shrewd to use both of those prompts so that you can simultaneously try to strike down the lying and the hallucinations. If you only use one of those prompts, the other unresolved side will potentially arise. Try to squelch both. It's your way of steering out of the range of double trouble.

Personas Are Coming To The Fore

I've repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so. For example, you might tell the AI to pretend to be Abraham Lincoln. The AI will respond based on having pattern-matched on the writings of Lincoln and the writings about Lincoln. It is instructive and useful for students and learners.
I even showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals; see the link here. OpenAI has indicated that it is selectively making available a set of four new preset personas, consisting of Cynic, Robot, Listener, and Nerd. Each of those personas lives up to its name; the AI shifts into a mode reflecting that type of personality. The good news is that I hope this spurs people to realize that personas are built-in functionality, easily activated via a simple prompt. It doesn't take much work to invoke a persona.

Here is my overall prompt to get a persona going in GPT-5: Use personas with due caution. I mention this because some people kind of get lost in a conversation where the AI is pretending to be someone. It isn't real. You aren't somehow tapping into the soul of that actual person, dead or alive. Personas are pretenses, so keep a clear head accordingly.

Prompt Engineering Still Lives

I hope that these important prompting tips and insights will boost your results when using GPT-5. One last comment for now. You might know that some have fervently claimed that prompt engineering is a dying art, and that no one will need to write prompts anymore. I've discussed in great depth the automated prompting tools that try to do the prompting for you (see my aforementioned list of prompt engineering strategies and tactics). They are good and getting better, but we are still immersed in the handwriting of prompts and will continue down this path for quite a while to come. GPT-5 abundantly reveals that to be the case.

A final remark for now. When a newspaper reported Mark Twain as deceased, he is said to have wryly replied that the claim was a bit exaggerated. That was tongue-in-cheek, of course. I would absolutely say the same about prompt engineering. It's here. It isn't disappearing. Keep learning about prompting. You'll be glad that you spent the prized time doing so.
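The persona tip above can be scripted for repeat use. Below is a minimal, illustrative Python sketch; the preamble wording and the `make_persona_prompt` helper are my own assumptions, not the author's actual prompt (which isn't reproduced in this article). Any clear instruction naming the persona, its boundaries, and a guard against invented facts tends to serve the same purpose.

```python
def make_persona_prompt(persona: str, topic_hint: str = "") -> str:
    """Build a persona-invoking preamble to prepend to a conversation.

    Hypothetical wording for illustration; adjust to taste.
    """
    lines = [
        f"For this conversation, adopt the persona of {persona}.",
        "Stay in character in tone and vocabulary, but say you don't know",
        "rather than inventing facts the persona could not plausibly know.",
        "Remind me that this is a simulation if I ask you to confirm",
        "anything as the real person.",
    ]
    if topic_hint:
        lines.append(f"We will mainly discuss: {topic_hint}.")
    return " ".join(lines)

# Example: a study aid, per the article's Abraham Lincoln suggestion.
preamble = make_persona_prompt("Abraham Lincoln", "the Gettysburg Address")
print(preamble)
```

The built-in guard lines reflect the article's caution that personas are pretenses: baking the reminder into the preamble keeps that boundary present even across a long conversation.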

Realtors Are Using AI Images of Homes They're Selling. Comparing Them to the Real Thing Will Make You Mad as Hell

Yahoo · an hour ago

As if suspiciously AI-generated descriptions of real estate listings weren't enough, agents are starting to use AI-generated images of houses that don't exist to sell expensive properties.

The Register spotted a dubious listing of a fully renovated three-bedroom in the UK with a photo (which has since been deleted, but can still be viewed in an archived version) that shows the tell-tale signs of generative AI, including awnings that don't line up, hedges that inexplicably turn into walls, and a flowerbed obstructing a neighboring door. The real, or at least less-edited, version of the image tells a dramatically different story, with a tiled awning, no flowerbed obstructing the path to the neighboring door, and an adjoining hair salon that was seemingly deleted from the tampered-with version entirely.

In some ways, the haphazard use of AI shouldn't come as much of a surprise. The real estate world has long made use of CGI tools to virtually "stage" homes, a cost-cutting measure designed to save agencies from having to physically move furniture around. But given the damning evidence, alongside society's current obsession with shoehorning generative AI into virtually every aspect of modern life, the trend has seemingly found its next logical application. And we're not just talking about photoshopping a couch into the photo of an empty living room. As The Register points out, the original listing of the house pictured above confused entire structural elements, down to the placement of the toilet in the bathroom.

Reached by The Register, a spokesperson for the agency responsible for the AI carnage responded with a "sharp intake of breath and a declaration that 'we're a little busy at the moment.'" It didn't reply to follow-up questions.

The trend paints a troubled picture of an industry that already has a tarnished reputation for twisting the truth.
It's a worrying reality, given that entire livelihoods are put on the line in the pursuit of buying one's dream home. Worse, experts say this kind of use of AI could even run afoul of the law.

"For me, the use of AI for imagery in property listings is a major red flag aligned to what was previously covered by the Property Misdescriptions Act," University of Reading associate professor of building surveying Adrian Tagg told The Register. Tagg explained that building surveyors are "bound by regulations to deliver evidence-based opinion and hold a duty of care to deliver correct, appropriate advice."

"Estate agency has never really had this professional duty, and ultimately it's all about sales and doing 'the deal,'" he explained. "Therefore I'm not surprised that there appears an openness to accept AI when ultimately it's an industry with little obligation to be accountable for their actions."

Despite very real concerns about misleading home buyers, the real estate market has welcomed the age of generative AI with open arms. Consultancy firm McKinsey & Company predicted in 2023 that the tech could "generate $110 billion to $180 billion or more in value for the real estate industry." The consultancy said that generative AI could help with "customer engagement" and generating "new creative content," among other use cases. An entire cottage industry has cropped up, offering cheap "AI-powered virtual staging" and real estate "photo editing" services.

But prospective buyers are understandably furious. Earlier this year, one user in the FirstTimeHomeBuyer subreddit came across a listing on Zillow that included a (since-deleted) picture of a small bungalow in Newport, Kentucky. The heavily edited image didn't even get the house number right, showing a mangled "418" instead of 1026. "Using AI in listing photos should be illegal," the user wrote.

More on generative AI: Microsoft Releases List of Jobs Most and Least Likely to Be Replaced by AI

Alibaba Group Announces Its Pair of Smart Glasses Powered by Its AI Model

Yahoo · 2 hours ago

Alibaba Group Holding Limited (NYSE:BABA) is one of the Best Affordable AI Stocks to Buy. On July 28, Alibaba Group Holding Limited (NYSE:BABA) revealed a pair of smart glasses powered by its AI models. Alibaba plans to launch the smart AI glasses by the end of 2025 in China, marking the Chinese e-commerce giant's first foray into the product category.

Alibaba stated that the Quark AI Glasses will be powered by the firm's Qwen LLM and its advanced AI assistant, Quark. The company also said the glasses will support hands-free calling, music streaming, real-time language translation, and meeting transcription, and will feature a built-in camera as well.

Quark is currently available as an app in China, and Alibaba is entering the hardware space as a way to broaden the reach of its applications. The Quark AI Glasses are a response to Meta's smart glasses, which were designed in partnership with Ray-Ban. The Chinese tech firm will also compete with another Chinese firm, Xiaomi, which launched its AI glasses about a month ago.

Alibaba Group Holding Limited (NYSE:BABA) is a leading technology infrastructure and market platform provider. The company operates through seven segments and follows an AI-driven strategy. Alibaba has emerged as a leader in open-source AI, especially with its Qwen series of AI models.

While we acknowledge the potential of BABA as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

READ NEXT: 30 Stocks That Should Double in 3 Years and 11 Hidden AI Stocks to Buy Right Now.

Disclosure: None. This article was originally published at Insider Monkey.
