CEOs "shoving" AI "into everything" — with mixed results

Axios · 09-03-2025

Companies across corporate America are experimenting with generative AI to see if it can make them better, smarter and more productive — with mixed success.
Why it matters: C-suite AI proponents have been pushing a "use it or get left behind" mentality, but it's often up to the rank and file to figure out how to actually implement AI in their day-to-day work.
What we're hearing: AI is helping workers offload time-consuming menial tasks, and it's handling some complex work better than humans.
Jason Rabinowitz, head of content creation at airline retailing firm ATPCO, told Axios that translating airline marketing content, a job that once took days, now takes "about two hours" with AI's help in handling complex workflows and multiple spreadsheets.
Rabinowitz also described pitting AI-translated materials against human-translated versions in a "blind trial" — and finding that, so far, the AI-translated versions are "more readable and more accurate."
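A blind trial like that is straightforward to replicate in-house. Below is a minimal sketch in Python, assuming you already have paired AI and human translations of the same source segments; the sample data, function names, and console-based voting are illustrative, not ATPCO's actual process.

```python
import random

# Hypothetical paired translations of the same source segments.
# In a real trial these would come from the team's spreadsheets.
pairs = [
    ("AI rendering of segment 1", "Human rendering of segment 1"),
    ("AI rendering of segment 2", "Human rendering of segment 2"),
]

def run_blind_trial(pairs):
    """Present each pair in random order, with origins hidden, and tally preferences."""
    votes = {"ai": 0, "human": 0}
    for ai_text, human_text in pairs:
        options = [("ai", ai_text), ("human", human_text)]
        random.shuffle(options)  # the reviewer never sees which side produced which text
        print(f"\nOption A: {options[0][1]}\nOption B: {options[1][1]}")
        choice = input("Which reads better, A or B? ").strip().upper()
        winner = options[0][0] if choice == "A" else options[1][0]
        votes[winner] += 1
    return votes

if __name__ == "__main__":
    print(run_blind_trial(pairs))
```

The shuffle is the whole point: because labels are hidden until the tally, a preference for the AI version can't be attributed to reviewer bias.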
Reality check: Generative AI models suffer from "hallucinations" — techspeak for making stuff up.
AI's work needs to be checked, and that process is sometimes more time-consuming than not using AI at all.
And there remains the perennial concern among workers that generative AI will take their jobs (with AI proponents countering that, like past disruptive technologies, it will create new, unforeseen jobs).
By the numbers: About 1 in 6 U.S. workers say they're using AI to do at least some of their work, per a recent Pew survey, while another 25% say AI could do at least part of their jobs.
52% of workers are worried about AI's impact, and 32% say it'll reduce job opportunities.
Yet 36% say they're optimistic about AI's potential.
The big picture: AI's value comes down to how it's used, says Alexia Cambon, senior director of research at Microsoft. (Microsoft is a major investor in ChatGPT maker OpenAI and runs a GenAI chatbot called Copilot.)
"There's a command-based approach, where you look at AI and you think, 'AI has to obey me — I'm going to give it a really simple prompt, and it has to do what I want it to,'" Cambon says.
"And then there's the conversation-based approach, which is ... 'I'm going to use it as a thought partner, and I'm going to use it to brainstorm' — and that requires a lot of critical thinking, and that is the preferable way to use AI in a work context."
The other side: Ed Zitron, CEO of PR agency EZPR and prominent AI skeptic, argues that many corporate leaders are pushing AI despite being too disconnected from their companies' day-to-day work to understand its actual use.
"What I think we're seeing is the biggest mask-off in corporate history, of bosses that do not know what they're talking about, that do not touch their businesses, shoving ChatGPT and other generative AI into everything because they don't know how anything works," Zitron says.
What's next: Generative AI proponents will tell you that the technology remains in its infancy and whatever comes next will be more capable.
The jury's out on whether that's true — hallucinations are an especially sticky problem.
But for anyone with an "email job," it couldn't hurt to at least start experimenting with AI to see what it can do for you — and what it can't.
Disclosure: Axios and OpenAI have a licensing and technology agreement that allows OpenAI to access part of Axios' story archives while helping fund the launch of Axios in four local cities and providing some AI tools. Axios has editorial independence.


Related Articles

Sam Altman Says AI Has Already Gone Past The Event Horizon But No Worries Since AGI And ASI Will Be A Gentle Singularity

Forbes · an hour ago

Speculating on the future of AI, including artificial general intelligence (AGI) and artificial superintelligence (ASI).

In today's column, I examine a newly posted blog piece by Sam Altman that has generated quite a bit of hubbub and controversy within the AI community. As the CEO of OpenAI, Sam Altman is considered an AI luminary whose viewpoint on the future of AI carries an enormous amount of weight. His latest online commentary contains some eyebrow-raising indications about the current and upcoming status of AI, including aspects partially coated in AI-speak and other insider terminology that require mindful interpretation and translation. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achieved decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

In a new posting on June 10, 2025, entitled "The Gentle Singularity," on his personal blog, the famed AI prognosticator made a series of bold remarks. There's a whole lot in there to unpack. His upbeat-worded opinion piece contains commentary about many undecided considerations, such as the ill-defined and indeterminate AI event horizon, the impacts of artificial superintelligence, various touted dates that suggest when we can expect things to really take off, hazy thoughts about the nature of the AI singularity, and much more. Let's briefly explore the mainstay elements.

A big question facing those who are deeply into AI is whether we are on the right track to attain AGI and ASI. Maybe we are, maybe we aren't. Sam Altman's reference to the AI event horizon alludes to the existing pathway that we are on, and he states unequivocally that, in his opinion, we have not only reached the event horizon but are avidly past it already. As espoused, the takeoff has started. Just to note, that's a claim embodying immense boldness and brashness, and not everyone in AI concurs with that viewpoint.

Consider these vital facets. First, in favor of that perspective, some insist that the advent of generative AI and large language models (LLMs) vividly demonstrates that we are now absolutely on the path toward AGI/ASI. The incredible semblance of natural language fluency exhibited by the computational capabilities of contemporary LLMs seems to be a sure sign that the road ahead must lead to AGI/ASI.

However, not everyone is convinced that LLMs constitute the appropriate route. There are qualms that we are already witnessing headwinds on how much further generative AI can be extended; see my coverage at the link here. Perhaps we are nearing a severe roadblock, and continued efforts will not get us any further bang for the buck. Worse still, we might be off-target and going in the wrong direction altogether. Nobody can say for sure whether we are on the right path or not. It is a guess.

Well, Sam Altman has planted a flag that we are incontrovertibly on the right path and that we've already zipped down the roadway quite a distance. Cynics might find this a self-serving perspective, since it reinforces and reaffirms the direction that OpenAI is currently taking. Time will tell, as they say.

Another consideration in the AI field is that perhaps there will be a kind of singularity that serves as a key point at which AGI or ASI will readily begin to emerge and keenly showcase that we have struck gold in terms of being on the right pathway. For my detailed explanation of the postulated AI singularity, see the link here.

Some believe that the AI singularity will be a nearly instantaneous, split-second affair, happening faster than the human eye can observe. One moment we will be working diligently on pushing AI forward, and then, bam, the singularity occurs. It is envisioned as a type of intelligence explosion, whereby intelligence rapidly begets more intelligence. After the singularity happens, AI will be leaps and bounds better than it just was. In fact, it could be that we will have a fully complete AGI or ASI due to the singularity. One second earlier, we had plain AI, while an instant later we amazingly have AGI or ASI in our midst, like a rabbit out of a hat.

Perhaps, though, the singularity will be a long and drawn-out activity. There are those who speculate the singularity might get started and then take minutes, hours, or days to run its course. The time factor is unknown. Maybe the AI singularity will take months, years, decades, centuries, or longer to gradually unfurl. Additionally, there might not be anything resembling a singularity at all, and we've just concocted some zany theory that has no basis in reality.

Sam Altman's posting seems to suggest that the AI singularity is already underway (or maybe happening in 2030 or 2035) and that it will be a gradually emerging phenomenon rather than an instantaneous one. Interesting conjecture.

Right now, efforts to forecast when AGI and ASI are going to be attained are generally based on putting a finger up into the prevailing AI winds and wildly gauging potential dates. Please be aware that the hypothesized dates have very little evidentiary basis to them. There are many highly vocal AI luminaries making brazen AGI/ASI date predictions. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. See my analysis of those dates at the link here.

A somewhat quieter approach to the gambit of date guessing is via the use of surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe we will reach AGI by the year 2040.

Depending on how you interpret Sam Altman's latest blog post, it isn't clear whether AGI is happening by 2030 or 2035, or whether it is ASI instead of AGI, since he refers to superintelligence, which might be his way of expressing ASI or maybe AGI. There is a muddiness in differentiating AGI from ASI. Indeed, I've previously covered his changing definitions associated with AGI and ASI, i.e., a moving of the cheese, at the link here. We'll know how things turned out in presumably a mere 5 to 10 years. Mark your calendars accordingly.

An element of the posting that has especially galled AI ethicists is that the era of AGI and ASI seems to be portrayed as solely uplifting and joyous. We are in a gentle singularity. That's certainly happy news for the world at large. Utopia awaits. There is decidedly another side to that coin.

AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as "P(doom)," meaning the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI or x-risk.

The other camp entails the so-called AI accelerationists. They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity's problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans ever make, but that's good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here. You can readily discern which camp the posting sides with, namely roses and fine wine.

It is important to carefully assess the myriad pronouncements and proclamations being made about the future of AI. Oftentimes, the wording appears to brazenly assert that the future is utterly known and predictable. Delivered with a sense of flair and confidence, many of these prognostications can easily be misread as a bushel of facts and knowns rather than a bundle of opinions and conjecture.

Franklin D. Roosevelt wisely stated: "There are as many opinions as there are experts." Keep your eyes and ears open and be prudently mindful of all prophecies concerning the future of AI. You'll be immeasurably glad you were cautious and alert.

Millions of PS5 Players Can't Buy or Play Gears of War: Reloaded

Yahoo · an hour ago

The PS5 version of Gears of War: Reloaded won't be available to millions of console owners this August, it has emerged. Microsoft has confirmed that the game will not launch in Japan due to "regional and platform" restrictions. However, it will be available on PC and Xbox Series X|S in the region.

The news about Gears of War: Reloaded's PS5 release plans in Japan has left players puzzled. Initially, it was assumed that the game was simply denied a rating by the country's strict ratings board, CERO. But things got confusing when it emerged that Gears of War: Reloaded will release on PC and Xbox in Japan. So, what exactly happened here, and what "platform" restrictions is Microsoft talking about?

Automaton Media has the answer. For games to release in Japan, they must be rated either by CERO or by an international body called IARC (the International Age Rating Coalition). On their platforms in Japan, Microsoft and Nintendo allow games that are refused classification by CERO but approved by IARC. Sony, on the other hand, still requires publishers to have CERO's seal of approval for mature (18+) games to be sold on the PS Store in Japan. And since Gears of War: Reloaded has not been approved by CERO, it'll be skipping PS5 in the country. Other platforms are content with the IARC rating, so the launch will go ahead as planned. Automaton Media suggests that Sony currently doesn't have any plans to change its policies in Japan.
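To make the decision rule concrete, here is a toy Python sketch of the rating gates as the article describes them; it is a simplification for illustration, not an official statement of any platform's policy.

```python
# Toy model of the Japan rating gates described above.
def can_list_in_japan(platform: str, cero_approved: bool, iarc_approved: bool) -> bool:
    if platform in ("Xbox", "PC", "Nintendo"):
        # Microsoft and Nintendo accept an IARC rating even when CERO refuses classification.
        return cero_approved or iarc_approved
    if platform == "PS5":
        # Sony still requires CERO approval for mature (18+) titles on the PS Store.
        return cero_approved
    return False

# Gears of War: Reloaded in Japan: IARC-approved, but not CERO-approved.
for platform in ("Xbox", "PC", "PS5"):
    print(platform, can_list_in_japan(platform, cero_approved=False, iarc_approved=True))
# Prints: Xbox True, PC True, PS5 False
```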
