Axios AI+ NY Summit: AI's rapid rise outpaces guardrails


Axios · 2 days ago

NEW YORK – AI and the internet are outpacing oversight — threatening kids, creatives, national security and even basic innovation, leaders across tech, politics and entertainment said at Axios' AI+ NY Summit.
Why it matters: AI is transforming industries and society faster than it can be regulated, creating sweeping, serious security risks, according to several speakers.
The June 4 summit hosted multiple conversations and was sponsored by BCG, Booking Holdings, Snyk, Varonis, and Workato.
Here are some key takeaways:
WndrCo founding partner Jeffrey Katzenberg said kids' unsupervised use of the internet is "destroying a generation."
Lumen Technologies president and CEO Kate Johnson said telcos aren't innovative enough and have ceded too much ground to Big Tech.
The Weather Company CEO Rohit Agarwal said AI could help forecasters be as specific as giving guidance on what time of day to walk your dog.
Gov. Kathy Hochul (D-N.Y.) made a dig at Rep. Marjorie Taylor Greene (R-Ga.) for saying she didn't know the GOP's "big, beautiful" tax bill included a provision that would ban states and municipalities from regulating the tech for 10 years.
Lux Capital co-founder Josh Wolfe said the best way to beat China in the AI race is to "make sure every single young" person is super well-versed in AI.
Actor and entrepreneur Joseph Gordon-Levitt said there needs to be an incentive to keep creatives paid and employed as AI disrupts the entertainment business.
Content from the sponsored View from the Top conversations:
Vlad Lukić, BCG managing director and senior partner, and global leader for its tech & digital advantage practice discussed the disconnect between corporate AI investment and tangible outcomes.
According to a recent study of 1,000 companies, "Over 75% of them are with budgets in this year deploying AI at scale, but only 25% of them have a line of sight to value creation from those activities," he said.
Danny Allan, chief technology officer at Snyk, said visibility, realistic expectations, and proper policies are lagging behind the pace of AI-powered software development.
"The speed and velocity that software is coming through the pipelines is like nothing I have ever seen in my career right now. It's so, so fast. And the trouble that CISOs have is they don't have the trust that what is coming through that pipeline is actually secure."
Bhaskar Roy, Workato chief of AI products and solutions, said businesses will experience real transformation when AI agents tackle the "messy middle."
"There are a few companies that are targeting the core and looking at how they can transform the core with … agentic AI and that's what excites us."
Rob Sobers, chief marketing officer at Varonis, warned that security risks, like "AI model poisoning" where attackers inject malicious data into AI models, could impact people's lives.
While working with an organization researching Alzheimer's, they noticed a hacker feeding the organization's custom AI model new data out of nowhere, Sobers said. And, "that could change subtly the dosage of a medication that you're giving somebody. … It's super important to get the trust and security layer right."


Related Articles

ChatGPT's Sam Altman sends strong 2-word message on the future

Miami Herald

22 minutes ago


As the AI boom continues to take over both the tech industry and the news cycle, one thing is for sure: it's scaring a lot of people. AI is a technically complex topic that can be difficult to explain to the average person, but there's one sentiment that isn't hard to explain at all: the idea that AI might take your job.

So rather than try to understand AI's capabilities, or why every major tech company from Meta to Google to Nvidia is pouring billions of dollars into developing it, most people are going to zero in on the part that's personally applicable to them.

Some voices in the tech space have pushed back on the "AI will make you jobless" rhetoric. Ark Invest CEO Cathie Wood said in a recent tweet, "History shows that new technologies create many more jobs than they displace. We do not think that this time will be different."

OpenAI's Sam Altman is easily the AI movement's biggest figurehead, thanks to ChatGPT's runaway success. The company hit three million paid ChatGPT subscribers as of June, a sign that people are flocking to it in droves - and away from search engines. Research firm Gartner has even predicted that by 2026, traditional search engine volume will drop 25%.

Now Altman has penned a blog post addressing how AI is changing our world. It's a refreshing take that, for once, may give you some hope about the future of your career. Altman's post argues that, compared with any time that has come before, the 2030s can be described with two powerful words: "wildly different."

Altman offers a reality check, saying, "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it's much less weird than it seems like it should be."

"We do not know how far beyond human-level intelligence we can go, but we are about to find out," he continued.

The OpenAI CEO doesn't hesitate to say that his company has recently built systems that are "smarter than people in many ways, and are able to significantly amplify the output of people using them." Altman also says ChatGPT is "already more powerful than any human who has ever lived," a phrase that may feel threatening to some, considering that LLMs are not human to begin with. But Altman sees even more ahead, predicting that AI will significantly mold our future.

"In the 2030s, intelligence and energy - ideas, and the ability to make ideas happen - are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else."

Altman also acknowledged that, yes, many jobs will go away as AI continues to evolve, but that won't be the end of the story. "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything," he says. "There will be very hard parts like whole classes of jobs going away, but on the other hand, the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before. We probably won't adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big."

Altman also points out a key advantage of humanity that AI cannot duplicate, saying, "People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don't care very much about machines."

‘This is coming for everyone': A new kind of AI bot takes over the web

Washington Post

25 minutes ago


People are replacing Google search with artificial intelligence tools like ChatGPT, a major shift that has unleashed a new kind of bot loose on the web. To offer users a tidy AI summary instead of Google's '10 blue links,' companies such as OpenAI and Anthropic have started sending out bots to retrieve and recap content in real time. They are scraping webpages and loading relevant content into the AI's memory and 'reading' far more content than a human ever would.

Sam Altman Says AI Has Already Gone Past The Event Horizon But No Worries Since AGI And ASI Will Be A Gentle Singularity

Forbes

an hour ago


Speculating on the future of AI, including artificial general intelligence (AGI) and artificial superintelligence (ASI).

In today's column, I examine a newly posted blog piece by Sam Altman that has generated quite a bit of hubbub and controversy within the AI community. As the CEO of OpenAI, Sam Altman is considered an AI luminary, and his viewpoint on the future of AI carries an enormous amount of weight. His latest online commentary contains some eyebrow-raising indications about the current and upcoming status of AI, including aspects partially coated in AI-speak and other insider terminology that require mindful interpretation and translation. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic.
ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

In a new posting on June 10, 2025, entitled 'The Gentle Singularity,' published on his personal blog, the famed AI prognosticator Sam Altman made a series of remarks that leave a whole lot to unpack. His upbeat-worded opinion piece contains commentary about many undecided considerations, such as the ill-defined and indeterminate AI event horizon, the impacts of artificial superintelligence, various touted dates that suggest when we can expect things to really take off, hazy thoughts about the nature of the AI singularity, and much more. Let's briefly explore the mainstay elements.

A big question facing those who are deeply into AI is whether we are on the right track to attain AGI and ASI. Maybe we are, maybe we aren't. Sam Altman's reference to the AI event horizon alludes to the existing pathway that we are on, and he unequivocally states that, in his opinion, we have not only reached the event horizon but are avidly past it already. As espoused, the takeoff has started.

Just to note, that's a claim embodying immense boldness and brashness, and not everyone in AI concurs with that viewpoint. Consider these vital facets.

First, in favor of that perspective, some insist that the advent of generative AI and large language models (LLMs) vividly demonstrates that we are now absolutely on the path toward AGI/ASI. The incredible semblance of natural language fluency exhibited by the computational capabilities of contemporary LLMs seems to be a sure sign that the road ahead must lead to AGI/ASI.

However, not everyone is convinced that LLMs constitute the appropriate route. There are qualms that we are already witnessing headwinds on how much generative AI can be further extended; see my coverage at the link here. Perhaps we are nearing a severe roadblock, and continued efforts will not get us any further bang for the buck.
Worse still, we might be off-target and going in the wrong direction altogether. Nobody can say for sure whether we are on the right path or not. It is a guess. Well, Sam Altman has planted a flag that we are incontrovertibly on the right path and that we've already zipped down the roadway quite a distance. Cynics might find this a self-serving perspective, since it reinforces and reaffirms the direction that OpenAI is currently taking. Time will tell, as they say.

Another consideration in the AI field is that perhaps there will be a kind of singularity that serves as a key point at which AGI or ASI will readily begin to emerge and keenly showcase that we have struck gold in terms of being on the right pathway. For my detailed explanation of the postulated AI singularity, see the link here.

Some believe that the AI singularity will be a nearly instantaneous, split-second affair, happening faster than the human eye can observe. One moment we will be working stridently on pushing AI forward, and then, bam, the singularity occurs. It is envisioned as a type of intelligence explosion, whereby intelligence rapidly begets more intelligence. After the singularity happens, AI will be leaps and bounds better than it just was. In fact, it could be that we will have a fully complete AGI or ASI due to the singularity. One second earlier, we had plain AI, while an instant later we amazingly have AGI or ASI in our midst, like a rabbit out of a hat.

Perhaps, though, the singularity will be a long and drawn-out activity. There are those who speculate that the singularity might get started and then take minutes, hours, or days to run its course. The time factor is unknown. Maybe the AI singularity will take months, years, decades, centuries, or longer to gradually unfurl. Additionally, there might not be anything resembling a singularity at all, and we've just concocted some zany theory that has no basis in reality.
Sam Altman's posting seems to suggest that the AI singularity is already underway (or, maybe, happening in 2030 or 2035) and that it will be a gradually emerging phenomenon rather than an instantaneous one. Interesting conjecture.

Right now, efforts to forecast when AGI and ASI will be attained are generally based on putting a finger up into the prevailing AI winds and wildly gauging potential dates. Please be aware that the hypothesized dates have very little evidentiary basis. There are many highly vocal AI luminaries making brazen AGI/ASI date predictions. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. See my analysis of those dates at the link here.

A somewhat quieter approach to the gambit of date guessing is via surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe we will reach AGI by the year 2040.

Depending on how you interpret Sam Altman's latest blog post, it isn't clear whether AGI is happening by 2030 or 2035, or whether it is ASI instead of AGI, since he refers to superintelligence, which might be his way of expressing ASI or maybe AGI. There is a muddiness in differentiating AGI from ASI. Indeed, I've previously covered his changing definitions associated with AGI and ASI, i.e., the moving of the cheese, at the link here. We'll know how things turned out in presumably a mere 5 to 10 years. Mark your calendars accordingly.

An element of the posting that has especially galled AI ethicists is that the era of AGI and ASI seems to be portrayed as solely uplifting and joyous. We are in a gentle singularity. That's certainly happy news for the world at large. Utopia awaits. There is a decidedly other side to that coin. AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI.
One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as 'P(doom),' meaning the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI or x-risk.

The other camp entails the so-called AI accelerationists. They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity's problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toil. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans ever make, but that's good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here. You can readily discern which camp the posting sides with, namely roses and fine wine.

It is important to carefully assess the myriad pronouncements and proclamations being made about the future of AI. Oftentimes the wording appears to brazenly assert that the future is utterly known and predictable. Delivered with a sense of flair and confidence, many of these prognostications can easily be misread as a bushel of facts and knowns, rather than a bundle of opinions and conjecture.

Franklin D. Roosevelt wisely stated: 'There are as many opinions as there are experts.' Keep your eyes and ears open and be prudently mindful of all prophecies concerning the future of AI. You'll be immeasurably glad you were cautious and alert.
