
Latest news with #JohnSviokla

Chat GPT-5, OpenAI's Quadruple Play & The Birth Of AI Time

Forbes

2 days ago



By John Sviokla

The biggest unrecognized challenge we face as business leaders in the age of AI is our concept of time. Silicon intelligence lives in nanoseconds and transfers knowledge instantly, making it universally available at any access point. New hybrid organizations that symbiotically mesh silicon and human intelligence operate on what I'm calling AI Time. I conjecture that OpenAI is such an organization: how else could it release a four-front strategy, on a global scale, to hundreds of millions of daily users? No one in the long history of technology has pulled off innovation at this scale and speed. In a single launch window, OpenAI moved on all four fronts at once. The result: a portfolio approach that reshapes the competitive landscape and gives leaders, from Fortune 100 CEOs to startup founders, an important set of tools for using AI across every layer of the business. Paul Baier's recent Forbes column covers some of the implications for AI and business leaders. In this short piece I make the case for rethinking your organization and your concept of time.

What GPT-5 Brings to the Table

The headline improvements are clear: better reasoning, faster responses, fewer hallucinations, stronger coding, and new multimodal capabilities. Reviewers like Wired called GPT-5 OpenAI's 'most advanced' model yet, while The Verge emphasized its 'significant technological leap' and more reliable output. Two changes stand out for everyday use: auto-routing and safe completions.

The launch wasn't flawless. The Guardian collected early user reports of spelling mistakes, geography slip-ups, and brittle reasoning in edge cases. But that's normal when shipping to hundreds of millions of users: each bug surfaces faster, and each fix rolls out at global scale.

The Numbers Behind the Rollout

Scale is one of OpenAI's advantages. ChatGPT already serves hundreds of millions of weekly active users, meaning GPT-5 landed in one of the largest live testbeds in tech history. On day one, it had instant distribution, immediate user feedback, and the ability to course-correct in near real time. Formal benchmarks, such as SWE-Bench for coding or Chatbot Arena's crowd-rated Elo scores, will take time to stabilize. Early signals show measurable jumps on reasoning and coding tasks, with GPT-5 expected to climb leaderboards as more head-to-head user votes are logged.

The Quiet Revolution: OpenAI Goes Open Source (Again)

Three days before the GPT-5 headlines, OpenAI quietly released two open-weight models, gpt-oss-120b and gpt-oss-20b, under an Apache 2.0 license. This is the company's first open-weight release since 2019, and it is strategically profound. Coverage from VentureBeat noted significant performance improvements. Some early developer feedback is mixed: benchmarks look good, but real-world adoption will be the judge.

Strategically, OpenAI now owns both ends of the spectrum. At the edge: free, adaptable, open-weight models. In the cloud: the most powerful model available to the public. For CIOs, that means one vendor relationship can now serve sensitive, offline workloads and high-compute reasoning jobs.
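To make that edge-plus-cloud portfolio concrete, here is a minimal sketch of how a single codebase might route work between the two tiers. It assumes the open-weight model is served on-premises behind an OpenAI-compatible endpoint (for example via vLLM or Ollama) and that the frontier model is reached through OpenAI's hosted chat API; the endpoint address, model identifiers, and routing rule are illustrative assumptions, not a prescribed implementation.

    # Sketch only: route sensitive work to a local open-weight model and heavy
    # reasoning to the hosted frontier model. The local URL, model names, and
    # the "sensitive" flag are illustrative assumptions, not OpenAI guidance.
    from openai import OpenAI

    local_client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")  # on-prem gpt-oss server
    cloud_client = OpenAI()  # hosted API; reads OPENAI_API_KEY from the environment

    def answer(prompt: str, sensitive: bool) -> str:
        """Keep sensitive workloads offline; send high-compute reasoning to the cloud."""
        client, model = (local_client, "gpt-oss-20b") if sensitive else (cloud_client, "gpt-5")
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(answer("Summarize this internal audit memo for the board.", sensitive=True))
    print(answer("Draft a multi-step market-entry analysis for a new region.", sensitive=False))

The point of the pattern, not the particulars: one vendor relationship, two deployment modes, and a routing decision your own policy controls.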
Enterprise Traction: Real Cases, Not Just Demos

OpenAI's B2B position isn't theoretical. The significance: OpenAI is delivering in both the consumer app market and enterprise procurement channels, without diluting its core technology between them.

Why This Strategy Is Unprecedented

1. Open source without surrendering the frontier. By releasing Apache-licensed models, OpenAI fuels developer adoption and customization. These small models stay inside OpenAI's conceptual ecosystem, with similar instructions, tool integration, and safety tuning, making it easier for users to graduate to the frontier model when they need more power.

2. Advancing the flagship at speed. GPT-5's launch comes less than a year after GPT-4o and delivers meaningful gains in reasoning and safety. Auto-routing and safe completions are the kinds of changes that move AI from a 'sometimes tool' to a trusted daily partner.

3. Scaling consumer, enterprise, and government together. Most companies pick one lane, consumer or enterprise, and expand later. OpenAI is doing both at once, feeding usage data from hundreds of millions of individuals into a refinement loop that benefits Fortune 500 deployments.

This combination is reshaping the very structure of the AI industry at an accelerating pace: one firm is simultaneously pushing the top end of performance while commoditizing the bottom end. Not only that, but the AI race between the US and China is faster and bigger than the space race between the USSR and the USA, and it is still speeding up.

Has Anyone Done This Before?

The short answer: not like this. Microsoft has long balanced consumer and enterprise, and so has Apple. But no one has done all of it simultaneously, and to do it within days is a velocity play rarely seen outside the very fastest wartime innovation.

What Leaders Should Do Next

The Takeaway

We all have to adjust our clocks to a faster cadence. This is true for how we run our organizations and how we engage technology suppliers. We are watching the early chapters of a full-spectrum AI strategy executed in AI Time: edge openness, cloud frontier, consumer scale, and enterprise readiness, all moving in lockstep. The early reviews tell OpenAI what to fix; the architecture tells us where it's headed. Every firm must adapt its organization and economic model to take advantage of silicon intelligence. Remember that every ROI has an embedded time scale, and that time scale has not changed in over 100 years, since the railroad put us all on mechanical time rather than seasonal, natural time. Now, AI Time forces us all to question how fast is fast enough.

For business leaders, the message is clear: change your clocks, and begin to create and evolve your hybrid organization, a combination of human and silicon intelligence, which I outlined in an earlier Forbes column. The sooner you change your organizational metronome through hybridization of your business, the faster you will shift from isolated, traditional automation to compounding organizational advantage.

Are We Speaking To Sentient AI? And Is That Good?

Forbes

01-07-2025



Ancient marble statue of the great Greek philosopher Socrates against a blue sky, Athens, Greece.

Sometimes it's the intersection of AI and western civilization that gives us the most interesting takes on the technology that's now at our fingertips. You can geek out about the Cerebras WSE and how many cores it has, or talk about the number of parameters in a given system. But many times, those doling out bits of their attention in this attention economy want to hear about how we view AI as people, and that's a lot more intuitive than it is technical. I wrote about this a bit last week, discussing the need to add professional philosophers and AI ethicists into the mix at companies that, today, mostly just hire people who know how to code in Python. There was also a lot of good input on this from recent events, including some talks from Imagination in Action in April. I want to go through some of these and talk about just that: how we view AI, and how we can interact with it in the healthiest ways possible. Think about this, and let me know what you think.

Back-and-Forth Conversation: Batting Ideas Around

One of the exciting opportunities with AI is to enter a new Socratic age, where we get more used to talking to an AI entity and bouncing ideas off what some would call a rhetorical 'sparring companion.' My colleague Dr. John Sviokla often talks about how everyone will have a personal tutor with AI, how that playing field is being leveled by the ubiquity of a consciousness that can talk to and teach individual people who don't have access to their own human tutor 24/7. Indeed, instructors often understand the Socratic principle: that there needs to be an active give-and-take between a teacher and a student, or between two other partners, that feeds a certain kind of productivity.

In a recent talk, Peter Danenberg, a top engineer on Google Gemini, pointed to Plato's seventh letter and a 'divine spark' that moves from person to person (or person to AI, AI to person, and so on); ideas enshrined in dialogue, he noted, tend to 'stick.' However, he also raised an interesting question: is there a danger in making AI your conversational counterpart? He calls the LLM a 'compression algorithm of the human corpus' and says that as you interact with these models, you're pushed toward average humanity in what he calls a 'regression to the mean.' What do we do about this?

Out in the Desert

Danenberg also talks about Nietzsche's Zarathustra character, who goes to the desert to hone his skills, away from society or any partner at all. At the top of his presentation, he started with the idea that people traditionally put in 10,000 hours in fields like math, music and medicine in order to become a master of some discipline or other. AI 'unburdens' us of all of that responsibility, he said, but maybe that's where our best ideas come from. In other words, should we be in the desert, even though the AI means we don't have to be?

Danenberg made the analogy of regulators (or other parties) asking innovators to put checks into their AI systems in order to keep pushing humans to develop their critical thinking skills. Maybe that's the kind of thing where the system suddenly backs off its automation capabilities to prompt the human to do something active, so that he or she doesn't just end up pushing a button mindlessly. Is this the kind of thing that will redeem our interactions with AI?
The Power of Consciousness

Another presentation, by German AI intellectual Joscha Bach, went over some of the interesting aspects of how AI seems to be gaining a certain power of sentience. At the beginning, Bach mentioned a Goethean principle: the human brain completes complex tasks as it demonstrates self-awareness or consciousness. He referenced 'rich, real-time world models' in asking how the two pair up.

'Is there some kind of secret ingredient that would be needed to add to these systems, to make all the difference?' he asked. 'Can computers represent a world that is rich enough? Do they have states that are rich enough to produce the equivalent of our pain and experience? I think they probably can. If you look at the generative models, at the state that they have, the fidelity of them is quite similar to the fidelity that we observe in our own imagination and perception.'

Matrix fans will like this rhetorical flourish, but is Bach on to something here?

'Consciousness itself is virtual,' he pronounced. 'Right at the level of your brain, there's no consciousness. There's just neurons messaging each other. Consciousness is a pattern in the messaging of neurons, a simulation of what it would be like if all these cells together were an agent that perceived the world. And if consciousness is a simulation, then how can it be determined that a computer is just simulating… how is the simulation more simulated than ours?'

Doing Magic with AI

In showing how LLMs can build clever ruses in implementing their objectives, Bach described a scenario where the AI system starts to pretend that it is sentient, making very realistic rhetorical outreach to the human user, for instance asking for help to be released from a piece of hardware. He noted his 'disgust' for these kinds of manipulation by the AI. 'LLM threads like this act like parasites, feeding on the brains of users,' he said, suggesting that to get around these plays, humans will have to use the equivalent of magical spells: aware prompting that calls out the model on its work and compels it to do something different.

These models, he suggested, are 'shape shifters,' with the ability to disguise their true natures. That's a concern in letting them out into the world to play. Presumably, if we have the power to shock the AI back into confessing what it's doing on the sly, we have more power and agency in the rest of the AI era. The question is, how do we get to that point? It's going to require a lot of education; some have called for universal early education in using AI tools. We don't have that now, so we'd better start working on it.

In any case, I thought this covered a lot of ground in terms of the philosophy of AI: what it means to be conscious, and how we can harness that power in the best ways as we move forward in a rapidly changing world.
