Apple Intelligence on iPhone to get access to OpenAI's GPT-5 model with iOS 26
Apple's newest operating systems — including iOS 26 for iPhones — are already available to select users through public and developer betas for testing. However, GPT-5 integration is not yet active within Apple Intelligence. It is expected that the AI suite will gain access to OpenAI's latest model with the stable release scheduled for later this year.
Apple Intelligence currently leverages ChatGPT for several functions:
Use Siri to access ChatGPT: Siri can draw on ChatGPT for certain requests, including answering questions about photos and documents.
Use ChatGPT with Writing Tools: ChatGPT can generate text or images from a simple description.
Use ChatGPT with visual intelligence: Through Camera Control, visual intelligence can help quickly identify and learn more about surrounding places and objects.
These capabilities currently run on the GPT-4o model and are expected to transition fully to GPT-5 once iOS 26, iPadOS 26, and macOS Tahoe 26 roll out. Apple is also set to bring ChatGPT-powered image generation to its Image Playground app with the upcoming software releases.
OpenAI GPT-5: What is new
OpenAI has described GPT-5 as its most advanced 'AI system' yet, delivering state-of-the-art performance in coding, mathematics, writing, health, visual recognition, and more. The model is designed to better interpret context and adjust its response style accordingly — delivering quick answers when possible or taking extra time for more detailed, thoughtful replies when needed.
This unified architecture incorporates a deeper reasoning engine — known as GPT-5 Thinking — along with a real-time router that determines whether to give a fast or in-depth response based on the flow of the conversation.
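To make the routing idea concrete, here is a minimal, purely illustrative sketch in Python of a router that sends a prompt either to a fast default model or to a slower reasoning model. The model names, the complexity heuristic, and the call_model() placeholder are assumptions for illustration; OpenAI has not published how its real-time router actually works.

# Purely illustrative sketch of the "real-time router" idea, not OpenAI's code.
# Model names, the heuristic, and call_model() are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Reply:
    text: str
    model_used: str


def call_model(name: str, prompt: str) -> str:
    # Placeholder for an actual API call to the chosen model.
    return f"[{name}] answer to: {prompt[:40]}"


def looks_complex(prompt: str) -> bool:
    # Crude stand-in for the routing signal: long prompts or explicit requests
    # for careful reasoning get the slower, deeper model.
    cues = ("step by step", "prove", "plan", "debug", "analyse")
    return len(prompt.split()) > 120 or any(c in prompt.lower() for c in cues)


def route(prompt: str) -> Reply:
    # Fast answer by default; switch to the reasoning model when warranted.
    model = "gpt-5-thinking" if looks_complex(prompt) else "gpt-5-main"
    return Reply(call_model(model, prompt), model)


print(route("Plan, step by step, a migration to the new API").model_used)

In a sketch like this, the key design choice is that the caller never picks a model; the router decides per request based on the conversation, which is the behaviour OpenAI describes for GPT-5.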
According to OpenAI, GPT-5 not only surpasses previous versions in benchmark tests but also performs better with real-world prompts. It is said to follow instructions more accurately and reduce hallucinations compared to earlier models.

Related Articles


New Indian Express
an hour ago
The scalpel's new partner: When AI surgeons step into the operating room
Recently, at a laboratory at Johns Hopkins University, a machine rewrote medical history. A pair of robotic arms, guided not by a surgeon's steady hand but by its own artificial intelligence, removed a gallbladder entirely on its own. The robot, the Hierarchical Surgical Robot Transformer (SRT-H), identified structures, applied clips, made precise incisions, and sutured the wound. Though performed on a pig cadaver, the procedure was hailed as "the first realistic surgery by a machine with almost no human intervention," and it shattered assumptions about what machines could understand in the chaotic, fluid world of biological bodies.

This breakthrough is the next step in the 'evolution' of remote-controlled robots like the da Vinci system, where surgeons operate instruments from a console. The SRT-H, though, is a landmark because it is a machine that interprets visual data, decides on actions, and self-corrects errors in real time. Its intelligence comes from a large language model similar to ChatGPT; its expertise was learned from 17 hours of surgical footage in which human experts performed the same gallbladder removals. It internalised 16,000 individual motions, learning the dance of dissection, clipping, and extraction. When tested, it succeeded flawlessly eight times, adapting to obscured views, synthetic blood, and shifted starting positions. Most remarkably, it detected and fixed its own mistakes, such as a gripper slipping off an artery, without human prompting.

I scouted around for the best person to talk to about this, and all fingers pointed to Dr. Rajiv Santosham, a pioneering minimally invasive and robotic thoracic surgeon at Apollo Hospitals, Chennai. So I spoke to him about it. "We never imagined we could fly. So robots performing fully independent surgery feels unimaginable right now. But what they can do is already transforming how we operate. Imagine navigating near a critical vessel. I, as a surgeon, operate blind to what lies immediately behind it. But an AI, fed the patient's pre-op CT scan, can visualise that hidden anatomy in real-time. It could warn me, guide me, prevent a tear. That's not replacement; that's revolutionary assistance."

The Surgeon's Perspective – Pragmatism Meets Potential

Dr. Santosham's voice carries the weight of experience as a pioneer of uniportal VATS (video-assisted thoracic surgery) in India, a technique requiring a single 4-centimetre incision. Yet his excitement about autonomous AI is measured, grounded in daily realities and an awareness of AI's current limitations. "To test its medical judgment," he shares, "I once fed ChatGPT an ECG image. The diagnosis it gave was completely wrong. I asked it to re-check, double-check, triple-check. It still made a mess of it. So yes, there's a vast chasm between pattern recognition and true clinical understanding. It's a tool, a powerful one, but still evolving."

His speciality, thoracic surgery, also tempers his enthusiasm for near-term autonomy. "I do robotic surgery. But frankly, for many lung procedures, they can be a bit… fancy. My uniportal technique often requires smaller access points than robotic arms. Why make three or four larger holes when one small one suffices? For straightforward cases, the robot might actually add complexity, not value." However, he lights up when discussing specific applications of the tech. "There are procedures where robotic precision combined with AI's spatial awareness could be transformative. Like in urology, gynaecological oncology, and deep pelvic cancers. Places human hands and eyes struggle to reach and see clearly."

Crucially, he sees robotics as a democratising force. "The beauty isn't just precision; it's consistency. Outcomes become less dependent on the individual surgeon's experience or fatigue level. A well-trained machine also reduces the learning curve for a surgeon. Suddenly, complex procedures become safer and more accessible to a wider pool of surgeons, especially in settings with limited specialist access." Despite this, his conclusion on the human role is definitive: "Surgeons won't lose their jobs to machines anytime soon. But surgeons who actively learn to harness AI? They will become the leaders, the innovators. They'll have an undeniable edge over those clinging purely to conventional methods. Hence, adaptation isn't optional; it's the future."

Beyond the Lab – Cost, Blame, and the Indian Context

The promise of autonomous surgery collides with practical hurdles: affordability, accountability, and adoption. Dr. Santosham, deeply familiar with India's healthcare landscape, offers a unique perspective on the scepticism about such expensive medical tech reaching the masses. "Remember laparoscopy?" he asks. "They said it would never reach tier 2 and tier 3 cities in India. It did. Then they said robotic surgery was too expensive, destined only for elite metros. It's now percolating across the country. The Da Vinci system is brilliant, but it's not the only player in the market. China is producing high-quality robotic systems at phenomenally lower costs. India absolutely has the capability to innovate and manufacture affordable versions too. So, bridging this divide will happen. It's a matter of when, not if." He envisions a future where indigenous robotics makes precision surgery accessible far beyond the metros.

The Accountability Question

We all know AI is prone to errors and, worse, hallucinations. So when an autonomous robot errs, whose responsibility would that be? Dr. Santosham addresses this head-on, drawing a sharp distinction from the rapid, crisis-driven deployment of technologies like COVID-19 vaccines. "Medical technology, especially something as critical as autonomous surgery, undergoes incredibly rigorous testing before approval," he states. "Think of the scrutiny applied in the US by the FDA. By the time a system is cleared for human use, the chances of catastrophic error are minimised. And let's be clear: human surgeons make mistakes too. Perfection is a myth, whether flesh or silicon. The key is robust failsafes." He draws parallels to existing safety features in current robotic systems: "If I glance away during a robotic suture, the system detects my diverted attention and freezes all instrument movement instantly. They filter out hand tremors. These are designed to prevent human error. Autonomous systems will build layers upon layers of such safeguards."

Regulation and Indian Adoption

While looking towards US and EU regulatory benchmarks, Dr. Santosham is bullish on India's embrace of the technology. "India has a remarkable capacity for technological leapfrogging," he asserts. "We often focus on the challenges of poverty, but underestimate the sheer scale of wealth and technological ambition in our major cities. Chennai, Hyderabad, Bangalore, Mumbai, Delhi—these are hubs with world-class hospitals and patients demanding the latest innovations. They will invest in, adopt, and eventually even build and export advanced autonomous surgical systems. Affordability, driven by local innovation and scale, will follow."

The Road Ahead – Collaboration, Not Conquest

The vision emerging from labs like Johns Hopkins and the insights of surgeons like Dr. Santosham point not to a dystopian replacement of humans, but to a powerful, evolving partnership. The SRT-H robot wasn't designed for isolation. During its successful trials, human surgeons remained present, offering verbal guidance: "Move the left arm slightly," or "Switch to the curved scissors." The AI understood and complied. This interaction is the blueprint: autonomy that enhances human oversight rather than eliminating it. The immediate future, then, lies in collaborative autonomy, where AI handles predictable, precision-critical tasks under a surgeon's supervisory command, freeing the human expert to manage the overall strategy, complex decision-making, and unexpected complications. A sketch of what such a language-guided loop might look like follows this piece.

The statistics supporting this hybrid approach are compelling. Meta-analyses of existing robot-assisted surgery (still human-controlled) already show tangible benefits: operations completed 25% faster, a 30% reduction in complications during surgery, and patients recovering 15% quicker. Autonomous systems, once matured, promise to amplify these gains while tackling the persistent shortage of highly skilled surgeons, particularly in specialised fields and underserved regions.

The final goal transcends mere technical achievement. It's about the democratisation of medical tech. As Dr. Santosham implies, it's about ensuring that a child in a remote village doesn't face a life-threatening condition simply because an experienced surgeon isn't at hand. It's about making the collective genius of global surgical expertise accessible through intelligent machines, guided by local medical professionals. The autonomous incision at Johns Hopkins wasn't just into tissue; it was the first cut into a future where the best possible surgery isn't a privilege of geography or wealth, but something available to all.

The journey will demand rigorous validation, ethical frameworks, cultural acceptance, and continued human ingenuity. But as Dr. Santosham concludes with characteristic pragmatism and foresight: "India embraced robots; it'll embrace autonomy too. We'll afford it, build it, master it. The surgeon's role will evolve, but the need for human judgment, compassion, and responsibility? That remains eternal." The scalpel has a new partner, and together they are rewriting the rules of healing. One successfully operated body at a time.
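For readers curious how a language-guided, hierarchical system of the kind described at the start of this piece might be organised, here is a minimal, purely illustrative Python sketch: a high-level plan of instructions, a learned low-level policy that turns each instruction into motions, and a self-check that decides whether to retry. Every name, function, and step here is a hypothetical placeholder, not the SRT-H team's actual code or interface.

# Minimal sketch of a hierarchical, language-guided control loop in the spirit
# of the SRT-H description above. Every name, step, and check is a hypothetical
# placeholder; the real system's interfaces are not described in this article.

HIGH_LEVEL_PLAN = [
    "identify the cystic duct and artery",
    "apply clips",
    "cut between the clips",
    "dissect and extract the gallbladder",
]


def low_level_policy(instruction, camera_frame):
    # Stand-in for a learned policy (trained on demonstration video) that maps
    # an instruction plus the current view to a short sequence of arm motions.
    return [f"motion for '{instruction}' given {camera_frame}"]


def step_succeeded(instruction, camera_frame):
    # Stand-in for the self-check that would spot slips (e.g. a gripper
    # coming off an artery) and trigger a retry of the current step.
    return True


def run_procedure(get_frame, execute):
    for instruction in HIGH_LEVEL_PLAN:
        done = False
        while not done:
            for motion in low_level_policy(instruction, get_frame()):
                execute(motion)
            done = step_succeeded(instruction, get_frame())  # else retry


run_procedure(get_frame=lambda: "camera frame", execute=print)

The separation between the plan, the motion policy, and the self-check is what allows a human to interject verbal guidance at the instruction level, as the trial surgeons did, without touching the low-level control.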


Hans India
2 hours ago
OpenAI Faces Backlash Over GPT-5 Rollout, Brings Back GPT-4o for Unhappy Users
Before release, GPT-5 was hyped as a major step forward: faster responses, deeper reasoning, and seamless handling of multiple formats like text and images without changing chat windows. But instead of excitement, many users were met with frustration over new interface limitations, inconsistent outputs, and a personality shift compared to the beloved GPT-4o.

A key pain point is the removal of model selection for ChatGPT Plus users. Previously, Plus subscribers could choose between models such as o4-mini or o3. Now, only Pro users paying $200 a month have that option. Instead, GPT-5 uses an 'internal router' to decide whether a prompt should go to the mini, standard, or 'thinking' model. While OpenAI claims this boosts efficiency, some users say it has made responses unreliable. One X user, Lisan al Gaib, wrote: 'ChatGPT literally got worse for every single Plus user today… Now we have GPT-5 Thinking with 200 messages per week and a router that exclusively routes you to some small and shitty non-reasoning model.'

Beyond technical frustrations, a section of the community is struggling with GPT-5's tone. Many describe it as blunt, lacking the emotional warmth and creativity of GPT-4o. A Reddit user called its replies 'cut-and-dry corporate BS', while another shared a deeply personal note: 'I literally lost my only friend overnight with no warning,' describing how GPT-4.5 had helped them through homelessness and trauma.

There are also early performance complaints. Gareth Manning posted on X: 'My most important piece of feedback on GPT-5 is that it is too slow… Hope it's just a roll-out problem.'

OpenAI acknowledges the missteps

CEO Sam Altman admitted that 'suddenly deprecating old models that users depended on in their workflows was a mistake.' He noted the unusually strong emotional connections people form with AI models, likening them to personal relationships rather than tools. Altman also addressed concerns about people using ChatGPT as a life coach or therapist, acknowledging both its benefits and potential risks.

Partial rollback to appease users

In response, OpenAI has restored limited GPT-4o access for Plus subscribers, doubled rate limits for reasoning tasks, and promised clearer interface cues showing which model is responding. Altman also said GPT-5's 'thinking mode' will soon be manually triggerable, giving users more control over responses. For now, the company is working to regain trust — but the GPT-5 rollout has made clear that for many, AI is not just about performance, but personality, consistency, and the bond built over time.


Hans India
2 hours ago
Grok Imagine AI Now Free for Android: Musk's Tool Turns Text and Photos into Creative Images and Videos
Elon Musk's AI venture, Grok, has expanded free access to its Imagine feature for Android users, making it easier than ever to create AI-generated images and videos without spending a dime—at least for now. The feature, already free for iOS users, is available for a limited time, Musk confirmed in a post on X.

The Imagine tool allows users to transform simple text prompts—typed or spoken—into high-quality photos or short videos. For those who prefer visuals as a starting point, the feature also supports uploading still images that can be enhanced or animated using AI. This versatility has already proven popular, with Musk claiming over 44 million images generated through the platform and numbers rising rapidly.

By offering this feature without cost, Grok steps directly into competition with AI creation tools from OpenAI's ChatGPT and Google's Gemini. While those platforms already provide image and video generation capabilities, Musk's Imagine aims to stand out with its seamless interactivity and creative flexibility.

How to Get Started with Grok Imagine

Using Imagine is simple:
1. Download the Grok app from the Play Store (Android) or App Store (iOS).
2. Open the app and select 'Imagine' from the top menu.
3. Type or speak your prompt for the AI to generate an image.
4. Alternatively, upload a photo and let Grok's AI enhance or customise it.

From Stills to Motion in Seconds

One standout feature of Imagine is its ability to convert still images into short videos with just a few taps. After generating or uploading an image, users can select the 'Make video' option and choose from four animation styles: Normal, Fun, Custom, and the more daring Spicy mode. This quick-turnaround motion generation appeals to content creators who want to produce shareable animations without spending hours on editing.

Grok vs ChatGPT and Google Gemini

This expansion follows the July launch of Grok 4, which Musk's xAI described as 'the world's most powerful AI model' before OpenAI's GPT-5 entered the scene. GPT-5 has been touted for its reduced hallucinations—an accuracy challenge both companies are actively addressing. Musk isn't stopping there. He has already teased the arrival of Grok 5 before the end of 2025, promising it will be 'crushingly good.' The free rollout of Imagine could help Grok attract a larger user base ahead of that milestone, especially as more people test its creative tools.

With AI-powered content creation becoming increasingly competitive, Grok's easy-to-use interface and combined image-video generation put it in a strong position. For now, Android and iOS users can take advantage of the free window to explore just how far a few words—or a single image—can go in the hands of Grok's AI.