Latest news with #OpenAI


Phone Arena
22-05-2025
Jony Ive and ChatGPT's maker want to reinvent hardware — but didn't we already reject this idea?
OpenAI's next big thing... but what is it? | Video credit – OpenAI

So, you probably already got used to the idea that AI is here to stay, right? Just a couple of years ago, AI had nothing to do with our phones, and now you can't launch a flagship without hearing the word at least ten times. It's in our phones, our laptops, browsers, apps – and just about every corner of the internet. I mean, AI's not coming anymore – it's already moved in and started rearranging the furniture.

So with AI evolving at lightspeed, it was only a matter of time before someone at the top said, "Hey, what if we built hardware around this thing?" And that is exactly what is happening. OpenAI, the company behind ChatGPT, just teamed up with none other than Jony Ive – yes, the guy who helped design the iPhone, iPod and Mac – to build a new kind of AI-first device. The deal, which includes around $6.5 billion in equity and past investments, brings in io, a startup founded by Ive. LoveFrom, Ive's design studio, will stay independent but will now lead the design of OpenAI's products – including the software.

So yeah, the brains behind ChatGPT and the guy who shaped Apple's most iconic gadgets are working on something entirely new. Sounds like a dream team. But here's the big question: do we really need it? Because recent history shows us... maybe not.

When AI-only gadgets crash and burn

OpenAI says this new device will be a level of consumer hardware we've never seen before. And with Jony Ive designing it, you can bet it's going to look and feel the part. They've both said this device is going to be something different – something made specifically with AI in mind. According to Ive, people are "uneasy" with the current tech landscape and are hungry for something new. And hey, that might be true. But is this the answer?

We've already seen attempts to create new AI-native gadgets, and let's just say the results haven't been great. Humane's AI Pin and the Rabbit R1 both promised the future... and kind of flopped right out of the gate.

The idea behind the Humane AI Pin was simple: ditch the screen and let an AI assistant handle everything. No apps, no taps – just ask it to do things like make a call, send a message or look something up. It ran on its own OS, called CosmOS, and tried to be this ambient, voice-first helper.

Same idea with the Rabbit R1. It showed up last year with a flashy keynote and wild promises. It wasn't just supposed to be smart – it was supposed to do everything your phone does, but better and faster. Except... it didn't. The R1 at least has a company still trying to improve it. Updates are coming and the team seems to be listening. But Humane? That project fizzled out before it even had time to figure out what it was. And even Jony Ive himself wasn't impressed. He called both products "very poor." Ouch.

But I don't think their failure was just about bad design or buggy software. I guess it comes down to something much simpler: we don't actually need these things. Not yet, anyway.

Are we even ready for this?

From what we know, OpenAI and Ive are cooking up something screen-free, compact and smart enough to know your context – like where you are, what you're doing and how you're feeling. The goal? Make it feel natural, like it just "gets you."

Sounds cool in theory. But here's the thing – we kinda like our screens. We like to scroll, swipe, watch, text, snap pics and yes, doomscroll Instagram or X at 2 AM. Even if we complain about screen addiction, most of what we do on our phones isn't really about productivity – it's entertainment. And let's be honest, an AI device that just talks to you? It's not exactly YouTube or TikTok material. Without something fun or visual, it's hard to see people lining up to buy it. So yeah, maybe it's designed to break our phone habits, but if the replacement isn't fun or exciting, people just won't bite.

Still, this one might actually work

I asked ChatGPT to imagine what an OpenAI device designed by Jony Ive might look like – and this is what it came up with. Feels possible, right? But we will see if the chatbot was actually onto something next year.

Let's be real – this could be the first AI gadget that doesn't totally flop. And that is because it wouldn't just be slapping an AI model onto a fancy-looking box. Humane and Rabbit are more like interfaces to existing AI models. OpenAI's device, though, could be built with the model in mind from the ground up, meaning:

- Real-time functionality without relying on API calls.
- Personalized behavior that evolves with you.
- Maybe even a local, fine-tuned model for offline use.

So instead of asking it to play a song or call a ride, it could learn your routines, understand your voice, read your mood and anticipate what you need – kind of like an AI brain in your pocket that just gets you.

And then there's the design. Humane gave us a laser projector. Rabbit gave us a walkie-talkie vibe. Both were trying way too hard. But with Ive on board? Expect something clean, smooth and minimal – something that blends into your life without screaming "gadget."

So yeah, I'm curious. I still don't think we need this kind of device right now, but for AI fans out there, this might finally be the one worth watching. If anyone can actually pull this off, it's this duo.

What do you think? Would you buy a screen-free AI device? What would it need to do for you to ditch your phone (even just a little)? Let me know in the comments.


Forbes
15-04-2025
- Business
OpenAI Shifts Focus With GPT-4.1, Prioritizes Coding And Cost Efficiency
OpenAI launched its GPT-4.1 family of AI models, focusing on enhancing developer productivity through improved coding, long-context handling and instruction-following capabilities, available directly via its application programming interface. The release includes three distinct models, GPT-4.1, GPT-4.1 mini and GPT-4.1 nano, signaling a move toward task-specific optimizations within the large language model landscape. These models are not immediately replacing user-facing interfaces like ChatGPT but are positioned as tools for developers building applications and services.

For technology leaders and business decision makers, this release warrants attention. It indicates a strategic direction toward more specialized and potentially more cost-effective large language models optimized for enterprise functions, particularly software development, complex data analysis and the creation of autonomous AI agents. The availability of tiered models and improved performance metrics could influence decisions around AI integration, build-versus-buy strategies and the allocation of resources for internal development tools, potentially altering established development cycles.

Technically, the GPT-4.1 series represents an incremental but focused upgrade over its predecessor, GPT-4o. A significant enhancement is the expansion of the context window to support up to 1 million tokens. This is a substantial increase from the 128,000-token capacity of GPT-4o, allowing the models to process and maintain coherence across much larger volumes of information, equivalent to roughly 750,000 words. This capability directly addresses use cases involving the analysis of extensive codebases, the summarization of lengthy documents, or maintaining context in prolonged, complex interactions necessary for sophisticated AI agents. The models operate with refreshed knowledge, incorporating information up to June 2024. OpenAI reports improvements in core competencies relevant to developers.
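As a rough illustration of what the larger context window buys, here is a minimal sketch of a pre-flight size check. It assumes a crude ~4-characters-per-token heuristic for English text; the function and constant names are my own, and real token counts should come from an actual tokenizer rather than this estimate.

```python
# Sketch: will a document fit in a model's context window?
# Context sizes below are the figures reported in the article.
CONTEXT_WINDOWS = {
    "gpt-4.1": 1_000_000,   # reported ~1M-token window
    "gpt-4o": 128_000,      # reported 128,000-token window
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English)."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, model: str) -> bool:
    """True if the estimated token count fits the model's window."""
    return estimate_tokens(text) <= CONTEXT_WINDOWS[model]

# A ~3M-character document (~750,000 estimated tokens, the article's
# "roughly 750,000 words" scale) fits GPT-4.1 but not GPT-4o.
doc = "x" * 3_000_000
print(fits_in_context(doc, "gpt-4.1"))  # True
print(fits_in_context(doc, "gpt-4o"))   # False
```

In practice a library such as tiktoken gives exact counts per model; the heuristic above only shows the order-of-magnitude difference between the two windows.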
Internal benchmarks suggest GPT-4.1 shows a measurable improvement in coding tasks compared to both GPT-4o and the earlier GPT-4.5 preview model. Performance on benchmarks like SWE-bench, which measures the ability to resolve real-world software engineering issues, showed GPT-4.1 achieving a 55% success rate, according to OpenAI. The models are also trained to follow instructions more literally, which requires careful, specific prompting but allows for greater control over the output. The tiered structure offers flexibility: the standard GPT-4.1 provides the highest capability, while the mini and nano versions offer different balances of performance, speed and operational cost, with nano positioned as the fastest and lowest-cost option, suitable for tasks like classification or autocompletion.

In the broader market context, the GPT-4.1 release intensifies competition among leading AI labs. Providers like Google, with its Gemini series, and Anthropic, with its Claude models, have also introduced models boasting million-token context windows and strong coding capabilities. This reflects an industry trend moving beyond general-purpose models toward variants optimized for specific high-value tasks, often driven by enterprise demand. OpenAI's partnership with Microsoft is evident, with GPT-4.1 models being made available through the Microsoft Azure OpenAI Service and integrated into developer tools like GitHub Copilot and GitHub Models. Concurrently, OpenAI announced plans to retire API access to its GPT-4.5 preview model by mid-July 2025, positioning the new 4.1 series as offering comparable or better performance at a lower cost. The GPT-4.1 series also introduces a significant reduction in API pricing compared to GPT-4o, making advanced AI capabilities more accessible to developers and enterprises.
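The tiered structure described above lends itself to simple routing logic: send cheap, high-volume work to the smaller tiers and reserve the full model for complex tasks. The sketch below is an illustrative assumption about how an application might pick a tier; the task categories and routing rules are my own, not OpenAI guidance — only the model names come from the release.

```python
# Sketch: route tasks to a GPT-4.1 tier by workload type.
# The category-to-tier mapping is an illustrative assumption.
def pick_model(task: str) -> str:
    """Return a model tier for a given task category."""
    light = {"classification", "autocomplete"}   # fast, high-volume work
    medium = {"summarization", "extraction"}     # moderate complexity
    if task in light:
        return "gpt-4.1-nano"   # fastest, lowest-cost tier
    if task in medium:
        return "gpt-4.1-mini"   # mid-tier balance of cost and capability
    return "gpt-4.1"            # full model for complex coding/agent work

print(pick_model("autocomplete"))   # gpt-4.1-nano
print(pick_model("summarization"))  # gpt-4.1-mini
print(pick_model("code-review"))    # gpt-4.1
```

A real router would likely also consider input length, latency budget and per-request cost caps rather than task labels alone.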
Pricing Comparison

This pricing strategy positions GPT-4.1 as a more cost-effective solution, offering up to 80% savings per query compared to GPT-4o, while also delivering enhanced performance and faster response times. The tiered model approach allows developers to select the appropriate balance between performance and cost, with GPT-4.1 nano being ideal for tasks like classification or autocompletion, and the standard GPT-4.1 model suited for more complex applications.

From a strategic perspective, the GPT-4.1 family presents several implications for businesses. The improved coding and long-context capabilities could accelerate software development cycles, enabling developers to tackle more complex problems, analyze legacy code more effectively, or generate code documentation and tests more efficiently. The potential for building more sophisticated internal AI agents, capable of handling multi-step tasks with access to large internal knowledge bases, also increases. Cost efficiency is another factor: OpenAI claims the 4.1 series operates at a lower cost than GPT-4.5 and has increased prompt-caching discounts for users processing repetitive context. Furthermore, the upcoming availability of fine-tuning for the 4.1 and 4.1 mini models on platforms like Azure will allow organizations to customize these models using their own data for specific domain terminology, workflows or brand voice, potentially offering a competitive advantage.

However, potential adopters should consider certain factors. The enhanced literalness in instruction-following means prompt engineering becomes even more critical, requiring clarity and precision to achieve desired outcomes. While the million-token context window is impressive, OpenAI's data suggests that model accuracy can decrease when processing information at the extreme end of that scale, indicating a need for testing and validation in specific long-context use cases.
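To make the per-query savings claim concrete, a back-of-the-envelope calculator might look like the sketch below. The per-token prices in it are placeholders for illustration only, not OpenAI's actual rates; check the published API pricing page for current numbers before relying on any figure here.

```python
# Sketch: per-query cost comparison between two model tiers.
# PRICES ARE PLACEHOLDERS -- illustrative (input, output) USD per
# 1M tokens, NOT OpenAI's real rates.
PRICE_PER_M_TOKENS = {
    "gpt-4o":       (2.50, 10.00),  # hypothetical baseline rates
    "gpt-4.1-nano": (0.10, 0.40),   # hypothetical low-cost tier rates
}

def query_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    """USD cost of one query at the table's per-million-token rates."""
    p_in, p_out = PRICE_PER_M_TOKENS[model]
    return (in_tokens * p_in + out_tokens * p_out) / 1_000_000

# One query: 10k input tokens, 1k output tokens.
old = query_cost("gpt-4o", 10_000, 1_000)
new = query_cost("gpt-4.1-nano", 10_000, 1_000)
print(f"savings: {1 - new / old:.0%}")  # well over 80% at these placeholder rates
```

The point of the exercise is only that savings scale linearly with per-token price, so the article's "up to 80% per query" figure is plausible whenever the cheaper tier's rates are at least 5x lower.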
Integrating and managing these API-based models effectively within existing enterprise architectures and security frameworks also requires careful planning and technical expertise. This release from OpenAI underscores the rapid iteration cycles in the AI space, demanding continuous evaluation of model capabilities, cost structures and alignment with business objectives.