Latest news with #AIDecoded
Yahoo
23-05-2025
Are OpenAI and Jony Ive headed for an iPhone moment?
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

OpenAI will acquire io, the AI device startup co-founded by Apple veteran Jony Ive and Sam Altman, for nearly $6.5 billion, Bloomberg reported Wednesday. The deal will almost certainly put OpenAI in the consumer hardware business, and the company seems poised to release a dedicated personal AI device. Ive is a pedigreed design guru with a track record of creating iconic tech products like the iPhone and the Apple Watch. A close friend of Steve Jobs, Ive left Apple in 2019. 'I have a growing sense that everything I've learned over the last 30 years has led me to this place and to this moment,' Ive told Bloomberg. Coming from the world's best device designer, that's saying a lot.

OpenAI brings a lot to the table, too. After setting off the AI boom with the late 2022 release of ChatGPT, the startup has quickly built its chatbot into a mass-market AI app and a household name. More than 400 million people around the world now use ChatGPT every week, OpenAI COO Brad Lightcap recently told CNBC. (For comparison, longtime Apple analyst Gene Munster estimates that there are 1.7 billion Apple device users and 2.35 billion active devices worldwide.) 'OpenAI has all of the core leadership and the best-in-class models, as well as some of the most widely used interfaces,' says Gartner analyst Chirag Dekate. 'ChatGPT has defined and set the bar for what early experiences of generative AI ought to look like.' Ive helped Apple set such standards for smartphones and smartwatches.
Apple's traditional power was its mastery of the user interface—the technology that mediates between a human user and their digital tools and content. But iOS devices evolved from roots in traditional computing. AI models operate in a fundamentally different way, and may require a fundamentally different UX. (Apple, which has struggled to empower its devices with new generative AI capabilities, saw its stock price drop as much as 2.3% on news of the io deal Wednesday.)

OpenAI will now get an opportunity to define the ideal software-hardware fusion for the new computing paradigm it helped usher in—after high-profile flops like Humane's AI Pin. 'AI is such a big leap forward in terms of what people can do that it needs a new kind of computing form factor to get the maximum potential out of it,' OpenAI CEO Sam Altman told Bloomberg. Ive's and Altman's design firm has so far given the world no clue of its vision for a personal AI device. But whatever it builds—a pendant, a bracelet, or some kind of glasses—will very likely provide an influential blueprint for how a vehicle for personal AI should look in the future. 'Jony Ive and io joining OpenAI is so important: It calls out the fact that in order to truly diffuse generative AI across society, we need to build new hardware experiences,' Dekate says. 'The QWERTY keyboard may not be the best way to experience generative AI.'

Google looks like a company that's got its mojo back. Yes, yes, I know . . . the company still faces some very big problems—antitrust actions, changes in its core search business, etc.—but it also seems to have fully woken up to the AI revolution it helped start, and seems to be navigating it with level-headedness and even a sense of fun.
The company's two-hour keynote at its developer event in Mountain View Tuesday was all about AI, a showcase for a comprehensive strategy of applying the company's Gemini AI models broadly across its product portfolio—to everything from search tools to coding agents to video generators to smart glasses.

Remember that Google is a company that entered the AI race only reluctantly in 2023, in the wake of the ChatGPT launch. Even though it was Google researchers who figured out how to architect and train the type of large language models that power ChatGPT, the company was, for legal and safety reasons, deeply conflicted about exposing such models to the public. But after OpenAI, Microsoft, and others hurried to apply and commercialize LLMs, Google found itself on the back foot, punished by Wall Street for not leading the race: a lackluster February 2023 demo of the Bard chatbot caused an 8% drop in Alphabet stock.

Google's biggest announcement Tuesday may mark the moment when it retook the lead—with a new AI product that could remake its core search business. 'AI Mode,' a chatbot-format AI search tool that will compete directly against ChatGPT and Perplexity, went from being an experimental product to being available to all users. It's powered by Google's best model, Gemini 2.5 Pro. In AI Mode, rather than entering a search term or phrase, users can describe in detail what they're looking for (often a solution to a problem rather than just some facts) and then work with Gemini's reasoning abilities to get a fully responsive package of custom information. Features from the company's Project Astra work will increasingly let the AI gather information about objects (or problems) it sees (through a phone camera or smart glasses) in the real world around the user. Its work on Project Mariner will give the AI more and more power to operate the user's device and call on various web tools (mapping, live weather, etc.) on the user's behalf.
This summer AI Mode will be able to book appointments on the user's behalf, do deep research projects and data visualization, and help people shop for clothing and other products. The Gemini Live mode in the Gemini app leverages some of the same technologies. Google's Demis Hassabis described the experimental project as a universal AI assistant powered by a 'world model' that understands the context in which the user is moving, understands a certain amount about how the physics of the world works, and can plan and take action on the user's behalf. In the demo video during I/O, a young man talks to the assistant while fixing a bicycle. Through the phone camera, Gemini Live identifies problems and replacement parts, places a phone call to order a part, looks up schematics on the web, and makes suggestions. It can also control a user's device, analyze video, share its screen, and remember everything it sees or discusses in a session.

Google has always been a medium between humans and all the data and intelligence that resides on the public web. No other company has as much experience accessing, parsing, organizing, and packaging all that information. Google believes that AI can make that medium a lot smarter, more proactive, and more personalized. And the power of Gemini goes well beyond search, to best-in-class video and audio generation tools (Veo 3, Flow), to coding assistants (Jules), and to wearables (Android XR glasses).

Anthropic on Thursday debuted the fourth generation of its AI models with Claude Opus 4 and Claude Sonnet 4, which the company says push the state of the art for coding, advanced reasoning, and AI agents. Both can use tools, such as web search. The company says Claude Opus 4 is its best model and the best coding model in the world. The model can work for several hours straight on complex, long-running tasks that involve thousands of steps, Anthropic says, and significantly expands what AI agents can accomplish.
Claude Sonnet 4 replaces Anthropic's previous best effort, Claude 3.7 Sonnet. The company says the new model does everything 3.7 did, but pushes the envelope further on coding, reasoning, and precise instruction following. Claude Opus 4 and Claude Sonnet 4 are hybrid models offering two modes: near-instant responses and extended thinking for deeper reasoning. Both models are available on the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI. Claude Sonnet 4 is also available in the free version of the Claude chatbot.

The new models use chain-of-thought processing, as Claude 3.7 Sonnet did. But instead of displaying the model's raw thought process to users, Anthropic now shows summaries of the models' reasoning steps. Anthropic says it made this change 'to preserve visibility for users while better securing our models.' Anthropic's competitors could potentially use the raw chain-of-thought output to train their own models.

While Anthropic's models are used to power a number of commercial coding assistants, the company has its own assistant, Claude Code, which it released as a research preview earlier this year. The assistant is now generally available. Anthropic says it'll start to release model updates more frequently in an effort to 'continuously refine and enhance' its models.


Fast Company
15-05-2025
Trump's Middle East tour is all about AI diplomacy
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

The U.S. enjoys its superpower status mainly because of two things: its military and its financial influence. What we're seeing in Trump's tour of the Middle East this week is the rise of another lever of geopolitical power: AI. And the competition between the U.S. and China in this realm is heating up.

The U.S. is becoming more focused on exporting the best American AI technology to other countries. Trump's lavish reception by heads of state in the Middle East this week can be explained in part by a major policy change: The Trump Commerce Department announced plans Tuesday to rescind Biden's 'AI diffusion rule,' which had restricted the export of the most powerful AI chips to other countries, including those in the Middle East. The removal of the chip restrictions opens big new markets for American AI chipmakers (to wit, Nvidia's stock rose 6% Tuesday) and could spur increased global investment in new AI data centers in the Middle East. Trump announced a series of U.S.-Saudi investment deals, including a partnership between Nvidia and Humain, a newly formed Saudi AI firm backed by the kingdom's sovereign wealth fund. The plan: to build AI data centers 'powered by several hundred thousand of Nvidia's most advanced GPUs.'

The change in posture couldn't be starker. During Biden's presidency, the U.S. took a more cautious approach to AI. Biden-era chip export controls were seen as necessary to protect national security and preserve America's edge in the AI race. Many in the tech industry supported them, at least when it came to chips. Restricting access to the best hardware, as one source put it, was 'one lever that the U.S. can pull' to maintain its lead. The result: U.S.
firms like OpenAI and Anthropic had access to elite silicon, while Chinese competitors like DeepSeek were left scrambling.

But the game has changed since Biden was in office. The U.S. is no longer home to the only company (Nvidia) that can supply chips powerful enough to train state-of-the-art AI models. The Chinese multinational Huawei is now shipping the Ascend 910C, a chip that rivals Nvidia's best, along with a high-end server rack, the CloudMatrix 384, that competes with Nvidia's GB200 NVL72. These systems are powering research inside China and are being pushed into global markets.

That has raised alarms in D.C. The Commerce Department recently warned that organizations using Huawei's Ascend chips could be violating U.S. export rules, since the chips were likely manufactured with U.S.-origin technology. But enforcement will be difficult as more countries seek alternatives or try to hedge their bets between the U.S. and China. AI models and chips offer a new way for state actors to project power on the world stage. That's what's unfolding in the Middle East this week. The Trump administration isn't so much trying to open new markets for Nvidia as it is trying to advance American AI as the prevailing standard around the world.

GOP bill would freeze state AI laws for a decade

A sweeping AI regulatory ban that would prevent states from overseeing the technology for a decade has been quietly inserted into a powerful Republican tax and spending bill currently under review by the House Energy and Commerce Committee. If passed in its current form, the provision would mark a major victory for the U.S.'s largest tech companies, which argue that state-level regulations threaten innovation. It would impose a 10-year freeze on 'any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.'
For companies like Meta, Microsoft, OpenAI, and Alphabet's Google, the provision offers a way to sidestep pending or active state laws that impose stricter oversight than the federal government does. Their pitch in recent months has been that any slowdown in AI development could allow Chinese competitors to outpace the U.S., a message that's resonating with many Republicans.

Currently, these companies face a wave of state-level scrutiny. In this year alone, states have introduced at least 550 AI-related bills—covering issues from deepfakes to algorithmic discrimination—according to a tracker by the National Conference of State Legislatures. And it's only May. The House committee's draft bill could effectively nullify these efforts, a move that has alarmed AI safety advocates and critics of Big Tech, including leading Democrats.

'This is an outrageous abdication of congressional responsibility and a gift-wrapped favor to Big Tech that leaves consumers vulnerable to exploitation and abuse,' said J.B. Branch, Big Tech accountability advocate at Public Citizen. 'This isn't leadership; it is surrendering to corporate overreach and abuse under the guise of 'protecting American innovation.'' Sen. Ed Markey of Massachusetts warned that the proposal 'will lead to a dark age for the environment, our children, and marginalized communities.' Illinois Rep. Jan Schakowsky said the ban would allow 'AI companies to ignore consumer privacy protections, let deepfakes spread, and allow companies to profile and deceive consumers using AI.'

The bill is advancing through Congress via the budget reconciliation process, which allows certain legislation to bypass the Senate filibuster and pass with a simple majority. However, as Bloomberg reported, the provision may not survive this route, since Senate rules require that such measures be primarily fiscal in nature. Still, the proposal offers insight into the GOP's broader stance on AI regulation.
Vice President JD Vance has already cautioned that overregulation could 'kill' the AI industry—a sentiment that appears to be gaining traction among lawmakers.

New Heartland/Rasmussen survey shows 60% of voters say AI companies should pay for lost jobs

A new survey from the Heartland Institute and Rasmussen Reports finds that voters support the idea of AI companies paying reparations for the jobs their technology eliminates. A majority of those surveyed (62%) said that if AI advancements were to cause the elimination of millions of jobs, they would support 'a government program that taxes big technology companies and then uses the funds to provide every American with an income large enough to pay for basic necessities like housing, clothes, and food.'