Amazon will offer OpenAI models to customers for first time


Time of India · 4 days ago
Amazon.com Inc. plans to make OpenAI's new open artificial intelligence models available to customers, the first time the cloud computing giant has offered products from the leading AI startup.
The models can mimic the human process of reasoning, and arrive months after China's DeepSeek gained global attention with its own open AI software. Amazon said it will offer the tools on its Bedrock and SageMaker platforms, adding that their advanced reasoning capabilities make them well suited for AI agents.
OpenAI announced the new 'open weight' models on Tuesday and said they can carry out complex tasks like writing code and looking up information online on a user's behalf.
Amid perceptions that Amazon is lagging behind its Big Tech peers in artificial intelligence, Chief Executive Officer Andy Jassy has positioned Amazon Web Services as a kind of supermarket that sells a range of AI tools to businesses. The company's Bedrock software platform was designed to make it easier to access other companies' large language models, as well as Amazon's own.
The company has partnered with Anthropic, investing $8 billion in the AI startup and using the relationship to bolster its credentials in AI services. AWS offers Anthropic's Claude models to clients on its AI marketplace. Anthropic plans to release a new version of its most powerful AI model on Tuesday that the company claims is more capable at coding, research and data analysis.
Amazon shares rose 1.5% at 1:22 p.m. in New York.
Amazon last week projected weaker-than-expected operating income for the current quarter and trailed the sales growth of its main cloud rivals, leaving investors looking for signs that the company's huge investments in AI are paying off.
During the second quarter, AWS revenue grew a little more than 17% to $30.9 billion, barely surpassing analysts' average estimate of $30.8 billion.
Amazon last year named Matt Garman the cloud division's CEO. A longtime AWS engineering leader who was previously sales chief, Garman succeeded Adam Selipsky.

Related Articles

OpenAI's $500 billion ambition puts it in elite club—and in the crosshairs

Mint · 2 hours ago

Just a week after OpenAI secured fresh funding at a $300-billion valuation, reports emerged of potential share sales at $500 billion. If it goes through, that valuation would place OpenAI among only around 20 companies valued at over half a trillion dollars globally.

OpenAI's latest funding round, worth $8.3 billion, was oversubscribed five times. The investor appetite reflected confidence in the AI startup's ability to dominate a market that UN Trade and Development projects will grow 25-fold within a decade.

OpenAI's momentum is undeniable. The company has continuously upgraded its flagship ChatGPT product, recently launching GPT-5, which it claims can provide PhD-level expertise. Financially, its revenues have doubled in seven months, reaching $1 billion a month, with projections to hit $20 billion in annualised revenue by the end of the year.

The capital influx will primarily help OpenAI scale its compute infrastructure, particularly Stargate, a joint venture with Japanese investment firm SoftBank and technology company Oracle to build the world's largest AI supercomputing infrastructure. OpenAI is also setting up its first data centre in Europe next year, which will house 100,000 Nvidia processors.

This infrastructure investment is critical as companies race to control the data centres and AI chips essential for training and operating advanced artificial intelligence models. The numbers reflect this reality: global data centre capacity surged from 20GW in 2016 to 57GW in 2024, with Goldman Sachs projecting 122GW by 2030. While OpenAI's valuation reflects investor confidence, the fundraising itself underscores the infrastructure investments needed to maintain leadership in the AI market.

Challenger pack

OpenAI faces growing competition from well-funded AI startups. Anthropic, founded by former OpenAI employees, is nearing a $5 billion funding round that would value it at $170 billion, up from $61.5 billion in March. Elon Musk's xAI has raised $10 billion at an $80 billion valuation and is seeking additional funding at a potential $200 billion valuation. Venture capital funding to AI companies has exceeded $40 billion in each of the past three quarters, according to Crunchbase.

This financial backing is translating into competitive model performance. On the GPQA Diamond benchmark, which tests PhD-level science questions, xAI's Grok 4 Heavy scored 88.9% and Anthropic's Claude Opus 4.1 scored 80.9%. The landscape shifted when Chinese startup DeepSeek released powerful open-weight models available for free. OpenAI released its own open-weight models in response. The competition now spans both proprietary and open-source approaches.

Incumbent advantage

OpenAI also faces pressure from the Big Tech firms. Meta, Google, Amazon, and Microsoft have collectively spent $291 billion over the past year, largely on AI infrastructure. Last month, in a $2.4 billion deal, Google hired key executives from Windsurf, an AI coding company that OpenAI wanted to acquire. Google has also integrated 'AI Overviews' with its search engine, turning it into an 'answer engine' that directly competes with the core function of chatbots like ChatGPT. This strategy leverages Google's 2 billion monthly users and its market dominance. Meta, meanwhile, is restructuring its AI division into Meta Superintelligence Labs. It has also acquired top-tier AI researchers from OpenAI, with multi-million-dollar compensation packages.

Partner paradox

OpenAI's relationship with Microsoft, however, has turned complicated. Microsoft, OpenAI's primary backer with a $13.75 billion investment, is also a direct competitor seeking to lead the AI revolution. Copilot, Microsoft's AI platform, boasts over 100 million monthly users. Microsoft's server products and cloud services revenue jumped 27% year-over-year in the three months ended 30 June, driven by growth in Azure, its cloud computing platform.

Microsoft holds crucial leverage as OpenAI attempts to convert into a for-profit company, a prerequisite for unlocking SoftBank funding and IPO plans. However, Microsoft has been withholding approval as both companies negotiate revising their contract, which is set to expire in 2030. A major sticking point is a clause that could terminate Microsoft's access to future OpenAI technology if the startup's board declares that artificial general intelligence (AI's capacity to learn and understand like humans and apply that knowledge to execute tasks) has been achieved. This friction has real consequences: OpenAI's attempt to acquire AI coding startup Windsurf failed because Microsoft's IP rights would have extended to the new technology, which Windsurf rejected. OpenAI needs capital to overcome these structural challenges and funding obstacles.

Alexa got an AI brain transplant. How smart is it now?

Indian Express · 4 hours ago

For the last few years, I've been waiting for Alexa's AI glow-up. I've been a loyal user of Alexa, the voice assistant that powers Amazon's home devices and smart speakers, for more than a decade. I have five Alexa-enabled speakers scattered throughout my house, and while I don't use them for anything complicated — playing music, setting timers and getting the weather forecast are basically it — they're good at what they do.

But since 2023, when ChatGPT added an AI voice mode that could answer questions in a fluid, conversational way, it has been obvious that Alexa would need a brain transplant — a new AI system built around the same large language models, or LLMs, that power ChatGPT and other products. LLM-based systems are smarter and more versatile than older systems. They can handle more complex requests, making them an obvious pick for a next-generation voice assistant.

Amazon agrees. For the last few years, the company has been working feverishly to upgrade the AI inside Alexa. It has been a slog. Replacing the AI technology inside a voice assistant isn't as easy as swapping in a new model, and the Alexa remodel was reportedly delayed by internal struggles and technical challenges along the way. LLMs also aren't a perfect match for this kind of product, which not only needs to work with tons of preexisting services and millions of Alexa-enabled devices but also needs to reliably perform basic tasks.

But finally, the new Alexa — known as Alexa+ — is here. It's a big, ambitious remodel that is trying to marry the conversational skills of generative AI chatbots with the daily tasks that the old Alexa did well. Alexa+, which has been available to testers through an early-access program for a few months, is now being rolled out more widely. I got it recently after I bought a compatible device (the Echo Show 8, which has an 8-inch screen) and enrolled in the upgraded version. 
(Prime members will get Alexa+ at no cost, while non-Prime members will have to pay $19.99 per month.)

The New York Times recently announced a licensing deal with Amazon, which will allow Amazon to use Times content in its AI systems, including Alexa+. The Times is also suing OpenAI, the maker of ChatGPT, and Microsoft for alleged copyright violations related to the training of AI systems.

I have good news and bad news for my fellow Alexa-heads. The good news is that the new Alexa+ is, in fact, more fun to talk to than the old one, with more realistic synthetic voices and a more humanlike cadence. (There are eight voices to choose from; I used the default setting, an upbeat female voice.) And I liked some of Alexa+'s new capabilities, such as booking a table at a restaurant and generating long stories and reading them to my 3-year-old.

The new Alexa is also better at handling multistep requests. 'Set three kitchen timers for 15, 25 and 45 minutes' and 'write a one-day itinerary for a trip to San Diego and send it to my email' were two prompts that worked for me. And Alexa+ doesn't require you to say its wake word every time you talk to it, so you can go back and forth or ask it follow-up questions, which is a nice change.

The bad news is that despite its new capabilities, Alexa+ is too buggy and unreliable for me to recommend. In my testing, it not only lagged behind ChatGPT's voice mode and other AI voice assistants I've tried but also was noticeably worse than the original Alexa at some basic tasks. When I asked Alexa+ to cancel an alarm the other morning — a request I had made to the old Alexa hundreds of times with no issues — it simply ignored me. When I emailed a research paper to alexa@ in order to hear Alexa+ summarize it while I washed the dishes, I got an error message saying the document couldn't be found. Alexa+ also hallucinated some facts and made some inexplicable errors. 
When I asked it to look up Wirecutter's recommended box grater and add it to my Amazon cart, it responded that 'according to Wirecutter, the best box grater is the OXO Good Grips Box Grater.' Wirecutter's actual box grater pick is the Cuisipro 4-Sided Box Grater. Luckily, I caught the mistake before ordering. When I asked Alexa+ to walk me through installing a new AI model on my laptop, it got tripped up and started repeating, 'Oh, no, my wires got crossed.'

And I didn't have access to some of the new Alexa+ features Amazon advertised, such as a 'routine' feature that triggers several different actions when a user enters a room. (I wanted to have Alexa+ greet me in the mornings with a motivational speech and a high-volume rendition of 'Eye of the Tiger,' but the presence-sensing feature hasn't been turned on yet, according to an Amazon spokesperson.)

Daniel Rausch, the Amazon vice president who oversees Alexa and Echo, told me in a recent podcast interview that many of these flaws would be fixed soon as Alexa+ rolled out more widely and more of its features came online. 'We've got some edges to sand,' he said.

Alexa, Give Me 500 Words on the History of Kitchen Timers

Rausch said the biggest challenge in building generative AI models into Alexa was that they were fundamentally different types of systems. The old Alexa, he said, was built on a complicated web of rule-based, deterministic algorithms. Setting timers, playing songs on Spotify, turning off the lamp in your living room — all of these features required calling up different tools and connecting with different interfaces, and they all had to be programmed one by one.

Adding generative AI to Alexa forced Amazon to rebuild many of these processes, Rausch said. Large language models, he said, are 'stochastic,' meaning they operate on probabilities rather than a strict set of rules. That made Alexa more creative but less reliable. It also made the voice assistant slow. 
Rausch recalled an early internal demo in which Alexa+ took more than 30 seconds to play a song — an 'excruciating' delay, he said, that led the team to rethink its approach. 'These models are slow to respond when they're following a deep set of instructions,' he said. 'We're asking them to do something quite hard.'

Another challenge to overcome, Rausch said, was generative AI's wordiness. Initially, when engineers hooked Alexa up to large language models, the system would sometimes produce long, verbose answers or introduce needless complexity. Alexa+ might respond to a user's request for a 10-minute kitchen timer with a 500-word essay about the history of kitchen timers.

The solution, Rausch said, was to spend several years combining more than 70 AI models — some Amazon's proprietary models and some from outside providers, like Anthropic's Claude — into a single, voice-based interface, with an orchestration system that routes a user's request to the model that is best suited to handle it. 'The magic, when it is working really well, is to get those new ways of speaking to Alexa to interface with those predictable outcomes or behaviors,' he said.

There are other barriers, too. One of them, Rausch said, is that many longtime users have learned how to 'speak Alexa,' phrasing their daily requests in familiar commands that they know the system will understand. 'We all sort of came up with our way of setting a timer to get the pasta done on time,' he said. But Alexa+ processes language in a more fluid way. Users can talk to it as they would talk to a human — no robot pidgin required — and that may necessitate some retraining.

I assume that many of the flaws will be ironed out and that most users will acclimate to the new way of speaking to Alexa+. I'm also inclined to cut Amazon some slack, since building LLM-based technology into a reliable voice assistant seems like a thorny technical problem, and it's not like anyone else has solved it. 
(Apple, which has been struggling to give Siri an AI upgrade for years, certainly hasn't.) I also don't think the limitations of Alexa+ suggest that generative AI models are inherently unreliable or that they'll never work as personal voice assistants. Ultimately, I think it's just really hard to combine generative AI with older, legacy systems — a lesson many companies, both inside and outside tech, are learning the hard way right now — and it's going to take some time to work out all the kinks. For now, I'm going to downgrade my devices to the older, less intelligent version of Alexa and leave the beta testing to others. With AI, as with humans, sometimes raw intelligence matters less than how you use it.

