Latest news with #AGISFLab
Yahoo
20-05-2025
- Business
- Yahoo
Amazon's Danielle Perszyk is coming to TechCrunch Sessions: AI
We're thrilled to announce that Danielle Perszyk, the leader of Amazon AGI SF Lab's human-computer interaction efforts, will be taking the main stage at TechCrunch Sessions: AI on June 5 at UC Berkeley's Zellerbach Hall. The AGI SF Lab is at the forefront of developing foundational capabilities for AI agents that can operate in the real world — and Danielle is driving that vision forward. Don't miss this rare opportunity to hear directly from one of the key minds shaping the future of practical, agentic AI.

For a limited time, save over $300 on your ticket — and get 50% off a second for your plus one. Don't wait; this offer won't last long. Register now before it expires! Join us at TC Sessions: AI for a full day of groundbreaking programming, interactive sessions, live demos, and high-impact networking with the brightest minds in AI. Bring a friend or colleague — because big ideas are better shared, and their ticket is half off.

With new, more powerful AI models launching seemingly every week, the pace of innovation is both thrilling and overwhelming. In this dynamic conversation, Logan Kilpatrick, Senior Product Manager at Google DeepMind, Jae Lee, CEO of Twelve Labs, and Danielle Perszyk, PhD, Cognitive Scientist and Member of Technical Staff at Amazon AGI SF Lab, will share firsthand insights from the front lines of AI development. Together, they'll explore how startups can not only build on top of today's leading foundation models but also adapt and scale in a rapidly evolving landscape. From choosing the right models to anticipating future shifts, this session will equip founders, builders, and product leaders with strategies to stay ahead, stay relevant, and seize the opportunities of the AI era.

Danielle Perszyk is a cognitive scientist and member of the technical staff at Amazon's AGI SF Lab, where she leads the Human-Computer Interaction (HCI) team. Her work focuses on developing foundational capabilities for practical AI agents that can operate effectively in both digital and physical environments. Danielle earned her PhD from Northwestern University, where she studied the evolution of language and the development of social cognition. Prior to joining Amazon, she contributed to AI initiatives at Adept and Google, bringing a unique interdisciplinary perspective to the challenges of building intelligent, interactive systems.

At TechCrunch Sessions: AI, Danielle Perszyk will bring her deep expertise in cognitive science and human-computer interaction to a must-see panel on how founders can harness foundational models to scale AI in powerful, practical ways. Don't miss this chance to learn from one of the minds shaping the future of agentic AI, while pocketing up to $600 in ticket savings.
Yahoo
31-03-2025
- Business
- Yahoo
Amazon unveils AI that can take control of your web browser
Amazon has unveiled a new artificial intelligence model that can take control of a user's web browser and perform actions on their behalf. The general-purpose AI agent, called Nova Act, is part of a new generation of artificial intelligence that gives AI chatbots the ability to act semi-autonomously and serve as digital assistants.

'We think of agents as systems that can complete tasks and act in a range of digital and physical environments on behalf of the user,' an Amazon blog post stated. 'Today, such agents are still in an early stage. The Nova Act SDK is a crucial step forward, toward building reliable agents by enabling developers to break down complex workflows into atomic commands (e.g., search, checkout, answer questions about the screen).'

Amazon Nova Act is the first AI model to come out of the tech giant's new AGI SF Lab, which is dedicated to developing artificial general intelligence, or human-level intelligence. Led by former OpenAI researchers, the lab claims that Nova Act is able to outperform rival AI agents developed by Anthropic and OpenAI. It is currently available only to developers, who will use it to build real-world applications, such as making online reservations at a restaurant or ordering food. Once ready, these tools could potentially be rolled out via Amazon's Alexa voice assistant to offer the functionality to its millions of users.

Earlier this month, researchers in China unveiled an advanced AI agent called Manus that is capable of carrying out a vast range of tasks independently, from buying property to developing video games. Yichao Ji, co-founder and chief scientist of Manus AI, described it as 'the next evolution of AI' and 'the next paradigm of human-machine collaboration'. Victor Mustar, head of product at AI platform Hugging Face, said it was 'mind-blowing' in its capabilities. 'Got access and it's true... Manus is the most impressive AI tool I've ever tried,' he wrote on X. 'The user experience is what so many others promised... but this time it just works.'
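To make the 'atomic commands' idea concrete, here is a minimal sketch of what a Nova Act SDK script looks like. It follows the quickstart pattern Amazon published alongside the release, but the exact class and method names, and the example prompts, should be read as assumptions rather than a definitive reference for the API.

```python
# Requires the nova-act package and an API key from Amazon's preview program.
from nova_act import NovaAct

# Instead of one open-ended goal, the workflow is split into small,
# individually verifiable steps — the "atomic commands" from Amazon's post.
with NovaAct(starting_page="https://www.amazon.com") as nova:
    nova.act("search for a coffee maker")   # step 1: search
    nova.act("select the first result")     # step 2: choose an item
    nova.act("add the item to the cart")    # step 3: begin checkout
```

Each act() call is a checkpoint: if a step fails, the developer knows exactly where, which is the reliability argument the Amazon post makes for breaking workflows down rather than issuing a single end-to-end instruction.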


WIRED
31-03-2025
- Business
- WIRED
Amazon's AGI Lab Reveals Its First Work: Advanced AI Agents
Mar 31, 2025 9:00 AM

Led by a former OpenAI executive, Amazon's AI lab focuses on the decision-making capabilities of the next generation of software agents — and borrows insights from physical robots.

Amazon is still seen as a bit of a laggard in the race to develop advanced artificial intelligence, but it has quietly created a lab that is now setting records when it comes to AI performance. Amazon's AGI SF Lab, which is located in San Francisco and dedicated to building artificial general intelligence, or AI that surpasses the capabilities of humans, revealed the first fruits of its work today: a new AI model capable of powering some of the most advanced AI agents available anywhere.

The new model, called Amazon Nova Act, outperforms ones from OpenAI and Anthropic on several benchmarks designed to gauge the intelligence and aptitude of AI agents, Amazon says. On the benchmarks GroundUI Web and ScreenSpot, Amazon Nova Act performs better than Claude 3.7 Sonnet and OpenAI's Computer Use Agent. A major part of Amazon's plan to compete in the AI market is to focus on building agents, and the new model's abilities reflect its efforts to build a generation of tools that can measure up to the very best available.

'I believe that the basic atomic unit of computing in the future is going to be a call to a giant [AI] agent,' says David Luan, who leads Amazon's AGI SF Lab. He was previously a vice president of engineering at OpenAI and later cofounded Adept, a startup that pioneered work on AI agents, before joining Amazon in 2024 when the ecommerce giant took a stake in the company.

Most of the leading AI labs are now focused on building increasingly capable AI agents. Getting AI to master independent actions, as well as conversation, promises to make the technology more useful and valuable. The shift from chat to action is still very much a work in progress, however. In the past six months, OpenAI, Anthropic, Google, and others have demonstrated web-browsing agents that take actions in response to a prompt. But for the most part, these agents are still unreliable, and they can easily be tripped up by open-ended requests.

Luan says that Amazon's goal is building AI agents that are dependable rather than flashy. The thing holding agents back is not the need for 'more cool demos of interesting capabilities that work 60 percent of the time, it's the Waymo problem,' he says, referring to how self-driving cars needed to be trained to deal with unusual edge cases before they could take to the streets unsupervised. Many so-called agents are built by combining large language models with human-written rules designed to prevent them from veering off course, but this scaffolding also makes their behavior brittle.

Amazon Nova Act is a version of the company's most powerful homegrown model, Amazon Nova, that has received additional training to help it make decisions about what actions to take and when. In general, Luan says, AI models struggle to decide when they should intervene in a task. To improve Nova's agentic abilities, Amazon is using reinforcement learning, a method that has helped other AI models better simulate reasoning.

Amazon is also taking inspiration from physical robots with its new models. Luan's team is working with another group at Amazon based in San Francisco led by Pieter Abbeel, a professor at the University of California, Berkeley, who works on finding AI applications for robotics.
Abbeel, a fellow early OpenAI employee, joined Amazon in August 2024 after it invested in his startup, Covariant. Amazon is well positioned to make progress in robotics given the vast number of robots already deployed in its fulfillment centers.

The release of Amazon Nova Act suggests that Amazon could emerge as a dark horse in the race to create useful software agents. The company was slow in responding to ChatGPT, but it has more recently shown signs of getting its act together. In February, the company announced a new version of its voice assistant Alexa with improved conversational abilities as well as the capacity to automate certain web tasks. One use case Amazon cited is Alexa helping to book a repair service for a broken oven. Luan says Alexa's new agentic capabilities were developed by his team. As WIRED revealed last October, Amazon has also done research on how agents might eventually improve ecommerce by automating the process of finding and buying things. Such an agent might preemptively add items to a user's cart based on their interests and habits, Amazon's engineers said.

Besides unveiling the new model, Amazon today announced a software development kit (SDK) designed to make it easier for computer engineers to use Amazon Nova Act to build software agents. The SDK lets developers give their agent specific instructions to help it navigate an internet built for human users. For example, an agent can be instructed 'don't accept the insurance upsell' when booking a rental car. Ultimately, Luan says, Amazon's agents should become smart enough not to fall for the upsell on their own. 'Nova Act is really like the very first step in that vision,' he says.
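In practice, that kind of guardrail is just another natural-language instruction handed to the agent. A rough sketch, reusing the same assumed SDK surface as the earlier example: the rental-site URL and prompts are placeholders, and only the 'insurance upsell' instruction itself comes from the article.

```python
from nova_act import NovaAct  # same assumed API as in the earlier sketch

# Hypothetical rental-car booking flow; URL and prompts are illustrative.
with NovaAct(starting_page="https://rentals.example.com") as nova:
    nova.act("search for a compact car in San Francisco for this weekend")
    # The article's guardrail, expressed as plain natural language:
    nova.act("proceed to checkout, but don't accept the insurance upsell")
```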