
Pattern Integrates AWS-Powered Generative AI Solution with Amazon Nova to Enhance Its Ecommerce Acceleration Platform
With Amazon Nova, we're now executing millions of optimizations each day, helping our brand partners win across marketplaces worldwide with our patented technology. — Dave Wright, Co-Founder and CEO of Pattern
An early adopter of generative AI, Pattern launched Content Brief, a content optimization solution for consumer brands powered by Amazon Bedrock and Amazon Nova. Content Brief streamlines the content optimization process, enabling users to quickly analyze millions of data points across text, imagery, and video and giving brands specific recommendations for how to modify product titles, descriptions, and visual assets to significantly improve both traffic and conversion rates. These actionable insights are synthesized into concise reports that deliver clear guidance on the content and image changes proven to drive marketplace performance.
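To make the workflow concrete, the sketch below shows how a listing-optimization request to an Amazon Nova model might be issued through the Amazon Bedrock Converse API. This is an illustrative assumption only: the model ID, prompt wording, and product fields are hypothetical, and Pattern's actual Content Brief pipeline is proprietary and not described in detail here.

```python
# Illustrative sketch: asking an Amazon Nova model (via the Bedrock
# Converse API) to critique marketplace listing content. The model ID,
# prompt, and product fields are assumptions for demonstration.
import json

NOVA_MODEL_ID = "amazon.nova-lite-v1:0"  # assumed model ID for this sketch


def build_optimization_request(product: dict) -> list:
    """Build a Converse-API message asking the model to suggest
    title/description changes that could improve traffic and conversion."""
    prompt = (
        "Review this marketplace listing and suggest specific changes to "
        "the title and description that could improve traffic and "
        "conversion:\n" + json.dumps(product, indent=2)
    )
    return [{"role": "user", "content": [{"text": prompt}]}]


def optimize_listing(product: dict) -> str:
    # Requires AWS credentials and Bedrock model access; not executed here.
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=NOVA_MODEL_ID,
        messages=build_optimization_request(product),
    )
    return response["output"]["message"]["content"][0]["text"]


# Build (but do not send) a sample request.
messages = build_optimization_request(
    {"title": "Wireless Earbuds", "description": "Good sound."}
)
print(messages[0]["role"])
```

In a real deployment the response text would then be aggregated across many listings into the kind of report the article describes, rather than consumed one call at a time.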
"Ecommerce is benefiting disproportionately from generative AI. And from the beginning, Pattern's technology has leveraged AI to drive growth for our brand partners," said Pattern Co-Founder and CEO Dave Wright. "With Amazon Nova, we're now executing millions of optimizations each day, helping our brand partners win across marketplaces worldwide with our patented technology. This collaboration allows us to process vast amounts of data more efficiently, generate insights faster, and constantly build new capabilities that keep our partners ahead of the game."
Pattern's use of Amazon Nova via Amazon Bedrock on AWS marks a significant step in the company's commitment to innovation and efficiency through AI and machine learning. The company has seen a cost reduction of more than 76% compared to similar generative AI models, along with dramatic decreases in processing time. In one example, improvements driven by Pattern's Content Brief led to a 48% lift in traffic, against 8% growth for the category overall.
"Generative AI technology is a game-changing accelerator in so many areas of ecommerce. We are heavily investing in the development of AI applications across our product portfolio to enhance the Pattern experience,' said Pattern's Chief Architect and Technical Advisor to the CEO, Jason Wells. "AWS has been a great partner, providing stability for our applications and confidence to our brand partners. When we brought in Amazon Nova, we realized it was not only reliable and accurate but also affordable – key factors when building software we hope to scale to thousands, if not millions, of users."
"Pattern's use of Amazon Nova demonstrates exactly why we developed this technology: to help companies turn massive, complex datasets into real business outcomes," said Rich Geraffo, Vice President & Managing Director, AWS North America. "By harnessing the power of its trillions of data points and its data platform alongside our generative AI, Pattern is delivering tangible marketplace advantages to brands while drastically reducing their AI processing costs and making their own teams more productive. This exemplifies our commitment to AI innovation that makes customers' lives easier and better, providing the freedom to invent differentiated solutions that would otherwise be unfeasible or cost-prohibitive, all built on a foundation they can trust."
For more information, visit Pattern.com.
About Pattern
Pattern is the category leader for global ecommerce and marketplace acceleration. Since its founding in 2013, Pattern has profitably grown to more than 1,800 team members operating from 18 global locations. Hundreds of global brands depend on Pattern's ecommerce acceleration platform every day to drive profitable revenue growth across hundreds of global marketplaces, including Amazon, Walmart.com, Target.com, eBay, Tmall, JD, and Mercado Libre. For more information about Pattern, please visit Pattern.com.