Google's new AI app Doppl lets you try on outfits virtually


Engadget · 6 hours ago

Google Labs is making virtual outfit try-ons available to all with a new experimental AI app called Doppl, the company announced in a blog post. You can upload a photo of yourself and any outfit to see how it would look on you, and you can even create an AI-generated video of yourself and the clothing in motion.
To use it, first upload a full-body photo of yourself, then choose photos or screenshots of outfits. For instance, you can screenshot or download photos from sources like Pinterest or clothing websites, or take photos of clothing in places like thrift stores. You could even snap a photo of a friend wearing an outfit you like.
Once the outfit is selected, Doppl (short for doppelgänger, one imagines) will create an AI-generated image of you wearing it and can even convert the static image into a moving video. You can continue to browse through outfits, save your favorites and share different looks. It may not work perfectly for you — Google pointed out that "Doppl is in its early days and... fit, appearance and clothing details may not always be accurate."
Google recently unveiled a similar try-on feature for its Shopping experience, but Doppl works strictly as a standalone app. It looks like the kind of thing people could have some fun with, particularly on social media, but it may also aid Google in gathering data on users' buying and shopping habits. The app is now available on iOS and Android, but only in the US for now. If you buy something through a link in this article, we may earn commission.


Related Articles

Reddit will play an important role in the AI age, says COO

Yahoo · 29 minutes ago

Content publishers such as Reddit (RDDT) are facing challenges as AI advances, with concerns that AI search will drive traffic away from the original sites. Reddit COO Jennifer Wong joins Yahoo Finance Executive Editor Brian Sozzi for an interview at Cannes Lions 2025 to discuss the company's investments in AI and its lawsuit against Anthropic. Reddit alleges the Amazon-backed (AMZN) startup scraped its users' personal data without their consent, then used the data to train its large language model, Claude. To watch more expert insights and analysis on the latest market action, check out Opening Bid here.

Sozzi: Google is wreaking havoc on the publishing industry. I think of what they're doing with their AI snippets; it's really causing massive issues for media enterprises. How are the changes they've made causing disruption at Reddit?

Wong: Search is under heavy construction right now, because there's a change happening with the introduction of LLMs and search generative results, and Google has its own canyon to cross in bridging the traditional UI to this new experience. Everyone's feeling the volatility of that. Our corpus is obviously big, and that's a source of traffic. But when I pull up and think about the long term, I think that there is, in the end, no artificial intelligence without human intelligence. Reddit plays a really big role in that, both in broad-based search, whatever that looks like in the end (maybe with summarization through LLMs), and in the opportunity to have on-site search. We are one of the few that has a really significant corpus of human opinion that regenerates in real time and responds to everything happening in the world. That also creates an incredible opportunity for us to bring a unique search experience onto Reddit.
Sozzi: Reddit recently sued Anthropic for allegedly scraping your site. How were you able to determine that they were allegedly doing that?

Wong: Well, we know. We're able to see what happens on our property. But what I would say is that what's important to us is being able to protect our users' privacy and their deletion rights. We have policies that ensure that when users take down a post, the post is taken down. So it's really important, as we said in our terms of service, that we have a conversation with folks who have access to our data, because that's a commitment we have in our policies, and because we need to know how Reddit data is used. That's very important to us, and we will ensure that that's the case.

Sozzi: Is the best outcome that Anthropic would pay Reddit for your content, or license it?

Wong: I can't comment further on outcomes beyond what's been filed publicly. But I will say that following our policies is incredibly important to us.

Sozzi: Looking ahead on AI, how will it show up throughout the platform? I know you're making a big push internationally, for example.

Wong: AI is an incredible tool for making our products better, and you see it throughout Reddit already today. Our strategy is, number one, improving that core community use case. One of the things AI helps us and our moderators do is scale moderation: it helps people post content more easily, helps them follow the rules of communities, and helps moderators moderate. That's incredible in terms of scale.
In terms of growing internationally: Reddit is a lot of conversation and words, so AI allows us to machine-translate global conversations into multiple languages. We're doing this today in French and German, and in almost 12 languages overall. These are conversations about universal life experiences, like parenting or pop culture, that really apply and can be enjoyed globally once you take out the language friction. That's a really exciting application. You also see AI woven into our advertising. Yesterday's announcement, which puts an LLM layer on top of the Reddit corpus, creates insights that help businesses and brand marketers understand where their opportunities are, what people think about their brands, and where they can find communities to engage and grow their businesses. I think AI is unlocking better and better product experiences. It'll make Reddit better and better.

The biggest AI companies you should know

Yahoo · an hour ago

AI continues to be the hottest trend in tech, and it doesn't appear to be going away anytime soon. Microsoft (MSFT), Google (GOOG, GOOGL), Meta (META), and Amazon (AMZN) continue to debut new AI-powered software capabilities while leaders from other AI firms split off to form their own startups. But the furious pace of change also makes it difficult to keep track of the various players in the AI space. With that in mind, we're breaking down what you need to know about the biggest names in AI and what they do. From OpenAI to Perplexity, these are the AI companies you should be following. Microsoft-backed OpenAI helped put generative AI technology on the map. The company's ChatGPT bot, released in late 2022, quickly became one of the most downloaded apps in the world. Since then, the company has launched its own search engine, its 4o image generator, a video generator, and a file uploader that lets you ask the bot to summarize the contents of your documents, as well as access to specialized first- and third-party GPT bots. Microsoft uses OpenAI's various large language models (LLMs) in Copilot and other services. Apple (AAPL) also offers access to ChatGPT as part of its Apple Intelligence and Visual Intelligence services. But there's drama behind the scenes. OpenAI is working to restructure its business into a public benefit corporation overseen by its nonprofit arm, which would allow it to raise more capital. To do that, it needs Microsoft's sign-off, but the two sides are at loggerheads over the details of the plan and what it means for each company. In the meantime, both OpenAI and Microsoft are reportedly working on products that will compete with each other's existing offerings: Microsoft offers its own AI models, and OpenAI is developing a productivity service, according to The Information. Still, the pairing has been lucrative for both tech firms.
During its most recent quarterly earnings call, Microsoft said AI revenue was above expectations and contributed 16 percentage points of growth for the company's Azure cloud business. OpenAI, meanwhile, saw its annualized revenue run rate balloon to $10 billion as of June, according to Reuters. That's up from $5.5 billion in Dec. 2024. OpenAI offers a limited free version of its ChatGPT bot, as well as ChatGPT Plus, which costs $20 per month, and enterprise versions of the app. Google's Gemini offers search functionality using the company's Gemini 2.5 family of AI models. You can choose between using Gemini Flash for quick searches or Gemini Pro, which is meant for deep research and coding. Gemini doesn't just power Google's Gemini app. It's pervasive across Google's litany of services. Checking your email or prepping an outline in Docs, Gemini is there. Get an AI Overviews result when using standard Google Search? That's Gemini too. Google Maps? That also takes advantage of Gemini. Chrome, YouTube, Google Flights, Google Hotels — you name it, it's using Gemini. But Google's Gemini, previously known as Bard, got off to a rough start. When Google debuted its Gemini-powered AI Overviews in May 2024, it began offering up wild statements like recommending users put glue on their pizza to help make the cheese stick. But during its I/O developer conference in May, Google showed off a number of impressive new developments for Gemini, including its updated video-generation software Veo 3 and Gemini running on prototype smart glasses. A limited version of Gemini is available to use for free. A paid tier that costs $19.99 per month gives you access to advanced AI models and integration with Google's productivity suite. A $249 subscription lets you use Google's most advanced Gemini models and 30TB of storage via Google Drive, Photos, and Gmail. 
Mark Zuckerberg's Meta has gone through a number of transformations over the years, from desktops to mobile to short-form video to an ill-advised detour into the metaverse. Now the company is leaning heavily into AI with the goal of dominating the space so it doesn't have to rely on technologies from rivals like Apple and Google, as it did during the smartphone wars. It helps that Meta has a massive $70 billion in cash and marketable securities on hand that it can deploy at a moment's notice, and data from billions of users to train its models. Unlike most competitors, Meta is offering its Llama family of AI models as open-weights software, which means companies and researchers can adjust the models as they see fit, though they don't get access to the original training data. More people developing apps and tools that use Llama means Meta effectively gets to see how its software can evolve without having to do extra work. But Llama 4 Behemoth, the company's massive LLM, has been delayed by months, according to the Wall Street Journal. Seemingly to offset similar delays going forward, Meta is scooping up AI talent left and right. The company invested $14.3 billion in Scale AI and hired its CEO, Alexandr Wang. Meta also grabbed Safe Superintelligence CEO Daniel Gross and former GitHub CEO Nat Friedman. Meta's AI, like Google's, runs across its various platforms, including Facebook, Instagram, and WhatsApp, as well as its smart glasses. Founded in 2021 by siblings and ex-OpenAI researchers Dario and Daniela Amodei, Anthropic is an AI company focused on safety and trust. The duo split off from OpenAI over disagreements related to AI safety and the company's general direction. Like OpenAI, Anthropic has accumulated some deep-pocketed backers, including Amazon and Google, which have already poured billions into the company. The company's Claude models are available across various cloud services.
Anthropic's Claude chat interface offers a host of capabilities, including web search, coding, and writing and drafting documents. Anthropic also allows users to build what it calls artifacts, which are documents, games, lists, and other bite-sized pieces of content you can share online. In June, a federal judge sided with Anthropic in a case in which the company was accused of breaking copyright law by training its models on copyrighted books. But Anthropic allegedly downloaded pirated versions of some books and will now face trial over that charge. Elon Musk's xAI, a separate company from X Corp, which owns X (formerly Twitter), offers its own Grok chatbot and Grok AI models. Users can access Grok through a website, an app, and X. Like other AI services, it allows users to search for information via the web, generate text and images, and write code. The company trains Grok on its Colossus supercomputer, which xAI said will eventually include 1 million GPUs. According to Musk, Grok was meant to have an edgy flair, though like other chatbots, it has been caught spreading misinformation. Musk previously co-founded OpenAI with Sam Altman but left the company after disagreements over its future and leadership. In 2024, Musk filed a lawsuit against OpenAI and Sam Altman over the AI company's effort to restructure itself as a for-profit organization. Musk says OpenAI has abandoned its original mission to build AI that benefits humanity and is instead working to enrich itself and Microsoft. Perplexity takes a real-time web search approach to AI chatbots, making it a genuine threat to the likes of Google and its search engine. Headed by CEO Aravind Srinivas, who previously worked as a research scientist at OpenAI, Perplexity allows users to choose from a number of different AI models, including OpenAI's GPT-4.1, Anthropic's Claude 4.0 Sonnet, Google's Gemini 2.5 Pro, xAI's Grok 3, and the company's own Sonar.
Perplexity also provides users with Discover pages for topics like finance, sports, and more, with stories curated by both the Perplexity team and outside contractors. As with other AI companies, Perplexity has been criticized by media organizations for allegedly using their content without permission. Dow Jones is suing the company over the practice. Email Daniel Howley at dhowley@ Follow him on X/Twitter at @DanielHowley.

Adobe's new camera app is making me rethink phone photography

The Verge · an hour ago

Adobe's Project Indigo is a camera app built by camera nerds for camera nerds. It's the work of Florian Kainz and Marc Levoy, the latter of whom is known as one of the pioneers of computational photography for his work on early Pixel phones. Indigo's basic promise is a sensible approach to image processing while taking full advantage of computational techniques. It also invites you into the normally opaque processes that happen when you push the shutter button on your phone camera — just the thing for a camera nerd like me. If you hate the overly aggressive HDR look, or you're tired of your iPhone sharpening the ever-living crap out of your photos, Project Indigo might be for you. It's available in beta on iOS, though it is not — and I stress this — for the faint of heart. It's slow, it's prone to heating up my iPhone, and it drains the battery. But it's the most thoughtfully designed camera experience I've ever used on a phone, and it gave me a renewed sense of curiosity about the camera I use every day. You'll know this isn't your garden-variety camera app right from the onboarding screens. One section details the difference between the two histograms available to use with the live preview image (one is based on Indigo's own processing and one is based on Apple's image pipeline). Another line describes the way the app handles processing of subjects and skies as 'special (but gentle).' This is a camera nerd's love language. The app isn't very complicated. There are two capture modes: photo and night. It starts you off in auto, and you can toggle pro controls on with a tap. This mode gives you access to shutter speed, ISO, and, if you're in night mode, the ability to specify how many frames the app will capture and merge to create your final image. That rules. Indigo's philosophy has as much to do with image processing as it does with the shooting experience.
A blog post accompanying the app's launch explains a lot of the thinking behind the 'look' Indigo is trying to achieve. The idea is to harness the benefits of multi-frame computational processing without the final photo looking over-processed. Capturing multiple frames and merging them into a single image is basically how all phone cameras work, allowing them to create images with less noise, better detail, and higher dynamic range than they'd otherwise capture with their tiny sensors. Phone cameras have been taking photos like this for almost a decade, but over the past couple of years, there's been a growing sense that processing has become heavy-handed and untethered from reality. High-contrast scenes appear flat and 'HDR-ish,' skies look more blue than they ever do in real life, and sharpening designed to optimize photos for small screens makes fine details look crunchy. Indigo aims for a more natural look, as well as ample flexibility for post-processing RAW files yourself. Like Apple's ProRAW format, Indigo's DNG files contain data from multiple, merged frames — a traditional RAW file contains data from just one frame. Indigo's approach differs from Apple's in a few ways; it biases toward darker exposures, allowing it to apply less noise reduction and smoothing. Indigo also offers computational RAW capture on some iPhones that don't support Apple's ProRAW, which is reserved for recent Pro iPhones. After wandering around taking photos with both the native iPhone camera app and Indigo, the difference in sharpening was one of the first things I noticed. Instead of seeking out and crunching up every crumb of detail it can find, Indigo's processing lets details fade gracefully into the background. I especially like how Indigo handles high-contrast scenes indoors. White balance is slightly warmer than the standard iPhone look, and Indigo lets shadows be shadows, where the iPhone prefers to brighten them up. It's a whole mood, and I love it. 
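The noise benefit of multi-frame merging described above is easy to demonstrate. Here's a toy sketch (not Adobe's or Apple's actual pipeline) that simulates averaging several exposures of the same flat gray scene, assuming simple Gaussian sensor noise; averaging N frames cuts random noise by roughly a factor of the square root of N:

```python
# Toy illustration of multi-frame merging: averaging N noisy captures of the
# same scene reduces random sensor noise by about sqrt(N). Real camera
# pipelines also align frames and reject motion, which is omitted here.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.5)   # "true" flat gray scene
noise_sigma = 0.1                # per-frame sensor noise (assumed Gaussian)

def merge(num_frames: int) -> np.ndarray:
    """Simulate num_frames noisy captures and merge them by averaging."""
    frames = scene + rng.normal(0, noise_sigma, size=(num_frames, *scene.shape))
    return frames.mean(axis=0)

single = merge(1)
merged = merge(16)

print(f"1-frame residual noise:  {np.std(single - scene):.3f}")
print(f"16-frame residual noise: {np.std(merged - scene):.3f}")
```

With 16 frames, the residual noise drops to roughly a quarter of the single-frame level, which is why night modes stack many exposures and why merged images can tolerate darker, less noise-smoothed exposures.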
High-contrast scenes outdoors tend toward a brighter, flat exposure, but the RAW files offer a ton of latitude for bringing back contrast and pumping up the shadows. I don't usually bother shooting RAW on a smartphone, but Indigo has me rethinking that. Whether you're shooting RAW or JPEG, Indigo (and the iPhone camera, for that matter) produces HDR photos — not to be confused with a flat, HDR-ish image. I mean the real HDR image formats that iOS and Android now support, using a gain map to pop the highlights with a little extra brightness. Since Indigo isn't applying as much brightening to your photo, those highlights pop in a pleasant way that doesn't feel eye-searingly bright the way it sometimes can in the standard camera app. This is a camera built for an era of HDR displays, and I'm here for it. According to the blog post, Indigo captures and merges more frames for each image than the standard camera app. That's all pretty processor-intensive, and it doesn't take much use to trigger a warning in the app that your phone is overheating. Processing takes more time and is a real battery killer, so bring a battery pack on your shoots. It all makes me appreciate even more the job the native iPhone camera app has to do. It's the most popular camera in the world, and it has to be all things to all people all at once. It has to be fast and battery-efficient. It has to work just as well on this year's model, last year's model, and a phone from seven years ago. If it crashes at the wrong time and misses a once-in-a-lifetime moment, or underexposes your great-uncle Theodore's face in the family photo, the consequences are significant. There are only so many liberties Apple and other phone camera makers can take in the name of aesthetics. To that end, the iPhone 16 series includes revamped Photographic Styles, allowing you to fine-tune the tone map applied to your images to tweak contrast, warmth, or brightness.
It doesn't offer the flexibility of RAW shooting — and you can't use it alongside Apple's RAW format — but it's a good starting point if you think your iPhone photos look too flat. Between Photographic Styles and ProRAW, you can get results from the native camera app that look very similar to Project Indigo's output. But you have to work for it; those options are intentionally out of reach in the main camera app and abstracted away. ProRAW files still look a little crunchier than Indigo's DNGs, even when I take them into Lightroom and turn sharpening all the way down. Both Indigo's DNGs and ProRAW files include a color profile to act as a starting point for edits; I usually preferred Indigo's warmer, slightly darker image treatment. It takes a little more futzing with the sliders to get a ProRAW image where I like it. Project Indigo invites you into the usually mysterious process of taking a photo with a phone camera. It's not an app for everyone, but if that description sounds intriguing, then you're my kind of camera nerd. Photography by Allison Johnson / The Verge
