Adobe Project Indigo is a new photo app from former Pixel camera engineers
This week Adobe launched its own take on how smartphone cameras should work: Project Indigo, a new iPhone camera app from some of the team behind the Pixel camera. The project combines the computational photography techniques that engineers Marc Levoy and Florian Kainz popularized at Google with pro controls and new AI-powered features.
In their announcement of the new app, Levoy and Kainz pitch Project Indigo as a better answer to the typical smartphone camera complaints: limited controls and over-processing. Rather than applying aggressive tone mapping and sharpening, Project Indigo is supposed to use "only mild tone mapping, boosting of color saturation, and sharpening." That's intentionally not the same as the "zero-processing" approach some third-party apps are taking. "Based on our conversations with photographers, what they really want is not zero-process but a more natural look — more like what an SLR might produce," Levoy and Kainz write.
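To make that distinction concrete, here is a minimal sketch in Python of what "mild" versus aggressive tone mapping can look like. Adobe hasn't published Indigo's actual curves, so both functions below are illustrative stand-ins, not the app's real pipeline:

```python
import numpy as np

# Illustrative stand-ins only: Adobe has not published Indigo's actual curves.

def mild_tone_map(linear: np.ndarray) -> np.ndarray:
    """Gentle midtone lift that keeps the scene's natural contrast."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / 1.8)

def aggressive_tone_map(linear: np.ndarray) -> np.ndarray:
    """Steep sigmoid that crushes shadows and blows highlights
    (the over-processed 'smartphone look' the authors criticize)."""
    x = np.clip(linear, 0.0, 1.0)
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))

scene = np.linspace(0.0, 1.0, 5)    # linear scene luminance samples
print(mild_tone_map(scene))         # stays close to the input: a natural render
print(aggressive_tone_map(scene))   # midtones exaggerated, extremes clipped
```

The mild curve maps tones nearly one-to-one, which is closer to the "what an SLR might produce" look the authors describe; the sigmoid shows how a steeper curve manufactures punchy contrast at the cost of shadow and highlight detail.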
The new app also has fully manual controls "and the highest image quality that computational photography can provide," whether you want a JPEG or a RAW file at the end. Project Indigo achieves that by dramatically under-exposing each shot and then merging a larger number of them, up to 32 frames, according to Levoy and Kainz. The app also includes some of Adobe's more experimental photo features, like "Remove Reflections," which uses AI to eliminate reflections from photos.
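A toy simulation shows why merging many under-exposed frames helps. Averaging n frames cuts random noise by roughly the square root of n, so a 32-frame burst shot well below metered exposure can approach the noise of a single proper exposure while keeping highlights from clipping. The numbers below are made up for illustration, and the real pipeline also does alignment, robust merging, and demosaicing, none of which is modeled here:

```python
import numpy as np

rng = np.random.default_rng(0)

# All numbers below are made up for illustration.
scene = 0.8            # true linear luminance of an image patch
underexposure = 1 / 8  # shoot well below metered exposure to protect highlights
num_frames = 32        # Indigo merges up to 32 frames, per Levoy and Kainz

def capture_burst(n: int) -> np.ndarray:
    """Simulate n noisy under-exposed captures of the same patch."""
    signal = scene * underexposure
    shot_noise = rng.normal(0.0, np.sqrt(signal) * 0.02, n)  # grows with signal
    read_noise = rng.normal(0.0, 0.01, n)                    # fixed sensor floor
    return signal + shot_noise + read_noise

frames = capture_burst(num_frames)
single = frames[0] / underexposure      # one frame, pushed back up: noisy
merged = frames.mean() / underexposure  # average first, then push up: clean

print(f"single: {single:.3f}  merged: {merged:.3f}  truth: {scene}")
# Averaging n frames cuts random noise by ~sqrt(n), so 32 dark frames can
# approach one well-exposed frame's noise without clipping the highlights.
```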
Levoy left Google in 2020 and joined Adobe a few months later to form a team with the express goal of building a "universal camera app." Based on his LinkedIn profile, Kainz joined Adobe that same year. At Google, Levoy and Kainz were often credited with popularizing computational photography, in which camera apps rely more on software than on hardware to produce quality smartphone photos. Google's success in that arena kicked off a camera arms race that has raised the bar everywhere, but it has also led to some pretty over-the-top photos. Project Indigo is a bit of a corrective, and also an interesting test of whether a third-party app that might produce better photos is enough to beat the default.
Project Indigo is available to download for free now and runs on the iPhone 12 Pro and later, or the iPhone 14 and later. An Android version of the app is coming at some point in the future.
