These Are the Photoshop AI Tools Worth Using: How I Use AI to Edit My Photos
CNET | 03-08-2025
You don't need to be a Photoshop expert to give its new generative AI a test drive. Adobe has added a number of AI features to its premier photo editor over the past few years, and if you use Photoshop regularly, you've probably seen them pop up in your contextual task bar. I spend a lot of time reviewing AI image generators and other AI creative software, so I had to put the original photo editor's AI to the test.
AI might not be right for every project, especially for professional creators who use Photoshop regularly. What I found is that some of these tools can genuinely help your creative workflow, but you'll need to spend some time upfront getting to know the options. These are the tools you should start with, plus some tried-and-tested tips from my experience using Photoshop's AI.
You can access the AI tools in any Photoshop file in the web and desktop apps, and you can now also edit on your iPhone or Android phone, thanks to new mobile apps. I found it easiest to import my favorite photos from my Lightroom albums (all my projects are synced in my Creative Cloud) and then edit on my laptop's bigger screen. Editing on the go on mobile was a great backup option.
The first time you try any of these AI tools, Adobe will ask you to agree to its AI terms of service. The policy states that Adobe won't train its AI models on your content, and you must agree to follow its guidelines, which prohibit things like creating abusive or illegal content. For more, read our reviews of the best AI image generators.
How to create AI images using Photoshop
Adobe's AI image model Firefly is available as a separate app and embedded in Photoshop, so you can use it wherever is easiest for you. If you're already creating in Photoshop, here's how to access Firefly.
Open your Photoshop project. If you're using the most recent version of Photoshop, the contextual task bar should include a Generate image option; you can also go to Edit > Generate image, or click the icon in the left-hand toolbar that looks like an image with an arrow and a sparkle. Enter your prompt, specify the style and upload any reference images, then click Generate. Tab through the different variations using the arrows in the task bar below.
When you're writing your prompt, don't be afraid to add a lot of detail, and put the most important elements at the beginning. Check out our AI image prompt-writing guide for more tips on getting better results. If you're not in love with the images, click the image pop-out window or the icon with four squares to adjust your prompt and reference images. You can also tap the three vertical dots at the end of the task bar to give feedback on the generations, remove the background or generate similar images. If you still don't like what you've got, I recommend starting over with a new prompt rather than endlessly tweaking and regenerating in hopes of getting it right.
How to use generative features in Photoshop
You can also use generative AI tools to edit your existing project in Photoshop. Generative fill, expand and remove are some of the most popular AI tools. Here's how to use each.
Generative fill is like a miniaturized AI image generator, and it's one of the most popular AI tools in Photoshop. Select a specific region of your project, enter a text prompt, and it will create a new design for that area. You can find it by going to Edit > Generative fill. Use the selection brush tool to mark the area where you want the new elements to appear, type in your prompt and click Generate.
Generative expand is useful when you need more space in an image. It can create new sections that blend seamlessly with your current image, or you can enter a text prompt to generate new scenery; it's also handy for enlarging your project. To use it, select the crop tool, pull the canvas out to whatever size you want, enter a prompt if you'd like and click Generate.
I edited my original image (left) to include more sky and sand with generative expand, then added AI seagulls with generative fill.
Katelyn Chedraoui/CNET
Generative remove is like an AI-supercharged eraser: it isolates and removes specific elements from your project without disrupting the rest of the image. There are two ways to remove objects from your work. The first is to select the object with the object select tool, click Generative fill and type the word "remove" as the prompt. Alternatively, you can use the remove tool (Spot healing tool > Remove tool) to manually highlight the objects you want erased.
Read more: Photoshop's Perfect Blend Concept Stuns With Composite Photos
Other AI tools you can try out in Photoshop
Sky replacement is a cool AI tool that can add drama to your landscapes. You can swap in an alternative sky by navigating to Edit > Sky replacement and selecting from a variety of presets, including sunsets, blue skies and colorful options labeled "spectacular." Once you've chosen a preset you like, you can manually adjust the brightness and other elements.
In this instance I used AI sky replacement to add some Carolina blue skies (right) to my original shot of Kenan Stadium (left).
Katelyn Chedraoui/CNET
Generate background works well for product photography or other shots where the subject or object is the focal point of the image. Upload your shot to Photoshop, click Remove background in the contextual task bar (the pinned box that pops up when you select the layer), then click Generate background. Some backgrounds turn out better than others: the cityscapes I generated looked somewhat fake, but colored and patterned backgrounds came out great.
There are some other AI tools that may be right for you depending on your project. Neural filters can be used in more detailed photo editing, and the curvature pen can help designers make more consistent-looking arcs. We're also expecting to see more AI-powered editing tools introduced in Photoshop this year.
Overall, Photoshop's AI suite performed well. Some of the tools were helpful, but you have to know what each one is designed for to get the best results. For example, generative expand was good for resizing photos, and I found generative remove great for erasing photobombing objects. Other tools, like sky replacement, worked for some photos but not others. I won't use Photoshop's AI for every project, but I do think there will be times when it's a good option.
For more, check out Adobe's AI video generator and AI updates in Premiere Pro.
