Adobe Updates Creative Cloud And Firefly At Adobe MAX London 2025

Forbes | 01-05-2025

Attendees gather for the opening keynote at Adobe MAX London.
Adobe MAX has always been a conference where the company puts creativity at the forefront. I've experienced this firsthand by attending U.S. versions of the event over the past few years. Last week's Adobe MAX 2025 event in London continued this tradition, featuring a broad range of updates and new features across the company's Creative Cloud suite and the Firefly generative AI platform. These announcements highlight Adobe's emphasis on enhancing creative tools for professionals and emerging generations of creators, focusing on generative AI, collaborative workflows and accessibility across devices.
As I break down the important announcements from the event, I'll also share some insights gleaned from my briefings with company leaders before Adobe MAX London.
(Note: Adobe is an advisory client of my firm, Moor Insights & Strategy.)
At MAX, Adobe introduced several enhancements to Firefly, its generative AI platform. The new Firefly Image Model 4 offers 2K native-resolution output and improved rendering of humans, animals, architecture and landscapes. These capabilities are designed to give advanced users greater control through detailed prompts, style references and structural guidance. Adobe has initially targeted creative professionals who are comfortable with these more advanced techniques, but it plans to make the offering more user-friendly for all users over the next six to eight weeks.
The Firefly platform also includes a generative video module, now generally available, positioned as a commercially safe solution for creating production-ready video content for designers and professionals. Adobe also announced additional capabilities such as text-to-avatar, text-to-vector and image-to-video modules, providing users with expanded ways to create and control multimedia assets.
In a significant move, Firefly will now integrate third-party generative models, including those from Google. This integration will allow users to select from multiple models and capabilities within the same workflow. Adobe executives stressed the importance of understanding each model's strengths and nuances to achieve optimal results.
Ross McKegney, a product leader at Adobe, noted that different models have unique characteristics. 'They're a whole new category of models, [not only] in terms of what they're able to do, but also the way you interact with them,' he said. He emphasized that Adobe is focused not just on prompts but more broadly on understanding how to interact with different models through various types of controls. Specifically, he mentioned that prompts have limited use for complex tasks such as video generation, and he advocates for more nuanced control methods, including reference images or specific structural controls.
In this context, McKegney also talked about how the company is building out instruction-based interfaces 'so you can just take something and describe how you want to change it.' This applies to both Firefly AI models and to other models in the Adobe ecosystem. In his presentation, he also discussed the need to understand the 'personality' of each model.
Another important addition announced at MAX was Firefly Boards, a multiplayer, generative AI-driven platform designed for creative collaboration. This platform offers an 'infinite canvas' where multiple users can work together, combining generative and traditional creative tools to support structured and unstructured creative exploration. Firefly Boards differs from traditional whiteboarding products, which usually focus on drawing and sticky notes. It offers AI-powered tools that allow users to generate and edit images, apply different visual styles and remix content. Adobe says it is especially suited for designers, creative teams, marketers and content creators who need to quickly develop and organize visual ideas such as mood boards, storyboards or pitch materials with more flexibility than traditional whiteboarding tools provide. Integrating Firefly Boards with Adobe's creative applications aims to facilitate a smooth transition from design to production.
Adobe envisions Firefly Boards as enabling teams with varying skill levels to learn from each other and experiment in real time. This collaborative model aims to help teams transition from initial brainstorming to more structured creative outcomes by leveraging each participant's strengths and the flexibility of generative AI. Highlighting the internal success of the tool, McKegney explained its value as 'having multiple people come into one place where maybe one person on the team is better with generative AI than the other to help you organize your thoughts into more linear storytelling.'
Adobe unveiled several updates to Adobe Express, the company's web-based design tool, targeting video and animation workflows. New features include automatic captioning, video reframing, enhanced audio noise removal and physics-based animation presets. The platform now supports generating videos from text and image prompts, as well as creating image collections with consistent branding. All of this aims to make content creation more accessible and efficient for both casual and professional users.
Adobe's core Creative Cloud applications — Photoshop, Illustrator and Premiere Pro — received updates at MAX London, but the changes were focused mainly on performance improvements, workflow refinements and incremental feature additions. In Photoshop, Adobe introduced faster editing, an expanded range of AI-assisted tools and improved features for selection and color adjustment. However, these were extensions of existing capabilities rather than dramatic new directions. Illustrator's update included improvements to speed, responsiveness and ease of use, with a handful of new creative options, but nothing transformative. Premiere Pro gained enhanced AI tools and editing workflows, also aimed at making existing tasks faster and more efficient. Adobe says the changes were informed by ongoing user feedback and extensive beta testing.
Adobe also introduced updates to its Content Authenticity Initiative, including the general availability of its web app that allows creators to verify their identity and tag their content with credentials. This tool enables creators to control how their work is used in generative AI models and provides greater transparency regarding the provenance of digital assets. Adobe's initiative should set a precedent for creator-first standards and lay the foundation for potential industry-wide adoption of opt-out mechanisms.
Responding to the increasing importance of mobile and web-based creation, Adobe announced its Firefly mobile app, with preorders and preregistrations now available. The company is specifically tailoring its strategy to younger users, particularly Gen Z, offering accessible free and low-cost premium options with workflows designed for social media content creation.
Deepa Subramaniam, Adobe's vice president of Creative Cloud product marketing, explained the company's targeted approach, saying it has been actively engaging with influencers and marketing within Gen Z communities to promote Photoshop and drive adoption. She noted this demographic's positive reception and high usage rates. 'It's really exciting to bring that audience into our ecosystem because they're the future,' Subramaniam said.
She added that the product and pricing models for mobile users represent a significant shift from Adobe's typical desktop-centric approach. With substantial functionality in the free version and a low monthly cost for the paid app, she emphasized that '[this model] is just totally different than in the desktop ecosystem.'
Having followed Adobe for years, I appreciate the significant shift in its approach. This change is a welcome development as, historically, Adobe has concentrated on its desktop tools, which aligned with where most of its customers were. I believe that the company's recent expansion into more flexible and accessible mobile offerings indicates thoughtful adaptation to the evolving needs of today's rising creative community.
This transformation naturally requires a shift in strategy. For Adobe's go-to-market team, this includes collaborating with influencers, gathering feedback from younger users during pre-release testing and developing features that align with Gen Z's creative pursuits, such as fan art, music visuals and social media graphics. These emerging creators prioritize mobile platforms and seek powerful yet user-friendly tools, and Adobe is making product changes to suit them.
By simplifying advanced tools and making them mobile- and web-friendly, Adobe is looking to address the ease-of-use gap between itself and Canva, which has long excelled with Gen Z and non-professional users. Adobe's partnerships with influencers and targeted marketing within Gen Z communities should also help it build cultural relevance and brand loyalty among the next wave of creators.
Adobe's announcements at MAX London offered more than a glimpse into how the company imagines the future of creative work. The deep integration of generative AI across platforms, enhanced collaborative features and a clear emphasis on accessibility could pave the way for new forms of expression for multiple generations.
The focus on empowering the next generation through mobile-first tools and tailored workflows is particularly noteworthy. I am eager to see how these emerging talents embrace and leverage these capabilities. It will be equally interesting to observe how Adobe continues to adapt and evolve its offerings to meet the needs and innovative approaches of this next wave of creatives.


Related Articles

If you don't know about these video tools, you're already behind

Fox News

25 minutes ago



I've said it before, and I'll say it again: AI is changing everything. And now, it's coming for video in a huge way. We're not just talking about deepfakes or filters anymore. This is next-level, movie-magic stuff.

Let's talk about the wild part first. You don't need any editing software. You don't need a green screen. Heck, you don't even need a camera. Your keyboard is now the director, the producer, the editor and your creative partner. All you have to do is type what you want to see, and boom, AI turns your words into high-def, studio-quality video clips. It's like having a Hollywood studio tucked inside your laptop.

Two of the biggest names in this space right now are Sora from OpenAI and Veo from Google. And yes, you can actually try them both right now. I've tested them side by side, and you can watch my results at the end to see which one came out on top.

Sora is built right into ChatGPT and is shockingly good. Type something simple like, "a golden retriever puppy playing on the beach with a tennis ball," and within seconds, Sora delivers a smooth, visually rich video clip. No stock footage. No need to mess with editing timelines or export settings. Just write what you want to see. Here's the breakdown:

And here's a hot tip: Microsoft just announced that Sora is being integrated into their new Bing Video Creator, so you'll start seeing it pop up in more places soon.

Now over to Google's Veo, and wow, this one is aiming for cinematic quality. Veo gives you up to 60-second videos in 1080p, and the realism is something else. You can include camera moves, lighting shifts and smooth transitions, just by typing a sentence. For example: "A cat runs across a Paris rooftop at sunset." And it delivers exactly that, complete with lens flares and realistic shadows. To get your hands on Veo, you'll need to sign up for Google's Gemini plans:

I wanted to see how each platform handled the same prompt, so I asked both to generate a video based on: "Show me a golden retriever puppy playing on the beach with a tennis ball." Now you can watch the videos side by side. My AI puppy Bella stars in both (Veo's on the left, Sora's on the right).

My thoughts? Sora nailed the adorable close-up details: Bella's floppy ears, the texture of the fur, the light hitting the waves. But it fumbled a bit when it came to physics; the ball's bounce didn't feel quite right. On the other hand, Veo did a better job with the action. The puppy's movements were more lifelike, but it wasn't as sharp on the visual details.

We are entering a new age of video creation where your ideas are enough. No gear, no crew, no edits. Just imagination and a little typing. Whether you're a marketer, content creator, educator, or just someone who likes to experiment, this is the future of storytelling. Trust me, it's just getting started.

Award-winning host Kim Komando is your secret weapon for navigating tech. Copyright 2025, WestStar Multimedia Entertainment.

What Is Agentic AI? Everything to Know About Artificial Intelligence Agents

CNET

41 minutes ago



You've probably heard a lot about ChatGPT, Google's Gemini, image generators and AI writing tools. But there's a new term making the rounds: agentic AI. And while it might sound like another buzzword, it's not a new invention. Recent advances, however, have made it far easier to build, deploy and interact with these kinds of systems. Some of them you might have already seen at work, like customer service banking bots, self-driving cars and smart home assistants.

If you're using Perplexity in the US as a Pro subscriber, a perfect example is its "Buy with Pro" feature. Rather than assisting with your shopping and handing you off to a retailer, it collects your preferences, processes the transaction (sometimes even selecting the best available retailer) and uses your stored payment and shipping information to complete the order.

Experts say it's time to start paying attention to what these AI agents are capable of doing on their own, though it will take time before they see widespread use across industries and become mainstream. Unlike AI chatbots, which often require explicit instructions at each step, AI agents can break down complex objectives into smaller, manageable actions. So instead of simply responding to your questions or prompts, agentic AI is designed to take initiative. That means understanding its environment, making decisions and acting without human direction at every step.

So what does that look like in practice, and how is it different from what artificial intelligence is already doing? I spoke to several experts and will break down everything you need to know about agentic AI -- including whether it can be trusted.

From passive tools to proactive agents

Self-driving cars like the Zoox robotaxi are examples of agentic AI. (Image: Zoox)

Agentic AI systems aren't passive tools waiting for input. They operate in a cycle to sense the environment, decide what to do, and then act. That structure makes them more autonomous and lets them take on complex, goal-oriented tasks across multiple systems.

"Agentic AI…is now making this sort of sense-decide-act loop available to everybody," Peter Stone, professor at the University of Texas and chief scientist at Sony AI America, told me. "Rather than waiting for the input-output behavior, you're able to task a program with sensing the world, deciding what to do and actually acting."
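To make the sense-decide-act loop Stone describes a bit more concrete, here is a minimal Python sketch of that cycle. It is an illustrative toy rather than any vendor's actual implementation; the environment, the goal and the helper functions (sense, decide, act, run_agent) are hypothetical stand-ins for the sensors, models and tools a real agent would connect to.

```python
# Toy illustration of an agentic "sense-decide-act" loop (hypothetical example).
# A real agent would replace these stubs with model calls, APIs or sensors.

def sense(environment: dict) -> dict:
    """Observe the current state of the (simulated) environment."""
    return {"inbox": len(environment["tasks"]), "done": len(environment["completed"])}

def decide(observation: dict) -> str:
    """Pick the next action toward the goal instead of waiting for instructions."""
    return "process_task" if observation["inbox"] > 0 else "stop"

def act(action: str, environment: dict) -> None:
    """Carry out the chosen action, changing the environment."""
    if action == "process_task":
        environment["completed"].append(environment["tasks"].pop(0))

def run_agent(environment: dict, max_steps: int = 10) -> None:
    """Loop until the goal (empty task queue) is reached or the step budget runs out."""
    for _ in range(max_steps):
        observation = sense(environment)
        action = decide(observation)
        if action == "stop":
            break
        act(action, environment)

env = {"tasks": ["verify documents", "assess risk", "draft response"], "completed": []}
run_agent(env)
print(env["completed"])  # ['verify documents', 'assess risk', 'draft response']
```

The point is the structure (observe, choose, act, repeat) rather than the trivial task: swap the stubs for a language model, a document parser or a booking API and the same loop becomes the kind of agent described in this article.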
Ankur Patel, CEO and founder of enterprise agentic AI company Multimodal, called it "a fundamental shift from reactive tools to proactive systems capable of complex decision-making." He gave an example of a loan underwriter who might otherwise spend hours cross-referencing pay stubs, tax returns and credit reports. "The AI agent automatically ingests and validates hundreds of data points from diverse sources. Think bank feeds, HR systems and government databases, while flagging inconsistencies like mismatched employment dates," Patel told me.

In other words, it's not mere automation. "Agentic AI connects complex, multisource inputs with internal rules or manuals, and gives accurate, critical outputs in much shorter time frames," Patel explained.

What makes agentic AI different from generative AI and chatbots?

Generative AI creates content such as text, images, music and even videos, based on what it learned during training and your prompt. Agentic AI can use those same models, but adds a layer of autonomy with reasoning and planning to proactively achieve goals through a sequence of actions. A generative AI tool might write you a vacation itinerary. AI agents could plan the trip, book your flights, reserve the hotel and even rebook everything if your flight gets delayed or canceled.

Large language models, like ChatGPT or Claude, can become agentic when connected to external tools, sensors or APIs. This ability to interact with the world (either physical or digital) is what makes the difference. While systems like ChatGPT and Siri are designed to answer questions, agentic AI is built to solve problems. "Chatbots answer questions. Agentic AI solves problems by turning insights into outcomes," Patel said. That means orchestrating tasks across platforms. "For example, it can verify documents, assess risks and even trigger real-world actions like loan approvals or insurance payouts."

Automation vs. augmentation

Like most new tech, agentic AI raises concerns about jobs. Will it replace workers, or help them do their jobs better? Stone said the answer isn't simple. "Usually, when people say automation, they're thinking of replacing jobs. When people say augmentation, they're thinking of changing jobs, making them more efficient," Stone said. He compared it to the transition from hand-washing dishes in a restaurant to using a dishwasher -- there's still a human in the loop, but they're doing less of the repetitive labor. Another relatable example is correspondence. While writing letters by hand and sending them via snail mail might trigger nostalgia in romantic folks like me, we now send messages and emails instantly from smartphones.

Patel agreed that agentic systems free people up from the grunt work. "It's unfortunate that a lot of man hours even today are spent on drudgery," he said. "Good AI can take care of them without needing much supervision." For Patel, the bigger risk is falling behind. "The question isn't 'will AI take my job?' It's 'will I be working alongside AI or getting outpaced by those who do?'" While that might sound daunting to anyone hesitant about the shift, AI is advancing fast enough that it feels inevitable.

Where you might see agentic AI in action

Google's new AI Mode is agentic AI in action. (Image: Google/Screenshot by CNET)

Enterprise software vendors are already rolling out agentic systems in industries like:

Robotics: Autonomous robots that can navigate and perform tasks in complex environments.
Software development: AI agents that can write and debug code independently.
Customer service: Advanced chatbots that can resolve complex issues without human assistance.
Supply chain: AI agents that manage inventory, forecast demand and optimize delivery routes.
Manufacturing: Systems that monitor equipment, detect defects and streamline production lines.
Cybersecurity: Agents that detect threats, isolate risks and respond in real time.
Insurance: AI agents that process claims, assess risk and draft responses.
Banking: Systems that verify income and credit and approve loans.
Healthcare: Tools that flag anomalies, suggest diagnoses and draft notes.

AI agents in these industries process documents, extract data, flag inconsistencies and route information with minimal human intervention. But you don't have to work in any of these sectors to notice it. Opera's agentic web browser and Google's new agentic search, called AI Mode, aim to help you go from inspiration to purchase without clicking through pages of results. AI assistants that can book your travel, manage your inbox or compare online deals are all signs of what's coming in the consumer sector as well.
Even Microsoft is adding an AI agent to Windows that can change system settings for you. Patel says everyday users should care for three reasons: "First, it gives people their most precious resource back -- time. Second, it vastly improves customer experience. Third, it prevents costly human errors."

That said, there are still limitations. AI agents struggle in open-ended or unpredictable environments, especially when tasks lack clear structure or context. They also depend heavily on well-formed prompts or goals, meaning vague input can lead to irrelevant or faulty actions.

Can it be trusted?

Autonomy brings benefits, but also risks. When systems make decisions or take action without supervision, what happens if something goes wrong? And who is responsible? Is it the person using the AI, or the company/developers that built it? Legal dilemmas continuously expand with these AI advancements.

Stone also warns that the risks aren't hypothetical. "The new type of risk… is not a person acting incorrectly or irresponsibly as a result of what the AI advises, but rather the AI system actually taking a dangerous action," he told me. Say you let an autonomous car drive itself: it can do more than just suggest a route, and it can cause harm if it malfunctions or has you drive in circles in a parking lot like this unfortunate passenger.

The stakes depend on what the AI is allowed to do. Booking a low-cost flight? Low risk. Accessing medical records or spending thousands of dollars? Much higher. "The risk is directly related to the space of actions and the agency or autonomy that you give to the agent," Stone emphasized. Patel pointed out that safeguards are essential, especially in regulated industries. "To truly trust AI, it needs to have detailed audit trails and decision logs, process documentation, confidence scoring, and the ability to route decisions to humans where absolutely necessary," he said.

What's next?

While the hype around agentic AI is rising fast, don't expect to hand over your entire life to AI agents anytime soon. It will take years for most agentic AI systems to be tailored to specific industries or problems, not one-size-fits-all assistants. "There's a real chance that by demanding perfection from autonomous agents, we're missing an opportunity to do a lot better than the status quo," Stone said. "I think we need to accept that there are going to be mistakes. But they're going to get better over time."

And so, the direction is clear. We're moving from AI that chats with us to AI that does things for us. Add robotics to the mix, and it's a whole new ballgame.

FAQs

What is agentic AI?

Agentic AI is artificial intelligence that can independently make decisions and take actions to achieve a goal. Instead of waiting for step-by-step commands, AI agents decide what needs to be done and take action across systems with minimal human involvement.

What are the key characteristics of agentic AI?

Custom dark mode scheduling is reliable again in the latest Android 16 beta

Android Authority

41 minutes ago



(Image: Paul Jones / Android Authority)

TL;DR: A bug in Android 16 QPR1 beta 1 caused custom dark mode scheduling to stop working if you manually activated the mode via Quick Settings. The latest version of the beta has now fixed this issue.

Although dark mode is great, there may be times of the day when you want to turn it off. Thankfully, Android gives you the option to set custom times of the day when you want the mode to turn on. When Android 16 QPR1 beta 1 rolled out, a bug made this feature stop working. Now it looks like the bug has been fixed with the latest update.

Google rolled out Android 16 QPR1 beta 1 last month. As with any beta, there's an increased chance of running into bugs. One of the issues discovered in this release affected dark mode: if you set a custom time for dark mode to kick in, going into Quick Settings to activate dark mode manually would cause the custom schedule to break.

Yesterday, Google released Android 16 QPR1 beta 1.1, which includes a long list of fixes. According to one report, this update has solved the problem. You can now manually activate dark mode in Quick Settings without it disrupting the custom schedule you set.

In addition to fixing this bug, the update also addresses a few other known issues. For example, crashes no longer happen when trying to open effects in wallpaper, and the previously missing approve button in Device Admin settings has been restored. Here's the full changelog for you to check out.

Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info, it's your choice.
