
Adobe debuts all-in-one Firefly AI app with access to models by OpenAI, Google Cloud
Adobe has announced more than a hundred upgrades to the creative software tools available through its Creative Cloud platform, including AI features powered by its flagship Firefly technology that let users instantly generate visuals and graphics, animate still images, convert long video footage into shorter clips, and more.
The design software giant also introduced a new Firefly AI app at its annual creativity conference Adobe Max London on Thursday, April 24. The app is designed to be an all-in-one platform for AI-assisted content ideation, creation, and production, the company said.
Through the app, users will not only be able to access Adobe's in-house AI models (Firefly Image Model 4 and Firefly Video Model) but also third-party AI models such as Gemini and ChatGPT developed by its partners Google and OpenAI, respectively.
AI models from other developers, including fal.ai, Ideogram, Luma, Pika, and Runway, will be available in the coming months, according to the company.
Adobe also said it is rolling out a new feature called Firefly Boards that helps users create mood boards, ideate, and explore concepts with the help of AI. The feature will be available in beta through the Firefly app.
'Adobe is laser focused on empowering creative professionals with the best tools to do their best work, which means bringing them more speed, precision, control, flexibility and, of course, amazing creative superpowers,' said Deepa Subramaniam, Vice President, Product Marketing, Creative Cloud at Adobe.
'Today, we're bringing creative professionals major advancements in app performance, highly requested productivity features and all-new AI features powered by Firefly to give creators everything they need to bring their creativity to the world,' she said.
Related Articles


Time of India · 23 minutes ago
OpenAI to appeal in NYT copyright case, CEO Sam Altman says ‘AI should be like talking to a…'
ChatGPT-maker OpenAI said it is appealing a court order, issued in the copyright case filed by The New York Times, that requires the company to keep all ChatGPT output data indefinitely. The company argues that the demand goes against its promise to protect users' privacy.

CEO Sam Altman shared a post on microblogging platform X (formerly Twitter), writing: 'we will fight any demand that compromises our users' privacy; this is a core principle'. The post comes after a court directed the company to preserve and separate all user-generated data, following a request from The New York Times last month. The data is part of the ongoing legal case over the use of copyrighted content.

OpenAI to appeal against the court's decision

In a series of posts, Altman wrote: 'Recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision. we will fight any demand that compromises our users' privacy; this is a core principle.'

He further stated: 'we have been thinking recently about the need for something like "AI privilege"; this really accelerates the need to have the conversation. imo talking to an AI should be like talking to a lawyer or a doctor. i hope society will figure this out soon.'

The company has also shared a blog post in which Brad Lightcap, COO of OpenAI, said The New York Times and other plaintiffs have made a 'sweeping and unnecessary demand' in what he called a 'baseless lawsuit'. He said the demand not to delete users' chats 'fundamentally conflicts with the privacy commitments' the company has made and 'abandons long-standing privacy norms and weakens privacy protections.'

'We strongly believe this is an overreach by the New York Times. We're continuing to appeal this order so we can keep putting your trust and privacy first,' he added.


Mint · 36 minutes ago
A year ago, everyone said Google lost the AI race. Sundar Pichai now responds: 'We were building...'
Google CEO Sundar Pichai has talked about how he handled a crisis of sorts at the company when critics started calling for his head as the tech giant lagged behind OpenAI and other rivals in the AI race. In the past year, however, Google has come back strongly with the rollout of its Gemini 2.5 models, new AI features across different apps, and now an AI chat mode coming to Google Search.

In a recent podcast with YouTuber Lex Fridman, Pichai was asked how he dealt with the outside chatter calling for his resignation because Google was supposedly losing the AI race. The Google CEO responded: 'Look, lots to unpack. Obviously, the main bet I made as CEO was to make sure the company was approaching everything in an AI first way, really setting ourselves up to develop AGI responsibly and make sure we're putting out products which are very useful for people. Even through moments like that last year, I had a good sense of what we were building internally. I had already made many important decisions, bringing together teams of the caliber of Brain and DeepMind and setting up Google DeepMind.'

'Anytime you're in a situation like that, a few aspects help. I'm good at tuning out noise, separating signal from noise… Sometimes you jump in the ocean and it's so choppy, but you go one foot under, it's the calmest thing in the entire universe. There's a version of that. Running Google, you may as well be coaching Barcelona or Real Madrid. You have a bad season. So there are aspects to that. But I'm good at tuning out the noise. I do watch out for signals. It's important to separate the signal from the noise. There are good people sometimes making good points outside. You want to listen to it. You want to take that feedback in.'

'To me, this moment felt like one of the biggest opportunities ahead for us as a company. The opportunity space over the next decade, next 20 years, is bigger than what has happened in the past. And I thought we were set up better than most companies in the world to go realise that vision,' Pichai further noted.


The Hindu · 44 minutes ago
Google AI Mode can make interactive visualisations of financial data
Google has started rolling out interactive chart visualisations in AI Mode in Labs so users can help compare stocks or prices during a specific time period. In a blog posted by the company, Google said that their advanced AI models are able to bring financial data to life drawing from historical and real-time data. 'Ask to 'compare the stock performance of blue chip CPG companies in 2024.' Instead of manually researching individual companies and their stock prices, AI Mode does the heavy lifting for you using Gemini's advanced multi-step reasoning and multimodal capabilities. And ask a follow up like 'did any of these companies pay back dividends?' and AI Mode understands what to research for you,' the blog noted. Users must enable AI Mode in Search Labs to test the feature. A report by 9to5Google also said that they had also restored the Lens and voice input to the Search bar on the Discover feed. Google Labs has also announced they're testing a new feature called 'Portraits' where users can interact 'conversationally with AI representations of trusted experts built in partnership with the experts themselves.' The company has an open call for people in the U.S. who want to partner with them for 'Portraits.'