
Latest news with #AdobeFirefly

Adobe launches collaborative AI film project to empower creators

Techday NZ

01-08-2025

  • Entertainment


Adobe has announced The Unfinished Creator Film, a collaborative project encouraging creators from around the world to contribute to a single, evolving short film using generative artificial intelligence tools via the Adobe Firefly platform. The Unfinished Creator Film gives filmmakers, designers and storytellers access to Firefly's AI capabilities, enabling them to work together iteratively and express their unique artistic visions. Adobe collaborated on this initiative with multiple creatives, including AI artist and director Sam Finn, to explore possibilities for community-driven storytelling in a digital landscape.

Collaboration at its core

The project is described as "a remixable film started by one and completed by many", highlighting its open, collaborative model. Adobe highlights Firefly's role in supporting creative freedom, positioning it as an environment where participants across disciplines - including animators, sound designers and those new to filmmaking - can contribute their own interpretations to an unfinished film sequence.

"It used to take years of learning - and a lot of money - to make a film. Now anyone with a vision can start. I'm excited to see what creators from non-traditional backgrounds will make when those barriers fall away. That's what The Unfinished Film is about: opening the door for new voices and unexpected perspectives." - Sam Finn, director of The Unfinished Film project

The ethos of the initiative is to "turn unfinished into unforgettable" by allowing each participant to remix or reshape the original film using a set of digital tools. The process leverages Firefly's image and video generation technology, familiar to experienced users of Adobe products such as Photoshop, Premiere and Lightroom, while extending its use to a broader creative community.

AI as collaborator, not creator

Adobe's message underlines the use of AI as a tool to enhance - not replace - human creativity. Participants can use Firefly for ideation and mood boarding, as well as for producing final audio, video or effects, with each AI-generated asset carrying direct attribution via Content Credentials, supporting transparency and control for the original creators. Sam Finn describes the transformation AI has brought to the creative process:

"Before AI, creativity felt like a tradeoff - your imagination was shaped by time, tools, and complexity. With Firefly and generative tools, I can now create exactly what I envision, almost as quickly as I think it. For the first time, filmmaking feels like a stream-of-consciousness art form." - Sam Finn, director of The Unfinished Film project

Adobe's campaign invites participants to download the starter sequence created by Finn and to apply Firefly's suite of features to put their own mark on the story, remixing and sharing their versions within the larger creative community. Contributions are then highlighted alongside tips and behind-the-scenes content from the original artists on Adobe's digital channels.

Contributors' perspectives

To demonstrate the range of outcomes achievable with this approach, Adobe has partnered with four creators: VFX artist Phil Cohen, creative director Noémie Pino, cinematographer Keenan Lam, and designer Jad Kassis. Each used Firefly to tailor Finn's original sequence to their own aesthetic and storytelling preferences.

Noémie Pino combined claymation techniques with digital compositing using Firefly, Photoshop and After Effects. "I've always loved the tangible feel of clay, but the process is incredibly time-consuming - I never imagined I'd be able to bring it to life like this. With Firefly, Photoshop and After Effects I could blend the handcrafted with the digital and turn seconds of motion into something that feels real and emotionally textured." - Noémie Pino

Phil Cohen, a creative director known for integrating fashion and visual effects, commented on the accessibility offered by AI tools: "We're entering a new era where imagination doesn't have to wait on budget or big teams. With tools like Firefly, designers can direct, photographers can animate - creativity becomes immediate and accessible. This project is a glimpse of what's next." - Phil Cohen

Jad Kassis focused on how AI-driven workflows accelerated the pace of idea execution: "Firefly changed how I use time. What once took days now takes minutes - so I can chase ideas while they're still alive, still emotional. I'm not just designing frames; I'm directing fleeting moments before they disappear." - Jad Kassis

Keenan Lam, both a creator and resource developer for digital tools, explored Firefly from the perspective of a cinematographer: "As someone who's spent years behind the camera, I've always been a perfectionist - so the idea of giving up control to AI was intimidating at first. But this project challenged me in the best way. I realised generative tools like Firefly can extend my creativity, not replace it. They unlock shots and transitions I couldn't easily achieve on my own." - Keenan Lam

The Unfinished Creator Film is positioned as an ongoing, collaborative project designed to showcase and test the boundaries of creative engagement between human creators and AI technology within a globally accessible digital framework.

Jim Cramer on Adobe: 'Might Get This Thing at a Cheaper Price'

Yahoo

25-07-2025

  • Business


Adobe Inc. (NASDAQ:ADBE) is one of the stocks that Jim Cramer shared thoughts on. A caller asked if Cramer thinks the stock is a buy, sell, or hold, and he replied: 'Boy… I gotta tell you, this is a tough one. One, it keeps going down, but two, Figma is about to come, and you know a lot of people have switched from Adobe to Figma because it's much cheaper. Let's let the Figma deal come and then see what happens. Might get this thing at a cheaper price.'

Adobe (NASDAQ:ADBE) develops software and cloud-based platforms for content creation, digital experiences, and document management. The company provides subscription-based tools, analytics solutions, and services for publishing, advertising, and customer engagement.

During an April episode, Cramer called it 'a great company.' He said: 'Adobe, what a great company. Its stock is down almost 35% from its high set last year […] Adobe has come up with a few AI tools of its own headlined by Adobe Firefly – it's a Lamborghini, wow! It's a really impressive technology. But the problem is OpenAI can also do these things too. So is Adobe being hurt or helped by AI? It's really hard to say. […] I'm not sure I'd stick my neck out for Adobe with its generative AI threats.'

Disclosure: None. This article was originally published at Insider Monkey.

4 AI Prompts That Will Transform How You Create Presentations Forever

Forbes

25-07-2025

  • Business


Generative AI tools, such as ChatGPT and specialized platforms, are revolutionizing presentation creation by automating routine formatting tasks, allowing professionals to focus on creativity and communication.

The need to occasionally create presentations, pitch decks and slides is a necessary, but often laborious and repetitive, fact of life. Fortunately for those who would rather spend their time flexing their creativity and communication skills than formatting slides to fit brand guidelines (or other repetitive tasks), it's an area where generative AI shines. Text, image generation and multi-modal tools like ChatGPT and Gemini are now great at handling the mundane and routine elements of this work. And a wave of specialized platforms like Canva, Adobe Firefly, and Pitch have packaged generative AI into products that are immediately useful to anyone familiar with workplace productivity tools. But AI is no different from traditional computing in one important way: good output relies on the machine receiving good input from the user. So here are some tips, as well as some sample prompts, for anyone wanting to add this capability to their AI toolbox.

Which Tools?

Before we get started, it's worth noting that these tips aren't for any specific AI tool or application. They should be useful whether you're using chat-based bots like ChatGPT or a more specific genAI design platform like Canva. The idea is to give an overview of how genAI can be useful, which can be applied regardless of your choice of tool.

Tips For Successful AI Presentations And Decks

  • Start From The End: Begin with a clear picture of what you want to achieve, which ultimately means the key messages you want your audience to take from your deck. If you aren't sure what they should be, you can ask AI to help you work them out.
  • Personalize Everything: Giving AI in-depth information about your audience - job titles, seniority, areas of expertise, etc. - lets it create content tailored to them, without wasting their time on information that's only relevant to other people.
  • Define The Structure: When presenting big ideas, grab attention as quickly as possible by asking AI to help you structure the contents according to the 'reverse pyramid' principle. This front-loads the slides with your most important and exciting revelations, so your audience sees them while their attention is piqued at the start.
  • Set The Rules: You can give the AI tool clear directions on branding, style, color schemes and design instructions like 'clean, minimalist look' or 'loud, attention-grabbing colors'.
  • Telling Time: Not sure how many slides or how much information you need? Tell the AI how long the presentation should take to view, and it can work out these details for you.
  • Give Examples: Providing your AI tool with examples of competitor decks or previous presentations that have worked well lets it understand what you're trying to achieve.

Example Prompts For Better Presentations And Slide Decks

This prompt creates a storyboard that can be tweaked and fine-tuned alongside a human designer to create the perfect deck:

"Please act as a business strategist and expert pitch-deck builder. Create a storyboard for a 12-page pitch deck targeted at persuading investors to back us. Ask me questions to gather the information you need, one at a time, then provide a storyboard for a deck presenting the information in the most engaging and persuasive way."

This one creates visualizations from raw data that can be quickly dropped into slides and decks:

"Act as an expert data analyst and communicator. Ask me for the raw data that you want analyzed, and for whom the insights are intended. Then pick the best methods, charts and visualizations to communicate the key or most relevant findings. Present one finding per slide, giving a headline insight, a visualization of the key data points relating to that insight, and an explanation, in straightforward, clear language, of the insight's importance and any actions it suggests."

This one creates notes that help you tailor your commentary to specific audiences:

"Ask me to provide a slide, deck or presentation, then ask me who my intended audience is. Draft concise speaker notes, of a maximum length of 40 words, to accompany each slide. Each note should include a headline covering the single most important point the slide should land, bullet points highlighting the other major points, and a transition cue to the next slide."

Use this one with a chatbot with image generation to create templates that give your slides a consistent, on-brand look:

"Act as a branding consultant. Ask me for our existing brand guidelines document, style guides, and the deck that needs branding. Apply the guidelines to create a new version of the deck that's in line with our corporate branding and style."

Supercharge Your Deck Drafting And Design

Remember, using AI well shouldn't mean using it as a replacement for your human creative skills. Instead, use it to become more efficient by bolstering your creativity and overcoming the sense of indecision or overwhelming choice we feel when staring at a blank document. It doesn't take much practice to start using AI to build decks and presentations in more effective and efficient ways. And as it's something that we all have to do from time to time, it's a great opportunity to add a new AI skill to your toolbox.
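The tips above all amount to front-loading specifics into the prompt. As an illustration, here is a minimal, hypothetical Python sketch (not tied to any AI product or API) that assembles a deck prompt from structured inputs, baking in audience details, the 'reverse pyramid' structure, style rules and a target duration:

```python
# Hypothetical helper: builds a presentation prompt from structured inputs,
# following the tips above (personalize, define structure, set rules, tell time).

def build_deck_prompt(topic, audience, minutes, style, key_message):
    """Combine the article's tips into a single, specific prompt string."""
    slides = max(5, minutes * 2)  # assumed rule of thumb: ~2 slides per minute
    return (
        f"Act as an expert pitch-deck builder. Create a ~{slides}-slide "
        f"deck on '{topic}' for {audience}. "
        f"Structure it as a reverse pyramid: lead with the key message "
        f"('{key_message}'), then the supporting detail. "
        f"Visual style: {style}. "
        f"It should take about {minutes} minutes to present."
    )

prompt = build_deck_prompt(
    topic="Q3 product roadmap",
    audience="senior engineering managers",
    minutes=10,
    style="clean, minimalist look",
    key_message="we ship the new editor in October",
)
print(prompt)
```

The slide-count heuristic and all parameter names here are illustrative assumptions; the point is that a templated prompt like this can be pasted into any of the chatbots or design platforms mentioned above.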

Adobe Firefly can now generate sound effects from your audio cues

Yahoo

18-07-2025



Since rolling out the redesign of its Firefly app in April, Adobe has been releasing major updates for the generative AI hub at a near monthly clip. Today, the company is introducing a handful of new features to assist those who use Firefly's video capabilities.

To start, Adobe is making it easier to add sound effects to AI-generated clips. Right now, the majority of video models create footage without any accompanying audio. Adobe is addressing this with a nifty little feature that allows users to first describe the sound effect they want to generate and then record themselves making it. The second part isn't so Adobe's model can mimic the sound. Rather, it's so the system can get a better idea of the intensity and timing the user wants from the effect. In the demo Adobe showed me, one of the company's employees used the feature to add the sound of a zipper being unzipped. They made a "zzzztttt" sound, which Adobe's model faithfully used to reproduce the effect at the intended volume. The translation was less convincing when the employee used the tool to add the sound of footsteps on concrete, though if you're using the feature for ideation as Adobe intended, that may not matter. When adding sound effects, there's a timeline editor along the bottom of the interface to make it easy to time the audio properly.

The other new features Adobe is adding today are called Composition Reference, Keyframe Cropping and Video Presets. The first of those allows you to upload a video or image you captured to guide the generation process. In combination with Video Presets, you can define the style of the final output. Some of the options Adobe is offering at launch allow you to create clips with anime, black and white or vector art styles. Lastly, with Keyframe Cropping you can upload the first and final frame of a video and select an aspect ratio. Firefly will then generate a video that stays within your desired format.
In June, Adobe added support for additional third-party models, and this month it's doing the same. Most notable is the inclusion of Veo 3, which Google premiered at its I/O 2025 conference in May. At the moment, Veo 3 is one of the only AI models that can generate video with sound. Like with all the other partner models Adobe offers in Firefly, Google has agreed not to use data from Adobe users for training future models. Every image and video people create through Firefly is digitally signed with the model that was used to create it. That is one of the safeguards Adobe includes so that Firefly customers don't accidentally ship an asset that infringes on copyrighted material. According to Zeke Koch, vice president of product management for Adobe Firefly, users can expect the fast pace of updates to continue. "We're relentlessly shipping stuff almost as quickly as we can," he said. Koch adds Adobe will continue to integrate more third-party models, as long as their providers agree to the company's data privacy terms.
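The aspect-ratio constraint behind a feature like Keyframe Cropping is simple geometry: fit the largest region with the target ratio inside an existing frame. This sketch shows only that math; it is not Adobe's implementation, and the function name and rounding choice are assumptions for illustration:

```python
# Illustrative geometry only: largest centered crop of a given aspect ratio
# that fits inside a frame. Not Adobe's implementation.

def center_crop_box(width, height, target_w, target_h):
    """Return (x, y, w, h) of the largest centered crop with aspect ratio
    target_w:target_h that fits inside a width x height frame."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:
        # Frame is wider than the target: keep full height, trim the sides.
        crop_h = height
        crop_w = round(height * target_ratio)
    else:
        # Frame is taller than (or equal to) the target: keep full width.
        crop_w = width
        crop_h = round(width / target_ratio)
    x = (width - crop_w) // 2
    y = (height - crop_h) // 2
    return x, y, crop_w, crop_h

# A 1920x1080 (16:9) frame cropped to 9:16 vertical video:
print(center_crop_box(1920, 1080, 9, 16))  # keeps full height, trims the sides
```

The same calculation works in either direction (landscape to portrait or vice versa), which is why a single first-and-final-frame pair plus a ratio is enough to pin down the output format.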

