Adobe rolls out more generative AI features to Illustrator and Photoshop
How to make Adobe Generative Fill and Expand less frustrating
Experimenting with selections, context, and prompts plays a big role in getting a quality result. Keep in mind the size of the area you are generating, and consider working in iterative steps instead of trying to get a perfect result from a single prompt.

Leading enterprises including the Coca-Cola Company, Dick’s Sporting Goods, Major League Baseball, and Marriott International currently use Adobe Experience Platform (AEP) to power their customer experience initiatives. Apparently, you can’t use the new Generative Fill feature until you’ve shared some personally identifying information with the Adobe Behance cloud service. Behance users, by contrast, will already have shared their confidential information with the service and will be able to access the Photoshop Generative Fill AI feature.
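The iterative-steps advice above can be sketched as a simple plan: rather than asking Generative Expand to grow a canvas by a large amount in one pass, break the expansion into smaller increments so each generation builds on the previous result. The helper below is purely illustrative (it is not an Adobe API; the function name and the 500-pixel default step are our own assumptions):

```python
def plan_expand_steps(total_pixels: int, max_step: int = 500) -> list[int]:
    """Split a large canvas expansion into smaller increments.

    Generating a huge area in one pass often yields mismatched results;
    expanding in modest steps lets each generation build on the last.
    """
    steps = []
    remaining = total_pixels
    while remaining > 0:
        step = min(max_step, remaining)
        steps.append(step)
        remaining -= step
    return steps

# A 1,800-pixel expansion becomes four passes: [500, 500, 500, 300]
print(plan_expand_steps(1800))
```

The same idea applies to Generative Fill: fill a large region in several overlapping selections rather than one oversized selection, reviewing the result after each pass.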
And with great power comes responsibility, so Adobe says it wants to be a trusted partner for creators in a way that is respectful and supportive of the creative community. With Adobe Firefly generative AI tools riding shotgun, there are limitless possibilities to boost productivity and creativity. Every content creator, solopreneur, side hustler, and freelance artist has hit roadblocks, whether because of skill level or a lack of time; it happens. When building a team isn’t possible, Adobe Firefly generative AI can help fill those gaps. Additional credits can be purchased through the Creative Cloud app, but only 100 more per month; that costs $4.99 a month if billed monthly, or $49.99 if a full year is paid up-front.
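The quoted prices make the per-credit math easy to check. A quick sketch, using only the figures stated above (100 extra credits per month at $4.99 monthly or $49.99 per year):

```python
# Cost per generative credit for the 100-credit/month add-on pack.
MONTHLY_PRICE = 4.99       # USD, billed monthly
ANNUAL_PRICE = 49.99       # USD, full year paid up-front
CREDITS_PER_MONTH = 100

monthly_cost_per_credit = MONTHLY_PRICE / CREDITS_PER_MONTH
annual_cost_per_credit = ANNUAL_PRICE / (12 * CREDITS_PER_MONTH)
annual_savings = 12 * MONTHLY_PRICE - ANNUAL_PRICE

print(f"${monthly_cost_per_credit:.3f} per credit billed monthly")   # $0.050
print(f"${annual_cost_per_credit:.3f} per credit billed annually")   # $0.042
print(f"${annual_savings:.2f} saved per year by prepaying")          # $9.89
```

In other words, prepaying the year shaves roughly a sixth off the per-credit price.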
The recently launched GPU-accelerated Enhance Speech, AI Audio Category Tagging and Filler Word Detection features allow editors to use AI to intelligently cut and modify video scenes. Adobe maintains that the update to its terms was intended to clarify improvements to its moderation processes. Due to the “explosion” of generative AI, Adobe said it has had to add more human moderation to its content-submission review processes.
Will the stock be an AI winner?
Remove Background is a good choice for those looking to build a composite, where simply removing the background is all that is required. However, some Stock customers don’t want the background simply removed; they require a different one altogether. The update also brings new tools like Generative Shape Fill, so you can add detailed vectors to shapes using just a few descriptive words. Another is the Text to Pattern feature, which enables the creation of customizable, scalable vector patterns. This update integrates AI in a way that supports and amplifies human creativity rather than replacing it.
The partnership also aims to modernize content supply chains using GenAI and Adobe Express to deploy innovative workflows, allowing for a more diverse and collaborative team to handle creative tasks. While the companies are yet to reveal further details about any products they will be releasing together, they did outline the following four cross-company integrations that joint customers will be able to access. These work similarly to Adaptive Presets, but they’ll pop up and disappear depending on what’s identified in your image. If a person is smiling, you’ll see Quick Actions relating to whitening teeth, making eyes pop, or realistic skin smoothing, for example. The new Adaptive Presets use AI to scan your image and suggest presets that best suit its content. While you can edit them to your liking, they’ll adapt to what the AI thinks your image needs most.
Adobe Firefly
Illustrator, Adobe’s vector graphics editor, now includes Objects on Path, a feature that allows users to quickly arrange objects along any path on their artboard. The software also boasts Enhanced Image Trace, which Adobe says improves the conversion of images to vectors. Adobe’s flagship image editing software, Photoshop, received several new features.
Around 90% of consumers report enhanced online shopping experiences thanks to AI. Key areas of improvement include product personalization, service recommendations, and the ability to see virtual images of themselves wearing products, with 91% stating this would boost purchase confidence. Adobe made the announcement at the opening keynote of this year’s MAX conference and plans to add this new Firefly generative AI model to Premiere Pro workflows (more on those later).
Read our digital arts trends 2025 article and our 3D art trends 2025 feature for the latest tech, style and workflow predictions. “For best results when using Gen Remove, make sure you brush the object you’re trying to remove completely, including shadows and reflections. Any leftover fragments, no matter how small, will cause the AI to think it needs to attach a new object to that leftover piece.” The GIP Digital Watch Observatory team, consisting of over 30 digital policy experts from around the world, excels in research and analysis on digital policy issues. The team is backed by the creative prowess of Creative Lab Diplo and the technical expertise of the Diplo tech team.
Adobe has embedded AI technologies into its existing products like Photoshop, Illustrator and Premiere Pro, giving users more reasons to use its software, Durn said. Digital media and marketing software firm Adobe (ADBE) impressed Wall Street analysts with generative AI innovations at the start of its Adobe Max conference on Monday. You can now remove video backgrounds in Express, allowing you to apply the same edits to your content whether you’re using a photo or a video of a cut-out subject. Adobe Express introduced a Dynamic Reflow Text tool, allowing you to easily resize your Express artboards—using the latest generative expand resize tool—and the text will dynamically flow to fit the space you’ve created.
These include Distraction Removal, which uses AI to eliminate unwanted elements from images, and Generative Workspace, a tool for simultaneous ideation and concept development. The company, which produces software such as Photoshop and Illustrator, unveiled over 100 new capabilities for its Creative Cloud platform, many of which leverage artificial intelligence to enhance content creation and editing processes. Adobe, known for its creative and marketing tools, has announced a suite of new features and products at its annual MAX conference in Miami Beach. Set to debut in beta form, the video expansion to the Firefly tool will integrate with Adobe’s flagship video editing software, Premiere Pro. This integration aims to streamline common editorial tasks and expand creative possibilities for video professionals.
The company’s latest Firefly Vector AI model is at the heart of these enhancements, promising to significantly accelerate creative workflows for graphic designers, fashion designers, interior designers and other professional creatives. In a separate Adobe Community post, a professional photographer says they use generative fill “thousands of times per day” to “repair” their images. When Adobe debuted the Firefly-powered Generative Remove tool in Adobe Lightroom and Adobe Camera Raw in May as a beta feature, it worked well much of the time. However, Generative Remove, now officially out of its beta period, has confusingly gotten worse in some situations. Adobe’s Generative Fill and Expand tools can be frustrating, but with the right techniques, they can also be very useful.
That’s a key distinction, as Photoshop’s existing AI-based removal tools require the editor to use a brush or selection tool to highlight the part of the image to remove. In previews, Adobe demonstrated how the tool could be used to remove power lines and people from the background without masking. The third AI-based tool for video that the company announced at the start of Adobe Max is the ability to create a video from a text prompt. While text to video is Adobe’s video variation of creating something from nothing, the company also noted that it can be used to create overlays, animations, text graphics or B-roll to add to existing created-with-a-camera video. It’s based on Generative Fill, but rather than replacing a user-selected portion of an image with AI-generated content, it automatically detects and replaces the background of the image.
Behind the scenes: How Paramount+ used Adobe Firefly generative AI in a social media campaign for the movie IF – the Adobe Blog
Posted: Mon, 09 Dec 2024 08:00:00 GMT
The Generative Shape Fill tool is powered by the latest beta version of Firefly Vector Model which offers extra speed, power and precision. It includes text-to-image and generative fill, video templates, stock music, image and design assets, and quick-action editing tools to help you create content easily on the go. Once you have created content, you can plan, preview, and publish it to TikTok, Instagram, Facebook, and Pinterest without leaving the app. Recognising the growing need for efficient collaboration in creative workflows, Adobe announced the general availability of a new version of Frame.io.
Some of you might leave since you can’t pay the annual fee upfront or afford the monthly increase. We can hardly be bothered, as we need more cash to come up with more and more AI-related gimmicks that photographers like you will hardly ever use. It’s not so much that Adobe’s tools don’t work well, it’s more the manner of how they’re not working well — if we weren’t trying to get work done, some of these results would be really funny. In the case of the Bitcoin thing, it just seems to be replacing the painted pixels with something similar in shape to the detected “object” the user is trying to remove. Last week, I struggled to get any of Adobe’s generative or content-aware tools to extend a background and cover an area for a thumbnail I was working on for our YouTube channel. Prior to last year’s updates, the tasks I asked Photoshop to handle were completed quickly and without issue.
Adobe is listening to feedback and making tweaks, but AI inconsistencies point toward a broader issue. Generative AI is still a nascent technology and, clearly, not one that exclusively improves with time. Sometimes it gets worse, and for those with an AI-reliant workflow, that’s a problem that undercuts the utility of generative AI tools altogether.
Adobe’s new AI tool can edit 10,000 images in one click
The Adobe Firefly Video Model — now available in limited beta at Firefly.Adobe.com — brings generative AI to video, marking the next advancement in video editing. It allows users to create and edit video clips using simple text prompts or images, helping fill in content gaps without having to reshoot, extend or reframe takes. It can also be used to create video clip prototypes as inspiration for future shots. Adobe unveiled its Firefly Video Model last month, previewing a variety of new generative AI video features. Today, the Firefly Video Model has officially launched in public beta and is the first publicly available generative video model designed to be commercially safe.
That covers the main set of controls overlaying the right of your image – but there is a smaller set of controls on the left to explore as well. Back in the set of three controls, the middle option lets you initiate a download of the selected image. As Firefly begins preparing the image for download, a small overlay dialog appears.
There are also Text to Pattern, Style Reference and more workflow enhancements that can seriously speed up tedious design and drawing tasks, enabling designers to dive deeper into their work. Everything from the initial conception of an idea through to final production is getting a helping hand from AI. If you do happen to have a team around you, features like brand kits, co-editing, and commenting will aid in faster, more seamless collaboration.
Adobe is using AI to make the creative process of designing graphics much easier and quicker, leaving users of programs like Illustrator and Photoshop free to spend more time on the creative process. Adobe has some language included that appears to be a holdover from the initial launch of Firefly. For example, the company stipulates that the credit consumption rates above are for what it calls “standard images” that have a resolution of up to 2,000 by 2,000 pixels — the original maximum resolution of Firefly generative AI. Along that same line of thinking, Adobe says that it hasn’t provided any notice about these changes to most users since it’s not enforcing its limits for most plans yet.
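A small sketch of how that “standard image” stipulation reads in practice: the published credit-consumption rates assume both dimensions fall within the original 2,000 x 2,000 pixel ceiling. The function name here is ours, not Adobe’s, and this is only an interpretation of the wording above:

```python
STANDARD_MAX = 2000  # pixels per side, per Adobe's "standard image" wording

def is_standard_image(width: int, height: int) -> bool:
    """True if the image fits within the 2,000 x 2,000 pixel limit that
    the published Generative Credit consumption rates assume."""
    return width <= STANDARD_MAX and height <= STANDARD_MAX

print(is_standard_image(1920, 1080))  # True  (Full HD frame)
print(is_standard_image(4096, 2160))  # False (4K frame exceeds the limit)
```

How larger-than-standard generations are metered isn’t spelled out in the language Adobe carried over, which is part of why it reads like a holdover.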
To date, Firefly has been used by numerous Adobe enterprise customers to optimize workflows and scale content creation, including PepsiCo/Gatorade, IBM, Mattel, and more. This concern stems from the idea that eventually, AI-generated content will make up a large portion of training data, and the results will be AI slop — wonky, erroneous or unusable images. The self-perpetuating cycle would eventually render the tools useless, and the quality of the results would be degraded. It’s especially worrisome for artists who feel their unique styles are already being co-opted by generators, resulting in ongoing lawsuits over copyright infringement concerns.
- The samples shared in the announcement show a pretty powerful model, capable of understanding the context and providing coherent generations.
- IBM is experimenting with Adobe Firefly to optimize workflows across its marketing and consulting teams, focusing on developing reliable AI-powered creative and design outputs.
- Adobe has also improved its existing Firefly Image 3 Model, claiming it can now generate images four times faster than previous versions.
- It also emerged that Canon, Nikon and Leica will support its Camera to Cloud (C2C) feature, which allows for direct uploads of photos and videos to Frame.io.
But as the Lenovo example shows, there’s a lot of careful groundwork required to safely harness the potential of this new technology. If you look at the amount of content that we need to achieve end-to-end personalization, it’s pretty astronomical. To give you an example, we just launched a campaign for four products across eight marketing channels, four languages, and three variations. Speeding up content delivery in this way means that teams are then able to adjust and fine-tune the experience in real-time as trends or needs change.
However, at the moment, these latest generative AI tools, many of which were speeding up workflows in recent months, are now slowing them down thanks to strange, mismatched, and sometimes baffling results. “The generative fill was almost perfect in the previous version of Photoshop to complete this task. Since I updated to the newest version (26.0.0), I get very absurd results,” the user explains. Since the update, generative fill has added objects to a person, including a rabbit and letters on a person’s face. Illustrator and Photoshop have received GenAI tools with the goal of improving the user experience and giving users more freedom to express their creativity and skills. Our commitment to evolving our assessment approach as technology advances is what helps Adobe balance innovation with ethical responsibility.
Media Intelligence automatically recognises clip content, including people, objects, locations, camera angles, camera type and more. This allows editors to simply type the clip type needed into the new Search Panel, which displays interactive visual results, transcripts, and other metadata results from across an entire project.
An Adobe representative says that today, it does have in-app notifications in Adobe Express — an app where credits are enforced. Once Adobe does enforce Generative Credits in Photoshop and Lightroom, the company says users can absolutely expect an in-app notification to that effect. As part of the original story below, PetaPixel also added a line stating that in-app notifications are being used in Adobe Express to let users know about Generative Credit use. Looking ahead, Adobe forecast fiscal fourth-quarter revenue of between $5.5 billion and $5.55 billion, representing growth of between 9% and 10%.
In addition, Adobe is adding a neat feature to the Remove tool, which lets you delete people and objects from an image with ease, like Google’s Magic Eraser. With Distraction Removal, you can remove certain common elements with a single click. For instance, it can scrub unwanted wires and cables, and remove tourists from your travel photos. Adobe is joining several other players in the generative AI (GAI) space by rolling out its own model. The Firefly Video Model is powering a number of features across the company’s wide array of apps.
It works great for removing cables and wires that distract from a beautiful skyscape. This really begins with defining our brand and channel guidelines as well as personas in order to generate content that is on-brand and supports personalization across our many segments. The rapid adoption of generative AI has certainly created chaos inside and outside of the creative industry. Adobe has tried to mitigate some of the confusion and concerns that come with gen AI, but it clearly believes this is the way of the future. Even though Adobe creators are excited about specific AI tools, they still have serious concerns about AI’s overall impact on the industry.
One capability generates visual assets similar to the one highlighted by a designer. The others can embed new objects into an image, modify the background and perform related tasks. Some of the capabilities are rolling out to the company’s video editing applications. The others will mostly become available in Adobe’s suite of image editing tools, including Photoshop. For photographers not opposed to generative AI in their photo editing workflows, Generative Remove and other generative AI tools like Generative Fill and Generative Expand have become indispensable.