What’s New in Adobe Creative Cloud 2025

The 2024 Adobe MAX conference brought the expected wave of major upgrades and new features. It won’t surprise anyone that Adobe Firefly generative AI was behind many of the announcements, but it’s also important to recognize the other major themes that drove new features across Creative Cloud apps.

As we saw last year, it’s no longer the case that everything announced at MAX is a surprise. Instead of holding back all new features until MAX, Adobe has already made some available for user testing and feedback in the public beta versions of Photoshop, InDesign, and other apps, and some of those have graduated from beta to regular features. In this way, MAX has also become a time for Adobe to review Creative Cloud progress over the past year.

First, let’s review notable enhancements to specific Creative Cloud apps. Then we’ll look at important themes and trends that Adobe has chosen to drive the development of Creative Cloud and its apps over the next year.

How to Learn About New Features

This article is an overview of the changes. But what if you want to review your favorite app’s new features in more detail, or that app isn’t covered here? Where can you find the list?

Probably the easiest way to see new features of a desktop Creative Cloud app is to go to the Home screen and click the “gift box” icon near the upper right corner. Some apps have a What’s New command on the Help menu. For many apps, you can also do a search in your web browser for “What’s New in Adobe…”, because Adobe publishes new feature summaries under that title for many Creative Cloud apps.

Find out about new features in Photoshop 2025 by using the What’s New command or button.

InDesign

A group of enhancements provides evidence that InDesign development is alive and well. Naturally, Adobe thought of ways to adapt Firefly generative AI for InDesign 2025 (version 20).

Generative Expand and Text to Image

Generative Expand lets you “un-crop” an image, filling in the empty space if any side of a graphics frame extends beyond the graphic it contains. For example, this might be useful if your layout needs an image in a different aspect ratio than the original. However, in InDesign 20.0, Generative Expand has issues with color, resolution, and more, which Mike Rankin investigated in the CreativePro article Why You Should Use Generative Expand in Photoshop, Not InDesign.

Before and after applying Generative Expand in InDesign

Text to Image is similar to Generate Image in Photoshop: It’s the now-familiar ability to create an image for your layout by typing a text prompt describing what you want. Text to Image provides options similar to what you get in Photoshop, so you can guide and customize the result. Jason Hoppe explores Text to Image in the CreativePro article Using Text to Image in InDesign.

However, many InDesign users already use Photoshop to prep their images, where generative AI tools are advancing faster with more options and flexibility. Also, be aware that using Generative Expand or Text to Image creates new images and their variations in an automatically created folder named InDesign GenAI Assets, in a default location that might be very different from the folder containing your InDesign document and its existing links. If you use these generative AI tools, you’ll probably want to use the File > Package command to collect all of the correct linked images when you’re ready to hand off final files.

Insert MathML

If you create math textbooks or other technical publications, you might be interested in the new Insert MathML command. MathML is a form of XML designed for representing mathematical notation, and if you paste MathML code into the Insert MathML dialog box, InDesign can render it on the layout. However, depending on your needs, Insert MathML might not yet be robust enough to replace plug-ins such as MathTools. There are some initial issues; see the CreativePro article How to Avoid Print Problems with MathML in InDesign.
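To give you a sense of what that code looks like, here is a minimal MathML fragment (standard W3C presentation markup; a hypothetical example, not the sample code InDesign provides) that represents the equation x = √(a/b):

  <math xmlns="http://www.w3.org/1998/Math/MathML">
    <mi>x</mi>
    <mo>=</mo>
    <msqrt>
      <mfrac>
        <mi>a</mi>
        <mi>b</mi>
      </mfrac>
    </msqrt>
  </math>

Pasting a fragment like this into the Insert MathML dialog box should render the notation on the layout. In practice, you’ll usually generate MathML from an equation editor or a conversion tool rather than writing it by hand.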

The Insert MathML dialog box opens with sample code that you can replace by pasting your own.

New Export Options for HTML5 and Adobe Express

InDesign 2025 can now export to HTML5. Some aspects of HTML5 export and MathML enhance accessibility, as Laura Brady covered in the CreativePro article New Accessibility Features in InDesign 2025.

InDesign 2025 can also export layouts to Adobe Express. A designer can create templates that clients or other employees can fill in using Adobe Express, without having to know or use InDesign. This may seem like a minor thing, but there is potential in linking the power and precision of apps such as InDesign with the simplicity and speed of Adobe Express.

Photoshop

As expected, many new features in Photoshop 2025 are related to Firefly generative AI. Adobe upgraded the Adobe Firefly Image Model, which should improve the results from all generative AI features including the ones covered below.

Distraction Removal in the Remove tool

The already capable Remove tool gained a new option to remove content that Adobe defines as “distractions.” Clicking the new Find Distractions button reveals a menu for recognizing two kinds of distractions: Wires and Cables (such as power lines), and People (such as tourists in the background).

Wires and Cables is a one-click removal: You click the button, and it finds and removes wires and cables. People is a two-stage process: It highlights the people it plans to remove, and you can edit the highlights to keep the people you want in the picture while removing the others.

The initial results are impressive, but as with many generative AI features, zoom in and check the results closely: the larger the area to be replaced, the higher the chance that generated details won’t be acceptable and will require a little manual retouching.

Steve Caplin looked at Distraction Removal in the CreativePro article Removing Distractions in Photoshop, including people removal.

Before and after applying Wires and Cables. It even removed most, but not all, of the wires along the walls.

Generate Similar and Generate Background

The Generative Fill and Generate Image features introduced since the previous Adobe MAX are now augmented with Generate Similar and Generate Background. Don’t be surprised if you don’t see them right away as you do other generative AI features; they become available only under certain conditions.

Generate Background appears in the Contextual Task Bar, but only after you click Remove Background. Clicking Generate Background presents a prompt field where you can type a description of the background you want. You can leave it blank, but you’ll probably like the results more if you guide it by typing a specific prompt. The generated background becomes a new Generative Fill layer behind the layer that was selected.

Remove Background and Generate Similar are options for variations.
Generate Background is available after you click Remove Background, and creates an entirely different background for the subject.

Generate Similar appears for each variation in the Properties panel, when a generative fill layer is selected. If you strongly prefer a specific variation and want to see more like it, choose Generate Similar.

Generative Workspace

As you refine a creative idea, it can be labor-intensive to work through many AI variations using the Contextual Task Bar and Properties panel. Adobe wants to support higher-volume generative AI workflows, and so they created the Generative Workspace. For now, Generative Workspace is available only in the Photoshop public beta, not the normal Photoshop 26.0.0 release that was current at the time this article was published.

Generative Workspace is a one-stop shop for working through iterations of generative AI results, as a sort of batch processor and organizer. The main benefits are:

  • You can generate variations in parallel, so you don’t have to wait for one prompt’s results before trying a different prompt.
  • You can generate multiple types of variations from a single prompt by using variables. For example, if you want to see variations in different specific colors, include each color name as a separate variable in the same prompt.
  • You can focus on your best ideas by marking variations as favorites, and filtering the list to show only favorites.
  • It provides a timeline of variations and their prompts, which persists across documents and sessions so that you can review how your ideas progressed.
  • You can batch download or batch delete variations. (Variations are stored in the cloud.)

When you want to move forward with your results, in Generative Workspace you can easily open selected variations as new files, add them as layers to an existing Photoshop document, or download them as images. Overall, Generative Workspace could be an accelerator for workflows that use generative AI intensively.

If you expect to use Generative Workspace heavily when it graduates from beta to a regular feature, review the Generative Credits FAQ to see how quickly it might consume your generative credits.

Generative Workspace lets you iterate quickly to refine creative ideas. This single prompt generates multiple styles of houses by including variables in the text prompt.

Substance 3D Viewer

Many remember when Photoshop had a 3D menu, and that it was removed because the 3D code was becoming obsolete and incompatible with today’s hardware. For those who’ve been waiting for 3D features to return to Photoshop, they aren’t back yet, but Adobe MAX did bring news of a possible future. Adobe released a public beta of Substance 3D Viewer, free for a limited time. In this simple app, you can:

  • Open 3D objects and compose them into a scene.
  • Use Firefly generative AI to create 3D objects, and to generate a background around the 3D objects. Your 3D objects can be reference content that influences how the generated background will look, so that it all goes together.
  • Style 3D objects using colors, material presets, and more.
  • Add lighting and shadows to the 3D objects.
  • Render a more polished image, with ray tracing.

After you’re done, what about Photoshop? Just drag the 3D document and drop it in a window in the Photoshop public beta app (for now), and the document becomes a Smart Object layer. Need to edit it? Double-click the Smart Object to open it back in Substance 3D Viewer, with other layers from the Photoshop document visible for reference. Edit it, and send the edits back to the 3D Smart Object in Photoshop.

This new app’s name suggests a potential future workflow that connects Photoshop to the Substance 3D app suite that Adobe acquired several years ago. The way Substance 3D Viewer includes 3D objects in Photoshop is similar to how Photoshop can include and edit a camera raw Smart Object layer using Adobe Camera Raw.

Note that the system requirements for Substance 3D Viewer are somewhat higher than for Photoshop. That’s typical for 3D apps.

Substance 3D Viewer can supply a 3D scene to Photoshop as a Smart Object layer.

Illustrator

You guessed it… Illustrator 2025 (version 29) gains Firefly generative AI tools. Adobe also said at MAX that the Illustrator team is working towards making every tool multi-threaded and GPU-accelerated, to take better advantage of today’s multi-core CPUs and powerful GPUs.

Objects on Path

Behold, a new feature that isn’t generative AI. A new Objects on Path tool (and command) lets you attach multiple selected objects to a path you click, and they become evenly spaced along the path. You can adjust the sequence, spacing, and rotation of the objects on the path by dragging handles or editing values in the Properties panel, and you can edit each object individually.

What sets Objects on Path apart from similar features is that each object can be different. For example, Blend starts with two objects and can interpolate multiple evenly spaced intermediate objects; the Transform Again and Repeat features each start with one object and can create multiple duplicates of it. But with Objects on Path, you can select several completely different objects and attach them all to a path.

Objects on Path adds another option for attaching objects to closed or open paths in Illustrator.

For more on this topic, see Steve Caplin’s article, Placing Objects on a Path in Illustrator.

Generate Vectors, Generative Shape Fill, Generate Patterns

These new features are, of course, powered by Firefly generative AI. All are offered as beta features in Illustrator 29.0 (the regular release, not a public beta release). They’re adapted for vector graphics, not simply copies of the Firefly features in Photoshop and InDesign.

You know how Generate Vectors works: You type a text prompt and get a graphic, with options similar to those in other Adobe apps. But appropriately, in Illustrator you get a vector graphic you can edit with path tools, instead of the pixel graphics you get in Photoshop and InDesign. In addition to a subject, Generate Vectors also lets you generate a multi-object scene, or an icon for a website, app, or game.

Generate Vectors is like Generate Image, but creates paths.

Generative Shape Fill lets you give Firefly a prompt that’s both text and visual. As usual, you type a text prompt, but the other part of the prompt is an outline that you draw. For example, if you want a car drawn so that you see its front and side, draw and select an outline with that perspective and Firefly generates that view, not a front view. A Shape Strength option determines how tightly the generated graphic follows your outline.

I drew an outline of a refrigerator in perspective (left), and asked Generative Shape Fill to fill it using the text prompt “green refrigerator” (right). This variation has an odd extra door handle.

Generate Patterns is also something different: The text prompt you type creates an Illustrator pattern swatch, so you can use and edit the result as you would any other pattern swatch.

Generate Patterns can save you the trouble of drawing and tiling a pattern manually. Here, patterns are explored for a cooking apron.

These features offer the same flexibility as the Firefly features in Photoshop, where you can guide the result using options in the Contextual Task Bar and Properties panel, choose from multiple variations, and use a style reference to help the generated images better match your own style or the look your client wants.

All of these features are commands on the Object menu (or a submenu). They can also appear as buttons in the Contextual Task Bar or the Quick Actions section of the Properties panel, but those can be elusive because the buttons appear and disappear in those locations depending on what’s selected.

Enhanced Image Trace

The Image Trace feature received some minor enhancements. Image Trace still hasn’t gained the full benefit of machine learning (it can’t yet recognize and separately trace real-world objects and subjects in photos), but new options in Illustrator 2025 make Image Trace work a little better for converting images of logos and flat-color graphics into vector graphics.

Running Image Trace can result in hundreds of new paths in the Layers panel, so the new Auto-Grouping option attempts to organize them into groups. But in my tests it left many paths ungrouped, apparently because the criteria for grouping are currently limited.

If an object in an image is similar to a shape you can draw with a tool in Illustrator, such as a rectangle or ellipse, Image Trace can trace it as a shape and not just a path. This is convenient because it enables shape-specific options such as Pie Angle for an ellipse shape.

If Image Trace detects a gradient in a traced image, it can apply a gradient fill to the path it traces. Previously that gradient might be traced as a large number of solid paths.

The new options don’t always work, especially on more complex images, but hopefully Adobe will continue to modernize Image Trace.

Some new Image Trace options are hidden in the Advanced settings, and some are available only if the Mode and Palette are set to allow many colors.

Project Neo

At Adobe MAX 2023, one of the Adobe Sneaks (technology preview demos) was of Project Neo, a way to draw 3D objects using tools designed to be more familiar to Creative Cloud users than fully featured 3D apps such as the Substance 3D suite or Blender. At MAX 2024, Adobe announced that Project Neo is now available as a public beta, so you can try it. Check the system requirements because, as with Substance 3D Viewer, they’re higher than Photoshop’s. And be aware that for now, Neo is a web app that works in the desktop versions of Google Chrome or Apple Safari.

I included Project Neo under Illustrator because Adobe demonstrated it as a companion app to Illustrator, with Illustrator-like controls. There’s a Send to Illustrator as Vectors command, or you can download the 3D project as a 2D pixel or vector graphic, or as an animated MP4 video.

Project Neo is to Illustrator as Substance 3D Viewer is to Photoshop: They’re both standalone applications designed to specialize in preparing 3D graphics that are included in an Illustrator (vector graphics) or Photoshop (pixel graphics) document, respectively. Both can export in common formats for other applications.

Project Neo can be a fun way to quickly draw 3D scenes and convert them to 2D graphics.

Retype

Released as a beta feature before MAX, Retype (choose Window > Retype (Beta)) can identify fonts in pixel images and in outlined text, and convert that text to editable text.

Camera Raw and Lightroom/Lightroom Classic

Camera Raw 17, Lightroom Classic 14, and Lightroom 8 gained many of the same features, but the details may vary among them. The mobile versions of Camera Raw and Lightroom may not have all of the options in the features below. Camera Raw 17 has some new features that aren’t yet available in Lightroom or Lightroom Classic.

Generative Remove

Consistent with Photoshop, the Adobe raw processors include the latest retouching technology under what is now called the Remove tool, which is deeper than it might look at first.

The Remove tool has three modes: Remove, Heal, and Clone. If you’ve used Heal and Clone in the past, those are pretty much the same. The Remove mode is where the action is in this release. It offers a Use Generative AI option, which can create more satisfying results than the traditional tools, along with Variations. You can also improve the results by adding to or subtracting from the area to remove. If you find it tedious to paint precisely, a new Detect Objects option lets you roughly highlight or circle an area; the Remove tool then analyzes it further, automatically limiting the removal to objects it recognizes within the area you highlighted.

The Use Generative AI option isn’t perfect. It takes more time than the other options, and it requires an Internet connection. If you want to work faster, or your device isn’t connected to the Internet, or you’re preparing work for a project where using generative AI isn’t allowed, you can disable Use Generative AI; the other options still work and the result will be processed on your device.

The Remove tool now offers Use Generative AI, Detect Objects, and selection refinement options.

Generative Expand

Camera Raw 17 adds the Generative Expand feature that’s also available in Photoshop and InDesign. It’s available for options that change the image geometry, such as Crop, Upright, and Manual Transformations. In other words, it’s available for features that traditionally offer a Constrain Crop option, because Generative Expand can now fill in areas that would have been empty if Constrain Crop was deselected. Of course, you can choose from one of the three Variations or generate more Variations. Generative Expand is not yet available in any version of Lightroom.

HDR Updates

HDR editing was introduced in 2023 and is different from the much older feature of merging multiple images to HDR. You can edit tonal levels beyond the SDR range, and on a compatible HDR display you can also see those levels.

In Lightroom Classic, one limitation of HDR editing was that the HDR levels were visible only in the Develop module, but in the versions released in October 2024, it’s now possible to view HDR levels in the Library module, Compare view, and Full Screen Preview mode.

When you export an HDR-edited image, Camera Raw, Lightroom Classic, and Lightroom now support ISO gain maps and more control over how they’re exported.

Content Credentials support on Export

Camera Raw, Lightroom Classic, and Lightroom now offer Early Access (testing) support for Content Credentials when exporting. Content Credentials are an implementation of the industry-wide Content Authenticity Initiative, which seeks to provide tamper-evident, verifiable provenance metadata. For more information, see the Adobe help article Content Credentials. After introducing Content Credentials first in Photoshop, Adobe has been gradually introducing and refining how Content Credentials work throughout Creative Cloud apps.

Improvements to Tethering for Nikon Cameras

Lightroom Classic users with Nikon cameras who do tethered shooting (shooting with a camera directly connected to a computer) have been waiting for better support, so the Nikon tethering code has been improved. On macOS, it’s no longer necessary to run Lightroom Classic in Rosetta (the macOS translation environment for Intel-based software) to do a tethered shoot with Lightroom Classic.

Preview Management

Many Lightroom Classic users don’t realize that if a catalog contains many thousands of images, its preview cache can grow to many gigabytes in size. It’s an expendable cache, so once in a while I would delete it and let Lightroom Classic rebuild it, but then I’d lose previews for images I was still editing. In Lightroom Classic 14, Adobe added a Limit Preview Cache Size option; after setting that limit, I no longer have to manage the preview cache manually. You’ll find the option in the Catalog Settings dialog box.

The Limit Preview Cache Size option is especially welcome on laptops with limited internal storage space.

Denoise enhancements

The popular AI-powered Denoise feature (not to be confused with the older Manual Noise Reduction options) now works with linear DNG files in Camera Raw 17, Lightroom Classic 14, and Lightroom 8 on macOS and Windows. That opens up Denoise to many more formats, such as DNG files created by Photo Merge commands (HDR and Panorama), Apple ProRAW files, and Samsung Galaxy Expert RAW DNG files. However, we are still waiting for Denoise to work with non-raw formats such as JPEG and TIFF.

Camera Raw 17 offers a new option with the dull name New AI Features and Settings Panel, but word of it is spreading like wildfire because it represents a feature that many have wanted: being able to apply the Enhance options (Denoise, Super Resolution, and Raw Details) interactively, without creating a new DNG file with the result. You’ll find the option in the Technology Preview section of Camera Raw Preferences because it’s available for public testing. Because of that, it might not work perfectly; for example, there are reports that Camera Raw runs slower with that option enabled. After Adobe finishes debugging and optimizing it, it will no longer be a Technology Preview option, and it will likely be added to Lightroom Classic and Lightroom as well. Because those apps don’t have the option yet, you might not want to use it with files you also want to manage in Lightroom Classic or Lightroom.

On macOS, Adobe recently began supporting the Apple Neural Engine (an AI accelerator) to help speed Denoise processing, but has once again disabled that support (hopefully temporarily) due to bugs. However, Denoise speed on both macOS and Windows has always depended more on the power of the computer’s graphics hardware than on any other component.

Catalog upgrade refinement

Upgrading Lightroom Classic catalogs has traditionally been complicated by how the previous version catalog was handled and named. Adobe simplified this in Lightroom Classic 14; your catalog keeps its name during the upgrade and the old catalog is compressed and moved to another folder. I think they got it right this time.

Adobe Adaptive raw profile

Camera Raw 17 includes a new raw profile option named Adobe Adaptive, which is a beta feature for now. The existing raw profiles, such as those in the Adobe Color and Camera Matching categories, do the same thing every time. The Adobe Adaptive profile can render each image differently, using AI to analyze the image and then provide what it thinks is an optimal starting point for the specific image content. It is not the same as the Auto adjustment, which actually changes edit values. If you want to try the Adobe Adaptive profile, Adobe recommends applying it before making other edits. You can read more in the Adobe article The Adobe Adaptive Profile.

Quick Actions

The Lightroom mobile and web apps offer Quick Actions. Similar to the Adobe Adaptive profile, Quick Actions are not hard-coded but adaptive; the Quick Actions you see depend on what AI finds in the image. For example, viewing a head shot offers Quick Actions for portrait retouching, while viewing a landscape photo offers different Quick Actions. This is an Early Access feature.

Quick Actions in Lightroom suggest different next steps for a photo of a landscape (top), and of a person (bottom).

Smart Albums

Lightroom for Windows and macOS now offers Smart Albums, which are albums generated by matching criteria you specify, similar to a saved search. If you’ve used Smart Collections in Lightroom Classic and Adobe Bridge, or Smart Folders in macOS, Smart Albums work the same way. For now, Smart Albums are not yet available in Lightroom on mobile devices or web browsers.

Bridge

Bridge 2025 (version 15) has few changes compared to the version 14 upgrade, but they’re still worth noting.

Similar to Lightroom mobile, Quick Actions are buttons that let you do common tasks easily. In Bridge they’re adapted to what you might do with files in Bridge, including Remove Background, Trim Video, and Convert to GIF. Clicking a button opens an Adobe Express window where you can drop a file from the Content panel in Bridge, so the file you drop is actually processed using Adobe Express in the cloud.

Quick Actions in Bridge can take care of some tasks quickly, by using Adobe Express cloud servers.

In other news, Bridge adds Content Credentials support to the Export panel as an Early Access feature. On Windows, Bridge can now display thumbnails and previews for HEIC and HEIF images; those formats were already supported in Bridge for macOS.

Premiere Pro and After Effects

The 2025 (version 25) releases of the Creative Cloud video editing apps include minor improvements, but they aren’t random; they tend to support larger overall goals.

Properties Panel

The Properties panel is now available in Premiere Pro and After Effects, presenting commonly used options in one panel so that you can keep fewer panels open. The Properties panel has proved its worth in the Adobe graphic design apps; the space savings might be even more welcome in the video apps, which also lose screen space to the always-open Timeline panel.

The Properties panel is now in Adobe video apps such as After Effects.

Design refresh

Adobe updated the visual design of Premiere Pro and After Effects for better accessibility and consistency with other Adobe apps.

Generative Extend

Premiere Pro introduced Generative Extend in the public beta release of Premiere Pro 2025. This is an example of Adobe adapting Firefly generative AI for the needs of a specific creative medium. In the Adobe photo and design apps, you’ve seen that Generative Expand extends a still image spatially (across space). With Generative Extend, the extension is temporal (across time): If you wish a video clip were another half second longer so that an edit or transition would be more effective, Generative Extend can create that extra half second, continuing any motion at the end of the original clip.

Other Enhancements

Several versions ago, Premiere Pro redesigned the New Project dialog box to be more approachable for novice users. However, there was a backlash from pros who felt that the guided design slowed down the way they want to create projects. In Premiere Pro 2025, the New Project dialog box has been re-simplified to be closer to the way it used to be. Novice users will still be led through a guided Import screen, but pros can select Skip Import Mode to go straight to the Premiere Pro workspace, where they can import media directly into the Project panel.

Trends and Goals Driving the Changes

You might have noticed some patterns in the new features across apps. These are important to recognize, because they help in understanding the priorities that Adobe chose for driving the changes in the 2025 releases of Creative Cloud apps.

Workflow Enhancements Across Apps

Some features that were introduced over time have proved useful enough that Adobe added them to more apps.

Properties panel. Because the Properties panel combines options from other panels into collapsible sections, you can close those other panels, saving screen space.

Contextual Task Bar. As its name implies, the Contextual Task Bar (CTB) focuses more on tasks, although it can also show settings. As the current selection changes, the CTB offers potential tasks such as Select Subject, Add to Mask, or typing a text prompt to create generative AI content.

The Properties panel originated in Photoshop; you can now find it in the 2025 releases of InDesign, Illustrator, Premiere Pro, After Effects, and many Adobe mobile and web apps. The Properties panel and Contextual Task Bar have great value for mobile apps, simply because their screens tend to be too small to spread out many different panels.

Teaming the Properties panel and Contextual Task Bar turned out to be valuable for generative AI. Several apps let you enter an initial text prompt in the Contextual Task Bar, and the Properties panel has the space to fully list Variations and options.

History panel. Originally in Photoshop, the History panel was recently added to both Illustrator and InDesign.

Quick Actions. As covered earlier, these macro-like buttons can perform commonly used tasks with fewer clicks than might be required traditionally. They started as a feature in Photoshop and Adobe Express, and have now been picked up by Lightroom and Bridge. The Express and Lightroom versions use AI to suggest which Quick Actions might apply best to selected content.

Express to Everywhere

Adobe Express is a quick way to produce graphics and video with little effort and at low cost, on desktop or mobile devices. On the surface, it looks like a cloud-native version of low-end apps such as Photoshop Elements, and Express might not be on the radar of pros working intensively in traditional workhorse apps such as Photoshop and InDesign.

There’s a lot more to Express than that. Some perceive that Adobe developed Express to respond to the emergence of Canva (also web-based and easy to use), and there’s probably some truth to that. Canva gave people a way to make and publish graphics online without the high price and steep learning curves of apps such as Photoshop. Like Canva, Express is also interdisciplinary: You can use it to edit and combine photos, graphics, layouts, and video, instead of having to learn three or four apps. And you don’t even need a computer, because Express is cloud-based and works on mobile devices.

One reason for pros to pay attention to Express is that Adobe is building ties between Express and its pro apps. We’re seeing this in how Express supports Photoshop and Illustrator files, so that you can use them in Express without exporting or converting them. And also in how you can now export an InDesign 2025 file as an Express template.

In Bridge 2025, Quick Actions buttons such as Resize and Convert to GIF upload content to Express services that perform the actual work. This suggests that Adobe has ideas about expanding the role of Express as a server-based resource for common tasks across Adobe apps.

Collaboration with frame.io

frame.io is a cloud-based service that’s popular with professional video production teams, because it lets them collaborate online efficiently. That’s possible because the frame.io architecture allows frame-accurate online viewing of and commenting on diverse video formats, hosted on cloud servers. Adobe acquired frame.io in 2021, giving Adobe video apps an online collaboration component similar to Share for Review and Invite to Edit in the graphics apps, but with the high quality and performance to be practical with high resolution pro video.

At Adobe MAX, Adobe announced that frame.io version 4 would extend its file format support significantly, including non-video formats such as Photoshop, InDesign, Illustrator, and PDF files. This change may remind you of Adobe Bridge, in that you can now use frame.io to manage a wide variety of project assets. But Bridge is a local, single-user desktop app; the significance of frame.io 4 is that it’s a multi-user web and mobile app with assets stored on cloud servers. That makes frame.io immediately ready for team collaboration online, including centralized asset storage, review, commenting, and approval on desktop and mobile.

frame.io is also known for its camera-to-cloud workflow. If a production team records video on cameras connected to frame.io in the cloud, their clips are uploaded and become available to remote editors, who can begin reviewing them immediately. Now Adobe extends that capability to still photographers by connecting frame.io to cloud-based Lightroom, as a beta feature. A photographer can shoot at an event location or in their studio, and as soon as the uploaded photos reach the cloud, a team can review the pictures from many other locations. An increasing number of still cameras support the frame.io camera-to-cloud connection.

Given all of that, keep an eye on how Adobe might expand frame.io to connect more Adobe apps, to support more kinds of online team production workflows.

frame.io enables team approval workflows and online project asset management of many file formats, on desktop and mobile devices.

The Big Picture of Firefly Generative AI

At MAX 2023, Adobe introduced Generative Fill in Photoshop. In 2024, Adobe added diverse forms of generative AI across multiple Creative Cloud apps. Photoshop got Generate Image and Generate Background. Several apps got a Remove tool with a Generative AI option, along with other features such as Generative Expand/Extend and Generate Image/Vectors. These features usually come with options such as Variations, Reference Image, and Effects.

There is a method to this. It starts with what Adobe calls the Foundation Models for Firefly generative AI. For example, Firefly Image Model is the basis for the features in the graphics apps, and generative AI in the video apps is based on the Firefly Video Model. Then, each app adapts a Foundation Model for its users’ needs, such as how in Illustrator, Firefly generates vector graphics and offers Text to Pattern.

How Adobe has branded its solutions helps us understand their differences. What Adobe named Sensei is more about machine learning, which is largely about recognition (“that’s a sky, that’s a person…”) and decision (“this type of picture usually benefits from a boost in the shadows”). What Adobe named Firefly is generative AI, which is more about creating new content (“Fill the empty space beyond the edges of this photo”). Generative AI (Firefly) raises more questions and objections than machine learning (Sensei), so judging all AI the same way may not be appropriate. Controversies such as those about rights and ownership are primarily centered around generative AI.

Do We Trust Firefly?

In 2024, the creative community expressed widely publicized concerns about Firefly generative AI, and Adobe worked to address those concerns.

The Adobe Approach to Generative AI

In late 2024, Adobe decided to put their approach to generative AI in writing with specific points that they hope will allay many concerns and fears. They posted the article Our Firefly Approach, a list of statements that spell out how Adobe trains Firefly models and supports creators. It’s worth a read just to understand the public position Adobe takes on these issues. For example, four of the statements are:

“We do not and have never trained Adobe Firefly on customer content.”

“We only train Adobe Firefly on content where we have permission to do so.”

“We do not mine content from the web to train Adobe Firefly.”

“We do not claim any ownership of your content, including content you create with Adobe Firefly.”

On the web page, all of the statements are expandable for more detail. Making these points might help distinguish Firefly in the marketplace, because if other companies providing generative AI solutions are not able to make similar statements that they can stand by, Adobe may be seen as more trustworthy. For example, although generative AI for video is already offered by others, at MAX 2024 Adobe said that the Firefly Video Model is “the first publicly available video model designed to be safe for commercial use.”

Deciding to Use or Not Use Firefly Generative AI

There may be times when you don’t want to use generative AI, or are not allowed to. But generative AI features are distributed in different parts of Adobe apps, so they’re not all in one place. Many users have asked for some kind of master switch to shut off all generative AI, for easier compliance on projects where generative AI is not allowed or wanted. At this time there is no master switch, but there are two clues that can help.

If a feature has the word “generate” or “generative” in it, it typically uses generative AI. This isn’t foolproof, because a feature like Text to Pattern doesn’t use those words, and in Photoshop 2025 the Generator Plugins submenu is an older feature that has nothing to do with generative AI.

It can be more reliable to watch out for a dialog box called Generative AI in Adobe Apps, which asks you to agree to the Adobe Generative AI User Guidelines. This message appears when you first use a generative AI feature. If you never agree to the terms, generative AI features are not enabled.

In some cases, Adobe provides an option at the feature level. For example, the Remove tool in the photography apps has a Use Generative AI option that you can disable.

You can use the Remove tool with Use Generative AI disabled, shown here for Photoshop 2025 (top) and Camera Raw 17 (bottom).

A Multi-Dimensional Solution for Generative AI

Adobe seeks to maintain our trust by offering the total package: Generative AI technology that’s effective, solves real problems, is seamlessly integrated into familiar Creative Cloud workflows, is “commercial-safe” (non-infringing, fairly sourced), and “creator-friendly” (respecting creator rights, and providing ways to let your own creative style guide the results). Other companies offering generative AI might satisfy some of those goals, but if they can’t satisfy them all, then Adobe could claim to have a more practical solution.

Sneaks

At Adobe MAX, the Sneaks (sneak peek demos of potential future features) are always popular because they give insight into directions Adobe is exploring. You can watch all of them on the MAX Sneaks web page, and here are some of the highlights.

Project Clean Machine restores full tonal detail to video frames blown out by camera flash or fireworks. The method wasn’t explained in detail, but my guess is that it reconstructs a blown-out video frame using image data from undamaged adjacent video frames.

Project Remix-a-Lot can convert a hand-drawn sketch or an image reference of a desired layout into a fully realized page layout of generated images and editable typography. If you want different aspect ratios for various delivery media, it can instantly recompose the layout and reformat the objects and type. Parts of this idea were explored at the MAX 2023 Sneaks, but this version is more polished and was shown as if it were a feature of Illustrator.

Project InMotion simplifies creating animations of graphics and type, using visual prompts such as reference images to match a style. Again, this looks like the next stage of some ideas explored at MAX 2023 but now more consistent with how Firefly tools have evolved within current Adobe apps. It was demonstrated in the Chrome web browser, as a web app that looks a lot like current Adobe web and mobile apps.

Project SuperSonic explores using generative AI to create new sounds that are consistent with video images. An example was a video of a forest with no sound, using Project SuperSonic to generate ambient audio that sounds like birds and animals in a forest. (It wasn’t clear whether the sounds were consistent with that particular forest type.) As we have come to expect from Firefly, it supports variations and reference content. The presenter demonstrated reference content by making monster-like noises into a microphone, which Project SuperSonic used as a guide along with the animation of the monster to create the final sound of the monster’s roar.

Project HiFi generates high fidelity 3D scenes based on text prompts, hand sketches, Illustrator graphics, and even camera input of real-world examples. It was demonstrated by designing a room, with Photoshop as the host app for the feature.

Project Scenic demonstrates easier ways to generate an exterior scene. It also explores using text prompts to change the view (such as typing “top view”), and layered prompts for more control over changes at the object or scene level.

Project Perfect Blend can instantly composite multiple photos so that their tones, colors, and overall lighting are made consistent. This was demonstrated as a potential Photoshop feature, and it looks like a much simpler and more intelligent form of the current Harmonize filter among Photoshop’s Neural Filters.

Project KnowHow demonstrates potential future implementations of Content Credentials, such as video fingerprinting, and invisible watermarks on printed designs (such as a package or mug). The demo showed a mobile app using a smartphone camera to detect an invisible printed watermark on printed products held up to the camera, connecting to the Content Authenticity website to identify the image creator and verify image provenance.

Project Turntable is an intriguing way to, for example, take a 2D drawing of a cartoon character and use generative AI to instantly convert it to a 3D object you can rotate, place in a 3D scene, and animate. It was demonstrated as a potential feature in Illustrator.

The MAX Sneaks left a strong impression that 3D, animation, video, and audio are some of the next frontiers into which Adobe seeks to expand the use of generative AI. Adobe also sees great potential in using machine learning and generative AI to make 3D artwork much easier to create compared to traditional 3D software.

Looking Ahead

For most of human history, artists developed mastery of a medium by spending a lifetime refining their understanding of the same mostly unchanging set of physical tools. Today’s digital creative tools advance at a rapid pace, frustrating creators who might struggle to keep up with software that constantly changes and can even become unfamiliar. Generative AI came out of nowhere with unusually high potential to affect more areas of our daily lives than just our work. Companies are racing to profit from that potential before others do, so generative AI is in some cases changing our tools faster than ever. And sometimes companies do not sufficiently consider generative AI issues beyond the purely technical, making missteps (in marketing, rights handling, and legal matters) that continue to create doubt, fear, and intense distrust of generative AI.

Across Creative Cloud apps, Adobe chose enhancements for the 2025 releases that make generative AI more of an everyday tool for simplifying idea development, creation, and production across all media and devices. Adobe is also paying attention to how to manage AI model training, creator rights, and provenance in a responsible way. And they continue to advance ways to improve workflows and help creative teams collaborate and manage projects more effectively online.

Over the next year we’ll learn more about how well Adobe is responding to concerns about generative AI, and how their multifaceted solutions will position Adobe among other companies that are also advancing generative AI as quickly as they can.

Conrad Chavez is the author of Adobe Photoshop Classroom in a Book (2025 release), and contributes to CreativePro.com and CreativePro Magazine. You can find out more about Conrad at his website, conradchavez.com.