
AI Is Reimagining Every Aspect Of Business, Adobe’s Strategy Lead Says

Throughout his career, Scott Belsky has been an entrepreneur, creator, author, investor and business advisor. He’s currently the chief strategy officer and executive vice president of design & emerging products at Adobe. I talked with him late last year about how AI has changed strategies for businesses and creators, as well as what is to come in the technology in 2025.

This conversation has been edited for length, clarity and continuity. It was excerpted in the Forbes CEO newsletter.

Tell me a bit about your role at Adobe.

Belsky: My responsibility is overseeing our emerging product groups. These are the groups that include 3D & Immersive. They include the AI-first products, like Project Concept and a whole roadmap of others. They include the Adobe Stock business, which is going through a re-imagination in the world of AI, and is also responsible for sourcing the data that we train our models on.

Then, of course, design. The design organization spans all of our products and services across the company, and it’s an important part of my role to make sure that the designers are aligned around the experiences we’re delivering to our customers. Upgrading our design system, ensuring that we don’t reinvent the wheel across different products when it relates to a particular component or function.

The Strategy & Corporate Development Group also is in my organization. That’s a group that plots the strategy for Adobe going forward, and explores the edges that may someday become the center for the company, then also takes action on some of the inorganic opportunities that we have on the M&A side.

When you’re looking forward, how large of a role does AI play in corporate strategy?

Artificial intelligence to me is reimagining and refactoring every function of an organization. Whether it be finance and accounting, or HR, or legal, or creative, or marketing, these functions are being re-imagined based on these tools. These tools are also collapsing the boundaries between some of these functions, allowing stakeholders who ordinarily could never access the power of these functions to do things on their own behalf.

When you look at the world of creativity and marketing, a few things become clear. First of all, you have a lot of marketers at the edge, like these social media marketers, and people who are engaging in real-time on behalf of the brand on social platforms, where a lot of the spend goes these days for marketing. Those folks need to be outfitted to be able to create in real time. They need to be able to create variations of marketing campaigns. They need to be able to create content that responds to the moment. So in some ways, the lines are blurry. These marketers are acting more like creators.

Similarly on the creator side, creators are starting to act as if they’re art directors and creative directors when they have the ability to assign tasks to AI. Instead of the three or five options that they have the time to explore, now they can explore 300 options leveraging the superpowers of AI. In some cases, that makes them more tastemakers, creative directors and art directors when they’re at that stage of the process.

When it comes to using tools to get exactly what’s in their mind’s eye reflected in some sort of content or creative asset, they have these new tools at their disposal. That’s one of the things that we’re most excited about at Adobe: Taking our creative pros from the prompt era of AI creativity to the controls era. Right now, the creative pros that I talk to are still tinkering with these models. They have fun putting in prompts and stuff like that. It’s a cool concepting and ideation process, but none of it is commercially ready for them to use for putting out something. They want to make all these changes, so they take it into Photoshop, they take it into Illustrator, they take it into our products. I think that control era, where AI becomes even more granular and they can take the reins, is an important step for creative pros.

It’s been two years since ChatGPT hit and every enterprise decided they wanted to have generative AI. How have you seen businesses change their perspective on AI? How do they look at it now?

I think there are two camps of businesses. There are the businesses that typically are the pragmatists. They’re always late to adopt new technologies, for all sorts of reasons. It could be regulatory, it could be deeply ingrained traditions, patterns and reflexes. And it could be the leaders are somewhat technophobic and they want to wait until they have to change something.

Then there’s the camp that’s always saying, we need to reimagine how we do what we do before someone else does. I think [a] characteristic of that camp that leans forward in new technology is they recognize that not only should they be optimizing their product, but they should also be optimizing how they work to create their product. They’re always saying: if we could have more efficient meetings, or if we could have more efficient usage of our data, or if we could optimize the way that we come up with marketing ideas, or if we could streamline this and streamline that.

In that camp, I feel like there’s a big lean in on this technology. They’re in most cases at least playing with it. Many of them are piloting it in some way, shape or form across specific projects. And those who are still playing and piloting are trying to protect the groups that are doing so by making the objectives more along the lines of learning, as opposed to a business outcome, because they want to make sure that they grasp the knowledge and learnings of these tools and their workflows.

And then there are some that have already started to achieve business outcomes and really refactor. When I look at Adobe, we are using products like GitHub Copilot for our engineers. That has a material impact on how we work. We use Firefly for our branding and marketing work. We’re already creating variations of content and experiencing this kind of multiplier benefit of using those tools.

From where you sit and what you see, both at Adobe but also in the wider space, what do you see coming online and becoming available and accessible to businesses in the way of AI in 2025?

In 2025, there will be a few themes. Number one is that the memory that these AI tools have of our preferences is going to start to enhance the outcomes that we can achieve. On the content and marketing side, the AI remembering the character consistency of something you’re trying to generate content around—whether it be a product that you’re shooting marketing images of, or a character going through a commercial or a short piece of video that you’re developing. The remembering of your brand components, your preferences for brand tone and copy and all that sort of stuff.

On the personal consumer LLM stuff that we all use, the context windows are getting longer and more sophisticated. These AI agents remembering everything we’ve ever purchased, everything we’ve ever liked or disliked, those preferences are going to start to kick in to uplevel the quality of those experiences.

The rise of [AI] agents is going to surprise all of us. We’ve always had a learning curve when it comes to learning tools, software and processes. That’s the learning curve of starting a job or rising through the ranks. I would imagine sometime this year, these agents are going to start to meet us where we are in the software tools that we use. They’re going to know what we know and don’t know about it, and they’ll allow us to leverage a lot of power with natural language.

I think it’s really exciting to know that anyone could be able to use a lot of tools that were out of reach. It [also] allows companies like us to not just target marketers and creative pros, but also the stakeholders of marketing and creative, which is almost everyone in an organization. And it also just drives better usage of the product. If an agent helps you and takes actions on your behalf and can explain and show you, it’s really a holy grail for helping people get value out of the products they use.

For creative and marketing tools at Adobe, what is on the horizon—in terms of new developments, ideas and concepts—that will impact what you do?

A couple come to mind. Number one is this new category of ideation and concepting. That typically is the top of the funnel of any creative process. It typically happens in a very offline or manual way, in the form of people making mood boards of images or bookmarks on Instagram of things that inspire them. As I’ve gone around and talked to a lot of creators, art and creative directors and brand managers, they’re constantly collecting all this stuff. For them to be able to use AI to rapidly expand the surface area of possibility they can explore to find great solutions to problems, that category of concepting and mood boarding in the age of AI is really interesting to us. I think that’s one new category that’s going to emerge that’s going to be important for Adobe, and also for the creative and marketing worlds.

The second thing that I think we will see is the maturing, development and innovation of our foundation models. Firefly is in the leading pack of models out there for media generation. The key difference is that Firefly was only trained on commercially safe content that we have licensed. I think we’re one of the only companies that’s really transparent about exactly how we’ve trained our model—as opposed to politely not answering the question. I think that’s important to a lot of our customers, especially in the enterprise, or [among] artists who want to use new tools like AI, but in the right way. I think these models will continue to get better, but that differentiation will continue to become more important.

While we want to make Firefly as great as possible and have it be the commercially safe model for commercial creativity and marketing, I also think that customers may want to use third-party models for various purposes. They might want to use a certain generative model that was trained to make explosions, or to help with typography. I’ve been an advocate internally of us supporting third-party models in the workflows that people have. Adobe’s always been a platform-agnostic company, and whenever a platform like Apple or Windows gets better, so do our tools. Similarly, if a new video model comes out, and then another better one, and another better one, if all of those are accessible to our customers for various use cases, how could we not help our customers make use of them? That’s also something that we’re exploring.

You are one of the creators of the Content Authenticity Initiative. Where do you see that going in 2025?

This is one of the true passion projects of my career that I think [is] most important. The Content Authenticity Initiative started over five years ago when we started to see new features emerge in products like After Effects that allowed people to edit real video in a very fast way using AI. So this is before the advent of generative AI, but [it] allowed you to remove objects, mask and do things in video that made something very clear to us: The era ahead was one where we could no longer believe our eyes. In that sense, the question was: What do we do to help people demonstrate that something is real? Or if they’ve edited it, how do we help them show how it was changed so that they can still be trusted? It wasn’t developed to punish bad actors. It was developed to reward good actors who are willing to show the provenance and attribution of their work, which creators love. They want to get credit for their work, so they want to add credentials if they can.

Now, in the age of generative AI and the ubiquity of these technologies, the importance of the Content Authenticity Initiative became 10x greater, because suddenly anything in your mind could be generated in a second. With some of the models out there that are trained on the open internet, you can make anyone appear to be doing anything. We realized everyone’s looking for the same solution. The social platforms want a way to expose credentials of assets to show how they were edited or made. The enterprises of the world want to know which model was used to create assets, because if it’s a non-commercially safe model, they may not want to use it in a marketing campaign.

Over the last five years, this has become way bigger than Adobe. We are one of the leaders among a number of participants. It’s an open-source, open protocol, run by a nonprofit organization with constituents from other companies, including competitors. Adoption of Content Credentials is now north of 3,000 organizations, including all the major camera companies: Nikon, Leica, Canon, Sony. You name it, folks are getting on board.

The vision here is very simple. There’s a cryptographic signature that is put into every asset. It’s not a centralized service. This manifest of data, describing how the asset was edited and made, travels with the asset. It’s done in such a way that even if you take a screenshot, it still preserves that information. The platforms that display it, whether it’s YouTube or Meta, both of whom are on board, have the option of showing any degree of granularity over the credentials attached to that asset. And at the end of the day, we as end viewers can determine whether we can trust it or not.

If we were to be having a conversation a year from today, looking back on what’s happened in 2025 and what to expect in terms of AI for 2026, what would we be talking about?

I think we’d be talking about the new discoveries we’ve had on how to use these reasoning engines. I think we’re still at the infancy of how we leverage these state-of-the-art LLMs. As they gain more power and longer context windows, and start to learn who we are, in some ways empathizing with where we are in our knowledge and what our interests are, these reasoning engines are going to play a really interesting role, augmenting the way our brains work.

The thing that I find most interesting these days, personally, is when I talk to friends about how they’re using these tools: To go back and forth and make health decisions about themselves that they never had the resources to make. Or they’re putting a proposal together for a customer, and putting it into the LLM first and saying, what might give my customer pause? Why might my customer not want to do business with me based on this proposal? They’re getting such incredible, interesting responses increasingly based on who they are or what their company does. It’s really magical. It’s like having an instant focus group of exactly who you’d want to be in front of you at every given moment in time, and having those insights come back to you. I think we’re going to be surprised by the power of these reasoning engines for making decisions, for managing personal things, for being more effective in our jobs.

I also think they’re going to start to play a role in the commerce or purchase decisions that we make. The idea that we used to decide what restaurant to go to or what thing to buy based on how many stars it has from strangers is going to be replaced by the wisdom and guidance of an AI that knows us extremely well and can give us a recommendation based on our preferences, as opposed to what the average person thinks.

If you could give some wisdom to CEOs that are looking forward and trying to map out an AI strategy for 2025, what would you say?

No. 1 is allow your teams to play and pilot this technology. Novelty precedes utility when it comes to adoption of new technology. You might not feel like it’s driving the bottom line immediately, but that play helps them discover the utility that ultimately makes a huge impact in the business.

No. 2 is constantly asking yourself a question. Whenever you’re reviewing the roadmap for a particular function of your organization, or the hiring plan, or the product strategy, you should always be asking yourself: Are we sufficiently refactoring how this works? And are we sufficiently re-imagining what’s possible? We’re in a platform shift now that happens probably once every decade—if not longer—where you actually can fundamentally change the way something operates. Over the coming years, roles will change. The tools that we use to run certain functions will change. And as a leader of an organization, you have to be asking yourself: Are we sufficiently refactoring? Are we sufficiently reimagining how we can achieve our mission?
