TL;DR
- Adobe CEO Shantanu Narayen talks about how the company is incorporating AI and its work to tackle misinformation, urging creators to either use AI or miss out.
- Narayen acknowledges the responsibility of companies like Adobe to mark the provenance of content generated by AI, but puts the onus on consumers to be more aware of the media they consume.
- He doesn’t believe artists will be overtaken by AI, insisting instead that Adobe will work with AI to build new tools for creators. But then, what else is he going to say?
“If you don’t learn to use AI you’re going to be in trouble,” declared Adobe chair and CEO Shantanu Narayen, who also put the onus on the general public to learn more about AI and to question the veracity of the content served up to them as fact-based news.
He was speaking to Washington Post tech columnist Geoffrey Fowler in an illuminating exchange about how the tech vendor is seeking to balance its commercial aims with tackling misinformation.
There’s also an existential threat to Adobe itself. Won’t generative AI simply erode the market for the vendor’s own content creation tools?
Narayen responds: “I think [AI] is actually going to make people much more productive, and it’s going to bring so many more marketers in small or medium businesses into the fold [to be able to use Adobe’s tools even more easily to create content].”
“AI really is an accelerant. It’s about more affordability. And it’s about more accessibility. And Adobe has always won when we solve problems and allow more people into the field.”
He maintains that GenAI is, on the whole, a good thing, both for creators and for Adobe itself:
“It is going to be disruptive if we don’t embrace it and we don’t use it to enable us to build better products and attract a whole new set of customers. But I’m completely convinced that this will actually be an accelerant to further democratize technology, rather than just a disruption.”
Fowler asks how Adobe can convince the creatives who buy its tools that these tools, including Adobe’s Firefly generative AI, are not in the process of replacing them.
“I’m convinced that the creatives who use AI to take whatever idea they have in their brain are going to replace people who don’t use AI,” Narayen replies.
“If people don’t learn to use it, they’re in trouble. I would tell young creators today that if you want to be in the creative field, why not equip yourself with everything that’s out there that enables you to be a better creative? Why not understand the breadth of what you can do with technology? Why not understand new mediums? Why not understand the different places where people are going to consume your content? A knowledge of what’s out there can only be helpful, rather than ignoring it.”
Keeping the creator community at the center of its brand, Adobe has opted to differentiate itself from other AI developers, like Stability AI or OpenAI, by training Firefly on data that it owns or that creators have given permission to use.
“I think we got it right, in terms of thinking about data and in terms of creating our own foundation models and learning from it,” he says. “But most important [is] creating the interfaces that people have loved. I think we’ve been really appreciated by the creative community for having this differentiated approach.”
The conversation shifts to the dangers of AI, and how much of a threat AI poses to truth. Fowler notes that people have long been able to use Photoshop “to try to lie or trick” people into believing misinformation, so what’s different with GenAI?
Narayen says technology has always had unintended consequences. “It’s an age-old problem, [but where] generative AI is different is the ease with which people can create content. The pace at which it can be created is going to dramatically expand,” he says.
“So it’s incumbent on all of us who are creating tools, and those distributing that content, including the Post, to actually specify how that piece of art was created, to give it a history of what has happened.”
The Washington Post has signed up to the Coalition for Content Provenance and Authenticity (C2PA), of which Adobe is a founding member.
“The challenge, and the opportunity, that we have is that this is not just a technology issue. Adobe and our partners have worked to implement credentials that identify, definitively, who created a piece of content, whether AI was involved and how it was altered along the way. The question is: how do we as a company, an industry and a society train consumers to want to look at that piece of content before determining whether it was real or not real,” Narayen says.
“We’re going to be flooded with more information. So it’s the training of the consumer to want to interrogate a piece of content and then ask: Who created it? When was it created? That is the next step in that journey.”
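To make the idea concrete, here is a rough sketch in Python of the kind of provenance record a content credential carries. This is an illustrative simplification, not Adobe’s implementation or the actual C2PA format (the real standard is a cryptographically signed manifest embedded in the file itself); the `ContentCredential` class, its field names and the `ExampleImageEditor` tool name are all invented for clarity.

```python
# Illustrative sketch only: field names are invented, not the C2PA schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContentCredential:
    """A toy stand-in for the provenance manifest a content credential attaches to a file."""
    creator: str          # who created the piece of content
    tool: str             # which application produced it
    ai_generated: bool    # was AI involved in its creation
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    edits: list = field(default_factory=list)  # how it was altered along the way

    def record_edit(self, description: str) -> None:
        """Append a timestamped entry to the edit history."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.edits.append(f"{stamp}: {description}")

    def summary(self) -> str:
        """Answer the questions Narayen wants consumers to ask: who, when, how."""
        lines = [
            f"Created by:  {self.creator}",
            f"Created at:  {self.created_at}",
            f"Tool:        {self.tool}",
            f"AI involved: {'yes' if self.ai_generated else 'no'}",
        ]
        lines += [f"Edit: {entry}" for entry in self.edits] or ["Edits: none recorded"]
        return "\n".join(lines)


# A consumer (or a newsroom) interrogating an image before trusting it:
cred = ContentCredential(creator="Jane Doe", tool="ExampleImageEditor", ai_generated=True)
cred.record_edit("sky replaced using a text prompt")
print(cred.summary())
```

In the real standard, this record is embedded in the file and signed, so a viewer can check that the history hasn’t been tampered with.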
Fowler pushes back on this, asking how much of the onus should be on the user, or viewer, and how much responsibility publishers or AI vendors should share. He points out that Adobe was selling AI-generated images of the Israel-Gaza war, and that Adobe said the images were released because they were labeled as made by AI. “But is that just proof that the general public is not adequately prepared to identify generative AI images versus originals?” he asks.
“The consumer is not solely responsible for all obligations associated with trying to determine whether it’s AI or not,” Narayen says.
“Certainly, distributors of that content and the creator of the content also [have] a role to play [but] the consumer has a role to play as well, because they’re the ones who are, at the end of the day, consuming the content.”
He emphasizes the need for consumer education and insists that consumers take some, though not all, responsibility for how they interpret the content they view, hear or read.
“The more a company like the Washington Post promotes this notion of content credentials, [the] education process will increase.”
Narayen also defends Adobe by saying it is not a source for news. “Adobe only offers creative content, we do not offer editorial content. And what people were doing was trying to pass off what was editorial content or actual events as creative. So we have to work and moderate [or] remove that content.”
Fowler counters that content credentials are welcome to those who already view them as a good idea, but they still leave open the misuse of AI in content generation by bad actors. What can be done about them?
Narayen doesn’t really have an answer other than widening the education of the public. “The good guys are going to want to put content credentials in to identify their sources or identify what’s authentic. I think if we can continue to train consumers to be aware in terms of content [provenance], that’s one step in terms of the evolution of how we can educate people.”
He is optimistic about winning the battle. “We will get through this in a responsible way, and it will both make people more productive and make them more creative. We will respect IP, perhaps in a different way than was done when it was just a picture, but it will happen, I’m confident of that. Companies and governments will work together to have the right thing happen.”