TL;DR
- Around the world, legislators are grappling with generative AI’s potential for both innovation and destruction. Is it already too late?
- Russell Wald, director of policy for the Stanford Institute for Human-Centered Artificial Intelligence (HAI), argues for common-sense conversations about what AI means for society and how its problems can be addressed without hyperpolarization.
- Wald calls for increased regulation of AI in the US, and for lawmakers to educate themselves on the basics of this burgeoning field.
With a presidential election looming and fears that more deepfake videos will be unleashed, calls for national regulation of AI are growing. If nothing is done, the internet could soon be awash with synthetic media, and confidence in verifiable truth could fade for good.
This is the most pressing concern for Russell Wald, director of policy for the Stanford Institute for Human-Centered Artificial Intelligence (HAI), who advises the U.S. government and other institutions on how to shape AI regulation.
“The reason I’m worried is if there’s a ubiquitous amount of synthetic media out there, what that’s ultimately going to do is create a moment where no one’s going to have confidence in the veracity of what they see digitally.
“And when you get into that situation, people will choose to believe what they want to believe, whether it’s an inconvenient truth or not. And that is really concerning.”
In an IEEE Spectrum podcast with senior editor Eliza Strickland, Wald said what is needed is a system in which generative AI platforms (and perhaps social media platforms) verify media.
“You’re not going to be able to necessarily stop the creation of a lot of synthetic media, but at a minimum, you can stop the amplification of it, [by putting] on some level of disclosure that signals that it may not be what it purports to be and that you are at least informed about that.”
Regulatory Approaches and Solutions at the Source
Regulators are looking at the issue domestically and in places like China and Europe, the latter arguably the furthest along of any jurisdiction. Even there, though, it could be well over a year before the EU’s AI Act is passed into law.
One suggestion is to impose some sort of watermark on “genuine” media to separate it from fakes, but there are many unanswered questions about who would bear responsibility for such a scheme and who would be liable for the creation and dissemination of fake videos.
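To make the watermarking idea concrete, here is a minimal sketch of how provenance-based verification could work. The function names and the shared-key HMAC scheme are illustrative assumptions, not an existing standard; real content-provenance efforts such as C2PA attach certificate-based public-key signatures to media metadata instead.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: a publisher signs "genuine" media at creation time,
# and a platform verifies the tag before amplifying the content.
PUBLISHER_KEY = secrets.token_bytes(32)  # stand-in for the publisher's signing key

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Produce a provenance tag binding the key holder to this exact content."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str, key: bytes) -> bool:
    """Check the tag; any edit to the media invalidates it."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

video = b"...raw media bytes..."
tag = sign_media(video, PUBLISHER_KEY)

assert verify_media(video, tag, PUBLISHER_KEY)             # untouched: verifiable
assert not verify_media(video + b"x", tag, PUBLISHER_KEY)  # altered: label or withhold
```

The design point is that a platform can cheaply check whether a file is exactly what its purported publisher signed before amplifying it, and anything unverifiable can carry the disclosure label Wald describes.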
Wald thinks AI regulation needs to be stripped right back to the data fed into the models in the first place.
“We need to look at transparency regarding foundation models. There’s just so much data that’s been hoovered up. What’s going into them? What’s the architecture of the compute? Because at least if you are seeing harms come out of the back end, by having a degree of transparency, you’re going to be able to go back to [the initial source data].”
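As a rough illustration of the kind of transparency Wald is asking for, the sketch below imagines a machine-readable record of training inputs and compute, loosely in the spirit of “datasheets for datasets” proposals. All field names and the datasheet structure are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical provenance record: if each training source is logged,
# harms observed at the "back end" can be traced to specific input data.
@dataclass
class TrainingDataRecord:
    source: str          # where the data was collected from
    license: str         # terms under which it may be used
    collected_on: date   # when it was gathered
    notes: str = ""      # known caveats, e.g. demographic skew

@dataclass
class ModelDatasheet:
    model_name: str
    compute_architecture: str  # e.g. accelerator type, count, training duration
    training_data: list[TrainingDataRecord] = field(default_factory=list)

datasheet = ModelDatasheet(
    model_name="example-foundation-model",
    compute_architecture="hypothetical: 1,024 accelerators, 30 days",
)
datasheet.training_data.append(
    TrainingDataRecord("public web crawl", "mixed/unknown", date(2023, 1, 15),
                       notes="over-represents English-language text")
)

# An auditor investigating a downstream harm could filter the records:
suspect = [r for r in datasheet.training_data if "unknown" in r.license]
print(suspect)
```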
He also expresses concern about the inherent bias in current and future AI models, and argues that policy and lawmaking bodies should include a “diverse set of people” to ensure that when these models are released, “there’s a degree of transparency that we can help review and be part of that conversation.”
Companies like Google, OpenAI and Microsoft have recently been vocal about the need for regulation. Wald views this positively but also as an ultimately cynical exercise in corporate risk management.
“They would rather work now to be able to create some of those regulations versus facing reactive regulation later. So it’s an easier pill to swallow if they can try to shape this now at this point. Of course, the devil’s in the details on these things, right?”
Of greater concern is that even if we came up with the optimal regulations tomorrow, “it would be incredibly difficult for government to enforce it.”
In the US, he says, there is next to no investment in the infrastructure needed to track down and catch AI lawbreakers.
“We need more of a national strategy, part of which is ensuring that we have policymakers as informed as possible on this. I spend a lot of time in briefings with policymakers. You can tell the interest is growing, but we need more formalized ways to make sure that they understand all of the nuances here.”
Readying Society for AI
Because the technology is moving so fast, we urgently need a workforce that understands AI and can therefore adapt quickly and make whatever changes may be needed in the future.
“We’ve got to recruit talent,” he says. “And that means we need to really look at STEM immigration. We need to expand programs like the Intergovernmental Personnel Act that can allow people who are in academia or other nonprofit research to go in and out of government and inform government so that they’re more clear on [AI].”
What we are seeing today with generative AI is just the tip of the iceberg. AI is developing so fast that the need for regulation becomes all the more urgent, though Wald insists the discussion must remain balanced.
“Let’s not go to the extreme of, ‘This is going to kill us all.’ Let’s also not allow a level of hype that says, ‘AI will fix this.’ We need to have a neutral view that says there are some unique benefits this technology will offer humanity, but at the same time there are some very serious dangers, so how can we manage that process?”
Policymakers also need to educate themselves, he suggests. Not to the extent of using TensorFlow, of course, but at the very least to get to grips with what the technology can and cannot do.
“We can’t expect policymakers to know everything about AI but, at a minimum, they need to know what it can and cannot do and what its impact on society will be.”