Robert Brunner is the associate dean for innovation and chief disruption officer at the Gies College of Business at the University of Illinois Urbana-Champaign. Brunner spoke with News Bureau business and law editor Phil Ciciora about the Biden administration’s executive order on artificial intelligence systems.
What impact will an executive order have on artificial intelligence development in the U.S.?
You could argue that something like an executive order is perhaps not the best solution to the problem, but it’s certainly a way to do something quickly to draw more attention to the issue. So in that respect it’s probably a net positive.
The problem with executive orders is that they’re not permanent. If a new president is inaugurated in January 2025, then this executive order and all of its policy implications could be overturned. The bigger question is whether Congress will ever get its act together and actually pass some meaningful legislation regulating AI. That will ultimately be much more meaningful and impactful than any executive order.
But in terms of policy, I think it’s a decent first step. There are some really good things in the executive order, especially the idea of preventing AI from hurting marginalized communities. A good example: when prisoners come up for parole, the judicial system often uses software to help estimate the likelihood that they might commit another crime in the future. It turns out that one of the prominent recidivism algorithms was biased against Black defendants. That’s just one example of AI potentially causing harm that you’d want to avoid.
I also like the idea of digitally watermarking content that’s created by AI. I don’t know how well that’s going to work in practice, but I like the idea that we’re at least going to try it because we need to protect people against deepfakes, especially heading into a presidential election year. That kind of misinformation is going to be a real issue, unfortunately.
At the same time, you have to wonder if an executive order will matter all that much because AI is such a global phenomenon. If the U.S. puts too many regulations on AI, then all of that development is inevitably going to flow somewhere else. Can the U.S. regulate a foreign company that releases something as open source? No, it can’t. President Biden had to do something. Is an executive order the right thing? We’ll have to see how it’s actually implemented and what other countries do to regulate AI.
Do we need an AI watchdog at the federal level?
That’s a good idea in theory, but I’m not sure how it would work in practice.
What powers would an AI watchdog have? Well, this is where you start to get knee-deep into the politics of it. For example, Elon Musk has announced that the X platform, formerly known as Twitter, has its own generative AI system called “Grok” – a term coined by Robert Heinlein in “Stranger in a Strange Land,” though the chatbot is said to be modeled on Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy.” Grok is trained on Twitter data, and it’s meant to be more responsive and have fewer guardrails than other generative AI systems. It also tries to be funny and sarcastic.
But if you have an AI watchdog under a Democratic administration, would it try to regulate more conservative-oriented speech generated by an AI? If so, you’ve just opened up a whole host of free speech issues. It’s not clear that any challenges from a watchdog would survive a court review.
So, in short, it’s a very slippery slope.
I think part of the problem is generative AI itself. For a lot of people, generative AI just kind of burst onto the scene and into the public consciousness. It was so novel, so easy to use – but the more you use it, the more you realize it’s not as magical or creative as we initially thought. It’s not the superintelligence that many fear is imminent.
That’s why I think a lot of researchers have been pushing back on the regulations and warning that government is potentially overstepping here and causing more harm than good.
The Biden administration seems to be taking an “all-of-the-above” approach in its AI policy. Is that an attempt to shape AI while it’s still in its infancy?
There are a lot of very vocal people who look at our experience with social media and think the government didn’t do enough back in the early 2000s to place adequate guardrails around it. You’re seeing some of that even today with the lawsuit against Meta that claims it lured children to Facebook and Instagram.
With the AI executive order, I think in some respects it is best seen as a trial balloon. Rather than just doing one or two basic things, the Biden administration is trying a number of different things to discern which of them will work. Some of them may not work, but at least they’re getting an idea about potential safeguards and doing it in a way that hopefully doesn’t curtail innovation or hurt us economically.
I’m hoping the executive order provides some clarity at the federal level about how the different agencies should work together on this topic and that eventually we can have something that’s clearer and more comprehensive.