Q&A

Should We Trust AI? A Penn Expert Weighs In

As AI has become a part of our everyday lives, so has existential dread about its consequences. We reached out to Penn professor Chris Callison-Burch to quell our fears.



Penn’s Chris Callison-Burch, an expert on AI and large language models who has been working and researching in the field since the early 2000s. / Photograph by Katherine Veri, Penn Engineering Online

As AI has become a part of our everyday lives, so has existential dread about where this whole “intelligent machines” thing will lead us. Rather than continue to fret into the wee hours of the morning, we reached out to a Penn professor to quell our fears.

Chris Callison-Burch is an expert on artificial intelligence and large language models who has been working and researching in the field since the early 2000s. Within the past year, he has testified before Congress on the emergence of ChatGPT and helped launch the Ivy League’s first bachelor of science program in AI.

This interview has been lightly edited for clarity and brevity.

Let’s dive right in. Are we nearing a robot takeover?

I personally don’t think there’s any sci-fi existential crisis. A few months back, there was a letter written by some thoughtful academics saying that we should treat the risks posed by artificial intelligence as an extinction-level threat, on par with pandemics and climate change. I’m like, no way. There was this parlor game in Silicon Valley asking people, “What is your P(Doom)?” It stands for the probability you assign to an apocalypse due to AI. My P(Doom) is vanishingly small.

So fears about ChatGPT are overblown?

As the technology keeps advancing, I think we’re going to see both benefits and harms from it. On the positive side are tools for writing, for answering your emails more effectively, for summarizing large documents you’re searching through, and AI-powered tools that help physicians with their patient interactions. In the negative column are things like deepfakes and the potential for spreading misinformation, like the synthesized Biden voice discouraging people from voting in one of the primaries.

After that “Biden” robocall, the FCC banned voice impersonations of that sort. But it spoke to a broader anxiety about not knowing whether we’re engaging with AI or a human these days, right?

Yes. Some of my PhD students did the largest-ever study of human detectability of AI-generated text. The two major takeaways from the paper were, first, that as the language models grow larger, humans’ ability to distinguish human-written text from machine-generated text goes down. Second, we found that humans can be trained to spot the difference.

That sounds hopeful. Does it suggest that maybe AI isn’t all that intelligent yet?

In the history of artificial intelligence, we’ve had different ways of trying to measure whether or not machines are intelligent. One of the first was proposed by Alan Turing and is known as the Turing Test, which basically says that if a human judge converses over text with multiple participants and can’t tell which one is the human and which is the machine, then the machine has achieved intelligence. I am confident that we’ve moved past the Turing Test.

Yikes!

The goalposts keep moving. Building an AI system that could beat a human grandmaster at chess was a long-standing goal, and it was achieved when I was in college. But we often conflate beating a human at one task with meeting a necessary precondition for intelligence. It doesn’t mean that AI is good at all things even if we can design it to be super-specialized in one thing. So maybe we can soon build robots that are perfectly good at using language, which to me is an incredible surprise. It’s been my area of research for the past 20 years, and it boggles my mind that we’re where we are now. That was unimaginable five years ago — three years ago, even. But it doesn’t mean that AI systems possess all the attributes of human intelligence.

What could be a new Turing Test?

The next frontier with AI might be things to do with reasoning about the world, with planning, with decision-making. One of my friends who won a MacArthur “Genius” award is working on moral reasoning with AI. That is a cool modern version of the Turing Test: Could machines perform moral reasoning? One of the fundamental questions you can think about is whether you want to design AI systems that think and act humanly or ones that think and act rationally. Because the two don’t always go together.

What are your biggest fears with the recent explosion in generative AI?

I do think there are things that could cause massive societal disruption. My fears are more around rapid displacement of work. If you can automate many jobs with [large language models] and other generative AI models, two things can happen. One, we can be twice as productive and get a lot more done; or two, companies recognize that they only need half as many staff. That’s my personal version of a doomsday scenario. It’s not an apocalyptic event, but it could do real harm.

Is there a role for government regulation?

I think it’s incredibly important, and I feel that lawmakers are recognizing this as a moment for potential regulation, in a way [that shows] they feel they may have failed to act quickly enough to regulate social media applications and companies like Uber and Lyft. There seems to be bipartisan consensus that there needs to be national legislation, but it’s a really difficult thing to implement, because it’s such a rapidly changing technology.

Can you give an example of regulation that could help?

It could be legislation around harm prevention and making sure companies adopt best practices when they release technologies to the world. There are cautionary tales of systems that go off the rails because negative societal biases get coded into them. For example, Microsoft’s chatbot Tay behaved in a racist manner right after it was released. [Editor’s note: It was quickly shut down.] So how do you ensure that a system isn’t harmful or embarrassing and doesn’t promote negative stereotypes, while also ensuring that you don’t stymie growth or companies’ competitive advantage?

If you had to project 20 years down the road … will the world be fighting wars with autonomous weapons? Are we all going to be unemployed slugs? Where will the world be with AI?

I’m at a loss, because I don’t yet know what growth trajectory we’re on — whether it’s linear or exponential. Our future will depend on that pace of change. At the moment, I can hardly forecast 20 months into the future, let alone 20 years.


Published as “Should We Trust AI?” in the June 2024 issue of Philadelphia magazine.