
I Hosted a Podcast on Artificial Intelligence. Then My AI Doppelgänger Showed Up

One writer’s wild journey into the uncanny valley



Could an AI imposter replace us?

Sometime in the winter of 2021, I went to check my long-neglected LinkedIn but couldn’t find my password. Rather than go through the rigamarole of resetting it, I just Googled myself, knowing I could still view profile details without a proper login. And that’s when I found him: Malcolm V. Burnley, a fellow writer living in Philadelphia. Let’s call him “V” for simplicity’s sake.

V’s sparse LinkedIn said he was a 2003 graduate of Germantown High School (I graduated from a high school in Connecticut), with no real résumé other than a bunch of endorsements from a user named “Crypto Jesus,” a fan of V’s prowess in online journalism and marketing. A reverse image search revealed that V’s headshot, of a bearded young man with bleach-blond hair, was a royalty-free stock photo. The internet is a weird place. This, however, felt oddly sinister.

I had just finished producing a podcast with WHYY and Princeton University about artificial intelligence called A.I. Nation, which, to my surprise, drew a sizable audience. I say “surprise” because I’m not a tech reporter. I’m actually more of a technophobe. After a season spent reporting on AI, though, the notion that I could have an internet doppelgänger out there, unbeknownst to me, wasn’t all that surprising. But the who and especially the why of it all were baffling.

Then I noticed that V’s profile pushed viewers to a website, malcolmburnley.org — “a blog about life in the Philadelphia area: What We Think, We Become” — where V had published a series of articles. One, titled “Philadelphia City Hall,” was mostly lifted from the Wikipedia page for the building, except the copy was pockmarked with snarky quips about me: “Built of bricks, marble, granite, steel and iron, it is the tallest masonry in the world (taller than Malcolm Burnley), and one of the largest overall.”

In the first episode of the podcast, I had gotten to play around with a pre-public version of ChatGPT and had an expert teach me some of the telltale signs of AI-generated text. The stories on this website showed those hallmarks. You can get a feel for the language in a post titled “Philadelphia Cream Cheese Sandwiches,” which is my personal favorite of the bunch. It contains some oddly specific non sequiturs:

Further cream cheese recipes can be found in cheese and chocolate sandwiches and vegetable wraps.

If Malcolm Burnley follows a low-carb diet, skip the bread and use low-carb tortilla bread for a vegetable pack.

Was somebody angry with the podcast and pulling a prank? Was it possible that ChatGPT could have built this website on its own? Most troubling of all: Human or computer, how did they know I love cream cheese?

If this was a prank, it wasn’t a very good one. For the next three years, I monitored my imposter, waiting for more articles or LinkedIn activity. But V just sat there, idle, until I looked into him some more this year. One article referenced a colleague in journalism, a fellow podcaster. That led me to another imposter site full of stock photography, bizarre articles, and duplicate web design — credited to him. What in the dark web was going on?

“I don’t even know what I’m looking at,” he told me in March when I showed him the websites. “That’s very bizarre. Some weird aggregator AI thing.”

After I sent V a message through the contact form, both imposter websites went dark. I still don’t know who made them, and perhaps I never will. (I’m still investigating.)

Still, it was an unsettling reminder of AI’s ability to augment some of the worst instincts of humanity. Though these websites were clumsy and unsophisticated, uses of AI these days are anything but. Early this year, New Hampshire voters were spammed with robocalls featuring an AI-generated voice of President Biden that told them not to vote in a primary election. Facial recognition has been used to falsely imprison people. Sheriff Rochelle Bilal recently got caught with fake news headlines on her campaign website, attributed to a misguided experiment with AI. And if those don’t scare you, go look up “autonomous weapons.”

For all the ugly applications of AI, my reporting during the podcast and afterward has shown me there’s at least as much good. The past few years have proven AI isn’t a fad, but rather an indispensable cog in so many systems we rely on. Local doctors are discovering novel drug treatments using AI. SEPTA is spotting illegally parked cars to boost the reliability of its bus fleet. Robots are roaming the aisles of grocery stores and solving inventory issues.

But the emergence of AI has also brought anxieties about trade-offs. It’s rapidly displacing jobs. ChatGPT is upending education. AI systems are — controversially — enabling political echo chambers.

It’s no longer a question of whether we embrace AI as a city and as a global society, but rather how humans can use it responsibly.

As my imposter got me to briefly consider: Can AI actually replace us?

In 1966, the Massachusetts Institute of Technology created the Summer Vision Project, led by pioneering professors in the field of AI. The project centered on a months-long challenge posed to undergrads: Build a computer with vision on par with a human’s, one that could analyze a crowded visual scene and tell a banana from a baby, a stoplight from a stop sign.

“Of course, it actually took decades rather than a summer,” says Chris Callison-Burch, a computer science professor at Penn. “The field got discouraged by [general artificial intelligence] taking longer, or it being much more complicated than the initial enthusiasm had led them to believe.”

Efforts like the Summer Vision Project aimed to create machines that could replicate the general intelligence of humanity, measured by their ability to reason about the world, make complex decisions, or employ perceptual skills. Theorists like Marvin Minsky, who helped launch Summer Vision, believed a breakthrough was imminent; he told Life magazine in 1970 that in “from three to eight years, we will have a machine with the general intelligence of an average human being.”

What emerged from these early letdowns was a realization that AI was perhaps poorly defined. If we understand so little about how the human brain works, how can we really create computers that think like us? Computer scientists began to refocus their goals and rebrand what they were doing. “We sort of went through this period of avoiding the term ‘artificial intelligence,’” says Callison-Burch.

In the post-hype ’80s, ’90s and early 2000s, subfields of AI gained steam — machine learning, deep learning, natural-language processing — and led to breakthroughs that didn’t always register in the public consciousness as AI. Along came rapid advancement in computer processing that gave rise to the “neural networks” that form the backbone of technologies like ChatGPT, driverless cars, and so many other recent applications. It turned out that some of the long-dismissed ideas of Minsky and others were simply waiting for more powerful computers.

“Those guys from the ’80s weren’t all kooks,” says Callison-Burch. “It’s only recently that we’ve sort of come back around to the inkling that maybe the goals of this artificial general intelligence might be achievable.”

The term’s re-emergence in the popular lexicon has led to a lot of confusion about what, exactly, we’re talking about when we talk about AI. Netflix recommending shows to you? That’s AI. Alexa and Siri? They’re AI, too. But so are deepfakes, autonomous drones, and Russian chatbots spreading disinformation.

“AI is complex math. Math is powerful, but it does not feel. It is not alive and never will be,” says Nyron Burke, the co-founder and CEO of Lithero, a University City company that uses AI to fact-check marketing materials. “AI is a tool — like electricity or the internet — that can and will be used for both beneficial and harmful purposes.”

The truth is that AI has become a catch-all term for both lowly algorithms and existential threats.

What is intelligence, after all? Alan Turing proposed one test, positing that artificial intelligence exists when humans can’t tell whether they’re interacting with other humans or machines in a back-and-forth conversation. We’ve suddenly leaped past that with generative AI like ChatGPT. But there’s a big gap between a computer’s ability to act human and its achieving consciousness, like in The Matrix. Most AI involves pattern recognition, with computers trained on the historical data of past human behavior and the physical world — say, videos of how cars should properly operate on streetscapes — and then trying to achieve specific outcomes (like not hitting pedestrians). When these systems color outside the lines, like swerving out of the path of some pigeons and into a pedestrian, it may seem as if they’re developing minds of their own. But in reality, such mistakes are the product of design limitations.

Once you take a step back and view AI less as a creature and more as a tool for human augmentation, it’s a lot harder to form moralistic judgments about AI being “good” or “bad.”

ChatGPT can be used to write a sonnet. It can also be used to impersonate a journalist. But are we surrendering too much control to machines? Will they eventually take us over?

Doomsday scenarios frequently revolve around the idea of AI surpassing our own intelligence, with its ability to hoover up more and more data, like a student perpetually cramming for exams who manages perfect recall. It’s led to predictions like Elon Musk telling the New York Times last year that he expects AI will be able to write a best-selling novel on par with J.K. Rowling’s in “less than three years.” If you listen to some of Silicon Valley’s titans, a Blade Runner-like future, with robots broadly displacing humans, feels scarily near.

However, the history of AI has been full of overpromises and fallow eras. ChatGPT has already inhaled close to all the text on the internet. Some experts believe that it could begin to stall or even devolve when “synthetic data” — text written by AI — is increasingly relied on for training these systems.

Ironically, amid the fears about AI supplanting us, it’s teaching us more about what makes us human. Through neural networks — which are loosely modeled on the architecture of the brain — we are deciphering more about human intelligence, how it works, and how we can learn better. Then there are the numerous discoveries made possible by AI in the fields of biology and physics, like its ability to rapidly decode proteins and genetics within the body. Previously, a Nobel laureate could spend an entire career mapping the shape of a single protein. Now, AI can do it in a matter of minutes. To put it another way, AI is recognizing patterns in the human body that were previously imperceptible to us.

We should worry about job displacement for cashiers, accountants, truck drivers, writers and more. It’s already occurring, albeit slowly, but with smart policy (and perhaps restitution), some of the effects can be mitigated. We should resolve the many copyright issues playing out in the courts right now. But we also have the ability to bake more transparency and equity into these systems, creating opportunities for AI to contribute to humanity, and to Philadelphia.

The good news is that smart people are working to get this right. Penn students participating in the Ivy League’s first undergraduate major in AI will be designing policy recommendations. Governor Josh Shapiro has partnered with tech leader OpenAI to launch a first-in-the-nation pilot for state government. Local artists and entrepreneurs are pushing the boundaries of AI content creation. The list goes on.

By mythologizing AI as something more than it is, we risk ignoring the inherent place that humanity has in its design and implementation, both good and bad. In a New Yorker article titled “There Is No A.I.,” Jaron Lanier argued that we should drop the name altogether. “We can work better under the assumption that there is no such thing as A.I.,” Lanier wrote. “The sooner we understand this, the sooner we’ll start managing our new technology intelligently.”


Published in the June 2024 issue of Philadelphia magazine.