It is a Monday afternoon in August, and I am on the internet watching a former cable-news anchor interview a dead teenager on Substack. This dead teenager—Joaquin Oliver, killed in the mass shooting at Marjory Stoneman Douglas High School, in Parkland, Florida—has been reanimated by generative AI, his voice and dialogue modeled on snippets of his writing and home-video footage. The animations are stiff, the model's speaking cadence is too fast, and in two instances, when it is trying to convey excitement, its pitch rises rapidly, producing a digital shriek. How many people, I wonder, had to agree that this was a good idea to get us to this moment? I feel like I'm losing my mind watching it.
The interview is part of a broader phenomenon accompanying the rapid advancement of generative AI. As the technology grows more sophisticated, so does our willingness to suspend disbelief, to accept digital re-creations as meaningful substitutes for human connection and authentic experience.
This is not merely about the technology itself but about our collective response to it, the way we've begun to treat AI-generated content not as a tool or a curiosity but as a legitimate form of human expression. We are witnessing what might be called a mass-delusion event, in which the boundary between the authentic and the artificial has blurred so thoroughly that we've stopped asking whether it should be crossed at all.
The implications extend far beyond entertainment, or even journalism. When we normalize the digital resurrection of the dead, when we accept AI-generated voices speaking for those who can no longer speak for themselves, we fundamentally alter our relationship to memory, grief, and truth itself.