As photorealistic AI-generated images proliferate across social media platforms like X and Facebook, we find ourselves grappling with an unsettling new reality: the "age of deep doubt." This era is marked by heightened skepticism toward digital content, fueled by the ease of creating convincing fakes that challenge our shared understanding of truth.
While questioning the authenticity of media isn't a new phenomenon, the rise of generative AI has sharply intensified this skepticism. People increasingly doubt not only digital content from unfamiliar sources but also the authenticity of real events. This widespread uncertainty gives anyone an opening to claim that genuine occurrences were fabricated with advanced AI tools.
Since the term "deepfake" was coined in 2017, we've witnessed a rapid evolution in AI-generated media. This has led to troubling examples, including conspiracy theories claiming that President Joe Biden has been replaced by an AI hologram, as well as former President Donald Trump's baseless allegations that Vice President Kamala Harris faked crowd sizes using AI. Recently, Trump even invoked AI when confronted with a photograph contradicting his account of writer E. Jean Carroll, whom a jury found him liable for sexually abusing and defaming.
Legal scholars Danielle K. Citron and Robert Chesney anticipated these challenges, coining the term "liar's dividend" in 2019 to describe how the mere existence of deepfakes lets liars escape accountability by dismissing authentic evidence as fake. What was once a theoretical concern has become a concrete reality.
Doubt has long been a tool of political manipulation, and AI has only sharpened it. By sowing uncertainty, those in power can shape public opinion, discredit opponents, and obscure the truth. AI thus serves deceivers twice over: as a means of creating altered images, audio, and video that mimic genuine media, and as a ready excuse for dismissing the real thing.
The deepfake phenomenon first emerged when a Reddit user, posting under the name "deepfakes," shared pornographic videos with celebrities' faces swapped in. As deep-learning techniques continue to advance, the line between reality and fabrication blurs further. The trust we historically placed in media, rooted in the skill and effort required to produce it, is eroding. Our relationship to truth is growing more complicated, putting political discourse, legal systems, and collective memory at risk.
Recently, a panel of federal judges took up the implications of AI-generated deepfakes for legal evidence. During a meeting of the US Judicial Conference's Advisory Committee on Evidence Rules, members raised concerns about authenticating digital evidence amid rapidly evolving AI capabilities. Although no immediate rule changes were adopted, the discussion underscores the urgency of confronting these issues within the judicial system.
As we navigate this landscape of deep doubt, we must cultivate tools and habits for discerning truth from fabrication. Stronger media literacy and critical thinking are essential to blunting the effects of AI-generated misinformation. Society must adapt, recalibrating its collective understanding of authenticity and truth in media. The era of deep doubt demands vigilance as we strive to distinguish the genuine from the synthetic in a world where reality is increasingly easy to manipulate.
Read more at Wired.