Great documentaries reveal history’s truth. Unregulated AI threatens to distort it

A furious political leader shouting a message of hate to an adoring audience. A child crying over the massacre of her family. Emaciated men in prison uniforms, starved to the edge of death because of their identities. As you read each sentence, specific imagery likely appears in your mind, seared in your memory and our collective consciousness through documentaries and textbooks, news media and museum visits.
We understand the significance of important historical images like these — images that we must learn from in order to move forward — in large part because they captured something true about the world when we weren’t around to see it with our own eyes.
As archival producers for documentary films and co-directors of the Archival Producers Alliance, we are deeply concerned about what could happen when we can no longer trust that such images reflect reality. And we’re not the only ones: In advance of this year’s Oscars, Variety reported that the Motion Picture Academy is considering requiring contenders to disclose the use of generative AI.
While such disclosure may be important for feature films, it is clearly crucial for documentaries. In the spring of 2023, we began to see synthetic images and audio used in the historical documentaries we were working on. With no standards in place for transparency, we fear this commingling of real and unreal could compromise the nonfiction genre and the indispensable role it plays in our shared history.
In February 2024, OpenAI previewed its new text-to-video platform, Sora, with a clip called “Historical footage of California during the Gold Rush.” The video was convincing: A flowing stream filled with the promise of riches. A blue sky and rolling hills. A thriving town. Men on horseback. It looked like a western where the good guy wins and rides off into the sunset. It looked authentic, but it was fake.
OpenAI presented “Historical footage of California during the Gold Rush” to demonstrate how Sora, officially released in December 2024, creates videos based on user prompts using AI that “understands and simulates reality.” But that clip is not reality. It is a haphazard blend of imagery both real and imagined by Hollywood, overlaid with the historical biases of the industry and of the archives themselves. Sora, like other generative AI programs such as Runway and Luma Dream Machine, scrapes content from the internet and other digital material. As a result, these platforms simply recycle the limitations of online media, and no doubt amplify its biases. Yet watching the clip, we understand how an audience might be fooled. Cinema is powerful that way.
Some in the film world have met the arrival of generative AI tools with open arms. We and others see something deeply troubling on the horizon. If our faith in the veracity of visuals is shattered, powerful and important films could lose their claim on the truth, even if they don’t use AI-generated material.
Transparency, something akin to the labeling that tells consumers what goes into the food they eat, could be a small step forward. But no regulation mandating AI disclosure appears to be riding over the next hill to rescue us.
Generative AI companies promise a world where anyone can create audio-visual material. This is deeply concerning when it’s applied to representations of history. The proliferation of synthetic images makes the job of documentarians and researchers — safeguarding the integrity of primary source material, digging through archives, presenting history accurately — even more urgent. It’s human work that cannot be replicated or replaced. One only needs to look to this year’s Oscar-nominated documentary “Sugarcane” to see the power of careful research, accurate archival imagery and well-reported personal narrative to expose hidden histories, in this case about the abuse of First Nations children in Canadian residential schools.
The speed with which new AI models are being released and new content is being produced makes the technology impossible to ignore. While it can be fun to use these tools to imagine and test, what results is not a true work of documentation — of humans bearing witness. It’s only a remix.
In response, we need robust AI media literacy for our industry and the general public. At the Archival Producers Alliance, we’ve published a set of guidelines — endorsed by more than 50 industry organizations — for the responsible use of generative AI in documentary film, practices that our colleagues are beginning to integrate into their work. We’ve also put out a call for case studies of AI use in documentary film. Our aim is to help the film industry ensure that documentaries will deserve that title and that the collective memory they inform will be protected.
We are not living in a classic western; no one is coming to save us from the threat of unregulated generative AI. We must work individually and together to preserve the integrity and diverse perspectives of our real history. Accurate visual records not only document what happened in the past, they help us understand it, learn its details and — maybe most importantly in this historical moment — believe it.
When we can no longer accurately witness the highs and lows of what came before, the future we share may turn out to be little more than a haphazard remix, too.
Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli are co-directors of the Archival Producers Alliance.