As a reviewer for a couple of documentary grants, the process is a lovely way to learn more about what stories my peers are exploring, what styles and
This past March, we announced the launch of the Nonfiction Core Application 3.0, an update to the shared application template designed to ease the burden on filmmakers while reflecting evolving artistic and ethical practices. In our launch post, co-authored with Keisha Knight, we described the Nonfiction Core Application 3.0 not only as a shared resource, but also as a living document that should evolve with the field. Today, we are continuing that evolution.
When making Deepfaking Sam Altman (2025), documentary director Adam Bhala Lough (Telemarketers) found himself in deep doo doo. Despite months of trying, he still hadn’t secured an interview with Sam Altman, CEO of OpenAI, for a film about AI that Lough had promised. So he took a page from Altman’s own playbook. The resulting film follows Lough on his journey: working with deepfakers in India, meeting with lawyers, and ultimately spending a lot of time chatting and bonding with the resulting AI chatbot, called SamBot. For this edition of The Synthesis, we spoke with Lough about the film, his use of AI, and its implications for documentary.
Last year, national and international press widely reported on what The Globe and Mail described as “the most tumultuous year in the festival’s history,” complete with sweeping personnel changes, social and financial pressures, and the temporary closure of its flagship Ted Rogers Cinema. Though Hot Docs managed to pull through for its 32nd year with a new executive director (Diana Sanchez, formerly of TIFF) and a replenished staff (some of the programmers, including department head Heather Haynes, returned after their prior exodus), the plight of this hamstrung fixture of Toronto’s flailing film scene was dismally clear. Social issues don’t entirely permeate the programming, nor do the chosen films observe such issues in totality, but Hot Docs has always strived to stay in tune with urgent matters of the present, especially through films that align their audience’s point of view with what will one day be the right side of history.
In a recent joint submission to a call for contributions on AI and Creativity at the United Nations Human Rights Council Advisory Committee, WITNESS, the Co-Creation Studio at MIT, and the Archival Producers Alliance (APA) outlined these pressing dangers. Drawing from years of frontline research, workshops, and advocacy with creative communities and human rights defenders around the world, we identified seven core threats AI poses to human creativity.
Sora, a new generative AI video tool from OpenAI, is named after the Japanese word for sky. Is the sky the limit? Last year, the company gave early access to 300 artists, some of whom later denounced the company’s product release as artwashing. OpenAI responded with a series of exclusive promotional screenings of artist-made films for industry executives in New York, Los Angeles, and Tokyo. What might this all mean for the documentary field? We decided to run our own experiment. To test the limits of Sora, we prompted it with the taglines from the six most recent Oscar-winning documentaries. We showed the resulting 15-second silent clips to a panel of seven documentary luminaries over Zoom.
When it comes to AI and documentary, all bets are off in 2025. So, we scrapped our column line-up for The Synthesis and hit reset. To recap, it’s been a dizzying year so far: in Europe, the February AI Action Summit in Paris failed to usher in much meaningful regulation, and in the U.S., under the new Presidential administration, a March directive from the National Institute of Standards and Technology eliminated mention of “AI safety” and “AI fairness.” To reboot in this context, we checked in with a few documentarians, artists, and human rights advocates. We asked them this question: In this unregulated and dysregulated landscape, what are the immediate and new concerns of AI shaping the future of documentary filmmaking in 2025?
About a Hero is set to have its world premiere as the opening film at IDFA tomorrow. It’s based on a script generated by an AI trained on Werner Herzog’s interviews, voiceovers, and writing. The resulting film, full of ironic self-reflection, explores themes of originality, authenticity, common sense, and the human soul in an era shaped by machine-human relationships. The filmmakers employ a variety of AI tools—from scripting to voice synthesis to image experimentation. The film is also intercut with documentary interviews with various artists about AI. In advance of the premiere, we spoke with Piotr Winiewicz, the film’s director, over Zoom and email.
Welcome to The Synthesis, a new monthly column exploring the intersection of Artificial Intelligence and documentary practice. Over the next year, co-authors shirin anlen and Kat Cizek will lay out ten (or so) key takeaways that synthesize the latest intelligence on synthetic media and AI tools—alongside their implications for nonfiction mediamaking. Balancing ethical, labor, and creative concerns, they will engage Documentary readers with interviews, analysis, and case studies. The Synthesis is part of an ongoing collaboration between the Co-Creation Studio at MIT’s Open Doc Lab and WITNESS.
In recent decades, the Locarno Film Festival has established itself as a premiere market for some of the more unusual experiments to come through the