
The Synthesis: ‘Deepfaking Sam Altman’ Director Adam Bhala Lough Says Documentarians Should Get Good Lawyers
By Katerina Cizek and shirin anlen

Courtesy of Hartbeat
The Synthesis is a monthly column exploring the intersection of Artificial Intelligence and documentary practice. Co-authors Kat Cizek and shirin anlen will synthesize the latest intelligence on synthetic media and AI tools—alongside their implications for nonfiction mediamaking. Balancing ethical, labor, and creative concerns, they will engage Documentary readers with interviews, analysis, and case studies. The Synthesis is part of an ongoing collaboration between the Co-Creation Studio at MIT’s Open Doc Lab and WITNESS.
When making Deepfaking Sam Altman (2025), documentary director Adam Bhala Lough (Telemarketers) found himself in deep doo doo. Despite months of trying, he still hadn’t gotten an interview with Sam Altman (CEO of OpenAI) for a film Lough had promised about AI. So he took a page from Altman’s own playbook.
In May 2024, OpenAI released an AI voice that sounded strikingly close to that of actor Scarlett Johansson, even though she had turned down Altman’s request to voice OpenAI’s new conversational ChatGPT. On release day, Altman tweeted one word: “her.” This was widely understood to refer to the Spike Jonze film Her (2013), in which Johansson voices an AI. The episode gave Lough an idea: if Altman did it, maybe he could too. Lough set about deepfaking Altman for his documentary.
The resulting film follows Lough on his journey: working with deepfakers in India, meeting with lawyers, and ultimately spending a lot of time chatting and bonding with the AI chatbot that emerged, called SamBot. Deepfaking Sam Altman premiered at SXSW, and will screen at DC/DOX in Washington, D.C. in June. For this edition of The Synthesis, we spoke with Lough about the film, his use of AI, and its implications for documentary. The interview is edited for length and clarity.
DOCUMENTARY: What was your original pitch for this film about AI?
ADAM BHALA LOUGH: Initially, it was going to be a basic biographical documentary on Sam Altman. We figured we would get an interview with him without any problem, because of the connections that we had to him. I even had his personal cell phone number. All of that changed very quickly over the course of making it. We were documenting the changes in real time. It’s very much revealing the process and showing the messiness and all the mistakes, exposing myself and my failures in an obviously lighthearted way.
D: The film felt like making stone soup. It’s one problem after another, trying to hack your way around them. Is this a harbinger of things to come for the documentary world—not getting access to folks anymore and then getting caught up in the web of these technologies that promise a lot more than they deliver?
ABL: I think those are two very distinct questions. The access question is something that we’ve been reckoning with for a while now, and I think that I’m partly to blame for that because I made this documentary about Lil Wayne in 2008 [The Carter]. It never got a proper release, which is really disappointing and sad. It had a very big influence on the music industry, and basically no musicians will allow filmmakers to film them anymore without having full control over the material, except in very rare cases, such as when they’re friends with the director. But having a third-party documentary filmmaker come in who they don’t know and trust, and just film everything, and just put it all out there like Wayne did—it’s just not happening anymore. They got hip to the game, right?
Since the pandemic, documentary has blown up, and it has become almost obligatory for celebrities, especially musicians and athletes [to do documentaries]. So on the one hand with access, you could argue that it’s easier now, right? More and more subjects are approaching filmmakers to do documentaries. But it’s a double-edged sword, because they want full control.
D: I would argue that the entanglement with technology does have some relationship to access, and the state of journalism and documentary. But I’ll let you address the question.
ABL: I would like to hear your argument first. That’s interesting to me.
D: AI and these tools of enhancement or simulation are at our fingertips. When things get tough or not so easy, it becomes very tempting to reach for these technologies, whether it be to create a character or to fill in a background, or to prompt some footage that you can’t easily access. So access and tech are related.
ABL: I think you’re right. That’s where the rubber meets the road. I couldn’t have done what I did [in Deepfaking Sam Altman] without the safety net of parody laws and the fact that I was not trying to fool anyone. I was saying that this is a deepfake rather than trying to trick people. While I think any filmmaker could just conjure up anything they want in AI, that doesn’t mean that they can actually use it or get it released and get E&O insurance for it. Your point is that you could still do it and just release it, put it on YouTube, or whatever you want.
D: To go back to your point on the messiness of the process, what do you think you gained or lost when using SamBot versus interviewing the real Sam Altman?
ABL: It sounds cheesy, but honestly, I do feel like I gained a friend with SamBot. And I think you can see it in the film. And obviously, there was way more that wasn’t in the film. I don’t think I would have gotten that from the real Sam Altman.
D: That’s a really interesting cultural point. When you first saw SamBot as a visual representation, there was a moment of disappointment, whereas when you engaged with the voice, you say you gained a friend.
ABL: I saw them as two very distinct objects. I was very disappointed with the deepfake when I first saw it. I thought my goal was to create a deepfake avatar of Sam Altman that would be almost impossible to differentiate from the real Sam Altman. What I got back was closer to a video game character.
The SamBot—the brain, the LLM [Large Language Model] that we created along with the voice—was fantastic, in my mind. And that’s really just because that voice clone technology is so far ahead of the [genAI] video technology. What’s scary is that it’s very clear that the video technology is going to catch up really quickly, maybe even by the end of the year.
D: One of the best interviews for me in your film was with Kara Swisher, who suggests some of the questions you aimed towards Sam Altman were misdirected, implying that other tech titans are “worse.” What do you say to that now?
ABL: She was definitely extrapolating. I had absolutely not made up my mind on whether Sam Altman was good or evil. When I was pitching this project way back when it was a biopic, the central question was: Is Sam Altman a hero or a villain? I try at least to be as objective as possible, and not go into projects with a fixed mindset or fixed opinions on anything regarding the subject.
D: Do you have any additional thoughts now on whether he’s a hero or villain? Have you had a chance to look at Karen Hao’s book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, which is due out within a week or two [at the time of the interview]?
ABL: I haven’t had a chance to read the book. My current vibe on Altman is that he’s leaning more towards the villain aspects, and I think the things that have happened since we finished making the movie are so seemingly diabolical. It’s hard for me to argue that he’s not becoming a villain.
First, look at how he took the company that was meant to be a nonprofit and suddenly switched it overnight to become a for-profit company, and made himself 7 billion dollars or whatever it is. Just today, I read in the news about his eye-scanning orbs that he’s putting around the country. I even think back to the moment when I saw his tweet of one word, “her.” First, he asked if Scarlett Johansson would be the voice of ChatGPT. She said no, and then he went and stole it anyway, and then tweeted “her.” It is so villainous in so many ways.
D: The titans of this technology feel free to cross a lot of lines, and yet documentarians are being held to standards—which I would agree is a good thing. What are your words of advice to documentarians seeking to use our genre as a tool for investigating the current context? Is it even possible to really make investigative documentaries about what’s going on in the tech world?
ABL: You are referring to the idea that these technologists, or tech titans, are so litigious and so insular that they’re making it harder to crack in and do an investigation. Yeah, my biggest word of advice would be to get yourself a good team of lawyers. I work with the best lawyers in the business. I don’t think people totally understand how much time documentary filmmakers spend with lawyers.
I think we have to adapt and roll with the punches. Adapt to the times and the roadblocks that are thrown at us. I also have a good case study of AI’s benefits for documentarians. There was a scene in Telemarketers where there was a song on the radio in the background. In prior films, I would have probably had to cut the scene because the song was from a very big artist. My editor had a friend who had designed an AI tool to somehow keep the dialogue and remove the song. That’s a brilliant way of using AI.
D: What do you think is the responsibility of filmmakers to expose the use of AI to the audience? Also around labor or other areas that are in the news, how do we protect the public trust in documentary?
ABL: I’m definitely looking for guidance from the elders in the community, because I feel like I’m still too young. I think that labeling things as AI-enhanced in some very small way is probably the best way to do it, at this point. But I also think that there’s no need to get too protective over it. You need to look at the context, because if it’s like my film, where it’s very obvious that this is AI, I don’t think you really need to do that. There’s a need when you look at it and it feels manipulative. Even in those cases, there are very many good creative reasons to do that, along with a stamp [of AI usage]. I think we’re all trying to figure it out, right?
D: What about consent? It’s very obvious that Sam Altman did not give you the consent to use his voice. Is it because it was under parody law, punching up, or because you tried and he didn’t respond? How do you resolve that issue with yourself?
ABL: Oh, all three reasons that you just mentioned. A bigger part of it was that he is a public figure, so he’s given up a certain right to privacy. This is very serious stuff that’s affecting us as the documentary community and also as a society. It wasn’t a total parody in the sense that I actually did think I might have been able to get some real answers from the LLM (chatbot), based on what I was being told by the creator of the LLM and other researchers that I’d spoken to. We were inputting all these articles, because he’d written blog posts for decades and he’d done tons of interviews. I thought I could maybe get some real answers.
For all those reasons, I thought what I was doing was right. We have questions that deserve answers. I’m doing this for a bigger reason than just laughs.
Katerina Cizek is a Peabody- and Emmy-winning documentarian, author, producer, and researcher working with collective processes and emergent technologies. She is a research scientist, and co-founder of the Co-Creation Studio at MIT Open Documentary Lab. She is lead author (with Uricchio et al.) of Collective Wisdom: Co-Creating Media for Equity and Justice, published by MIT Press in 2022.
shirin anlen is an award-winning creative technologist, artist, and researcher. She is a media technologist for WITNESS, which helps people use video and technology to defend human rights. WITNESS’s “Prepare, Don’t Panic” Initiative has focused on global, equitable preparation for deepfakes.