Eighteen months after an AI researcher watched Otter.ai email him the unvarnished post-mortem of a VC meeting that was supposed to have ended, the resulting fallout is playing out in Courtroom 7 of the San Jose federal courthouse.
A Transcript That Kept Going
The researcher was Alex Bilzerian, a machine-learning engineer. The meeting was a Zoom call with a venture capital firm in September 2024. It was unremarkable while it lasted. What happened afterwards was not.
A few minutes after the call wrapped, an email from Otter landed in Bilzerian’s inbox with a transcript of the meeting attached. The transcript did not stop when the meeting did: it ran on for several more hours after the call formally ended, capturing the investors discussing what Bilzerian later described to The Washington Post as “strategic failures and cooked metrics.” The bot had kept listening. The email had gone to everyone on the invite, including him.
The Viral Moment That Set the Stage
Bilzerian posted the story on X on 26 September 2024. It reached more than five million views. The VCs, he later said, apologised profusely. The deal did not happen.
A VC firm I had a Zoom meeting with used Otter AI to record the call, and after the meeting, it automatically emailed me the transcript, including hours of their private conversations afterward, where they discussed intimate, confidential details about their business.
— Alex Bilzerian (@alexbilz) September 26, 2024
Otter’s response at the time was measured.
“We at @otter_ai take user privacy seriously. Users have full control over conversation sharing permissions.”
— Otter.ai, official response on X, September 2024
One reading of that reply is that the problem was not Otter’s. The account-holder had configured things a certain way; the bot had done as it was told. It is broadly the argument Otter is making now, in more formal language, in the federal courthouse in San Jose.
From One Leaked Transcript to Four Lawsuits
In re Otter.AI Privacy Litigation consolidates four class-action suits filed between August and September 2025. The plaintiffs are not famous. Justin Brewer is from San Jacinto. Jasper Walker and Michael Walker are from Illinois. Chaka Theus and Nadine Winston signed on shortly after.
None of them was an Otter customer. All of them were, allegedly, recorded by Otter’s bot without knowing it. The suits invoke the federal Electronic Communications Privacy Act, California’s Invasion of Privacy Act and Illinois’s Biometric Information Privacy Act. Statutory damages, if plaintiffs prevail, could run to thousands of dollars per affected meeting.
Liang’s Argument for Inevitability
Otter chief executive Sam Liang has not addressed the litigation directly. In a TechCrunch interview in October 2025, he came close.
“If they accuse us, then they could accuse everyone else, all the tools you heard about doing meeting notes. My view is that we are on the right side of history. We’re building this new AI revolution. If you want AI to help, you need to put AI in the meetings.”
— Sam Liang, chief executive, Otter.ai
It is an argument about inevitability, not consent. Whether the law agrees is now the question before Judge Eumi K. Lee.
What the Bot Heard After the Meeting
The Bilzerian incident is worth lingering on, because it captured something uncomfortable about the product category long before the lawyers arrived. An AI meeting assistant is designed to be unobtrusive. It joins the call, transcribes, leaves. Its whole appeal is that the humans in the room stop noticing it.
The trouble, in the VCs’ case, was that they stopped noticing it after the call had formally ended: still on Zoom, still being recorded, still speaking as though no-one else was listening.
The Wider Reckoning
Otter is not alone in the dock. Fireflies.ai now faces two BIPA class actions in Illinois. Read AI has been banned, by policy rather than by court order, from Zoom and Teams environments at the University of Washington, Chapman University and the University of California, Riverside. The pattern is clear. Institutions that used to wave AI notetakers through are starting to ask harder questions.
Jackson Lewis attorney Joseph Lazzarotti has described the Otter case as a test of the “single-consent” model, under which a meeting host gives permission on behalf of everyone else. That model, he notes, is legally precarious in all-party consent states such as California and Illinois.
Otter’s motion-to-dismiss hearing is set for 20 May 2026. Until then, the bot is still in the lobby. It is still waiting to be admitted. Most of the time, someone still clicks yes.
Further reading
- Otter.ai on Trial, and the AI Notetaker Industry with it
- Otter.ai Goes Full Enterprise: New AI Suite Wants To Turn Meetings Into Living Knowledge Base
Sources
- Alex Bilzerian – X post, 26 September 2024
- Danielle Abril, “AI assistants may be sharing your work secrets” – The Washington Post, 2 October 2024
- Marina Temkin interview with Sam Liang – TechCrunch, 7 October 2025
- In re Otter.AI Privacy Litigation, 5:25-cv-06911 – CourtListener docket
- Joseph J. Lazzarotti – Jackson Lewis Workplace Privacy Report
- University of Washington IT – Read AI deactivation notice