Five Illinois residents have filed a class action lawsuit against Microsoft, alleging that the tech giant’s flagship collaboration platform, Teams, has been illegally collecting and analyzing voice data.
The complaint, filed on February 5, 2026, in the US District Court for the Western District of Washington (Basich et al. v. Microsoft Corp.), centers on alleged violations of the Illinois Biometric Information Privacy Act (BIPA).
The plaintiffs, led by Alex Basich and Kristin Bondlow, claim that Microsoft’s real-time transcription feature works by capturing speakers’ voices and assessing specific biometric qualities, including pitch, tone, and timbre, to identify who is speaking. This process, known technically as diarization, creates distinct speaker profiles. The lawsuit argues that by doing so, Microsoft effectively created “voiceprints” of users without their knowledge or consent.
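To make the mechanics concrete, here is a minimal, illustrative sketch of how diarization systems in general assign speaker labels: each audio segment is reduced to a feature vector (pitch, tone, and timbre statistics in this toy example), and segments are matched to running per-speaker profiles or spawn a new profile when no close match exists. This is a generic clustering sketch with an assumed `diarize` helper and a hypothetical distance threshold, not Microsoft's actual implementation, which has not been publicly detailed.

```python
import numpy as np

def diarize(segments, threshold=1.0):
    """Toy diarization: label each segment's feature vector by
    comparing it against running centroids of known speakers.
    NOTE: illustrative only; real systems use learned neural
    speaker embeddings, not raw acoustic statistics."""
    centroids = []   # one running-mean "speaker profile" per speaker
    counts = []      # how many segments each profile has absorbed
    labels = []
    for vec in segments:
        vec = np.asarray(vec, dtype=float)
        if centroids:
            dists = [np.linalg.norm(vec - c) for c in centroids]
            best = int(np.argmin(dists))
            if dists[best] < threshold:
                # close enough: assign to existing speaker and
                # update that profile's running mean
                counts[best] += 1
                centroids[best] += (vec - centroids[best]) / counts[best]
                labels.append(best)
                continue
        # no close match: create a new speaker profile
        centroids.append(vec.copy())
        counts.append(1)
        labels.append(len(centroids) - 1)
    return labels

# Two speakers with distinct (toy) pitch/timbre vectors
segs = [[1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [0.9, 1.0], [5.1, 4.9]]
print(diarize(segs))  # → [0, 0, 1, 0, 1]
```

The persistent per-speaker profiles built up by this kind of clustering are precisely what the lawsuit characterises as "voiceprints": stable representations of an individual's voice that can identify them across segments and meetings.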
According to the filing, “Microsoft never informed Teams meeting participants that their biometrics, such as voiceprints, were being collected during Microsoft Teams Meetings.” Furthermore, the plaintiffs allege Microsoft failed to explain the purpose of this data collection or provide a retention schedule detailing how long the biometric data would be stored and when it would be destroyed, which are key requirements under Illinois law.
The lawsuit seeks to represent a class of Microsoft Teams participants whose biometric information was captured while they resided in Illinois, dating back to March 1, 2021.
The financial stakes could be significant. The suit demands actual damages or $1,000 per negligent violation, whichever is greater. If the court finds the violation was intentional or reckless, damages could rise to $5,000 per violation. Given the ubiquity of Teams in the corporate world, the potential liability could reach into the billions.
Market Analysis: The Hidden Cost of Convenience
For tech buyers, this lawsuit is a stark reminder that the “black box” of productivity tools comes with hidden regulatory costs. We have spent the last few years racing to implement digital transformation, automating notes, generating meeting summaries, and tracking sentiment, often without questioning the mechanics of these features.
The core friction here is not the technology itself. Diarization is a marvel of modern engineering that makes hybrid meetings followable. The issue is the opacity of consent. For CIOs and Compliance Officers, this case highlights a dangerous gap in the SaaS supply chain: when a vendor updates its Terms of Service to enable a new feature, particularly an AI one, does that feature automatically comply with stringent state-level laws like BIPA?
If the allegations hold true, the burden of “shadow AI” shifts from the vendor to the customer. When you deploy “smart” collaboration tools to thousands of employees, are you inadvertently exposing your organization to the strictest privacy liabilities in the US, UK, or beyond? As we move deeper into 2026, vetting the privacy policies of your UC vendors should be considered a financial imperative. Innovation cannot come at the cost of compliance, and silence on data practices is no longer an option.
A History of Legal Battles
This case does not exist in a vacuum. Microsoft is no stranger to navigating complex legal thickets. The company has spent decades fending off regulatory challenges, most recently concluding a long-running dispute with the European Commission. Following a 2020 complaint by Slack (now owned by Salesforce), the European Commission investigated whether Microsoft bundling Teams with the Office 365 suite constituted anti-competitive behavior. To appease regulators, Microsoft eventually agreed to unbundle its products globally, a process finalized in late 2025.
Similarly, in July 2024, Microsoft settled a major antitrust complaint with CISPE (Cloud Infrastructure Services Providers in Europe) for €20 million, avoiding a potentially deeper probe into its cloud licensing practices. Beyond antitrust, the company is also fighting on the intellectual property front. Microsoft, along with key partner OpenAI, is currently facing consolidated multidistrict litigation over the alleged use of copyrighted works, including those of The New York Times, to train AI models that power tools like ChatGPT and Copilot.
However, the Basich et al. case represents a different kind of threat. Unlike antitrust fines, which are often viewed as the “cost of doing business,” biometric privacy violations strike at the heart of user trust. As AI becomes more embedded in UC and collaboration, from voice analysis to sentiment tracking, the legal scrutiny on how that data is processed is intensifying, and Microsoft once again finds itself in the crosshairs.