The AI Compliance Blind Spot: When ‘Helpful’ UC AI Becomes an Audit Risk

The EU AI Act is pushing organizations toward a more rigorous kind of AI compliance: not just “responsible use,” but auditable proof, such as documentation, logs, oversight, and accountability. Yet as transcription, sentiment analysis, and automated moderation become default features in enterprise communications platforms, many buyers are finding that the level of technical detail they can obtain doesn’t always match what auditors and internal risk teams want. The result is rarely malice. It’s usually complexity, and a growing risk gap that CISOs and buying committees must manage explicitly.

The AI Compliance Gap: What Security Leaders Need to Know About UC Under the EU AI Act

Published: January 21, 2026

Kieran Devlin

AI compliance is transitioning from policy to procurement. Under the EU AI Act, organizations will increasingly need to demonstrate how AI systems process communications data: what they ingest, what they produce, and how risks are controlled.

In UC, collaboration, contact centers, and employee experience tooling, AI features often arrive as product enhancements rather than standalone “regulated systems.” For CISOs, CIOs, and tech buyers, the crux of the matter is how to build audit-ready AI governance when critical technical transparency may be partial, evolving, or challenging to operationalize.

Europe’s AI rulebook is often described as “risk-based.” That’s true. But the more practical impact, especially for security and governance teams, is that it nudges AI compliance toward an evidence standard: documentation, classification, monitoring, and human oversight that can be shown to auditors and regulators.

Vasant Dhar, AI expert and pioneer, professor at NYU Stern and the Center for Data Science, and the author of Thinking With Machines, offered a vivid way to think about why this is hard in practice. “The closest analogy is an alien growing up alongside us, becoming more intelligent, acquiring new capabilities,” he told UC Today. “This is more like something organic, learning new things, gaining in capability by the day.”

That “alien” is already embedded across enterprise communications: meeting transcription, summaries, contact center coaching, content moderation, and “insights” layered over employee and customer interactions. None of that is inherently a problem. The risk emerges when organizations assume that because these features are commonplace and marketed as “assistive,” they are automatically low-risk and easily auditable. Under the EU AI Act’s trajectory, that assumption can become a serious vulnerability.

AI Compliance is Becoming an Evidence Requirement, So Governance Has to Start Early

The EU AI Act is a framework designed to be operationalized through risk classification, controls, and documentation that can support regulatory scrutiny.

The rollout is phased. Specific provisions already apply, including the prohibitions and AI literacy requirements that took effect in February 2025, followed by the rules covering general-purpose AI (GPAI) models and key governance and penalty provisions from August 2025. Other obligations, among them the EU AI Act’s high-risk system requirements and their technical documentation expectations, are scheduled to apply later in the staged timeline.

For CISOs and buying committees, these signposts indicate that audit expectations are moving toward documentation and evidence, even as organizational AI deployments in communications continue to expand.

The immediate point is that “communications data” isn’t passive anymore. A transcript is a transformation of speech into text; a meeting summary is an interpretive artifact; sentiment signals are inferences about people. Even when a vendor treats these features as productivity helpers, they can create secondary risks when outputs are reused in HR processes, legal disputes, compliance investigations, customer escalations, or internal monitoring.

Ryan Johnson, Founder and Principal Consultant at The Technology Law Group, suggested to UC Today that early audits will likely focus on proof more than promise.

Many AI compliance programs are “well-meaning and articulate principles, risk tiers, and governance goals,” he said, but they “struggle to produce auditable outputs that clearly connect specific systems to risk classification, data inputs and outputs, human oversight controls, and post-deployment monitoring.”

Laura Clayton McDonnell, President of the Corporate segment at Thomson Reuters, emphasized to UC Today that organizations should begin with internal structure, not vendor questionnaires:

“One of the first things we talk about is that it’s not really about how much you’ve invested in terms of the budget, it’s about the governance infrastructure you have in place. First things first: get your house in order.”

That’s the non-negotiable starting point for AI compliance. Before asking vendors for documentation, enterprises need to know which AI features they have enabled, where those features operate, and what they are allowed to influence.

“Assistive” AI Can Become High-Impact AI, Depending on How It’s Used

A recurring challenge in UC and collaboration is that AI enters through user experience. Features appear as default options, product bundles, or admin toggles. They can spread quickly, especially when they reduce meeting fatigue or boost contact center productivity.

Dhar argued that this is a different kind of technology adoption cycle, because the behavior of AI systems isn’t as predictable as that of earlier enterprise software waves. “The reality of error is not theoretical. AI will always make mistakes, just like humans do. Mistakes will occur,” he said.

From an AI compliance perspective, the primary issue isn’t whether errors happen but whether the organization has calibrated the cost of those errors, implemented oversight, and documented the rationale.

Dhar described an “automation frontier” that shifts when the consequences of mistakes become manageable. “It crosses the automation frontier when the cost of error becomes sufficiently low,” he explained. But enterprises don’t always apply that thinking to communication systems, where the same feature can be low-risk in one setting and high-impact in another.

Johnson believes buyers underestimate how context changes both classification and scrutiny. “The most dangerous misalignment I’m seeing is the assumption that AI features embedded in communications and collaboration tools, such as transcription, sentiment analysis, or productivity insights, are inherently low risk because they are positioned as assistive,” he said.

Those features can look benign in isolation, but “the risk profile changes dramatically when they are always on, applied to internal communications, and used to generate insights about people.” “Ultimately,” he elaborated, “the risk is not just about the data being processed, but about the context, scale, and real-world impact of how those AI systems are deployed.”

The EU AI Act explicitly recognizes sensitive workplace contexts and prohibits certain practices, such as using AI systems to infer individuals’ emotions in the workplace (with limited exceptions). That prohibition sits in the earliest phase of the rollout, applying from February 2025.

The practical takeaway for buyers is to treat these features as systems that can cross compliance thresholds quietly, and to put governance and documentation in place well before broad deployment.

Transparency is Often a Spectrum, So Buyers Need to Define “Enough Detail” Up Front

In an ideal world, every AI feature would come with clear, complete documentation that is easy to map to enterprise risk controls. In reality, however, documentation tends to be uneven across features, regions, hosting models, and partner ecosystems.

That is not necessarily a refusal. Often, it reflects genuine complexity. AI features may rely on multiple components (including upstream models), change over time, and produce probabilistic outputs that are hard to summarize in one static packet.

Dhar noted that even when everyone, whether vendors, buyers, or channel providers, is acting in good faith, contracts can struggle to capture technical nuance:

“Sometimes English just isn’t precise enough. You need math, and you can’t specify contracts in math.”

That makes it risky for enterprises to treat contractual language alone as a substitute for technical clarity.

Johnson described where the documentation gap becomes operational for AI compliance teams. “The most common issues I see are the absence of clear and auditable assurances around how enterprise data is processed, whether it is logged or retained, and whether it is used to train models,” he said.

He also warned that downstream implementations can create new risks that aren’t fully addressed by upstream policies. “Smaller orgs building on top of these models often rely too heavily on the provider’s terms and policies, which rarely account for the risks introduced by their own customized use cases,” he added.

The EU AI Act sets expectations around technical documentation for high-risk AI systems, documentation that should demonstrate compliance and provide authorities with information “in a clear and comprehensive form.” So the buyer’s challenge becomes: what level of transparency is “enough” to satisfy audit readiness, internal risk governance, and procurement standards, especially when a vendor’s materials are informative but not tailored to a specific deployment?

Clayton McDonnell suggested organizations start with internal governance and then extend requirements outward. “Once you have governance and internal guidance in place, it extends to your partners,” she outlined. In practice, that means defining documentation expectations during procurement, not after rollout.

Contracts Aren’t the Enemy, But They Can’t Carry AI Compliance by Themselves

Most vendors aren’t trying to “dump risk.” Many are working through genuinely unsettled regulatory expectations and evolving technology. Still, CISOs and buying committees should recognize that standard contract language may not automatically provide what they need for AI compliance, particularly regarding auditability and change management.

Johnson observed a common market dynamic: “In practice, many UC&C vendors shift AI-related risk to customers through contract language, standardized terms of use, and a take-it-or-leave-it posture driven by their size and market power.” “Downstream providers usually accept this due to sheer lack of bargaining power, even when it leaves meaningful gaps in compliance protection,” he continued. That doesn’t mean the vendor is acting unfairly, but it does mean the buyer must treat AI governance as a core commercial requirement.

Audit rights may appear helpful, but Johnson cautioned that they can be hard to operationalize. “While enterprises may attempt to negotiate audit rights, those provisions are often difficult to exercise in reality and provide limited practical value,” he said. Instead, he pointed to clearer, more actionable protections for AI compliance, such as “written commitments around regulatory cooperation, advance notice when new AI features materially change risk, and indemnification that aligns with how the EU AI Act allocates responsibility between providers and deployers.”

There is also a point at which a lack of clarity becomes a risk the enterprise cannot reasonably bear. Johnson framed that threshold in practical terms:

“Transparency becomes a true deal breaker when a vendor cannot clearly explain, in writing, how data flows through the system, whether data is used for training, or how the AI system is classified under the Act. Without that clarity, the enterprise simply cannot meet its own compliance obligations, regardless of how robust its internal program may be.”

Clayton McDonnell’s approach is to make AI rules explicit in agreements when the use case demands it. “You might attach guidelines as an appendix, or you might say AI cannot be used for the work you’re asking for,” she said. For large organizations, this is less about distrust and more about consistency. The enterprise cannot meet AI compliance obligations if third parties operate under different assumptions.

Accountability remains an executive topic as well. Dhar noted that responsibility often depends on the nature of the error, but it can reach the highest levels of the organization. “In many cases, responsibility may exist at the top,” he said. For CISOs, that’s a critical governance reality. If the enterprise chooses to deploy AI features broadly without sufficient clarity on behavior and data handling, leadership is effectively accepting that risk.

Johnson added that regulators are likely to respond more favorably to organizations that can demonstrate good-faith, documented controls, even if they’re not perfect. However, they will be less forgiving where public claims and documentation diverge. “Companies that are most likely to attract early regulatory attention are those whose public claims about responsible or ethical AI are not supported by documentation, controls, or operational reality,” he suggested.

AI Compliance in UC&C is About Reducing Uncertainty, Not Assigning Blame

The most important shift the EU AI Act introduces for enterprise communications and collaboration isn’t ideological. Rather, it’s operational. AI compliance is moving toward auditability. When AI is embedded in meetings, calls, chat, and collaboration workflows, the organization needs to be able to explain what the system does, what it sees, how outputs are generated, and what guardrails are in place.

That doesn’t require treating vendors as adversaries. In many cases, the right approach is partnership: co-defining documentation expectations, narrowing use cases where necessary, implementing human-in-the-loop review for higher-cost errors, and putting change-notification and cooperation commitments into contracts so governance keeps pace with product evolution.

Still, CISOs and tech buyers shouldn’t confuse vendor assurances with audit-ready proof. AI features are changing faster than most procurement templates, and the same “assistive” tool can become high-impact depending on context and scale. The organizations that navigate this well will be those that adopt AI deliberately, map features to risk, demand the right level of detail, and document their decisions.

Clayton McDonnell’s advice remains the right starting point, calm, practical, and difficult to argue with: “get your house in order.”

With the EU AI Act now active in its phased rollout, that “house” includes not just internal controls, but also the clarity you can obtain about how your communications platforms process sensitive data, because that clarity is increasingly what AI compliance looks like.
