UC Identity Risks are Evolving: Deepfakes, Impersonation, and UC-Based Fraud

The growth of UC identity risks inside everyday meetings

[Image: A genuine employee and a deepfake version of the same employee sit opposite each other]

Security, Compliance & Risk · Explainer

Published: January 18, 2026

Rebekah Carter - Writer


For years, fraud and scam risks mostly lived in inboxes. We’d get emails from people we’d never heard of, packed with suspicious-looking links or bad grammar. It was easy enough to spot the risk, forward the message to security, and forget about it. Now, AI is reshaping UC identity risks.

Voice cloning and video synthesis are good enough now that an attacker doesn’t need to compromise a device or steal credentials. They just need a few audio clips and the right moment. Meetings give them both. You’d think we’d still be able to detect deepfakes in meetings, but as more platforms introduce features that allow people to have AI avatars join conversations on their behalf, speaking to a slightly more “robotic” version of a colleague is starting to feel more normal.

That’s dangerous when you consider just how important meetings can be. They’re where budgets get approved, vendors get paid, and “just do it now” decisions happen. Unified communications platforms have become transactional systems, even though they were never designed to verify identity at decision time.

What worries leaders isn’t the technology, it’s the trust. We still treat live calls as proof, but in today’s world, we can’t always believe what we see.

UC Identity Risks: Why Meetings are High-Trust Environments

Honestly, it’s surprisingly easy to trust meetings more than anything we see written down. A voice feels real. A face feels accountable. Meetings come pre-loaded with assumptions.

If someone’s on the call, camera on, using the right name, we treat them as verified. Nobody really thinks about asking someone to “prove” it’s them. Add urgency and authority, and the effect compounds. A senior voice asking for something “before the next meeting” shuts down doubt fast. That’s how deepfakes in meetings become more credible.

It’s why research shows that 37% of fraud experts have already dealt with voice deepfakes, 29% have encountered video deepfakes, and almost half have seen synthetic identity fraud.

There’s another thing that makes this worse. Meetings don’t disappear anymore. They turn into recordings, transcripts, summaries, and follow-ups. Once the wrong identity is accepted in the room, everything that comes after, the notes, action items, and approvals, carries that error forward.

UC platforms were built to help people collaborate; we don’t think of them as dangerous. Live channels are treated as safe by default, even as attackers move into them at scale.

The New UC Identity Risks Leaders Need to Know About

The trouble is that most people still imagine fraud as a single moment: a bad email or a suspicious call. What’s actually happening looks more like a relay race. Each step hands just enough credibility to the next.

It starts simply enough. Someone scrapes some public information from a few earnings calls, a podcast appearance, or a conference clip. That’s all it takes to clone a voice well enough. From there, the first contact might be a call, or a chat message, or something that feels harmless. Then they ask to jump on a call.

Inside the meeting, the pressure ramps up. Everything feels familiar, even if it’s not “exact”. You hear a voice that sounds mostly right, with recognizable tone and phrasing, and it’s connected to the right name. Maybe the video looks a little off, but you just assume someone’s using an AI avatar or a filter because they feel a bit shy on camera.

They only need a few minutes. Long enough for someone to say yes, confirm the change, or approve the payment. Phishing hasn’t gone away. It’s just been stacked on top of something more immediate. Vishing opens the door. UC platforms provide the stage. The meeting delivers the authority. By the time the ask comes, the room already feels legitimate.

We’ve seen how disastrous this can be. In early 2024, an employee at the global engineering firm Arup joined what looked like a routine internal video meeting. Senior leaders were present. Cameras were on. Voices sounded right. During the call, urgent instructions were given to move money. By the time anyone realized something was wrong, roughly $25 million had been wired out.

Multiple participants on that call were later confirmed to be deepfakes. Not cartoons. Not glitches. Convincing enough to pass in a real business conversation.

The Real Problem: Lack of Identity Assurance

Most identity systems still think in straight lines. You log in. You pass MFA. Your device looks clean enough. Box ticked. From that point on, the system mostly stops asking questions. Meanwhile, collaboration does the opposite. It’s fluid, fast, and messy. Decisions happen mid-sentence. Authority shifts in real time. That mismatch is the breeding ground for UC identity risks.

Identity, as we’ve built it, is binary. You’re in, or you’re out. Collaboration isn’t. It’s continuous. A meeting can drift from status update to financial approval without anyone noticing the moment it crosses a line. That’s why UC impersonation risk shows up so late; by the time something feels wrong, the decision is already made.

Tool sprawl doesn’t help. Every new UC app, integration, or workflow adds identities, permissions, and assumptions. Some are human. Many aren’t. Over time, visibility blurs. Who actually triggered that action? Was it a person or a bot?

Now add AI to the room. Meeting copilots. Transcription bots. Workflow agents that kick off follow-up actions. These non-human identities already outnumber people in many environments, and a surprising number of them don’t have a clear owner.

They join meetings, read chats, and generate records. Sometimes they act.

When humans and AI operate together in the same collaboration space, accountability starts to blur. It’s making deepfakes in meetings harder to spot, explain, and unwind after the fact.

Identity assurance didn’t fall behind because teams were careless. It fell behind because collaboration evolved faster than anyone expected. Now we’re asking binary systems to govern fluid, high-stakes moments they were never designed to see.

UC Identity Risks: Real Threat Scenarios

If you think of cases like the Arup incident as severe edge cases, it’s easy to assume the problem isn’t that widespread. You can tell yourself that the worst thing that can happen if a deepfake joins a meeting is that a little information gets leaked, or employees end up confused. Realistically, the dangers can be much bigger. For instance:

Finance approvals

Picture a group conversation about money. A senior leader joins late, apologizes, and sounds rushed. There’s a payment that needs to go out before the end of the day. “We’ll clean up the paperwork after.” The request doesn’t feel odd at a time when a lot of meetings seem chaotic in the first place. That’s how UC impersonation issues sneak past controls. The urgency compresses the verification window until it effectively disappears.

Vendor banking detail changes

This one’s a bit harder to spot, and arguably more dangerous. A vendor flags a “simple update” to payment details. A short call replaces the written confirmation process because it feels faster and more human. The voice sounds right. The name matches. The meeting ends. Money goes somewhere new. When deepfakes in meetings enter this flow, the paper trail looks legitimate until it’s far too late.

CEO or executive urgency

“I’m boarding a flight.” “I can’t stay long.” Those phrases shut down skepticism fast. Authority plus time pressure is a powerful combination, especially in live conversations. People don’t want to be the blocker. They want to help. That instinct is exactly what attackers lean on.

What ties these scenarios together isn’t carelessness. It’s structure. Meetings feel final. Controls assume legitimacy once a call starts, and very few organizations clearly define which meetings are high-risk and deserve extra scrutiny. Until that changes, UC identity risks will keep surfacing in the same painfully ordinary ways.

Shadow AI: the Accelerant Behind UC Identity Risks

Most teams don’t think of AI tools as risky. They think of them as helpful. Most of us use note-takers or copilots to save time. They don’t feel dangerous in the moment. But this is exactly how UC identity risks get harder to see, let alone manage.

Unapproved AI tools now sit alongside sanctioned UC platforms, quietly siphoning context. People paste chat logs into consumer AI because it’s quick. They drop meeting transcripts into tools that no one’s vetted. Those actions don’t look like data exfiltration. They look like productivity. Meanwhile, the organization loses track of who’s seen what, where it went, and whether it comes back dressed up as something authoritative.

Shadow AI also blurs accountability. When a summary sounds confident, people trust it. When an action item appears automatically, someone assumes it came from “the system.” That’s a gift to attackers exploiting UC impersonation risk, especially when deepfakes in meetings have already polluted the conversation upstream.

Addressing the New UC Identity Risks

Some companies are starting to recognize these problems. They’re asking about platforms with in-built fraud and deepfake detection, or exploring watermarking tools and biometric analysis. But detection is only part of the solution.

It matters, of course, but it’s downstream. By the time you’re arguing over whether a voice was synthetic, the money’s gone or the approval’s been acted on. Deepfakes in meetings aren’t dangerous because they fool machines. They’re dangerous because they fit perfectly into human workflows that were never designed to question presence.

This isn’t a user-error problem either. People behave exactly the way organizations have trained them to behave: respond quickly, respect authority, keep things moving. Meetings reward speed and alignment, not skepticism.

Traditional UC security doesn’t help much here. Encryption, uptime, and platform hardening still matter, but they protect availability and data in transit. Impersonation exploits confidence: the assumption that if someone’s in the meeting, they belong there.

What teams really need to do today is simple.

Redefine “high-risk meetings”

Most organizations treat all meetings the same. That’s the mistake. A weekly stand-up and a call that authorizes a payment should not live under the same assumptions. Finance approvals. Vendor banking changes. Executive directives. Legal and compliance decisions. These are moments where UC impersonation risk can do real damage, fast.

If a meeting can trigger irreversible action, it deserves different rules.
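The separation above can be made concrete with even a crude first pass. The sketch below is a hypothetical example of tagging meetings as high-risk from their title and agenda text; the topic keywords and function names are illustrative assumptions, not a real UC platform API, and in practice you’d want calendar metadata, attendee roles, and human review on top of anything this simple.

```python
# Hypothetical sketch: flag meetings whose subject matter suggests they can
# trigger irreversible action. Keywords are illustrative, not exhaustive.

HIGH_RISK_TOPICS = {
    "payment", "invoice", "banking", "wire", "vendor change",
    "legal", "compliance", "executive directive",
}

def is_high_risk(title: str, description: str = "") -> bool:
    """Return True if the meeting text mentions any high-risk topic."""
    text = f"{title} {description}".lower()
    return any(topic in text for topic in HIGH_RISK_TOPICS)

# A weekly stand-up and a payment approval should not share assumptions:
print(is_high_risk("Weekly stand-up"))                    # False
print(is_high_risk("Approve Q3 vendor invoice payment"))  # True
```

Meetings that match would then inherit the stricter rules discussed next, rather than the defaults everything else runs on.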

Introduce friction where it helps

This doesn’t mean slowing everything down. It means adding just enough pause at the edges that matter. Require secondary, out-of-band confirmation for requests made in a video meeting. Add clear escalation paths. Normalize verification as a process, not suspicion. The goal isn’t mistrusting everything; it’s consistency. Remember, controls only work when they don’t punish people for doing the right thing.
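One way to picture that “pause at the edges”: an irreversible action requested live is held until it’s confirmed over a second, pre-registered channel. This is a minimal sketch under assumed names (`PendingAction`, an in-memory token standing in for an SMS code or ticket approval), not a prescribed implementation.

```python
# Minimal sketch of out-of-band confirmation: the live request alone is
# never enough to execute an irreversible action.
import secrets

class PendingAction:
    def __init__(self, description: str, requester: str):
        self.description = description
        self.requester = requester
        # Token delivered via a second channel (e.g. SMS, ticketing system)
        self.token = secrets.token_hex(4)
        self.confirmed = False

    def confirm(self, token: str) -> bool:
        """Confirm only with the token from the second channel."""
        if secrets.compare_digest(token, self.token):
            self.confirmed = True
        return self.confirmed

action = PendingAction("Change vendor banking details", "cfo@example.com")
print(action.confirm("wrong-token"))  # False: a convincing voice is not a token
print(action.confirm(action.token))   # True only after out-of-band confirmation
```

The point isn’t the mechanism; it’s that the meeting itself stops being the final control.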

Treat voice and presence as data, not proof

Things have changed in the age of AI. Voice isn’t identity. Video isn’t authority. Familiarity isn’t legitimacy. Once you accept that deepfakes in meetings are good enough to pass socially, you stop using presence as evidence and start treating it as a signal that still needs context.

Govern non-human identities in collaboration

Bots, copilots, and agents don’t get a free pass just because they’re helpful. Assign ownership. Define scope. Review access. Preserve auditability. If a non-human identity can influence decisions, it needs the same scrutiny as a person.

Align UC, identity, security, and governance teams

Collaboration platforms are now risk surfaces. UC security can’t sit in a corner anymore. When identity, governance, and collaboration teams actually talk to each other, UC identity risks start being manageable.

The Broader Trend: AI, the UC Reset, and Rising Identity Pressure

Most leaders know this by now. UC and collaboration platforms aren’t just where work happens anymore; they are the workplace. Calls trigger workflows. Meetings generate records. Chat drives decisions. That’s why UC identity risks keep showing up here first, before anyone notices them anywhere else.

At the same time, AI is becoming an active participant. Meeting copilots summarize. Agents assign tasks. Avatars and digital twins stand in for people who can’t join live. Collaboration stacks are absorbing more responsibility, not less, and responsibility without identity clarity is a problem.

As collaboration gets smarter and faster, identity certainty keeps thinning out. Governance models that assume humans, static roles, and clear boundaries can’t keep up. If they don’t evolve, UC identity risks won’t just increase; they’ll become the background noise of everyday work.

So, start simple. Where, exactly, do meetings function as approval mechanisms in your business? Where does a verbal “yes” move money, data, or authority faster than any written control ever could? Then get specific. Where does identity verification actually stop today? At login? At MFA? Or does it disappear the moment a call starts and the conversation feels real enough?

Ask whether your people know when it’s acceptable to challenge identity in a meeting. When urgency and hierarchy collide, do they have permission to slow things down without feeling like the problem?

Finally, ask the question most teams avoid because it gets awkward fast: Can you prove who authorized what, and when, if that decision happened live? If the answer relies on memory, trust, or a meeting recording that “looks right,” you’ve already wandered into UC impersonation risk territory.

UC Identity Risks: The Threat Leaders Can’t Ignore

Honestly, none of the big issues with UC identity risks need reckless staff members or exotic, advanced attack strategies. They’re all just happening because meetings sit in the middle of how work gets done, and we’ve treated them as trustworthy by default for too long.

That’s why UC identity risks are so dangerous. They blend in with a familiar voice, a face on camera, or a rushed request that sounds reasonable at the time.

The fix isn’t paranoia. It’s realism. Identity can’t stop at login anymore. It has to show up where authority is exercised, inside meetings, collaboration flows, and the moments that actually move money, data, and people.

If you care about trust, auditability, and decision integrity in modern work, this is now part of the job. UC platforms aren’t just communication tools. They’re control surfaces.

If you want a clearer framework for thinking through this, our ultimate guide to UC security, compliance, and risk is a good place to start.
