Employees are now using more than 3,400 AI apps at work, most outside IT visibility. According to Zscaler, that surge is creating a major shadow AI compliance and security challenge, as sensitive company data flows into tools that many IT teams cannot fully monitor.
As Jay Chaudhry, CEO and Founder of Zscaler, said on the company's recent earnings call:
"Organizations are rapidly adopting AI to drive productivity and innovation, but doing so is creating new vulnerabilities, significantly expanding the attack surface and increasing cyber threats in scale, sophistication, and speed – recasting AI from a productivity engine into a dangerous security threat."
The scale behind that warning can't be ignored. Zscaler said AI application usage across its customers has expanded to more than 3,400 apps, a quadrupling over the last 12 months. Meanwhile, data transfers to AI applications exceeded 18,000 terabytes in 2025.
The company also reported that enterprise AI usage rose 91% year over year, while data transfers to AI and machine learning applications climbed 93%.
Read More:
- Is Your UC Platform Breaking Zero-Trust Security Models?
- 5 Use Cases Where UC Security and Compliance Became a Competitive Advantage
- The UC Security & Compliance Buyer's Checklist: 20 Questions to Ask Before You Sign
Securing Uncontrolled AI Usage at Scale
Zscaler is positioning its new AI Protect tools as a response to this shift, arguing that enterprise AI security must now focus as much on employee behavior and governance as on traditional cyber defense.
In its recent financial results, the company highlighted Zscaler AI Protect as a necessity for securing enterprise AI usage at scale. Zscaler is trying to give enterprises something most currently lack: visibility into how AI is actually being used. That means identifying which AI tools employees are using, controlling access, and monitoring how data flows into them.
This allows enterprises to move from reactive policy enforcement to proactive governance.
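To make that governance model concrete, the pattern can be sketched as a policy gateway that sits between users and AI applications, checking both the destination and the outbound prompt before traffic is allowed through. This is a minimal illustration of the general approach, not Zscaler's actual implementation; the app names, patterns, and verdict strings are assumptions for the sketch.

```python
import re

# Hypothetical allowlist of sanctioned AI destinations (illustrative only).
SANCTIONED_APPS = {"chat.openai.com", "copilot.example.com"}

# Toy stand-ins for sensitive-data detection; real DLP engines use far
# richer classifiers than a couple of regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                            # US SSN-like
    re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I),  # email address
]

def evaluate_prompt(destination: str, prompt: str) -> str:
    """Return a policy verdict for a prompt bound for an AI app."""
    # Unsanctioned apps are blocked outright: this is the "shadow AI" control.
    if destination not in SANCTIONED_APPS:
        return "block:unsanctioned-app"
    # Sanctioned apps still get prompt inspection to prevent data leakage.
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        return "block:data-leakage"
    return "allow"
```

The point of the sketch is the ordering: app discovery and access control come first, and prompt-level inspection applies even to approved tools, which is what turns reactive blocking into proactive policy.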
The AI Protect package is no longer being framed as a niche add-on for experimental AI projects. Instead, it is being positioned as a control layer for AI compliance and broader enterprise AI security.
This is already showing up in customer deals. Zscaler said a Fortune 500 semiconductor manufacturer signed an eight-figure new logo deal that included Zscaler AI Protect and data security products. Their purpose? To block unsanctioned AI applications, prevent data leakage into public large language models, and provide visibility into prompts.
One of the most telling details from the quarter came from an entertainment customer. According to Chaudhry, a major entertainment company activated Zscaler's policy enforcement for AI traffic and discovered that 4 million AI prompts per week were now being secured. That kind of number suggests companies may be much further into shadow AI usage than leadership teams realize.
Enterprise AI Security Gets Harder as AI Agents Enter the Workflow
Zscaler is also trying to widen the conversation beyond employees using AI tools manually. The company says the next challenge for enterprise AI security will come from AI agents operating autonomously across workflows, applications, and data environments. Chaudhry explained:
"AI agents shift the threat landscape and operate autonomously at speeds far exceeding humans, exponentially increasing agentic traffic while compressing the time to prevent, detect, and respond to threats."
That warning matters in the employee experience space because AI is increasingly being embedded into collaboration and workflow automation. Once AI agents begin acting across enterprise systems at scale, shadow AI compliance becomes harder to manage. The challenge is no longer just what employees type into AI tools, but what connected AI systems can access, share, and trigger on their own.
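One common way to constrain what connected AI systems can access and trigger is per-agent permission scoping with an audit trail. The sketch below is a hypothetical illustration of that idea; the agent names, action strings, and log format are assumptions, not a vendor API.

```python
from typing import Dict, List, Set, Tuple

# Hypothetical per-agent grants: each agent may only perform actions it is
# explicitly given (deny by default), mirroring a least-privilege policy.
AGENT_PERMISSIONS: Dict[str, Set[str]] = {
    "meeting-scheduler": {"calendar.read", "calendar.write"},
    "summary-bot": {"docs.read"},
}

# Every decision is recorded so security teams can see what agents attempted.
audit_log: List[Tuple[str, str, str]] = []

def authorize(agent: str, action: str) -> bool:
    """Allow an agent action only if explicitly granted, and record the verdict."""
    allowed = action in AGENT_PERMISSIONS.get(agent, set())
    audit_log.append((agent, action, "allow" if allowed else "deny"))
    return allowed
```

Deny-by-default matters here because agents act faster than humans can review; an unknown agent or an ungranted action is refused automatically rather than escalated after the fact.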
Compliance Pressure Is Giving Zscaler Another Opening
The compliance dimension adds even more weight to Zscaler's argument. In its recent expansion of global compliance capabilities, the company emphasized the need for stronger local controls. Misha Kuperman, Chief Reliability Officer at Zscaler, said in the announcement:
"Effective data sovereignty requires customers to have verified authority over their data residency, telemetry and control data plane data."
For enterprises dealing with shadow AI, this raises a critical issue. It is not just about seeing how employees use AI, but ensuring that any data shared with those tools does not violate regional compliance requirements or data residency rules.
What This Signals for IT and Security Leaders
The bigger takeaway from Zscaler's quarter is that shadow AI compliance is no longer a side issue caused by a few curious employees testing new tools. It is becoming a mainstream enterprise governance problem, driven by widespread workplace adoption and the rapid growth of AI-powered workflows.
That is where Zscaler AI Protect is trying to land its message. The company is betting that customers will increasingly need a dedicated policy and visibility layer between employees, AI applications, and sensitive corporate data. If that thesis holds, enterprise AI security will become one of the most important budget conversations in the market over the next year.
For many enterprises, the uncomfortable reality is simple: AI adoption is speeding ahead, leaving governance by the wayside.
Want to upgrade your enterprise security? Check out UC Today's Guide to Security & Compliance to kickstart your adoption journey and find all the guidance you'll need.
FAQs
What is Zscaler AI Protect?
Zscaler AI Protect is Zscaler's platform for discovering AI usage, managing access, and inspecting prompts. It also helps prevent sensitive data leakage across AI applications.
What does shadow AI compliance mean?
Shadow AI compliance refers to the challenge of governing employee use of AI tools, particularly when that usage is not approved, monitored, or covered by existing compliance controls.
Why is enterprise AI security becoming more urgent?
Enterprise AI security is becoming more urgent because employees are adopting more AI tools, sharing sensitive data with them, and beginning to interact with AI agents that can operate autonomously at scale.