Agentic AI is moving from a promising productivity tool to a security problem that enterprise leaders can no longer ignore. In the conversation below, Kristian speaks with Irina Tsukerman, President at Scarab Rising; Shlomi Beer, Co-Founder & CEO at ImpersonAlly; and Roey Eliyahu, Co-Founder & CEO at Salt Security, about what agentic AI is, where the risks are emerging, and how organizations can manage systems designed to act on their behalf.
What makes the topic urgent is that agentic AI is not simply a chatbot with a smarter interface. These systems can make decisions, take actions, access data, and move across business systems with limited human oversight. That creates efficiency, but it also creates exposure, especially when companies are adopting the technology faster than they are building the controls around it.
Kristian frames the discussion around a simple but important question: how do you secure systems that are meant to behave autonomously when traditional security assumptions are no longer enough? The answer, as the guests make clear, is that the companies moving fastest on agentic AI are often the ones least prepared for its consequences.
Where The Security Risks Emerge
The first major theme in the conversation is that adoption is outpacing governance. Shlomi Beer says attackers do not always need to break through classic perimeter defenses; instead, they can manipulate external inputs, prompt chains, or other content that agents ingest and trust. In that environment, the attack surface is no longer just a network or an endpoint. It is the workflow itself.
Roey Eliyahu adds a broader operational view. He argues that consumer-facing sectors, where support volume is high and repetitive tasks are common, are adopting agents aggressively because the business case is compelling. But once an agent is expected to act like an employee, it also needs the permissions of an employee. That is where the security problem begins to scale.
Both guests point to the same underlying issue: the more useful the agent becomes, the more access it needs. And the more access it receives, the more dangerous it becomes if it is abused, hijacked, or allowed to make the wrong call. What begins as automation can quickly become a privilege problem, an observability problem, and a governance problem at the same time.
A second theme is that organizations often leave the rules unclear because the technology is moving faster than internal policy. Irina Tsukerman says some companies rush into deployment because they want competitive advantage, while others delay formal controls because they do not yet understand the risks well enough.
Why Governance Is Lagging
But the risks are not abstract. Irina points to predictable failure modes: an agent being hijacked, exposing customer information, or falling for a deepfake-style manipulation. Roey widens that lens by explaining that agents also create compliance exposure, especially in regulated sectors such as finance, insurance, and pharma. Even when the agent improves service, it still has access to sensitive data.
The discussion also shows why the current security market can feel fragmented. Vendors often sell point solutions for one layer of the stack, such as identity, the model, or the MCP layer, but the speakers argue that this rarely maps cleanly to the real business risk. The problem is not one isolated component. It is the chain linking agent, prompt, data, API, and downstream action.
The conversation turns to remedy, and here the emphasis is clear: start with visibility, then add guardrails, then add detection. Roey says readiness begins with full discovery and observability across agents, MCP servers, APIs, code, runtime, and configuration. Without that holistic view, security teams are trying to defend something they cannot fully see.
Once organizations understand the full chain, they can apply business-specific restrictions. An airline may want to prevent agents from issuing refunds or changing fares. A retailer may need to block unauthorized customer data access or prevent cross-customer leakage. Irina reinforces that point by arguing that prevention is not enough on its own; companies also need monitoring that can detect misuse before the damage becomes externally visible.
How Companies Can Respond
The final takeaway is that agentic AI does not just expand what employees can do. It also expands what attackers, insiders, and careless users can trigger through the same systems. That makes security both more urgent and more difficult, because the threat is embedded in the workflow itself rather than sitting outside it.
In the end, the conversation leaves Kristian and his guests with a cautionary but practical message. Agentic AI can deliver real productivity gains, but only if organizations stop treating security as an afterthought. The companies most likely to benefit from the technology are the ones that pair adoption with observability, limit privilege by design, and recognize that autonomy without control is not innovation. It is exposure.