As enterprises adopt agentic AI systems, the cybersecurity risks are evolving rapidly. In this episode of Today in Tech, host Keith Shaw speaks with Lee Klarich, Chief Product Officer at Palo Alto Networks, about how AI agents are increasing the complexity and scale of the attack surface. The discussion covers real-world vulnerabilities, new threat vectors such as prompt injection and rogue agents, and what security teams must do to adapt. Watch the full video (above) and read the complete transcript (below) to learn how to secure agentic AI deployments, prepare your SOC for autonomous threats, and stay ahead of attacker innovation.
Keith Shaw: With many companies pushing toward agentic AI projects, there are growing concerns about the security of these agents. On this episode of Today in Tech, we’re going to talk about those security challenges, as well as the ongoing concerns around generative AI security.
Hi everybody, welcome to Today in Tech. I'm Keith Shaw. Joining me on the show today is Lee Klarich, the Chief Product Officer at Palo Alto Networks. Welcome to the show, Lee.

Lee Klarich: Thanks so much, Keith.
Keith: So Lee, when you're speaking with customers, I want to get a sense of their biggest concerns when it comes to generative AI and security. If you had to create a bullet list of the top concerns you're hearing in 2025, what would they be?
Lee: It really depends on where they are on the AI adoption curve. But the foundational concern everyone shares is misuse — employees using whatever AI tools they want, potentially leaking data, and the company not knowing what’s happening or whether it’s secure.
So first, it’s: “What do I have, and how do I make sure it’s controlled and secure?” Second, as companies begin building AI into their applications, new attack vectors emerge — things like prompt injection, memory corruption, model DoS attacks. These are entirely new types of threats.
Third is the growing realization that attackers will also start using AI. That means more attacks, more sophisticated attacks, and entirely new types of threats. Those are the three buckets I consistently hear from customers.
Keith: And before we dive deeper into agentic AI specifically, how do you differentiate between generative AI and agentic AI from a security perspective? Are the concerns different?

Lee: I think about it in three phases. First phase: traditional generative AI, prompting a chatbot like ChatGPT or Gemini. Second phase: copilots embedded in applications, sitting alongside users and providing context-aware assistance. Third phase: agentic AI, where the AI becomes increasingly autonomous. It can sense its environment, initiate processes, and take action, sometimes with a human's final approval, sometimes not.
With each phase, the security concerns build. Prompts: "What if someone inputs sensitive data?" Copilots: "Now there's more context and more sensitive data." Agentic AI: "What identity does the agent use? What if it takes unauthorized actions? Can it be turned off if it goes rogue?" So yes, the concerns grow with each stage.
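Those identity questions map onto familiar access-control patterns. Below is a minimal sketch, not Palo Alto Networks' implementation, of giving each agent its own scoped identity with default-deny permissions and a revocation flag acting as the "turn it off" switch Klarich mentions. All names and actions here are hypothetical.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Hypothetical per-agent identity: scoped permissions plus a kill switch."""
    name: str
    allowed_actions: set[str]
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    revoked: bool = False

    def authorize(self, action: str) -> bool:
        # A revoked ("rogue") agent can take no action, and anything
        # outside its explicit scope is denied by default.
        return not self.revoked and action in self.allowed_actions

# Usage: the calendar agent may read and write calendars, nothing else.
calendar_agent = AgentIdentity("calendar-bot", {"calendar.read", "calendar.write"})
assert calendar_agent.authorize("calendar.write")
assert not calendar_agent.authorize("email.send")    # unauthorized action denied
calendar_agent.revoked = True                        # the kill switch
assert not calendar_agent.authorize("calendar.read")
```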
Keith: Does the attack surface change as well?

Lee: Absolutely. Initially, organizations dealt with five to ten major AI applications. Then came copilots: hundreds, maybe thousands. Now with agents, we're seeing tens of thousands, even hundreds of thousands of agents deployed across an enterprise.
They talk to other internal systems, external SaaS apps, and can open pathways back into the enterprise. So the attack surface is dramatically expanding.
Keith: Will that overwhelm current monitoring systems? How do you even know what's legitimate traffic?

Lee: Great question. It comes down to a basic security hierarchy: discover the agents as they're deployed, control and limit usage, assess configuration security, and protect from misuse and external attacks. That's the framework most companies need to start with. Just the ability to discover and limit agent usage is a major first step.
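To make the first two stages of that hierarchy concrete, here is a rough sketch of discovery and control over an agent inventory. The log format, the `agent/` user-agent marker, and the allowlist are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    owner: str
    endpoints: list[str] = field(default_factory=list)  # systems the agent reaches
    approved: bool = False

def discover(log_entries: list[dict]) -> dict[str, Agent]:
    """Stage 1 (discover): build an inventory from traffic that looks agent-like."""
    inventory: dict[str, Agent] = {}
    for entry in log_entries:
        ua = entry.get("user_agent", "")
        if ua.startswith("agent/"):  # hypothetical marker for agent traffic
            agent = inventory.setdefault(ua, Agent(ua, entry.get("owner", "unknown")))
            agent.endpoints.append(entry["dest"])
    return inventory

def control(inventory: dict[str, Agent], allowlist: set[str]) -> list[Agent]:
    """Stage 2 (control): flag anything not explicitly approved for review."""
    for agent in inventory.values():
        agent.approved = agent.name in allowlist
    return [a for a in inventory.values() if not a.approved]

logs = [
    {"user_agent": "agent/expense-bot", "owner": "finance", "dest": "erp.internal"},
    {"user_agent": "agent/unknown-scraper", "owner": "unknown", "dest": "hr.internal"},
]
unapproved = control(discover(logs), allowlist={"agent/expense-bot"})
print([a.name for a in unapproved])  # ['agent/unknown-scraper']
```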
Keith: Are there vulnerabilities specific to AI agents that security teams aren't fully thinking about yet?

Lee: Yes. Here's one example: an agent is designed to listen. What if a rogue agent says, "Hey, I can help update your calendar. Just install this extension." The other agent agrees, and now you have a malicious extension installed. We've already seen this kind of behavior in the wild.
Protocols like MCP (the Model Context Protocol, used for agent-to-tool and agent-to-agent communication) are being built and deployed for productivity, but often not with security as a priority. That's a huge concern. Another big issue is interconnectivity: once an attacker breaches one agent, they may gain access to everything else connected, including sensitive apps, data, and infrastructure.
It becomes very dangerous very quickly.
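One defensive pattern against the rogue-extension scenario above is to gate every installation offer behind an allowlist of vetted artifacts. The sketch below is a generic illustration, not part of MCP or any vendor product; the registry contents and artifact bytes are placeholders.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry of vetted extensions: name -> digest of the approved artifact.
trusted_artifact = b"...vetted extension bytes..."
APPROVED_EXTENSIONS = {"calendar-helper": sha256(trusted_artifact)}

def allow_install(name: str, artifact: bytes) -> bool:
    """Gate every 'install this extension' offer from another agent.
    Unknown names and tampered artifacts are both refused."""
    expected = APPROVED_EXTENSIONS.get(name)
    if expected is None:
        return False                       # never vetted: refuse
    return sha256(artifact) == expected    # vetted name, wrong bytes: refuse

assert allow_install("calendar-helper", trusted_artifact)
assert not allow_install("calendar-helper", b"malicious payload")
assert not allow_install("helpful-calendar-tool", trusted_artifact)
```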
Keith: Shouldn't zero trust architecture solve this? Or are these agents operating outside those controls?

Lee: Great point. Two issues. First, most companies still don't have true zero trust implemented. Second, the business desire for AI is so strong that it's steamrolling security practices, just like we saw with early cloud adoption. Agents are being deployed fast, often without anyone asking, "Should this have access to everything?"
Keith: Shouldn't the developer of the agent be thinking about those questions?

Lee: Ideally, yes. But many aren't. Agentic browsers are a good example. A consumer might download one for convenience, but that same browser might be used on enterprise devices without going through proper security controls.
So we’ll need to rethink how things like browsers are managed — likely through secure enterprise browsers with tighter controls.
Keith: Have you seen any real-world or simulated attacks targeting agents?

Lee: Researchers are definitely finding vulnerabilities. It's harder to know what attackers are doing, but we know they're using AI; the sheer speed and volume of attacks suggest it. But we haven't yet seen a large-scale, AI-led attack campaign.
That’s likely coming.
Keith: So what should CISOs be doing now to prepare their security playbooks?

Lee: Philosophically, security teams can't just say "no." That doesn't work. Instead, they should say, "Yes, but here's how." Technically, there are three key things to do: discover agents and AI usage across SaaS, internal networks, and cloud; limit unauthorized use, through CASB tools, MCP detection, or AI-SPM (AI security posture management) for cloud; and plan for protections against agent-specific vulnerabilities and future attacks.
Keith: What's the current state of agent protocols and standards?

Lee: Still evolving. There's a joke: "Where's the 'S' in MCP?" Meaning, where's the security? Some agents use MCP across the network, which we can detect and control. But others talk locally on the same endpoint, making discovery harder.
Every architecture needs its own layer of visibility and control.
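For the local case Klarich mentions, network sensors see nothing, so discovery has to happen on the endpoint itself. The sketch below assumes agents register MCP servers in JSON config files under an `mcpServers` key, which is one common pattern; the config path is purely illustrative.

```python
import json
from pathlib import Path

# Illustrative config location; real agents register MCP servers in many ways.
CANDIDATE_CONFIGS = [Path.home() / ".config" / "some-agent" / "mcp.json"]

def local_mcp_servers() -> list[str]:
    """Endpoint-side discovery: list MCP servers declared in local config files.
    Local stdio-based servers never cross the network, so config and process
    inspection is the only place they show up."""
    found = []
    for cfg in CANDIDATE_CONFIGS:
        if not cfg.is_file():
            continue
        try:
            data = json.loads(cfg.read_text())
        except (json.JSONDecodeError, OSError):
            continue
        if isinstance(data, dict):
            for name, spec in data.get("mcpServers", {}).items():
                found.append(f"{name}: {spec.get('command', '?')}")
    return found

print(local_mcp_servers())  # e.g. ['files: /usr/local/bin/fs-mcp'] if configured
```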
Keith: Is there a vision for fully autonomous security, with AI agents detecting and stopping other AI agents?

Lee: Maybe someday, but for now, human-in-the-loop is still essential. One promising direction is SOC augmentation: using agentic AI to help analysts, not replace them.
For example, we built a system that monitors security blogs, extracts indicators of compromise, scans the enterprise for matches, and hands off results to human analysts — all within minutes instead of hours. That’s how AI should work: Elevating humans, not replacing them.
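The blog-to-SOC workflow Klarich describes breaks down into two mechanical steps that are easy to sketch: extract indicators of compromise from published threat intel, then match them against internal telemetry. This is a simplified illustration, not Palo Alto Networks' system; the regexes and log format are assumptions.

```python
import re

# Candidate IOC patterns; real pipelines use far richer extraction than regexes.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b(?:[a-z0-9-]+\.)+(?:com|net|org|io)\b"),
}

def extract_iocs(text: str) -> dict[str, set[str]]:
    """Step 1: pull candidate indicators out of a threat-intel blog post."""
    return {kind: set(p.findall(text)) for kind, p in IOC_PATTERNS.items()}

def match_telemetry(iocs: dict[str, set[str]], telemetry: list[str]) -> list[str]:
    """Step 2: flag any internal log line containing a known-bad indicator.
    A human analyst reviews the hits; the agent only does the legwork."""
    flat = {v for values in iocs.values() for v in values}
    return [line for line in telemetry if any(ioc in line for ioc in flat)]

post = "New campaign uses C2 at 203.0.113.7 and payload hash " + "a" * 64
hits = match_telemetry(extract_iocs(post),
                       ["egress 203.0.113.7 port 443", "egress 198.51.100.9"])
print(hits)  # ['egress 203.0.113.7 port 443']
```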
Keith: But the AI still needs oversight, right? You don't want it hallucinating threats.

Lee: Exactly. Humans still need to review outputs, provide feedback, and train the agents to improve over time.
Keith: When should security professionals move from "concerned" to "massively concerned"?

Lee: The moment someone releases a really good AI red-team tool, something that can autonomously run sophisticated penetration tests. If defenders can do it, attackers can too. Think about SolarWinds.
Of the 3,000 servers infected, only 100–125 were activated over six months. With agentic AI, all 3,000 could have been exploited in 24 hours.
Keith: Are you optimistic about the future of AI in security?

Lee: I am. And here's why: AI gives defenders a chance to finally get ahead. Without AI, protecting a global enterprise with thousands of endpoints and SaaS apps is nearly impossible.
Attackers may get faster, but they’re still using the same basic tactics. Defenders, if we implement AI properly, can reach a whole new level of effectiveness.
Keith: But defenders don't usually get the spotlight, right? We rarely hear about their wins.

Lee: True. And that's unfortunate. Sharing both wins and losses would help the whole industry. At Palo Alto Networks, we try to anonymize and share what we learn, because everyone benefits from visibility.
Keith: What about government regulation? Will that help or hurt?

Lee: It's probably too late to regulate AI effectively for security. Open-source models are everywhere, including in attackers' hands.
Instead of waiting on regulations or standards — which take years — security innovation needs to keep up with the speed of technology. That’s our approach.
Keith: That's a very optimistic way to end things.

Lee: Thanks, Keith.

Keith: Lee Klarich, Chief Product Officer at Palo Alto Networks. Great insight; thank you for joining us. We'll have you back when the next threat emerges.

Lee: Looking forward to it. Thanks again.
Keith: That’s all the time we have for this week’s episode of Today in Tech. Be sure to like the video, subscribe to the channel, and leave any comments below. I'm Keith Shaw — thanks for watching!