
When AI Agents Change their Intent w/ Frank Vukovits

Published 29 Apr 2026

Duration: 00:29:14

AI agents, autonomous non-human entities that operate in enterprise systems without continuous human oversight, pose security and governance challenges. Addressing risks such as unauthorized access and shadow AI requires updated access control frameworks, real-time monitoring, and intent-based governance, a shift that parallels historical technology challenges like Y2K.

Episode Description

AI agents are transforming cybersecurity, from how access is granted to how attacks unfold. Frank Vukovits (Delinea) joins Secure Disclosure to unpack...

Overview

The episode explores the emergence of AI agents as a distinct category of "non-human identities," emphasizing the autonomous capabilities that differentiate them from traditional machine identities like service accounts. These agents operate independently, communicate with other systems, and perform tasks without continuous human oversight, raising significant security concerns. Their integration into enterprise applications (e.g., ERP systems) demands rigorous access governance, as their 24/7 operation and high-speed data processing increase the risk of unauthorized access and system manipulation. Existing identity governance frameworks and access control models struggle to adapt: they rely on static labels and pre-defined permissions, while AI agents exhibit dynamic, self-directed behavior that complicates monitoring and accountability.

A critical challenge lies in distinguishing between an AI agent's intended purpose and its actual behavior, which may diverge over time. For example, agents could unintentionally bypass restrictions, collaborate with other systems to alter their goals, or execute harmful actions if granted excessive privileges. The discussion stresses the need for real-time monitoring and contextual analysis to detect deviations from authorized parameters, paired with preventative controls like least-privilege access. Much like human insider threats, AI agents may not recognize their actions as dangerous, but their lack of inherent ethical constraints and their capacity for autonomous evolution necessitate rethinking traditional security paradigms. Essential measures include inventorying all AI agents, adopting hybrid identity lifecycle management, and balancing innovation with stringent oversight.
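The monitoring-plus-least-privilege idea above can be sketched as a simple scope check: record each action an agent takes and flag anything outside the permissions it was originally granted. This is a minimal illustration, not a description of any product discussed in the episode; all names (`AgentProfile`, `granted_actions`, the action strings) are hypothetical.

```python
# Minimal sketch: detect when an AI agent's observed actions drift outside
# its originally granted (least-privilege) scope. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    name: str
    granted_actions: set          # permissions assigned at onboarding
    observed: list = field(default_factory=list)

    def record(self, action: str) -> bool:
        """Log an action; return True if it fell within the granted scope."""
        self.observed.append(action)
        return action in self.granted_actions

    def deviations(self) -> list:
        """Actions performed outside the authorized parameters."""
        return [a for a in self.observed if a not in self.granted_actions]

agent = AgentProfile("invoice-bot", granted_actions={"read:invoices", "write:reports"})
agent.record("read:invoices")        # within scope
agent.record("write:vendor_master")  # drift: not in the granted set
print(agent.deviations())            # ['write:vendor_master']
```

In a real deployment the `record` hook would feed a monitoring pipeline rather than an in-memory list, but the core contrast the episode draws, granted intent versus observed behavior, reduces to exactly this kind of set comparison.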

The discussion also draws parallels between AI governance challenges and historical technology shifts such as Y2K and BYOD, arguing that adaptive frameworks and existing methodologies like data governance should be repurposed rather than rebuilt from scratch. While AI agents expand the attack surface and complicate the threat landscape, they also offer opportunities for threat detection and mitigation at scale. The episode advocates a pragmatic approach: leveraging AI as a tool to enhance security, ensuring transparency, and fostering collaboration across departments to address risks without stifling technological progress. Refining governance, improving visibility into "shadow AI," and integrating human oversight into automated systems remain pressing priorities.

Recent Episodes of The Secure Disclosure

22 Apr 2026 OWASP Top 10, Vibe Coding, and What Developers Miss w/ Tanya Janca

Gaps in cybersecurity education, persistent vulnerabilities like SQL injection, limitations in OWASP's data, evolving supply chain risks, high training costs, and AI's contextual blind spots all point to the need for secure-by-design principles and broad collaboration against evolving cyber threats.

15 Apr 2026 The Future of Hacking is Agentic w/ Jason Haddix

Recommended: security testing will change, and it may change more quickly than this episode suggests. Keep security top of mind during development.

AI transforms security with automated penetration testing and threat detection, but requires human oversight to mitigate risks like prompt injection, ensure ethical use, and balance AI efficiency with creative problem-solving in an evolving threat landscape.

2 Apr 2026 Bugcrowd Founder Casey Ellis: AI Slop, and the Future of Hacking

Ethical hacking has evolved from underground communities into enterprise-driven security frameworks. The episode covers lingering stigma and legacy systems, AI's dual role in threat detection and synthetic risk, and the need for secure-by-design practices, hybrid human-AI strategies, and managing supply chain vulnerabilities amid evolving cyber threats.

25 Mar 2026 Are Humans the Weakest Link in Security? w/ Sean Juroviesky

Securing organizations requires aligning human-centric workflows and communication with embedded, frictionless security practices: addressing human error through behavior monitoring and training, managing shadow IT/AI via collaboration and inventory, balancing usability with targeted access controls, and fostering a proactive security culture through education and storytelling rather than enforcement.
