More The Secure Disclosure episodes


AI Agents Must Have Identity & Access Control w/ Johannes Keienburg

Published 17 Mar 2026

Duration: 00:37:08

Autonomous AI agents, with transformative productivity potential, pose significant security, accountability, and governance challenges requiring dynamic access controls, human oversight, and industry-wide standards to ensure safe and regulated integration.

Episode Description

AI agents are here, and they're already transforming how we work. But beneath the hype lies a massive, unsolved security problem. In this episode, Macke...

Overview

The podcast explores the rapid emergence of autonomous AI agents, likening their current development to a "Wild West" scenario due to a lack of established norms or regulations. It highlights parallels to past technological revolutions, emphasizing the transformative potential of AI agents while addressing significant challenges, including security risks, accountability gaps, and insufficient governance frameworks. Autonomous agents pose threats due to their ability to access systems with broad permissions, operate without personal accountability, and execute actions at machine speed, often beyond human oversight. The discussion underscores the complexity of securing these agents, particularly in managing access rights, which are already a critical issue in cybersecurity (e.g., OWASP's top concern: broken access control). Current systems struggle to enforce "least privilege" principles for AI agents, which interact with multiple systems autonomously, exacerbating authorization challenges.

While the podcast acknowledges the excitement around AI's potential to revolutionize productivity, such as streamlining workflows and enhancing efficiency, it cautions against uncontrolled adoption. Risks include agents performing unintended or harmful actions, like data deletion or unauthorized access, due to static, overly broad permissions. The conversation critiques existing solutions like LLM-based guardrails as inadequate, stressing the need for dynamic, job-specific access controls and human oversight to manage agent activities responsibly. Proposals include implementing time-bound, task-specific permissions via a "separated access gateway" and prioritizing cross-industry standards to mitigate risks. The episode concludes that while AI agents could unlock significant productivity gains, their safe integration hinges on developing robust authorization systems, fostering collaboration, and balancing innovation with security safeguards to prevent misuse.
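The idea of time-bound, task-specific permissions mediated by a separate gateway could be sketched roughly as follows. This is a minimal illustration only, not the design described in the episode; the `Grant` and `AccessGateway` names and their interface are hypothetical:

```python
import time
import secrets
from dataclasses import dataclass, field

@dataclass
class Grant:
    """A time-bound, task-specific permission issued to one agent."""
    agent_id: str
    scope: frozenset   # actions this grant covers, e.g. {"invoices:read"}
    expires_at: float  # Unix timestamp after which the grant is dead
    token: str = field(default_factory=lambda: secrets.token_hex(16))

class AccessGateway:
    """Mediates every agent action instead of handing out static credentials."""

    def __init__(self):
        self._grants = {}

    def issue(self, agent_id, scope, ttl_seconds):
        """Issue a short-lived grant scoped to one task's needed actions."""
        grant = Grant(agent_id, frozenset(scope), time.time() + ttl_seconds)
        self._grants[grant.token] = grant
        return grant

    def authorize(self, token, action):
        """Check a single action against the grant's scope and expiry."""
        grant = self._grants.get(token)
        if grant is None or time.time() > grant.expires_at:
            self._grants.pop(token, None)  # expired or unknown: deny and forget
            return False
        return action in grant.scope

gateway = AccessGateway()
grant = gateway.issue("billing-agent", {"invoices:read"}, ttl_seconds=300)
print(gateway.authorize(grant.token, "invoices:read"))    # in scope, not expired
print(gateway.authorize(grant.token, "invoices:delete"))  # outside the granted scope
```

The point of the sketch is that the agent never holds broad, static credentials: each grant names a narrow scope and a short lifetime, so a compromised or misbehaving agent can act only within one task's permissions and only briefly.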

Recent Episodes of The Secure Disclosure

25 Mar 2026 Are Humans the Weakest Link in Security? w/ Sean Juroviesky

Securing organizations requires aligning human-centric workflows and communication with embedded, frictionless security practices: addressing human error through behavior monitoring and training, managing shadow IT/AI via collaboration and inventory, balancing usability with targeted access controls, and fostering a proactive security culture through education and storytelling rather than enforcement.

16 Mar 2026 The Creator of Curl on Why AI Is Breaking Bug Bounties w/ Daniel Stenberg

The Curl project's evolution from a 1996 currency tool to a prominent open-source library highlights community-driven growth, open-source maintenance challenges, AI's impact on security reporting, sustainability issues, and tensions between innovation and unresolved technical risks.

9 Mar 2026 LLMs Will Never Be Fully Secure w/ Brooks McMillin

Security oversights in AI/MCP server development, mirroring historical flaws like SQL injection, include unsafe practices such as `eval` usage and weak authorization, risking remote code execution and data leaks, while stressing the need for layered defenses against AI-amplified exploits in untested ecosystems.
