More The Secure Disclosure episodes

LLMs Will Never Be Fully Secure w/ Brooks McMillin thumbnail

LLMs Will Never Be Fully Secure w/ Brooks McMillin

Published 9 Mar 2026

Duration: 00:25:38

Security oversights in AI/MCP server development, mirroring historical flaws like SQL injection, include unsafe practices such as `eval` usage and weak authorization, risking remote code execution and data leaks, while stressing the need for layered defenses against AI-amplified exploits in untested ecosystems.

Episode Description

Were back in the wild west only this time, the apps can be social engineered at machine speed. Live from CactusCon, Brooks McMillin breaks down malici...

Overview

The podcast discusses recurring security issues in AI and related technologies, drawing parallels between modern challenges and historical vulnerabilities such as SQL injection and broken access control. It emphasizes that lessons from past mistakeslike insecure APIs or misconfigurationshave not been adequately applied to new systems, such as MCP (Malicious Model Context Protocol) servers, which act as intermediaries for LLMs to interact with backend tools. Concerns are raised about the vulnerabilities in MCP servers, including the use of insecure practices (e.g., eval on third-party input) leading to risks like remote code execution and inadequate access control, which could allow unauthorized manipulation of data or permissions. The discussion also highlights the persistent problem of broken access control, a long-standing issue that remains critical in AI systems despite repeated warnings.

A central theme is the amplification of security flaws by AI and LLMs, which can exploit vulnerabilities faster and more creatively than humans, effectively serving as "magnifying glasses" for existing gaps. Prompt injection is identified as a novel threat akin to SQL injection but harder to mitigate due to its reliance on social engineering. The podcast underscores the need for robust security measures, such as logical controls, anomaly detection, and strict authorization checks, while balancing innovation with caution. It stresses that AI integration requires careful implementationstarting with limited access and incorporating safeguards like multi-factor authentication, logging, and human verification for sensitive actions. However, the rapid deployment of tools like OpenClaw, without sufficient testing, and the prevalence of malicious packages in repositories highlight ongoing risks in the ecosystem.

The conversation also touches on the challenges of testing LLM-driven systems, including the unpredictability of their behavior and the limitations of traditional security frameworks. Solutions proposed include using LLMs themselves to simulate attacks, dynamic testing, and layered defenses. A recurring caution is the tension between the speed of innovation and the need for thorough security hardening, with a call for embedding security expertise in AI development. Ultimately, the dialogue reflects a concern that, despite the transformative potential of AI, the field risks repeating historical security missteps unless lessons from the past are systematically applied.

Recent Episodes of The Secure Disclosure

25 Mar 2026 Are Humans the Weakest Link in Security? w/ Sean Juroviesky

Securing organizations requires aligning human-centric workflows and communication with embedded, frictionless security practices, addressing human error through behavior monitoring and training, managing shadow IT/AI via collaboration and inventory, balancing usability with targeted access controls, and fostering proactive security culture through education and storytelling rather than enforcement.

17 Mar 2026 AI Agents Must Have Identity & Access Control w/ Johannes Keienburg

Autonomous AI agents, with transformative productivity potential, pose significant security, accountability, and governance challenges requiring dynamic access controls, human oversight, and industry-wide standards to ensure safe and regulated integration.

16 Mar 2026 The Creator of Curl on Why AI Is Breaking Bug Bounties w/ Daniel Stenberg

The Curl project's evolution from a 1996 currency tool to a prominent open-source library highlights community-driven growth, open-source maintenance challenges, AI's impact on security reporting, sustainability issues, and tensions between innovation and unresolved technical risks.

More The Secure Disclosure episodes