More The Secure Disclosure episodes

The Future of Hacking is Agentic w/ Jason Haddix thumbnail

The Future of Hacking is Agentic w/ Jason Haddix

Published 15 Apr 2026

Recommended: Security Testing will change, and might change quicker than this episode suggests. Keep Security Top of Mind during Development.

Duration: 00:40:10

AI transforms security with automated penetration testing and threat detection, but requires human oversight to mitigate risks like prompt injection, ensure ethical use, and balance AI efficiency with creative problem-solving in an evolving threat landscape.

Episode Description

Jason Haddix joins the podcast to break down how AI is transforming offensive security from attacking LLM-powered applications to why he believes 90%...

Overview

The text discusses the growing role of AI in penetration testing and security, emphasizing both its transformative potential and inherent vulnerabilities. AI is predicted to conduct 90% of future penetration tests, requiring human pen testers to shift from manual execution to mastering AI tools. However, AI adoption in enterprises remains slow, with risks emerging from using public data to train models and vulnerabilities like prompt injection, which exploit AIs inability to separate inputs from instructions. Attack vectors include both internal AI systems (e.g., LLMs) and external APIs, with challenges such as non-deterministic AI responses and high testing costs due to the need for repeated attempts. Mitigation strategies focus on layered defensescombining guardrails, classifiers, and system promptsto detect and block malicious inputs, though these are not foolproof against evolving threats. The text underscores the need for human expertise to manage AIs unpredictability, especially in identifying complex vulnerabilities that automated tools may miss.

Security professionals are urged to adapt to AIs integration into workflows, balancing automation with human judgment. Challenges include prompt injections persistence due to LLM architecture, risks in AI-based systems like RAG (Retrieval-Augmented Generation) for data access, and the misuse of LLMs in applications lacking clear business value. Bug bounty programs face overload due to excessive submissions, with AI triage systems proposed to filter reports. While AI can enhance security automationstreamlining tasks like reconnaissance and vulnerability prioritizationit also highlights gaps in dependency management, malware detection, and social engineering risks targeting developers. The discussion concludes that AIs future in security hinges on proactive measures: combining pre-trained safety models, rigorous testing, and layered defenses to mitigate risks without stifling innovation. Human creativity remains critical for discovering nuanced vulnerabilities, ensuring AI complements rather than replaces expert judgment in security practices.

Final Notes

Based on the provided text, here are the key insights and takeaways relevant to readers:

Key Insights:
  1. AI in Penetration Testing: By 2025, 90% of penetration tests will be conducted by AI, making human penetration testers a crucial component as "masters of the tool," not controlled by AI.
  2. Attack Vectors on AI Systems: AI systems are vulnerable to embedded AI systems, blind cross-site scripting and other injection attacks, and exploiting the "input problem" in AI, which can be similar to traditional AppSec vulnerabilities but with new complexities.
  3. The Need for Human Expertise: Human expertise remains critical in managing AI unpredictability and ensuring comprehensive security, even as AI becomes a more powerful tool.
  4. Security Risks in AI Systems: AI systems face inherent security risks, including prompt injection, exploitation of the input problem, and data access vulnerabilities in RAG (Retrieval-Augmented Generation) databases.
  5. Industry Trends in AI Security: AI adoption is slow and measured in enterprises due to the risks of using public data sources for training models.
  6. Challenges in AI Security: AI requires high-quality data and tailored prompts to avoid inefficiency, and attackers often adapt techniques to bypass current security measures.
  7. Potential for AI to Enhance Security: AI can automate tasks, provide real-time analysis, and help identify complex vulnerabilities, but it is essential to balance AI's advantages with human expertise.
Takeaways:
  1. Human and AI Collaboration: Human penetration testers will collaborate with AI tools, using them as a tool, not replacing human judgment.
  2. New Skills Required: Security professionals must learn natural language-based attack techniques and adapt to LLM (Large Language Model)-specific vulnerabilities.
  3. Importance of Guardrails: Guardrails and classifiers can help detect and prevent malicious inputs, but they are not foolproof and require ongoing development and testing.
  4. Need for Balance in AI Development: AI development must balance utility and security concerns, and security professionals should take an active role in engaging with AI development rather than simply criticizing it.
  5. Emphasis on Proactive Measures: Proactive measures, such as testing and training, must become a priority to mitigate the risks associated with AI adoption.
Practical Applications:
  1. Integration of AI and Human Expertise: Organizations should prioritize the integration of AI and human expertise to leverage the benefits of AI while ensuring comprehensive security.
  2. Continuous Training and Development: Security professionals should undergo continuous training and development to stay up-to-date with the latest AI-related threats and vulnerabilities.
  3. Development of AI Security Frameworks: The development of AI security frameworks and guidelines will become increasingly important as AI adoption continues to grow.

Overall, the text highlights the complex and rapidly evolving landscape of AI security and the importance of balancing AI's benefits with human expertise and proactive measures to mitigate its risks.

Recent Episodes of The Secure Disclosure

2 Apr 2026 Bugcrowd Founder Casey Ellis: AI Slop, and the Future of Hacking

Ethical hacking evolved from underground communities to enterprise-driven security frameworks, addressing stigma and legacy systems, AI's dual role in threat detection and synthetic risks, and the need for secure-by-design practices, hybrid human-AI strategies, and managing supply chain vulnerabilities amid evolving cyber threats.

25 Mar 2026 Are Humans the Weakest Link in Security? w/ Sean Juroviesky

Securing organizations requires aligning human-centric workflows and communication with embedded, frictionless security practices, addressing human error through behavior monitoring and training, managing shadow IT/AI via collaboration and inventory, balancing usability with targeted access controls, and fostering proactive security culture through education and storytelling rather than enforcement.

17 Mar 2026 AI Agents Must Have Identity & Access Control w/ Johannes Keienburg

Autonomous AI agents, with transformative productivity potential, pose significant security, accountability, and governance challenges requiring dynamic access controls, human oversight, and industry-wide standards to ensure safe and regulated integration.

16 Mar 2026 The Creator of Curl on Why AI Is Breaking Bug Bounties w/ Daniel Stenberg

The Curl project's evolution from a 1996 currency tool to a prominent open-source library highlights community-driven growth, open-source maintenance challenges, AI's impact on security reporting, sustainability issues, and tensions between innovation and unresolved technical risks.

More The Secure Disclosure episodes