

The Creator of Curl on Why AI Is Breaking Bug Bounties w/ Daniel Stenberg

Published 16 Mar 2026

Duration: 00:33:36

The Curl project's evolution from a 1996 currency-rate fetcher to a prominent open-source library highlights community-driven growth, open-source maintenance challenges, AI's impact on security reporting, sustainability issues, and tensions between innovation and unresolved technical risks.

Episode Description

Daniel Stenberg, creator of curl, explains how a small open-source tool became core internet infrastructure. The conversation covers curl's origin, mai...

Overview

The podcast explores the origins and evolution of the open-source tool Curl, which began as a personal project in 1996 by Daniel Stenberg to fetch currency rates for his IRC bot. Initially based on a minimal tool called httpget, it gradually expanded into a widely used utility for handling URLs and HTTP requests. The development process was organic, driven by community contributions, user feedback, and iterative improvements over nearly three decades. Despite its scale (it is now used in an estimated 30 billion applications), the project retained its core ethos of simplicity and collaborative refinement. However, the discussion also highlights challenges faced by open-source maintainers, including mental health strains from managing high-impact projects and navigating toxic interactions within the community.

A significant portion of the content revolves around modern challenges in open-source security, particularly the surge in AI-generated vulnerability reports. These reports, often detailed but invalid, overwhelm security teams, consuming time and resources while masking genuine issues. The decision to discontinue the Curl bug bounty program underscores tensions between external funding models and the need for quality control, as platforms like HackerOne struggle with low-validity submissions. Proposed solutions, such as AI-based filtering, reputation systems, and credential requirements, face trade-offs between reducing spam and excluding new contributors. The dialogue also touches on the broader implications of AI's role in coding and security, noting both its productivity benefits and the risks of deepening confusion or undermining foundational knowledge.

The narrative culminates in reflections on the evolving nature of open-source development and its sustainability. Maintainers grapple with balancing inclusivity and quality, ensuring newer contributors are not unfairly sidelined by rigid systems. Simultaneously, the rise of AI in technical fields raises philosophical questions about human adaptability, the erosion of deep technical understanding, and the resilience of the engineering community in confronting increasingly complex systems. The podcast ultimately portrays open source as a dynamic, human-driven endeavor shaped by both technical innovation and the persistent challenges of collaboration, ethics, and unforeseen consequences.

Recent Episodes of The Secure Disclosure

25 Mar 2026 Are Humans the Weakest Link in Security? w/ Sean Juroviesky

Securing organizations requires aligning human-centric workflows and communication with embedded, frictionless security practices: addressing human error through behavior monitoring and training, managing shadow IT and AI via collaboration and inventory, balancing usability with targeted access controls, and fostering a proactive security culture through education and storytelling rather than enforcement.

17 Mar 2026 AI Agents Must Have Identity & Access Control w/ Johannes Keienburg

Autonomous AI agents, with transformative productivity potential, pose significant security, accountability, and governance challenges requiring dynamic access controls, human oversight, and industry-wide standards to ensure safe and regulated integration.

9 Mar 2026 LLMs Will Never Be Fully Secure w/ Brooks McMillin

Security oversights in AI/MCP server development, mirroring historical flaws like SQL injection, include unsafe practices such as `eval` usage and weak authorization, risking remote code execution and data leaks, while stressing the need for layered defenses against AI-amplified exploits in untested ecosystems.
