More Open Source Security episodes

MCP and Agent security with Luke Hinds thumbnail

MCP and Agent security with Luke Hinds

Published 16 Mar 2026

Duration: 35:36

The text explores AI agent security risks like prompt injection and open-source vulnerabilities, emphasizing the No-NO project's kernel-based sandboxing with a deny-by-default model, hardware enclaves, and Rust-driven efficiency, alongside layered defenses, restricted commands, and collaborative efforts to tackle evolving threats like social engineering and insecure coding practices.

Episode Description

Josh talks to Luke Hinds, CEO of Always Further, about MCP and agent security. We start out talking about Luke's new tool, nono which is a sandboxing...

Overview

The text discusses the development of No-NO, a security tool created by Always Further, a startup focused on addressing vulnerabilities in AI agents and models. No-NO leverages kernel-based sandboxing to isolate processes, preventing data exfiltration, unauthorized file access, and destructive actions like rm -rf, using a deny-by-default model and hardware security enclaves (e.g., Apples Secure Enclave). Designed for speed and simplicity, it contrasts with Docker by enabling near-instant startup times and minimal user configuration, though it is not intended as a replacement but a complementary tool for specific use cases. The tool also incorporates restricted commands, time-based command allocation, and rollback mechanisms to mitigate risks like accidental deletions or unauthorized API interactions.

Broader challenges in AI security are highlighted, including the unpredictability of AI agents (e.g., bypassing sandboxing) and vulnerabilities in MCP servers (Model-Controller-Proxy), which enable AI models to execute external functions. Risks include supply chain vulnerabilities in open-source tools, prompt injection attacks, and the merged control/data architecture of large language models (LLMs), which make them susceptible to social engineering and data leaks. Historical parallels are drawn to early hacking exploits, such as the "2600 Hz" phone fraud, emphasizing the persistent difficulty of securing systems against autonomous, high-dimensional threats. The text also stresses the need for collaborative solutions, education in secure development practices, and tools that balance simplicity for non-experts with customization options for professionals, while addressing the growing gap between rapid AI innovation and robust security frameworks.

Recent Episodes of Open Source Security

30 Mar 2026 Open Source Security at scale with Michael Wisner

The Alpha Omega Project addresses open-source security by targeting leverage points like Node.js and Python ecosystems, advocating for systemic solutions, dedicated security roles, sustainable funding, and registry infrastructure improvements to counter fragmented practices and downstream risks.

23 Mar 2026 2026 State of the Software Supply Chain with Brian Fox

The State of the Software Supply Chain Report underscores explosive open source growth (10T annual downloads) paired with critical challenges like malware proliferation (1.2M malicious packages), unresolved vulnerabilities (65% unaddressed), infrastructure strain, AI's dual role in risk (hallucinations) and potential (MCP systems), and urgent needs for improved tools, policies, and cost management amid regulatory and scalability pressures.

2 Mar 2026 Rust coreutils with Sylvestre Ledru

A modern rewrite of Unix command-line tools using Rust aims for memory safety, performance, and maintainability while achieving high compatibility.

23 Feb 2026 Goose and the Agentic AI Foundation with Brad Axen

The development and application of AI tools, such as Goose AI, in software development is explored, highlighting challenges and opportunities in using AI-generated code and the evolving role of developers.

More Open Source Security episodes