

Programming as an Act of Building Vocabulary

Published 2 Apr 2026

Duration: 00:49:27

The episode examines the challenges of building reliable software: why LLMs struggle with complex systems when shared abstractions are missing, the gap between distributed systems theory and practice, learning through code analysis and deterministic simulation testing (DST), and the continued need for human expertise and clear terminology to bridge the gap between AI's limitations and real-world engineering.

Episode Description

Why do LLMs struggle to build complex architecture? According to Unmesh Joshi, Distinguished Engineer at ThoughtWorks, it often comes down to a lack of...

Overview

The podcast discusses why Large Language Models (LLMs) struggle to construct and understand complex systems, citing the lack of shared abstractions and vocabulary as a key limitation. It explores the difficulty of bridging academic theories of distributed systems (e.g., consensus algorithms such as Paxos and Raft) with real-world code, emphasizing the need for practical resources and patterns that translate theoretical concepts into actionable implementations. Unmesh Joshi highlights the value of analyzing open-source systems and writing minimal implementations to grasp core principles, and underscores deterministic simulation testing (DST) as a tool for teaching, debugging, and verifying distributed systems. Frameworks like Tickloom enable controlled failure scenarios and unit testing, revealing bugs and building confidence in robustness. DST has limits, however: it cannot cover every edge case, and its effectiveness depends on human expertise and disciplined abstraction-building.
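To make the DST idea concrete, here is a minimal sketch (not Tickloom's actual API; all names are hypothetical) of the core trick: a single seeded random number generator drives message ordering and failure injection, so any run that exposes a bug can be replayed exactly from its seed.

```python
import random

class SimNode:
    """A toy replica that keeps the highest value it has seen."""
    def __init__(self, name):
        self.name = name
        self.value = 0

    def on_message(self, value):
        self.value = max(self.value, value)

def run_simulation(seed, drop_rate=0.2):
    """Deterministically replay one cluster execution for a given seed.

    The seeded RNG decides both the delivery order of messages and
    which messages are dropped, so a failing seed reproduces the
    exact same interleaving every time.
    """
    rng = random.Random(seed)
    nodes = [SimNode(f"n{i}") for i in range(3)]
    # A writer broadcasts values 1..5 to every node.
    queue = [(node, v) for v in range(1, 6) for node in nodes]
    rng.shuffle(queue)                  # RNG controls delivery order
    for node, value in queue:
        if rng.random() < drop_rate:    # RNG controls simulated message loss
            continue
        node.on_message(value)
    return nodes

# The same seed always yields identical node states, so failures are replayable.
first = [n.value for n in run_simulation(seed=42)]
second = [n.value for n in run_simulation(seed=42)]
assert first == second
```

The design point is that all nondeterminism (scheduling, faults) flows through one seeded source; in a real harness, clocks, timeouts, and crashes would be virtualized the same way.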

The discussion also addresses the role of shared terminology in enabling collaboration across disciplines, as fragmented vocabularies hinder knowledge sharing. While LLMs can enhance productivity by offloading routine tasks, their reliability in generating maintainable code for complex systems remains limited without explicit guidance on domain-specific abstractions. The podcast emphasizes the importance of hands-on learning, such as building and testing simplified systems, to understand failure scenarios and system behavior. Additionally, it critiques the gap between optimistic AI hype and the practical challenges faced by developers, stressing that deep technical understanding, vocabulary mastery, and disciplined abstraction-building are essential for leveraging tools like LLMs and DST. The conversation concludes by encouraging continued exploration of AI's role in software development while prioritizing foundational knowledge and human-driven problem-solving.

Recent Episodes of The BugBash Podcast

18 Mar 2026 Semmathesy and the Agentic Era: Learning Systems in 2026

AI reshapes software development through Semmathesy systems, enhancing productivity and collaboration while navigating challenges like legacy code limitations, the shift from deterministic coding to adaptive AI agents, and balancing automation with human oversight in complex, dynamic environments.
