
Episode 9: AI, Testing and DORA with Lisa Crispin
Published 16 Apr 2026
Duration: 00:47:09
As AI reshapes software development, teams must integrate testing throughout the lifecycle to prevent cognitive debt, balance automation with human oversight, foster collaboration, and prioritize governance, diverse teams, and iterative practices to ensure quality and adaptability.
Episode Description
Vitaly, Anupam, and Maryia sit down with Lisa Crispin (independent consultant, DORA community guide, co-author with Janet Gregory of the Agile Testing...
Overview
The podcast explores the evolving landscape of software testing and development in the era of AI-assisted tools, emphasizing the need for collaboration, adaptive practices, and human oversight. It examines how AI-generated code is reshaping quality assurance, drawing parallels between current AI adoption and past automation shifts, while addressing risks such as job displacement for testers and the emergence of "cognitive debt" from over-reliance on AI. The guests stress that testers and QA professionals remain irreplaceable for ensuring alignment with user needs, conducting risk assessments, and questioning specifications, especially given the non-deterministic nature of AI outputs.

The discussion underlines the necessity of integrating testing throughout the development lifecycle, not just at the end, and advocates for organizational designs that embed testing as a collaborative, questioning function rather than siloed expertise. Key challenges include managing AI's non-determinism, continuously validating AI-generated content, and balancing AI tools with deterministic practices to maintain reliability. The conversation also emphasizes small-batch development, user-centric approaches, and the role of diverse teams and cross-functional collaboration in mitigating AI-related risks such as burnout and oversight gaps.
Key themes include the tension between AI's potential to amplify productivity and the risks of complacency or quality degradation when it is poorly integrated. The podcast underscores the need for governance frameworks around AI tool usage, ongoing research to refine best practices, and the critical role of quality engineering in ensuring robust software delivery. It also addresses systemic issues such as the absence of explicit testing skills in developer roles, the risks of generating AI code at scale without iterative validation, and the importance of psychological safety and team satisfaction in high-performing teams. Overall, the discussion frames AI as a tool that requires intentional, human-centric integration to enhance, rather than undermine, software quality and development processes.