More How To Test This? episodes


How To Test As an Agentic Quality Engineer - Dragan Spiridonov

Published 8 Apr 2026

Duration: 00:38:26

Quality engineering is evolving to integrate AI through Agentic Quality Engineering (AQE), emphasizing structured frameworks like PACT, human oversight, and the balance between AI efficiency and foundational QA skills, while addressing challenges such as alignment with business goals and ethical considerations.

Episode Description

Most testers won't be replaced by AI. But many will fall behind because they don't evolve with it. Episode #22 How to Test as an Agentic Quality Engineer D...

Overview

The podcast discusses the evolving role of artificial intelligence (AI) in quality assurance (QA) and testing, emphasizing the shift from traditional QA methods to agentic engineering. It outlines three levels of AI integration: AI Assistant QE, where AI supports tasks with human oversight; AI Augmented QE, where AI autonomously performs tasks like coding but requires supervision; and Agentic QE, where AI agents are fully orchestrated to handle complex software development lifecycle tasks. The PACT framework (Proactive, Autonomous, Collaborative, Targeted) is introduced as a structured approach to designing intelligent agents for testing, focusing on preemptive issue detection, minimal supervision, inter-agent collaboration, and goal-specific execution. Key challenges include avoiding unstructured "vibe coding" and ensuring AI is integrated with mature processes to avoid inefficiencies. Human skills like critical thinking, collaboration, and creativity remain essential, as AI complements rather than replaces QA expertise in risk assessment and exploratory testing.
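The PACT framework described above is discussed conceptually in the episode; as a way to make the four properties concrete, here is a minimal, purely illustrative Python sketch of a PACT-style testing agent. All class, method, and field names are hypothetical assumptions, not from any real framework or tool mentioned in the episode.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a PACT-style testing agent.
# All names are illustrative, not from any real implementation.
@dataclass
class PactTestAgent:
    goal: str                              # Targeted: one explicit objective
    findings: list = field(default_factory=list)
    shared_channel: list = None            # Collaborative: queue shared with peer agents

    def scan_spec(self, endpoints):
        """Proactive: flag endpoints lacking negative-path tests before anything ships."""
        return [e for e in endpoints if not e.get("negative_tests")]

    def run(self, endpoints):
        """Autonomous: executes the whole check cycle without supervision,
        while recording findings for a human to verify afterwards."""
        for gap in self.scan_spec(endpoints):
            finding = f"{self.goal}: missing negative tests for {gap['path']}"
            self.findings.append(finding)
            if self.shared_channel is not None:
                self.shared_channel.append(finding)  # hand off to peer agents
        return self.findings

channel = []
agent = PactTestAgent(goal="API contract coverage", shared_channel=channel)
report = agent.run([
    {"path": "/users", "negative_tests": True},
    {"path": "/orders", "negative_tests": False},
])
```

The human-oversight point from the episode maps onto the design: the agent acts on its own, but everything it does lands in `findings` for a person to review, rather than being applied automatically.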

The discussion also highlights the importance of foundational QA principles alongside AI adoption, such as validating agent outputs and aligning automation with business goals. Tools like Claude Code and Rueflow are emphasized for managing agent workflows, with examples of open-source projects and AI-assisted testing platforms. Career advice encourages QA professionals to embrace AI as a tool for efficiency, develop context-driven testing skills, and engage in communities or open-source initiatives to stay relevant. The "10% rule" is proposed, advocating for 90% of time spent verifying AI outputs and only 10% instructing agents. Technical challenges like context drifting and over-engineering are addressed through task segmentation and progress-tracking systems. Overall, the content underscores the need for QA engineers to evolve into quality architects, leveraging AI for automation while maintaining ethical and strategic oversight of testing processes.
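The task-segmentation idea mentioned above can be illustrated with a minimal progress ledger: a large testing goal is split into small tasks, each marked done as it completes, so an agent whose context drifts can be restarted and resume from the ledger rather than from scratch. This is a sketch under assumed names; none of the functions below come from a real tool discussed in the episode.

```python
import json

# Minimal sketch of a progress-tracking ledger for segmented agent tasks.
# All names are illustrative, not from any real tool.
def segment_goal(goal, subtasks):
    """Split one large goal into small, independently completable tasks."""
    return {"goal": goal, "tasks": [{"name": t, "done": False} for t in subtasks]}

def next_task(ledger):
    """Return the first unfinished task, or None when everything is done."""
    return next((t for t in ledger["tasks"] if not t["done"]), None)

def mark_done(ledger, name):
    """Record completion of one task in the ledger."""
    for t in ledger["tasks"]:
        if t["name"] == name:
            t["done"] = True

ledger = segment_goal("regression-test checkout flow",
                      ["list endpoints", "generate cases", "run suite", "triage failures"])
mark_done(ledger, "list endpoints")

# A restarted agent reloads the persisted ledger and resumes at the next pending task
resumed = json.loads(json.dumps(ledger))  # simulate saving and reloading state
pending = next_task(resumed)
```

Because the ledger is plain JSON, it can be persisted between runs, which is what lets a fresh agent session pick up where a drifted one left off.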

The podcast also touches on practical implementations, such as the Sentinel API project, which demonstrates agent-based testing of APIs, and the Agent QE Fleet, a custom-built set of AI agents for quality assurance tasks. Resources for learning, including blogs, online courses, and community engagement, are recommended to help QA professionals adapt to this new paradigm. The emphasis is on balancing AI's capabilities with human judgment, ensuring that tools enhance, rather than replace, core QA competencies like exploratory testing, risk analysis, and critical thinking. The discussion concludes with encouragement to experiment with new technologies, build a strong portfolio of open-source projects, and stay engaged with the evolving field of QA engineering.

Recent Episodes of How To Test This?

29 Mar 2026 How to Test With an Agentic AI Automation Tool - Geosley Andrades

Agentic AI automation is reshaping QA by enabling autonomous testing, addressing skill gaps and scalability through adaptive, no-code solutions, and urging professionals to upskill in AI/ML, prioritize business logic, and balance automation with human oversight for reliable, secure, and context-aware quality assurance.

27 Mar 2026 How to Test a Release - Oleksandr Bolzhelarskyi

Strategies for effective software testing emphasize separating QA from quality management, addressing role confusion and oversight gaps, utilizing process improvements and tools, balancing speed with stability through rigorous regression testing, fostering collaboration between teams, and leveraging automation and continuous improvement to ensure reliable releases.

19 Mar 2026 How to Test with Independent QA | Guest: Tudor Brad

The evolving role of QA in software development emphasizes independent teams for unbiased testing, addresses challenges like post-launch failures and AI tool adaptation, integrates proactive security and ethics, and highlights future trends in AI-driven QA and ethical compliance.

17 Mar 2026 How to Test This with AI and MCP - Deepak Kamboj

AI integration in test automation streamlines processes via agents that generate test cases, analyze failures, and execute accessibility and performance checks with tools like Playwright, leveraging frameworks such as MCP and TypeScript/Python workflows, while addressing challenges like context awareness and flaky tests and advancing toward autonomous, scalable AI-driven testing strategies.

7 Mar 2026 How to Test with HIST - Ruslan Desyatnikov

The podcast discusses a transformative approach to Quality Assurance that emphasizes proactive risk mitigation, business alignment, and elevating the QA profession through critical thinking and AI adaptation.
